\section{Introduction} Unsupervised representation learning is the area of research that aims to extract units from unlabelled speech that are consistent with the phonemic transcription \cite{clsp,zs15,zs17}. As opposed to text, speech is subject to large variability: two speech sequences with the same transcription can have significantly different raw speech signals. Working on speech sequences in an unsupervised way therefore requires robust acoustic representations. To address that challenge, recent methods use {\em speech embeddings}, i.e.~fixed-size representations of variable-length speech sequences \cite{herman_cae,nils,settle,riad,emb2,emb3,emb5,cae}.\footnote{A speech sequence is a non-silent part of the speech signal (not necessarily a word). It can be transcribed into a phoneme $n$-gram.} Speech embeddings can be used in many applications, such as keyword spotting \cite{query,query2,query3}, spoken term discovery \cite{utd,utd2,utd3}, and segmentation of speech into words \cite{goldwater,seg1,seg2}. It is convenient to evaluate the reliability of speech embeddings without being tied to a particular downstream task. One way to do that is to compute the intrinsic quality of speech embeddings. The basic idea is that a reliable speech embedding should maximise the information relevant to its type and minimise irrelevant token-specific information. Two popular metrics have been used: the mean average precision (MAP) \cite{map} and the ABX discrimination score \cite{abx}. ABX and MAP are mathematically distinct, yet they are expected to correlate well with each other as they both evaluate the discriminability of speech embeddings in terms of their transcription. However, \cite{nils} revealed a surprising result: the best model according to the ABX is also the worst one according to the MAP. Following the results of \cite{nils}, we observed that such discrepancies are much more common than we had expected.
If a model performs well according to the MAP and badly according to the ABX, which metric should be trusted? For research in this field to move forward, there is a need to quantify the correlation of these two metrics. In this paper, we go further and check whether MAP and ABX can also predict performance on a downstream task. Such tasks are numerous, but one of them has not yet received enough interest: \textit{unsupervised frequency estimation}. We define the frequency of a speech sequence as the number of times the phonetic transcription of this sequence appears in the corpus. When dealing with text corpora, frequencies can be computed exactly with a lookup table and are used in many NLP applications. In the absence of labels, deriving the frequency of a speech sequence becomes a problem of density estimation. Estimated frequencies can be useful in representation learning by enabling efficient sampling of tokens in a speech database \cite{riad}. Also, frequencies could be used for unsupervised word segmentation, using algorithms similar to those used on text \cite{goldwater}. In Section~\ref{embeddings}, we present the range of embedding models, which can be grouped in five categories of increasing expected reliability: hand-crafted, unsupervised, self-supervised and supervised, plus a top-line embedding. In Section~\ref{tasks}, we present the MAP and ABX metrics and introduce our frequency estimation task. In Section~\ref{results}, we present results on the five speech datasets from the ZeroSpeech Challenge \cite{zs15,zs17}. From these results, we draw guidelines for future improvements in the field of acoustic speech embeddings. \section{Embedding Methods}\label{embeddings} \subsection{Acoustic features} Neural networks learn representations on top of input features. Therefore, we used two types of acoustic features: the log-Mel filterbanks (Mel-F) \cite{melf} and the Perceptual Linear Prediction (PLP) \cite{plp}.
These two features can be considered as two levels of phonetic abstraction: a high-level one (PLP) and a low-level one (Mel-F). Formally, let us define a speech sequence $s_t$ by $x_1$,$x_2$,...,$x_T$, where $x_i \in \mathbb{R}^n$ is called a frame of the acoustic features. $T$ is the number of frames in the sequence $s_t$. In our setting, these frames are spaced out every 10 ms, each representing a 25 ms span of the raw signal. \subsection{Hand-crafted model: Gaussian downsampling} Holzenberger et al. \cite{nils} described a method to create fixed-size embedding vectors that requires no training of neural networks: Gaussian down-sampling (GD). Given a sequence $s_t$, $l$ equidistant frames are sampled and a Gaussian average is computed around each sample. It returns an embedding vector $e_t$ of size $l \times n$ for any length $T$ of the input sequence. Therefore, given our two acoustic features, two baseline models are derived: the Gaussian-down-sampling-PLP (GD-PLP) and the Gaussian-down-sampling-Mel-F (GD-Mel-F). Similarly, we derived a simple top-line model. Instead of using hand-crafted features, we can use the transcription of a given random segment. Each frame $x_i$ in a sequence $s_t$ is assigned a 1-hot vector referring directly to the phoneme being said. This model goes through the same Gaussian averaging process to form the Gaussian-down-sampling-1hot (GD-1hot) model. This model is nearly equivalent to the true labels, notwithstanding the information loss due to compression. \subsection{Unsupervised model: RNNAE} A more elaborate way to create speech embeddings is to learn them on top of acoustic features using neural networks. Specifically, recurrent neural networks (RNN) can be trained with back-propagation on an auto-encoding (AE) objective: the RNNAE \cite{nils,emb3}. Formally, the model is composed of an encoder network, a decoder network and a speaker encoder network. The encoder maps $s_t$ to $e_t$, a fixed-size vector.
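Stepping back to the hand-crafted baseline, the Gaussian down-sampling operation can be sketched in a few lines of NumPy. This is a minimal illustrative version: the number of samples $l$ and the Gaussian width `sigma` (in frames) are assumed hyper-parameters, not values taken from the paper.

```python
import numpy as np

def gaussian_downsample(frames, l=10, sigma=1.0):
    """Map a (T, n) sequence of acoustic frames to a fixed-size (l * n,) embedding.

    l equidistant positions are chosen along the sequence and a Gaussian-weighted
    average of the frames is computed around each of them. sigma (in frames) is
    an assumed hyper-parameter.
    """
    T, n = frames.shape
    anchors = np.linspace(0, T - 1, l)               # l equidistant positions
    idx = np.arange(T)
    out = np.empty((l, n))
    for j, a in enumerate(anchors):
        w = np.exp(-0.5 * ((idx - a) / sigma) ** 2)  # Gaussian weights around position a
        out[j] = w @ frames / w.sum()                # weighted-average frame
    return out.reshape(-1)                           # embedding e_t of size l * n

# Any input length T yields the same embedding size l * n.
assert gaussian_downsample(np.random.randn(37, 13)).shape == (130,)
assert gaussian_downsample(np.random.randn(200, 13)).shape == (130,)
```

The same routine applied to 1-hot phoneme vectors instead of acoustic frames gives the GD-1hot top-line.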
The speaker encoder maps the speaker identity to a fixed-size vector $spk_t$. Then, the decoder concatenates $e_t$ and $spk_t$ and maps them to $\hat{s}_t$, a reconstruction of $s_t$. The three networks are trained jointly to minimise the \textit{Mean Square Error} between $\hat{s}_t$ and $s_t$. \subsection{Self-supervised and supervised models: CAE, Siamese and CAE-Siamese} \subsubsection{Advanced training objectives} We consider two popular embedding models. They are also encoder-decoders, but they use additional side information. One is trained according to the Siamese objective \cite{riad,siamese,settle}; the other according to a correspondence auto-encoder (CAE) objective \cite{cae}. Both models assume a set of pairs of sequences from the training corpus. Positive pairs are assumed to have the same transcription; negative pairs, different transcriptions. Let $p_t=(s_t,s_{t'},y)$ where $(s_t,s_{t'})$ is a pair of sequences of lengths $T$ and $T'$. A binary value $y$ indicates the positive or negative nature of the pair. We will see how to find such pairs in the next sub-section. The CAE objective uses only positive pairs. The auto-encoder is asked to encode $s_t$ into $e_t$ and decode it into $\hat{s}_t$. The speaker encoder network is used as for the RNNAE. To satisfy the CAE objective, $\hat{s}_t$ has to minimise the \textit{Mean Square Error} between $\hat{s}_t$ and $s_{t'}$. This forces the auto-encoder to learn a common representation for $s_t$ and $s_{t'}$. The Siamese objective does not need the decoder network. It encodes both $s_t$ and $s_{t'}$ and forces the encoder to learn a similar or different representation depending on whether the pair is positive or negative: $$L_s(e_{t},e_{t'},y)=y \cos(e_t,e_{t'})\, -\, (1-y) \max(0,\cos(e_t,e_{t'})-\gamma)$$ where $\cos$ is the cosine similarity and $\gamma$ is a margin. The margin accounts for negative pairs whose transcriptions have phonemes in common.
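The margin-based Siamese objective can be sketched numerically. Note that the sketch below is written in the common \emph{minimisation} convention, in which positive pairs are pulled together via $1-\cos$ and negative pairs are only penalised when their similarity exceeds the margin $\gamma$; sign conventions for this loss vary across papers, so this is an illustrative variant rather than the paper's exact formula.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def siamese_loss(e_t, e_tp, y, gamma=0.5):
    """Margin-based cosine Siamese loss (minimisation convention, illustrative).

    y = 1 for positive pairs (same transcription), y = 0 for negative pairs.
    Negative pairs below the margin gamma incur no loss, which tolerates
    negatives whose transcriptions share some phonemes.
    """
    c = cosine(e_t, e_tp)
    return y * (1.0 - c) + (1 - y) * max(0.0, c - gamma)

a, b, d = np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert siamese_loss(a, b, y=1) == 0.0  # identical positive pair: no loss
assert siamese_loss(a, d, y=0) == 0.0  # orthogonal negative pair: below the margin
assert siamese_loss(a, b, y=0) > 0.0   # identical negative pair: penalised
```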
These pairs should not have embeddings `too' far away from each other. The CAE and Siamese objectives can also be combined into a CAE-Siamese loss by a weighted sum of their respective loss functions \cite{caesiamese}. \subsubsection{Finding and choosing pairs of speech embeddings} Finding positive pairs of speech sequences is an area of research called \textit{unsupervised term discovery} (UTD) \cite{utd,utd2,utd3,thual}. Such UTD systems can be based on DTW alignment \cite{utd} or involve a $k$-Nearest-Neighbours search \cite{thual}. We opted for the latter, as it is both scalable and among the state-of-the-art methods. It encodes exhaustively all possible speech sequences with an embedding model, and uses optimised $k$-NN search \cite{faiss} to retrieve acoustically similar pairs of speech sequences (see the details in \cite{thual}). In our experiments, we used the pairs retrieved by $k$-NN on \textit{GD-PLP} encoded sequences to train our self-supervised models (CAE, Siamese, CAE-Siamese). As a supervised alternative, it is possible to sample `gold' pairs, i.e.~pairs of elements that have the exact same transcription. These `gold' pairs are given to the CAE, Siamese and CAE-Siamese to train supervised models. These supervised models indicate how good the self-supervised models could become if we enhanced the UTD system. \section{Evaluation metrics and frequency estimation}\label{tasks} \subsection{Intrinsic quality metrics: ABX and MAP} The intrinsic quality of an acoustic speech embedding can be measured using two types of discrimination tasks: the MAP (also called same-different) \cite{map} and ABX \cite{abx} tasks. Let us consider a set of $n$ acoustic speech embeddings: $((e_1,t_1),(e_2,t_2),...,(e_n,t_n))$ where the $e_i$ are the embeddings and the $t_i$ the transcriptions. The ABX task creates all possible triplets ($e_a$,$e_b$,$e_x$) such that $t_a = t_x$ and $t_b \neq t_x$. The model is asked to predict 1 or 0 to indicate whether $e_x$ is of type $t_a$ or $t_b$.
Such triplets are instances of the phonetic contrast between $t_a$ and $t_b$. Formally, for a given triplet, the task is to predict: $$y(e_x,e_a,e_b)=\mathds{1}_{d(e_a,e_x) \leq d(e_b,e_x)}$$ The error rate on this classification task is the ABX score. It is first averaged by type of contrast (all triplets having the same $t_a$ and $t_b$), then averaged over all contrasts.\\ The MAP task forms a list of all possible pairs of embeddings ($e_a$,$e_x$). The model is asked to predict 1 or 0 to indicate whether $e_x$ and $e_a$ have the same type, i.e.~the same transcription, or not. Formally, for a given pair, the model predicts: $$y(e_a,e_x,\theta) =\mathds{1}_{d(e_a,e_x) \leq \theta}$$ The precision and recall on this classification task are computed for various values of $\theta$. The final score of the MAP task is obtained by integrating over the precision-recall curve. \subsection{Downstream task: unsupervised frequency estimation} \subsubsection{The R$^2$ metric} Here, we introduce the novel task of frequency estimation as the assignment, to each speech sequence, of a positive real value that correlates with how frequent the transcription of this sequence is in a given reference corpus\footnote{This estimation could be up to a scaling coefficient; the task of finding exact count estimates is a harder task, not tackled in this paper.}. To evaluate the quality of frequency estimates, we use the coefficient of determination $R^2$ between estimated and true frequencies. We compute this number in log space, to take into account the power-law distribution of frequencies in natural languages \cite{zipf}. This coefficient is between 0 and 1 and tells what percentage of the variance in the true frequencies can be explained by the estimated frequencies. \subsubsection{$k$-NN and density estimation} We propose to estimate frequencies using density estimation, also called the Parzen-Rosenblatt window method \cite{parzen}. Let $N$ be the number of speech sequence embeddings.
First, these $N$ embeddings are indexed into a $k$-NN graph, noted $G$, where all distances between embeddings are computed. Then, for each embedding, we search for the $k$ closest embeddings in $G$. Formally, given an embedding $e_t$ from the $k$-NN graph $G$, we compute its $k$ distances to its $k$ closest neighbours ($d_{n_1}$,...,$d_{n_k}$). The frequency estimation is a density estimation function $\kappa$ of the $k$-NN graph $G$ that has three parameters: a Gaussian kernel width $\beta$, the number of neighbours $k$ and the embedding $e_t$. $$\kappa_G(e_t,\beta,k)= \sum_{i=1}^{k} e^{-\beta d_{n_i}^2}$$ This density estimation yields a real number in $[1,k]$, which we take as our frequency estimation. We set $k$ to $2000$, the maximal frequency that should be predicted according to the transcription of our training corpus (the Buckeye, see Section 4.1). Then, we must tune $\beta$, the dilation of the space of a given embedding model. For each model, we choose $\beta$ such that it maximises the variance of the estimated log frequencies, thereby covering the whole spectrum of possible log frequencies, in our case $[0,\log(k)]$, which is beneficial for power-law types of distribution. Note that $\beta$ cannot be too large (resp.~too small), as the estimator would then predict only low (resp.~high) values. \subsubsection{Density estimation versus clustering} \begin{table}[H] \centering\small \begin{tabular}{lrrr} \toprule Models/methods & K-means & HC-K-means & $k$-NN \\ \midrule GD-1hot & 0.67 & 0.73 & \textbf{0.74} \\ RNNAE Mel-F & 0.30 & 0.35 & \textbf{0.41} \\ CAE Siamese Mel-F & 0.26 & 0.37 & \textbf{0.43} \\ \bottomrule \end{tabular} \newline \newline \caption{Frequency estimation $R^2$ using K-means, HC-K-means and $k$-NN density estimation on a subset of the Buckeye corpus}\label{3} \vspace{-2em} \end{table} We compared density estimation with an alternative method: the clustering of speech embeddings.
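Before turning to that comparison, the $k$-NN density estimator $\kappa_G$ defined above can be sketched with a brute-force distance matrix. The toy data, $k$ and $\beta$ below are illustrative assumptions; the actual experiments use FAISS with $k=2000$.

```python
import numpy as np

def knn_density(embs, k, beta):
    """Frequency estimate kappa_G(e_t, beta, k) = sum_{i=1}^k exp(-beta * d_i^2)
    over the k nearest neighbours in the graph. The query itself is indexed,
    so its zero self-distance contributes exp(0) = 1, giving values in [1, k].
    (Brute-force k-NN; the experiments use an optimised FAISS index.)"""
    X = np.asarray(embs)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    knn_d2 = np.sort(d2, axis=1)[:, :k]                       # k smallest per row
    return np.exp(-beta * knn_d2).sum(axis=1)

# Toy data (assumed): tokens of a frequent type form a dense cluster, so their
# neighbours are close and the density -- hence the estimated frequency -- is higher.
rng = np.random.default_rng(0)
frequent = rng.normal(0.0, 0.05, size=(20, 8))  # 20 tokens of one type
rare = rng.normal(5.0, 0.05, size=(3, 8))       # 3 tokens of another type
dens = knn_density(np.vstack([frequent, rare]), k=5, beta=1.0)
assert dens[:20].mean() > dens[20:].mean()
```

A $\beta$ that is too large drives every term except the self-match to zero (all estimates near 1), while a $\beta$ that is too small drives every term to one (all estimates near $k$), which is why $\beta$ is tuned to maximise the variance of the log estimates.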
Jansen et al. \cite{jansen} did a thorough benchmark of clustering methods on the task of clustering speech embeddings. Across all their metrics, the model that performs best is Hierarchical-K-means (HC-K-means), an improved version of K-means at a higher computational cost. In particular, HC-K-means performs better than GMMs. HC-K-means is not scalable to our data sets, so we extracted 1\% of the Buckeye corpus in order to compare it with our method. A similar corpus size is used by Jansen et al. \cite{jansen}. We applied $k$-NN, K-means and HC-K-means from the python library scikit-learn \cite{scikit-learn} to three of our models on this subset. For K-means and HC-K-means, we used the hyper-parameters that gave the best scores in \cite{jansen}, namely k-means++ initialisation and the average linkage function for HC-K-means. On our subset, the ground truth number of clusters is $K=33000$. Yet, we did a grid-search on the value of $K$ that maximises the $R^2$ score for frequency estimation. We found that K-means and HC-K-means perform better for $K=20000$. This shows that these algorithms are not tuned to handle data distributed according to Zipf's law. Indeed, K-means is subject to the so-called `uniform effect' and tends to find clusters of uniform sizes \cite{uniform}. Table \ref{3} shows that even after optimising the number of clusters $K$, the $k$-NN method outperforms K-means and HC-K-means. \section{Experiments}\label{results} \subsection{Data sets} Five data sets from the ZeroSpeech challenge are at our disposal: Conversational English (a sample of the Buckeye corpus \cite{pitt2005}), English, French, Xitsonga and Mandarin \cite{zs15,zs17}. These are multi-speaker, non-overlapping (i.e.~one speaker per file) recordings of speech. All silences were removed using \textit{voice activity detection} and corrected manually. Each corpus was split into all possible segmentations to produce random speech sequences as described in \cite{thual}.
Random speech sequences span from $70\,ms$ to $1\,s$. Sequences shorter than $70\,ms$ may contain less than one phoneme or ill-pronounced phonemes. Therefore, we removed very short sequences to avoid issues that are out of the scope of this study. The Buckeye sample corpus contains 12 speakers and 5 hours of speech. The French and English corpora being much larger, we reduced their numbers of speech sequences and speakers to the size of the Buckeye. Mandarin and Xitsonga are smaller data sets and were left untouched. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{LaTeX/results.PNG} \caption{Values of the metrics and the downstream task across models and corpora. The average column is the average score over all corpora} \label{1} \end{figure*} \subsection{Training and hyperparameters} Our encoder-decoder network is a specific use of a three-layer bi-directional LSTM as described by Holzenberger et al. \cite{nils}, with hyper-parameters selected to minimise the ABX error on the Buckeye corpus. The speaker embedding network is a single fully connected layer with fifteen neurons. Our UTD system \cite{thual} uses the embeddings of the GD-PLP model. A set of speech pairs is returned, sorted by cosine similarity. We selected the pairs that have a cosine similarity above $0.85$, as it seemed to be optimal on the Buckeye corpus according to the ABX metric. In comparison, we trained our supervised models with `gold' pairs, i.e.~pairs with the exact same transcription. Each corpus was randomly split into train (90\%), dev (5\%) and test (5\%) sets. Neural networks were trained on the train set, early stopping was done using the development set, and metrics were computed on the test set. Specifically, we trained each model on the five training sets using the Buckeye's hyper-parameters. MAP and ABX were computed on the test sets. Frequency estimation was computed by indexing the five training sets and building $k$-NN graphs.
For each element of a given test set, we searched for neighbours and estimated frequencies using the $k$-NN graphs. We used the FAISS \cite{faiss} library, which provides an optimised $k$-NN implementation. \subsection{Results} \subsubsection{Across models} The results of the two metrics and the downstream task are shown in Figure \ref{1}, and the following broad trends can be observed. \begin{itemize}[leftmargin=*] \item Supervised models yield substantially lower performance than the ground truth 1-hot encodings, on all metrics and all languages. These supervised models have a margin for improvement, as they do not learn optimal embeddings despite having access to ground truth labels. \item Supervised models outperform their corresponding self-supervised models, on almost all metrics and for all languages. It means that self-supervision also has a margin for improvement, given better pairs from the UTD systems. \item Among self-supervised and supervised models, the CAE-Siamese Mel-F takes the pole position. This model seems to be able to combine the advantages of both training objectives, a result already claimed by \cite{caesiamese}. \item (Self-)supervised neural networks trained on low-level acoustic features (Mel-F) perform better than or as well as those trained on high-level acoustic features (PLP). This shows that neural networks can learn their own high-level acoustic features from low-level information. \item Self-supervised models are expected to outperform unsupervised models because they use side information. Yet many configurations do not show this consistently. Only the Buckeye data set seems consistent, but this dataset is the one on which pairs were selected through a grid-search to minimise the ABX error. This may be due to the variable quality of the pairs found by UTD; better UTD is therefore needed to help self-supervised models. \item Unsupervised models are supposed to be better than hand-crafted models because they can adjust by learning from the dataset.
Yet, this is not consistently found. Hand-crafted models are worse than unsupervised models for ABX and frequency estimation but not for MAP. \item In detail, which model is best in a particular language depends on the metric. \end{itemize} \subsubsection{Across metrics and frequency estimation} In Table \ref{2}, we quantified the possibility to observe the discrepancies that we have just discussed. We computed the correlation $R^2$ across the three `average' columns. Cross-correlation scores range from $R^2=0.34$ to $0.53$; the top-line model is not included when computing these scores. \begin{table}[H] \centering\small \begin{tabular}{lrrr} \toprule R$^2$ & Frequency est. & MAP & ABX \\ \midrule Frequency est. & 1.0 & 0.34 & 0.53 \\ MAP & 0.34 & 1.0 & 0.45 \\ ABX & 0.53 & 0.45 & 1.0 \\ \bottomrule \end{tabular} \newline \newline \caption{Correlation $R^2$ across the `average' columns of MAP, ABX and frequency estimation}\label{2} \vspace{-2em} \end{table} These correlations are low enough to permit sizeable discrepancies across metrics and the downstream task. One of our models, the RNNAE Mel-F, epitomises the problem. This model is comparatively bad according to the MAP but good according to ABX and the frequency estimation. It means that MAP and ABX reveal different aspects of the reliability of embedding models. Therefore, only large progress according to one metric assures progress according to another metric. It shows the limits of intrinsic evaluation of speech embeddings: moderate variations on an intrinsic metric cannot guarantee progress on a given downstream task. ABX and MAP scores are averages over multiple phonetic contrasts. These contrasts could be clustered based on their phonetic frequencies, average lengths or numbers of phonemes in common. Such fine-grained analyses can sometimes help understand divergences across metrics. However, we have been unable to find a categorisation of results that makes sense of Figure~\ref{1} as a whole.
There are currently no fully reliable metrics to assess the intrinsic quality of speech embeddings. \section{Conclusion} We quantified the correlation across two intrinsic metrics (MAP and ABX) and a novel downstream task: frequency estimation. Although MAP and ABX agree on general categories (like supervised versus unsupervised embeddings), we also found large discrepancies when it comes to selecting a particular model, highlighting the limits of these intrinsic quality metrics. However convenient intrinsic metrics may be, they only show partial views of the overall reliability of a model. We showed, using frequency estimation, that variations on intrinsic quality metrics should not be taken as certain progress on downstream tasks. More attention should be brought to downstream tasks, which have the merit of answering practical problems. \\ \vspace{-1em} \section{Acknowledgements} We thank Matthijs Douze for useful comments on density estimation. We also thank IDRIS from CNRS for offering GPU resources on the supercomputer Jean Zay. This work was funded in part by the Agence Nationale pour la Recherche (ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute), CIFAR, and a research gift by Facebook. \bibliographystyle{IEEEtran}
\section{Introduction} We have previously written about the scale-invariance paradox and shown how it may be resolved by the introduction of filtered-partitioned forms of the transfer spectra \cite{McComb08}, \cite{McComb14a}. In the present paper we carry on this work to show how the underlying symmetries of the triadic interactions in wavenumber space also have implications for any more general study of the Lin equation. We have remarked elsewhere that to treat the Lin equation as purely a local energy balance equation is to be in danger of failing to realize that it is actually a highly non-local equation which couples all modes together. It is in fact the basis of the cascade picture of turbulent energy transfer, and it is important to always bear in mind that the transfer spectrum can be written as an integral over all wavenumbers of a term containing the triple moment. In the present work we will argue that it is desirable to extend this scrutiny to the filtered-partitioned forms of the transfer spectrum in order to achieve a fuller understanding of the basic energy transfer processes. This paper is organized as follows. We begin by stating the Lin equation and making some observations about the conventional interpretation of its role as an energy balance in wavenumber. Next we remind ourselves about the scale-invariance paradox and how it may be resolved. Then we move on to discussing the ways in which the Lin equation can be modified in order to clarify its role. \section{The Lin equation} We begin with the (by now) well-known spectral energy balance equation in its most familiar form, thus: \begin{equation} \left( \frac{\partial}{\partial t} + 2 \nu k^2 \right) E(k,t) = T(k,t), \label{enbalt} \end{equation} where $E(k,t)$ is the energy spectrum, $T(k,t)$ is the energy transfer spectrum and $\nu$ is the kinematic viscosity. A full derivation and discussion will be found in the book \cite{McComb14a}.
We will also follow the growing practice of referring to it as the Lin equation. Now let us integrate each term of (\ref{enbalt}) with respect to wavenumber, from zero up to some arbitrarily chosen wavenumber $\kappa$: \begin{equation} \frac{\partial}{\partial t}\int_{0}^{\kappa} dk\, E(k,t) = \int^{\kappa}_{0} dk\, T(k,t) -2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t). \label{fluxbalt1} \end{equation} The energy transfer spectrum may be written as \begin{equation} T(k,t) = \int^{\infty}_{0} dj\, S(k,j;t), \label{ts} \end{equation} where, as is well known, $S(k,j;t)$ can be expressed in terms of the triple moment. Its antisymmetry under interchange of $k$ and $j$ guarantees energy conservation in the form: \begin{equation} \int^{\infty}_{0} dk\, T(k,t) =0. \label{encon} \end{equation} With some use of the antisymmetry of $S$, along with equation (\ref{encon}), equation (\ref{fluxbalt1}) may be written as \begin{equation} \frac{\partial}{\partial t}\int_{0}^{\kappa} dk\, E(k,t) = - \int^{\infty}_{\kappa} dk\,\int^{\kappa}_{0} dj\, S(k,j;t) -2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t). \label{fluxbalt2} \end{equation} In this familiar form, the integral of the transfer term is readily interpreted as the net flux of energy from wavenumbers less than $\kappa$ to those greater than $\kappa$, at any time $t$. This is the well-known basis for the energy cascade. It is usual to introduce a specific symbol $\Pi$ for this energy flux, thus: \begin{equation} \Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T(k,t) =-\int^{\kappa}_{0} dk\, T(k,t), \label{tp} \end{equation} where the second equality follows from (\ref{encon}). In order to consider the stationary case, we may introduce an input spectrum $W(k)$. It is also convenient to introduce the dissipation spectrum $D(k,t)$ such that: \begin{equation} D(k,t) = 2\nu k^2 E(k,t).
\end{equation} With these introductions, and some rearrangement, we may write the energy balance equation as: \begin{equation} \frac{\partial E(k,t)}{\partial t} = W(k) + T(k,t) - D(k,t). \label{enbalt2} \end{equation} Figure (\ref{fig1}) illustrates the general form of the energy transfers involved. \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 200px,clip]{figs/fig1.pdf} \end{center} \caption{\small A schematic view of the energy transfer in isotropic turbulence. The input spectrum $I(k)$ can represent either the work spectrum $W(k)$ or $-\partial E(k,t)/\partial t$; or the combined effects of both terms. All the other symbols have their usual meaning as defined in the text.} \label{fig1} \end{figure} It should be noted that this general schematic form applies both to the stationary case and the case of free decay, with the input term $I(k)$ being interpreted as appropriate to each case. \section{The paradox and its resolution} The inertial range of wavenumbers is defined as being where the time derivative (or input term) and the viscous term are negligible. Hence, from equation (\ref{enbalt}), it follows that the criterion for an inertial range of wavenumbers can be taken as the vanishing of the transfer spectrum; and, from equation (\ref{tp}), the constancy of the flux. In other words, for wavenumbers $\kappa$ \emph{in the inertial range} we might expect to have: \begin{equation} T(\kappa,t)=0 \qquad \mbox{and} \qquad \Pi(\kappa,t) = \varepsilon. \label{conflux} \end{equation} Scale invariance can be summed up as the observation that the energy spectrum takes the form of a power law (which is in itself scale-free) and that there is a constant rate of energy transfer over a range of wavenumbers, which must necessarily be equal to the rate of energy dissipation. In practice, the second criterion of equation (\ref{conflux}) is widely used to identify the inertial range.
This criterion was first put forward in 1941 by Obukhov \cite{Obukhov41} and first used to derive the famous $-5/3$ spectrum using dimensional analysis by Onsager in 1945 \cite{Onsager45}. More recently, the books by Leslie \cite{Leslie73} and McComb \cite{McComb90a},\cite{McComb14a} all follow Kraichnan \cite{Kraichnan59b}, and cite the criterion $\Pi=\varepsilon$; as does work by, for instance, Bowman \cite{Bowman96}, Thacker \cite{Thacker97}, and Falkovich \cite{Falkovich06}. However, the first criterion given in equation (\ref{conflux}) only holds for a single wavenumber and this fact is the scale-invariance paradox. There are two inertial-range criteria in (\ref{conflux}); and, by elementary calculus, they seem to be equivalent. This point is illustrated in Fig. (\ref{fig2}). It shows an extended region where the flux is constant and also the transfer spectrum is zero. This makes an appealingly simple picture of spectral energy transfers but unfortunately it is wrong. The transfer spectrum always passes through zero at a single point as illustrated in Fig. (\ref{fig1}). \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 200px,clip]{figs/fig2.pdf} \end{center} \caption{\small The expected behaviour of $T(k)$, on the basis of elementary calculus, to correspond to the scale invariance of $\Pi(k)$. The fact that $T(k)$ does not behave like that is the scale-invariance paradox.} \label{fig2} \end{figure} This property of $T(k)$ was first discovered in 1963 by Uberoi \cite{Uberoi63} and later, extensive investigations confirmed that the transfer spectrum always has a single zero-crossing \cite{Bradshaw67,Helland77} and pragmatic, approximate procedures were introduced to allow the inertial range to be identified from the behaviour of the transfer spectrum \cite{Lumley64}. For a discussion of this topic, see \cite{McComb92}. 
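The antisymmetry argument underlying equations (\ref{encon}) and (\ref{tp}) is easy to check numerically. The sketch below discretises an arbitrary antisymmetric kernel $S(k,j)=-S(j,k)$ on a wavenumber grid (the particular functional form is an illustrative assumption, not taken from the theory) and verifies that the total transfer integrates to zero and that the two expressions for the flux agree.

```python
import numpy as np

# Wavenumber grid and an arbitrary antisymmetric toy kernel S(k, j):
# S(k, j) = -S(j, k) holds exactly, as it does for the triple-moment term.
k = np.linspace(0.01, 10.0, 400)
dk = k[1] - k[0]
K, J = np.meshgrid(k, k, indexing="ij")
S = (K - J) * np.exp(-(K - J) ** 2) * np.exp(-0.1 * (K + J))

T = S.sum(axis=1) * dk             # T(k) = int dj S(k, j)
assert abs(T.sum() * dk) < 1e-8    # energy conservation: int dk T(k) = 0

cut = 200                          # kappa = k[cut]
pi_high = T[cut:].sum() * dk       # Pi(kappa) = int_kappa^inf dk T(k)
pi_low = -T[:cut].sum() * dk       # Pi(kappa) = -int_0^kappa dk T(k)
assert abs(pi_high - pi_low) < 1e-8
```

The same discretisation shows the paradox concretely: $T(k)$ crosses zero at a single wavenumber even when $\Pi(\kappa)$ is nearly flat over a range of $\kappa$.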
So, let us consider again equation (\ref{fluxbalt2}) for the transfer of energy from low wavenumbers to high. Now we wish to draw attention to the fact that, although the first term on the right hand side correctly represents the integral over wavenumber $k$ of the transfer spectrum from zero up to $\kappa$, nevertheless the integrand is not actually $T(k)$ (from now on, we shall suppress time arguments in the interests of conciseness). In fact the integrand represents \emph{some part of} $T(k)$, because the internal integration with respect to the dummy variable $j$ has been truncated at $j=\kappa$. In order to clarify this situation, it will be found helpful to introduce low- and high-pass filtering operations, based on a cut-off wavenumber $k=\kappa$, on the Fourier components of the velocity field. These operations are used for the study of spectral mode elimination in the context of large-eddy simulation and its associated subgrid modelling: see, for example, \cite{McComb01a} and references therein. We are thus led to introduce transfer spectra which have been filtered with respect to $k$ and which have had their integration over $j$ partitioned at the filter cut-off, i.e. $j=\kappa$. Beginning with the Heaviside unit step function, defined by: \begin{eqnarray} H(x) & = & 1 \qquad \mbox{for} \qquad x > 0; \\ & = & 0 \qquad \mbox {for} \qquad x < 0. \end{eqnarray} we may define low-pass and high-pass filter functions, thus: \begin{equation} \theta^{-}(x) = 1 - H(x), \end{equation} and \begin{equation} \theta^{+}(x) = H(x). 
\end{equation} We may then decompose the transfer spectrum, as given by (\ref{ts}), into four constituent parts, \begin{equation} T^{--}(k|\kappa) = \theta^{-}(k-\kappa)\int^{\kappa}_{0}dj\, S(k,j); \label{tmm} \end{equation} \begin{equation} T^{-+}(k|\kappa) = \theta^{-}(k-\kappa)\int^{\infty}_{\kappa}dj\, S(k,j); \label{tmp} \end{equation} \begin{equation} T^{+-}(k|\kappa) = \theta^{+}(k-\kappa)\int^{\kappa}_{0}dj\, S(k,j); \label{tpm} \end{equation} and \begin{equation} T^{++}(k|\kappa) = \theta^{+}(k-\kappa)\int^{\infty}_{\kappa}dj\, S(k,j), \label{tpp} \end{equation} such that the overall requirement of energy conservation is satisfied: \begin{equation} \int^{\infty}_{0}dk\left[T^{--}(k|\kappa) + T^{-+}(k|\kappa) + T^{+-}(k|\kappa) + T^{++}(k|\kappa)\right] = 0. \end{equation} It is readily verified that the individual filtered/partitioned transfer spectra have the following properties: \begin{equation} \int^{\kappa}_{0}dk\, T^{--}(k|\kappa) = 0; \label{mm} \end{equation} \begin{equation} \int^{\kappa}_{0}dk\, T^{-+}(k|\kappa) = -\Pi(\kappa); \label{mp} \end{equation} \begin{equation} \int^{\infty}_{\kappa}dk\, T^{+-}(k|\kappa) = \Pi(\kappa); \label{pm} \end{equation} and \begin{equation} \int^{\infty}_{\kappa}dk\, T^{++}(k|\kappa) = 0. \label{pp} \end{equation} Equation (\ref{fluxbalt1}) may be rewritten in terms of the filtered/partitioned transfer spectrum as: \begin{equation} \frac{d}{dt}\int^{\kappa}_{0}dk\, E(k,t) = -\int^{\infty}_{\kappa}dk\, T^{+-}(k|\kappa) -2\nu_{0}\int^{\kappa}_{0}dk\, k^{2}E(k,t). \label{fluxbaltmod} \end{equation} We note from equation (\ref{mm}) that $T^{--}(k|\kappa)$ is conservative on the interval $[0,\kappa]$, and hence does not appear in (\ref{fluxbaltmod}), while $T^{-+}(k|\kappa)$ has been replaced by $-T^{+-}(k|\kappa)$, using (\ref{mp}) and (\ref{pm}). Filtered and partitioned transfer spectra have been measured, using DNS, in the context of spectral large-eddy simulation. 
In particular, Zhou and Vahala \cite{Zhou93a} found that the resolvable-scales energy transfer spectrum $T^{<<}(k)$ (i.e. $T^{--}(k|\kappa)$ in our notation) is conservative on the interval $0\leq k \leq \kappa$, in agreement with our equation (\ref{mm}); while the resolvable-subgrid transfer spectrum (i.e. our $T^{-+}(k|\kappa)$) is zero over a range of wavenumbers. Similar behaviour has also been found in the more detailed investigation by McComb and Young \cite{McComb98}. \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 0px,clip]{figs/fig3.pdf} \end{center} \caption{\small The behaviour of the filtered-partitioned transfer spectra: the paradox resolved!} \label{fig3} \end{figure} As we have previously pointed out in \cite{McComb08}, experimentalists, who do not have access to partitioned versions of the transfer spectrum, will still find pragmatic procedures, such as the Lumley criterion for the inertial range \cite{Lumley64}, useful. However, those working with DNS or analytical theory can avoid the paradox by changing their definition of energy fluxes, from those given by (\ref{tp}), to the forms\footnote{We should mention that these forms are exactly equivalent to Kraichnan's original definition of what he called the \emph{transport power} \cite{Kraichnan59b}. In later work \cite{Kraichnan64b}, his definition of the transport power was equivalent to equation (\ref{tp}) in the present paper.}: \begin{equation} \Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T^{+-}(k|\kappa,t) =-\int^{\kappa}_{0} dk\, T^{-+}(k|\kappa,t), \label{tpmod} \end{equation} where $T^{+-}(k|\kappa,t)$ is defined by (\ref{tpm}) and $T^{-+}(k|\kappa,t)$ by (\ref{tmp}). This is equivalent to (\ref{tp}); but, unlike it, avoids the paradox. This resolution of the paradox is shown schematically in Fig. (\ref{fig3}).
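As a concrete check, the conservation properties (\ref{mm})--(\ref{pp}) and the consistency of the redefined flux (\ref{tpmod}) hold for \emph{any} antisymmetric transfer density. The following numerical sketch uses a toy form of $S(k,j)$, chosen purely for illustration, to verify them on a grid:

```python
import numpy as np

# Toy antisymmetric transfer density S(k, j) = -S(j, k), standing in
# for the true nonlinear transfer term; this specific functional form
# is an arbitrary illustrative assumption.
n = 400
k = np.linspace(0.01, 4.0, n)
dk = k[1] - k[0]
K, J = np.meshgrid(k, k, indexing="ij")
S = (K - J) * np.exp(-(K**2 + J**2))     # antisymmetric by construction

c = n // 2                               # grid index of the cut-off kappa

# Filter in k and partition the j-integration at kappa:
T_mm = S[:c, :c].sum(axis=1) * dk        # k < kappa, j < kappa
T_mp = S[:c, c:].sum(axis=1) * dk        # k < kappa, j > kappa
T_pm = S[c:, :c].sum(axis=1) * dk        # k > kappa, j < kappa
T_pp = S[c:, c:].sum(axis=1) * dk        # k > kappa, j > kappa

# Conservation on the two sub-intervals, as in (mm) and (pp):
assert abs(T_mm.sum() * dk) < 1e-9
assert abs(T_pp.sum() * dk) < 1e-9

# Equal and opposite fluxes across kappa, as in (mp), (pm) and (tpmod):
assert abs(T_pm.sum() * dk + T_mp.sum() * dk) < 1e-9
```

The two conservation sums vanish by the antisymmetry of $S$ restricted to each square block, and the cross-block sums cancel pairwise, which is precisely the content of the redefined flux.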
\section{Modifications to the Lin equation} In view of the above discussion, the obvious step now is to filter the energy spectrum in the same way as we have done for the transfer spectrum, and consider low-$k$ and high-$k$ forms of the Lin equation. In order to do this we make the decomposition: \begin{equation} E(k,t) = E^-(k|\kappa,t) + E^+(k|\kappa,t), \label{decompE} \end{equation} where $E^-$ is defined for $k\leq \kappa$ and $E^+$ is defined for $k\geq \kappa$. Trivially, we can also do this for the input spectrum $W(k)$ and dissipation spectrum $D(k,t)$, and equation (\ref{enbalt2}) can be written in low-$k$ and high-$k$ forms respectively, as: \begin{equation} \frac{\partial E^-(k|\kappa,t)}{\partial t} = W^-(k|\kappa) + T^{--}(k|\kappa,t) + T^{-+}(k|\kappa,t) - D^-(k|\kappa,t), \quad \mbox{for}\quad k \leq \kappa; \label{enbalt_low} \end{equation} and \begin{equation} \frac{\partial E^+(k|\kappa,t)}{\partial t} = W^+(k|\kappa) + T^{+-}(k|\kappa,t) + T^{++}(k|\kappa,t) - D^+(k|\kappa,t), \quad \mbox{for}\quad k \geq \kappa. \label{enbalt_high} \end{equation} For this decomposition to be meaningful, the Reynolds number must be large enough for the inertial flux to be equal to the dissipation, in accordance with the second criterion of equation (\ref{conflux}). As we increase the Reynolds number beyond this critical value, we have an increasing range of wavenumbers $k$ which satisfy that criterion, and this is the \emph{inertial range}. We shall denote this range by \[ k_{\mbox{\scriptsize bot}} \leq k \leq k_{\mbox{\scriptsize top}} \quad \equiv \quad \mbox{the inertial range of wavenumbers,} \] where we now have to define $k_{\mbox{\scriptsize bot}}$ and $k_{\mbox{\scriptsize top}}$. For the sake of simplicity, we will consider stationary turbulence and omit the time variables. First, we need to consider the nature of the forcing spectrum $W(k)$.
In formulating the turbulence problem according to the tenets of statistical physics, this is normally taken to arise from the introduction of random stirring forces, which are assumed to be of \emph{white noise} form. In particular, the forcing spectrum is taken to be peaked near the origin in wavenumber space, so that the turbulence that results from it is due to the Navier-Stokes equation, and not specifically related to the forcing. We should note that a different view was taken from the late 1970s onwards, in connection with the application of renormalization group methods to the Navier-Stokes equation. See either of the books \cite{McComb90a} or \cite{McComb14a} for a general discussion of this point. Accordingly, for theoretical approaches to the statistical closure problem, and also for direct numerical simulation, we should choose a form of forcing spectrum $W(k)$ which satisfies the conditions: \begin{equation} \int_0^\infty dk W(k) = \varepsilon_W \simeq \int_0^{k_{\mbox{\scriptsize bot}}} dk W(k), \label{kbot} \end{equation} where the equality defines $\varepsilon_W$, while the approximate equality defines $k_{\mbox{\scriptsize bot}}$, which we take to be the lower limit of the inertial range. In general, we would require $k_{\mbox{\scriptsize bot}}$ to be very much smaller than the Kolmogorov dissipation wavenumber $k_d$, which is generally taken as being an indicator of the dissipation range of wavenumbers. Experimenters have usually taken the upper limit of the inertial range to be about $0.1 k_d - 0.2 k_d$. In fact we will define $k_{\mbox{\scriptsize top}}$ by another approximate equality, thus: \begin{equation} \int_0^\infty dk D(k) = \varepsilon \simeq \int_{k_{\mbox{\scriptsize top}}}^\infty dk D(k), \label{ktop} \end{equation} where the equality is the conventional definition of the dissipation rate, and the approximate equality defines the upper limit of the inertial range $k_{\mbox{\scriptsize top}}$.
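The two defining conditions (\ref{kbot}) and (\ref{ktop}) are easily made concrete numerically. In the sketch below, the model forms of $W(k)$ (peaked near the origin) and $D(k)$ (peaked at high $k$), together with the 99\%/1\% thresholds standing in for the approximate equalities, are illustrative assumptions only:

```python
import numpy as np

# Numerical illustration of the definitions of k_bot and k_top.
# The model spectra are assumptions chosen only to make the two
# approximate equalities in the text concrete.
k = np.linspace(1e-3, 100.0, 100000)
dk = k[1] - k[0]

W = k**4 * np.exp(-(k / 0.5) ** 2)    # input spectrum, peaked near the origin
D = k**4 * np.exp(-(k / 40.0) ** 2)   # dissipation spectrum, peaked at high k

eps_W = W.sum() * dk                  # total input rate (defines eps_W)
eps = D.sum() * dk                    # total dissipation rate (defines eps)

# k_bot: the wavenumber below which (here) 99% of the input occurs.
k_bot = k[np.searchsorted(np.cumsum(W) * dk, 0.99 * eps_W)]

# k_top: the wavenumber below which only 1% of the dissipation occurs,
# so that nearly all dissipation lies above k_top.
k_top = k[np.searchsorted(np.cumsum(D) * dk, 0.01 * eps)]

# For this well-separated choice, a wide inertial range exists:
assert k_bot < k_top
```

With input and dissipation this well separated, the interval $[k_{\mbox{\scriptsize bot}}, k_{\mbox{\scriptsize top}}]$ is wide, which is the situation assumed in what follows.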
With these points in mind, we may simplify the low-wavenumber and high-wavenumber forms of the Lin equation, respectively (\ref{enbalt_low}) and (\ref{enbalt_high}), to: \begin{equation} \frac{\partial E^-(k|\kappa,t)}{\partial t} = W(k) + T^{--}(k|\kappa,t) + T^{-+}(k|\kappa,t), \quad \mbox{for}\quad k \leq \kappa; \label{Lin-low} \end{equation} and \begin{equation} \frac{\partial E^+(k|\kappa,t)}{\partial t} = T^{+-}(k|\kappa,t) + T^{++}(k|\kappa,t) - D(k,t), \quad \mbox{for}\quad k \geq \kappa. \label{Lin-high} \end{equation} That is, for sufficiently high Reynolds numbers, and an appropriate choice of stirring forces, we may simplify matters by treating the input spectrum as being confined to the low-wavenumber region and the dissipation spectrum as being confined to the high-wavenumber region. Deriving the flux balance equations from (\ref{Lin-low}) and (\ref{Lin-high}), and invoking equations (\ref{kbot}) and (\ref{ktop}), we obtain the final flux balances as: \begin{equation} \varepsilon_W - \Pi(\kappa) = 0 \quad \mbox{for}\quad k \leq \kappa; \end{equation} and \begin{equation} \Pi(\kappa) - \varepsilon = 0 \quad \mbox{for}\quad k \geq \kappa. \end{equation} Reminding ourselves that the transfer spectrum has its single zero crossing at $k=k_\ast$, we may define the maximum value of the inertial flux as \begin{equation} \Pi_{\mbox{max}} = \Pi(k_\ast) = \varepsilon_T, \end{equation} and at the same time introduce the useful symbol $\varepsilon_T$ for the maximum flux. Since $k_\ast$ must lie within the inertial range, we can write the general criterion for the existence of the inertial range as: \begin{equation} \Pi(\kappa) = \varepsilon_T = \varepsilon_W = \varepsilon. \end{equation} For completeness it should be noted that this analysis is readily extended to the case of free decay, if we replace $\varepsilon_W$ by the energy decay rate $\varepsilon_D$. Further details may be found in \cite{McComb14a}.
\section{Conclusion} Provided we are faced with the ideal situation, where the input and the output (\emph{i.e.} dissipation) are well separated in wavenumber space, equations (\ref{Lin-low}) and (\ref{Lin-high}) may provide a new and, one might hope, productive basis for the study of the energy transfers in isotropic turbulence. The corresponding partitioned-filtered Navier-Stokes equations are readily deduced and may be studied by direct numerical simulation as a four-component composite dynamical system, where the four components correspond to the four filtered-partitioned transfer spectra. Also, there is a growing use of hybrid approaches in fluid dynamics problems, and the closure problem could be approached in such a way by using different methods to tackle the different filtered-partitioned transfer spectra. For instance, in the low-$k$ system, we might use the local energy transfer theory \cite{McComb17a} for $T^{--}(k)$, and renormalization group methods \cite{McComb06} for $T^{-+}(k)$; or, conceivably, the other way round! It would require investigation. For the ideal situation just discussed, where we have the input and output (or, production and dissipation) ranges of wavenumber well separated, we need to choose the input spectrum $W(k)$ to be peaked near the origin; and also we need the Reynolds number to be reasonably high. If, for some reason, we cannot satisfy these conditions, then we must resort to equations (\ref{enbalt_low}) and (\ref{enbalt_high}). However, even so, we must still have the Reynolds number large enough for the condition for the existence of an inertial range to be satisfied. Lastly, I should emphasise that Fig. (\ref{fig3}) is very much a schematic indication of how this graph should look, based on the small amount of information available to us.
The behaviour of these filtered-partitioned transfer spectra was studied in the 1990s in the context of subgrid modelling and renormalization group methods: see \cite{McComb08} for references. Computers have advanced a lot since then, so we end with a plea to the effect that this field of study should be revived in the context of later work. An informal introduction to this topic may be found in the post of 23 July on the following weblog: blogs.ed.ac.uk/physics-of-turbulence/. \section*{Acknowledgements} I wish to thank John Morgan who worked on this topic with me as part of his MPhys research project in the academic year 2018/19. It was John's idea to plot Fig. (\ref{fig3}) in order to make the resolution of the scale-invariance paradox clearer and he also prepared the figures.
\section{Introduction} \label{sec:intro} \input{src/sections/introduction} \section{State-of-the-Art on \ac{ML}-based RF Signal Classification} \label{sec:related} \input{src/sections/related} \section{\ac{RAT} Characterisation} \label{sec:classifier} \input{src/sections/classifier.tex} \section{Dataset Generation} \label{sec:implementation} \input{src/sections/implementation.tex} \section{Performance Evaluation} \label{sec:validation} \input{src/sections/validation} \section{Conclusion} \label{sec:conclusion} \input{src/sections/conclusion} \section*{Acknowledgements} The research leading to this work received funding from the European Horizon 2020 Program under the grant agreement No. 732174 (ORCA project). In addition, this work was partly funded by Science Foundation Ireland (SFI) and the National Natural Science Foundation of China (NSFC) under the SFI-NSFC Partnership Programme Grant Number 17/NSFC/5224. \balance \bibliographystyle{./templates/IEEEtran} \section*{Acronyms} \begin{acronym}[IMT-Advanced] \acro{NR-U}{New Radio Unlicensed} \acro{IoU}{intersection over union} \acro{mAP}{mean Average Precision} \acro{LSTM}{long short term memory} \acro{FNN}{fully connected neural network} \acro{RForest}{Random Forest} \acro{3GPP}{3rd Generation Partnership Project} \acro{ABS}{Almost Blank Subframes} \acro{Adam}{Adaptive Moment Optimisation} \acro{ADC}{Analogue-to-Digital Converter} \acro{AMPS}{Advanced Mobile Phone System} \acro{AoA}{Angle of Arrival} \acro{AoD}{Angle of Departure} \acro{AP-CNN}{Amplitude and Phase shift \ac{CNN}} \acro{ASP}{Antenna Scan Period} \acro{BBU}{Baseband Unit} \acro{BER}{Bit Error Rate} \acro{BLER}{Block Error Rate} \acro{BPSK}{Binary \ac{PSK}} \acro{BS}{Base Station} \acro{bw}{bandwidth} \acro{CBRS}{Citizens Broadband Radio Service} \acro{CDMA}{Code Division Multiple Access} \acro{CDM}{Code Division Multiplexing} \acro{CFO}{Carrier \acl{FO}} \acro{C-MTC}{Mission-Critical \acs{MTC}} \acro{CN}{Core Network} \acro{CNN}{Convolutional
Neural Network} \acro{CP}{Cyclic Prefix} \acro{C-RAN}{Cloud-\ac{RAN}} \acro{CR}{Cognitive Radio} \acro{CriC}{Critical Communication} \acro{CSAT}{Carrier Sense Adaptive Transmission} \acro{CS}{Cyclic Suffix} \acro{CSMF}{Communication Service Management Function} \acro{CV}{Computer Vision} \acro{DAC}{Digital-to-Analogue Converter} \acro{D-AMPS}{Digital \acs{AMPS}} \acro{DC}{Direct Current} \acrodefplural{RAT}[RATs]{Radio Access Technologies} \acrodefplural{SDS}[SDSs]{Software-defined Switches} \acro{DEQUE}{Double-Ended Queue} \acro{DL}{Deep Learning} \acro{DNN}{Deep Neural Network} \acro{DSA}{Dynamic Spectrum Access} \acro{DS-CDMA}{Direct Sequence \acs{CDMA}} \acro{E2E}{end-to-end} \acro{ECC}{Electronic Communications Committee} \acro{EDGE}{Enhanced Data rates for \acs{GSM} Evolution} \acro{eMBB}{Enhanced \acl{MBB}} \acro{eV2X}{Enhanced \ac{V2X}} \acro{EV-DO}{Evolution-Data Optimized} \acro{FCC}{Federal Communications Commission} \acro{FD}{Frame Duration} \acro{FDMA}{Frequency Division Multiple Access} \acro{FDM}{Frequency Division Multiplexing} \acro{FHSS}{Frequency-hopping spread spectrum} \acro{FI}{Frame Interval} \acro{FM}{Frequency Modulation} \acro{FO}{Frequency Offset} \acro{FPGA}{Field-Programmable Gate Array} \acro{FS}{Flow Space} \acro{FSK}{Frequency-Shift Keying} \acro{GLDB}{Geo-Location Database} \acro{GP-KNN}{Genetic Programming with K-Nearest Neighbors} \acro{GPP}{general purpose process} \acro{GPRS}{General packet radio service} \acro{GPU}{Graphics Processing Unit} \acro{GRC}{Global Radio Coordinator} \acro{GSM}{Global System for Mobile communication} \acro{H2H}{Human-to-Human} \acro{HetNet}{Heterogeneous Network} \acro{HSDPA}{High Speed Downlink Packet Access} \acro{HSPA}{High Speed Packet Access} \acro{HSUPA}{High-Speed Uplink Packet Access} \acro{iDEN}{Integrated Digital Enhanced Network} \acro{IFD}{Inter-Frame Duration} \acro{IMT-2000}{International Mobile Telecommunications-2000} \acro{IMT-Advanced}{International Mobile 
Telecommunications-Advanced} \acro{INR}{Interference-to-Noise Ratio} \acro{IoT}{Internet of Things} \acro{IPM}{Intra-Pulse Modulation} \acro{ISM}{Industrial, Scientific and Medical} \acro{ITU}{International Telecommunication Union} \acro{LBT}{Listen Before Talk} \acro{LFM}{Linear Frequency Modulation} \acro{LRC}{Local Radio Controller} \acro{LTE-A}{\acs{LTE}-Advanced} \acro{LTE}{Long-Term Evolution} \acro{LTE-U}{\ac{LTE} in unlicensed spectrum} \acro{LWA}{\acs{LTE}-\acs{WLAN} aggregation} \acro{mAP}{mean average precision} \acro{MCD}{Measurement Capable Device} \acro{mIoT}{Massive \ac{IoT}} \acro{ML}{Machine Learning} \acro{M-MTC}{Massive \acs{MTC}} \acro{MNO}{Mobile Network Operator} \acro{MTC}{Machine Type Communication} \acro{MVNO}{Mobile Virtual Network Operator} \acro{NMT}{Nordic Mobile Telephone} \acro{NSMF}{Network Slice Management Function} \acro{NS}{Network Slice} \acro{NSSMF}{Network Slice Subnet Management Function} \acro{OFDMA}{Orthogonal Frequency Division Multiple Access} \acro{OFDM}{Orthogonal Frequency Division Multiplexing} \acro{ONF}{Open Network Foundation} \acro{OS}{Operational System} \acro{OTT}{Over-The-Top} \acro{PDC}{Personal Digital Cellular} \acro{PER}{Packet Error Rate} \acro{PLC}{Process Logic Controller} \acro{POCSAG}{Post Office Code Standardization Advisory Group} \acro{PRI}{Pulse Repetition Interval} \acro{PSD}{Power Spectral Density} \acro{PSK}{Phase-Shift Keying} \acro{PW}{Pulse Width} \acro{QoE}{Quality of Experience} \acro{QoS}{Quality of Service} \acro{RAN}{Radio Access Network} \acro{RAT}{Radio Access Technology} \acro{ReLU}{Rectified Linear Unit} \acro{REM}{Radio Environment Map} \acro{RF}{Radio Frequency} \acro{RMSE}{Root Mean Squared Error} \acro{RMSProp}{Root Mean Square Propogation} \acro{RSSI}{Received Signal Strength Indicator} \acro{Rx}{receiver} \acro{S/A}{Sensors/Actuators} \acro{S-CNN}{Spectrogram \ac{CNN}} \acro{SDN}{Software Defined Network} \acro{SDR}{Software-Defined Radio} \acro{SDS}{Software-defined Switch} 
\acro{SER}{Symbol Error Rate} \acro{SFI}{Science Foundation Ireland} \acro{SIMD}{Single Instruction Multiple Data} \acro{SINR}{Signal-to-Interference-plus-Noise Ratio} \acro{SNR}{Signal-to-Noise Ratio} \acro{SRO}{Symbol Rate Offset} \acro{SU}{Secondary User} \acro{SVM}{Support Vector Machine} \acro{TACS}{Total Access Communications System} \acro{TDMA}{Time Division Multiple Access} \acro{TDM}{Time Division Multiplexing} \acro{TVWS}{TV White Space} \acro{Tx}{transmitter} \acro{UE}{User Equipment} \acro{UHD}{\acs{USRP} Hardware Driver} \acro{UMTS}{Universal Mobile Telecommunications System} \acro{URLLC}{Ultra-Reliable Low Latency Communication} \acro{USRP}{Universal Software Radio Peripheral} \acro{V2X}{Vehicular-to-Everything} \acro{VM}{Virtual Machine} \acro{VOC}{Visual Object Classes} \acro{WCDMA}{Wideband Direct Sequence \acs{CDMA}} \acro{WiMax}{Worldwide Interoperability for Microwave Access} \acro{WLAN}{Wireless Local Area Network} \acro{WMN}{Wireless Mesh Network} \acro{WMWG}{Wireless and Mobile Working Group} \acro{WNV}{Wireless Network Virtualization} \acro{WSN}{Wireless Sensor Network} \acro{YOLO}{You Only Look Once} \acro{5G}{fifth generation of wireless technology} \end{acronym} \subsection{Image-based \ac{RAT} Classifier} We developed a \ac{CNN}-based classifier for recognising different \acp{RAT} coexisting in shared spectrum. Our classifier can identify multiple \acp{RAT} by directly applying object detection to spectrograms. The \ac{CNN} must be trained and validated against target objects. Depending on the size of the neural network and the computing platform available, the training and validation of the \ac{CNN} from scratch may take between hours and days. One way to reduce this time is by applying transfer learning, which relies on the partial reuse of a previously trained model (trained on a different set of tasks) for addressing a new task. 
This implies retraining an existing network, typically by fine-tuning the weights from the hidden layers close to the output layer, to make the network more suitable to the new task. As such, the first layers, which are typically good at extracting basic features such as edge detection in computer vision tasks, are reused for the new task as well. Transfer learning significantly decreases the amount of data required for the training process and, consequently, its duration. The application of transfer learning requires the choice of a previously trained network as a starting point. A broad range of pre-trained networks already exists; these are suitable for different problems, e.g., predictive text, speech recognition, and image object detection. For the spectrum sharing scenario, where it is necessary to dynamically assess how the spectrum is being occupied, we need a model that can provide acceptable classification accuracy in real-time. We also require a solution that can provide not just the classification of the object, but also its localisation in the image (as discussed later, we rely on this localisation information for feature extraction). We employ the well-known object detection model \ac{YOLO} \cite{yolo} as the starting point for our \ac{RAT} classifier. \ac{YOLO} is one of the most efficient solutions in the literature for real-time implementation of object detection. This model outputs both the class of the detected objects and their position in the input image. Using weights and architecture from \ac{YOLO} pre-trained on ImageNet \cite{imagenet}, we modify the Softmax layer, which corresponds to the last layer before the output of the model. During the training process, the Softmax layer is explicitly optimised for the classification of \ac{LTE} and WiFi waveforms. The architecture we adopted is presented in \cite{yolo2}; it has 19 convolution layers and 5 max-pooling layers.
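The division of labour in transfer learning (frozen early layers, retrained output layer) can be sketched with a toy stand-in model. Nothing below is the actual \ac{YOLO} code; all names, dimensions, and the logistic head are illustrative assumptions:

```python
import numpy as np

# Schematic sketch of transfer learning: a frozen "pre-trained" feature
# extractor (standing in for the early convolutional layers) feeds a
# newly trained output layer. Entirely illustrative; not the paper's code.
rng = np.random.default_rng(0)

W_frozen = rng.standard_normal((8, 4))       # pretend pre-trained weights

def features(x):
    """Frozen early layers: W_frozen is never updated during retraining."""
    return np.maximum(x @ W_frozen, 0.0)     # ReLU activations

# Toy two-class task (think LTE vs WiFi) on top of the frozen features:
# only the head weights w are optimised.
X = rng.standard_normal((64, 8))
y = (X[:, 0] > 0).astype(float)
F = features(X)

w = np.zeros(4)                              # new head, trained from scratch
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-F @ w))         # sigmoid output layer
    w -= 0.1 * F.T @ (p - y) / len(y)        # gradient step on head only

p = np.clip(1.0 / (1.0 + np.exp(-F @ w)), 1e-9, 1 - 1e-9)
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
assert loss < np.log(2)                      # improved on the untrained head
```

Only the head's four weights are updated; in the real classifier the analogous step is retraining the final layer of \ac{YOLO} while the early layers keep their ImageNet weights.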
Moreover, our model can easily be extended to support more \acp{RAT}, by retraining it with datasets that include new waveforms. \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\linewidth]{lte.png} \caption{\ac{LTE} detection.} \label{fig:1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\linewidth]{wifi.png} \caption{WiFi detection.} \label{fig:2} \end{subfigure} \caption{Spectrogram with the bounding boxes created by our \ac{ML}-based signal classifier. The positions of the bounding boxes represent the detection of the frame, and the colour represents the classification, blue for LTE and white for WiFi.}\label{fig:spec} \end{figure} The training itself requires the fine-tuning of parameters related to the learning rate and convergence of the classification, known as the hyperparameters. Hyperparameters are set before the training process begins; examples include the learning rate, the optimiser, and the number of epochs. We detail our choices for the hyperparameters below: \begin{itemize} \item {Learning Rate}: is the amount by which the weights in an ML model are updated. We set it to $10^{-5}$; with this value, the model did not overfit and was able to learn the objects' characteristics. \item {Epoch}: is an iteration of the training process where the model is filled with all the elements of the training dataset. If a model is trained with too many epochs, it can overfit to the training data, while if a model uses too few epochs, it might not learn the necessary features to perform the classification. After testing several values, we set the number of epochs to 50,000. \item {Mini-batch}: is a part of the dataset used to update the network's weights. The first approaches in \ac{ML} used the entire dataset to update the weights in the network; however, the work of~\cite{masters2018revisiting} argues that this update should use a smaller part of the dataset, called a mini-batch.
The mini-batch approach can increase model performance when it uses batches with values between 2 and 32 \cite{masters2018revisiting}. In the early stages of the design process of our solution, we observed good performance when setting the mini-batch to 32. \item{Optimiser}: is the function that modifies the weights of each neuron with the purpose of minimising the loss function. The loss function indicates how close the output of the model is to the expected result. The main objective of the learning process is to optimise the loss function, making the predicted output closer to the expected one without over-fitting to the training data. We chose the optimiser \ac{Adam} because it accelerates the search for the minimum of the loss function and reduces oscillations. \end{itemize} After training, our model produces the identification of the \ac{RAT} (i.e., the result of the classification) and the coordinates of each frame detected in the spectrogram image. Figure~\ref{fig:spec} shows examples of LTE and WiFi frames detected, surrounded by bounding boxes: blue for LTE, white for WiFi. The four coordinates of each of these bounding boxes are used by the feature extraction component, discussed next. Once our model is trained and validated, it can provide results on the fly, making it suitable for real-time applications. Our classifier analyses frames in batches of three frames each, providing three outputs at the same time; this allows us to parallelise the classification task and use multiple cores in parallel. A trade-off that is important to consider is the implication of this design choice on real-time detection and \ac{RAT} classification: the number of images analysed simultaneously cannot be too large, otherwise the model will not operate in real-time. In our implementation, we evaluated the classification speed using a computer with an Intel Core i7-6820HK processor and a GeForce GTX 1070 Mobile.
With this commercial off-the-shelf \ac{GPU}, we are able to analyse three images in around 0.1ms with 2 classes and trained with a commercial transmission dataset (described later). \begin{comment} \begin{table}[] \centering \caption{Summary of the hyperparameters for the classifier model.} { \begin{tabular}{|l|l|} \hline Parameters & Value \\ \hline Learning rate & $10^{-5}$ \\ Epochs & 50.000 \\ Batch & 32 \\ Optimiser & Adam \\\hline Training dataset & 400 spectograms\\\hline \end{tabular} } \label{tab:hyper} \end{table} \end{comment} \subsection{Post-processing Feature Extraction} Once the classification of the \ac{RAT} is completed, the feature extraction component allows us to obtain additional information about the \acp{RAT} present in a given channel. The spectrogram corresponds to a band of frequencies [$f_1$, $f_2$], collected during a time interval [$t_1$, $t_2$]. Then, we calculate the granularity that each pixel in the image represents in the time and frequency domains, as an increment value in time ($I_T$) and frequency ($I_F$), respectively. This mapping depends on the size of the spectrogram ($[X_{min},X_{max}], [Y_{min},Y_{max}]$)\footnote{Note that uppercase $X$ and $Y$ refer to the spectrogram, and lowercase $x$ and $y$ refer to the bounding box around a frame.}. The trained model provides the corners of a rectangle that encloses a transmission frame, denoted by coordinates $x_{min}, x_{max}, y_{min}, y_{max}$. Given the coordinates of this rectangle, i.e., the bounding box, as well as the values of each time and frequency increment, we can localise the signals in the spectrum and in time. In order to calculate the bandwidth of the signal ($b_w$) and its centre frequency ($f_c$), we use the horizontal coordinates of the corners of the bounding box, translating them into their respective value in frequency. The \ac{FD} of the signal is calculated in a similar manner, but now using the vertical coordinates of the corners of the bounding box. 
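The mapping just described can be sketched as a small helper function. The function and its argument names are hypothetical, introduced only to illustrate the computation; the symbols follow the paper's notation:

```python
# Hypothetical helper implementing the pixel-to-RF mapping described
# in the text; an illustrative sketch, not the authors' implementation.
def extract_features(box, spec_extent, rf_band, time_span):
    x_min, x_max, y_min, y_max = box             # bounding box (pixels)
    X_min, X_max, Y_min, Y_max = spec_extent     # full spectrogram (pixels)
    f1, f2 = rf_band                             # band edges (Hz)
    t1, t2 = time_span                           # capture interval (s)

    I_F = (f2 - f1) / (X_max - X_min)            # Hz per pixel
    I_T = (t2 - t1) / (Y_max - Y_min)            # seconds per pixel

    b_w = (x_max - x_min) * I_F                  # signal bandwidth
    f_c = f1 + I_F * x_min + b_w / 2             # centre frequency
    fd = (y_max - y_min) * I_T                   # frame duration (FD)
    return b_w, f_c, fd

# A 20 MHz band captured for 50 ms on a 400x400 spectrogram; the box
# spans half the image width, i.e. a 10 MHz signal:
bw, fc, fd = extract_features((100, 300, 50, 150),
                              (0, 400, 0, 400),
                              (2.400e9, 2.420e9),
                              (0.0, 0.050))
# bw = 10 MHz, fc = 2.410 GHz, fd = 12.5 ms
```

The same per-pixel increments $I_F$ and $I_T$ are all that is needed to translate any detected bounding box into physical units.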
To calculate the average \ac{FI}, we must first calculate the average time the channel stays without a transmission (CWT), which is the total time represented in a spectrogram minus the time occupied by frame transmissions, i.e., $n_{frames}$ detected frames of average duration $FD_{av}$. Then, the \ac{FI} is given by CWT divided by the number of transmissions $n_{frames}$ on the spectrogram. We summarise the formulas we use for extracting the features of different \acp{RAT} in Table~\ref{tab:formulas}, and illustrate the representation of the relevant values on a spectrogram in Figure \ref{fig:post}. \begin{table}[] \centering \caption{The mapping between the image position and the parameters of interest in time and frequency domains.} \resizebox{0.49\textwidth}{!} { \begin{tabular}{|l|l|} \hline Parameter & Mapping from image position \\ \hline $I_T$ & $(t_2-t_1)/(Y_{max}-Y_{min})$ \\ $I_F$ & $(f_2-f_1)/(X_{max}-X_{min})$ \\ $b_w$ & $(x_{max} - x_{min}) * I_F$ \\ $f_c$ & $f_1+(I_F*x_{min})+(b_{w}/2)$ \\ FD & $(y_{max} - y_{min})*I_T$ \\ CWT & $(t_2-t_1) - (n_{frames} * FD_{av})$ \\ FI & $CWT/n_{frames}$ \\ \hline \end{tabular} } \label{tab:formulas} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{images/post2.png} \caption{Representation of the parameters in a spectrogram.} \label{fig:post} \vspace{-1em} \end{figure} \subsection{Tree of tasks} \begin{comment} \begin{figure*}[t!]
\centering \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture6.png} \caption{Probability of detection.} \label{fig:specs} \end{subfigure}\hfill \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture7.png} \caption{\ac{RMSE} of the \ac{SNR} estimator.} \label{fig:graph2} \end{subfigure}\hfill \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture8.png} \caption{\ac{RMSE} of the \ac{CFO} estimator.} \label{fig:graph3} \end{subfigure} \caption{Evaluation of the preamble with a length of 1031 samples, used to synchronise the \acp{SDR} in the dataset \ac{RF} generator.}\label{fig:preamble} \end{figure*} \end{comment} Our dataset generator allows the generation of datasets with different: (i) waveforms, e.g., WiFi, \ac{LTE}, and \ac{PSK} signals; (ii) waveform-specific features, e.g., modulation order and frame length, and DSP transformations, e.g., \ac{FO}, soft gains, shape filtering, and multipath emulation; (iii) \ac{RF} parameters, e.g., centre frequency, hardware gains. Each permutation of parameters and waveform types is translated into IQ signals that are transmitted over the air between \acp{SDR}. Then, the received IQ signals and the associated parameters are stored in data files for later access. We developed a pipeline-based approach for generating traces of \ac{RF} waveforms with different characteristics. The process is implemented as a graph of individual tasks, e.g., producing a waveform, setting the frame duration, and setting the transmission gain. Each task can be configured and run independently. Each of the task's parameters can be a list of different values, and the task generates respective output files for all the input values. The subsequent task receives a set of different input files from the previous task and performs its operation on all of them. 
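The pipeline-of-tasks pattern can be sketched as follows; the task functions and parameter names are illustrative stand-ins for the generator's real stages (waveform synthesis, gain setting, frequency offset, and so on):

```python
from itertools import product

# Bare-bones sketch of a parameter-sweep pipeline: each task runs over
# every input file and every combination of its parameter values, and
# its outputs feed the next task. All names here are illustrative.
def run_task(task_fn, param_grid, inputs):
    """Apply task_fn to every (input, parameter-combination) pair."""
    outputs = []
    for item in inputs:
        for combo in product(*param_grid.values()):
            params = dict(zip(param_grid.keys(), combo))
            outputs.append(task_fn(item, **params))
    return outputs

# Two-stage example: a soft-gain stage fans out over two gain values,
# and a frequency-offset stage then runs on every intermediate result.
waveforms = ["lte_frame", "wifi_frame"]
stage1 = run_task(lambda w, gain: f"{w}_gain{gain}",
                  {"gain": [0, 10]}, waveforms)
stage2 = run_task(lambda w, fo: f"{w}_fo{fo}",
                  {"fo": [0.0, 0.1]}, stage1)
# stage1 has 4 entries and stage2 has 8: each task multiplies the file set.
```

Because every stage only consumes files produced by the previous one, individual tasks can be re-run, parallelised, or resumed independently, which is the property exploited by our generator.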
Such a pipeline-based approach facilitates the extension and inclusion of new tasks, the parallelisation of tasks, and resuming from intermediate points. \subsection{Synchronisation and Channel Estimation} A compelling aspect of our dataset generator is its automatic labelling, which is essential for the process of training an \ac{ML} model. The labelling is created in different formats, including the \ac{VOC} format that is used in object detection approaches. To provide automatic labelling, it is essential to keep the \ac{SDR} transmitter and receiver synchronised so that the labels of their transmitted and received samples remain consistent. We accomplish the synchronisation and channel estimation through the periodic transmission of preambles. The preamble used during dataset generation must be robust to noise, so that samples can be collected at the low \ac{SNR} levels that are generally required in \ac{RF} signal/waveform classification use cases. We chose a preamble structure composed of several short Zadoff-Chu sequences, phase-shifted by an m-sequence, for coarse frequency and time offset estimation and disambiguation, followed by a long Zadoff-Chu sequence for precise frequency offset estimation. For this study, we selected a preamble length of 1031 samples to guarantee robust synchronisation, with a probability of preamble detection close to 1 even for \ac{SNR} values lower than -5 dB. Whenever preamble synchronisation fails, the generator triggers a retransmission. \subsection{Detection and Classification Performance} In this section, we evaluate the detection performance and classification accuracy of our model, and demonstrate its robustness in detecting and classifying RF waveforms under different SNR conditions and interference levels.
We used the dataset generator described in the previous section to compose a dataset of images, i.e., spectrograms and labels, for two radio access technology classes, LTE and WiFi. This scenario resembles real-world use cases of coexistence in unlicensed spectrum~\cite{wifilte}. Moreover, our model can be extended, for instance, by increasing the diversity of the RATs included in the training dataset. Extending the training dataset might be useful in a scenario where a technology operating in the unlicensed spectrum shares it with Bluetooth or Zigbee, for example. \begin{comment} \begin{figure}[t] \includegraphics[width=\columnwidth]{demo.jpg} \caption{Experimental setup with three Ettus USRP B210s.} \label{fig:procedure} \vspace{-1em} \end{figure} \end{comment} \subsubsection{Performance of the Classifier Under Different SNRs} In this analysis, we evaluate the detection and classification performance of our solution under different SNR conditions. For this evaluation, we generated a dataset with different levels of transmission power, measuring the SNR at the receiver side. We used 400 images to train the model and adopted the configuration described in Section \ref{sec:classifier}, which empirically produced satisfactory results. As explained in Section \ref{sec:implementation}, our dataset generator has a minimum SNR threshold for over-the-air preamble synchronisation. The measurements start at an SNR of -13 dB and go up to 35 dB. Each spectrogram represents a 50 ms time interval and a 20 MHz band. \begin{figure} \includegraphics[width=\linewidth]{images/snr-75.png} \caption{Percentage of correctly detected objects and precision as a function of SNR.} \label{fig:snr} \end{figure} First, we are interested in assessing the ability of our model to detect the transmitted frames correctly. The top curve in Figure \ref{fig:snr} shows the percentage of correctly detected frames as a function of SNR.
Detection is around 98\% for all SNR values tested, except -13 dB: at that SNR, the edges of the transmitted frames are not as sharp, as illustrated in Figure~\ref{fig:snrs}, resulting in a lower probability of detection. Next, we are interested in assessing our model's ability to classify the detected frames. The precision metric is commonly used in classification problems \cite{metrics}; it represents the percentage of all detected frames that are correctly classified. The precision is shown in Figure~\ref{fig:snr} and varies from 86\% at an SNR of -13 dB to 98\% at SNRs between -3 and 32 dB. For the highest SNRs, 32 and 35 dB, we obtained an accuracy of 96\%. It is worth mentioning that when the SNR is very high, the leakage in the transmission also increases, as illustrated in Figure~\ref{fig:snrs}, which in our evaluation cost 2 percentage points of classification accuracy. Figure~\ref{fig:snr} shows that when the SNR is low, both the ability to detect a frame and the ability to classify it correctly are impaired. Although the higher leakage does not influence the ability to detect the frames, it slightly affects the classification performance. \begin{figure}[t] \centering \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-02.png} \caption{WiFi detection for SNR of -13 dB.} \label{fig:wifi-02} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-05.png} \caption{WiFi detection for SNR of 12 dB.} \label{fig:wifi-05} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-099.png} \caption{WiFi detection for SNR of 35 dB.} \label{fig:wifi-099} \end{subfigure} \caption{Illustration of WiFi signals under different SNRs.}\label{fig:snrs} \end{figure} \subsubsection{Interfering Transmissions Under Different SNRs} In this analysis, we evaluate the ability of our model to detect and classify frames under the effect of cross-technology interference.
We consider two signals with the same bandwidth: the desired signal is an LTE transmission, and the interfering signal is a WiFi transmission. The desired signal is transmitted with an SNR of 29 dB, and the \ac{SNR} of the interfering signal varies from 3 dB to 35 dB, both at the same centre frequency and with 20 MHz of bandwidth. The spectrograms have the same characteristics mentioned in the previous section. Figure~\ref{fig:over} shows the results of our experiment. The model detected the \ac{LTE} frames 97\% of the time, with this accuracy declining slightly as the \ac{SNR} of the interfering WiFi transmission increases. The curve representing the precision of the model shows that it improves at classifying the frames as the SNR of the WiFi signal increases. This happens because when the interfering WiFi frames have a lower \ac{SNR}, the model has difficulty clearly classifying the transmissions as either \ac{LTE} or WiFi. However, once the SNR of the WiFi is higher than the \ac{SNR} of the LTE transmissions, the model is more successful at classifying them, achieving 86\% accuracy. \begin{figure} \includegraphics[width=\linewidth]{images/overlapping-75.png} \caption{Correct object detection and precision per SNR of the interference signal.} \label{fig:over} \end{figure} These results show that even in a scenario of strong cross-technology interference, our model is capable of detecting the frames and classifying different RATs, providing a reasonable characterisation of the environment. To the best of our knowledge, this is the first work to assess the performance of an ML model for RAT classification under the effect of interference with overlapping transmissions.
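The precision figures reported in this and the previous subsection follow the definition given earlier: the fraction of detected frames whose predicted class matches the ground truth. The sketch below is a hypothetical tally under that definition, not the paper's evaluation code.

```python
def precision(detections):
    """Fraction of detected frames whose predicted class matches the truth.

    `detections` is a list of (predicted_class, true_class) pairs, one per
    detected frame; frames the detector missed never enter this list, which
    is why precision is computed over detections only.
    """
    if not detections:
        return 0.0
    correct = sum(1 for pred, true in detections if pred == true)
    return correct / len(detections)

# Hypothetical tally: 43 of 50 detected frames classified correctly -> 86%.
dets = [("wifi", "wifi")] * 40 + [("lte", "lte")] * 3 + [("lte", "wifi")] * 7
```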
\subsection{Feature Extraction}\label{feature} \begin{figure*} \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-band.png} \caption{Bandwidth deviation.} \label{fig:acband} \end{subfigure} \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-freq.png} \caption{Centre frequency deviation.} \label{fig:acfreq} \end{subfigure}% \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-frame.png} \caption{Frame duration deviation.} \label{fig:acframe} \end{subfigure}% \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-inter.png} \caption{Inter-frame duration deviation.} \label{fig:acinter} \end{subfigure}% \hfill \caption{Feature extraction deviation evaluation in the time and frequency domains.} \label{fig:feature} \end{figure*} To evaluate the capabilities of our feature extraction component, we generated several datasets using different combinations of transmission bandwidths, frame durations, inter-frame durations, and centre frequencies. The average SNR of the transmissions in this evaluation is 29 dB. Figure~\ref{fig:feature} illustrates the accuracy of the feature extraction for different transmission characteristics. In our experiments, the value of $I_f$ is 192.307 kHz, which means that each pixel in the spectrograms accounts for a variation of 192.307 kHz in the frequency domain. For example, if the calculated centre frequency is off by a single pixel, the computed value deviates by 192.307 kHz from the correct centre frequency. The same applies in the time domain, where each pixel accounts for a variation of $I_t=519\,\mu$s. Figures~\ref{fig:acband} and~\ref{fig:acfreq} illustrate the accuracy of the extraction of frequency-domain features. For all cases tested, the median deviation from the ground truth is at most 2\%.
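The mapping from bounding-box pixels to signal features follows directly from the pixel resolutions $I_f$ and $I_t$ given above. The function below is an illustrative reconstruction of that arithmetic, not the extraction code itself; the coordinate convention and anchor parameters are assumptions.

```python
I_F_HZ = 192_307   # frequency per pixel (192.307 kHz), from the spectrogram setup
I_T_S = 519e-6     # time per pixel (519 microseconds)

def box_to_features(x0, y0, x1, y1, f_min_hz=0.0, t0_s=0.0):
    """Convert a bounding box in pixel coordinates to signal features.

    x is the time axis and y the frequency axis; (x0, y0) is one corner and
    (x1, y1) the opposite corner. f_min_hz and t0_s anchor the spectrogram's
    origin in absolute frequency and time.
    """
    return {
        "bandwidth_hz": (y1 - y0) * I_F_HZ,
        "centre_freq_hz": f_min_hz + (y0 + y1) / 2 * I_F_HZ,
        "frame_duration_s": (x1 - x0) * I_T_S,
        "start_time_s": t0_s + x0 * I_T_S,
    }

# A one-pixel error in any coordinate shifts the estimate by exactly
# I_f (frequency axis) or I_t (time axis), as discussed above.
# 104 frequency pixels ~ 20 MHz; 96 time pixels ~ 50 ms.
feat = box_to_features(x0=10, y0=20, x1=106, y1=124)
```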
The results of the extraction of time-domain features are shown in Figures~\ref{fig:acframe} and~\ref{fig:acinter}. For these, the median deviation from the ground truth is at most 4\%. Figure~\ref{fig:acframe} illustrates that for shorter frame durations the solution tends to have a higher average error than for longer ones. This happens because it is harder to identify the precise size of smaller objects. As depicted in Figure \ref{fig:acinter}, the extraction of the inter-frame duration shows similar accuracy. Our model detects a high percentage of the transmitted frames. Whenever the model fails to detect a frame, it assumes that the spectrum is empty for that period, increasing the extracted inter-frame duration. Even in those cases, however, our model achieves a median deviation of less than 2\%. Considering the results discussed in this subsection, we conclude that our model is capable of extracting the signal features with high precision. Moreover, if necessary for specific applications, a higher precision can be achieved by using higher-resolution spectrograms, i.e., smaller $I_f$ and $I_t$ values. \subsection{Performance Comparison Using Public Datasets} In this section, we evaluate our model using a publicly available dataset of commercial LTE and WiFi transmissions collected in Belgium. This evaluation is crucial because it shows that our model can work in real-world scenarios. First, we investigate the accuracy of our model as a function of the number of spectrograms in the training dataset. Then, to demonstrate the ability of our object detection model to classify commercial transmissions accurately, we compare our solution to the ones proposed in \cite{imecpaper}, which used the same publicly available dataset.
\begin{figure} \includegraphics[width=\columnwidth]{images/n-spec.png} \caption{Number of spectrograms used in the training phase versus accuracy of the model.} \label{fig:specresult} \end{figure} We start by analysing how the number of training samples (spectrogram images) affects the performance of the proposed model. The volume of training data can limit the application of \ac{ML}, because \ac{ML} techniques usually require a considerable amount of data to learn. For example, the work in \cite{imecpaper} used more than 12,000 images to train its spectrogram-based CNN solution. In this section, we assess the performance of our model as a function of the volume of training data. We repeated the training in an identical setup while only adjusting the number of spectrograms used: 2, 10, 20, 30, 40, 50, 100, 200, and 400. The training samples equally represent the LTE and WiFi classes. Figure~\ref{fig:specresult} illustrates how accuracy depends on the number of spectrograms used in training the model. The best accuracy achieved was 96\% with 400 spectrograms. Hence, we limited the size of our training dataset to 400 images, as this volume of training data is sufficient for our model to achieve an accuracy comparable to that of the CNN image-based solution presented in \cite{imecpaper}, while using a considerably lower number of training images (only 3.23\% of the dataset size used in \cite{imecpaper}). \begin{figure} \includegraphics[width=\columnwidth]{images/comp_real.png} \caption{Classification accuracy of different ML solutions.} \label{fig:comparison} \end{figure} We then compared the object detection-based classification solution presented in this paper against other \ac{RAT} classification solutions in \cite{imecpaper}. These solutions include a \ac{FNN}, a \ac{RForest} \cite{random}, a \ac{CNN} solution based on \ac{RSSI}, a \ac{CNN} solution based on IQ samples, and a CNN solution based on spectrograms.
The results of this comparison are shown in Figure \ref{fig:comparison}. The CNN-based solutions, including the solution presented in this paper, correctly identify the RAT with accuracy above 95\%. The CNN solutions based on IQ samples and on images achieve marginally better accuracy than our proposed solution. However, our solution provides additional information regarding spectrum usage that can enhance the efficient use of the spectrum. \iffalse \subsection{Evaluation of the bounding boxes: publicly available and generated datasets} In this section, we evaluate how precisely our bounding boxes are generated. In the field of object detection, the accuracy of the generated bounding boxes is evaluated with a metric named \ac{mAP}. This metric was introduced during the PASCAL VOC 2012 \cite{pascal} competition and has been used to calculate the precision of an object detection model. The first step in the calculation of the \ac{mAP} is the calculation of the AP for every class of each model. The ground truth of the object position is necessary to perform this evaluation. To calculate the AP, it is necessary to plot the precision-recall curve of the model. The recall is the ratio of the number of frames that are correctly classified to the number of transmitted frames. The area under the curve is then used to calculate the AP value. The \ac{mAP} value is the mean of all APs: in our case we have 2 classes, so the \ac{mAP} is the mean of 2 APs. Using the \ac{mAP} metric, we compared the performance of our model trained with the publicly available dataset to that of our model trained with the generated dataset. Figure~\ref{fig:ap-real} illustrates the AP values calculated for the model trained with the dataset collected in Belgium. Figure~\ref{fig:ap-lab} shows the AP values for both classes calculated for the model trained with the generated dataset.
The model trained with data collected in Belgium shows inferior performance compared to the one trained with the dataset generated in the laboratory. The model used with commercial data cannot find more than 75\% of all the transmitted LTE frames and no more than 92\% of the WiFi frames. However, it maintains the precision of the detected objects above 90\% for LTE and nearly 99\% for WiFi. The model performs better on WiFi transmissions because the distance to the LTE BSs can vary by a few kilometres, which influences the SNR and the impairments of the collected data, whereas the WiFi data was collected in a more stable setting, where the distance does not vary as much. In the AP graph of the model trained with the dataset generated in the laboratory, Figure~\ref{fig:ap-lab}, our model detects more than 97\% of the LTE frames and 94\% of the WiFi frames with high precision, achieving approximately 99\% detection of the objects. The slight difference between the RATs exists because the WiFi transmissions have a shorter frame duration than the LTE transmissions, which makes it slightly harder to find all the frames. We believe that the model trained with the data generated by us performs better due to the automatic labelling, which is more precise than manual labelling approaches. There is also the fact that the spectrograms in the public dataset were collected from different \acp{BS} and under different circumstances, which may have influenced the results, whereas the generated data was produced in a controlled environment. The \ac{mAP} of our model trained with the public dataset is 83.04\%, and the \ac{mAP} of the same model trained with the generated data is 96.17\%. To the best of our knowledge, this is the first work that evaluates the mAP of an object detection model for \ac{RAT} classification. It is worth mentioning that when YOLOv2 is used on the VOC 2007 dataset \cite{yolo2}, it achieves 78.6 mAP for images with a resolution of 544x544.
\begin{figure} \centering \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_lte-real.png} \caption{\ac{LTE} AP from our model trained with the dataset collected in Belgium.} \label{fig:aplte-real} \end{subfigure} \hfill \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_wifi-real.png} \caption{WiFi AP from our model trained with the dataset collected in Belgium.} \label{fig:apwifi-real} \end{subfigure}% \caption{AP of the \ac{ML}-based signal classifier trained with the dataset collected in Belgium.} \hfill \label{fig:ap-real} \end{figure} \begin{figure} \centering \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_lte-lab.png} \caption{\ac{LTE} AP from our model trained with the generated dataset.} \label{fig:aplte-lab} \end{subfigure} \hfill \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_wifi-lab.png} \caption{WiFi AP from our model trained with the generated data.} \label{fig:apwifi-lab} \end{subfigure}% \caption{AP of the \ac{ML}-based signal classifier trained with the generated dataset.}\label{fig:ap-lab} \hfill \end{figure} \begin{comment} However, in object detection, a model can detect any number of false objects in an image, which means there is an infinite number of possible incorrect detection. To tackle this issue, in object detection, the metric accuracy is used to express how reliable are the predictions from a model, i.e., it illustrates the percentage of the predictions' correctness, as shown in Equation~\ref{eq:precision}. Furthermore, for estimating the number of misclassifications, we can simply calculate $1 - Accuracy$. \end{comment} \fi
\section{Introduction} Data compression and the associated techniques from Information Theory have a long and very influential history in the storage and mining of biological data \cite{giancarloCompression09}. In recent years, the field has received increasing attention via the proposal of novel specialized compressors, due to the facts that (a) storage costs have become quite significant given the massive amounts of data produced by HTS technologies (see \cite{Fritz11} for an enlightening analysis, which is still valid \cite{pavlichin2018}); and (b) generic compressors, even of the latest generation, e.g., LZ4 \cite{Lz4} and BZIP2 \cite{Bzip2}, are inadequate for the task of biological data compression. Good analytic reviews of the state of the art are provided in \cite{giancarlo2014compressive,Numanagic2016}, although no clearly winning compressor has emerged. It is worth mentioning that data compression is now also seen as a convenient means to speed up the processing of Bioinformatics pipelines. Indeed, following the idea of computing on compressed data, developed in Computer Science under the term Succinct Data Structures \cite{navarro2016}, the concept of Compressive Genomics has been proposed, with some highly specialized proofs of principle \cite{loh2012}. Due to the same reasons of massive data production, Big Data technologies for Genomics and the Life Sciences have been indicated as a direction to be actively pursued \cite{kahn2011future}, with MapReduce \cite{dean2008mapreduce}, Hadoop \cite{HadoopGuide} and Spark \cite{SparkGuide} being the preferred ones \cite{Cattaneo19}. This is not merely the following of a \vir{Big Data trend} that has proved successful in other fields of Science, since Bioinformatics solutions based on those techniques can be more effective than classic HPC ones, thanks to their scalability with the available hardware \cite{KCH} and to their ease of use.
For later reference, it is worth pointing out that those technologies have \vir{compression capabilities} via built-in generic data compressors, e.g., BZIP2 \cite{Bzip2}. The corresponding software is referred to with the technical term Codec, where compression is coding and decompression is decoding. Moreover, given quite some knowledge of those technologies, it is possible to add other compressors to Hadoop, i.e., additional Codecs. It is to be added that not all data compressors are amenable to a profitable incorporation, due to the requirement of {\em splittable} compression: a file is divided into (un)compressed data blocks that can be compressed and decompressed separately, while in any case preserving the integrity of the entire file. Indeed, processing files compressed in a non-splittable format is still possible under Hadoop, but at the cost of very long decompression times (data not shown but available upon request). Further discussion of those topics is in Section \ref{sec:split}. Making a compressor splittable, when its standard version is not, requires major code reorganization and rewriting. In what follows, the term {\em standard} denotes a compressor that executes on a sequential machine, i.e., a PC. Given the above discussion about data compression, it is rather surprising that the deployment of specialized compressors for biological data in Big Data technologies is episodic, in particular for the FASTA/Q file formats, e.g., \cite{shi2016}, that host a substantial part of genomic data. \subsection{Methodological Contributions} We provide two contributions for the deployment of standard specialised compressors for FASTA/Q files within MapReduce-Hadoop, together with the corresponding software. \begin{itemize} \item {\bf Splittable Compressor Meta-Codec.} When a standard compressor is splittable, we provide a method that facilitates its incorporation in Hadoop.
Use of the software library associated with the method offers substantial savings of programming time for a rather complicated task. Intuitively, the {\bf Splittable Compressor Meta-Codec} transforms a standard splittable compressor into a Hadoop splittable Codec for that compressor. \item {\bf Universal Compressor Meta-Codec.} Whether a data compressor is splittable or not, as long as some mild assumptions about its input/output handling hold, we provide a method to incorporate it in Hadoop, making it splittable. It is worth pointing out that the vast majority of standard specialized FASTA/Q compressors are not splittable. Again, intuitively, the {\bf Universal Compressor Meta-Codec} transforms a standard compressor into a Hadoop splittable Codec for that compressor. \end{itemize} A few comments are in order. The {\bf Splittable Compressor Meta-Codec} provides a template useful for accelerating and simplifying the development of specialized Hadoop Codecs. The {\bf Universal Compressor Meta-Codec} makes it possible to support in Hadoop any standard compressor with no programming at all, provided that it is usable as a command-line application. The first option is to be preferred when the best possible performance is sought, at the cost of analyzing the internal format employed by files processed with that compressor and writing the required integration code. The second option allows one to almost instantaneously support any command-line compressor, at the cost of a possible performance reduction, which we have measured to be negligible with respect to the direct use of the {\bf Splittable Compressor Meta-Codec}. Both methods also work for Spark, when it uses the Hadoop File System. Finally, given the pace at which new standard specialized compressors are implemented, our methods can readily support the deployment of those future implementations in Hadoop.
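To illustrate the underlying idea shared by both Meta-Codecs, a non-splittable compressor becomes splittable by compressing fixed-size blocks independently and recording their boundaries, so that any block can be decompressed without touching the rest of the file. The following is a conceptual sketch only, not the Meta-Codec implementation; zlib stands in for an arbitrary compressor, and the block size is an arbitrary choice.

```python
import zlib

BLOCK_SIZE = 64 * 1024  # uncompressed bytes per block (arbitrary choice)

def compress_splittable(data, compress=zlib.compress):
    """Compress `data` block by block; return (payload, index).

    `index` records each compressed block's byte offset and length, which is
    what lets a map task seek to one block and decompress it in isolation.
    """
    blocks, index, offset = [], [], 0
    for start in range(0, len(data), BLOCK_SIZE):
        cblock = compress(data[start:start + BLOCK_SIZE])
        index.append((offset, len(cblock)))
        blocks.append(cblock)
        offset += len(cblock)
    return b"".join(blocks), index

def decompress_block(payload, index, i, decompress=zlib.decompress):
    """Decompress block i alone, as an independent map task could."""
    off, length = index[i]
    return decompress(payload[off:off + length])

data = b"ACGT" * 50_000                      # 200 kB of toy sequence data
payload, index = compress_splittable(data)
# Block 2 is recovered without decompressing blocks 0 and 1.
chunk = decompress_block(payload, index, 2)
```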
For later use, we refer to the version of a standard compressor with the prefix HS when the incorporation in Hadoop has been made by using the {\bf Splittable Compressor Meta-Codec} or when a Hadoop splittable Codec is already available, e.g., LZ4 becomes HS\_LZ4. Analogously, we use the prefix HU when the {\bf Universal Compressor Meta-Codec} has been used. \subsection{Practical Contributions} We provide experimental evidence that our methods are a major advance in dealing with massive data production in genomics within one of the Big Data technologies of choice. Indeed, for the {\bf Universal Compressor Meta-Codec}, we show the following via an experimental comparative analysis involving a selection of specialized FASTA/Q compressors vs the generic compression Codecs already available in Hadoop. \begin{itemize} \item{\bf Disk space savings.} The size of the FASTA/Q files is significantly reduced with the use of specialized HU Codecs vs the generic HS Codecs available in Hadoop. Consequently, the cost of the hardware required to store them in the Hadoop File System is reduced. \item{\bf Reading time savings.} When using a specialized HU Codec, the additional time required to decompress a FASTA/Q file in memory is counterbalanced by the much smaller amount of time required to load that file from the Hadoop File System. This results in a significant reduction of the overall reading time. \item{\bf Network communication time overhead savings.} The number of concurrent tasks required to process, in a distributed way, a FASTA/Q file compressed via an HU Codec is greatly reduced, thus allowing for a significant reduction of the network communication time overhead required for the recombination of their outputs. \end{itemize} As for the {\bf Splittable Compressor Meta-Codec}, we reach the same conclusions as above, but the experimentation is somewhat limited: the only standard specialized compressor for FASTA/Q files featuring a splittable format is DSRC \cite{roguski2014dsrc}.
Finally, disk space and reading time savings also apply to the Apache Spark framework, when it is used to process FASTA/Q files stored on the Hadoop File System. \section{Methodologies} This section is organized as follows. Section \ref{sec:split} introduces some basic notions about Hadoop, useful for the presentation of our methods. Section \ref{subsec:guidelines} outlines some technical problems regarding the design of a splittable Codec for Hadoop, proposing our solutions. The last two sections are dedicated to the description of our two Meta-Codecs. \subsection{Preliminaries}\label{sec:split} MapReduce is a programming paradigm for the development of algorithms able to process Big Data on a distributed system in an efficient and scalable way. It is based on the definition of a sequence of {\em map} and {\em reduce} functions that are executed, as {\em tasks}, on the nodes of a distributed system. Data communication between consecutive tasks is automatically handled by the underlying distributed computing framework, including the {\em shuffle} operation, required to move data from one node of the distributed system to another. In Section 1 of the Supplementary Material\xspace we provide more information about this topic, including Hadoop, one of the most popular MapReduce implementations. Here we limit ourselves to describing how files are stored in the Hadoop File System, i.e., HDFS. When uploading a large file to HDFS (by default, larger than $128$MB), it is automatically partitioned into several parts of equal size, where each part is called an {\em HDFS data block} and is physically assigned to a Datanode, i.e., one of the nodes of the distributed system that execute map and reduce tasks. For fault-tolerance reasons, HDFS data blocks can be replicated on several Datanodes according to a user-defined {\em replication factor}. This makes it possible to process an HDFS data block even if the Datanode originally containing it becomes unavailable.
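As a concrete illustration of this partitioning rule, the following self-contained Java sketch (class and method names are ours, not Hadoop's) computes how many HDFS data blocks a file of a given length occupies, and whether a byte range crosses a block boundary:

```java
// Illustrative sketch, not Hadoop code: files are cut into fixed-size HDFS
// data blocks. The 128 MB default block size is the only Hadoop-specific
// assumption made here.
public class HdfsPartitioner {
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // HDFS default

    // Number of HDFS data blocks needed to store a file of the given length.
    static long blockCount(long fileLength) {
        return (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE; // ceiling division
    }

    // Index of the HDFS data block containing the given byte offset.
    static long blockIndex(long offset) {
        return offset / BLOCK_SIZE;
    }

    // A record spanning [start, end) is "disaligned" if its bytes fall into
    // two (or more) different HDFS data blocks.
    static boolean isDisaligned(long start, long end) {
        return blockIndex(start) != blockIndex(end - 1);
    }
}
```

For instance, a 16GB file occupies exactly $16\mathrm{GB}/128\mathrm{MB}=128$ HDFS data blocks, and any record straddling a multiple of 128MB is disaligned.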
By default, Hadoop assumes that each map task processes only the content of one particular HDFS data block. However, it may happen that, because of the aforementioned partitioning, a record to be analyzed by one map task is cut into two parts located in two different HDFS data blocks. We refer to these cases as {\em disalignments}. This circumstance is managed by HDFS through the introduction of the {\em input split} concept, or {\em split} for short. It can be used, at the application level, to logically redefine the range of data to be processed by each map task, thus allowing a map task to process data found in HDFS data blocks other than the one it has been assigned. \subsubsection{Hadoop Support for the Input of Compressed Files} Currently, Hadoop supports two types of Codecs: \begin{itemize} \item \emph{Stream-oriented.} Codecs in this class require that the whole file be available to each map task prior to decompressing it. For this reason, when a map task starts its execution, a request is issued to the other nodes of the cluster. As a result, all the parts of the file to be processed are collected from these nodes and merged into a single local file. This type of Codec can be developed by creating a new Java class implementing the standard Hadoop \texttt{CompressionCodec} interface. \item \emph{Block-oriented.} Codecs in this class allow each map task to decompress only a portion of the input file, without requiring the remaining parts of it. They assume the compressed file to be logically split into data blocks, here referred to as {\em compressed data blocks}, each of which can be decompressed independently of the others. Assuming it is possible to know the boundaries of each compressed data block, a map task can autonomously extract and decompress all the compressed data blocks existing in its HDFS data blocks.
This type of Codec can be developed by creating a new Java class implementing the standard Hadoop \texttt{SplittableCompressionCodec} interface. It is worth noting that the stream-oriented approach implies a significant computational overhead, as the same file is decompressed as many times as the number of map tasks processing it. It also implies a significant communication overhead, because the same file has to be replicated on each computational node running at least one map task. Finally, it may prevent a job from running at all, because map tasks may not have enough memory to handle the decompression of the input file (e.g., when handling large files). For these reasons, in this research, we focus on block-oriented Codecs, i.e., \emph{splittable} Codecs. \end{itemize} \label{sec:Methods} \subsection{General Guidelines for the Design of a Hadoop Splittable Codec} \label{subsec:guidelines} Here we consider some problems that a programmer must face in order to obtain a Hadoop splittable Codec, and we offer solutions. We concentrate on genomic files, although the guidelines apply to any lossless textual compressor. There are two problems to face when extracting genomic sequences from a splittable compressed file. The first is about inferring the logical internal organization of the compressed file, in order to determine the relative positions of the compressed data blocks. The second concerns the management of the possible disalignments existing between the physical partitioning of the file, as determined by HDFS, and the internal logical organization of the compressed file in compressed data blocks. In Section \ref{subsec:inferring} and in Section \ref{subsec:disalignments}, respectively, these problems are described in detail and the solutions we propose are presented. \begin{figure}[ht] \centering \includegraphics[scale=.25]{img/layout2.png} \caption{The layout of a block-oriented compressed data file when uploaded to HDFS.
In the figure, (a) the original file includes a header, a footer and $8$ compressed data blocks. (b) When uploaded to HDFS, it is partitioned into $4$ HDFS data blocks. (c) As a result of the partitioning, the compressed data block labeled as $CB_{5}$ is divided into two parts and assigned to two different HDFS data blocks. Using the {\em Compressed Block Split} strategy, each compressed data block is modeled as a distinct split. (d) Using the {\em Enhanced Split} strategy, several compressed data blocks are grouped into fewer input splits.} \label{fig:layout2} \end{figure} \subsubsection{Determining the Internal Structure of a Compressed File} \label{subsec:inferring} A map task can extract and decompress the compressed data blocks existing in the HDFS data block it is analyzing only if it knows their sizes and relative positions. However, this information could be stored elsewhere (e.g., in the footer of the compressed file) or it could be encoded implicitly. In the following, we provide a solution for efficiently dealing with the most frequent scenario, i.e., the one where the list of compressed data blocks is made explicitly available. We refer the interested reader to \cite{Bzip2} for an example of a solution for encoding this list implicitly. \paragraph{\em Explicit Representation.} An explicit list of all the compressed data blocks existing in a compressed file is maintained in an auxiliary {\em index} data structure. The latter may be located either at the beginning or at the end of the file (e.g., DSRC \cite{roguski2014dsrc}), or it can be saved in multiple copies along the file. In some other cases, this data structure can be saved in an external file complementing the compressed file. In all these cases, the solution proposed here is to have one process retrieve the index before processing the compressed file and send a copy of it to all nodes of the distributed system using the standard Hadoop {\tt Configuration} class.
Then, each computing node makes this information available to the map tasks that it runs, thus allowing them to determine the list and the relative positions of the compressed data blocks in their HDFS data blocks. \subsubsection{Managing Disalignments between Compressed Data Blocks and HDFS Data Blocks} \label{subsec:disalignments} When uploading a large compressed splittable file to HDFS, it is likely that several of its compressed data blocks will be broken into parts located on different HDFS data blocks, because of the partitioning strategy used by the distributed file system. An example of such a case is discussed in Figure \ref{fig:layout2}. The file is initially stored as a whole on a local file system (Figure \ref{fig:layout2}(a)). If uploaded without specifying any splitting strategy, it would be partitioned into separate parts independently of the compressed data blocks, as pictured in Figure \ref{fig:layout2}(b). This would imply a severe performance overhead when reading the content of compressed data blocks spanning different parts. Here, a first possible solution, denoted as the {\em Compressed Block Split} strategy, would be to model as input splits all the compressed data blocks existing in a compressed file (see Figure \ref{fig:layout2}(c)). However, this strategy may imply a performance overhead, because the typical size of compressed data blocks is usually orders of magnitude smaller than that of HDFS data blocks. Thus, the number of input splits would be much larger than the number of HDFS data blocks. A more efficient solution, denoted here as the {\em Enhanced Split} strategy, is to fit several compressed data blocks into the same Hadoop input split and, then, have each map task query a local index listing the offsets of all the compressed data blocks existing in a split (see Figure \ref{fig:layout2}(d)).
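The grouping step of the Enhanced Split strategy can be sketched in a few lines of self-contained Java (names and the greedy packing policy are our illustrative assumptions; the actual Meta-Codec implementation may differ): consecutive compressed data blocks are packed into input splits whose cumulative size stays within a target split size, so that the number of splits remains close to the number of HDFS data blocks rather than to the much larger number of compressed data blocks.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Enhanced Split strategy: greedily pack consecutive
// compressed data blocks into input splits bounded by a target size
// (e.g., the HDFS data block size).
public class EnhancedSplit {
    // Each inner list holds the sizes of the compressed data blocks
    // assigned to one input split.
    static List<List<Long>> group(long[] blockSizes, long targetSplitSize) {
        List<List<Long>> splits = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentSize = 0;
        for (long size : blockSizes) {
            // Start a new split when adding this block would exceed the target.
            if (!current.isEmpty() && currentSize + size > targetSplitSize) {
                splits.add(current);
                current = new ArrayList<>();
                currentSize = 0;
            }
            current.add(size);
            currentSize += size;
        }
        if (!current.isEmpty()) splits.add(current);
        return splits;
    }
}
```

With five compressed data blocks of 60 units each and a target split size of 128 units, this policy produces three splits instead of five, mirroring the reduction from Figure \ref{fig:layout2}(c) to Figure \ref{fig:layout2}(d).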
At this point, when processing compressed data blocks in a split, two cases may occur: \begin{itemize} \item{\bf standard case:} the compressed data block is entirely contained in a single HDFS data block. In such a circumstance, it is retrieved using the information contained in the index and, then, decompressed using the considered Codec. \item{\bf exceptional case:} the compressed data block is physically divided by HDFS into two parts, $p_{1}$ and $p_{2}$. These parts are located on two HDFS data blocks but are assigned to the same input split. In such a case, a copy of $p_{2}$ is automatically pulled from the Datanode holding it. Then, $p_{1}$ and $p_{2}$ are properly concatenated to obtain the original compressed data block, which is decompressed using the Codec decompression function. \end{itemize} \subsection{The architecture of the Splittable Compressor Meta-Codec} \label{subsec:codec} This Meta-Codec consists of a library of abstract Java classes and interfaces implementing a standard Hadoop splittable Codec for the compression of FASTA/Q files, but without any compression/decompression routine. \label{subsubsec:customcodec} Its architecture is based on a specialization of the generic compressor and decompressor interfaces coming with Hadoop and targeting block-based Codecs. It offers the possibility of automatically assembling a compressed file as a set of compressed data blocks while maintaining their index using an explicit representation, as described in Section \ref{subsec:inferring}. In addition, the compressed data blocks are organized according to the Enhanced Split strategy (see Section \ref{subsec:disalignments}). The creation of the compressed data blocks index is also automatically managed by our Meta-Codec, which additionally provides the ability to share the content of the index with all nodes of a Hadoop distributed system, so as to allow each node to know the exact boundaries of the compressed data blocks it has to process.
Additional details regarding the architecture of this Meta-Codec are given in Figure 1 of the Supplementary Material\xspace. Here we limit ourselves to mentioning that it includes the following Java classes. \begin{itemize} \item{\texttt{CodecInputFormat.}} It fetches the list of compressed data blocks existing in a compressed file and sends it to all the nodes of a Hadoop cluster, together with the instructions required for their decompression. Then, it defines the input splits as containers of compressed data blocks. These operations are compressor-dependent and require the implementation of several abstract methods, like \texttt{extractMetadata}, to extract the metadata from the input file, and \texttt{getDataPosition}, to point to the starting address of the first compressed data block. \item{\texttt{NativeSplittableCodec.}} Assuming the compression/decompression routines for a particular Codec are available as a standard library installed on the underlying operating system, it simplifies their integration in the Codec under development. \item{\texttt{CodecInputStream.}} It reads the compressed data blocks existing in an HDFS data block, according to the input split strategy defined by the \texttt{CodecInputFormat}. The compressed data blocks are decompressed on-the-fly, by invoking the decompression function of the considered compressor, and returned to the main application. Some of these operations are compressor-dependent and require the implementation of the \texttt{setParameters} abstract method. This method is used to pass to the Codec the command-line parameters required by the compressor, e.g., execution flags, in order to correctly decompress the compressed data blocks. \item{\texttt{CodecDecompressor.}} It decompresses the compressed data blocks given by the \texttt{CodecInputStream}. It requires the implementation of the \texttt{decompress} abstract method.
\item{\texttt{NativeCodecDecompressor.}} It decompresses the compressed data blocks given by the \texttt{CodecInputStream}. It requires the implementation of the \texttt{decompress} method through the native interface. \end{itemize} \subsection{The architecture of the Universal Compressor Meta-Codec} \label{subsec:UC} This Meta-Codec is a software component able to automatically expose, as an HU splittable Codec, the compression/decompression routines offered by a given standard compressor. As opposed to the {\bf Splittable Compressor Meta-Codec}, which requires some programming, it works as a ready-to-use black box, since the only information it needs is the set of command lines to be used for compressing and decompressing an input file by means of a standard compressor. Given an input file to compress in a splittable way, this method works by splitting the file into uncompressed data blocks and, then, compressing each uncompressed data block using an external compression application, according to the command line given at configuration time. As for the {\bf Splittable Compressor Meta-Codec}, compressed data blocks are organized following the Enhanced Split strategy (see Section \ref{subsec:disalignments}). The resulting file will use an index for the explicit representation of the compressed data blocks existing therein (see Section \ref{subsec:inferring}), based on the following format. \begin{itemize} \item {\bf compression\_format}: a unique id number identifying the Codec format used for this file. \item {\bf compressed\_data\_blocks\_number}: the number of compressed data blocks existing in the file. \item {\bf blocks\_sizes\_list}: the list of the sizes of all the compressed data blocks included in the file. \item {\bf uncompressed\_block\_size}: the size of the data structure used for decompressing the compressed data blocks. \end{itemize} The decompression is achieved by exploiting the information contained in the aforementioned index.
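The four fields above can be serialized with a straightforward binary layout. The following self-contained Java sketch assumes one such encoding (field order, types, and class name are our illustrative choices; the actual on-disk encoding used by the Meta-Codec may differ):

```java
import java.io.*;

// Sketch of the index described in the text: a format id, the number of
// compressed data blocks, the list of their sizes, and the size of the
// buffer needed to decompress a block.
public class BlockIndex {
    int compressionFormat;      // compression_format
    long[] blockSizes;          // blocks_sizes_list
    int uncompressedBlockSize;  // uncompressed_block_size

    byte[] toBytes() {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(compressionFormat);
            out.writeInt(blockSizes.length);            // compressed_data_blocks_number
            for (long s : blockSizes) out.writeLong(s);
            out.writeInt(uncompressedBlockSize);
            return buf.toByteArray();
        } catch (IOException e) {                       // cannot happen on in-memory streams
            throw new UncheckedIOException(e);
        }
    }

    static BlockIndex fromBytes(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            BlockIndex idx = new BlockIndex();
            idx.compressionFormat = in.readInt();
            idx.blockSizes = new long[in.readInt()];
            for (int i = 0; i < idx.blockSizes.length; i++) idx.blockSizes[i] = in.readLong();
            idx.uncompressedBlockSize = in.readInt();
            return idx;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A serialized index of this kind is small enough to be broadcast to all nodes via the Hadoop {\tt Configuration} mechanism described in Section \ref{subsec:inferring}.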
The usage of this Meta-Codec assumes the possibility of parking, as files on a local device, the content of the (un)compressed data blocks to process. For efficiency reasons, these are saved on the local RAM disk, a virtual device usable as a disk but with the same performance as memory. The Java classes for this Meta-Codec, shown in Figure 2 of the Supplementary Material\xspace, are the following. \begin{itemize} \item \texttt{Algo}. Contains the command-line instructions of a particular compressor, defined through the configuration file. \item \texttt{UniversalCodec}. Contains fields and methods for managing data compression and decompression. \item \texttt{UniversalInputFormat}. Extends the \texttt{CodecInputFormat} class, implementing its methods according to the compressed file structure. \item \texttt{UniversalDecompressor}. Extends the \texttt{CodecDecompressor} class, implementing the method \texttt{decompress} according to the command-line commands of the \texttt{Algo} object. \end{itemize} \section{Results and Discussion} \label{sec:experiments} In order to quantify the advantages of deploying FASTA/Q Codecs in Hadoop via our methods, we perform the following experiments. \begin{itemize} \item{\bf Experiment 1: An assessment of disk space savings}. The aim here is to determine the possible disk space savings achievable thanks to the adoption of a specialized HU or HS Codec, when storing FASTA/Q files on the HDFS distributed file system, with respect to the usage of the general-purpose HS Codecs available in Hadoop. \item {\bf Experiment 2: An assessment of the possible performance loss due to the usage of an HU Codec against an HS Codec}. The aim here is to evaluate the potential performance loss experienced when processing a compressed file using a compressor obtained by means of our {\bf Universal Compressor Meta-Codec} rather than using a compressor obtained via the {\bf Splittable Compressor Meta-Codec}.
This experiment is implemented by comparing HU\_DSRC, obtained via the {\bf Universal Compressor Meta-Codec}, vs HS\_DSRC, obtained via the {\bf Splittable Compressor Meta-Codec}. \item{\bf Experiment 3: An assessment of reading time savings}. The aim here is to determine whether the trade-off between the cost to be paid for reading and unpacking FASTA/Q files, once compressed with an HU Codec, and the time saved thanks to the smaller amount of data to read from HDFS is positive. Following the methodology used in \cite{fastdoop}, this experiment is implemented by benchmarking a very simple Hadoop application. It runs only map tasks, whose goal is to count the number of occurrences of the letters $\{A,C,G,T,N\}$ in the input sequences, without producing any output. That is, the application spends most of its time reading data from HDFS. \item {\bf Experiment 4: An assessment of network communication time overhead savings}. The aim here is to establish whether the smaller amount of network traffic, due to the reduced number of map tasks needed to process a FASTA/Q file compressed with an HU Codec, has a beneficial effect on the overall shuffle time of an application, compared to the case where the input file is uncompressed. This experiment is implemented by benchmarking an application where each map task counts the number of occurrences of the letters $\{A,C,G,T,N\}$ in each of the sequences read from an input file. Once finished, the map task emits, as output, the overall count for each of the considered sequences. The reduce tasks gather and aggregate the output of all map tasks, and print on output the overall number of occurrences of each distinct letter. That is, the execution of this experiment requires a communication activity between map and reduce tasks that is proportional to the number of map tasks being used.
\end{itemize} \subsection{Experimental Setting} \subsubsection{Choice of Compression Codecs: Standard Specialized or Available in Hadoop} \label{subsec:compressors} For our experiments, all the standard splittable general-purpose compression Codecs available with Hadoop have been considered: BZIP2 \cite{Bzip2}, LZ4 \cite{Lz4} and ZSTD \cite{Zstd}. As for the specialized FASTA/Q file compressors, we have developed a set of compression Codecs based on SPRING \cite{spring}, DSRC \cite{roguski2014dsrc}, Fqzcomp \cite{bonfield2013compression} and MFCompress \cite{pinho2013mfcompress}. These have been chosen, via independent experiments, as they cover the range of possibilities in terms of the trade-off between compression and time. A list of all these Codecs is reported in Table \ref{tab:encoders}, together with their features relevant for this research. We recall from the Introduction, for the convenience of the reader, the terminology we use for denoting these compressors: we use the prefix HS when referring to compressors that are already present in Hadoop or that have been incorporated in it using our {\bf Splittable Compressor Meta-Codec}, and the prefix HU when the incorporation has been made with our {\bf Universal Compressor Meta-Codec}. It is to be remarked that, while the general-purpose compressors have been designed to compress well and to be fast in compression/decompression, the specialized ones are not so uniform with respect to these design criteria. For instance, HU\_SPRING compresses very well, but it is very slow in compression/decompression, while HU\_DSRC offers a good balance of those aspects. To put every compressor on an equal footing, we use their default settings.
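The letter-counting kernel at the heart of the benchmarking tasks of Experiments 3 and 4 can be sketched, independently of Hadoop, as follows (class and method names are ours; in the real benchmarks this logic runs inside map tasks over decompressed FASTA/Q records):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the map-side benchmark kernel: count the occurrences of the
// letters {A, C, G, T, N} in a sequence, ignoring any other character.
public class LetterCount {
    static Map<Character, Long> count(String sequence) {
        Map<Character, Long> counts = new HashMap<>();
        for (char c : "ACGTN".toCharArray()) counts.put(c, 0L);
        for (char c : sequence.toCharArray())
            if (counts.containsKey(c)) counts.merge(c, 1L, Long::sum);
        return counts;
    }
}
```

In Experiment 3 the per-task counts are simply discarded, so the run time is dominated by reading (and decompressing) the input; in Experiment 4 they are emitted to the reduce tasks, making the shuffle volume proportional to the number of map tasks.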
\begin{table}[t] \centering \begin{tabular}{lcc} \textbf{Compressor} & \textbf{Input Format} & \textbf{Implementation} \\ & \textbf{Type} & \\ \hline BZIP2 \cite{Bzip2} & Any file & HS\\ LZ4 \cite{Lz4} & Any file & HS\\ ZSTD \cite{Zstd} & Any file & HS\\ DSRC \cite{roguski2014dsrc} & FASTQ files & HS/HU \\ Fqzcomp \cite{bonfield2013compression} & FASTQ files & HU \\ MFCompress \cite{pinho2013mfcompress} & FASTA files & HU\\ SPRING \cite{spring} & FASTA/Q files & HU\\ \end{tabular} \caption{List of splittable Codecs considered in our experiments. For each splittable Codec it is reported: 1) the originating compressor; 2) the input format it supports; 3) whether it has been developed using our {\bf Splittable Compressor Meta-Codec} (HS), using our {\bf Universal Compressor Meta-Codec} (HU), or is directly supported by Hadoop (HS). } \label{tab:encoders} \end{table} \subsubsection{Datasets} We have used for our experiments a collection of FASTQ and FASTA files of different sizes. The FASTQ files contain a set of reads extracted from a collection of genomic sequences coming from the Pinus Taeda genome \cite{PinusTaeda2013}, while the FASTA files contain a set of reads extracted from a collection of genomic sequences coming from the Human genome \cite{Human2008}. We have chosen these datasets because they are large enough to represent a relevant benchmark for the type of experiments we are interested in (see Section 4 of the Supplementary Material\xspace). Moreover, the choice of using collections of reads is meant to consider files that are the end product of HTS technologies.
\begin{comment} \begin{table}[t] \centering \begin{tabular}{r|r|r|r|r|r} \textbf{Dataset} & \textbf{HS\_BZIP2} & \textbf{HS\_LZ4} & \textbf{HS\_ZSTD} & \textbf{HU\_MFCompress} & \textbf{HU\_SPRING}\\ \hline 16GB & 3.01GB & 7.21GB & 3.85GB & 2.22GB & 2.01GB\\ 32GB & 6.02GB & 14.42GB & 7.69GB & 4.43GB & 3.82GB\\ 64GB & 11.98GB & 28.71GB & 15.32GB & 9.26GB & 6.68GB\\ 96GB & 18.06GB & 43.25GB & 23.08GB & ?GB & ?GB\\ \end{tabular} \caption{Size of the FASTA input datasets when compressed with HS\_BZIP2, HS\_LZ4, HS\_ZSTD, HU\_MFCompress and HU\_SPRING compressors.} \label{tab:FAdatasetsFast} \end{table} \begin{table}[t] \centering \begin{tabular}{r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & \textbf{MFCompress} & \textbf{SPRING}\\ \hline 16GB & 2.93GB & ?GB & ?GB & ?GB & 2.01GB\\ 32GB & 5.86GB & 9.18GB & 6.93GB & 4.66GB & 3.82GB\\ 64GB & 11.65GB & 18.27GB & 13.81GB & 9.26GB & 6.68GB\\ 96GB & 17.58GB & ?GB & ?GB & ?GB & ?GB\\ \end{tabular} \caption{Size of the FASTA input datasets when compressed with HS\_BZIP2, HS\_LZ4, HS\_ZSTD, HU\_MFCompress and HU\_SPRING compressors.} \label{tab:FAdatasetsSlow} \end{table} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{r|r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & \textbf{DSRC} & \textbf{Fqzcomp} & \textbf{SPRING}\\ \hline 16GB & 3.11GB & 7.42GB & 4GB & 2.45GB & 2.29GB & 1.89GB\\ 32GB & 6.2GB & 14.42GB & 7.69GB & 4.9GB & 4.57GB & 3.69GB\\ 64GB & 12.6GB & 28.71GB & 15.32GB & 9.98GB & 9.31GB & ?GB\\ 96GB & 19.09GB & 45.41GB & 24.45GB & 15.16GB & 14.13GB & ?GB\\ \end{tabular} } \caption{Size of the FASTQ input datasets when compressed with BZ2, LZ4, ZSTD, DSRC, Fqzcomp and SPRING encoders using, when possible, compression speed set to maximum} \label{tab:FQdatasetsFast} \end{table} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{r|r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & 
\textbf{DSRC} & \textbf{Fqzcomp} & \textbf{SPRING}\\ \hline 16GB & 3.04GB & 4.67GB & 3.61GB & 2.26GB & 2.27GB & 1.89GB\\ 32GB & 6.06GB & 9.32GB & 7.2GB & 4.51GB & 4.53GB & 3.69GB\\ 64GB & 12.31GB & 18.9GB & 14.6GB & 9.19GB & 9.22GB & ?GB\\ 96GB & 18.65GB & 28.57GB & 22.03GB & 13.96GB & 14.00GB & ?GB\\ \end{tabular} } \caption{Size of the FASTQ input datasets when compressed with BZ2, LZ4, ZSTD, DSRC, Fqzcomp and SPRING encoders using, when possible, compression ratio set to maximum} \label{tab:FQdatasetsSlow} \end{table} \end{comment} \subsubsection{Hardware} The testing platform used for our experiments is a $9$-node Linux-based Hadoop cluster, with one node acting as \textit{resource manager} and the remaining nodes being used as workers. Each node of this cluster is equipped with two 8-core Intel Xeon E3-12@2.70 GHz processors and 32GB of RAM. Moreover, each node has a 200 GB virtual disk reserved for HDFS, for an overall capacity of about 1.6 TB. All the experiments have been performed using the Hadoop 3.1.1 software distribution. \subsection{Analysis of the experiments} \subsubsection{Experiment 1: Specialized compression yields significant disk space savings on Hadoop.} The results of this experiment, reported in Tables \ref{tab:bs_fasta}-\ref{tab:bs_fastq}, confirm the ability of the specialized HU and HS Codecs, i.e., the ones that have been imported in Hadoop using our methods, to reach a compression ratio much higher than that of the generic HS Codecs already available in Hadoop. This is witnessed by the much smaller number of HDFS data blocks needed to store a distributed compressed representation of each file, with respect to uncompressed files.
\begin{table} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} Dataset & NoCompress & HS\_BZIP2 & HS\_LZ4 & HS\_ZSTD & HU\_SPRING & HU\_MFCompress \\ \hline 16G & 128 & 24 & 58 & 31 & 18 & 19 \\ 32G & 256 & 47 & 116 & 62 & 35 & 38 \\ 64G & 512 & 94 & 231 & 124 & 69 & 76 \\ 96G & 768 & 141 & 346 & 185 & 104 & 113 \\ \end{tabular} } \caption{Size of the FASTA input datasets, in terms of HDFS data blocks, when compressed with general-purpose and FASTA specialized compression Codecs. The size of each HDFS data block is 128 MB.} \label{tab:bs_fasta} \end{table} \begin{table} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} Dataset & NoCompress & HU\_DSRC & HS\_DSRC & HU\_Fqzcomp & HS\_BZIP2 & HS\_LZ4 & HS\_ZSTD & HU\_SPRING \\ \hline 16G & 128 & 22 & 20 & 20 & 25 & 60 & 33 & 20 \\ 32G & 256 & 44 & 40 & 40 & 49 & 119 & 64 & 40 \\ 64G & 512 & 90 & 80 & 82 & 99 & 241 & 130 & 81 \\ 96G & 768 & 122 & 122 & 110 & 150 & 364 & 196 & 103 \\ \end{tabular} } \caption{Size of the FASTQ input datasets, in terms of HDFS data blocks, when compressed with general-purpose and FASTQ specialized compression Codecs. The size of each HDFS data block is 128 MB.} \label{tab:bs_fastq} \end{table} \subsubsection{Experiment 2: The performance overhead of our {\bf Universal Compressor Meta-Codec} with respect to our {\bf Splittable Compressor Meta-Codec} is negligible.} The decompression time performance guaranteed by our {\bf Universal Compressor Meta-Codec } when executing a particular compressor is very similar to that of a specialized implementation of the same compressor by means of our {\bf Splittable Compressor Meta-Codec}. This is clearly visible in Figures \ref{fig:task1FQGARR} and \ref{fig:task2FQGARR}, where we report the performance of HS\_DSRC and HU\_DSRC. 
Indeed, the two Codecs exhibit very similar performance, but the one based on our {\bf Universal Compressor Meta-Codec} took a few minutes to be developed, while the specialized one required non-trivial programming skills as well as several days of work. \subsubsection{Experiment 3: a careful use of compression yields significant reading-time savings on Hadoop.} Space savings may turn into an I/O time slow-down when the decompression procedure is slow. Such a trade-off is well known for generic standard compressors. Here we study it with regard to HU specialized Codecs. Indeed, such a trade-off is clearly visible when comparing, e.g., the performance of HU\_DSRC with that of HU\_SPRING. As reported in Tables \ref{tab:bs_fasta} and \ref{tab:bs_fastq}, FASTQ files compressed with HU\_SPRING require a smaller number of HDFS data blocks to process than those compressed with HU\_DSRC. Despite this, the performance of HU\_DSRC when used in the first benchmarking task is much better than that of HU\_SPRING, because of its much faster decompression routines. In detail, the best performance is achieved by HS\_DSRC, HU\_DSRC and HS\_ZSTD, but for different reasons: the first two because of their more efficient compression algorithm, the third because of its faster decompression routines. We also observe that the speed-up achieved by HS\_DSRC increases with the input file size. To explain this, consider that, when managing the 16G input file, HS\_DSRC returns a number of HDFS data blocks to process that is smaller than the number of available processing cores. So, not all the available processing capability of the cluster is exploited. When the size of the input increases to 32G, the number of HDFS data blocks gets larger and allows all the available processing cores to be used, thus resulting in an improved overall efficiency.
This speed-up keeps increasing because, as the input size grows, the number of HDFS data blocks to process per core increases as well, giving Hadoop the possibility to reschedule tasks over the cores having a smaller workload. At the other end, HS\_BZIP2 and HU\_SPRING are the ones exhibiting the worst performance, because of their very slow decompression routines. \subsubsection{Experiment 4: a careful use of compression may yield significant network-overhead savings on Hadoop.} The smaller number of HDFS data blocks required to store a compressed file yields a beneficial effect also on the network overhead required by Hadoop to recombine the output of the map tasks and, consequently, on the overall execution time. This is visible in Figures \ref{fig:task2FQGARR} and \ref{fig:task2FAGARR}, where we observe that the usage of compression allows for a significant speed-up, even when running more complex applications than the one considered in Experiment 3. Interestingly, here the benchmarking task run using HU\_DSRC and HS\_DSRC is faster than the one run using HS\_ZSTD (see Figure \ref{fig:task2FQGARR}). The reason is that the smaller number of compressed data blocks produced by the DSRC algorithm implies a smaller number of Hadoop map tasks to be run concurrently for analyzing the input dataset, thus reducing, in turn, the network overhead required for feeding the reduce tasks with the output of the map tasks. The smaller network overhead achievable by using either HS\_DSRC or HU\_DSRC rather than HS\_ZSTD is witnessed by the reduced shuffle time, as observable in Figure \ref{fig:shuffleFQGARR}.
\begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMap_fastq.png} \caption{Execution time speedup measured while running the first benchmarking task when considering FASTQ compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task1FQGARR} \end{figure}
\begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMapReduce_fastq.png} \caption{Execution time speedup measured while running the second benchmarking task on FASTQ format data when considering compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task2FQGARR} \end{figure}
\begin{figure} \centering \includegraphics[scale=.4]{img/garr_shuffle_fastq.png} \caption{Time spent running the shuffle phase during the second benchmarking task when considering compressed FASTQ-format datasets of
increasing size and different compressors, compared to the execution of the same task on uncompressed datasets.} \label{fig:shuffleFQGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMap_fasta.png} \caption{Execution time speedup measured while running the first benchmarking task when considering FASTA compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task1FAGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMapReduce_fasta.png} \caption{Execution time speedup measured while running the second benchmarking task on FASTA format data when considering compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task2FAGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_shuffle_fasta.png} \caption{Time spent running the shuffle phase during the second benchmarking task when considering compressed FASTA-format datasets of increasing size and different compressors, compared to the execution of the same task on uncompressed datasets.} \label{fig:shuffleFAGARR} \end{figure} \section{Conclusions} We have provided two general methods that can be used to transform standard FASTA/Q data compression programs into Hadoop splittable data compression Codecs. Since the methods are general, they can also be applied to specialized compression programs that will be developed in the future. Another main characteristic of our methods is that they require very little programming, or none at all, and little knowledge of Hadoop to carry out a rather complex task. Our methods also apply to the Apache Spark framework, when used to process FASTA/Q files stored on the Hadoop File System.
We have also shown that the use of specialized FASTA/Q Hadoop Codecs, not available before this work, is advantageous in terms of space and time savings. That is, we provide effective and readily usable tools that have a non-negligible effect on saving costs in genomic data storage and processing within Big Data technologies. \section*{Acknowledgements} All authors would like to thank GARR for making available the computing time on a cutting-edge OpenStack Virtual Datacenter used for this research. Discussions with Simona Ester Rombo in the early stages of this research have been helpful. \section*{Funding} G.C., R.G. and U.F.P. are partially supported by GNCS Project 2019 \vir{Innovative methods for the solution of medical and biological big data}. R.G. is additionally supported by MIUR-PRIN project \vir{Multicriteria Data Structures and Algorithms: from compressed to learned indexes, and beyond} grant n. 2017WR7SHH. U.F.P. and F.P. are partially supported by Universit\`{a} di Roma - La Sapienza Research Project 2018 \vir{Analisi, sviluppo e sperimentazione di algoritmi praticamente efficienti}. \bibliographystyle{abbrv} \section{The MapReduce\xspace Programming Paradigm and Hadoop} \label{sec:mr-hadoop} \subsection{The Paradigm} \label{sec:mr} MapReduce\xspace \cite{dean2008mapreduce} is a paradigm for the processing of large amounts of data on a distributed computing infrastructure. Assuming the input data is organized as a set of \KV{key}{value} pairs, it is based on the definition of two functions. The {\em map} function processes an input \KV{key}{value} pair and returns a (possibly empty) intermediate set of \KV{key}{value} pairs. The {\em reduce} function merges all the intermediate values sharing the same \emph{key} to form a (possibly smaller) set of values. These functions are run, as tasks, on the nodes of a distributed computing framework.
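As an illustration of the two functions, consider the classic word-count computation. The Python sketch below simulates the map, shuffle, and reduce phases of the paradigm locally on a single machine (the function names are ours, for illustration only; in Hadoop the grouping step is performed by the framework):

```python
from collections import defaultdict

def map_fn(key, value):
    """Map: emit one <word, 1> pair per word in the input line."""
    return [(word, 1) for word in value.split()]

def reduce_fn(key, values):
    """Reduce: merge all the intermediate values sharing the same key."""
    return (key, sum(values))

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase
    intermediate = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            intermediate[k].append(v)   # shuffle: group values by key
    # Reduce phase
    return dict(reduce_fn(k, vs) for k, vs in intermediate.items())

lines = [(0, "ACGT ACGT TTTA"), (1, "TTTA ACGT")]
print(run_mapreduce(lines, map_fn, reduce_fn))
# {'ACGT': 3, 'TTTA': 2}
```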
All the activities related to the management of the lifecycle of these tasks, as well as the collection of the map function results and their transmission to the reduce functions, are transparently handled by the underlying framework (\emph{implicit parallelism}), with no burden on the programmer. \subsection{Apache Hadoop} \label{sec:hadoop} Apache Hadoop is the most popular framework supporting the MapReduce\xspace paradigm. It allows for the execution of distributed computations thanks to the interplay of two architectural components: YARN (\emph{Yet Another Resource Negotiator}) \cite{vavilapalli2013apache} and HDFS (\emph{Hadoop Distributed File System}) \cite{HDFS}. YARN manages the lifecycle of a distributed application by keeping track of the resources available on a computing cluster and allocating them for the execution of application tasks modeled after one of the supported computing paradigms. HDFS is a distributed and block-structured file system designed to run on commodity hardware and able to provide fault tolerance through replication of data. A basic Hadoop cluster is composed of a single \emph{master node} and multiple \emph{worker nodes}. The master node arbitrates the assignment of computational resources to the applications to be run on the cluster and maintains an index of all the directories and files stored in HDFS. Moreover, it tracks the worker nodes physically storing the HDFS data blocks making up these files. The worker nodes host a set of \emph{workers} (also called \emph{Containers}), in charge of running the map and reduce tasks of a MapReduce\xspace application, as well as using the local storage to maintain a subset of the HDFS data blocks. One of the main characteristics of Hadoop is its ability to exploit \emph{data-local} computing. By this term, we mean the possibility to move applications closer to the data (rather than vice versa).
This greatly reduces network congestion and increases the overall throughput of the system when processing large amounts of data. Moreover, in order to reliably maintain files and to properly balance the load between different nodes of a cluster, large files are automatically split into smaller HDFS data blocks, replicated and spread across different nodes.

\section{Specialized Compressors Supported by means of our {\bf Splittable Compressor Meta-Codec}} \label{sec:DSRC} Among the many compression algorithms specialized for genomic data \cite{Numanagic2016}, DSRC is the only one, among the tools achieving the best benchmarked performance on FASTA/Q files, to feature a splittable Codec. It represents a robust testbed for our solution because its original implementation is developed in C++ and its integration within a Java Codec is not trivial. A DSRC standard compressed file is organized in three parts. \begin{itemize} \item {\bf Body.} It contains a set of compressed data blocks. Each of these is compressed and can be decompressed independently of the others. The default size of each compressed data block is 10MB. \item {\bf Header.} It reports the number of compressed data blocks existing in that file, the size of the footer and its relative position inside the file. \item {\bf Footer.} It reports the size of each compressed data block and the flags used for its compression. \end{itemize} \subsection{Implementation details} \label{subsec:specialpurpose} The special-purpose Codec supporting DSRC, HS\_DSRC, has been obtained following our {\bf Splittable Compressor Meta-Codec}, as described in Section 2.3 of the Main Manuscript\xspace. It required the development of two Java classes: \texttt{DSRCInputFormat} and \texttt{DSRCCodec}. In particular, \texttt{DSRCCodec} uses the JNI framework \cite{Jni} to load in memory and instantiate the dynamic library containing the DSRC native implementation.
Then, it uses the \texttt{DSRCInputFormat} class to extract the information regarding the DSRC parameters and the list of compressed data blocks, according to the DSRC format. In addition, this class initializes the \texttt{CodecInputStream} object, pointing to the file to be decompressed during the execution of a job. Finally, it runs the \texttt{NativeCodecDecompressor decompress} method on each compressed data block to obtain its decompressed version. \section{Specialized Compressors Supported by means of our {\bf Universal Compressor Meta-Codec}} In this Section we provide details about the work done to incorporate into Hadoop the specialized compressors reported in Section 3.1.1 of the Main Manuscript\xspace, using our {\bf Universal Compressor Meta-Codec}. For each compressor, the only step required to support it is the definition of a set of properties stating the supported input file types and the command lines required for compressing and decompressing a generic input file. Let X be the unique name denoting the compressor to be supported and F the file being processed; the following command-line properties are available for its integration: \begin{itemize} \item{\texttt{uc.X.compress.cmd}}: the command line to be used for compressing F using X. \item{\texttt{uc.X.decompress.cmd}}: the command line to be used for decompressing F using X. \item{\texttt{uc.X.io.input.flag}}: the command-line flag used to specify the input filename. \item{\texttt{uc.X.io.output.flag}}: the command-line flag used to specify the output filename. \item{\texttt{uc.X.compress.ext}}: the extension used by X for saving a compressed copy of F. \item{\texttt{uc.X.decompress.ext}}: the extension used by X for saving a decompressed copy of F ("fastq" by default). \item{\texttt{uc.X.io.reverse}}: set to \emph{true} if X requires the output file name to be specified before the input file name; \emph{false}, otherwise.
\end{itemize} In Table \ref{tab:my_label}, the command lines used for integrating the target specialized compressors using our {\bf Universal Compressor Meta-Codec} are reported. \begin{table} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{Properties} & \multicolumn{5}{c|}{Compressors}\\\cline{2-6} & SPRING (for FASTQ) & SPRING (for FASTA) & DSRC & FqzComp & MFCompress\\ \hline uc.X.compress.cmd & spring -c & spring -c --fasta-input & dsrc c -t8 & fqz\_comp & MFCompressC -t 8 -p 8\\ uc.X.decompress.cmd & spring -d & spring -d & dsrc d -t8 & fqz\_comp -d & MFCompressD -t 8\\ uc.X.io.input.flag & -i & -i & & & \\ uc.X.io.output.flag & -o & -o & & & -o \\ uc.X.compress.ext & .spring & .spring & .dsrc & .fqz & .mfc \\ uc.X.decompress.ext & & .fasta & & & .fasta \\ uc.X.io.reverse & & & & & true \\ \hline \end{tabular} } \caption{Command line properties required for supporting several specialized compressors using our {\bf Universal Compressor Meta-Codec}} \label{tab:my_label} \end{table} \section{Datasets} \label{sec:datasets} The FASTQ files used in our experiments contain a set of reads extracted uniformly at random from a collection of genomic sequences coming from the Pinus Taeda genome \cite{PinusTaeda2013}. The FASTA files used in our experiments contain a set of reads extracted uniformly at random from a collection of genomic sequences coming from the Human genome \cite{Human2008}. Details about the files included in these datasets are reported in Table \ref{tab:FAdataset} and Table \ref{tab:FQdataset}. \begin{table}[!ht] \centering \begin{tabular}{|l|r|c|} \hline Name & \# of reads & Avg. 
read length \\ \hline 16GB & 96,407,378 & 100 \\ 32GB & 192,653,438 & 100 \\ 64GB & 385,306,876 & 100 \\ 96GB & 577,960,314 & 100 \\ \hline \end{tabular} \caption{Files included in the FASTA dataset used in our experiments} \label{tab:FAdataset} \end{table} \begin{table}[!ht] \centering \begin{tabular}{|l|r|c|} \hline Name & \# of reads & Avg. read length \\ \hline 16GB & 44,681,859 & 151 \\ 32GB & 89,363,718 & 151 \\ 64GB & 178,727,437 & 151 \\ 96GB & 268,091,154 & 151 \\ \hline \end{tabular} \caption{Files included in the FASTQ dataset used in our experiments} \label{tab:FQdataset} \end{table} \begin{figure}[!ht] \centering \includegraphics[scale=.45]{img/generic-class-diagram} \caption{UML class diagram of our \textbf{Splittable Compressor Meta-Codec}} \label{fig:classGeneric} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=.35]{img/universal-class-diagram} \caption{UML class diagram of our \textbf{Universal Compressor Meta-Codec}} \label{fig:classUniversal} \end{figure} \newpage \begin{figure}[!ht] \centering \includegraphics[scale=.45]{img/dsrc-class-diagram} \caption{UML class diagram of HS\_DSRC } \label{fig:classDSRC} \end{figure} \clearpage \bibliographystyle{abbrv}
\section{Introduction} In recent years, coarse-graining renormalization group methods for tensor networks have become essential numerical tools to study classical and quantum lattice models. One advantage is the ability to study systems in the thermodynamic limit. However, it can be tricky to extract the critical properties of an infinite system, due to the crossover to a mean-field-like behavior when the system approaches the critical point \cite{Liu:2010dl}. There are more sophisticated tensor coarse-graining methods \cite{Evenbly:2015csa, Evenbly:2015ey, Yang:2017hj}, which can be used to extract conformal data such as the central charge and the scaling dimensions, but these methods are typically computationally more expensive. Another standard route to extract the critical properties is to calculate the physical properties of a finite-size system and then perform a finite-size scaling (FSS) analysis. To the best of our knowledge, however, tensor network methods have rarely been used to perform FSS analysis for two-dimensional (2D) classical systems. Notable exceptions include the corner transfer matrix renormalization group study of 2D classical lattice models \cite{Nishino:1997kn} and the tensor network based FSS analysis of the Fisher zeros \cite{Denbleyker:2014cda, Hong:2019cm}. This is partly due to the lack of an efficient and accurate method to calculate the higher-order moments via tensor network methods, while higher-order moments and their ratios are key ingredients of the FSS analysis. Recently, an algorithm to calculate higher-order moments via HOTRG was proposed in Ref.~\cite{Morita:2018gy}. This motivates us to revisit the route of tensor network based FSS analysis. In this work we demonstrate that the FSS analysis can be performed within the tensor network framework in a fashion that is similar to Monte Carlo simulations.
In particular, we study the phase transitions and the critical properties of the ``deformed-AKLT'' family of states on the 2D square lattice. The one-dimensional (1D) Affleck, Kennedy, Lieb, and Tasaki (AKLT) state \cite{Affleck:1987jya, Affleck:1988hl} was originally constructed to understand Haldane's conjecture \cite{Haldane:1983ip} for integer-spin chains. Later, it became a canonical example for the concept of the symmetry protected topological order \cite{Pollmann:2010ih}. It is straightforward to generalize the valence bond construction of the 1D AKLT state to higher dimensions. In recent years, the 2D generalizations of the AKLT states and their parent Hamiltonians have been actively investigated to understand the nature of the symmetry protected topological order \cite{Chen:2011gt, You:2014fo, Wierschem:2016kv, Huang:2016bf, Pomata:2018ep}. Specifically, the ``deformed-AKLT'' family of states is a two-parameter family of states on the 2D square lattice, which can be obtained by locally deforming the 2D AKLT state. As one varies the parameters within the parameter space, the state exhibits parameter-induced phase transitions. It is known that the phase diagram consists of a disordered AKLT phase, an ordered ferromagnetic (FM) phase (or equivalently a N\'eel phase), and a critical XY phase \cite{Niggemann:2000ta, Pomata:2018ep}. Specifically, we use the higher order tensor renormalization group (HOTRG) \cite{Xie:2012iy} method to coarse-grain the tensor network and to calculate the correlations as well as the higher moments~\cite{Morita:2018gy}. Then we perform the FSS analysis based on the moments, the correlations, and their dimensionless ratios. For the AKLT-FM transition, we are able to determine the critical point and the critical exponents accurately. Furthermore, the values of the critical exponents confirm that the transition belongs to the 2D Ising universality class.
For the AKLT-XY transition, we show that the critical point can be located by the FSS analysis of the correlation ratio. The manuscript is organized as follows: In Sec.\ref{model_method} we briefly describe how to construct a higher-dimensional generalization of the AKLT state and how to construct the deformed-AKLT family of states on the 2D square lattice. We present our FSS analysis and the results for the AKLT-FM transition in Sec.\ref{AKLT-FM}. Then in Sec.\ref{AKLT-XY} we investigate the BKT transition into the critical XY phase. Finally, in Sec.\ref{discussion} we discuss various aspects of the approaches used in this work. \section{Model and Method\label{model_method}} Our starting point is a generalized AKLT state on an arbitrary lattice, which admits a tensor network state representation. Consider a lattice with coordination number $q$. First we put a virtual spin-$\frac{1}{2}$ state at each end of every link. Then we project the $q$ virtual spin states on each vertex onto the subspace of spin-$\frac{q}{2}$ with the projector: \begin{equation} \mathcal{P}_q= \sum c_{s} \left|\frac{q}{2}, s \right\rangle \langle s_1,s_2,...,s_q|, \end{equation} where $s_i = \pm \frac{1}{2}$ and $s=\sum_i s_i \in [-\frac{q}{2}, \dots, \frac{q}{2}]$ are the virtual and physical spin indices in the $S^z$ basis, respectively, and $c_{s}$ are the Clebsch-Gordan coefficients.
Next we put a bond state $|\omega\rangle$ on each link $l$ of the lattice, where the bond state $ | \omega \rangle$ is one of the Bell states \begin{align} & | \phi^{+} \rangle = | +\frac{1}{2}, +\frac{1}{2}\rangle + |-\frac{1}{2}, -\frac{1}{2}\rangle \notag \\ & | \phi^{-} \rangle = | +\frac{1}{2}, +\frac{1}{2}\rangle - |-\frac{1}{2}, -\frac{1}{2}\rangle = I \otimes \sigma^z | \phi^{+} \rangle \notag \\ & | \psi^{+} \rangle = | +\frac{1}{2}, -\frac{1}{2}\rangle + |-\frac{1}{2}, +\frac{1}{2}\rangle = I \otimes \sigma^x | \phi^{+} \rangle \notag \\ & | \psi^{-} \rangle = | +\frac{1}{2}, -\frac{1}{2} \rangle - |-\frac{1}{2}, +\frac{1}{2}\rangle =I \otimes i\sigma^y | \phi^{+} \rangle. \end{align} Here $\sigma^x$, $\sigma^y$, and $\sigma^z$ are Pauli matrices. The spin-$\frac{q}{2}$ AKLT state is then defined as \begin{equation} \left|\Psi_{\text{AKLT}} \left(\frac{q}{2} \right) \right\rangle = \bigotimes_{v \in V} ( \mathcal{P}_q)_v \bigotimes_{l \in L} | \omega \rangle_l. \end{equation} To construct the ``deformed-AKLT'' family of states, we apply the following diagonal, spin-flip invariant deformation \begin{equation} D(\vec{a}) = \sum_{s= -q/2} ^{q/2} \frac{a_{|s|} }{c_s} | s \rangle \langle s|, \end{equation} where $s$ is the physical spin index. The resulting family of states can hence be expressed as \begin{equation} \left|\psi_{\text{AKLT}} \left(\frac{q}{2}, \vec{a} \right) \right\rangle = \bigotimes_{v \in V} \left( D(\vec{a}) \mathcal{P}_q \right)_v \bigotimes_{l \in L} |\omega\rangle_l. \end{equation} \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{AKLT_pd.pdf} \includegraphics[width=1.0\columnwidth]{M2.pdf} \caption{(Color online) (a) Sketch of the phase diagram. Red dashed lines correspond to the three scans of the parameters performed in this work. (b) Second moment $\langle m^2_z \rangle$ as a function of $a_1$ with $a_2=\sqrt{6}$ near the AKLT-FM transition.
(c) Second moment $\langle m^2_x \rangle$ as a function of $a_2$ with $a_1=0.5$ near the AKLT-XY transition. The linear system sizes are $L=64, 128, 256, 512$.} \label{fig:phase} \end{figure} We note in passing that on a bipartite lattice one may change from one bond state to another by applying an SU(2) on-site transformation $U_A$ and $U_B$ to all of the sites in sublattices A and B, respectively. Due to the SU(2) invariance of the projector $\mathcal{P}_q$, this is equivalent to performing the transformation on every bond state, i.e., $| \omega \rangle \to (U_A)^{1/2} \otimes (U_B)^{1/2} | \omega \rangle$. Thus, given physical data from any bond state, we may produce the corresponding information for another bond state. Consequently, the location and the nature of the parameter-induced transition are the same regardless of the particular $| \omega \rangle$ used on a bipartite lattice. This equivalence has been discussed and numerically confirmed in Refs.~\cite{Huang:2016bf, Pomata:2018ep}. In this work we focus on the spin-2 AKLT state and the deformed spin-2 AKLT family of states on a 2D square lattice. Following the recipe above, they can easily be constructed by setting $q=4$. Furthermore, the deformation matrix reads $D(\vec{a})=\text{diag}(\frac{a_2}{\sqrt{6}}, \frac{a_1}{\sqrt{6/4}}, 1, \frac{a_1}{\sqrt{6/4}}, \frac{a_2}{\sqrt{6}})$. This results in a two-parameter family of states, with the AKLT point corresponding to $(a_2, a_1, a_0)=(\sqrt{6}, \sqrt{3/2}, 1)$. It is known that there are parameter-induced phase transitions as one tunes one of the deformation parameters within the parameter space \cite{Niggemann:2000ta, Pomata:2018ep}. The AKLT point is inside the gapped, disordered phase with SPT order and we will denote this phase as the AKLT phase. In the limit of $a_2 \rightarrow \infty$ the state enters a symmetry-broken phase and becomes N\'eel or ferromagnetically ordered depending on the particular bond state used.
In this work we use $|\psi^+\rangle$ as the bond state, resulting in a uniform tensor network. In this case, the symmetry-broken phase corresponds to a ferromagnetic (FM) phase with spontaneous uniform magnetization $\langle S^z\rangle$ in the $z$-direction. It was conjectured in Ref.~\cite{Niggemann:2000ta} that this order-disorder transition corresponds to two simultaneous Ising transitions. The conjecture is based on simulations of a system of $30\times 30$ sites, where the critical exponent $\eta$ is estimated to be $\frac{1}{2}$, which is twice the exact Ising exponent $\frac{1}{4}$. The transition is further explored in Ref.~\cite{Pomata:2018ep}: using TNR, it is found that the central charge is $c=\frac{1}{2}$ and the conformal tower obtained by TNR matches the Ising CFT. On the other hand, near $a_2=a_1=0$ there is a finite region in which the state is in an XY phase with divergent correlation length. It is theorized in Ref.~\cite{Pomata:2018ep} that in the continuum limit the system can be described by the compactified-free-boson CFT, which has central charge $c=1$. Consequently, the phase transition between the XY phase and the AKLT phase is of BKT type. In that work, TNR and loop-TNR were used to evaluate the central charge $c$ and the coupling $g$. The phase boundary is then estimated by locating the position at which the central charge $c$ or the coupling $g$ drops sharply below 1 or 4, respectively. In Fig.~\ref{fig:phase}(a) we sketch the phase diagram based on the known results in the literature. To study the AKLT-FM transition we fix $a_2=\sqrt{6}$ and vary $a_1$ across the phase boundary. We also study the limiting case of $a_1=0$, where we vary $a_2$ across the phase boundary; it is expected that in this limit the universality class is different from that of the 2D Ising model \cite{Pomata:2018ep}. Finally, to study the AKLT-XY transition we fix $a_1=0.5$ and vary $a_2$ across the phase boundary.
In Fig.~\ref{fig:phase}(a) we also sketch the lines of these three scans. The spin-2 deformed AKLT states naturally admit a tensor network state (TNS) representation: \begin{equation} |\Psi (\vec{a})\rangle = \sum_{s_1, s_2, \cdots, s_i} \text{tTr} \left( A^{1} A^{2} \cdots A^{i} \cdots \right) |s_1 s_2 \cdots s_i \cdots \rangle, \end{equation} where $A^i_{s_i i_1 i_2 i_3 i_4}$ is a rank-5 local tensor on site-$i$ with a physical index $s_i$ and $4$ virtual bond indices $i_1, i_2, i_3, i_4 \in \{0,1\}$. $\text{tTr}$ denotes the tensor trace over all the virtual bond indices. Specifically, the non-zero elements of the $A$ tensor are: \begin{align} & A_{2,1111} =A_{-2,0000} =a_2, \notag \\ & A_{1,1110} =A_{1,1101} =A_{1,1011} =A_{1,0111} =a_1, \notag \\ & A_{-1,0001} =A_{-1,0010} =A_{-1,0100} =A_{-1,1000} =a_1, \notag \\ & A_{0,1100}=A_{0,1001}=A_{0,0011}=a_0, \notag \\ & A_{0,0110}=A_{0,0101}=A_{0,1010}=a_0. \end{align} The norm squared of such a TNS is given by \begin{equation} \langle \Psi | \Psi \rangle = \text{tTr} \left( \mathbf{T}^1 \mathbf{T}^2 \cdots \mathbf{T}^i \cdots \right), \end{equation} where the local {\em doubled tensor} $\mathbf{T}^i$ on site-$i$ is obtained by contracting the physical indices of $A^i$ and $(A^i)^*$: \begin{equation} \mathbf{T}^i_{(i_1 i^\prime_1), (i_2 i^\prime_2), (i_3 i^\prime_3), (i_4 i^\prime_4)} = \sum_{s_i} A^i_{s_i i_1 i_2 i_3 i_4} \times \left(A^i_{s_i i^\prime_1 i^\prime_2 i^\prime_3 i^\prime_4}\right)^*. \end{equation} It is convenient to treat the doubled tensor $\mathbf{T}^i$ as a rank-4 tensor, with a compound index $(i i^\prime)\in \{0,1,2,3\}$ on each leg. In other words, the bond dimension of each leg is 4. It is also straightforward to express the expectation value of operators as a tensor trace.
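As a concrete illustration, the local tensor and its doubled tensor can be built in a few lines of NumPy. This is a minimal sketch: the function names and the encoding $s = S^z + 2$ are our own conventions, not part of the implementation used in this work.

```python
import numpy as np

def local_tensor(a1, a2, a0=1.0):
    """Rank-5 local tensor A[s, i1, i2, i3, i4] of the deformed spin-2
    AKLT state.  The virtual indices take values in {0, 1}; an entry is
    non-zero only when the number of `up' virtual indices n equals
    s = S^z + 2, and its value a_{|S^z|} depends only on |S^z|."""
    weight = {0: a2, 1: a1, 2: a0, 3: a1, 4: a2}
    A = np.zeros((5, 2, 2, 2, 2))
    for idx in np.ndindex(2, 2, 2, 2):
        n = sum(idx)                      # number of up virtual spins
        A[(n,) + idx] = weight[n]
    return A

def doubled_tensor(A):
    """Contract the physical index of A with its conjugate; the pairs
    (i, i') are fused into compound legs of bond dimension 4."""
    T = np.einsum('sabcd,sefgh->aebfcgdh', A.conj(), A)
    return T.reshape(4, 4, 4, 4)

A = local_tensor(a1=np.sqrt(3 / 2), a2=np.sqrt(6))   # AKLT point
T = doubled_tensor(A)
```

Since the weight depends only on the number of up virtual spins, the resulting doubled tensor is symmetric under any permutation of its four compound legs, as expected for the isotropic square-lattice state.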
For example, \begin{equation} \langle \Psi | S^{1z} S^{2z} | \Psi \rangle = \text{tTr} \left( \mathbf{T}^{1z} \mathbf{T}^{2z} \mathbf{T}^3 \cdots \mathbf{T}^i \cdots \right), \end{equation} where \begin{equation} \mathbf{T}^{1z}_{(i_1 i^\prime_1), (i_2 i^\prime_2), (i_3 i^\prime_3), (i_4 i^\prime_4)} = \sum_{s_1, s^\prime_1} A^1_{s_1 i_1 i_2 i_3 i_4} \times \left(S^z\right)_{s_1 s^\prime_1} \times \left(A^1_{s^\prime_1 i^\prime_1 i^\prime_2 i^\prime_3 i^\prime_4}\right)^* \end{equation} and similarly for $\mathbf{T}^{2z}$. Due to the spin-flip symmetry, in the FM phase $|\Psi (\vec{a})\rangle$ is a superposition of both possible ordered states, and the magnetization is strictly zero. One can apply a very small symmetry-breaking field to induce a non-zero magnetization, but one then has to numerically take the zero-field limit to obtain the spontaneous magnetization. In this work, we will instead use the second and fourth moments of the magnetization to characterize the ordered phase, since these even moments can pick up non-zero values in the absence of the external field. In general, performing the tensor trace exactly in two and higher dimensions is exponentially difficult. There are, however, many approximation schemes which reduce the cost to polynomial in the cut-off bond dimension. These include, for example, the corner transfer matrix renormalization group (CTMRG) \cite{Nishino:1997kn, Orus:2012ft}, the tensor renormalization group (TRG) \cite{Levin:2007jua}, the higher-order tensor renormalization group (HOTRG) \cite{Xie:2012iy}, etc. In this work we mainly use HOTRG. While HOTRG is often used to study the system in the thermodynamic limit, here we focus on finite-size systems with linear size $L=2^{N+1}$, where $N$ is the number of HOTRG steps. We note in passing that the accuracy of the HOTRG is determined by the cut-off dimension $D_{\text{cut}}$. In the following we set $D_{\text{cut}}=50$ unless mentioned otherwise. We have checked that this cut-off dimension is large enough for the calculations in this work.
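For readers unfamiliar with HOTRG, a single coarse-graining step can be sketched as follows. This is a minimal, real-valued sketch with a one-sided HOSVD projection; the production calculations in this work use a full Uni10-based implementation.

```python
import numpy as np

def hotrg_step(T, Dcut):
    """One HOTRG coarse-graining step merging two tensors along the
    vertical direction.  T[l, r, u, d] is the rank-4 (doubled) tensor;
    the fattened horizontal legs are truncated down to Dcut, while the
    vertical legs are left untouched."""
    D = T.shape[0]
    # Stack two tensors vertically: M[(L l), (R r), u, d]
    M = np.einsum('LRuk,lrkd->LlRrud', T, T).reshape(D * D, D * D, D, D)
    # Density matrix of the fat left leg; its leading eigenvectors
    # define the truncation isometry U (HOSVD-style projection).
    rho = np.einsum('aRud,bRud->ab', M, M)
    w, U = np.linalg.eigh(rho)
    U = U[:, np.argsort(w)[::-1][:Dcut]]          # keep Dcut largest
    # Project both fat horizontal legs down to Dcut.
    return np.einsum('ai,abud,bj->ijud', U, M, U)

rng = np.random.default_rng(1)
T0 = rng.random((2, 2, 2, 2))
T1 = hotrg_step(T0, Dcut=4)    # Dcut >= D**2: the step is exact
```

When `Dcut` is at least the squared bond dimension the projection is a full orthogonal rotation, so the tensor trace of the coarse-grained tensor reproduces the exact partition sum of the two-site block; truncation errors only enter once `Dcut` cuts the spectrum of `rho`.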
Furthermore, we use the method proposed in Ref.~\cite{Morita:2018gy} to evaluate the higher moments at different sizes. \section{AKLT-FM transition} \label{AKLT-FM} In this section we study the AKLT-FM transition and demonstrate that the critical point and the critical exponents can be determined accurately using tensor-network-based FSS analysis. We start from the AKLT point and vary $a_1$ across the phase boundary as shown in Fig.~\ref{fig:phase}(a). Two approaches are used to estimate the critical point and the critical exponents. Both approaches rely on the finite-size scaling hypothesis, which states that near a continuous phase transition a quantity $Q$ shall scale as \begin{equation} Q(a, L) = L^{c_2} f( (a-a_c)L^{c_1}), \end{equation} where $a$ is the tuning parameter, $a_c$ is the critical point, $c_1, c_2$ are the critical exponents, and $f$ is the scaling function. In the first approach, the critical point $a_c$ and the critical exponents $c_1, c_2$ are estimated {\em simultaneously} by collapsing the data of $Q$ from various $a$ and $L$. In this work, we use the kernel method proposed in Ref.~\cite{Harada:2011js} to perform the scaling analysis. In the second approach we first use the crossing points of dimensionless quantities to determine the critical point, and then use the finite-size scaling at the critical point to estimate the critical exponents. Since a dimensionless quantity $\tilde{Q}$ shall scale as \begin{equation} \tilde{Q}(a, L) = f( (a-a_c) L^{c_1}), \end{equation} one finds $\tilde{Q}(a_c, L)=f(0)$, and data from different sizes should cross at the critical point $a_c$. However, if the correction to the scaling is not negligible, the crossing point will drift as $L$ increases. In this case a better estimation of the critical point can be obtained by extrapolating the crossing points.
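The first approach can be illustrated with a toy version of the collapse objective. The polynomial master-curve fit below is a simplified stand-in for the Bayesian kernel method of Ref.~\cite{Harada:2011js}, and the scaling function and parameter values are synthetic.

```python
import numpy as np

def collapse_residual(params, data, deg=9):
    """Quality-of-collapse objective.  Each data set (L, a, Q) is mapped
    to x = (a - a_c) L^{c1}, y = Q L^{-c2}; if (a_c, c1, c2) are correct,
    all points fall on one master curve, here approximated by a single
    polynomial, and the mean squared residual is minimal."""
    a_c, c1, c2 = params
    x = np.concatenate([(a - a_c) * L**c1 for L, a, Q in data])
    y = np.concatenate([Q * L**(-c2) for L, a, Q in data])
    coef = np.polyfit(x, y, deg)
    return np.mean((y - np.polyval(coef, x)) ** 2)

# Synthetic data obeying Q(a, L) = L^{c2} f((a - a_c) L^{c1}) exactly.
a_c, c1, c2 = 0.894, 1.0, -0.25
data = []
for L in (64, 128, 256, 512):
    a = np.linspace(0.886, 0.902, 33)
    data.append((L, a, L**c2 * np.tanh((a - a_c) * L**c1)))
```

Minimizing `collapse_residual` over $(a_c, c_1, c_2)$, e.g.\ with a Nelder--Mead search, then recovers the critical point and exponents; for $Q = \langle m^2_z\rangle$ one identifies $c_1 = 1/\nu$ and $c_2 = -2\beta/\nu$.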
Consider the $k$-th moment of the uniform magnetization in the $x, y, z$ directions: $\langle m^k_{x, y, z} \rangle \equiv \langle (\frac{1}{L^2}\sum_{i} S^{ x, y, z}_i)^k \rangle$. These moments can be calculated via HOTRG by using the procedure proposed in Ref.~\cite{Morita:2018gy}. Since there is no magnetic order in the AKLT phase one has $\langle m^k_{x,y,z} \rangle=0$. On the other hand, in the FM phase one has $\langle m^k_z \rangle \neq 0$ and $\langle m^k_{x,y}\rangle=0$. Near the phase transition the $k$-th moment $\langle m^k_z\rangle$ shall scale as \begin{equation} \langle m^k_z \rangle (a, L) = L^{-k\beta/\nu} f_k( (a-a_c) L^{1/\nu}), \end{equation} where $\beta$ is the standard critical exponent associated with the magnetization $ m \propto (a-a_c)^\beta$ and $f_k$ is the scaling function. In particular, we calculate the second moment $\langle m^2\rangle$ and the fourth moment $\langle m^4\rangle$. Furthermore, we consider the Binder ratio of the fourth moment and the square of the second moment, $U_2 \equiv \langle m^4\rangle/ \langle m^2\rangle^2$. Since it is dimensionless, it shall scale as \begin{equation} U_2(a, L) = \frac{\langle m^4\rangle}{\langle m^2\rangle^2} = f_U( (a-a_c) L^{1/\nu}). \end{equation} \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{SQ_AKLT-FM_M2_M4_U2_TP.pdf} \caption{(Color online) (a) Second moment $\langle m^2_z \rangle$, (c) fourth moment $\langle m^4_z \rangle$, (e) Binder ratio $U_2$ as a function of $a_1$ for $L=64, 128, 256, 512$. (b), (d), (f) Rescaled $\langle m^2_z \rangle$, $\langle m^4_z \rangle$, and $U_2$ as a function of $ (a_1-a_c) L^{1/\nu}$. Inset of (e): Crossing points $a_{c}(L)$ of $U_2(L)$ and $U_2(2L)$ as a function of $1/L$.} \label{fig:BSA} \end{figure} \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{SQ_AKLT-FM_C2_C4_R_TP.pdf} \caption{(Color online) (a) $C_{\text{max}}(L)$, (c) $C_{\text{halfmax}}(L)$, (e) correlation ratio $R$ as a function of $a_1$ for $L=64, 128, 256, 512$.
(b),(d),(f) Rescaled $C_{\text{max}}(L)$, $C_{\text{halfmax}}(L)$, and $R$ as a function of $ (a_1-a_c) L^{1/\nu}$. Inset of (e): Crossing points $a_{c}(L)$ of $R(L)$ and $R(2L)$ as a function of $1/L$.} \label{fig:BSA_C} \end{figure} In Fig.~\ref{fig:phase}(b) we plot the second moment $\langle m^2_z\rangle$ near the AKLT-FM transition. We fix $a_2=\sqrt{6}$ and vary $a_1$ across the phase boundary. We observe that the second moment becomes non-zero around $a_1 \approx 0.9$. Furthermore, the transition becomes sharper as the system size increases, indicating a second-order phase transition in the thermodynamic limit. In Fig.~\ref{fig:BSA}(a), (c), (e) we show $\langle m^2_z \rangle$, $\langle m^4_z \rangle$, and the Binder ratio $U_2$ near the critical point. By using the kernel method to collapse the data from different sizes we estimate the critical point and critical exponents. In Fig.~\ref{fig:BSA}(b),(d),(f) we show the rescaled data and we observe that the data collapse very well. For the critical point, we find $a_c \approx 0.894(6), 0.894(7), 0.894(8)$ respectively from $\langle m^2_z \rangle$, $\langle m^4_z \rangle$, and $U_2$. We also find $1/\nu \approx 1.0(1), 1.0(1), 1.0(2)$, where $\nu$ is the exponent associated with the correlation length $\xi \propto (a-a_c)^{-\nu}$. Furthermore, we find $2\beta/\nu \approx 0.2(6)$ from $\langle m^2_z \rangle$ and $4\beta/\nu \approx 0.5(2)$ from $\langle m^4_z \rangle$. The estimated critical points obtained with different quantities are very close to each other, and the values of the exponents are consistent with the expected 2D Ising universality class. Next we consider the spin-spin correlation function in the $x$, $y$, and $z$ directions, \begin{equation} C^{x,y,z}(\mathbf{r}) \equiv \frac{1}{L^2} \sum_{\mathbf{r}_i} \langle S^{x,y,z}_{\mathbf{r}_i} S^{x,y,z}_{\mathbf{r}_i + \mathbf{r}}\rangle.
\end{equation} For a finite system near the critical point, there are two length scales: the system size $L$ and the correlation length $\xi$. On general grounds the following scaling form with two scaling variables is expected: \begin{equation} C^{x,y,z}(\mathbf{r}) = |\mathbf{r}|^{-(D-2+\eta)} h_{x,y,z}(\mathbf{r}/L, L/\xi), \end{equation} where $\eta$ is the anomalous dimension and $h_{x,y,z}$ are the scaling functions. In particular, we calculate the correlation at maximum distance, $C_{\text{max}}(L) \equiv C^{x,y,z}((L/2,L/2))$, and at half of the maximum distance, $C_{\text{halfmax}}(L) \equiv C^{x,y,z}((L/4,L/4))$. In passing we note that the correlation at maximum and half-maximum distance can be calculated efficiently using HOTRG. In a conventional second-order phase transition, where the correlation length diverges as a power law $\xi \propto t^{-\nu}$ with $t \equiv a-a_c$, one has \begin{equation} C_{\text{max, halfmax}}(a, L) = L^{-(D-2+\eta)} h_{\text{max, halfmax}}( (a-a_c) L^{1/\nu}), \end{equation} where $h_{\text{max, halfmax}}$ are scaling functions and $D$ is the dimension of the system. In addition, we consider the dimensionless correlation ratio $R$ of $C_{\text{max}}(L)$ and $C_{\text{halfmax}}(L)$, which should scale as \begin{equation} R(a, L) \equiv \frac{ C_{\text{max}}(a,L) }{C_{\text{halfmax}}(a,L) } = h_R( tL^{1/\nu} ), \end{equation} with some scaling function $h_R$. In Fig.~\ref{fig:BSA_C}(a), (c), (e) we show $C_{\text{max}}(L)$, $C_{\text{halfmax}}(L)$, and the correlation ratio $R$ as a function of $a_1$, while in Fig.~\ref{fig:BSA_C}(b), (d), (f) we show the rescaled data. By collapsing the data we find $a_c \approx 0.894(7), 0.894(7), 0.894(8)$ and $1/\nu \approx 1.0(1), 1.0(1), 1.0(4)$ respectively from $C_{\text{max}}(L)$, $C_{\text{halfmax}}(L)$, and $R$. We observe again that the estimated critical points are highly consistent with each other as well as with the results from the moments and the Binder ratio.
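The crossing points of dimensionless quantities such as $U_2$ and $R$, used in the second approach below, can be extracted from sampled curves by locating the sign change of their difference. A minimal sketch with synthetic curves (in practice one would also propagate error bars):

```python
import numpy as np

def crossing_point(a, R1, R2):
    """Crossing of two curves R(a, L) and R(a, 2L) sampled on a common
    grid `a`, located by linear interpolation of their difference."""
    d = np.asarray(R1) - np.asarray(R2)
    sign_change = np.nonzero(d[:-1] * d[1:] < 0)[0]
    if len(sign_change) == 0:
        raise ValueError("curves do not cross on this grid")
    i = sign_change[0]
    return a[i] - d[i] * (a[i + 1] - a[i]) / (d[i + 1] - d[i])

# Synthetic dimensionless curves f((a - a_c) L) crossing at a_c = 0.894.
a = np.linspace(0.88, 0.91, 300)
R_64, R_128 = (np.tanh((a - 0.894) * L) for L in (64, 128))
```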
Furthermore, from both $C_{\text{max}}(L)$ and $C_{\text{halfmax}}(L)$ we find $\eta \approx 0.2(5)$. These values are again consistent with the 2D Ising universality class. \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{SQ_ac.pdf} \caption{(Color online) Finite-size scaling at the critical point. (a) $\ln(Q)$ as a function of $\ln(L)$, where $Q=\langle m^2_z\rangle, \langle m^4_z\rangle, C_{\text{max}}, C_{\text{halfmax}}$. (b) $\ln\left(\frac{dQ}{da}\right)$ as a function of $\ln(L)$, where $Q=U_2$ and $-R$.} \label{fig:Q_ac} \end{figure} Now we move on to the second approach. In this approach, we first use the crossing points of the dimensionless quantities $U_2$ and $R$ to estimate the critical point. We then study the finite-size scaling of various quantities at the critical point to estimate the corresponding exponents. In the inset of Fig.~\ref{fig:BSA}(e) we plot the crossing points $a_{c,U_2}(L)$ of $U_2(L)$ and $U_2(2L)$ as a function of $1/L$. We observe that $a_{c,U_2}(L)$ decreases monotonically as $L$ increases. By fitting the finite-size crossing points to a power-law function $a_{c,U_2}(L) = a_c + b L^\lambda$ we find $a_c \approx 0.894(8)$. In the inset of Fig.~\ref{fig:BSA_C}(e) we plot the crossing points $a_{c,R}(L)$ of $R$ as a function of $1/L$. In this case we find that the crossing points of $R$ do not drift much, and we estimate $a_c \approx 0.894(8)$ from the crossing point of the largest size. In passing we note that these results are highly consistent with the results from the first approach. After locating the critical point, the values of the critical exponents can be estimated by studying the finite-size scaling of various quantities at the (estimated) critical point. At the critical point the $k$-th moment shall scale as \begin{equation} \langle m^k_z \rangle(a=a_c, L) \propto L^{-k\beta/\nu}, \end{equation} while the correlation function shall scale as \begin{equation} C_{\text{max, halfmax}}(a=a_c, L) \propto L^{-(D-2+\eta)}.
\end{equation} Finally, the critical exponent $\nu$ can also be estimated from the derivatives of the dimensionless quantities at the critical point, \begin{equation} \left. \frac{dU_2(a,L)}{da} \right|_{a_c} \propto L^{1/\nu}, \end{equation} and similarly for $-dR/da$. Here we obtain the slope via numerical differentiation. In Fig.~\ref{fig:Q_ac} we plot the logarithm of the above-mentioned quantities as a function of $\ln(L)$. We use data from $L=64,128,256,512$ to perform the linear fit, and the slope of the fitted line is the corresponding exponent. In the figure, we also show data from smaller sizes with $L=4,8,16,32$, and deviations from the linear fit are clearly observed. From the fitting we find $2\beta/\nu \approx 0.2(6)$ from $\langle m^2_z\rangle$ and $4\beta/\nu \approx 0.5(3)$ from $\langle m^4_z\rangle$. For the exponent $\eta$, we find $\eta \approx 0.2(7)$ from both $C_{\text{max}}(L)$ and $C_{\text{halfmax}}(L)$. Finally, from $dU_2/da$ and $-dR/da$ we find $1/\nu \approx 0.9(9)$ and $1.0(3)$. All these values are consistent with the 2D Ising universality class. In Table~\ref{tb:AKLT-FM_ac} we summarize the values of the estimated critical point. The label $X$ indicates that the results are obtained from the crossing point analysis. In Table~\ref{tb:AKLT-FM} we summarize the values of the exact and estimated critical exponents. The label $a_c$ indicates that the results are obtained from the finite-size scaling at the critical point. We observe that for both methods used in this section, the estimated critical points are highly consistent with each other. Furthermore, the estimated critical exponents are highly consistent with the expected 2D Ising universality class. This demonstrates that the tensor-network-based FSS analysis can be used to determine precisely the critical point as well as the critical exponents of a second-order phase transition. It was pointed out in Ref.~\cite{Pomata:2018ep} that as $a_1 \rightarrow 0$ the central charge $c$ becomes $1$.
As a result, in this limit the AKLT-FM transition cannot belong to the 2D Ising universality class. To study the phase transition in this limit we fix $a_1=0$ and vary $a_2$ across the phase boundary. We then apply the above-mentioned finite-size scaling analysis to estimate the critical point and exponents. The details of the finite-size scaling analysis are presented in Appendix~\ref{a1=0}. From the data collapse and the crossing points of $U_2$ and $R$ we find $a_c \approx {1.779(6)}$, which is consistent with the result in Ref.~\cite{Pomata:2018ep}. Furthermore, the exponents obtained are clearly different from the exponents of the 2D Ising model. This confirms that the transition at $a_1=0$ does not belong to the 2D Ising universality class. \begin{widetext} \begin{table}[tb] \caption{Summary of the estimated critical point. ($X$ indicates that the results are obtained from the crossing point analysis.)} \label{tb:AKLT-FM_ac} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \hline & $m^2_z$ & $m^4_z$ & $U_2$ & $C_{\text{max}}$ & $C_{\text{halfmax}}$ & $R$ & $U_2$(X) & $R$(X) \\ \hline $a_c$ & {0.894(6)} & {0.894(7)} & {0.894(8)} & {0.894(7)} & {0.894(7)} & {0.894(8)} & {0.894(8)} & {0.894(8)} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[tb] \caption{Summary of the exact and estimated critical exponents.} \label{tb:AKLT-FM} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline & 2D Ising & $m^2_z$ & $m^2_z(a_c)$ & $m^4_z$ & $m^4_z(a_c)$ & $U_2$ & $C_{\text{max}}$ & $C_{\text{max}}(a_c)$ & $C_{\text{halfmax}}$ & $C_{\text{halfmax}}(a_c)$ & $R$ & $\left.\frac{dU_2}{da}\right|_{a_c}$ & $\left.\frac{dR}{da}\right|_{a_c}$ \\ \hline $1/\nu$ & 1 & {1.0(1)} & - & {1.0(1)} & - & {1.0(2)} & {1.0(1)} & -& {1.0(1)} & - & {1.0(4)} & {0.9(9)} & {1.0(3)} \\ \hline $2\beta/\nu$ & 1/4 & {0.2(6)} & {0.2(6)} & - & - & - & - & - & - & - & - & - & -\\ \hline $4\beta/\nu$ & 1/2 & - & - & {0.5(2)} & {0.5(3)} & - & - & - & - & - & - & -
& -\\ \hline $\eta$ & 1/4 & - & - & - & - & - & {0.2(5)} & {0.2(7)} & {0.2(5)} & {0.2(7)} & - & - & - \\ \hline \end{tabular} \end{center} \end{table} \end{widetext} \section{AKLT-XY transition} \label{AKLT-XY} \begin{figure}[t] \includegraphics[width=1.0\columnwidth]{SQ_AKLT-XY.pdf} \caption{(Color online) (a) Binder ratio $U_2$ as a function of $a_2$. (b) Correlation ratio $R$ as a function of $a_2$. Inset: Crossing point $a_c$ of $R(L)$ and $R(2L)$ as a function of $1/\ln(L)$. (c) Correlation function $C^x(r)$ for various $a_2$. (d) Fitted $\eta$ as a function of $a_2$.} \label{fig:AKLT-XY} \end{figure} In this section we study the AKLT-XY transition. In Ref.~\cite{Pomata:2018ep} it is theorized that in the critical XY phase the system is described by the compactified-free-boson CFT with central charge $c=1$, and that the phase transition between the XY phase and the AKLT phase is of BKT type. In that work, TNR and loop-TNR were used to evaluate the central charge $c$ and the coupling $g$, and the phase boundary is estimated by locating the position at which the central charge $c$ or the coupling $g$ drops sharply below 1 or 4, respectively. In this work, we will use tensor-network-based finite-size scaling analysis to investigate the AKLT-XY transition. In general it is difficult to accurately determine the phase boundary of a BKT transition. Inside the XY phase the correlation length diverges in the thermodynamic limit, while it grows exponentially as one approaches the XY phase from outside. In this case the correlation ratio reads \begin{equation} R(a, L) \equiv \frac{ C_{\text{max}}(a,L) }{C_{\text{halfmax}}(a,L) } = h_R( L/\xi(a) ). \end{equation} Due to the divergence of the correlation length, the correlation ratio in the XY phase becomes $h_R(0)$. Consequently, data from different sizes should collapse within the XY phase, in contrast to crossing at the critical point as for a second-order phase transition.
In Refs.~\cite{Tomita:2002cka, Surungan:2019hjf} it is proposed and tested that the correlation ratio is better than the Binder ratio in locating the BKT transition, due to the cancellation of the logarithmic corrections \cite{Nomura:1995tq}. In the following we present our results for the AKLT-XY transition. Specifically, we fix $a_1=0.5$ and vary $a_2$ across the phase boundary. Similar to the study of the AKLT-FM transition, we evaluate the second and fourth moments, the correlation functions at maximum and half-maximum distance, as well as the dimensionless Binder ratio and correlation ratio. In Fig.~\ref{fig:phase}(c) we plot the second moment of the magnetization in the $x$-direction, $\langle m^2_x\rangle$, as a function of $a_2$. We observe that as $a_2$ decreases $\langle m^2_x\rangle$ becomes non-zero. However, for a fixed $a_2$ the value of $\langle m^2_x\rangle$ decreases monotonically as the system size increases, indicating that in the thermodynamic limit one has $\langle m^2_x\rangle=0$. Furthermore, we have checked that $\langle m^2_x\rangle = \langle m^2_y\rangle$ and $\langle m^2_z\rangle=0$ within this parameter regime. Qualitatively, these results are consistent with a BKT transition. In Fig.~\ref{fig:AKLT-XY}(a) we plot the Binder ratio $U_2$ as a function of $a_2$ near the phase boundary. We find that curves from different sizes never cross. Furthermore, they do not merge in the XY phase either. As a result, one cannot use the Binder ratio to locate the BKT transition point. In Fig.~\ref{fig:AKLT-XY}(b) we plot the correlation ratio $R$ as a function of $a_2$. In this case the curves from different sizes do cross each other. However, they do not collapse in the XY phase. A similar phenomenon was observed in Ref.~\cite{Komura:2010hz}, in which the correlation ratio is used to study the BKT transition of the generalized XY model. The non-merging suggests that for this BKT transition, the corrections to FSS are extremely large.
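When the crossing points drift logarithmically, as expected for a BKT transition, one common remedy is to extrapolate them linearly in $1/\ln L$. A minimal sketch with synthetic numbers (the drift amplitude and limiting value below are illustrative, not fitted data from this work):

```python
import numpy as np

def extrapolate_crossings_bkt(Ls, a_cross):
    """Extrapolate finite-size crossing points to L -> infinity assuming
    a logarithmic drift a_c(L) = a_c + b / ln(L), i.e. a linear fit in
    the variable x = 1 / ln(L)."""
    x = 1.0 / np.log(np.asarray(Ls, dtype=float))
    b, a_c = np.polyfit(x, a_cross, 1)   # a_cross ~ b * x + a_c
    return a_c

# Synthetic crossing points drifting toward a_c = 1.03.
Ls = np.array([64, 128, 256, 512])
a_cross = 1.03 + 0.4 / np.log(Ls)
```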
To better estimate the critical point, we plot the crossing point $a_{c,R}(L)$ of $R(L)$ and $R(2L)$ as a function of $1/\ln(L)$ in the inset of Fig.~\ref{fig:AKLT-XY}(b). We observe that all data fall nearly on a straight line. By linear fitting we find $a_c \approx 1.0(3)$, which is consistent with the phase boundary estimated in Ref.~\cite{Pomata:2018ep}. We also evaluate the spin-spin correlation function $C^x(r) = \langle S^x(0,0) S^x(r,r)\rangle$ near the phase boundary, as shown in Fig.~\ref{fig:AKLT-XY}(c). In the thermodynamic limit the spin-spin correlation function should decay as a power law $ r^{-\eta}$ in the XY phase, while it should decay as $ r^{-\eta} e^{-r/\xi}$ in the AKLT phase. Furthermore, it is hypothesized in Ref.~\cite{Pomata:2018ep} that $\eta=1/4$ at the AKLT-XY phase boundary. To better estimate $\eta$, we study a system with linear size $L=2048$ and a larger cut-off bond dimension in HOTRG, $D_{\text{cut}}=56$. Without assuming the precise location of the critical point, we fit the data to both decay forms mentioned above. We find that the second form always fits the data well. In contrast, the first form fits well only when the system is sufficiently inside the XY phase; when it does fit well, the fitted value of $\eta$ is very close to the one obtained with the second form. In Fig.~\ref{fig:AKLT-XY}(d) we plot $\eta$ obtained with the second form as a function of $a_2$. We observe that $\eta$ decreases almost linearly as one approaches the XY phase. Furthermore, we find that $\eta \approx 1/4$ at the estimated $a_c$, consistent with the finding in Ref.~\cite{Pomata:2018ep}. \section{Summary and Discussion\label{discussion}} In this work we present a tensor-network-based finite-size scaling analysis of the parameter-induced phase transitions of the deformed AKLT family of states on the 2D square lattice.
In particular, we use HOTRG to evaluate the moments, the correlation functions, and their dimensionless ratios on finite-size systems. We then use conventional FSS techniques to estimate the critical points and exponents. Two approaches are used. In the first approach, we estimate the critical point and exponents simultaneously via data collapse. In the second approach the critical point is first located by the crossing points of dimensionless quantities, and the finite-size scaling of various quantities at the estimated critical point is then used to estimate the corresponding exponents. For the AKLT-FM transition, we show that both the critical point and the critical exponents can be estimated accurately. Furthermore, the estimated critical exponents are highly consistent with the expected 2D Ising universality class. We also study the limiting case of $a_1=0$ and confirm that the AKLT-FM transition in this limit is not Ising-like. For the more elusive BKT-type AKLT-XY transition, we demonstrate that the crossing points of the correlation ratio can be used to locate the critical point with reasonable accuracy. We also estimate the critical exponent $\eta$ by fitting the spin-spin correlation function and show that $\eta$ becomes $1/4$ at the phase boundary, as expected from the theory proposed in Ref.~\cite{Pomata:2018ep}. Some comments are now in order. Typically the tensor network method is used to evaluate quantities in the thermodynamic limit, where spontaneous symmetry breaking can happen. However, contracting the 2D tensor network exactly is exponentially difficult, and a certain cut-off on the bond dimension has to be implemented to keep the calculation manageable. This effectively induces an upper limit on the correlation length. Consequently, near the critical point, where the correlation length diverges, the tensor network method may fail to capture the true critical behavior and instead show mean-field-like behavior \cite{Liu:2010dl}.
In this work we take a different approach. We use tensor network methods to evaluate physical quantities in a finite-size system. We look for system sizes that are large enough to show scaling behavior, but small enough that physical quantities evaluated via HOTRG remain sufficiently accurate. We then employ conventional FSS analysis to extract the critical point and the critical exponents. Our results show that this approach is feasible, opening new directions in using tensor network methods to study critical properties. To the best of our knowledge, tensor-network-based FSS analysis is not widely used in the literature. One of the reasons is that an efficient algorithm to calculate higher-order moments for 2D tensor networks was not proposed until recently \cite{Morita:2018gy}, while higher-order moments and their dimensionless ratios are essential quantities in conventional FSS analysis. By using the above-mentioned algorithm, we are able to perform FSS analysis based on moments, and accurate results are obtained. Furthermore, we also demonstrate that one can perform FSS analysis based on correlations at different distances and their dimensionless ratio. Since it is relatively straightforward to evaluate correlations using tensor network methods, it is very interesting to explore potential applications in this direction. Finally, we would like to point out potential generalizations in the future. Recently, several new renormalization schemes for tensor networks have been proposed, for example, the core-tensor RG \cite{Lan:2019et}, the anisotropic TRG \cite{Adachi:2019paf}, and the triad RGs \cite{Kadoh:2019tp}. The main motivation is to reduce the scaling of the algorithms in terms of the cut-off and physical dimensions, so that higher-dimensional calculations become feasible. It would be very interesting to use these schemes to perform tensor-network-based FSS on higher-dimensional systems.
Furthermore, due to the nature of the HOTRG, it is cumbersome to reach systems with linear sizes that are not a power of two. On the other hand, methods such as the core-tensor RG can reach any system size with the same effort. We expect that an even better FSS analysis can be achieved by accessing more sizes in the scaling regime. \begin{acknowledgments} This work was supported by the MOST of Taiwan under Grants No. 107-2112-M-007-018-MY3 and No. MOST108-2112-M-029-006-MY3. Pochung Chen thanks Kenji Harada for helpful discussions in using his Bayesian scaling analysis toolkit. The numerical calculation was done using the Uni10 tensor network library \cite{Kao:2015gb} (\url{https://uni10.gitlab.io/}). \end{acknowledgments}
\subsection{Data sets} As mentioned, we considered data sets from the UCI and LibSVM repositories \citep{UCI,libsvm}, as well as \dataset{Fashion-MNIST} from Zalando Research\footnote{\url{https://research.zalando.com/welcome/mission/research-projects/fashion-mnist/}}. We used data sets with size $3000 \leq N \leq 70000$ and dimension $d \leq 1000$. These relatively large data sets were chosen in order to provide meaningful bounds in the standard bagging setting, where individual trees are trained on $n=0.8N$ randomly subsampled points with replacement and the size of the overlap of out-of-bag sets is roughly $n/9$. An overview of the data sets is given in Table~\ref{tab:data_sets}. \begin{table}[t] \centering \caption{Data set overview. $c_{\min}$ and $c_{\max}$ denote the minimum and maximum class frequency.} \label{tab:data_sets} \input{experiments/tab_datasets} \end{table} For all experiments, we removed patterns with missing entries and made a stratified split of the data set. For data sets with a training and a test set (\dataset{SVMGuide1}, \dataset{Splice}, \dataset{Adult}, \dataset{w1a}, \dataset{MNIST}, \dataset{Shuttle}, \dataset{Pendigits}, \dataset{Protein}, \dataset{SatImage}, \dataset{USPS}) we combined the training and test sets and shuffled the entire set before splitting. \subsection{Standard random forests with optimized weights}\label{apx:bagging-optimized} This section contains numerical values and additional figures for the optimization experiments provided in the second experiment in the body (Figure~\ref{fig:opt_mvrisk_and_rho}). $\operatorname{FO}$ was optimized using Theorem~\ref{thm:lambdabound} and the alternating update rules of \citep{TIWS17}. For optimizing $\operatorname{TND}$, we used iRProp$^+$ \citep{igel:01e}, see Appendix~\ref{app:minimization}. We denote the weights after optimization of $\operatorname{FO}$ and $\operatorname{TND}$ by $\rho^*_{\operatorname{FO}}$ and $\rho^*_{\operatorname{TND}}$, respectively. 
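For reference, the empirical tandem loss underlying the $\operatorname{TND}$ bound, i.e.\ the $\rho^2$-weighted probability that two independently drawn trees err simultaneously, can be computed from a matrix of tree predictions as follows. This is a minimal sketch with illustrative array names; the bound computation itself is not shown.

```python
import numpy as np

def tandem_loss(preds, y, rho):
    """Empirical tandem loss of a rho-weighted ensemble:
    E_{(h, h') ~ rho x rho}[ 1{h(X) != Y} 1{h'(X) != Y} ].
    preds: (n_trees, n_samples) predicted labels, y: (n_samples,) labels,
    rho: (n_trees,) posterior weights summing to one."""
    err = (preds != y[None, :]).astype(float)   # per-tree error indicators
    E = err @ err.T / err.shape[1]              # pairwise joint error rates
    return float(rho @ E @ rho)
```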
Figures~\ref{fig:opt_bounds_bin} and \ref{fig:opt_bounds_mul} show the bounds before and after optimization for the binary and multiclass data sets, respectively. The $\operatorname{FO}$ bound achieves a higher reduction after minimization; however, as illustrated in both figures and in Figure~\ref{fig:opt_mvrisk_and_rho} in the body, this improvement comes at the cost of a considerable increase of the test loss $L(MV_{\rho^*_{\operatorname{FO}}},\testset)$. The latter happens because $\operatorname{FO}$ places most of the posterior mass on a few top classifiers and diminishes the power of the ensemble, see Figure~\ref{fig:example_rho}. The improvement of $\operatorname{TND}$ after minimization is more modest, but on the positive side it does not degrade the classifier. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_opt_bounds_bin.pdf} \caption{Comparison of the bounds before (non-dotted bars) and after (dotted bars) optimization for the binary data sets. The test risk is shown in black.} \label{fig:opt_bounds_bin} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_opt_bounds_mul.pdf} \caption{Comparison of the bounds before (non-dotted bars) and after (dotted bars) optimization for the multiclass data sets. The test risk is shown in black.} \label{fig:opt_bounds_mul} \end{figure} Table~\ref{tab:opt_mvrisk} shows the numerical values used in Figure~\ref{fig:opt_mvrisk}. \subsection{Weight optimization in standard vs.\ reduced bagging setting}\label{apx:reduced-optimized} We also considered weight optimization in the reduced bagging setting. Figure~\ref{fig:mvrisk_compare} compares the test risk using optimized weights in the full and the reduced bagging settings. For most data sets, the change to the test risk is similar to the change observed for uniformly weighted forests, with a few exceptions with slightly smaller or larger increases.
For $\operatorname{TND}$, we see the same tendency; Figure~\ref{fig:bound_compare} compares $\operatorname{TND}$ using uniform and optimized weights in the full and reduced bagging settings, and overall the change between full and reduced bagging is similar for uniform and optimized weights. As before, we also observe improvement of $\operatorname{TND}$ in most cases, as can be seen in Figures~\ref{fig:opt_bound_reduce_compare_bin} and \ref{fig:opt_bound_reduce_compare_mul}. \fi \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_opt_bounds_redcomp_bin.pdf} \caption{Comparison of the bounds computed for the random forest with optimized weights in the standard bagging (not dotted) and the reduced bagging (dotted) setting on binary data sets.} \label{fig:opt_bound_reduce_compare_bin} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_opt_bounds_redcomp_mul.pdf} \caption{Comparison of the bounds computed for the random forest with optimized weights in the standard bagging (not dotted) and the reduced bagging (dotted) setting on multiclass data sets.} \label{fig:opt_bound_reduce_compare_mul} \end{figure} \begin{figure}[ht] \centering \includegraphics{experiments/reduced/plot_mvrisk_compare.pdf} \caption{Median, 25\%, and 75\% quantiles of the ratio between the test risk in the reduced and full bagging settings with uniform and optimized weights $\rho^*_{\operatorname{TND}}$. 
Results on \dataset{Mushroom} and \dataset{Shuttle} are left out, as the test risk is 0 in some cases.} \label{fig:mvrisk_compare} \end{figure} \begin{figure}[ht] \centering \includegraphics{experiments/reduced/plot_bound_compare.pdf} \caption{Median, 25\%, and 75\% quantiles of the ratio between the $\operatorname{TND}$ bound in the reduced and full bagging settings with uniform and optimized weights $\rho^*_{\operatorname{TND}}$.} \label{fig:bound_compare} \end{figure} \subsection{Standard uniformly weighted random forests}\label{apx:rf-bagging} This section provides additional figures and numerical values of the bounds computed for the standard uniformly weighted random forest using bagging (Figure~\ref{fig:rf_example_bounds} in the body), as well as additional statistics for the experiments. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/uniform/plot_uniform_bin_bagging.pdf} \caption{Plot of the bounds for binary data sets with the standard uniformly weighted random forests. The test losses are depicted by black lines.} \label{fig:uniform_all_bin} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/uniform/plot_uniform_mul_bagging.pdf} \caption{Plot of the bounds for multiclass data sets with standard uniformly weighted random forests. The test losses are depicted by black lines.} \label{fig:uniform_all_mul} \end{figure} Figures~\ref{fig:uniform_all_bin} and \ref{fig:uniform_all_mul} plot the bounds obtained by the standard random forest for the binary and multiclass data sets respectively. Table~\ref{tab:rf_bagging_bounds} reports the means and standard deviations for all data sets. Additional information (randomized loss, tandem loss, etc.) is reported in Table~\ref{tab:rf_bagging_info}. $\operatorname{TND}$ is tightest for 2 out of 7 binary data sets and 3 out of 10 multiclass data sets, while $\operatorname{FO}$ is tightest for the rest. 
Figure~\ref{fig:ratiofDisagreement} plots the ratio between the empirical disagreement $\E_{\rho^2}[\hat \mathbb{D}(h,h',S_h\cap S_{h'})]$ and the empirical randomized loss $\E_\rho[\hat L(h,S_h)]$ versus the ratio between the TND and FO bounds. This figure shows that the $\operatorname{TND}$ bound tends to be tighter than $\operatorname{FO}$ when the disagreement is large in relation to the randomized loss. Since the amounts of data $|S_h \cap S_{h'}|$ available for estimation of the tandem losses are considerably smaller than the amounts of data $|S_h|$ available for estimation of the first order losses, the empirical disagreement has to be considerably larger than the empirical loss for $\operatorname{TND}$ to gain an advantage over $\operatorname{FO}$. This is in agreement with the discussion provided in Sections \ref{sec:FOvsSO} and \ref{sec:fast-vs-slow}. Comparing $\operatorname{TND}$ to the other second order bounds, we see that $\operatorname{TND}$ is tighter (or almost as tight) in all cases, except for \dataset{Mushroom}, where $\operatorname{C1}$ is tighter. This is due to $\operatorname{C1}$ being given in terms of an upper bound on $\E_\rho[L(h)]$ and a lower bound on $\E_{\rho^2}[\mathbb{D}(h,h')]$. With the lower bound being almost zero, we have $\operatorname{C1} \approx 2\operatorname{FO}$ and, since the disagreement is very low, $\operatorname{TND} \approx 4\operatorname{FO}$. We note that even though $\operatorname{C1}$ is tighter than $\operatorname{TND}$ in this case, it is still much weaker than $\operatorname{FO}$, because, as discussed in Section~\ref{sec:FOvsSO}, problems with low disagreement are not well-suited for second order bounds.
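At the oracle level, the binary-case relation between the two bounds follows directly from the identity $\E_{\rho^2}[L(h,h')] = \E_\rho[L(h)] - \frac{1}{2}\E_{\rho^2}[\mathbb{D}(h,h')]$ (Lemma~\ref{lem:L2D}); a minimal numeric sketch (illustrative code, names are ours):

```python
def fo_oracle(gibbs_loss):
    """First order oracle bound: L(MV) <= 2 E_rho[L(h)]."""
    return 2.0 * gibbs_loss

def tnd_oracle_binary(gibbs_loss, disagreement):
    """Second order oracle bound in binary classification, using
    E_rho2[L(h,h')] = E_rho[L(h)] - E_rho2[D(h,h')] / 2."""
    tandem = gibbs_loss - disagreement / 2.0
    return 4.0 * tandem

# The oracle TND bound is tighter exactly when the disagreement
# exceeds the Gibbs loss; they coincide when the two are equal.
```

Empirically, as discussed above, the disagreement has to be considerably larger than the loss, because the tandem losses are estimated on the smaller overlaps $S_h\cap S_{h'}$.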
\begin{figure}[ht] \centering \includegraphics[width=0.75\linewidth]{experiments/uniform/scatter} \caption{Ratio between the empirical disagreement $\E_{\rho^2}[\hat \mathbb{D}(h,h',S_h\cap S_{h'})]$ and the empirical randomized loss $\E_\rho[\hat L(h,S_h)]$ versus the ratio between the TND bound and the FO bound. The data sets Mushroom, Shuttle, and Protein are excluded: the first two because the randomized loss is extremely small, and the third because the bounds exceed 1.} \label{fig:ratiofDisagreement} \end{figure} \subsection{Random forests with reduced bagging vs.\ full bagging with uniform and optimized weights}\label{apx:rf-reduced} The $\operatorname{TND}$ bound depends on the size of overlaps $S_h\cap S_{h'}$, which are used to estimate the tandem losses and define the denominator of the bound. In order to ensure that the overlaps $S_h\cap S_{h'}$ are not too small, it might be beneficial to generate splits with $|S_h|$ of at least $(2/3)n$, so that $|S_h\cap S_{h'}|$ is at least $n/3$. In our application to random forests we reduce the number of sampled points in bagging from $n$ to $n/2$, which increases the number of out-of-bag samples $|S_h|$ from roughly $n/3$ to roughly $(2/3)n$ and the overlaps from roughly $n/9$ to $n/3$. We show that the corresponding decrease in $|T_h|$ leads to a relatively small decrease in the prediction quality of individual trees and improves the bounds. We call the bagging procedure that samples $n$ points with replacement \emph{standard bagging} or \emph{full bagging} and the procedure that samples $n/2$ points \emph{reduced bagging}. This section presents results for random forests trained with \emph{reduced bagging}, including comparisons to the full bagging setting. Figure~\ref{fig:mvrisk_compare} compares the test risk in the full bagging and the reduced bagging settings with uniform and optimized weights.
For both uniform and optimized weights, we see a limited increase (and in a few cases even a small decrease) in test risk when reducing the amount of data sampled in bagging, indicating that reduced bagging has a relatively minor impact on the quality of the ensemble. At the same time, Figures~\ref{fig:bound_compare}, \ref{fig:bounds_compare_binary}, \ref{fig:bounds_compare_multi}, \ref{fig:opt_bound_reduce_compare_bin}, and \ref{fig:opt_bound_reduce_compare_mul} show that the bounds are improved in most cases, sometimes considerably. Table~\ref{tab:rf_sample_bounds} reports the means and standard deviations for all data sets. Additional information (randomized loss, tandem loss, etc.) is reported in Table~\ref{tab:rf_reduced_info}. Table~\ref{tab:opt_mvrisk_reduced} reports the performance of the final majority vote with and without optimized weights. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/uniform/plot_uniform_bin_compare.pdf} \caption{Comparison of the bounds in the full (not dotted) and reduced bagging (dotted) setting with uniform weighting for binary data sets. The test risk is shown in black. } \label{fig:bounds_compare_binary} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/uniform/plot_uniform_mul_compare.pdf} \caption{Comparison of the bounds in the full (not dotted) and reduced bagging (dotted) setting with uniform weighting for multiclass data sets. The test risk is shown in black. } \label{fig:bounds_compare_multi} \end{figure} \subsection{$\operatorname{DIS}$ bound vs.\ $\operatorname{TND}$ bound in the presence of unlabeled data}\label{apx:unlabeled} In this section we compare the tightness of the $\operatorname{TND}$ and $\operatorname{DIS}$ bounds in a setting where a large amount of unlabeled data is available. We considered the largest binary data sets ($N>8000$) from Table~\ref{tab:data_sets}.
As in the previous setting, 20\% of the data, $\testset$, was reserved for testing. The remaining 80\% were split, with a fraction $r\in[0,1]$ of the patterns used as the labeled training set $S$, and a fraction $(1-r)$ set aside as unlabeled patterns, $S_u$. Forests with 100 trees were trained with bagging, using the Gini criterion for splitting and considering $\sqrt{d}$ features in each split. We considered values of $r\in \{0.05,0.1,\dots,0.5\}$. For each value of $r$, we repeated the experiment 20 times. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{experiments/unlabeled/plot_dis_bound.pdf} \caption{Test risk, $\operatorname{FO}$, $\operatorname{TND}$ and $\operatorname{DIS}$ bounds as a function of the fraction $r$ of labeled points. Means and standard deviations over 20 runs are reported.} \label{fig:rf_unlabeled} \end{figure} Figure~\ref{fig:rf_unlabeled} plots the test risk and $\operatorname{FO}$, $\operatorname{TND}$ and $\operatorname{DIS}$ bounds as a function of $r$. For each data set, the mean and standard deviation over 20 runs are plotted. In agreement with the discussion in Section~\ref{sec:fast-vs-slow}, $\operatorname{DIS}$ had the largest advantage over $\operatorname{TND}$ when the amount of unlabeled data relative to labeled data was the largest. As the amount of unlabeled data relative to labeled data decreased, the difference between the bounds became smaller, with $\operatorname{TND}$ eventually overtaking $\operatorname{DIS}$ in most cases. \section{Introduction} Weighted majority vote is a fundamental technique for combining predictions of multiple classifiers. In machine learning, it was proposed for neural networks by \citet{hansen:90} and became popular with the works of \citet{Bre96,Bre01} on bagging and random forests and the work of \citet{FS96} on boosting. \citet{Zhu15} surveys the subsequent development of the field.
Weighted majority vote is now part of the winning strategies in many machine learning competitions \citep[e.g.,][]{chen2016xgboost,hoch2015ensemble,puurula2014kaggle,stallkamp:12}. Its power lies in the cancellation of errors effect \citep{eckhardt:85}: when individual classifiers perform better than a random guess and make independent errors, the errors average out and the majority vote tends to outperform the individual classifiers. A central question in the design of a weighted majority vote is the assignment of weights to individual classifiers. This question was resolved by \citet{BK16} under the assumptions that the expected error rates of the classifiers are known and their errors are independent. However, neither of the two assumptions is typically satisfied in practice. When the expected error rates are estimated based on a sample, the common way of bounding the expected error of a weighted majority vote is by twice the error of the corresponding randomized classifier \citep{LST02}. A randomized classifier, a.k.a.\ Gibbs classifier, associated with a distribution (weights) $\rho$ over classifiers draws a single classifier at random at each prediction round according to $\rho$ and applies it to make the prediction. The error rate of the randomized classifier is bounded using PAC-Bayesian analysis \citep{McA98,See02,LST02}. We call this a \emph{first order bound}. The factor 2 bound on the gap between the error of the weighted majority vote and the corresponding randomized classifier follows from the observation that an error by the weighted majority vote implies an error by at least a weighted half of the base classifiers. The bound is derived using Markov's inequality. 
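The factor-2 relation between the majority vote and the randomized classifier is easy to check by simulation (an illustrative sketch with synthetic, independent voter errors; all names and parameter values are ours):

```python
import random

def mv_vs_gibbs(num_hyp=5, num_samples=2000, err=0.3, seed=1):
    """Synthetic check of the factor-2 bound: num_hyp voters err
    independently with probability err; the uniformly weighted
    majority vote errs when a strict majority of them err."""
    rng = random.Random(seed)
    mv_errors = 0
    total_errors = 0
    for _ in range(num_samples):
        errs = [rng.random() < err for _ in range(num_hyp)]
        total_errors += sum(errs)
        if 2 * sum(errs) > num_hyp:  # strict majority errs (num_hyp odd)
            mv_errors += 1
    mv_loss = mv_errors / num_samples
    gibbs_loss = total_errors / (num_samples * num_hyp)
    return mv_loss, gibbs_loss
```

With independent errors the majority-vote loss is far below twice the randomized (Gibbs) loss, which is exactly the slack the factor-2 bound cannot capture.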
While the PAC-Bayesian bounds for the randomized classifier are remarkably tight \citep{GLLM09,TIWS17}, the factor 2 gap is only tight in the worst case, but loose in most real-life situations, where the weighted majority vote typically performs better than the randomized classifier rather than twice as badly. The reason for the looseness is that the approach does not take the correlation of errors into account. In order to address the weakness of the first order bound, \citet{LLM+07} have proposed PAC-Bayesian C-bounds, which are based on Chebyshev-Cantelli inequality (a.k.a.\ one-sided Chebyshev's inequality) and take correlations into account. The idea was further developed by \citet{LMR11}, \citet{GLL+15}, and \citet{LMRR17}. However, the C-bounds have two severe limitations: (1) They are defined in terms of classification margin and the second moment of the margin is in the denominator of the bound. The second moment is difficult to estimate from data and significantly weakens the tightness of the bounds \citep{LIS19}. (2) The C-bounds are difficult to optimize. \citet{GLL+15} were only able to minimize the bounds in a highly restrictive case of self-complemented sets of voters and aligned priors and posteriors. In binary classification a set of voters is self-complemented if for any hypothesis $h\in\cal{H}$ the mirror hypothesis $-h$, which always predicts the opposite label to the one predicted by $h$, is also in $\cal{H}$. A posterior $\rho$ is aligned on a prior $\pi$ if $\rho(h) + \rho(-h) = \pi(h) + \pi(-h)$ for all $h\in\cal{H}$. Obviously, not every hypothesis space is self-complemented, and such sets can only be defined in binary, not multiclass, classification. Furthermore, the alignment requirement only allows shifting the posterior mass within the mirror pairs $(h,-h)$, not across pairs. If both $h$ and $-h$ are poor classifiers and their joint prior mass is high, there is no way to remedy this in the posterior.
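The alignment constraint can be made concrete with a toy example (illustrative code only; the four-voter setup and all names are ours):

```python
def is_aligned(rho, pi, mirror_pairs):
    """A posterior rho is aligned on a prior pi if, for every mirror
    pair (h, -h), rho(h) + rho(-h) equals pi(h) + pi(-h)."""
    return all(abs(rho[a] + rho[b] - (pi[a] + pi[b])) < 1e-12
               for a, b in mirror_pairs)

# Four voters: indices (0, 1) form one mirror pair, (2, 3) the other.
pairs = [(0, 1), (2, 3)]
pi = {0: 0.4, 1: 0.4, 2: 0.1, 3: 0.1}          # pair (0,1) has prior mass 0.8
rho_within = {0: 0.8, 1: 0.0, 2: 0.2, 3: 0.0}  # shifts mass only within pairs
rho_across = {0: 0.1, 1: 0.1, 2: 0.4, 3: 0.4}  # moves mass across pairs
```

Even if both voters of the pair $(0,1)$ are poor, any aligned posterior must keep $0.8$ of the mass on that pair, as `rho_within` does; `rho_across`, which moves mass to the better pair, violates alignment.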
\citet{LIS19} have shown that for standard random forests applied to several UCI datasets the first order bound is typically tighter than the various forms of C-bounds proposed by \citet{GLL+15}. However, the first order approach has its own limitations. While it is possible to minimize the bound \citep{TIWS17}, it ignores the correlation of errors and minimization of the bound concentrates the weight on a few top classifiers and reduces the power of the ensemble. Our experiments show that minimization of the first order bound typically leads to deterioration of the test error. We propose a novel analysis of the risk of weighted majority vote in multiclass classification, which addresses the weaknesses of previous methods. The new analysis is based on a second order Markov's inequality, $\P[Z \geq \varepsilon] \leq \E[Z^2]/\varepsilon^2$, which can be seen as a relaxation of the Chebyshev-Cantelli inequality. We use the inequality to bound the expected loss of weighted majority vote by four times the expected \emph{tandem loss} of the corresponding randomized classifier: The tandem loss measures the probability that two hypotheses drawn independently by the randomized classifier simultaneously err on a sample. Hence, it takes correlation of errors into account. We then use PAC-Bayesian analysis to bound the expected tandem loss in terms of its empirical counterpart and provide a procedure for minimizing the bound and optimizing the weighting. We show that the bound is reasonably tight and that, in contrast to the first order bound, minimization of the bound typically does not deteriorate the performance of the majority vote on new data. We also present a specialized version of the bound for binary classification, which takes advantage of unlabeled data. It expresses the expected tandem loss in terms of a difference between the expected loss and half the expected disagreement between pairs of hypotheses. 
In the binary case the disagreements do not depend on the labels and can be estimated from unlabeled data, whereas the loss of a randomized classifier is a first order quantity, which is easier to estimate than the tandem loss. We note, however, that the specialized version only gives advantage over the general one when the amount of unlabeled data is considerably larger than the amount of labeled data. \section{General problem setup} \label{sec:generalsetup} \myparagraph{Multiclass classification} Let $S=\{(X_1,Y_1),\ldots,(X_n,Y_n)\}$ be an independent identically distributed sample from $\cal{X}\times\cal{Y}$, drawn according to an unknown distribution $D$, where $\cal{Y}$ is finite and $\cal{X}$ is arbitrary. A hypothesis is a function $h : \cal{X} \rightarrow \cal{Y}$, and $\cal H$ denotes a space of hypotheses. We evaluate the quality of a hypothesis $h$ by the 0-1 loss $\ell(h(X),Y)=\1[h(X)\neq Y]$, where $\1[\cdot]$ is the indicator function. The expected loss of $h$ is denoted by $L(h) = \mathbb{E}_{(X,Y)\sim D} [\ell(h(X),Y)]$ and the empirical loss of $h$ on a sample $S$ of size $n$ is denoted by $\hat{L}(h,S) = \frac{1}{n} \sum_{i=1}^n \ell (h(X_i),Y_i)$. \myparagraph{Randomized classifiers} A \emph{randomized classifier} (a.k.a.\ Gibbs classifier) associated with a distribution $\rho$ on $\cal{H}$, for each input $X$ randomly draws a hypothesis $h\in{\cal H}$ according to $\rho$ and predicts $h(X)$. The expected loss of a randomized classifier is given by $\mathbb{E}_{h\sim \rho} [L(h)]$ and the empirical loss by $\mathbb{E}_{h\sim\rho}[\hat{L}(h,S)]$. To simplify the notation we use $\E_D[\cdot]$ as a shorthand for $\E_{(X,Y)\sim D}[\cdot]$ and $\E_\rho[\cdot]$ as a shorthand for $\E_{h\sim \rho}[\cdot]$. \myparagraph{Ensemble classifiers and majority vote} Ensemble classifiers predict by taking a weighted aggregation of predictions by hypotheses from ${\cal H}$. 
The $\rho$-weighted majority vote $\MV_\rho$ predicts $\MV_\rho (X)= \arg\max_{y\in{\cal Y}} \E_\rho[\1[h(X) = y]] $, where ties can be resolved arbitrarily. If majority vote makes an error, we know that at least a $\rho$-weighted half of the classifiers have made an error and, therefore, $\ell(\MV_\rho(X),Y) \leq \1[\E_\rho[\1[h(X)\neq Y]] \geq 0.5]$. This observation leads to the well-known first order oracle bound for the loss of weighted majority vote. \begin{theorem}[First Order Oracle Bound] \label{thm:first-order} \[ L(\MV_\rho)\leq 2\E_\rho[L(h)]. \] \end{theorem} \begin{proof} We have $L(\MV_\rho) = \E_D[\ell(\MV_\rho(X),Y)] \leq \P[\E_\rho[\1[h(X)\neq Y]] \geq 0.5]$. By applying Markov's inequality to random variable $Z = \E_\rho[\1[h(X)\neq Y]]$ we have: \begin{equation*} L(\MV_\rho) \leq \P[\E_\rho[\1[h(X)\neq Y]] \geq 0.5] \leq 2\E_D[\E_\rho[\1[h(X)\neq Y]]] = 2\E_\rho[L(h)]. \qedhere \end{equation*} \end{proof} PAC-Bayesian analysis can be used to bound $\E_\rho[L(h)]$ in Theorem~\ref{thm:first-order} in terms of $\E_\rho[\hat L(h,S)]$, thus turning the oracle bound into an empirical one. The disadvantage of the first order approach is that $\E_\rho[L(h)]$ ignores correlations of predictions, which is the main power of the majority vote. \section{New second order oracle bounds for the majority vote} The key novelty of our approach is using a second order Markov's inequality: for a non-negative random variable $Z$ and $\varepsilon > 0$, we have $\P[Z \geq \varepsilon] = \P[Z^2 \geq \varepsilon^2] \leq \varepsilon^{-2}\E[Z^2]$. We define the \emph{tandem loss} of two hypotheses $h$ and $h'$ on a sample $(X,Y)$ by $\ell(h(X),h'(X),Y) = \1[h(X)\neq Y \wedge h'(X)\neq Y]$. (\citet{LLM+07} and \citet{GLL+15} use the term joint error for this quantity.) The tandem loss counts an error on a sample $(X,Y)$ only if both $h$ and $h'$ err on it. The \emph{expected tandem loss} is defined by \[ L(h,h') = \E_D[\1[h(X)\neq Y \wedge h'(X)\neq Y]]. 
\] The following lemma, given as equation (7) by \citet{LLM+07} without a proof, relates the expectation of the second moment of the standard loss to the expected tandem loss. We use $\rho^2$ as a shorthand for the product distribution $\rho \times \rho$ over ${\cal H} \times {\cal H}$ and the shorthand $\E_{\rho^2}[L(h,h')] = \E_{h\sim\rho, h'\sim\rho}[L(h,h')]$. \begin{lemma}In multiclass classification \label{lem:second-monent} \[ \E_D[\E_\rho[\1[h(X) \neq Y]]^2] = \E_{\rho^2}[L(h,h')]. \] \end{lemma} A proof is provided in Appendix~\ref{app:L2D}. A combination of second order Markov's inequality with Lemma~\ref{lem:second-monent} leads to the following result. \begin{theorem}[Second Order Oracle Bound] \label{thm:MV-bound} In multiclass classification \begin{equation} \label{eq:MV-bound} L(\MV_\rho) \leq 4\E_{\rho^2}[L(h,h')]. \end{equation} \end{theorem} \begin{proof} By second order Markov's inequality applied to $Z = \E_\rho[\1[h(X)\neq Y]]$ and Lemma~\ref{lem:second-monent}: \begin{equation*} L(\MV_\rho) \leq \P[\E_\rho[\1[h(X) \neq Y]] \geq 0.5] \leq 4\E_D[\E_\rho[\1[h(X)\neq Y]]^2] = 4\E_{\rho^2}[L(h,h')]. \qedhere \end{equation*} \end{proof} \subsection{A specialized bound for binary classification} We provide an alternative form of Theorem~\ref{thm:MV-bound}, which can be used to exploit unlabeled data in binary classification. We denote the \emph{expected disagreement} between hypotheses $h$ and $h'$ by $\mathbb{D}(h,h') = \E_D[\1[h(X)\neq h'(X)]]$ and express the tandem loss in terms of standard loss and disagreement. (The lemma is given as equation (8) by \citet{LLM+07} without a proof.) \begin{lemma} \label{lem:L2D} In binary classification \[ \E_{\rho^2}[L(h,h')] = \E_\rho[L(h)] - \frac{1}{2}\E_{\rho^2}[\mathbb{D}(h,h')]. \] \end{lemma} A proof of the lemma is provided in Appendix~\ref{app:L2D}. The lemma leads to the following result. 
\begin{theorem}[Second Order Oracle Bound for Binary Classification] \label{thm:MV-bound-binary} In binary classification \begin{equation} \label{eq:MV-binary} L(\MV_\rho) \leq 4\E_\rho[L(h)] - 2\E_{\rho^2}[\mathbb{D}(h,h')]. \end{equation} \end{theorem} \begin{proof} The theorem follows by plugging the result of Lemma~\ref{lem:L2D} into Theorem~\ref{thm:MV-bound}. \end{proof} The advantage of the alternative way of writing the bound is the possibility of using unlabeled data for estimation of $\mathbb{D}(h,h')$ in binary prediction (see also \citealp{GLL+15}). We note, however, that estimation of $\E_{\rho^2}[\mathbb{D}(h,h')]$ has a slow convergence rate, as opposed to $\E_{\rho^2}[L(h,h')]$, which has a fast convergence rate. We discuss this point in Section~\ref{sec:fast-vs-slow}. \subsection{Comparison with the first order oracle bound} \label{sec:FOvsSO} From Theorems~\ref{thm:first-order} and \ref{thm:MV-bound-binary} we see that in binary classification the second order bound is tighter when $\E_{\rho^2}[\mathbb{D}(h,h')] > \E_\rho[L(h)]$. Below we provide a more detailed comparison of Theorems~\ref{thm:first-order} and \ref{thm:MV-bound} in the worst, the best, and the independent cases. The comparison only concerns the oracle bounds, whereas estimation of the oracle quantities, $\E_\rho[L(h)]$ and $\E_{\rho^2}[L(h,h')]$, is discussed in Section~\ref{sec:fast-vs-slow}. \myparagraph{The worst case} Since $\E_{\rho^2}[L(h,h')] \leq \E_\rho[L(h)]$ the second order bound is at most a factor of two larger than the first order bound. The worst case happens, for example, if all hypotheses in $\cal{H}$ give identical predictions. Then $\E_{\rho^2}[L(h,h')] = \E_\rho[L(h)] = L(\MV_\rho)$ for all $\rho$. \myparagraph{The best case} Imagine that $\cal{H}$ consists of $M\geq 3$ hypotheses, such that each hypothesis errs on $1/M$ of the sample space (according to the distribution $D$) and that the error regions are disjoint.
Then $L(h) = 1/M$ for all $h$ and $L(h,h') = 0$ for all $h\neq h'$ and $L(h,h)=1/M$. For a uniform distribution $\rho$ on $\cal{H}$ the first order bound is $2\E_\rho[L(h)] = 2/M$ and the second order bound is $4\E_{\rho^2}[L(h,h')] = 4/M^2$ and $L(\MV_\rho)=0$. In this case the second order bound is an order of magnitude tighter than the first order. \myparagraph{The independent case} Assume that all hypotheses in $\cal{H}$ make independent errors and have the same error rate, $L(h) = L(h')$ for all $h$ and $h'$. Then for $h\neq h'$ we have $L(h,h') = \E_D[\1[h(X)\neq Y \wedge h'(X)\neq Y]] = \E_D[\1[h(X)\neq Y]\1[h'(X)\neq Y]] = \E_D[\1[h(X)\neq Y]]\E_D[\1[h'(X)\neq Y]] = L(h)^2$ and $L(h,h)=L(h)$. For a uniform distribution $\rho$ the second order bound is $4\E_{\rho^2}[L(h,h')] = 4(L(h)^2 + \frac{1}{M}L(h)(1-L(h)))$ and the first order bound is $2\E_{\rho}[L(h)] = 2L(h)$. Assuming that $M$ is large, so that we can ignore the second term in the second order bound, we obtain that it is tighter for $L(h) < 1/2$ and looser otherwise. The former is the interesting regime, especially in binary classification. In Appendix~\ref{app:alternative} we give additional intuition about Theorems~\ref{thm:first-order} and \ref{thm:MV-bound} by providing an alternative derivation. \subsection{Comparison with the oracle C-bound} The oracle C-bound is an alternative second order bound based on Chebyshev-Cantelli inequality (Theorem~\ref{thm:Chebyshev-Cantelli} in the appendix). It was first derived for binary classification by \citet[Theorem 2]{LLM+07} and several alternative forms were proposed by \citet[Theorem 11]{GLL+15}. \citet[Corollary 1]{LMRR17} extended the result to multiclass classification. To facilitate the comparison with our results we write the bound in terms of the tandem loss. 
In Appendix~\ref{app:C-bound-proof} we provide a direct derivation of Theorem~\ref{thm:C-bound} from Chebyshev-Cantelli inequality and in Appendix~\ref{app:equivalence} we show that it is equivalent to prior forms of the oracle C-bound. \begin{theorem}[C-tandem Oracle Bound] If $\E_\rho[L(h)] < 1/2$, then \[ L(\MV_\rho) \leq \frac{\E_{\rho^2}[L(h,h')] - \E_\rho[L(h)]^2}{\E_{\rho^2}[L(h,h')] - \E_\rho[L(h)] + \frac{1}{4}}. \] \label{thm:C-bound} \end{theorem} The theorem is essentially identical to the first form of oracle C-bound by \citet[Theorem 2]{LLM+07} and, as we show, it holds for multiclass classification. In Appendix~\ref{app:Chebyshev-Cantelli} we show that the second order Markov's inequality behind Theorem~\ref{thm:MV-bound} is a relaxation of Chebyshev-Cantelli inequality. Therefore, the oracle C-bound is always at least as tight as the second order oracle bound in Theorem~\ref{thm:MV-bound}. In particular, \citeauthor{GLL+15} show that if the classifiers make independent errors and their error rates are identical and below 1/2, the oracle C-bound converges to zero with the growth of the number of classifiers, whereas, as we have shown above, the bound in Theorem~\ref{thm:MV-bound} only converges to $4L(h)^2$. However, the oracle C-bound has $\E_{\rho^2}[L(h,h')]$ and $\E_\rho[L(h)]$ in the denominator, which comes as a significant disadvantage in its estimation from data and minimization \citep{LIS19}, as we also show in our empirical evaluation. \section{Second order PAC-Bayesian bounds for the weighted majority vote}\label{sec:pacbayes} We apply PAC-Bayesian analysis to transform oracle bounds from the previous section into empirical bounds. The results are based on the following two theorems, where we use $\KL(\rho\|\pi)$ to denote the Kullback-Leibler divergence between distributions $\rho$ and $\pi$ and $\kl(p\|q)$ to denote the Kullback-Leibler divergence between two Bernoulli distributions with biases $p$ and $q$. 
\begin{theorem}[PAC-Bayes-kl Inequality, \citealp{See02}] For any probability distribution $\pi$ on ${\cal H}$ that is independent of $S$ and any $\delta \in (0,1)$, with probability at least $1-\delta$ over a random draw of a sample $S$, for all distributions $\rho$ on ${\cal H}$ simultaneously: \begin{equation} \label{eq:PBkl} \kl\lr{\E_\rho[\hat L(h,S)]\middle\|\E_\rho\lrs{L(h)}} \leq \frac{\KL(\rho\|\pi) + \ln(2 \sqrt n/\delta)}{n}. \end{equation} \label{thm:PBkl} \end{theorem} The next theorem provides a relaxation of the PAC-Bayes-kl inequality, which is more convenient for optimization. The upper bound is due to \citet{TIWS17} and the lower bound follows by an almost identical derivation, see Appendix~\ref{app:PBlambdaLower}. Both results are based on the refined Pinsker's lower bound for the kl-divergence. Since both the upper and the lower bound are deterministic relaxations of PAC-Bayes-kl, they hold simultaneously with no need to take a union bound over the two statements. \begin{theorem}[PAC-Bayes-$\lambda$ Inequality, \citealp{TIWS17}]\label{thm:lambdabound} For any probability distribution $\pi$ on ${\cal H}$ that is independent of $S$ and any $\delta \in (0,1)$, with probability at least $1-\delta$ over a random draw of a sample $S$, for all distributions $\rho$ on ${\cal H}$ and all $\lambda \in (0,2)$ and $\gamma > 0$ simultaneously: \begin{align} \E_\rho\lrs{L(h)} &\leq \frac{\E_\rho[\hat L(h,S)]}{1 - \frac{\lambda}{2}} + \frac{\KL(\rho\|\pi) + \ln(2 \sqrt n/\delta)}{\lambda\lr{1-\frac{\lambda}{2}}n},\label{eq:PBlambda}\\ \E_\rho\lrs{L(h)} &\geq \lr{1 - \frac{\gamma}{2}}\E_\rho[\hat L(h,S)] - \frac{\KL(\rho\|\pi) + \ln(2 \sqrt n/\delta)}{\gamma n}.\label{eq:PBlambda-lower} \end{align} \label{thm:PBlambda} \end{theorem} \subsection{A general bound for multiclass classification} We define the \emph{empirical tandem loss} \[ \hat L(h,h',S) = \frac{1}{n}\sum_{i=1}^n \1[h(X_i)\neq Y_i \wedge h'(X_i) \neq Y_i] \] and provide a bound on the expected loss 
of $\rho$-weighted majority vote in terms of the empirical tandem losses. \begin{theorem} \label{thm:tandem-lambda} For any probability distribution $\pi$ on $\cal{H}$ that is independent of $S$ and any $\delta\in(0,1)$, with probability at least $1-\delta$ over a random draw of $S$, for all distributions $\rho$ on $\cal{H}$ and all $\lambda\in(0,2)$ simultaneously: \[ L(\MV_\rho) \leq 4\lr{\frac{\E_{\rho^2}[\hat L(h,h',S)]}{1-\lambda/2} + \frac{2\KL(\rho\|\pi) + \ln(2\sqrt n/\delta)}{\lambda(1-\lambda/2)n}}. \] \end{theorem} \begin{proof} The theorem follows by using the bound in equation~\eqref{eq:PBlambda} to bound $\E_{\rho^2}[L(h,h')]$ in Theorem~\ref{thm:MV-bound}. We note that $\KL(\rho^2\|\pi^2) = 2\KL(\rho\|\pi)$ \citep[Page 814]{GLL+15}. \end{proof} It is also possible to use PAC-Bayes-kl to bound $\E_{\rho^2}[L(h,h')]$ in Theorem~\ref{thm:MV-bound}, which actually gives a tighter bound, but the bound in Theorem~\ref{thm:tandem-lambda} is more convenient for minimization. \citet{TS13} have shown that for a fixed $\rho$ the expression in Theorem~\ref{thm:tandem-lambda} is convex in $\lambda$ and has a closed-form minimizer. In Appendix~\ref{app:psd} we show that for fixed $\lambda$ and $S$ the bound is convex in $\rho$. Although in our applications $S$ is not fixed and the bound is not necessarily convex in $\rho$, a local minimum can still be reached efficiently by gradient descent. A bound minimization procedure is provided in Appendix~\ref{app:minimization}. \subsection{A specialized bound for binary classification} We define the \emph{empirical disagreement} \[ \hat \mathbb{D}(h,h',S') = \frac{1}{m} \sum_{i=1}^m \1[h(X_i)\neq h'(X_i)], \] where $S' = \lrc{X_1,\dots,X_m}$. The set $S'$ may overlap with the inputs $X_i$ of the labeled set $S$; moreover, $S'$ may include additional unlabeled data. The following theorem bounds the loss of the weighted majority vote in terms of empirical disagreements.
Due to the possibility of using unlabeled data for estimation of disagreements in the binary case, the theorem has the potential of yielding a tighter bound when a considerable amount of unlabeled data is available. \begin{theorem} \label{thm:disagreement} In binary classification, for any probability distribution $\pi$ on $\cal{H}$ that is independent of $S$ and $S'$ and any $\delta\in(0,1)$, with probability at least $1-\delta$ over a random draw of $S$ and $S'$, for all distributions $\rho$ on $\cal{H}$ and all $\lambda\in(0,2)$ and $\gamma > 0$ simultaneously: \begin{align*} L(\MV_\rho) &\leq 4\lr{\frac{\E_\rho[\hat L(h,S)]}{1-\lambda/2} + \frac{\KL(\rho\|\pi) + \ln(4\sqrt n/\delta)}{\lambda(1-\lambda/2)n}}\\ &\qquad - 2\lr{(1-\gamma/2) \E_{\rho^2}[\hat \mathbb{D}(h,h',S')] - \frac{2\KL(\rho\|\pi) + \ln(4\sqrt m/\delta)}{\gamma m}}. \end{align*} \end{theorem} \begin{proof} The theorem follows by using the upper bound in equation~\eqref{eq:PBlambda} to bound $\E_\rho[L(h)]$ and the lower bound in equation~\eqref{eq:PBlambda-lower} to bound $\E_{\rho^2}[\mathbb{D}(h,h')]$ in Theorem~\ref{thm:MV-bound-binary}. We replace $\delta$ by $\delta/2$ in the upper and lower bound and take a union bound over them. \end{proof} Using PAC-Bayes-kl to bound $\E_\rho[L(h)]$ and $\E_{\rho^2}[\mathbb{D}(h,h')]$ in Theorem~\ref{thm:MV-bound-binary} gives a tighter bound, but the bound in Theorem~\ref{thm:disagreement} is more convenient for minimization. The minimization procedure is provided in Appendix~\ref{app:minimization}. \subsection{Ensemble construction} \label{sec:validation} \citet{TIWS17} have proposed an elegant way of constructing finite data-dependent hypothesis spaces that work well with PAC-Bayesian bounds. The idea is to generate multiple splits of a data set $S$ into pairs of subsets $S = T_h \cup S_h$, such that $T_h \cap S_h = \varnothing$. A hypothesis $h$ is then trained on $T_h$ and $\hat L(h,S_h)$ provides an unbiased estimate of its loss.
The splits cannot depend on the data. Two examples of such splits are splits generated by cross-validation \citep{TIWS17} and splits generated by bagging in random forests, where out-of-bag (OOB) samples provide unbiased estimates of expected losses of individual trees \citep{LIS19}. It is possible to train multiple hypotheses with different parameters on each split, as is done in cross-validation. The resulting set of hypotheses forms an ensemble, and PAC-Bayesian bounds provide generalization bounds for a weighted majority vote of the ensemble and allow optimization of the weighting. There are two minor modifications required: the weighted empirical losses $\E_\rho[\hat L(h,S)]$ in the bounds are replaced by weighted validation losses $\E_\rho[\hat L(h,S_h)]$, and the sample size $n$ is replaced by the minimal validation set size $n_{\texttt{min}} = \min_h |S_h|$. It is possible to use any data-independent prior, with the uniform prior $\pi(h) = 1/|\cal{H}|$ being a natural choice in many cases \citep{TIWS17}. For pairs of hypotheses $(h,h')$ we use the overlaps of their validation sets $S_h\cap S_{h'}$ to calculate an unbiased estimate of their tandem loss, $\hat L(h,h',S_h\cap S_{h'})$, which replaces $\hat L(h,h',S)$ in the bounds. The sample size $n$ is then replaced by $n_{\texttt{min}} = \min_{h,h'} |S_h\cap S_{h'}|$. \subsection{Comparison of the empirical bounds} \label{sec:fast-vs-slow} We provide a high-level comparison of the empirical first order bound ($\operatorname{FO}$), the new empirical second order bound based on the tandem loss ($\operatorname{TND}$, Theorem~\ref{thm:tandem-lambda}), and the new empirical second order bound based on disagreements ($\operatorname{DIS}$, Theorem~\ref{thm:disagreement}). The two key quantities in the comparison are the sample size $n$ in the denominator of the bounds and the fast and slow convergence rates of the standard (first order) loss, the tandem loss, and the disagreements.
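As a concrete illustration of these estimated quantities, the following minimal sketch (in the spirit of the released Python code, but with purely hypothetical tree predictions and OOB masks) computes the validation losses $\hat L(h,S_h)$, the tandem losses on validation-set overlaps, and the two denominators $n_{\texttt{min}}$ entering the $\operatorname{FO}$ and $\operatorname{TND}$ bounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_trees = 200, 5
y = rng.integers(0, 2, size=n)                 # labels of S
preds = rng.integers(0, 2, size=(n_trees, n))  # hypothetical tree predictions on S
oob = rng.random((n_trees, n)) < 0.37          # hypothetical OOB masks: S_h per tree

# First order validation losses \hat L(h, S_h) and the FO denominator n_min
fo_loss = np.array([np.mean(preds[h, oob[h]] != y[oob[h]]) for h in range(n_trees)])
n_min_fo = min(oob[h].sum() for h in range(n_trees))

# Tandem losses \hat L(h, h', S_h ∩ S_{h'}) on overlaps, and the TND denominator
tnd_loss = np.zeros((n_trees, n_trees))
overlap_sizes = np.zeros((n_trees, n_trees), dtype=int)
for h in range(n_trees):
    for hp in range(n_trees):
        ov = oob[h] & oob[hp]                  # S_h ∩ S_{h'}
        overlap_sizes[h, hp] = ov.sum()
        both_err = (preds[h, ov] != y[ov]) & (preds[hp, ov] != y[ov])
        tnd_loss[h, hp] = both_err.mean()      # fraction where both trees err
n_min_tnd = overlap_sizes.min()
```

Since $S_h\cap S_{h'}\subseteq S_h$ for every pair, `n_min_tnd` can never exceed `n_min_fo`, which is exactly the estimation trade-off between the two bounds.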
\citet{TS13} have shown that if we optimize $\lambda$ for a given $\rho$, the PAC-Bayes-$\lambda$ bound in equation~\eqref{eq:PBlambda} can be written as \[ \E_\rho[L(h)] \leq \E_\rho[\hat L(h,S)] + \sqrt{\frac{2\E_\rho[\hat L(h,S)]\lr{\KL(\rho\|\pi) + \ln(2\sqrt{n}/\delta)}}{n}} + \frac{2\lr{\KL(\rho\|\pi) + \ln(2\sqrt{n}/\delta)}}{n}. \] This form of the bound, introduced by \citet{McA03}, is convenient for explaining fast and slow rates. If $\E_\rho[\hat L(h,S)]$ is large, then the middle term on the right hand side dominates the complexity and the bound decreases at the rate of $1/\sqrt{n}$, which is known as a \emph{slow rate}. If $\E_\rho[\hat L(h,S)]$ is small, then the last term dominates and the bound decreases at the rate of $1/n$, which is known as a \emph{fast rate}. \myparagraph{$\operatorname{FO}$ vs.\ $\operatorname{TND}$} The advantage of the $\operatorname{FO}$ bound is that the validation sets $S_h$ available for estimation of the first order losses $\hat L(h,S_h)$ are larger than the validation sets $S_h\cap S_{h'}$ available for estimation of the tandem losses. Therefore, the denominator $n_{\texttt{min}} = \min_h |S_h|$ in the $\operatorname{FO}$ bound is larger than the denominator $n_{\texttt{min}} = \min_{h,h'}|S_h\cap S_{h'}|$ in the $\operatorname{TND}$ bound. This disadvantage of $\operatorname{TND}$ can be reduced by using data splits with large validation sets $S_h$ and small training sets $T_h$, as long as the small training sets do not overly impact the quality of the base classifiers $h$. Another advantage of the $\operatorname{FO}$ bound is that its complexity term has $\KL(\rho\|\pi)$, whereas the $\operatorname{TND}$ bound has $2\KL(\rho\|\pi)$. The advantage of the $\operatorname{TND}$ bound is that $\E_{\rho^2}[L(h,h')] \leq \E_\rho[L(h)]$ and, therefore, the convergence rate of the tandem loss is typically faster than the convergence rate of the first order loss.
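The inequality $\E_{\rho^2}[L(h,h')] \leq \E_\rho[L(h)]$ can also be checked numerically: for a fixed input, the tandem loss under two independent draws $h,h'\sim\rho$ equals the square of the $\rho$-weighted error probability at that input, which is at most the probability itself. A minimal sketch with hypothetical predictions and a uniform $\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_trees = 500, 7
y = rng.integers(0, 2, size=n)
preds = rng.integers(0, 2, size=(n_trees, n))  # hypothetical classifier outputs

err = preds != y                               # per-hypothesis error indicators
rho = np.full(n_trees, 1.0 / n_trees)          # uniform posterior rho

# First order loss E_rho[L(h)]: average over inputs of the rho-weighted error rate
first_order = (rho @ err).mean()

# Tandem loss E_{rho^2}[L(h,h')]: average over inputs of that rate squared
tandem = ((rho @ err) ** 2).mean()

# z^2 <= z for z in [0,1], hence the tandem loss never exceeds the first order loss
assert tandem <= first_order
```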
The interplay of the estimation advantages and disadvantages, combined with the advantages and disadvantages of the underlying oracle bounds discussed in Section~\ref{sec:FOvsSO}, depends on the data and the hypothesis space. \myparagraph{$\operatorname{TND}$ vs.\ $\operatorname{DIS}$} The advantage of the $\operatorname{DIS}$ bound relative to the $\operatorname{TND}$ bound is that in the presence of a large amount of unlabeled data the disagreements $\mathbb{D}(h,h')$ can be tightly estimated (the denominator $m$ is large) and the estimation complexity is governed by the first order term, $\E_\rho[L(h)]$, which is ``easy'' to estimate, as discussed above. However, the $\operatorname{DIS}$ bound has two disadvantages. A minor one is its reliance on estimation of two quantities, $\E_\rho[L(h)]$ and $\E_{\rho^2}[\mathbb{D}(h,h')]$, which requires a union bound, e.g., replacement of $\delta$ by $\delta/2$. A more substantial one is that the disagreement term is desired to be large, and large quantities have a slow convergence rate. Since a slow convergence rate relates to a fast convergence rate as $1/\sqrt{n}$ to $1/n$, as a rule of thumb the $\operatorname{DIS}$ bound is expected to outperform $\operatorname{TND}$ only when the amount of unlabeled data is at least quadratic in the amount of labeled data, $m > n^2$. \section{Empirical evaluation} We studied the empirical performance of the bounds using standard random forests \citep{Bre01} on a subset of data sets from the UCI and LibSVM repositories \citep{UCI,libsvm}. An overview of the data sets is given in Table~\ref{tab:data_sets} in the appendix. The number of points varied from 3000 to 70000 with dimensions $d<1000$. For each data set we set aside 20\% of the data for a test set $\testset$ and used the remaining data, which we call $S$, for ensemble construction and computation of the bounds.
Forests with 100 trees were trained until leaves were pure, using the Gini criterion for splitting and considering $\sqrt{d}$ features in each split. We made 50 repetitions of each experiment and report the mean and standard deviation. In all our experiments $\pi$ was uniform and $\delta = 0.05$. We present two experiments: (1) a comparison of the tightness of the bounds applied to uniform weighting, and (2) a comparison of weighting optimization using the bounds. Additional experiments, where we explored the effect of using splits with increased validation and decreased training subsets, as suggested in Section~\ref{sec:fast-vs-slow}, and where we compared the $\operatorname{TND}$ and $\operatorname{DIS}$ bounds in the presence of unlabeled data, are described in Appendix~\ref{app:experiments}. The Python source code for replicating the experiments is available on GitHub\footnote{\url{https://github.com/StephanLorenzen/MajorityVoteBounds}}. \begin{figure}[t] \centering \includegraphics{experiments/uniform/plot_uniform_example_bagging.pdf} \caption{Test risk (black) and the bounds for a uniformly weighted random forest on a subset of binary (left) and multiclass (right) datasets. Plots for the remaining datasets are provided in Figures~\ref{fig:uniform_all_bin} and \ref{fig:uniform_all_mul} in the appendix. } \label{fig:rf_example_bounds} \end{figure} \myparagraph{Uniform weighting} In Figure~\ref{fig:rf_example_bounds} we compare the tightness of $\operatorname{FO}$, $\operatorname{C1}$ and $\operatorname{C2}$ (the two forms of the C-bound by \citealp{GLL+15}, see Appendix~\ref{app:equivalence} for the oracle forms), the C-tandem bound ($\operatorname{CTD}$, Theorem~\ref{thm:C-bound}), and $\operatorname{TND}$ applied to uniformly weighted random forests on a subset of data sets. The right three plots are multiclass datasets, where $\operatorname{C1}$ and $\operatorname{C2}$ are inapplicable.
The outcomes for the remaining datasets are reported in Figures~\ref{fig:uniform_all_bin} and \ref{fig:uniform_all_mul} in the appendix. Since no optimization was involved, we used PAC-Bayes-kl to bound $\E_\rho[L(h)]$, $\E_{\rho^2}[L(h,h')]$, and $\E_{\rho^2}[\mathbb{D}(h,h')]$ in the first and second order bounds, which is tighter than using PAC-Bayes-$\lambda$. The $\operatorname{TND}$ bound was the tightest for 5 out of 16 data sets, and provided better guarantees than the C-bounds for 4 out of 7 binary data sets. In most cases, the $\operatorname{FO}$ bound was the tightest. \myparagraph{Optimization of the weighting} We compared the loss on the test set $\testset$ and the tightness after using the bounds for optimizing the weighting $\rho$. As already discussed, the C-bounds are not suitable for optimization (see also \citealp{LIS19}) and are, therefore, excluded from the comparison. We used the PAC-Bayes-$\lambda$ form of the bounds for $\E_\rho[L(h)]$, $\E_{\rho^2}[L(h,h')]$, and $\E_{\rho^2}[\mathbb{D}(h,h')]$ to optimize $\rho$, and then used the PAC-Bayes-kl form of the bounds for computing the final bound with the optimized $\rho$. Optimization details are provided in Appendix~\ref{app:minimization}. \begin{figure}[t] \centering \begin{subfigure}{.49\linewidth} \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_mvrisk_bag.pdf} \caption{} \label{fig:opt_mvrisk} \end{subfigure}% \hfill \begin{subfigure}{.46\linewidth} \centering \includegraphics[width=\linewidth]{experiments/optimization/plot_rhos.pdf} \caption{} \label{fig:example_rho} \end{subfigure} \caption{(a) The median, 25\%, and 75\% quantiles of the ratio $\hat L(\MV_{\rho^*}, \testset) / \hat L(\MV_u,\testset)$ of the test loss of the majority vote with optimized weighting $\rho^*$ generated by $\operatorname{FO}$ and $\operatorname{TND}$. The plot is on a logarithmic scale.
Values above 1 represent degradation in performance on new data and values below 1 represent an improvement. (b) The optimized weights $\rho^*$ generated by $\operatorname{FO}$ and $\operatorname{TND}$.} \label{fig:opt_mvrisk_and_rho} \end{figure} Figure~\ref{fig:opt_mvrisk} compares the ratio of the loss of the majority vote with optimized weighting to the loss of the majority vote with uniform weighting on $\testset$ for $\rho^*$ found by minimization of $\operatorname{FO}$ and $\operatorname{TND}$. The numerical values are given in Table~\ref{tab:opt_mvrisk} in the appendix. While both bounds tightened with optimization, we observed that optimization of $\operatorname{FO}$ considerably weakened the performance on $\testset$ for all datasets, whereas optimization of $\operatorname{TND}$ did not have this effect and in some cases even improved the outcome. Figure~\ref{fig:example_rho} shows the optimized distributions for two sample data sets. It is clearly seen that $\operatorname{FO}$ placed all the weight on a few top trees, while $\operatorname{TND}$ hedged its bets across multiple trees. The two figures demonstrate that the new bound correctly handles interactions between voters, as opposed to $\operatorname{FO}$. \section{Discussion} We have presented a new analysis of the weighted majority vote, which provides a reasonably tight generalization guarantee and can be used to guide optimization of the weights. The analysis has been applied to random forests, where the bound can be computed using out-of-bag samples with no need for a dedicated hold-out validation set, thus making highly efficient use of the data. We have shown that, contrary to the commonly used first order bound, minimization of the new bound does not lead to deterioration of the test error, confirming that the analysis captures the cancellation of errors, which is at the core of the majority vote. \begin{ack} We thank Omar Rivasplata and the anonymous reviewers for their suggestions for manuscript improvements.
AM is funded by the Spanish Ministry of Science, Innovation and Universities under the projects TIN2016-77902-C3-3-P and PID2019-106758GB-C32, and by a Jose Castillejo scholarship CAS19/00279. SSL acknowledges funding by the Danish Ministry of Education and Science, Digital Pilot Hub and Skylab Digital. CI acknowledges support by the Villum Foundation through the project Deep Learning and Remote Sensing for Unlocking Global Ecosystem Resource Dynamics (DeReEco). YS acknowledges support by the Independent Research Fund Denmark, grant number 0135-00259B. \end{ack} \section*{Broader impact} Ensemble classifiers, in particular random forests, are among the most important tools in machine learning \citep{Delgado2014,Zhu15} and are very frequently applied in practice \citep[e.g.,][]{chen2016xgboost,hoch2015ensemble,puurula2014kaggle,stallkamp:12}. Our study provides generalization guarantees for random forests and a method for tuning the weights of individual trees within a forest, which can lead to even higher accuracies. The result is of high practical relevance. Given that machine learning models are increasingly used to make decisions that have a strong impact on society, industry, and individuals, it is important that we have a good theoretical understanding of the employed methods and are able to provide rigorous guarantees for their performance. Herein lies the strongest contribution of the line of research followed in our study, in which we derive rigorous bounds on the generalization error of random forests and other ensemble methods for multiclass classification.
\section{Introduction} Analysis of an outcome response to a counterfactual shift in the covariate distribution is of interest in policy studies. Such a counterfactual analysis requires accounting for the Oaxaca-Blinder decomposition of heterogeneous outcome distributions into structural heterogeneity ($F_{Y|X}$) and distributional heterogeneity ($F_X$); see \citet{fortin2011decomposition} for a review. To conduct a credible counterfactual analysis, it is crucial to control for a structure ($F_{Y|X}$) with rich information about $X$ while applying a counterfactual shift to the distribution of $X$. In this light, a researcher ideally wants to use the high-dimensional $X$ available in the data. Motivated by this feature of causal inference and the recently increasing availability of high-dimensional data, we develop a novel theory and method of estimation and inference for heterogeneous counterfactual effects with high-dimensional controls. The existing literature features a number of alternative approaches and frameworks of counterfactual analysis. Among others, we focus on the unconditional quantile partial effect \citep*[UQPE;][]{firpo2009unconditional} in the unconditional quantile regression based on the re-centered influence function (RIF) of \citet{firpo2009unconditional} for two reasons: (i) its advantage of providing ``a simple way of performing detailed decompositions'' \citep[][p. 76]{fortin2011decomposition} and (ii) its popularity.\footnote{As of February 17, 2022, \citet{firpo2009unconditional} had attracted 2275 Google Scholar citations.} This parameter measures the marginal effect of counterfactually shifting the distribution of a coordinate of $X$ on population quantiles of an outcome. The UQPE is defined with the conditional distribution $F_{Y|X}$ and the marginal distribution $F_X$. Let $X=(X_1,X_{-1})$ denote the status quo covariates, where $X_1$ is a scalar treatment variable of interest and $X_{-1}$ consists of controls.
We focus on the change from $X$ to $(X_1+\varepsilon,X_{-1})$ throughout this paper, while our analysis can be generalized to a change in any fixed direction. The counterfactual distribution of $Y$ after this change is $$ F_{Y}^\varepsilon(y)=\int F_{Y|X=(x_1+\varepsilon,x_{-1})}(y)dF_X(x). $$ The UQPE with respect to the first coordinate, $X_1$, of $X$ is defined by \begin{equation}\label{eq:parameterofinterest} UQPE(\tau)=\left.\frac{\partial Q_\tau(F_{Y}^\varepsilon)}{\partial\varepsilon}\right|_{\varepsilon=0}, \end{equation} where $Q_\tau$ is the $\tau$-th quantile operator. The UQPE measures the change in the outcome quantile when the distribution of $X$ changes infinitesimally in the direction of the first coordinate. Under the assumption of conditional exogeneity, as in \citet[Section 2.3]{chernozhukov2013inference}, the UQPE can be interpreted as the causal effect of changing the distribution of $X$ infinitesimally. Without such an assumption, the UQPE may still be of interest as a summary statistic of the counterfactual distributional relationship between $Y$ and $X_1$. To fix ideas, suppose that a policy maker is interested in analyzing the counterfactual effects of extending the duration $X_1$ of an exposure to the Job Corps training program on the outcome $Y$ of hourly wages, controlling for a large number of demographic, socioeconomic, and behavioral attributes $X_{-1}$. The median of the actual distribution of hourly wages is given by $Q_{0.5}(F_Y^0)$. On the other hand, if the exposures are extended by $\varepsilon$ days for every participant, then the median of the counterfactual distribution of hourly wages becomes $Q_{0.5}(F_Y^\varepsilon)$. In this case, $Q_{0.5}(F_Y^\varepsilon) - Q_{0.5}(F_Y^0)$ measures the counterfactual effect on the median of the wage distribution, and $UQPE(0.5) = \lim_{\varepsilon \rightarrow 0} \big(Q_{0.5}(F_Y^\varepsilon) - Q_{0.5}(F_Y^0)\big)/\varepsilon$ measures its marginal effect.
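As a toy numerical illustration of definition \eqref{eq:parameterofinterest}, consider a hypothetical location-shift model $Y = X_1 + X_{-1} + U$ (not from the paper): shifting $X_1$ by $\varepsilon$ shifts every outcome quantile by exactly $\varepsilon$, so $UQPE(\tau)=1$ for all $\tau$. A finite-difference sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps, tau = 200_000, 0.01, 0.5

# Hypothetical data generating process: Y = X1 + X2 + noise
x1 = rng.normal(size=N)
x2 = rng.normal(size=N)
u = rng.normal(size=N)
y_status_quo = x1 + x2 + u

# Counterfactual: every unit's X1 shifted by eps, holding F_{Y|X} fixed
y_counterfactual = (x1 + eps) + x2 + u

q0 = np.quantile(y_status_quo, tau)          # Q_tau(F_Y^0)
q_eps = np.quantile(y_counterfactual, tau)   # Q_tau(F_Y^eps)

# Finite-difference approximation of UQPE(tau); equals 1 in this location model
uqpe_fd = (q_eps - q0) / eps
```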
While the RIF regression approach is indeed simpler to implement than alternative methods of counterfactual analysis, as emphasized by \citet{fortin2011decomposition}, estimation of the UQPE still requires a three-step procedure. The first step is an estimation of unconditional quantiles. The second step implements the RIF regression. The third step integrates the RIF regression estimates to in turn estimate the UQPE. \citet[]{firpo2009unconditional} provide an estimation procedure for the case of low-dimensional data. If we allow for high-dimensional controls with the aforementioned motivation, then the second step will require an estimation of the high-dimensional RIF regression, and the traditional techniques to incorporate estimation errors of the second step into the third step no longer apply. To overcome this challenge, we construct a novel doubly/locally robust score for estimation of the UQPE. The key insight for the construction is the identification result in \citet[p. 958]{firpo2009unconditional} that the UQPE has the same structure as the average derivative estimator, whose influence function in the presence of nonparametric preliminary estimation has been well studied in the existing literature (e.g., \citealp{newey1994asymptotic}). With this doubly/locally robust score, we obtain a Z-estimation criterion with robustness against perturbations in functional nuisance parameters as in \cite{belloni2014uniform} and \cite{belloni2018uniformly}, and can thereby use the debiased estimation approach \citep[e.g.,][]{belloni2014uniform,chernozhukov2017double,chernozhukov2018double,chernozhukov2016locally}, which allows one to obtain the asymptotic distribution of a UQPE estimator independently of the second-step estimation, as long as it satisfies some convergence rate conditions satisfied by major nonparametric estimators and machine learners.
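For reference, the three-step RIF procedure for the low-dimensional case can be sketched as follows. The data generating process, the Gaussian kernel, and the Silverman rule-of-thumb bandwidth here are illustrative assumptions, not the specification of \citet{firpo2009unconditional}; in this location model $Y=X_1+X_2+U$ with standard normal components, $UQPE(\tau)=1$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, tau = 100_000, 0.5

# Hypothetical low-dimensional data: Y = X1 + X2 + noise, so UQPE(tau) = 1
X = rng.normal(size=(N, 2))
y = X[:, 0] + X[:, 1] + rng.normal(size=N)

# Step 1: unconditional quantile and a kernel density estimate at q_tau
q = np.quantile(y, tau)
h = 1.06 * y.std() * N ** (-1 / 5)          # Silverman rule-of-thumb bandwidth
f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Step 2: re-centered influence function RIF(y; q_tau, F_Y)
rif = q + (tau - (y <= q)) / f_q

# Step 3: OLS of the RIF on (1, X); the slope on X1 estimates UQPE(tau)
Z = np.column_stack([np.ones(N), X])
beta = np.linalg.lstsq(Z, rif, rcond=None)[0]
uqpe_hat = beta[1]
```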
To provide a readily applicable method for practitioners, we focus on a specific method of estimation and bootstrap inference in the main text, but we also provide a generic method in the online supplement. \noindent\textbf{Notations.} In this paper, we will use the following mathematical symbols and notations. $\mathcal{X}$ denotes the support of $X$. For a vector $v$, we define $\mathrm{Supp}(v)=\{i: v_i\ne 0\}$, $\|v\|_1=\sum_{i}|v_i|$, $\|v\|_2=(\sum_{i}v_i^2)^{1/2}$, and $\|v\|_\infty=\max_{i}|v_i|$. Denote the cardinality of $\mathrm{Supp}(v)$ by $\|v\|_0$. For a matrix $A$, we define $\|A\|_\infty = \max_{ij}|A_{ij}|$. We let $\Lambda$ be the standard logistic CDF and $\Phi$ be the standard normal CDF. \section{Robust Score, Estimation, and Inference for $UQPE(\tau)$}\label{sec:scoreconstruction} In this section, we develop a new score for doubly/locally robust estimation of the UQPE. We then present specific estimation and inference procedures in Sections \ref{sec:est_procedure} and \ref{sec:inf_procedure}, respectively. It is worthwhile to mention here that our analysis allows the dimensionality of $X$ to depend on the sample size $N$ and to diverge as $N\rightarrow\infty$. Following \citet{firpo2009unconditional}, we can rewrite our parameter of interest, defined in \eqref{eq:parameterofinterest}, as a function of identifiable objects. Namely, $$ UQPE(\tau)=-\frac{\theta(\tau)}{f_Y(q_\tau)}, $$ where $f_Y$ is the density function of $Y$, $q_\tau$ is the $\tau$-th quantile of $Y$, and \begin{equation}\label{eq:definition_theta} \theta(\tau)=\int \frac{\partial F_{Y\mid X=x}(q_\tau)}{\partial x_1}dF_X(x). \end{equation} This equation is shown in \citet[Corollary 1]{firpo2009unconditional}. \subsection{Doubly/Locally Robust Score} We could estimate $\theta(\tau)$ based on \eqref{eq:definition_theta} and some estimator for $F_{Y\mid X}(\cdot)$. 
When $X$ is high-dimensional, this direct estimation of $\theta(\tau)$ can result in a large bias, a large variance, or both. Instead, we propose to construct an estimator for $\theta(\tau)$ based on an alternative representation: \begin{align} {\theta}(\tau) &= \int \frac{\partial F_{Y\mid X=x}(q_\tau)}{\partial x_1}dF_X(x)-\int \omega(x)(1\{y\leq q_\tau\}-{m}_0(x,q_\tau)) dF_{Y,X}(y,x)\notag\\ &= \int \Big( {m}_1(x,q_\tau)-\omega(x)\big(1\{y\leq q_\tau\}-{m}_0(x,q_\tau)\big) \Big) dF_{Y,X}(y,x), \label{eq:moment_conditions} \end{align} where $\omega(x)=\partial\log f_{X_1|X_{-1}=x_{-1}}(x_1)/\partial x_1$, $m_0(x,q)=F_{Y\mid X=x}(q)$ and $m_1(x,q)={\partial m_0(x,q)}/{\partial x_1}$. This representation in \eqref{eq:moment_conditions} comes from the influence adjustment term for the average derivative estimator \citep[][p.1369]{newey1994asymptotic}. Namely, $\int \omega(x)(1\{y\leq q_\tau\}-{m}_0(x,q_\tau))dF_{Y,X}(y,x)$ in \eqref{eq:moment_conditions} adjusts for the estimation error from the regularized preliminary estimation. The advantage of \eqref{eq:moment_conditions} over \eqref{eq:definition_theta} is that \eqref{eq:moment_conditions} is doubly robust in the sense that \begin{equation}\label{eq:doublerobust1} \theta(\tau)=\int\left(\tilde{m}_1(x,q_\tau)-\omega(x)(1\{y\leq q_\tau\}-\tilde{m}_0(x,q_\tau))\right)dF_{Y,X}(y,x) \end{equation} and \begin{equation}\label{eq:doublerobust2} \theta(\tau)=\int\left({m}_1(x,q_\tau)-\tilde\omega(x)(1\{y\leq q_\tau\}-{m}_0(x,q_\tau))\right)dF_{Y,X}(y,x) \end{equation} hold for a set of values that the high-dimensional nuisance parameters $(\tilde\omega(x),$ $\tilde{m}_{0}(x,q),$ $\tilde{m}_{1}(x,q))$ take, as long as $\tilde{m}_1(x,q)={\partial\tilde{m}_0(x,q)}/{\partial x_1}$ and some regularity conditions to be formally stated below are satisfied.
Note that $(\tilde\omega(x),\tilde{m}_{0}(x,q),\tilde{m}_{1}(x,q))$ in \eqref{eq:doublerobust1} and \eqref{eq:doublerobust2} can be different from the true value $(\omega(x),{m}_{0}(x,q),{m}_{1}(x,q))$. Our doubly/locally robust moment involves two nuisance parameters, $\omega(x)$ and $m_0(x,q)$, which are based on the conditional distribution of $X_1$ given $X_{-1}$ and the conditional distribution of $Y$ given $X$, respectively. They are analogous to the propensity score and the conditional mean function in the doubly robust moment for the average treatment effect. The equality in \eqref{eq:doublerobust1} (resp. \eqref{eq:doublerobust2}) implies that, even if we mis-specify the conditional distribution of $Y$ given $X$ (resp. the conditional distribution of $X_1$ given $X_{-1}$), \eqref{eq:moment_conditions} provides the correct parameter value $\theta(\tau)$. The construction of our doubly/locally robust moment comes from the fact that ${\theta}(\tau)$ can be represented in two different ways: $$ {\theta}(\tau)=\int {m}_1(x,q_\tau)dF_X(x) \quad\mbox{ and }\quad {\theta}(\tau)=-\int \omega(x)1\{y\leq q_\tau\}dF_{Y,X}(y,x). $$ Each of the two representations only requires one of the two functions, ${m}_1(x,q_\tau)$ and $\omega(x)$, to be specified correctly. A precise statement of the double robustness and its proof are found in Appendix \ref{sec:doublerobust} in the online supplement. \subsection{Estimation Procedure} \label{sec:est_procedure} With the sample $\{(Y_i,X_i): i=1,\ldots,N\}$ and the moment condition \eqref{eq:moment_conditions}, we propose to estimate ${\theta}(\tau)$ by a plug-in method. Let $(\hat\omega(x),\hat{m}_{0}(x,q),\hat{m}_{1}(x,q))$ denote an estimator of $(\omega(x),{m}_{0}(x,q),{m}_{1}(x,q))$ -- a concrete procedure to construct $(\hat\omega(x),\hat{m}_{0}(x,q),\hat{m}_{1}(x,q))$ is provided below.
Letting $\hat{q}_\tau$ denote the $\tau$-th empirical quantile of $Y$, we estimate $\theta(\tau)$ by \begin{align} \label{eq:thetahat} \hat{\theta}(\tau)=\frac{1}{N}\sum_{i=1}^N \left(\hat{m}_{1}(X_i,\hat{q}_\tau) - \hat\omega(X_i)(1\{Y_i \leq \hat{q}_\tau\} - \hat{m}_{0}(X_i,\hat{q}_\tau))\right). \end{align} With this estimator for $\theta(\tau)$, our proposed estimator for $UQPE(\tau)$ is in turn defined by $$ \widehat{UQPE}(\tau) = -\frac{\hat{ \theta}(\tau)}{\hat{f}_Y(\hat{q}_\tau)}, $$ where $\hat{f}_Y(y)$ is the kernel density estimator defined by $$ \hat{f}_Y(y) = \frac{1}{N}\sum_{i=1}^N\frac{1}{h_1}K_1\left(\frac{Y_i - y}{h_1}\right) $$ for a kernel function $K_1$ and a bandwidth parameter $h_1$. We use the logistic Lasso regression \citep{BCFH13} to construct $\hat{m}_{0}(x,q)$. Once we construct $\hat{m}_{0}(x,q)$, we in turn define $\hat{m}_{1}(x,q)$ by $$ \hat{m}_{1}(x,q) = \frac{\partial\hat{m}_{0}(x,q)}{\partial x_1}. $$ Consider the approximately sparse logistic regression model for $m_0(x,q)$: $$ m_0(X,q) = \Lambda(b(X)^\top\beta_{q})+(\mathrm{approximation\ error}), $$ where $b(X)$ is a $p_b$-dimensional observed vector and $\beta_{q}$ is an unknown parameter. Assumption \ref{assn:Lasso} (to be stated below in Section \ref{sec:theoretical_result}) specifies the conditions for the sparsity and formalizes the approximation error. In our numerical examples, we define $b(X)$ by including powers of $X$ up to the third degree and standardize each component of $b(X)$ so that the variance is one. We estimate $\beta_{q}$ by the Lasso penalized logistic regression \begin{equation}\label{eq:m0} \tilde{\beta}_{q} = \argmin_\beta -\frac{1}{N}\sum_{i=1}^N\log\left(\Lambda(b(X_i)^\top\beta)^{1\{Y_{i} \leq q\}}(1-\Lambda(b(X_i)^\top\beta))^{1\{Y_{i}>q\}}\right)+ \frac{\lambda_{L}}{N}\|{\Psi}_q\beta\|_1, \end{equation} where ${\Psi}_q$ is a diagonal matrix with penalty loadings defined in the next paragraph.
We follow \cite{BCFH13} and set the regularization parameter as $$ \lambda_{L} = 1.1\Phi^{-1}(1-(0.1/\log(N))/(p_b\vee N))N^{1/2}. $$ We recommend using the post-Lasso estimator for $\beta_q$ defined by $$ \hat{\beta}_{q} = \argmin_{\beta \in \mathbb{R}^p: \mathrm{Supp}(\beta)\subset(\mathrm{Supp}(\tilde{\beta}_{q}) \cup S_1)}-\frac{1}{N}\sum_{i=1}^N\log\left(\Lambda(b(X_i)^\top\beta)^{1\{Y_{i} \leq q\}}(1-\Lambda(b(X_i)^\top\beta))^{1\{Y_{i}>q\}}\right), $$ where $S_1 \subset \{1,\ldots,p_b\}$ denotes the coordinate set of covariates researchers want to include in the post-Lasso regression. For the UQPE with respect to $X_1$, it is natural to include $X_1$ in the regression. The post-Lasso estimator can do so by setting $1\in S_1$, whereas the Lasso estimator $\tilde{\beta}_{q}$ may exclude $X_1$ from the regression. With $\hat{\beta}_{q}$, we can estimate $\hat{m}_{0}(x,q)$ by $$ \hat{m}_{0}(x,q) = \Lambda(b(x)^\top \hat{\beta}_{q}). $$ The penalty loading matrix ${\Psi}_q=\text{diag}(\psi_{q,1},\cdots ,\psi_{q,p_b})$ needs to be estimated to implement \eqref{eq:m0}. Ideally, we would like to use the infeasible penalty loading $$ \bar{\psi}_{q,j}=\sqrt{\frac{1}{N}\sum_{i=1}^N\left(1\{Y_i\leq q\}-m_{0}(X_i,q)\right)^2b^2_{j}(X_i)}. $$ Since $m_{0}(X,q)$ is unknown, \cite{BCFH13} propose the following iterative algorithm to obtain the feasible version of the loading matrix: \begin{enumerate} \item We start the algorithm with $\psi_{q,j}^{0}=\sqrt{\frac{1}{N}\sum_{i=1}^N1\{Y_i \leq q\}b^2_{j}(X_i)}.$ \item For $k=0,\cdots ,K-1$ for some fixed positive integer $K$, we can compute $\tilde{\beta}_{q}^{k}$ by \eqref{eq:m0} with $\tilde{\Psi}_{q}^{k}=\text{diag}(\psi_{q,1}^{k},\cdots,\psi_{q,p_b}^{k})$, and construct $$ \psi_{q,j}^{k+1}=\sqrt{\frac{1}{N}\sum_{i=1}^N \left(1\{Y_i\leq q\}-\Lambda (b(X_i)^\top\tilde{\beta}_{q}^{k})\right)^2b^2_{j}(X_i)}. 
$$ \item The final penalty loading matrix ${\Psi}_{q}^K=\text{diag}(\psi_{q,1}^K,\cdots,\psi_{q,p_b}^K)$ will be used for ${\Psi}_{q}$ in \eqref{eq:m0}. \end{enumerate} Next, we consider a regularized estimation of $\omega(X)$ based on the Riesz representer approach \citep{chernozhukov2018automatic,chernozhukov2018double2}. Suppose that $h(x)$ is a $p_h$-dimensional dictionary of approximating functions that are differentiable in $x_1$ and that $$ \omega(x)=h(x)^\top\overline{\rho}+(\mathrm{approximation\ error}) $$ holds, where Assumption \ref{ass:rr} (to be stated below in Section \ref{sec:theoretical_result}) formally describes this approximation. In our numerical examples, we define $h(X)$ by including powers of $X$ up to the third degree and standardize each component of $h(X)$ so that the variance is one. Since $\omega(x)={\partial \log f_{X_1|X_{-1}=x_{-1}}(x_1)}/{\partial x_1}$, integration by parts yields $$ \mathbb{E}[h(X) \omega(X)] = - \mathbb{E}[\partial_{x_1}h(X)]. $$ Approximating $\omega(x)$ by $h(x)^\top\overline{\rho}$, we have $$ \mathbb{E}[h(X)h(X)^\top]\overline{\rho}=- \mathbb{E}[\partial_{x_1}h(X)]+(\mathrm{approximation\ error}). $$ Thus, $\overline{\rho}$ can be approximated by $\argmin_{\rho}\left(-2M^\top \rho + \rho^\top G\rho\right)$, where $G = \mathbb{E}[h(X)h(X)^\top]$ and $M = -\mathbb{E}[\partial_{x_1}h(X)]$. To accommodate high-dimensional $h(x)$, we use the regularized minimizer $$ \argmin_{\rho}\left(-2M^\top \rho + \rho^\top G\rho + \lambda_{R} \|\rho\|_1\right) $$ with $\lambda_{R}$ denoting a regularization parameter (cf. Assumption \ref{ass:rr}.5). In the simulations and empirical application, we use $\lambda_{R} = 2\log(\log(N)) \sqrt{\log(p_h)/N}$. The Riesz representer approach uses the sample analog of this objective to estimate $\omega(x)$.
Namely, we estimate $\omega(x)$ by $$\hat\omega(x) = h(x)^\top \hat{\rho},$$ where $\hat{G} = \frac{1}{N}\sum_{i=1}^Nh(X_i)h(X_i)^\top$, $\hat{M} = -\frac{1}{N}\sum_{i=1}^N\partial_{x_1}h(X_i)$, and $$ \hat{\rho} = \argmin_{\rho}\left(-2 \hat{M}^\top \rho + \rho^\top \hat{G} \rho + \lambda_{R} \|\rho\|_1\right). $$ \subsection{Bootstrap Inference}\label{sec:inf_procedure} For inference about $UQPE(\tau)$, we propose the multiplier bootstrap, which does not require recalculating the preliminary estimator $\hat\omega(x)$ in each bootstrap iteration. (More precisely, if we calculate $(\hat m_0(x,q), \hat m_1(x,q))$ on a grid of values of $q$ once, then we do not need to recalculate them in each bootstrap iteration either.) Using independent standard normal random variables $\{\eta_i\}_{i=1}^N$ that are independent of the data, we compute the bootstrap estimators $\hat{ \theta}^*(\tau)$ and $\widehat{UQPE}^*(\tau)$ in the following steps. The bootstrap estimator for $q_\tau$ is $\hat{q}^*_\tau$ defined by the $r_N^*$-th order statistic of $Y_i$, where $r_N^*$ is the integer part of $1+\sum_{i=1}^N\left(\tau+\eta_i(\tau - \mathbf{1}\{Y_i \leq \hat{q}_\tau\})\right)$.\footnote{It is equivalent to $\hat{q}^*_\tau = \arg\min_q \sum_{i=1}^N \rho_\tau(Y_i - q) - q \sum_{i=1}^N \eta_i(\tau - 1\{Y_i \leq \hat{q}_\tau\})$, which is the gradient bootstrap method \citep{chen2004quantile} and directly perturbs the score for the quantile $q_\tau$. By the sub-gradient condition, we have that $\hat{q}^*_\tau$ equals the $r_N^*$-th order statistic of $Y_i$, where $r_N^*$ is the integer that satisfies $N\tau + \sum_{i=1}^N \eta_i(\tau - 1\{Y_i \leq \hat{q}_\tau\}) + 1 \geq r_N^* \geq N\tau + \sum_{i=1}^N \eta_i(\tau - 1\{Y_i \leq \hat{q}_\tau\})$.
This procedure gives us a closed-form solution for $\hat{q}^*_\tau$.} The bootstrap estimators for $f_Y(y)$ and $\theta(\tau)$ are $$ \hat{f}_Y^*(y) = \frac{1}{\sum_{i=1}^N(\eta_i+1)}\sum_{i=1}^N(\eta_i+1)\frac{1}{h_1}K_1\left(\frac{Y_i - y}{h_1}\right), $$ and $$ \hat{\theta}^*(\tau)= \frac{1}{\sum_{i=1}^N(\eta_i+1)}\sum_{i=1}^N (\eta_i+1)\left(\hat{m}_{1}(X_i,\hat{q}^*_\tau) - \hat\omega(X_i)(1\{Y_i \leq \hat{q}^*_\tau\} - \hat{m}_{0}(X_i,\hat{q}^*_\tau))\right), $$ respectively. With these components, the bootstrap estimator $\widehat{UQPE}^*(\tau)$ is given by $$ \widehat{UQPE}^*(\tau) = -\frac{\hat{ \theta}^*(\tau)}{\hat{f}^*_Y(\hat{q}^*_\tau)}. $$ We can use the above multiplier bootstrap method to conduct various types of statistical inference about the UQPE. First, testing the hypothesis of $UQPE(\tau) = 0, \forall\tau\in \Upsilon$ for some closed interval $\Upsilon\subset(0,1)$ is of main interest in many empirical applications. Because $f_Y(q_\tau)$ is assumed to be bounded away from zero, such a hypothesis is equivalent to $\theta(\tau) = 0, \forall\tau \in \Upsilon$, where $\theta(\tau)$ can be estimated at a parametric rate. We can thus test $UQPE(\tau) = 0, \forall\tau\in \Upsilon$ by constructing a confidence band for $\{\theta(\tau): \tau \in \Upsilon \}$ and checking whether the constant zero function belongs to this band. Specifically, let $$ \hat{\sigma}^{\theta}(\tau) = \frac{Q_{\hat\theta^*(\tau)}(0.75) - Q_{\hat\theta^*(\tau)}(0.25)}{\Phi^{-1}(0.75)-\Phi^{-1}(0.25)} $$ denote an estimator of the standard error of $\hat{\theta}(\tau)$ for $\tau \in \Upsilon$, where $Q_{\hat\theta^*(\tau)}(0.75)$ and $Q_{\hat\theta^*(\tau)}(0.25)$ denote the 75th and 25th percentiles of $\hat\theta^*(\tau)$ conditional on the data. Let $c_{\Upsilon}^{\theta}(1-\alpha)$ denote the $(1-\alpha)$ quantile of $$ \sup_{\tau \in \Upsilon} \left\vert \frac{\hat{\theta}^*(\tau)-\hat{\theta}(\tau)}{\hat\sigma^{\theta}(\tau)} \right\vert $$ conditional on the data.
Let $CB^{\theta}_{\Upsilon}$ denote the confidence band of $\theta(\cdot)$ on $\Upsilon$ whose lower and upper bounds at $\tau \in \Upsilon$ are given by $\hat{\theta}(\tau) \pm \hat\sigma^{\theta}(\tau) c_{\Upsilon}^\theta(1-\alpha)$. Second, we can similarly construct a confidence band for $\{UQPE(\tau):\tau\in \Upsilon\}$. Let $$ \hat\sigma(\tau) = \frac{Q_{\widehat{UQPE}^*(\tau)}(0.75) - Q_{\widehat{UQPE}^*(\tau)}(0.25)}{\Phi^{-1}(0.75)-\Phi^{-1}(0.25)} $$ denote an estimator of the standard error of $\widehat{UQPE}(\tau)$ for $\tau \in \Upsilon$, where $Q_{\widehat{UQPE}^*(\tau)}(0.75)$ and $Q_{\widehat{UQPE}^*(\tau)}(0.25)$ denote the 75th and 25th percentiles of $\widehat{UQPE}^*(\tau)$ conditional on the data. Let $c_\Upsilon(1-\alpha)$ denote the $(1-\alpha)$ quantile of $$ \sup_{\tau \in \Upsilon} \left\vert \frac{\widehat{UQPE}^*(\tau)-\widehat{UQPE}(\tau)}{\hat\sigma(\tau)}\right\vert $$ conditional on the data. Let $CB_\Upsilon$ denote the confidence band of $UQPE(\cdot)$ on $\Upsilon$ whose lower and upper bounds at $\tau \in \Upsilon$ are given by $\widehat{UQPE}(\tau) \pm \hat\sigma(\tau) c_\Upsilon(1-\alpha)$. \section{Asymptotic Theory}\label{sec:theoretical_result} In this section, we investigate the asymptotic properties of the estimators $(\hat{ \theta},\widehat{UQPE})$ and their bootstrap counterparts $(\hat{ \theta}^*,\widehat{UQPE}^*)$ introduced in the previous section. The uniformity over $\tau$ is relevant to applications (e.g., analysis of heterogeneous counterfactual effects across $\tau$), and we therefore aim to control the residuals of the linear expansions uniformly over $\tau\in \Upsilon$ for some closed interval $\Upsilon\subset(0,1)$. Let $\mathcal{Q} = \{{q}_\tau: \tau \in \Upsilon\}$, and let $\mathcal{Q}^\delta$ denote the $\delta$-enlargement of $\mathcal{Q}$.
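For intuition, the interquartile-range standard error and the sup-$t$ critical value used in these band constructions can be computed from bootstrap replicates as in the following minimal sketch. The naming (supt_band) is ours, the point estimates and bootstrap draws are taken as given, and only numpy is assumed.

```python
import numpy as np

# Phi^{-1}(0.75) - Phi^{-1}(0.25), the normal-reference scaling of the IQR
IQR_TO_SD = 1.3489795003921634

def supt_band(est, boot, alpha=0.05):
    """Sup-t confidence band from multiplier-bootstrap replicates.

    est  : (T,)   point estimates on a grid of quantile levels tau
    boot : (B, T) bootstrap replicates of those estimates
    Returns the lower and upper band, each of shape (T,).
    """
    # standard error: scaled interquartile range of the bootstrap draws
    q25, q75 = np.percentile(boot, [25, 75], axis=0)
    sigma = (q75 - q25) / IQR_TO_SD
    # critical value: (1 - alpha) quantile of the sup of studentized deviations
    sup_stats = np.max(np.abs((boot - est) / sigma), axis=1)
    crit = np.quantile(sup_stats, 1.0 - alpha)
    return est - crit * sigma, est + crit * sigma
```

Applied to the $\hat\theta^*(\tau)$ replicates, for example, checking whether the zero function lies inside the resulting band mimics the test of $\theta(\tau)=0$ for all $\tau\in\Upsilon$ described above.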
\begin{assumption} \begin{enumerate} \item[] \item For every $\tau \in \Upsilon$, $F_{Y}^\varepsilon(q)$ is differentiable with respect to $\varepsilon$ in a neighborhood of zero for every $q$ in a neighborhood of $q_\tau$, and $Q_\tau(F_{Y}^\varepsilon)$ is well defined and is differentiable with respect to $\varepsilon$ in a neighborhood of zero. \item $\int\left(\sup_{q \in \mathcal{Q}^\delta}|{m}_1(x,q)|\right)^{2+d}dF_X(x)$ and $\int|\omega(x)|^{2+d}dF_X(x)$ are finite for some $d>0$. \item For every $x_{-1}$ in the support of $X_{-1}$, the conditional distribution of $X_1$ given $X_{-1}=x_{-1}$ has a probability density function, denoted by $f_{X_1\mid X_{-1}}$, which is continuously differentiable everywhere and is zero on the boundary of the support of the conditional distribution of $X_1$. \item $m_1(x,q)$ and $m_0(x,q)$ are differentiable with respect to $q$ for $q \in \mathcal{Q}^\delta$, and the derivatives are bounded in absolute value uniformly over $x \in \mathcal{X}$ and $q \in \mathcal{Q}^\delta$. \item $f_Y(y)$ is three times differentiable on $\mathcal{Q}^\delta$ with all the derivatives uniformly bounded. $f_Y(q_\tau)>0$ for every $\tau \in \Upsilon$. \end{enumerate} \label{assn:identification} \end{assumption} This assumption concerns the model primitives. Assumptions \ref{assn:identification}.1 and \ref{assn:identification}.3--\ref{assn:identification}.5 impose regularity in terms of the smoothness of various functions representing features of the data. We use the enlargement $\mathcal{Q}^\delta$ instead of $\mathcal{Q}$ because with probability approaching one, $\hat{q}_\tau$ and $\hat{q}^*_\tau$ belong to the former but not necessarily the latter. Assumption \ref{assn:identification}.2 is a standard moment condition. We impose the following condition to bound the estimation error for $\hat{m}_{j}(x,q)$.
\begin{assumption} \begin{enumerate} \item[] \item\label{assn:Lasso_boundedmoments} (Boundedness) For some positive constants $\delta,\overline{c},\underline{c}$, (i) $ \underline{c} \leq \int b_j(x)^2dF_X(x) \leq \overline{c}\mbox{ for every }j=1,\ldots,p, $ (ii) $ \sup_{x \in \mathcal{X},q\in \mathcal{Q}^\delta}|m_1(x,q)| \leq \overline{c}, $ and (iii) $ \sup_{x \in \mathcal{X},q\in \mathcal{Q}^\delta}|\frac{\partial}{\partial x_1}b(x)^\top\beta_q| \leq \overline{c}. $ \item (Restricted eigenvalue condition) There are positive constants $\overline{c},\underline{c}$ and a sequence $m_N\rightarrow \infty $ such that, with probability approaching one, $$ \underline{c}\leq \inf_{\beta \neq 0,\|\beta \|_{0}\leq m_N}\frac{\left(\frac{1}{N}\sum_{i=1}^N(b(X_i)^\top\beta)^2\right)^{1/2}}{\|\beta \|_{2}}\leq \sup_{\beta\neq 0,\|\beta \|_{0}\leq m_N}\frac{\left(\frac{1}{N}\sum_{i=1}^N(b(X_i)^\top\beta)^2\right)^{1/2}}{\|\beta \|_{2}}\leq \overline{c}, $$ $$ \underline{c}\leq \inf_{\beta \neq 0,\|\beta \|_{0}\leq m_N}\frac{\left(\frac{1}{N}\sum_{i=1}^N(\frac{\partial}{\partial x_1} b(X_i)^\top\beta)^2\right)^{1/2}}{\|\beta \|_{2}}\leq \sup_{\beta\neq 0,\|\beta \|_{0}\leq m_N}\frac{\left(\frac{1}{N}\sum_{i=1}^N(\frac{\partial}{\partial x_1} b(X_i)^\top\beta)^2\right)^{1/2}}{\|\beta \|_{2}}\leq \overline{c}, $$ $$ \sup_{\beta\neq 0,\|\beta \|_{0}\leq m_N}\left|\frac{\frac{1}{N}\sum_{i=1}^N(b(X_i)^\top\beta)^2}{\int(b(x)^\top\beta)^2dF_X(x)}-1\right| + \sup_{\beta\neq 0,\|\beta \|_{0}\leq m_N}\left|\frac{\frac{1}{N}\sum_{i=1}^N(\frac{\partial}{\partial x_1} b(X_i)^\top\beta)^2}{\int(\frac{\partial}{\partial x_1}b(x)^\top\beta)^2dF_X(x)}-1\right| = o_P(1). $$ \item (Sparsity) $\sup_{q \in\mathcal{Q}^\delta}\|\beta_q\|_0 \leq s_b$ for a sequence $s_b$ satisfying $s_b=o(m_N)$ and $\zeta_N s_b\sqrt{{\log(p_b)}/{N}} = o(1)$, where $ \zeta_N = \sup_{x \in \mathcal{X}}\max_{j =1,\ldots,p_b}\max\left\{\left|b_j(x)\right|,\left|\frac{\partial}{\partial x_1}b_j(x)\right|\right\}. 
$ \item (Approximation error) $$ \sup_{q \in \mathcal{Q}^\delta} \left(\int\left(\frac{\partial}{\partial x_1}(m_0(x,q)-\Lambda(b(x)^\top\beta_{q}))\right)^2dF_X(x)\right)^{1/2} = O(\sqrt{s_b \log(p_b)/N})$$ $$\sup_{x \in \mathcal{X},q\in \mathcal{Q}^\delta}\left|\frac{\partial}{\partial x_1}(m_0(x,q)-\Lambda(b(x)^\top\beta_{q}))\right|=O(\zeta_N s_b\sqrt{{\log(p_b)}/{N}}). $$ \end{enumerate} \label{assn:Lasso} \end{assumption} Several remarks are in order. First, Assumption \ref{assn:Lasso}.1 is a common regularity condition. Second, Assumptions \ref{assn:Lasso}.2--\ref{assn:Lasso}.4 are common in the literature on logistic regression with an $\ell_1$ penalty. See, for instance, \cite{BCFH13}. Third, the various bounds for $m_1(x,q)$, $\beta_{q}$, $(m_0(X,q)-\Lambda(b(X)^\top\beta_{q}))$ need to hold uniformly over $q \in \mathcal{Q}^\delta$ because the estimators $\{\hat{q}_\tau: \tau \in \Upsilon\}$ belong to $\mathcal{Q}^\delta$ for any fixed $\delta$ with probability approaching one. Fourth, as formally stated in Theorem \ref{thm:mhat} in the Online Supplement, Assumption \ref{assn:Lasso} yields the following bounds on the estimation error of the logistic Lasso estimator: $$ \sup_{q \in \mathcal{Q}^\delta}\int\left|\hat{m}_{j}(x,q)-{m}_j(x,q)\right|^2dF_X(x) = O_P\left(\frac{s_b \log(p_b)}{N}\right) $$ and $$ \sup_{q \in \mathcal{Q}^\delta,x \in \mathcal{X}}\left| \hat{m}_{j}(x,q) - m_j(x,q)\right|= O_P\left( \zeta_N s_b\sqrt{\frac{\log(p_b)}{N}}\right). $$ We next provide the regularity conditions for the Riesz representer estimation of $\omega(x)$. \begin{assumption} \begin{enumerate} \item[] \item (Boundedness) There is a constant $C$ such that $\max_{1\leq j \leq p_h}|h_j(X)| \leq C$ with probability one. \item (Estimation error) $\|\hat{G}- G\|_\infty + \|\hat{M}- M\|_\infty = O_P\left(\sqrt{\frac{\log(p_h)}{N}}\right)$. \item (Sparsity) Let $s_h =C \left( \frac{\log(p_h)}{N}\right)^{-1/(1+2\xi)}$ for $C>1$, $\xi\geq 1/2$.
Then, there is $\overline{\rho}$ with $\|\overline{\rho}\|_0 \leq s_h$ such that $$ \left(\int (\omega(x) - h(x)^\top \overline{\rho})^2dF_X(x)\right)^{1/2} \leq C (s_h)^{-\xi} \quad \text{and} \quad \sup_{x \in \mathcal{X}}|\omega(x)-h(x)^\top \overline{\rho}|=o(1). $$ \item (Restricted eigenvalue condition) The eigenvalues of $G$ and $\hat{G}$ are uniformly bounded in $N$ with probability approaching one. Also, there are positive constants $\overline{c},\underline{c}$ and $m_N$ with $s_h = o(m_N)$ such that, with probability approaching one, $$ \underline{c}\leq \inf_{\rho \neq 0,\|\rho \|_{0}\leq m_N}\frac{\rho^\top \hat{G} \rho}{\|\rho\|_{2}^2}\leq \sup_{\rho \neq 0,\|\rho \|_{0}\leq m_N}\frac{\rho^\top \hat{G} \rho}{\|\rho\|_{2}^2}\leq \overline{c} \quad \text{and} $$ $$\underline{c}\leq \inf_{\rho \neq 0,\|\rho \|_{0}\leq m_N}\frac{\rho^\top G \rho}{\|\rho\|_{2}^2}\leq \sup_{\rho \neq 0,\|\rho \|_{0}\leq m_N}\frac{\rho^\top G \rho}{\|\rho\|_{2}^2}\leq \overline{c}. $$ \item (Tuning parameter and dimensionality of $h(X)$) $\sqrt{ \log(p_h)/N}=o(\lambda_{R})$ and $\lambda_{R}=o(N^c\sqrt{\log(p_h)/N})$ for every $c>0$, and $p_h \leq C N^C$ for some $C>0$. \end{enumerate} \label{ass:rr} \end{assumption} Assumption \ref{ass:rr} follows \cite{chernozhukov2018automatic}, to which we refer readers for further discussion. Specifically, we have their $\varepsilon_n=\sqrt{ \log(p_h)/N}$ and $r = \lambda_{R}$, and Assumption \ref{ass:rr}.4 implies \citet[Assumption 3]{chernozhukov2018automatic} by \citet[Lemma 4.1]{BRT09}. Theorem \ref{thm:omegahat} in the Online Supplement shows that Assumption \ref{ass:rr} yields the following bounds on the estimation error of $\hat\omega(x)$: $$ \int \left(\hat\omega(x) - \omega(x)\right)^2dF_X(x) = o_P(N^c s_h \log(p_h)/N) \mbox{ and } \sup_{x \in \mathcal{X}}|\hat\omega(x) - \omega(x)| = o_P(1) $$ for all $c>0$.
In Theorem \ref{thm:omegahat}, we also show that the estimated Riesz representer $\hat\omega(x)$ belongs to a class of functions whose entropy or complexity level is well controlled. Such a result is new to the literature and essential for our theory because we use all the observations to estimate $\omega(x)$ and are thus subject to model selection bias. \cite{chernozhukov2018automatic} circumvent such bias via cross-fitting. In the Online Supplement, we also consider cross-fitting, which can accommodate more general machine learning estimators for $\omega(x)$. \subsection{Testing $UQPE(\tau)=0, \forall\tau \in \Upsilon$}\label{sec:testing_uqpe} As mentioned earlier, testing $UQPE(\tau)=0, \forall \tau \in \Upsilon$ is equivalent to testing $\theta(\tau)=0, \forall\tau \in \Upsilon$. We can reject the null hypothesis if the constant zero function over $\Upsilon$ does not belong to $CB^\theta_\Upsilon$. In this section, we show that the proposed confidence band $\text{CB}^\theta_\Upsilon$ covers the true $\theta(\tau)$ uniformly over $\tau \in \Upsilon$ with the correct asymptotic size. We impose an additional rate condition imposing upper bounds on $s_h$ and $s_b$. \begin{assumption}\label{assn:rate_combined1} $(s_b\log(p_b)+s_h\log(p_h))^2 = o(N^{\frac{d}{2+d}})$, and there is some $c>0$ such that $\pi_N^2(s_h \log(p_h) + s_b \log(p_b)) = o(1)$, where $\pi_N=\sqrt{N^{2c} s_h\log(p_h)/N + (\zeta_N^{4/(2+d)}s_b^{(4+d)/(2+d)})\log(p_b)/N}$. \end{assumption} When $\omega(\cdot)$ defined in Assumption \ref{assn:identification} is bounded so that $d = \infty$ and $\zeta_N$ defined in Assumption \ref{assn:Lasso} is also bounded, $\pi_N$ is roughly equal to $\sqrt{N^{2c}s_h \log(p_h)/N + s_b \log(p_b)/N}$, which is just the convergence rate for the first-stage estimators.
In this case, this additional condition holds as long as $s_h \log(p_h) + s_b \log(p_b)= o(N^{1/2-c})$ for some $c>0$, which implies $$ \sqrt{\frac{s_h \log (p_h)}{N}} + \sqrt{\frac{s_b \log (p_b)}{N}} = o(N^{-1/4}). $$ This means that the nuisance function estimators should converge faster than the $N^{1/4}$ rate. Such a rate is sufficient for the influence function representation for $\hat{ \theta}(\tau)$ and $\hat{ \theta}^*(\tau)$ (in Theorem \ref{thm:theta}) due to the use of the doubly/locally robust moment. Theorem \ref{cor:theta} provides a sufficient condition for the correct asymptotic size of the proposed confidence band $\text{CB}^\theta_\Upsilon$. It follows as a corollary of Theorem \ref{thm:theta} in the Online Supplement. \begin{theorem}\label{cor:theta} Suppose $\sup_{\tau \in \Upsilon}\left|\sqrt{N}\hat{ \sigma}^\theta(\tau) - \sqrt{Var(\mathrm{IF}_i^\theta(\tau))}\right| = o_P(1)$ with $\mathrm{IF}_i^\theta(\tau) = m_{1}(X_i,{q}_\tau) - \omega(X_i)(1\{Y_i \leq {q}_\tau\} - m_{0}(X_i,{q}_\tau))-\theta(\tau)+ \frac{\frac{\partial}{\partial q} \mathbb{E}m_{1}(X,{q}_\tau)}{f_Y({q}_\tau)}(\tau - 1\{Y_i \leq {q}_\tau\})$. If Assumptions \ref{assn:identification}--\ref{assn:rate_combined1} hold, then $$\mathbb{P}\left(\{\theta(\tau):\tau \in \Upsilon\} \in CB^\theta_\Upsilon\right) \rightarrow 1-\alpha.$$ \end{theorem} \subsection{Confidence Band for $\{UQPE(\tau): \tau \in \Upsilon\}$} In this section, we consider the confidence band for $\{UQPE(\tau): \tau \in \Upsilon\}$, which can be used to infer the entire trajectory of $UQPE(\tau)$ over $\tau \in \Upsilon$. Recall that we use the kernel function $K_1(\cdot)$ in the kernel density estimation of $f_Y(\cdot)$. We impose the following conditions for the kernel function $K_1(\cdot)$ and the bandwidth parameter $h_1$. \begin{assumption}\label{assn:fkernel} 1. $K_1(\cdot)$ is a second-order symmetric kernel function with compact support. 2.
$h_1=c_1N^{-H}$ for some positive constant $c_1$ and some $1/2>H \geq 1/5$. \end{assumption} We impose an additional rate condition imposing upper bounds on $s_h$ and $s_b$. \begin{assumption}\label{assn:rate_combined2} $\log(N)h_1(s_b\log(p_b)+s_h\log(p_h))^2 = o(N^{\frac{d}{2+d}})$ and there is some $c>0$ such that $ \log(N)h_1\pi_N^2(s_h \log(p_h) + s_b \log(p_b)) = o(1)$, where $\pi_N$ is defined in Assumption \ref{assn:rate_combined1}. \end{assumption} This assumption is weaker than Assumption \ref{assn:rate_combined1}, as $\log(N)h_1=o(1)$. In other words, if $d$ is sufficiently large and $\zeta_N$ is bounded, such a condition holds as long as $\sqrt{\log(N)h_1}(s_h \log(p_h) + s_b \log(p_b))= o(N^{1/2-c})$ for some $c>0$. The following theorem summarizes the validity of the bootstrap inference. The main takeaway from this theorem is that, by using the doubly robust method, we greatly relax the requirements on the sparsity and the number of effective covariates for $m_0(x,q)$ and $m_1(x,q)$, at the cost of imposing a sparsity condition on $\omega(x)$.\footnote{The leading term of the score function is $\frac{\theta(\tau)}{f_Y^2({q}_\tau)h_1}K_1\left(\frac{Y_i - {q}_\tau}{h_1}\right)$, but this does not imply that the doubly robust estimation for $\theta(\tau)$ is unnecessary. In fact, $\sup_{\tau \in \Upsilon}|\hat{ \theta}(\tau) - \theta(\tau)|$ is asymptotically negligible compared to $\frac{\theta(\tau)}{f_Y^2({q}_\tau)h_1}K_1\left(\frac{Y_i - {q}_\tau}{h_1}\right)$ if $\pi_N(\sqrt{s_h \log(p_h)/N} + \sqrt{s_b \log(p_b)/N}) + N^{-(1+d)/(2+d)}(s_h \log(p_h) + s_b \log(p_b))=o((\log(N)Nh_1)^{-1/2})$. If $h_1 = N^{-1/5}$, $d$ is sufficiently large, $c$ is arbitrarily small, and $\zeta_N$ is polylogarithmic, such a condition holds if $s_h \log(p_h) + s_b \log(p_b)= o(N^{3/5})$ up to some polylogarithmic factor.
On the other hand, if we do not use the doubly robust method, the estimation error of $\theta(\tau)$ is $\sqrt{s_b\log(p_b)/N}$, which is asymptotically negligible if $\sqrt{s_b\log(p_b)/N} = o((\log(N)Nh_1)^{-1/2})$. Such a condition would require $s_b \log(p_b) = o(N^{1/5})$ up to some polylogarithmic factor.} \begin{theorem}\label{cor:infer} Suppose $\sqrt{Nh_1}=o(h_1^{-2})$, $h_1Var(\mathrm{IF}_i(\tau))$ is bounded away from zero, and $\sup_{\tau \in \Upsilon} \left\vert \sqrt{Nh_1}\hat\sigma(\tau)-\sqrt{h_1Var(\mathrm{IF}_i(\tau))}\right\vert=o_P(\log^{-1/2}(N))$, where $\mathrm{IF}_i(\tau) = \frac{\theta(\tau)}{f_Y^2({q}_\tau)h_1}K_1\left(\frac{Y_i - {q}_\tau}{h_1}\right)$. If Assumptions \ref{assn:identification}--\ref{ass:rr}, \ref{assn:fkernel}, \ref{assn:rate_combined2} hold, then $$ \mathbb{P}(\{UQPE(\tau):\tau \in \Upsilon\}\in CB_\Upsilon)\rightarrow 1-\alpha. $$ \end{theorem} Theorem \ref{cor:infer} is a direct consequence of the linear expansions for $\widehat{UQPE}(\tau)$ and $\widehat{UQPE}^*(\tau)$ (formally stated in Theorem \ref{thm:UQPE} in the Online Supplement) and the strong approximation theory developed by \cite{CCK14-anti,CCK14}. To compute $\hat\sigma(\tau)$, we can use either the plug-in method or the bootstrap method. For these methods, the convergence rate of $\sqrt{Nh_1}\hat\sigma(\tau)$ is polynomial in $N$, which implies the required $o_P(\log^{-1/2}(N))$ rate. \section{Simulation Studies}\label{sec:simulation} In this section, we use Monte Carlo simulations to study the finite sample performance of the proposed method of estimation and inference for the UQPE. Consider the following set of alternative data-generating designs.
The outcome variable is generated according to the partial linear high-dimensional model $$ Y\mid X \sim N\left(g(X_1) + \sum_{j=2}^p \alpha_j X_j, \ \ 1\right), $$ where the function $g(x)$ is defined in the following three ways: $g(x) = x$ in DGP 1, $g(x) = x - 0.10 \cdot x^2$ in DGP 2, and $g(x) = x - 0.10 \cdot x^2 + 0.01 \cdot x^3$ in DGP 3. The high-dimensional controls $(X_1,...,X_p)^\top$ are generated by $$ X_1\mid (X_2,...,X_p) \sim N\left(\sum_{j=2}^p \gamma_j X_j ,\ 1\right) \mbox{\ \ and\ \ }(X_2,...,X_p) \sim N(0,\Sigma_{p-1}), $$ where $\Sigma_{p-1}$ is the $(p-1) \times (p-1)$ variance-covariance matrix whose $(r,c)$-element is $0.5^{2(|r-c|+1)}$. Note that this data-generating process induces dependence of the control $X_1$ of main interest on the rest of the $p-1$ controls $(X_2,\ldots,X_p)^\top$, as well as the dependence among the $p-1$ controls $(X_2,\ldots,X_p)^\top$. For the high-dimensional parameter vectors in the above data-generating model, we consider the following four cases of varying sparsity levels: \begin{align*} \text{(i)} & \quad (\alpha_2,\ldots,\alpha_p)^\top=(\gamma_2,\ldots,\gamma_p)^\top = (0.5^2,0.5^3,...,0.5^p)^\top, \\ \text{(ii)} & \quad (\alpha_2,\ldots,\alpha_p)^\top=(\gamma_2,\ldots,\gamma_p)^\top = (0.5^2,0.5^{5/2},...,0.5^{(p+2)/2})^\top, \\ \text{(iii)} & \quad (\alpha_2,\ldots,\alpha_p)^\top=(\gamma_2,\ldots,\gamma_p)^\top = (0.5^2,0.5^{7/3},...,0.5^{(p+4)/3})^\top, \qquad\text{and}\\ \text{(iv)} & \quad (\alpha_2,\ldots,\alpha_p)^\top=(\gamma_2,\ldots,\gamma_p)^\top = (0.5^2,0.5^{9/4},...,0.5^{(p+6)/4})^\top. \end{align*} We follow the general estimation and inference approach outlined in Section \ref{sec:scoreconstruction}. We set $h(x) = (x^\top,(x^2)^\top,(x^3)^\top)^\top$ for estimation of $\omega(x)$, and set $b(x) = (x^\top,(x^2)^\top,(x^3)^\top)^\top$ for estimation of $m_0$ and $m_1$. For the choice of $h_1$, we under-smooth the rule-of-thumb optimal choice as $h_1 = 1.06\sigma(Y) N^{-1/5-0.01}$. 
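To fix ideas, one Monte Carlo draw from the design above under the most sparse case (i) can be generated as in the following sketch; the function name simulate_dgp is ours, not the authors' code, and only numpy is assumed.

```python
import numpy as np

def simulate_dgp(N=500, p=100, dgp=1, seed=0):
    """One draw from the partial linear design with alpha_j = gamma_j = 0.5**j."""
    rng = np.random.default_rng(seed)
    # (X_2,...,X_p) ~ N(0, Sigma) with Sigma[r,c] = 0.5**(2 * (|r - c| + 1))
    idx = np.arange(p - 1)
    Sigma = 0.5 ** (2.0 * (np.abs(idx[:, None] - idx[None, :]) + 1.0))
    X_rest = rng.multivariate_normal(np.zeros(p - 1), Sigma, size=N)
    coef = 0.5 ** np.arange(2, p + 1)  # sparsity design (i)
    X1 = X_rest @ coef + rng.standard_normal(N)
    g = {1: lambda x: x,
         2: lambda x: x - 0.10 * x**2,
         3: lambda x: x - 0.10 * x**2 + 0.01 * x**3}[dgp]
    Y = g(X1) + X_rest @ coef + rng.standard_normal(N)
    return Y, np.column_stack([X1, X_rest])
```

The geometric decay of the coefficient vector makes the design approximately sparse: the dependence of both $Y$ and $X_1$ on $(X_2,\ldots,X_p)$ is dominated by the first few controls.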
For each design, we use 500 iterations of Monte Carlo simulations to compute the mean, bias, and root mean square error (RMSE) of the estimate, as well as the 95\% uniform coverage over the set $[0.20,0.80]$ of quantiles. To evaluate the bias, RMSE, and the 95\% uniform coverage, we first numerically approximate the true UQPE by large-sample Monte Carlo simulations. Across sets of Monte Carlo simulations, we vary the DGP $\in \{\text{DGP 1, DGP 2, DGP 3}\}$ and the sparsity design $\in \{\text{(i)},\text{(ii)},\text{(iii)},\text{(iv)}\}$, while we fix the sample size $N = 500$ and the dimension $p = 100$ throughout. Table \ref{tab:simulation_approximate_sparsity} summarizes the simulation results under the sparsity designs (i) and (ii). We can make the following three observations from these results. First, the bias of our UQPE estimator is small, especially relative to the RMSE. This feature of the results supports the claim that our estimator mitigates the bias via the use of the doubly robust score. Second, the RMSE remains moderate even though the dimension $p=100$ is large relative to the sample size $N=500$. Third, the 95\% uniform coverage frequencies are close to the nominal probability, namely, 0.95. This feature of the results supports our theory on the asymptotic validity of the bootstrap inference. From these simulation results, we confirm the main theoretical properties of the proposed method of estimation and inference for the UQPE across alternative data-generating processes. Table \ref{tab:simulation_approximate_sparsity_less_sparse} shows the simulation results under the less sparse designs (iii) and (iv). While the bias and RMSE are slightly larger here than in Table \ref{tab:simulation_approximate_sparsity}, the changes are modest in magnitude.
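The summary statistics reported in the tables can be reproduced from raw Monte Carlo output as in the sketch below (our own helper, not the authors' code). For the uniform coverage column, the indicator would instead be whether the band covers the truth at every $\tau$ in the grid simultaneously, before averaging across replications.

```python
import numpy as np

def mc_summary(estimates, true_value, lower, upper):
    """Pointwise Monte Carlo summary at one quantile level tau.

    estimates    : (R,) UQPE estimates across R replications
    true_value   : numerically approximated true UQPE(tau)
    lower, upper : (R,) confidence limits across replications
    """
    covered = (lower <= true_value) & (true_value <= upper)
    return {
        "mean": float(np.mean(estimates)),
        "bias": float(np.mean(estimates) - true_value),
        "rmse": float(np.sqrt(np.mean((estimates - true_value) ** 2))),
        "coverage": float(np.mean(covered)),
    }
```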
In addition to the simulation designs introduced above, we also experimented with other designs, and the simulation results are very similar and support the main theoretical properties of our proposed method as well -- see Appendix \ref{sec:additional_simulation} in the online supplement. \begin{table}[tbh] \centering \scalebox{1}{ \begin{tabular}{ccccccccccccccc} \multicolumn{14}{c}{(i) The Most Sparse Design -- with the Doubly Robust Score}\\ \hline\hline & & & & & & True && \multicolumn{3}{c}{Estimates} && \multicolumn{2}{c}{95\% Cover}\\ \cline{9-11}\cline{13-14} DGP & & $N$ & $p$ & $\tau$ && UQPE && Mean & Bias & RMSE && Point & Unif. \\ \hline \multirow{4}{*}{1 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.00 && 1.03 & 0.03 & 0.16 && 0.948 &\multirow{4}{*}{0.956}\\ &&&& 0.40 && 1.00 && 1.02 & 0.02 & 0.13 && 0.948 &\\ &&&& 0.60 && 1.00 && 1.03 & 0.03 & 0.14 && 0.954 &\\ &&&& 0.80 && 1.00 && 0.99 &-0.01 & 0.16 && 0.948 &\\ \hline \multirow{4}{*}{2 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.12 && 1.14 & 0.02 & 0.18 && 0.952 &\multirow{4}{*}{0.956}\\ &&&& 0.40 && 1.03 && 1.05 & 0.02 & 0.13 && 0.946 &\\ &&&& 0.60 && 0.95 && 0.98 & 0.03 & 0.13 && 0.950 &\\ &&&& 0.80 && 0.87 && 0.88 & 0.00 & 0.15 && 0.950 &\\ \hline \multirow{4}{*}{3 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.14 && 1.17 & 0.03 & 0.18 && 0.950 &\multirow{4}{*}{0.950}\\ &&&& 0.40 && 1.04 && 1.06 & 0.02 & 0.13 && 0.942 &\\ &&&& 0.60 && 0.97 && 1.00 & 0.03 & 0.13 && 0.944 &\\ &&&& 0.80 && 0.91 && 0.90 & 0.00 & 0.13 && 0.952 &\\ \hline\hline \\ \multicolumn{14}{c}{(ii) The Second Most Sparse Design -- with the Doubly Robust Score}\\ \hline\hline & & & & & & True && \multicolumn{3}{c}{Estimates} && \multicolumn{2}{c}{95\% Cover}\\ \cline{9-11}\cline{13-14} DGP & & $N$ & $p$ & $\tau$ && UQPE && Mean & Bias & RMSE && Point & Unif. 
\\ \hline \multirow{4}{*}{1 (ii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.00 && 1.04 & 0.05 & 0.17 && 0.930 &\multirow{4}{*}{0.962}\\ &&&& 0.40 && 1.00 && 1.04 & 0.04 & 0.14 && 0.954 &\\ &&&& 0.60 && 1.00 && 1.04 & 0.04 & 0.15 && 0.920 &\\ &&&& 0.80 && 1.00 && 1.02 & 0.02 & 0.16 && 0.944 &\\ \hline \multirow{4}{*}{2 (ii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.12 && 1.16 & 0.04 & 0.19 && 0.944 &\multirow{4}{*}{0.954}\\ &&&& 0.40 && 1.03 && 1.07 & 0.04 & 0.14 && 0.938 &\\ &&&& 0.60 && 0.95 && 0.99 & 0.04 & 0.14 && 0.918 &\\ &&&& 0.80 && 0.87 && 0.90 & 0.02 & 0.14 && 0.954 &\\ \hline \multirow{4}{*}{3 (ii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.14 && 1.18 & 0.04 & 0.19 && 0.938 &\multirow{4}{*}{0.960}\\ &&&& 0.40 && 1.05 && 1.09 & 0.04 & 0.14 && 0.932 &\\ &&&& 0.60 && 0.97 && 1.01 & 0.04 & 0.14 && 0.922 &\\ &&&& 0.80 && 0.90 && 0.93 & 0.02 & 0.14 && 0.946 &\\ \hline\hline \end{tabular} } \caption{Monte Carlo simulation results for the sparsity designs (i) and (ii). The true UQPE is numerically computed. The 95\% coverage is uniform over the set $[0.20,0.80]$.} \label{tab:simulation_approximate_sparsity} \end{table} \begin{table}[tbh] \centering \scalebox{1}{ \begin{tabular}{ccccccccccccccc} \multicolumn{14}{c}{(iii) The Third Most Sparse Design -- with the Doubly Robust Score}\\ \hline\hline & & & & & & True && \multicolumn{3}{c}{Estimates} && \multicolumn{2}{c}{95\% Cover}\\ \cline{9-11}\cline{13-14} DGP & & $N$ & $p$ & $\tau$ && UQPE && Mean & Bias & RMSE && Point & Unif. 
\\ \hline \multirow{4}{*}{1 (iii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.00 && 1.06 & 0.06 & 0.19 && 0.940 &\multirow{4}{*}{0.962}\\ &&&& 0.40 && 1.00 && 1.06 & 0.05 & 0.14 && 0.948 &\\ &&&& 0.60 && 1.00 && 1.05 & 0.05 & 0.14 && 0.932 &\\ &&&& 0.80 && 1.00 && 1.04 & 0.03 & 0.17 && 0.936 &\\ \hline \multirow{4}{*}{2 (iii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.12 && 1.17 & 0.05 & 0.20 && 0.936 &\multirow{4}{*}{0.964}\\ &&&& 0.40 && 1.03 && 1.09 & 0.06 & 0.15 && 0.946 &\\ &&&& 0.60 && 0.95 && 1.00 & 0.05 & 0.14 && 0.936 &\\ &&&& 0.80 && 0.87 && 0.91 & 0.04 & 0.15 && 0.920 &\\ \hline \multirow{4}{*}{3 (iii)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.15 && 1.20 & 0.04 & 0.20 && 0.942 &\multirow{4}{*}{0.966}\\ &&&& 0.40 && 1.05 && 1.10 & 0.06 & 0.15 && 0.936 &\\ &&&& 0.60 && 0.97 && 1.02 & 0.05 & 0.14 && 0.936 &\\ &&&& 0.80 && 0.90 && 0.94 & 0.04 & 0.15 && 0.934 &\\ \hline\hline \\ \multicolumn{14}{c}{(iv) The Least Sparse Design -- with the Doubly Robust Score}\\ \hline\hline & & & & & & True && \multicolumn{3}{c}{Estimates} && \multicolumn{2}{c}{95\% Cover}\\ \cline{9-11}\cline{13-14} DGP & & $N$ & $p$ & $\tau$ && UQPE && Mean & Bias & RMSE && Point & Unif. 
\\ \hline \multirow{4}{*}{1 (iv)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.00 && 1.07 & 0.07 & 0.18 && 0.936 &\multirow{4}{*}{0.970}\\ &&&& 0.40 && 1.00 && 1.07 & 0.07 & 0.15 && 0.928 &\\ &&&& 0.60 && 1.00 && 1.06 & 0.06 & 0.15 && 0.930 &\\ &&&& 0.80 && 1.00 && 1.05 & 0.05 & 0.17 && 0.948 &\\ \hline \multirow{4}{*}{2 (iv)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.13 && 1.19 & 0.06 & 0.21 && 0.944 &\multirow{4}{*}{0.966}\\ &&&& 0.40 && 1.03 && 1.10 & 0.07 & 0.15 && 0.924 &\\ &&&& 0.60 && 0.95 && 1.01 & 0.06 & 0.14 && 0.938 &\\ &&&& 0.80 && 0.87 && 0.93 & 0.05 & 0.15 && 0.934 &\\ \hline \multirow{4}{*}{3 (iv)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 1.15 && 1.22 & 0.06 & 0.20 && 0.944 &\multirow{4}{*}{0.968}\\ &&&& 0.40 && 1.04 && 1.12 & 0.08 & 0.16 && 0.918 &\\ &&&& 0.60 && 0.97 && 1.03 & 0.06 & 0.14 && 0.928 &\\ &&&& 0.80 && 0.90 && 0.96 & 0.06 & 0.16 && 0.950 &\\ \hline\hline \end{tabular} } \caption{Monte Carlo simulation results for the sparsity designs (iii) and (iv). The true UQPE is numerically computed. The 95\% coverage is uniform over the set $[0.20,0.80]$.} \label{tab:simulation_approximate_sparsity_less_sparse} \end{table} To highlight the value added by our proposed method to the existing literature, we also experiment with the RIF-Logit estimator from \cite{firpo2009unconditional} as a benchmark. Table \ref{tab:simulation_FFL} summarizes the simulation results based on the RIF-Logit for the sample size of $N=500$ under the most sparse design (i). Observe that, as the dimension $p$ increases from 25 to 50, the finite sample performance substantially degrades in terms of all of the displayed statistics, namely the bias, RMSE, and (pointwise and uniform) 95\% coverage frequencies. In particular, the uniform coverage frequency drops to zero even for the dimension that is as small as $p=50$. 
With the same sample size of $N=500$, on the other hand, our proposed method produces accurate coverage frequencies as well as accurate estimates for the even larger dimension $p=100$ as presented in Tables \ref{tab:simulation_approximate_sparsity} and \ref{tab:simulation_approximate_sparsity_less_sparse}. This comparison sheds light on the favorable finite sample performance of our proposed method when there are high-dimensional controls, in comparison with the existing alternative method. With that said, we would like to remark that we pay the cost of additional assumptions for nuisance parameter estimation, and hence there are tradeoffs between the existing procedure \citep{firpo2009unconditional} and our proposed method. \begin{table}[t] \centering \scalebox{1}{ \begin{tabular}{ccccccccccccccc} \multicolumn{14}{c}{The Conventional RIF-Logit Estimator under the Most Sparse Design (i)}\\ \hline\hline & & & & && True && \multicolumn{3}{c}{Estimates} && \multicolumn{2}{c}{95\% Cover}\\ \cline{9-11}\cline{13-14} DGP & & $N$ & $p$ & $\tau$ && UQPE && Mean & Bias & RMSE && Point & Unif.
\\ \hline \multirow{8}{*}{1 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{25} & 0.20 && 1.00 && 1.05 & 0.05 & 0.17 && 0.872 &\multirow{4}{*}{0.902}\\ &&&& 0.40 && 1.00 && 1.03 & 0.03 & 0.13 && 0.892 &\\ &&&& 0.60 && 1.00 && 1.03 & 0.03 & 0.13 && 0.898 &\\ &&&& 0.80 && 1.00 && 1.05 & 0.05 & 0.17 && 0.886 &\\ \cline{3-14} && \multirow{4}{*}{500} & \multirow{4}{*}{50} & 0.20 && 1.00 && 0.18 &-0.82 & 1.32 && 0.008 &\multirow{4}{*}{0.000}\\ &&&& 0.40 && 1.00 && 1.33 & 0.33 & 0.63 && 0.500 &\\ &&&& 0.60 && 1.00 && 1.33 & 0.33 & 0.57 && 0.474 &\\ &&&& 0.80 && 1.00 && 0.15 &-0.85 & 1.06 && 0.010 &\\ \hline \multirow{8}{*}{2 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{25} & 0.20 && 1.12 && 1.19 & 0.07 & 0.20 && 0.876 &\multirow{4}{*}{0.906}\\ &&&& 0.40 && 1.03 && 1.06 & 0.04 & 0.14 && 0.884 &\\ &&&& 0.60 && 0.96 && 0.99 & 0.03 & 0.12 && 0.900 &\\ &&&& 0.80 && 0.88 && 0.92 & 0.04 & 0.15 && 0.878 &\\ \cline{3-14} && \multirow{4}{*}{500} & \multirow{4}{*}{50} & 0.20 && 1.12 && 0.08 &-1.03 & 1.28 && 0.006 &\multirow{4}{*}{0.000}\\ &&&& 0.40 && 1.03 && 1.36 & 0.34 & 0.58 && 0.464 &\\ &&&& 0.60 && 0.95 && 1.24 & 0.29 & 0.49 && 0.528 &\\ &&&& 0.80 && 0.87 && 0.34 &-0.53 & 1.04 && 0.020 &\\ \hline \multirow{8}{*}{3 (i)} & \multirow{8}{*}{} & \multirow{4}{*}{500} & \multirow{4}{*}{25} & 0.20 && 1.14 && 1.22 & 0.07 & 0.21 && 0.886 &\multirow{4}{*}{0.912}\\ &&&& 0.40 && 1.04 && 1.08 & 0.04 & 0.14 && 0.892 &\\ &&&& 0.60 && 0.97 && 1.01 & 0.03 & 0.13 && 0.900 &\\ &&&& 0.80 && 0.90 && 0.95 & 0.04 & 0.15 && 0.874 &\\ \cline{3-14} && \multirow{4}{*}{500} & \multirow{4}{*}{50} & 0.20 && 1.14 && 0.04 &-1.10 & 1.15 && 0.006 &\multirow{4}{*}{0.000}\\ &&&& 0.40 && 1.04 && 1.41 & 0.37 & 0.64 && 0.444 &\\ &&&& 0.60 && 0.97 && 1.28 & 0.31 & 0.52 && 0.502 &\\ &&&& 0.80 && 0.90 && 0.26 &-0.65 & 0.98 && 0.016 &\\ \hline\hline \end{tabular} } \caption{Monte Carlo simulation results based on the conventional RIF-Logit estimator under the sparsity design (i). 
The true UQPE is numerically computed. The 95\% coverage is uniform over the set $[0.20,0.80]$.} \label{tab:simulation_FFL} \end{table} In addition to the RIF-Logit estimator, we also experiment with a RIF-Lasso-Logit estimator. This estimator has not been formally investigated in the literature to our knowledge, but it coincides with a non-orthogonalized version of our procedure and hence serves as a useful benchmark for evaluating the benefits of our proposed doubly robust score. Table \ref{tab:simulation_double_vs_nodouble} shows the simulation results based on this procedure without the doubly robust score (right panel) compared with the results of our proposed procedure with the doubly robust score (left panel), copied from Table \ref{tab:simulation_approximate_sparsity}. While the coverage frequencies for our proposed method achieve the nominal probability of 95\%, those for the counterpart without the doubly robust score fall short of 95\%. These results show that the method without the doubly robust score incurs larger size distortions and demonstrate that it is useful to employ the doubly robust score as we do for our proposed method. Finally, we consider the pointwise and uniform tests for $UQPE(\tau) = 0$ by testing $\theta(\tau) = 0$. The corresponding simulation results are collected in Appendix \ref{sec:additional_simulation} in the online supplement. \begin{table}[tbh] \centering \scalebox{1}{ \begin{tabular}{ccccccccccccccc} \multicolumn{14}{c}{(i) The Most Sparse Design}\\ \hline\hline \multicolumn{7}{c}{With the Doubly Robust Score}&&\multicolumn{7}{c}{Without the Doubly Robust Score}\\ \cline{1-7}\cline{9-15} &&&&& \multicolumn{2}{c}{95\% Cover} &&&&&&& \multicolumn{2}{c}{95\% Cover}\\ \cline{6-7}\cline{14-15} DGP & $N$ & $p$ & $\tau$ && Point & Unif.
&& DGP & $N$ & $p$ & $\tau$ && Point & Unif.\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{1 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.948 &\multirow{4}{*}{0.956} && \multirow{4}{*}{1 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.930 &\multirow{4}{*}{0.912}\\ &&& 0.40 && 0.948 & && &&& 0.40 && 0.912\\ &&& 0.60 && 0.954 & && &&& 0.60 && 0.902\\ &&& 0.80 && 0.948 & && &&& 0.80 && 0.910\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{2 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.952 &\multirow{4}{*}{0.956} && \multirow{4}{*}{2 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.924 &\multirow{4}{*}{0.910}\\ &&& 0.40 && 0.946 & && &&& 0.40 && 0.908\\ &&& 0.60 && 0.950 & && &&& 0.60 && 0.906\\ &&& 0.80 && 0.950 & && &&& 0.80 && 0.916\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{3 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.950 &\multirow{4}{*}{0.950} && \multirow{4}{*}{3 (i)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.932 &\multirow{4}{*}{0.914}\\ &&& 0.40 && 0.942 & && &&& 0.40 && 0.910\\ &&& 0.60 && 0.944 & && &&& 0.60 && 0.908\\ &&& 0.80 && 0.952 & && &&& 0.80 && 0.922\\ \hline\hline \\ \multicolumn{14}{c}{(ii) The Second Most Sparse Design}\\ \hline\hline \multicolumn{7}{c}{With the Doubly Robust Score}&&\multicolumn{7}{c}{Without the Doubly Robust Score}\\ \cline{1-7}\cline{9-15} &&&&& \multicolumn{2}{c}{95\% Cover} &&&&&&& \multicolumn{2}{c}{95\% Cover}\\ \cline{6-7}\cline{14-15} DGP & $N$ & $p$ & $\tau$ && Point & Unif. 
&& DGP & $N$ & $p$ & $\tau$ && Point & Unif.\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{1 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.930 &\multirow{4}{*}{0.962} && \multirow{4}{*}{1 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.914 &\multirow{4}{*}{0.920}\\ &&& 0.40 && 0.954 & && &&& 0.40 && 0.924\\ &&& 0.60 && 0.920 & && &&& 0.60 && 0.882\\ &&& 0.80 && 0.944 & && &&& 0.80 && 0.914\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{2 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.944 &\multirow{4}{*}{0.954} && \multirow{4}{*}{2 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.922 &\multirow{4}{*}{0.910}\\ &&& 0.40 && 0.938 & && &&& 0.40 && 0.920\\ &&& 0.60 && 0.918 & && &&& 0.60 && 0.890\\ &&& 0.80 && 0.954 & && &&& 0.80 && 0.926\\ \cline{1-7}\cline{9-15} \multirow{4}{*}{3 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.938 &\multirow{4}{*}{0.960} && \multirow{4}{*}{3 (ii)} & \multirow{4}{*}{500} & \multirow{4}{*}{100} & 0.20 && 0.914 &\multirow{4}{*}{0.920}\\ &&& 0.40 && 0.932 & && &&& 0.40 && 0.912\\ &&& 0.60 && 0.922 & && &&& 0.60 && 0.892\\ &&& 0.80 && 0.946 & && &&& 0.80 && 0.908\\ \hline\hline \end{tabular} } \caption{Monte Carlo simulation results for the sparsity designs (i) and (ii) with the doubly robust score (left) and without the doubly robust score (right). The true UQPE is numerically computed. The 95\% coverage is uniform over the set $[0.20,0.80]$.} \label{tab:simulation_double_vs_nodouble} \end{table} \section{Heterogeneous Counterfactual Marginal Effects of Job Corps Training}\label{sec:empirics} The UQPE identifies counterfactual effects that are heterogeneous across outcome levels $Y$. This feature of the UQPE is useful for evaluating economic policies designed to benefit targeted subpopulations of the economy that are identified in terms of economic outcomes such as wage and income. 
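For reference, the object of interest can be written compactly. In the notation used in the footnotes of this section, with $F^{\epsilon}_{Y}$ denoting the counterfactual outcome distribution under a marginal shift of size $\epsilon$ in the treatment coordinate of $X$, the UQPE at quantile $\tau$ is
\[
UQPE(\tau) = \frac{\partial Q_{\tau}(F^{\epsilon}_{Y})}{\partial \epsilon}\bigg|_{\epsilon=0},
\]
i.e., the marginal effect of the counterfactual change on the $\tau$-th unconditional quantile of $Y$.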
For instance, major job training programs are designed to benefit targeted subpopulations of individuals who are low wage earners, i.e., lower quantiles of $Y$. In redesigning a job training program, a policy maker may want to choose changes in $X$ that particularly benefit these targeted subpopulations (with potentially lower wages) rather than the others (with potentially higher wages). Therefore, it is important for the policy maker to understand heterogeneous outcome gains (e.g., wage increase) of alternative counterfactual changes in $X$ across different subpopulations characterized by the levels of $Y$. The UQPE provides a tool for exactly this goal. While a rich set of empirical findings has been reported about the treatment effects of Job Corps, an analysis of heterogeneous counterfactual effects is missing in the literature to the best of our knowledge, despite its relevance to designing effective program policies and schemes as emphasized in the previous paragraph. Applying our proposed method, we analyze heterogeneous counterfactual marginal effects of Job Corps training on labor outcomes in this section. Specifically, it is important to determine whether higher (respectively, lower) potential earners would benefit more (respectively, less) from counterfactually extending the duration of the training program. Since the entrance interview in Job Corps provides some information regarding the human capital of prospective trainees, answers to these empirical questions may help program designers devise more efficient policies and schemes for the training programs. As such, we are interested in heterogeneous counterfactual marginal effects of the duration of the exposure to the program, as a continuous treatment variable, on labor outcomes measured by hourly wages. We identify and estimate the counterfactual distributional change given a large set of observed controls by taking advantage of our machine-learning-based method.
For the outcome variable, we consider hourly wages. For the continuous treatment variables, we consider two seemingly similar but different measures: the duration in days of participation in Job Corps and the duration in days of actually taking classes in Job Corps. As will be shown shortly, these two definitions lead to qualitatively different empirical findings. We use 42 observed controls (and their powers). Table \ref{tab:summary_statistics} shows the summary statistics of our data. Different sets of observations are missing across different variables, and hence we use the intersection of observations that are non-missing across all the variables in use for our analysis. After dropping the missing observations, we are left with $N=481$ when we define the duration of participation in Job Corps as the treatment, while we are left with $N=368$ when we define the duration of actually taking classes in Job Corps as the treatment. Taking the intersection of these two samples, we use a subsample of size $N=347$. Note that the dimension of covariates is relatively large given this effective sample size, and hence high-dimensional econometric methods are indispensable in the current application.
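The complete-case construction described above can be sketched as follows. This is an illustrative sketch with a synthetic DataFrame and hypothetical column names, not the code used for the paper.

```python
import numpy as np
import pandas as pd

# Toy data standing in for the Job Corps survey; column names are hypothetical,
# and missingness rates are arbitrary.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hourly_wage":  np.where(rng.random(n) < 0.5, rng.normal(5.9, 1.0, n), np.nan),
    "days_in_jc":   np.where(rng.random(n) < 0.3, rng.normal(153, 60, n), np.nan),
    "days_classes": np.where(rng.random(n) < 0.3, rng.normal(120, 50, n), np.nan),
    "age":          rng.integers(16, 25, n).astype(float),
})

# Complete cases for each treatment definition separately ...
n_participation = df.dropna(subset=["hourly_wage", "days_in_jc", "age"]).shape[0]
n_classes = df.dropna(subset=["hourly_wage", "days_classes", "age"]).shape[0]

# ... and the intersection used for the analysis: rows non-missing in ALL variables.
analysis_sample = df.dropna()
print(n_participation, n_classes, len(analysis_sample))
```

The intersection sample is necessarily no larger than either single-treatment sample, mirroring how $N=347$ is smaller than both $N=481$ and $N=368$ in the paper.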
\begin{table} \centering \scalebox{0.90}{ \begin{tabular}{llccccc} \hline\hline && 25th & & & 75th & Non-\\ && Percentile & Median & Mean & Percentile & Missing \\ \hline Outcome $Y$ & Hourly wage & 4.750 & 5.340 & 5.892 & 6.500 & 7606\\ \hline Treatment $X_1$ & Days in Job Corps & 54.0 & 129.0 & 153.4 & 237.0 & 4748\\ & Days taking classes & 41.0 & 91.0 & 120.2 & 179.0 & 4207\\ \hline Controls $X_{-1}$ & Age & 17.00 & 18.00 & 18.43 & 20.00 & 14653\\ & Female & 0.000 & 0.000 & 0.396 & 1.000 & 14653\\ & White & 0.000 & 0.000 & 0.303 & 1.000 & 14327\\ & Black & 0.000 & 1.000 & 0.504 & 1.000 & 14327\\ & Hispanic origin & 0.000 & 0.000 & 0.184 & 0.000 & 14288\\ & Native language is English & 1.000 & 1.000 & 0.855 & 1.000 & 14327\\ & Years of education & 9.00 & 10.00 & 10.24 & 11.00 & 14327\\ & Other job trainings & 0.000 & 0.000 & 0.339 & 1.000 & 13500\\ & Mother's education & 11.00 & 12.00 & 11.53 & 11.53 & 11599\\ & Mother worked & 1.000 & 1.000 & 0.752 & 1.000 & 14223\\ & Father's education & 11.00 & 12.00 & 11.50 & 12.00 & 8774\\ & Father worked & 0.000 & 1.000 & 0.665 & 1.000 & 12906\\ & Received welfare & 0.000 & 1.000 & 0.563 & 1.000 & 14327\\ & Head of household & 0.000 & 0.000 & 0.123 & 0.000 & 14327\\ & Number of people in household & 2.000 & 3.000 & 3.890 & 5.000 & 14327\\ & Married & 0.000 & 0.000 & 0.021 & 0.000 & 14327\\ & Separated & 0.000 & 0.000 & 0.017 & 0.000 & 14327\\ & Divorced & 0.000 & 0.000 & 0.007 & 0.000 & 14327\\ & Living with spouse & 0.000 & 0.000 & 0.014 & 0.000 & 14235\\ & Child & 0.000 & 0.000 & 0.266 & 1.000 & 13500\\ & Number of children & 0.000 & 0.000 & 0.347 & 0.000 & 13500\\ & Past work experience & 0.000 & 1.000 & 0.648 & 1.000 & 14327\\ & Past hours of work per week & 0.000 & 24.00 & 25.15 & 40.00 & 14299\\ & Past hourly wage & 4.250 & 5.000 & 5.142 & 5.500 & 7884\\ & Expected wage after training & 7.000 & 9.000 & 9.910 & 11.000 & 6561\\ & Public housing or subsidy & 0.000 & 0.000 & 0.200 & 0.000 & 14327\\ & Own house & 0.000 & 
0.000 & 0.411 & 1.000 & 11457\\ & Have contributed to mortgage & 0.000 & 0.000 & 0.255 & 1.000 & 13951\\ & Past AFDC & 0.000 & 0.000 & 0.301 & 1.000 & 14327\\ & Past SSI or SSA & 0.000 & 0.000 & 0.251 & 1.000 & 14327\\ & Past food stamps & 0.000 & 0.000 & 0.438 & 1.000 & 14327\\ & Past family income $\ge$ \$12K & 0.000 & 1.000 & 0.576 & 1.000 & 14327\\ & In good health & 1.000 & 1.000 & 0.871 & 1.000 & 14327\\ & Physical or emotional problem & 0.000 & 0.000 & 0.049 & 0.000 & 14327\\ & Smoke & 0.000 & 1.000 & 0.537 & 1.000 & 14327\\ & Alcohol & 0.000 & 1.000 & 0.584 & 1.000 & 14327\\ & Marijuana or hashish & 0.000 & 0.000 & 0.369 & 1.000 & 14327\\ & Cocaine & 0.000 & 0.000 & 0.033 & 0.000 & 14327\\ & Heroin/opium/methadone & 0.000 & 0.000 & 0.012 & 0.000 & 14327\\ & LSD/peyote/psilocybin & 0.000 & 0.000 & 0.055 & 0.000 & 14327\\ & Arrested & 0.000 & 0.000 & 0.266 & 1.000 & 14327\\ & Number of times arrested & 0.000 & 0.000 & 0.537 & 1.000 & 14218\\ \hline\hline \end{tabular} } \caption{Summary statistics of data.} \label{tab:summary_statistics} \end{table} Observe that our sample involves high-dimensional controls, while the sample size that results from the aforementioned sample selection is not sufficiently large for conventional econometric methods of estimation and inference for the UQPE. We therefore use our proposed method of estimation and inference for the UQPE that can accommodate a large dimension of controls via the use of the doubly robust score. Using the same computer program as the one used for simulation studies presented in Section \ref{sec:simulation}, we obtain estimates, pointwise 95\% confidence intervals, and uniform 95\% confidence bands for $UQPE(\tau)$ for $\tau \in [0.20,0.80]$. Table \ref{tab:job_corps} summarizes the results. The row groups (I) and (II) report results for days in Job Corps as the treatment variable, while the row groups (III) and (IV) report results for days of taking classes in Job Corps as the treatment variable.
The row groups (I) and (III) report results for the hourly wage as the outcome variable, while the row groups (II) and (IV) report results for the logarithm of the hourly wage as the outcome variable. \begin{table}[t] \centering \scalebox{1.00}{ \begin{tabular}{rllccrrrr} \hline\hline & Outcome & Treatment & $\tau$ & $\widehat{UQPE}(\tau)$ & \multicolumn{2}{c}{Pointwise 95\% CI} & \multicolumn{2}{c}{Uniform 95\% CB}\\ \hline (I) & Hourly & Days in & 0.2 & 1.16 & [0.79 & 1.54] & [0.30 & 2.03]\\ &wage & Job Corps & 0.4 & 1.95 & [1.52 & 2.39] & [0.94 & 2.97]\\ & & & 0.6 & 1.60 & [0.26 & 2.94] & [0.11 & 3.09]\\ & & & 0.8 & 4.56 & [2.96 & 6.16] &[-0.67 & 9.79]\\ \hline (II) & Log & Days in & 0.2 & 0.20 & [0.13 & 0.27] & [0.02 & 0.38]\\ &hourly& Job Corps & 0.4 & 0.50 & [0.30 & 0.69] &[-0.12 & 1.11]\\ &wage & & 0.6 & 0.12 &[-0.15 & 0.38] &[-0.20 & 0.43]\\ & & & 0.8 & 0.66 & [0.37 & 0.96] &[-0.04 & 1.37]\\ \hline (III) & Hourly & Days in & 0.2 & 2.69 & [0.08 & 5.30] &[-19.06 & 24.44]\\ &wage & Job Corps & 0.4 & 2.66 & [2.07 & 3.25] &[-0.48 & 5.80]\\ & & classes & 0.6 & 1.14 & [0.00 & 2.29] &[-0.58 & 2.87]\\ & & & 0.8 & 5.30 & [2.76 & 7.84] &[-5.64 & 16.25]\\\hline (IV) & Log & Days in & 0.2 & 0.46 & [0.01 & 0.90] &[-4.24 & 5.15]\\ &hourly& Job Corps & 0.4 & 0.64 & [0.38 & 0.89] &[-1.38 & 2.65]\\ &wage & classes & 0.6 & 0.17 &[-0.22 & 0.55] &[-0.25 & 0.58]\\ & & & 0.8 & 0.77 & [0.42 & 1.13] &[-1.29 & 2.84]\\ \hline\hline \end{tabular} } \caption{Heterogeneous counterfactual marginal effects of days in Job Corps using $p=42$ controls. The displayed values are thousand times the original values for ease of reading. The row groups (I) and (II) report results for days in Job Corps as the treatment variable, while the row groups (III) and (IV) report results for days of taking classes in Job Corps as the treatment variable. 
The row groups (I) and (III) report results for the hourly wage as the outcome variable, while the row groups (II) and (IV) report results for the logarithm of the hourly wage as the outcome variable. The results are based on the sample size of $N=347$.} \label{tab:job_corps} \end{table} Overall, the magnitudes of the estimates are consistent with those from prior studies, and we also obtain the following new findings.\footnote{In the row group (I) in Table \ref{tab:job_corps} for instance, the \textit{daily} marginal effects range from 0.0012 to 0.0046 dollars. This magnitude is consistent with the 0.22 difference in average hourly wages between the treatment and control groups \citep*[][Table 3]{SBM2008}, where the average number of days in Job Corps for the treated group is 153.4 (Table \ref{tab:summary_statistics}).} First, observe that none of the uniform 95\% confidence bands are contained in the negative reals. These results indicate that the counterfactual marginal effects of interest are significantly negative for none of the heterogeneous subpopulations. We next look into the heterogeneity of these effects. Observe in row (I) that the uniform 95\% confidence band for $\tau=0.2$ is contained in the positive reals while the uniform 95\% confidence band for $\tau=0.8$ intersects with zero. These results imply heterogeneous statistical significance across quantiles. Specifically, we predict significantly positive counterfactual effects for lower wage earners ($\tau=0.2$, $0.4$ and $0.6$) and insignificant effects for higher wage earners ($\tau=0.8$). On the other hand, the point estimate is smaller for $\tau=0.2$ than that for $\tau=0.8$ in row (I). The larger effects for the subpopulation of higher potential earners (i.e., higher quantiles) could simply result from the scale effect.
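The scale-effect reasoning can be made precise with a short derivation; this is a standard property of quantiles rather than a result specific to our estimator. Since the logarithm is strictly increasing, quantiles commute with it:
\[
Q_{\tau}(F^{\epsilon}_{\log(Y)}) = \log Q_{\tau}(F^{\epsilon}_{Y}),
\]
and differentiating both sides with respect to $\epsilon$ at $\epsilon = 0$ yields
\[
\frac{\partial Q_{\tau}(F^{\epsilon}_{\log(Y)})}{\partial \epsilon}\bigg|_{\epsilon=0} = \frac{\partial Q_{\tau}(F^{\epsilon}_{Y})/\partial \epsilon}{Q_{\tau}(F^{\epsilon}_{Y})}\bigg|_{\epsilon=0}.
\]
In particular, if the level UQPE were merely proportional to the quantile $Q_{\tau}(F_{Y})$ itself, the UQPE of the log outcome would be constant across $\tau$.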
Heterogeneity in causal effects across different quantiles often vanishes once we take the logarithm of the outcome variable.\footnote{The relationship ${\partial Q_{\tau}(F^\epsilon_{\log(Y)})}/{\partial \epsilon}\big|_{\epsilon=0} = {(\partial Q_{\tau}(F^\epsilon_{Y})/\partial \epsilon)}/{Q_{\tau}(F^\epsilon_{Y})}\big|_{\epsilon=0}$ implies that the signs of the level effect and the log effect coincide at the population level, even though the signs of empirical estimates and statistical significance may not coincide as in our empirical results.} Therefore, we next consider the row group (II), where the outcome variable is defined as the logarithm of the hourly wage. Notice that, even in this row group, we continue to observe the same qualitative pattern as that in the row group (I). Namely, the point estimate is smaller for $\tau=0.2$ than that for $\tau=0.8$, but the uniform 95\% confidence band for $\tau=0.2$ is contained in the positive reals while the uniform 95\% confidence band for $\tau=0.8$ intersects with zero. These results give policy makers confidence that extending the duration of exposure to the Job Corps program will benefit lower potential wage earners. Once we turn to row groups (III) and (IV), where the treatment variable is now defined as days of taking classes in Job Corps, we no longer observe the aforementioned pattern of heterogeneous counterfactual marginal effects, and the uniform 95\% confidence bands globally intersect with zero. However, if we implement the test $UQPE(\tau)=0, \forall\tau\in [0.20,0.80]$ based on $\hat\theta(\tau)$ as in Section \ref{sec:testing_uqpe}, then we actually reject this hypothesis of uniformly zero counterfactual marginal effects at the 95\% confidence level. In summary, we obtain the following three new findings about counterfactual marginal effects of the duration of exposure to Job Corps training on the hourly wage.
First, the effects are significantly negative for none of the heterogeneous subpopulations under consideration, regardless of the definition of the treatment variable and the definition of the outcome variable. Second, the counterfactual marginal effects of days in Job Corps are significant for the subpopulation of lower wage earners, while they are insignificant for higher wage earners. This result holds robustly regardless of whether we define the outcome variable as the hourly wage or the logarithm of it. Third, we fail to detect the aforementioned pattern of counterfactual marginal effects once we define the treatment variable as days of taking classes in Job Corps, while we still reject the hypothesis of uniformly zero counterfactual marginal effects. These results have the following policy implications. Extending the duration of exposure to the Job Corps training program will be effective especially for the targeted subpopulations of lower potential wage earners. However, these benefits may come from sources other than the experience of taking classes in the Job Corps training program. Finally, to provide further insight into our proposed method, we conclude this section with a more detailed discussion of what was implemented inside the black box to produce the results reported in Table \ref{tab:job_corps}. When estimating the core functions $m_0(x,q_\tau)$ and $m_1(x,q_\tau)$ by the lasso logit, we in fact select different subvectors of $b(X)$ across different quantiles $\tau$. Table \ref{tab:selection} shows which controls and/or their powers are selected for each $\tau \in \{0.20,0.40,0.60,0.80\}$. While there are some controls that are common (such as the intercept and the number of people in household) across all $\tau$, the selections are by no means uniform across $\tau$. A researcher does not \textit{ex ante} know which variables among many in the list should be included for each quantile $\tau$.
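A minimal sketch of such quantile-specific selection is given below, using an off-the-shelf L1-penalized logit (assuming scikit-learn is available). The synthetic data, the indicator construction, and the fixed penalty level are illustrative stand-ins for the actual lasso-logit estimation of $m_0(x,q_\tau)$ and $m_1(x,q_\tau)$, not the paper's procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 347, 42                         # dimensions as in the empirical application
X = rng.normal(size=(n, p))
y = 5.0 + 1.5 * X[:, 0] + rng.normal(size=n)   # wage-like outcome; only control 0 matters

selected = {}
for tau in (0.2, 0.4, 0.6, 0.8):
    q_tau = np.quantile(y, tau)
    d = (y <= q_tau).astype(int)       # indicator whose conditional mean plays the role of m(x, q_tau)
    # L1-penalized logit; smaller C means a heavier penalty and fewer variables kept
    fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, d)
    selected[tau] = np.flatnonzero(fit.coef_[0])

# The selected index sets typically differ across quantiles, as in Table tab:selection.
print({tau: idx.tolist() for tau, idx in selected.items()})
```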
Including all the potentially relevant controls would incur non-trivial size distortions -- recall the simulation results shown in Table \ref{tab:simulation_FFL} in Section \ref{sec:simulation}. Our proposed method of inference, on the other hand, achieves the nominal size, and it is the variables shown in Table \ref{tab:selection} that were selected as relevant for each $\tau \in \{0.20,0.40,0.60,0.80\}$ in producing the estimation results reported in Table \ref{tab:job_corps}. \begin{table}[t] \centering \scalebox{1.00}{ \begin{tabular}{rcccc} \hline\hline $\tau$ & 0.2 & 0.4 & 0.6 & 0.8\\ \hline & Intercept & Intercept & Intercept & Intercept \\ \cline{2-5} & & & & Married \\ \cline{2-5} & & & Separated & Separated \\ \cline{2-5} & & & & Living with spouse \\ \cline{2-5} & & & Education & Education \\ \cline{2-5} & Number of people & Number of people & Number of people & Number of people \\ & in household & in household & in household & in household \\ \hline\hline \end{tabular} } \caption{The list of variables selected by the lasso logit estimation of $m_0(x,q_\tau)$ and $m_1(x,q_\tau)$ for $\tau \in \{0.2,0.4,0.6,0.8\}$.} \label{tab:selection} \end{table} \section{Conclusion}\label{sec:concl} Counterfactual analyses often involve high-dimensional controls. On the other hand, existing methods of estimation and inference for heterogeneous counterfactual changes are not compatible with high-dimensional settings. In this paper, we therefore propose a novel doubly/locally robust score for debiased estimation and inference for the UQPE as a measure of heterogeneous counterfactual marginal effects. A concrete implementation procedure is provided for estimation and multiplier bootstrap inference. The online supplement additionally presents a general class of estimation and inference procedures. Asymptotic theory guarantees that the bootstrap method achieves size control. Simulation studies support our theoretical properties.
Applying the proposed method of estimation and inference to survey data of Job Corps, the largest training program for disadvantaged youth in the United States, we obtain the following two policy implications. First, extending the duration of exposure to the Job Corps training program will be effective especially for the targeted subpopulations of lower potential wage earners. Second, these benefits may come from sources other than the experience of taking classes in the Job Corps training program.
\section{Introduction} Consider the problem of explaining sequential decision-making on the basis of demonstrated behavior. In healthcare, an important goal lies in being able to obtain an interpretable parameterization of the experts' behavior (e.g.~in terms of how they assign treatments) such that we can quantify and inspect policies in different institutions and uncover the trade-offs and preferences associated with expert actions \citep{james2000challenge, westert2018medical, van2016physician, jarrett2020inverse}. Moreover, modeling the reward function of different clinical practitioners can be revealing as to their tendencies \mbox{to treat various diseases more/less aggressively \citep{rysavy2015between}, which} \textemdash in combination with patient outcomes\textemdash has the potential to inform and update clinical guidelines. In many settings, such as medicine, decision-makers can be modeled as reasoning about ``what-if'' patient outcomes: Given the available information about the patient, what would happen if we took a particular action? \citep{djulbegovic2018rational, mcgrath2009doctors}. As treatments often affect several patient covariates, by having both benefits and side-effects, decision-makers often make choices based on their preferences over these counterfactual outcomes. Thus, in our case, an interpretable explanation of a policy is one where the reward signal for (sequential) actions is parameterized on the basis of preferences over (sequential) counterfactuals (i.e.\ ``what-if'' patient outcomes). Given the observations and actions made by an expert, \textit{inverse reinforcement learning} (IRL) offers a principled way for modeling their behavior by recovering the (unknown) reward function being maximized \citep{ng2000algorithms, abbeel2004apprenticeship, choi2011inverse}. Standard solutions operate by iterating on candidate reward functions, solving the associated (forward) reinforcement learning problem at each step.
In many real-world problems, however, we are specifically interested in the challenge of offline learning\textemdash that is, where further experimentation is not possible\textemdash such as in medicine. In this \textit{batch} setting, we only have access to trajectories sampled from the expert policy in the form of an observational dataset\textemdash such as in electronic health records. \textbf{Batch IRL.}~ By their nature, classic IRL algorithms require interactive access to the environment, or full knowledge of the environment's dynamics \citep{ng2000algorithms, abbeel2004apprenticeship, choi2011inverse}. While batch IRL solutions have been proposed by way of off-policy evaluation \citep{klein2011batch, klein2012inverse, lee2019truly}, they suffer from two disadvantages. First, they are limited by the assumption that state dynamics are fully-observable and Markovian. This is hardly true in medicine: treatment assignments generally depend on how patient covariates have evolved over time \citep{futoma2020popcorn}. Second, rewards are often parameterized as uninterpretable representations of neural network hidden states and consequently cannot be used to explain sequential decision making. \begin{figure} \centering \vspace{-5mm} \includegraphics[width=0.80\textwidth]{figs/intro_fig_new.pdf}% \caption{Explaining decision-making behaviour in terms of preferences over ``what if'' outcomes. Consider the evolution of tumour volume ($U$) and side effects ($Z$) under a binary action. $\bb{E}[U_{t+1}[a_t]\mid h_t]$ and $\bb{E}[Z_{t+1}[a_t]\mid h_t]$ are the counterfactuals for the patient features under action $a_t$ given history $h_t$ of prior actions and covariates. Parameterizing the reward as the weighted sum of these counterfactuals: $R(h_t, a_t) = w_u \bb{E}[U_{t+1}[a_t]\mid h_t] + w_z \bb{E}[Z_{t+1}[a_t]\mid h_t]$, naturally allows us to model the preferences of experts: e.g.
finding that $|w_u|>|w_z|$ indicates that the expert is treating more aggressively, by placing more weight on reducing tumour volume than on minimizing side effects.}% \label{fig:motivation} \vspace{-7mm} \end{figure} \textbf{``What-if'' Explanations.}~ To address these shortcomings and to obtain a parameterizable interpretation of the expert's behavior, we propose explicitly incorporating counterfactual reasoning into batch IRL. In particular, we focus on ``what if'' explanations for modeling decision-making, while simultaneously accounting for the partially-observable nature of patient histories. Under the max-margin apprenticeship framework \citep{abbeel2004apprenticeship, klein2011batch, lee2019truly}, we learn a parameterized reward function $R(h_t, a_t)$ that is defined as a weighted sum over \textit{potential outcomes} \citep{rubin2005causal} for taking action $a_t$ given history $h_t$. As highlighted in Figure \ref{fig:motivation}, consider the decision making process of assigning a binary action given the tumour volume ($U$) and side effects ($Z$). Let $\bb{E}[U_{t+1}[a_t]\mid h_t]$ and $\bb{E}[Z_{t+1}[a_t]\mid h_t]$ be the counterfactual outcomes for the two covariates when action $a_t$ is taken given the history $h_t$ of covariates and previous actions. We define the reward as the weighted sum of these counterfactuals: $R(h_t, a_t) = w_u \bb{E}[U_{t+1}[a_t]\mid h_t] + w_z \bb{E}[Z_{t+1}[a_t]\mid h_t]$, to take into account the effect of actions and to directly model the preferences of the expert. The ideal scenario is when both the tumour volume and the side effects are zero, so the reward weights of a doctor aiming for this are both negative. However, if we recover $|w_u| > |w_z|$, this means that the doctor is treating more aggressively, focusing more on reducing the tumour volume than on the side effects of treatments.
Alternatively, $|w_u| < |w_z|$ indicates that the side effects are more important and the expert is treating less aggressively. Our motivation for using counterfactuals to define the reward comes from the idea that rational decision making considers the potential effects of actions \citep{djulbegovic2018rational}. \textbf{Contributions.}~ Exploring the synergy between counterfactual reasoning and batch IRL for understanding sequential decision making confers multiple advantages. First, it offers a principled approach for parameterizing reward functions in terms of preferences over \textit{what-if} patient outcomes, which enables us to explain the cost-benefit tradeoffs associated with an expert's actions. Second, by estimating the effects of different actions, counterfactuals readily tackle the \textit{off-policy} nature of policy evaluation in the batch setting. Furthermore, we demonstrate that not only does this alleviate the \textit{cold-start} problem typical of conventional batch IRL solutions, but also accommodates settings where the usual assumption of full observability fails to hold. Through experiments in both real and simulated medical environments, we illustrate the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior. \vspace{-3mm} \section{Related works} \vspace{-3mm} In our work, the aim is to explain decision-making by recovering the preferences of experts with respect to the effects of their actions, denoted by the counterfactual outcomes. This goal is fundamentally different from the goal of IRL methods which generally aim to match the performance of experts. We operate under the standard max-margin apprenticeship framework \citep{ng2000algorithms, abbeel2004apprenticeship}, which searches for a reward function that minimizes the margin between feature expectations of the expert and candidate policies. 
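For completeness, the max-margin program referenced above can be stated as follows; this is the standard formulation of \citet{abbeel2004apprenticeship}, with $\delta$ denoting the margin. Given a linear reward $R = w^{\top}\phi$, one searches for weights under which the expert outperforms all candidate policies by the largest margin:
\[
\max_{\delta,\, \|w\|_2 \leq 1} \delta \quad \text{s.t.} \quad w^{\top}\mu(\pi_E) \geq w^{\top}\mu(\pi) + \delta \quad \forall \pi \in \Pi,
\]
where $\mu(\pi)$ denotes the discounted feature expectations of policy $\pi$, $\pi_E$ is the expert policy, and $\Pi$ is the set of candidate policies generated so far.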
However, our approach to recovering and understanding decision policies is uniquely characterized by incorporating counterfactuals to obtain explainable reward functions. To tackle the challenges posed by real-world decision making, our method also operates in an offline and model-free manner, and accommodates partially-observable environments. \begin{table*}[t] \begin{center} \begin{small} \setlength\tabcolsep{0.7pt} \renewcommand{\arraystretch}{0.7} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lcccccc} \toprule Method & Environment & Batch & Feature map for reward & Policy & Feat. expectations \\ \midrule \small{\citet{abbeel2004apprenticeship}} & Model-based & No & $\phi(s_t)$ = \text{basis functions for state} $s_t$ & $\pi(a_t\mid s_t)$ & Model roll-outs \\ \citet{choi2011inverse} & Model-based & No & $\sum_{s}b_{t}(s)\phi(s, a_t)$ = \text{basis for belief} $b_{t}$ & $\pi(a_t\mid b_t)$ & Model roll-outs \\ \citet{klein2011batch} & Model-free & Yes & $\phi(x_t)$ = \text{basis functions for state } $x_t$ & $\pi(a_t\mid x_t)$ & LSTD-Q \\ \citet{lee2019truly} & Model-free & Yes & \smash{$ \phi(x_t, a_t) = \text{concat} (\phi(x_t), a_t)$} & $\pi(a_t\mid x_t)$ & DSFN \\ \midrule Ours & Model-free & Yes & $\phi(h_t, a_t) = \mathbb{E}[Y_{t+1}[a_t] | h_t]$ & $\pi(a_t\mid h_t)$ & \makecell{Counterfactual \\ $\mu$-learning} \\ \bottomrule \end{tabular} \end{adjustbox} \end{small} \end{center} \vspace{-0.5em} \caption{Comparison of our proposed method (batch, counterfactual IRL) with related works in IRL.} \label{tab:related-works} \vspace{-5mm} \end{table*} \textit{Explainability}. By using basis functions \citep{klein2012inverse} or hidden layers of a deep network \citep{lee2019truly} to define the feature map, the learned rewards of either approach are inherently uninterpretable, and cannot be used to explain differences in expert behavior. 
An alternative approach for recovering the expert policy (without reward functions) is imitation learning \citep{hussein2017imitation, osa2018algorithmic, torabi2019recent, jarrett2020strictly}. However, these methods do not allow us to fully model the decision-making process of experts and to uncover the trade-offs behind their actions. \textit{Batch Learning}. \citet{klein2011batch} propose an off-policy evaluation method based on least squares temporal difference (LSTD-$Q$) \citep{lagoudakis2003least} for estimating feature expectations, and \citet{klein2012inverse} use a linear score-based classifier to directly approximate the $Q$-function offline. However, both methods require the constraining assumptions that rewards are direct, linear functions of fully-observable states\textemdash assumptions we cannot afford to make in realistic settings such as medicine. \citet{lee2019truly} propose a deep successor feature network (DSFN) based on $Q$-learning to estimate feature expectations. But their approach similarly assumes fully-observable states, and additionally suffers from the ``cold-start'' problem where off-policy evaluations are heavily biased unless the initial candidate policy is (already) close to the expert. \textit{Partial Observability}. No existing batch IRL method accommodates modeling expert policies that depend on patient histories. While \citet{choi2011inverse} and \citet{makino2012apprenticeship} extend the apprenticeship learning paradigm to partially observable environments by considering policies on beliefs over states, both need to interact with the environment (or a perfect simulator) during learning. To the best of our knowledge, we are the first to propose explaining sequential decisions through counterfactual reasoning and to tackle the batch IRL problem in partially-observable environments.
Our use of the estimated counterfactuals yields inherently interpretable rewards and simultaneously addresses the cold-start problem in \citet{lee2019truly}. Table \ref{tab:related-works} highlights the main differences between our method and the relevant related works. See Appendix \ref{apx:related_works} for additional related works. \vspace{-3mm} \section{Problem formulation} \textbf{Preliminaries.}~ At timestep $t$, let random variable $X_t \in \cl{X}$ denote the observed patient features and let $A_t\in \cl{A}$ denote the action (e.g. treatment) taken, where $\cl{A}$ is a finite set of actions. Let $x_t$ and $a_t$ denote realizations of these random variables. Let $h_t = (x_0, a_0, \dots, x_{t-1}, a_{t-1}, x_{t}) = (x_{0:t}, a_{0:t-1}) \in \cl{H}$ be a realization of the history $H_t\in \cl{H}$ of patient observations and actions until timestep $t$. A stationary stochastic policy represents a mapping: $\pi: \cl{H} \times \cl{A} \rightarrow [0, 1]$, where $\pi(a\mid h)$ indicates the probability of choosing action $a\in \cl{A}$ given history $h\in \cl{H}$ and $\sum_{a\in \cl{A}} \pi(a\mid h) = 1$. Taking action $a_t$ under history $h_t$ results in observing $x_{t+1}$ and obtaining $h_{t+1}$. The reward function is $R:\cl{H} \times \cl{A} \rightarrow \mathbb{R}$ where $R(h, a)$ represents the reward for taking action $a\in \cl{A}$ given history $h\in \cl{H}$. The value function of a policy $\pi$, $V:\cl{H} \rightarrow \mathbb{R}$ is defined as: $V^{\pi}(h) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_0 = h]$, where $\gamma \in[0, 1)$ is the discount factor and $A_t \sim \pi(\cdot \mid H_t)$ for $t\geq 0$. The action-value function $Q:\cl{H} \times \cl{A} \rightarrow \mathbb{R}$ of a policy is defined as $Q^{\pi}(h, a) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_0 = h, A_0 = a]$ where $A_t \sim \pi(\cdot \mid H_t)$ for $t\geq0$. 
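These value definitions can be illustrated with a short Monte Carlo sketch; the reward function, policy, and toy transition below are hypothetical stand-ins for illustration only, not part of our method:

```python
import random

GAMMA = 0.9  # discount factor (gamma in the text)

def reward(history, action):
    # Hypothetical stand-in for R(h, a): reward 1 for action 1, else 0.
    return 1.0 if action == 1 else 0.0

def policy(history):
    # Hypothetical stochastic policy pi(a | h): picks action 1 w.p. 0.8.
    return 1 if random.random() < 0.8 else 0

def estimate_value(h0, n_rollouts=2000, horizon=50):
    # Monte Carlo estimate of V^pi(h0) = E[sum_t gamma^t R(H_t, A_t)],
    # truncating the infinite sum at `horizon`.
    total = 0.0
    for _ in range(n_rollouts):
        h, ret = list(h0), 0.0
        for t in range(horizon):
            a = policy(h)
            ret += GAMMA ** t * reward(h, a)
            h.append(a)  # toy transition: the history just accumulates actions
        total += ret
    return total / n_rollouts

random.seed(0)
v = estimate_value([0])  # close to 0.8 / (1 - 0.9) = 8.0
```

The same averaging with the first action fixed would estimate $Q^{\pi}(h, a)$ instead of $V^{\pi}(h)$.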
A higher $Q$-value indicates that action $a$ will yield better long-term returns if taken for history $h$. We assume we know the discount factor $\gamma$, which indicates the importance of future rewards for the current history and action pair. \textbf{Batch IRL.} Let $\mathcal{D} = \{ \zeta^{i} \}_{i=1}^{N}$ be a batch observational dataset consisting of $N$ patient trajectories: $\zeta^{i} =(x_0^{i}, a_0^{i}, \dots, x_{T^{i}-1}^{i}, a_{T^{i}-1}^{i}, x_{T^{i}}^{i})$. The trajectory $\zeta^{i}$ for patient $i$ consists of covariates $x^{i}_t$ and actions $a^{i}_t$ observed for $T^i$ timesteps. For simplicity, we drop the superscript $i$ unless explicitly needed. The actions $a_t$ in $\cl{D}$ are assigned according to some expert policy $\pi_E$ such that $a_t \sim \pi_E(\cdot \mid h_t)$. We work in the apprenticeship learning set-up \citep{abbeel2004apprenticeship} and we consider a linear reward function $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$, where the weights $w \in \bb{R}^{d}$ satisfy $\| w \|_1 \leq 1$. The feature map $\phi:\cl{H} \times \cl{A} \rightarrow \bb{R}^{d}$ also satisfies $\| \phi(\cdot) \|_2 \leq 1$ such that the reward is bounded. We assume that the expert policy $\pi_E$ is attempting to optimize, without necessarily succeeding, some unknown reward function $R^{*}(h_t, a_t) = w^{*} \cdot \phi(h_t, a_t)$, where $w^{*}$ are the `true' reward weights. Given $R(h_t, a_t)$, the value of policy $\pi$ can be re-written as: $\mathbb{E}[V^{\pi}(H_0)] = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} w \cdot \phi(H_t, A_t)\mid \pi] = w \cdot \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t)\mid \pi]$, where the expectation is taken with respect to the sequence of histories and action pairs $(H_t,A_t)_{t\geq 0}$ obtained by acting according to $\pi$.
The feature expectation of policy $\pi$, defined as the expected discounted cumulative feature vector obtained when choosing actions according to policy $\pi$, is $\mu^{\pi} = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t) \mid \pi] \in \bb{R}^{d}$ such that: $\mathbb{E}[V^{\pi}(H_0)] = w \cdot \mu^{\pi}$. Our aim is to recover the expert weights $w^{*}$ as well as find a policy $\pi$ that is close to the policy of the expert $\pi_E$. We take the max-margin IRL approach and we measure the similarity between the feature expectations of the expert's policy and the feature expectations of a candidate policy using $\| \mu^{\pi_E} - \mu^{\pi}\|_2$. In this batch IRL setting, we do not have knowledge of transition dynamics and we cannot sample more trajectories from the environment. Note that in this context, we are the first to model expert policies that depend on patient histories and not just current observations. \textbf{Counterfactual reasoning.} To explain the expert's behaviour in terms of their trade-offs associated with ``what if'' outcomes, we use counterfactual reasoning to define the feature map $\phi(h_t, a_t)$ in the reward $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$. We adopt the potential outcomes framework \citep{neyman1923applications, rubin1978bayesian, robins2008estimation}. Let $Y[a]$ be the potential outcome, either factual or counterfactual, for treatment $a\in \cl{A}$. Using the dataset $\cl{D}$ we learn the feature map $\phi(h_t, a_t)$ such that: \begin{equation} \phi(h_t, a_t) = \mathbb{E}[Y_{t+1}[a_t] \mid h_t], \end{equation} where $\mathbb{E}[Y_{t+1}[a_t] \mid h_t]$ is the potential outcome for taking action $a_t$ at time $t$ given the history $h_t$. For the factual action $a_{t}$, assigned under the expert policy $\pi_E(\cdot\mid h_{t})$, the factual outcome is $x_{t+1}$, which coincides with the potential outcome $\mathbb{E}[Y_{t+1}[a_t]\mid h_t]$.
The potential outcomes for the other actions $a_t\in\cl{A}$ are the counterfactual ones and they allow us to understand what would happen to the patient if they received a different treatment $a_t$. To identify the potential outcomes from the batch data we make the standard assumptions of consistency, overlap (positivity), and no hidden confounders, as described in Appendix \ref{apx:counterfactuals}. No hidden confounders means that we observe all variables affecting the action assignment and potential outcomes. Overlap means that at each timestep, every action has a non-zero probability of being taken, which can be satisfied in this setting by having a stochastic expert policy. These assumptions are standard across methods for estimating counterfactual outcomes \citep{robins2000marginal, schulam2017reliable, bica2020crn}. Note that these assumptions are needed to be able to reliably perform causal inference using observational data. However, they do not constrain the batch IRL set-up. Estimating the potential outcomes from batch data poses additional challenges that need to be considered. The fact that the expert follows policies that consider the history of patient observations when deciding new actions gives rise to time-dependent confounding bias. Standard supervised learning methods for learning $\bb{E}[Y_{t+1}[a_t] \mid h_t]$ from $\cl{D}$ will be biased by the expert policy used in the observational dataset and will not be able to correctly estimate the counterfactual outcomes under alternative policies \citep{schulam2017reliable}. Methods for adjusting for the confounding bias involve using either inverse probability of treatment weighting \citep{robins2000marginal, lim2018forecasting} or building balancing representations \citep{bica2020crn}. Refer to Appendix \ref{apx:counterfactuals} for more details.
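Since the counterfactual estimator is treated as a black box, the feature map reduces to a thin wrapper around it. A minimal interface sketch follows; the model class, its linear dynamics, and all numeric values are illustrative placeholders, not the estimator we actually use:

```python
import numpy as np

class CounterfactualModel:
    """Hypothetical black-box estimator of E[Y_{t+1}[a] | h].
    In practice this role is played by a trained temporal model (e.g. a
    counterfactual recurrent network); here a toy linear stand-in."""
    def predict(self, history, action):
        x_t = history[-1]                        # most recent observation
        effect = np.array([-2.5, 0.5]) * action  # toy per-action treatment effect
        return x_t + effect

def feature_map(model, history, action):
    # phi(h_t, a_t) = E[Y_{t+1}[a_t] | h_t]: the estimated (possibly
    # counterfactual) next outcome of taking `action` given `history`.
    return model.predict(history, action)

model = CounterfactualModel()
h = [np.array([30.0, 2.0])]
phi_treat = feature_map(model, h, 1)  # estimated outcome if treated
phi_no = feature_map(model, h, 0)     # estimated outcome if untreated
```

Any estimator exposing this `predict(history, action)` interface can be plugged in without changing the rest of the pipeline.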
In the sequel, we consider the model for estimating counterfactuals as a black box such that the feature map $\phi(h_t, a_t)$ represents the effect of taking action $a_t$ for history $h_t$. The reward is then: \begin{equation} R(h_t, a_t) = w \cdot \phi(h_t, a_t) = w \cdot \mathbb{E}[Y_{t+1}[a_t] \mid h_t] \end{equation} Defining the reward function using counterfactuals gives an interpretable parameterization of doctor behavior: it allows us to interpret their behavior with respect to the importance weights implicitly assigned to the effects of their actions. This enables describing the relative trade-offs in treatment decisions. Note that we are \textit{not} assuming that the experts themselves actually compute these quantities (nor that they explicitly adopt the same causal inference assumptions); rather, we are simply providing a way to understand how decision-makers are effectively behaving (i.e. in terms of counterfactuals). \vspace{-3mm} \section{Batch inverse reinforcement learning using counterfactuals} Max-margin IRL \citep{abbeel2004apprenticeship} starts with an initial random policy $\pi$ and iteratively performs the following three steps to recover the expert policy and its reward weights: (1) estimate feature expectations $\mu^{\pi}$ of candidate policy $\pi$, (2) compute new reward weights $w$, and (3) find a new candidate policy $\pi$ that is optimal for reward function $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$. This approach finds a policy $\Tilde{\pi}$ that satisfies $\| \mu^{\pi_E} - \mu^{\Tilde{\pi}}\|_2 < \epsilon$ such that $\Tilde{\pi}$ has an expected value function close to that of the expert policy. \begin{wrapfigure}{t}{0.465\textwidth} \centering \vspace{-7mm} \includegraphics[width=0.465\textwidth]{figs/counterfactual_irl_fig.pdf}% \caption{Counterfactual inverse reinforcement learning (CIRL).
Counterfactuals are used to define $\phi(h, a)$, to estimate feature expectations $\mu^{\pi}$ of candidate policy $\pi$ in the batch setting, and to learn the optimal policy for reward weights $w$.} \label{fig:counterfactuals_IRL} \vspace{-10mm} \end{wrapfigure} The expert feature expectations can be estimated empirically from the dataset $\cl{D}$ using: \begin{equation}\label{eqn:expert_feature_expectations}\mu^{\pi_E} = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T^{i}-1} \gamma^{t}\phi(h^{i}_t, a^{i}_t). \end{equation} \noindent In the batch setting, we cannot estimate the feature expectations of candidate policies by taking the sample mean of on-policy roll-outs: $\mu^{\tilde{\pi}} \neq \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T^{i}-1} \gamma^{t}\phi(h^{i}_t, \pi(h^{i}_t))$. To address this off-policy nature of estimating feature expectations, we introduce a new method that leverages the estimated counterfactuals. We also make use of the counterfactuals to learn optimal policies for different reward weights. Figure \ref{fig:counterfactuals_IRL} illustrates how we integrate ``what if'' reasoning into batch IRL. \subsection{Counterfactual $\mu$-learning} Similar to the approach proposed by \citet{klein2012inverse} and \citet{lee2019truly}, we consider a history-action feature expectation defined as $\mu^{\pi}(h, a) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t)|\pi, H_0 = h, A_0 = a]$, where the first action $a$ can be chosen randomly and for $t\geq 1$, $A_t \sim \pi(\cdot \mid H_t)$. This can be re-written as: \begin{small} \begin{eqnarray} \mu^{\pi}(h, a) &=& \phi(h, a) + \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\sum_{t=1}^{\infty} \gamma^{t} \phi(H_t, A_t) \mid \pi, H_1 = h^{\prime}, A_1 = a^{\prime}] \\ &=& \phi(h, a) + \gamma \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\mu^{\pi}(h^{\prime}, a^{\prime})], \end{eqnarray} \end{small}% where $h^{\prime}$ is the next history.
Notice the analogy between $\mu^{\pi}(h, a)$ and the action-value function: \begin{small} \begin{eqnarray} Q^{\pi}(h, a) &=& R(h, a) + \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\sum_{t=1}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_1 = h^{\prime}, A_1 = a^{\prime}] \\ &=& R(h, a) + \gamma \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [Q^{\pi}(h^{\prime}, a^{\prime})], \end{eqnarray} \end{small}% which allows us to use temporal difference learning to estimate feature expectations \citep{sutton1998introduction}. Existing methods for estimating feature expectations fall into two extremes: (1) model-based (online) IRL approaches learn a model of the world and then use the model as a simulator to obtain on-policy roll-outs \citep{abbeel2004apprenticeship} and (2) batch IRL approaches use $Q$-learning (or alternative methods) for off-policy evaluation \citep{lee2019truly}, which can only evaluate policies similar to the expert policy and require a warm start. In our case, the counterfactual model allows us to compute $h^{\prime} = (h, a, \mathbb{E}[Y[a]\mid h])$ for any $h \in \cl{D}$ and any arbitrary action $a$. Thus, we propose counterfactual $\mu$-learning, a novel method for estimating feature expectations that uses these counterfactuals as part of temporal difference learning with $1$-step bootstrapping. This approach falls in-between (1) and (2) and allows us to estimate feature expectations for any candidate policy $\pi$ in the batch IRL setting.
The counterfactual $\mu$-learning algorithm learns the $\mu$-values for policy $\pi$ iteratively by updating the current estimates of the $\mu$-values with the feature map plus the $\mu$-values obtained by following policy $\pi$ in the new counterfactual history $h^{\prime} = (h, a, \mathbb{E}[Y[a]|h])$: \begin{equation} \hat{\mu}^{\pi}(h,a) \gets \hat{\mu}^{\pi}(h,a) + \alpha(\phi(h,a) + \gamma\mathbb{E}_{a'\sim\pi(\cdot|h')}[\hat{\mu}^{\pi}(h',a^{\prime})]- \hat{\mu}^{\pi}(h,a)), \end{equation} where $\alpha$ is the learning rate. We use a recurrent network with parameters $\theta$ to approximate $\hat{\mu}^{\pi}(h,a\mid \theta)$ and we train it by minimizing the sequence of loss functions $\cl{L}_i$ which change with every iteration $i$: \begin{small} \begin{align} \cl{L}_i(\theta_i) = \mathbb{E}_{h\sim \cl{D}} [\| y_i - \hat{\mu}^{\pi}(h,a\mid \theta_i) \|_2] && \theta_{i+1} \gets \theta_{i} - \alpha \nabla_{\theta} \cl{L}_i(\theta_i), \end{align} \end{small}% where the action $a$ can be chosen randomly from $\cl{A}$ and $y_i = \phi(h,a) + \gamma\mathbb{E}_{a'\sim\pi(\cdot|h')}[\hat{\mu}^{\pi}(h',a^{\prime}\mid \theta_{i-1})] $ is the target for iteration $i$. The parameters for the previous iteration $\theta_{i-1}$ are held fixed when optimizing $\cl{L}_i(\theta_i)$. Refer to Appendix \ref{apx:mu_learning} for full details of the counterfactual $\mu$-learning algorithm.
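A simplified tabular sketch of the counterfactual $\mu$-learning update may help fix ideas. In practice $\hat{\mu}^{\pi}$ is a recurrent network and $h^{\prime}$ also contains the estimated outcome; here the feature map, transition, and policy are toy stand-ins:

```python
import numpy as np

GAMMA, ALPHA = 0.9, 0.5
ACTIONS = [0, 1]

def phi(h, a):
    # Hypothetical 2-dim feature map standing in for E[Y_{t+1}[a] | h].
    return np.array([1.0 - a, float(a)])

def next_history(h, a):
    # Toy counterfactual transition: the history just records the action.
    # (A real model would also append the estimated outcome E[Y[a] | h].)
    return h + (a,)

def pi(h):
    # Candidate policy to evaluate: always take action 1.
    return {0: 0.0, 1: 1.0}

mu = {}  # tabular stand-in for the recurrent network mu_hat(h, a)

def mu_val(h, a):
    return mu.get((h, a), np.zeros(2))

def td_update(h, a):
    # One counterfactual mu-learning update with 1-step bootstrapping:
    # mu(h,a) += alpha * (phi(h,a) + gamma * E_{a'~pi}[mu(h',a')] - mu(h,a))
    h_next = next_history(h, a)
    boot = sum(p * mu_val(h_next, a2) for a2, p in pi(h_next).items())
    target = phi(h, a) + GAMMA * boot
    mu[(h, a)] = mu_val(h, a) + ALPHA * (target - mu_val(h, a))

# Sweep over a small set of counterfactual histories until values settle.
for _ in range(200):
    for h in [(), (1,), (1, 1)]:
        for a in ACTIONS:
            td_update(h, a)

mu_est = mu_val((), 1)  # approx. phi * (1 + 0.9 + 0.81), truncated at depth 3
```

Because the bootstrapped history is generated by the counterfactual model rather than drawn from $\cl{D}$, the update is well-defined for actions the expert never took.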
The feature expectations for the policy $\pi$ are given by $\hat{\mu}^{\pi} = \mathbb{E}_{H_0,A_0\sim \pi(\cdot|H_0)}[\hat{\mu}^{\pi}(H_0, A_0)]$, which can be estimated empirically from the observational dataset $\mathcal{D}$ using $\hat{\mu}^{\pi} = \frac{1}{N}\sum_{i=1}^N\sum_{a\in\mathcal{A}} \hat{\mu}^{\pi}(h_0^i, a)\pi(a\mid h_0^i).$ \begin{algorithm}[t] \begin{algorithmic}[1] \STATE \textbf{Input}: Batch dataset $\cl{D}$, max iterations $n$, convergence threshold $\epsilon$, \\ feature map $\phi(h_t, a_t)$ $=$ $\mathbb{E}[Y_{t+1}[a_t] | h_t]$ \STATE $\mu^{\pi_E} \gets$ compute $\pi_E$'s feature expectations {\small(Equation \ref{eqn:expert_feature_expectations})} \STATE $\hphantom{\mu^{\pi_E}}\mathllap{w_0~} \gets $ random initial reward weights, $\hphantom{\mu^{\pi_E}}\mathllap{\pi_0~} \gets $ compute optimal policy for $R_0 = w_0 \cdot \phi$ \STATE $\hphantom{\mu^{\pi_E}}\mathllap{\mu^{\pi_0}} \gets $ compute $\pi_0$'s feature expectations\hfill{\small(counterfactual $\mu$-learning)} \STATE $\Pi = \{\pi_0\}, \Delta = \{\mu^{\pi_0}\}, \bar{\mu}_0 = \mu^{\pi_0}$ \FOR{$k = 1$ to $n$} \STATE $\hphantom{\mu^{\pi_k}}\mathllap{w_k~}=\mu^{\pi_E}-\bar{\mu}_{k-1}$, $\hphantom{\mu^{\pi_k}}\mathllap{\pi_k~} \gets$ compute optimal policy for $R_k = w_k \cdot \phi$ \STATE $\mu^{\pi_k} \gets$ compute $\pi_k$'s feature expectations\hfill{\small (counterfactual $\mu$-learning)} \STATE $\Pi = \Pi \cup \{\pi_k\}, \Delta = \Delta \cup \{\mu^{\pi_k}\}$ \STATE Orthogonally project $\mu^{\pi_E}$ onto line through $\bar{\mu}_{k-1},\mu^{\pi_k}$:\\ $\medmath{\bar{\mu}_k =\dfrac{(\mu^{\pi_k} -\bar{\mu}_{k-1})^T (\mu^{\pi_E} -\bar{\mu}_{k-1}) }{(\mu^{\pi_k} -\bar{\mu}_{k-1})^T (\mu^{\pi_k} -\bar{\mu}_{k-1})} (\mu^{\pi_k} -\bar{\mu}_{k-1})+ \bar{\mu}_{k-1}},\,\,\,\,\,\,\,\,\,\,\,\,\, t = \lVert \mu^{\pi_E} - \bar{\mu}_k \rVert_2$ \STATE \textbf{if} $t < \epsilon $ \textbf{then} break \ENDFOR \STATE $K = \arg\min_{k:\mu^{\pi_k} \in \Delta} \|\mu^{\pi_E} - \mu^{\pi_k}\|_2$,
$\tilde{R}(h, a) = w_K \cdot \phi(h, a)$ \\ \STATE \textbf{Output}: $\tilde{R}$, $\Delta$, $\Pi$ \caption{(Batch, Max-Margin) CIRL} \label{alg:cirl} \end{algorithmic} \end{algorithm} \vspace{-2mm} \subsection{Finding the optimal policy for given reward weights} During each iteration of max-margin IRL, we obtain a candidate policy to evaluate by finding the optimal policy for a given vector of reward weights. We use deep recurrent $Q$-learning \citep{hausknecht2015deep}, a model-free approach for learning $Q$-values for reward function $R(h, a) = w \cdot \phi(h, a)$. The counterfactuals are used to compute $\phi(h, a)$, and to estimate the next history for the temporal difference updates. See Appendix \ref{apx:recurrent_q_learning} for details. After estimating the $Q$-values, $Q(h, a)$ for history $h$ and action $a$, a new candidate policy is obtained using: $\pi(a\mid h)~= \mathbbm{1}_{a=\arg\max_{a'}Q(h, a')}$. \vspace{-2mm} \subsection{Counterfactual inverse reinforcement learning algorithm (CIRL)} Algorithm \ref{alg:cirl} describes our proposed counterfactual inverse reinforcement learning (CIRL) method for the batch setting. CIRL is based on the projection algorithm proposed by \citet{abbeel2004apprenticeship} and iteratively updates the reward weights to minimize the margin between the expert's feature expectations and the feature expectations of intermediate policies. CIRL uses our proposed counterfactual $\mu$-learning algorithm for estimating the feature expectations of intermediate policies in an off-policy manner that is suitable for the batch setting. Compared to the algorithm for batch IRL proposed by \citet{lee2019truly}, which requires the initial policy $\pi_0$ to already be similar to the expert policy, CIRL works for any initial policy $\pi_0$ that is optimal for the randomly initialized reward weights $w_0$.
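The projection step at the heart of Algorithm \ref{alg:cirl} is a small geometric computation; a sketch with hypothetical two-dimensional feature expectations:

```python
import numpy as np

def projection_step(mu_E, mu_bar_prev, mu_k):
    """One max-margin projection update (Algorithm 1, line 10):
    orthogonally project mu^{pi_E} onto the line through
    mu_bar_{k-1} and mu^{pi_k}, then measure the remaining margin."""
    d = mu_k - mu_bar_prev
    coef = d @ (mu_E - mu_bar_prev) / (d @ d)
    mu_bar = mu_bar_prev + coef * d
    margin = np.linalg.norm(mu_E - mu_bar)
    return mu_bar, margin

# Toy 2-d example with hypothetical feature expectations.
mu_E = np.array([1.0, 1.0])
mu_bar, margin = projection_step(mu_E, np.array([0.0, 0.0]), np.array([2.0, 0.0]))
w_next = mu_E - mu_bar  # next reward weights (line 7: w_k = mu^{pi_E} - mu_bar_{k-1})
```

The loop terminates once `margin` drops below the threshold $\epsilon$.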
Similarly to \citet{choi2011inverse}, the CIRL algorithm returns the reward function $\tilde{R}(h, a)$ that results in a policy with feature expectations closest to the ones of the expert policy. We show experimentally that the reward that yields the closest feature expectations will be similar to the true underlying reward function of the expert. CIRL returns the set of policies tried $\Pi$ and their feature expectations $\Delta$, which allows us to compute a mixing policy that would yield similar performance to the expert policy \citep{abbeel2004apprenticeship}. Let $\Tilde{\mu}$ be the closest point to $\mu^{\pi_E}$ in the convex closure of $\Delta = \{\mu^{\pi_0}, \mu^{\pi_1}, \dots, \mu^{\pi_k}\}$, which can be computed by solving the quadratic programming problem: \begin{small} \begin{align} \min \| \mu^{\pi_E} - \mu \|_2 \text{ s.t. } \mu = \sum_i \lambda_i \mu^{\pi_i}, \lambda_i \geq 0, \sum_i \lambda_i = 1. \end{align} \end{small}% From the termination criterion of Algorithm \ref{alg:cirl}, $\mu^{\pi_E}$ is separated from the points $\mu^{\pi_i}$ by a margin of at most $\epsilon$. Thus, the solution $\Tilde{\mu}$ will satisfy $\| \mu^{\pi_E} - \tilde{\mu} \|_2 \leq \epsilon$. To obtain a policy that is close to the performance of the expert policy, we mix together the policies $\Pi = \{\pi_0, \dots, \pi_k\}$ returned by Algorithm \ref{alg:cirl}, where the probability of selecting $\pi_i$ is $\lambda_i$. \vspace{-3mm} \section{Experiments} We evaluate the ability of CIRL to recover the preferences of experts over the ``what if'' outcomes of actions. These preferences are reflected in the magnitudes of the recovered reward weights. Since we do not have access to the underlying reward weights of experts in real data, we first validate the method in a simulated environment. To show the applicability of CIRL in healthcare, we also perform a case study on an ICU dataset from the MIMIC III database \citep{johnson2016mimic}.
\noindent\textbf{Benchmarks.} For our proposed CIRL method, we use the Counterfactual Recurrent Network, a state-of-the-art model for estimating counterfactual outcomes in a temporal setting \citep{bica2020crn}. Note that other models for this task are also applicable \citep{lim2018forecasting}. Refer to Appendix \ref{apx:impl_cirl} for details. We benchmark CIRL against MB-IRL: model-based IRL\textemdash i.e. inverse reinforcement learning with model-based policy evaluation (e.g. \citet{yin2016synthesizing, nagabandi2018neural, kaiser2019model, buesing2018woulda}). We consider two versions of this benchmark: MB($h$)-IRL, which uses the patient history, and MB($x$)-IRL, which only uses the current observations to define the policy. For both MB($x$)-IRL and MB($h$)-IRL we define the reward as a weighted sum of counterfactual outcomes, but these methods instead use standard supervised learning to estimate the next history $h^{\prime}$ needed for the counterfactual $\mu$-learning algorithm. These benchmarks are designed to highlight the need to handle the bias from time-dependent confounders with a suitable counterfactual model, as well as the importance of modeling the patient history. We also compare against the deep successor feature networks (DSFN) proposed by \citet{lee2019truly}, which currently represents the state-of-the-art batch IRL method for the MDP setting. To show that their approach for estimating feature expectations in the batch setting is suboptimal, we extend their method to also incorporate histories in the DSFN($h$) benchmark. Implementation details of the benchmarks can be found in Appendix \ref{apx:impl_benchmarks}. \vspace{-2mm} \subsection{Extracting different types of expert behaviour} \noindent\textbf{Simulated environment.} We propose an environment that uses a general data simulation involving $p$-order auto-regressive processes. To analyze different types of expert behaviour (e.g.
treating more/less aggressively) we simulate data for patient features representing disease progression ($x$), e.g.~tumour volume, side effects ($z$), and an action ($a$) indicating the binary application of treatment. For time $t$, we model the evolution of patient covariates according to the treatments as follows: \begin{footnotesize} \begin{align} x_t = \frac{1}{p} \sum_{i=1}^{p} x_{t-i} - 2.5\sum_{i=1}^{p} a_{t-i} + 0.5p + \epsilon && z_t = \frac{1}{p} \sum_{i=1}^{p} z_{t-i} + 0.5\sum_{i=1}^{p} a_{t-i} - p + \eta \label{eq:data_sim} \end{align} \end{footnotesize}% where $p=5$ and $\epsilon \sim \cl{N}(0, 0.1^2)$, $\eta \sim \cl{N}(0, 0.1^2)$ are noise terms. The initial values for the features are sampled as follows: $x_0 \sim \cl{N}(30, 5)$ and $z_0 \sim \cl{N}(2, 1)$. We set $x_{max} = 50$ and $z_{max} = 15$. The trajectory of the patient terminates when either $x_t \leq 0$, $x_t \geq x_{max}$, $z_t \geq z_{max}$ or $t\geq20$. The tumour volume $x_t$, denoting the disease progression, decreases when we give treatment and increases otherwise. Conversely, the side effects $z_t$ increase when we give treatment and decrease otherwise. We define a linear reward for taking action $a_t$ given history $h_t = (x_{0:t}, z_{0:t}, a_{0:t-1})$ as follows: \begin{small} \begin{equation} R(h_t, a_t) = w_1 \frac{x_{t+1}}{x_{max} - x_{min}} + w_2 \frac{z_{t+1}}{z_{max} - z_{min}} \end{equation} \end{small}% where $w = [w_1, w_2]$, $||w||_{1} \leq 1$ and $x_{t+1}$ and $z_{t+1}$ are simulated according to Equation \ref{eq:data_sim} to take into account the effect of action $a_t$ for history $h_t$. The features are normalized to $[0, 1]$. The best scenario for a patient is when both the side effects and tumour volume are zero and a doctor attempting to achieve this will have negative reward weights. However, different settings of the reward weights will result in different expert behaviours.
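As a sketch, one roll-out of this simulator can be written directly from Equation \ref{eq:data_sim}; the handling of early timesteps ($t < p$, where we average over the shorter available window) and reading the $x_t = 0$ termination condition as $x_t \leq 0$ are our assumptions:

```python
import numpy as np

P, X_MAX, Z_MAX, T_MAX = 5, 50.0, 15.0, 20

def simulate(policy, rng):
    """Roll out one patient trajectory under the auto-regressive dynamics:
    x_t = mean(last p x's) - 2.5 * sum(last p a's) + 0.5*p + eps,
    z_t = mean(last p z's) + 0.5 * sum(last p a's) - p + eta."""
    xs = [rng.normal(30, 5)]  # disease progression (e.g. tumour volume)
    zs = [rng.normal(2, 1)]   # side effects
    acts = []                 # binary treatment decisions
    while True:
        a = policy(xs, zs)
        acts.append(a)
        # Early timesteps average over the shorter available window (assumption).
        x = np.mean(xs[-P:]) - 2.5 * sum(acts[-P:]) + 0.5 * P + rng.normal(0, 0.1)
        z = np.mean(zs[-P:]) + 0.5 * sum(acts[-P:]) - P + rng.normal(0, 0.1)
        xs.append(x)
        zs.append(z)
        # Termination: x hits 0 (read as x <= 0), bounds exceeded, or t >= 20.
        if x <= 0 or x >= X_MAX or z >= Z_MAX or len(acts) >= T_MAX:
            return xs, zs, acts

rng = np.random.default_rng(0)
xs, zs, acts = simulate(lambda xs, zs: 1, rng)  # always-treat toy policy
```

Under the always-treat policy, $x$ falls once the treatment window fills, so trajectories terminate well before the 20-step cap.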
For instance, for $w_1 = -0.7$ and $w_2 = -0.3$, the expert policy will focus more on the disease progression and will consequently treat more aggressively, while this behaviour will be reversed for reward weights set to $w_1 = -0.3$ and $w_2 = -0.7$. We used deep recurrent $Q$-learning \citep{hausknecht2015deep} to find a stochastic expert policy that optimizes the reward function for different settings of the weights. The batch dataset $\cl{D}$ consists of 10000 trajectories sampled from the expert policy. Refer to Appendix E for more details. \begin{wrapfigure}{t}{0.42\textwidth} \centering \vspace{-5mm} \includegraphics[width=0.42\textwidth]{figs/reward_weights.pdf}% \caption{Reward weights recovered by benchmarks over 10 runs. The weights of the expert are $w_1 = -0.3$ and $w_2 = -0.7$.}% \label{fig:reward_weights} \vspace{-5mm} \end{wrapfigure} \textbf{Recovering decision making preferences of experts}. We first evaluate the benchmarks on their ability to recover the weights of the reward function optimized by the expert for the experimental setting with $\gamma = 0.99$, $w_1 = -0.3$ and $w_2 = -0.7$. Note that DSFN does not provide interpretable reward weights, and thus cannot be used for understanding the trade-offs in the expert behavior. We train each benchmark 10 times and we plot in Figure \ref{fig:reward_weights} the reward weights obtained across the 10 runs. We show that our proposed CIRL method performs best at recovering the preferences of the expert, which in this case means treating less aggressively. While the MB$(h)$-IRL method also recovers the correct trade-offs in the expert behavior, the computed weights have a much higher variance. Conversely, the MB$(x)$-IRL method, which does not consider the patient history, fails to recover the underlying weights of the expert policy. \textbf{Matching the expert policy}.
We evaluate the benchmarks' ability to recover policies that match the performance of the expert policy for two settings of the discount factor $\gamma \in \{0.99, 0.5\}$. A lower $\gamma$ indicates that the expert is optimizing for the immediate effects of actions, while a higher $\gamma$ means they consider the long-term effects of actions. For each $\gamma$ we learn expert policies for different reward weights and we use the expert policies to generate the batch datasets. We evaluate the policies learned by the benchmarks using two metrics: cumulative reward for running the policy in the simulated environment and accuracy on matching the expert policy (computed as described in Appendix \ref{apx:accuracy}). We report in Tables \ref{tab:results_cummulative_rewards} and \ref{tab:results_accuracy} the average results and their standard error over 1000 sampled trajectories from the environment. CIRL recovers a policy that has the closest cumulative reward to the expert policy and that can best match the treatments assigned by the expert.
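The cumulative-reward metric for a single roll-out follows directly from the reward definition above; the three-step trajectory, the weight setting, and taking $x_{min} = z_{min} = 0$ are illustrative assumptions:

```python
import numpy as np

GAMMA = 0.99
X_RANGE, Z_RANGE = 50.0, 15.0  # x_max - x_min, z_max - z_min (mins taken as 0)
W = np.array([-0.3, -0.7])     # reward weights (w_1, w_2)

def discounted_reward(xs, zs):
    """Cumulative discounted reward of one roll-out, using the linear reward
    R(h_t, a_t) = w_1 * x_{t+1}/(x_max - x_min) + w_2 * z_{t+1}/(z_max - z_min)."""
    ret = 0.0
    for t in range(len(xs) - 1):  # the reward at step t depends on the next state
        ret += GAMMA ** t * (W[0] * xs[t + 1] / X_RANGE + W[1] * zs[t + 1] / Z_RANGE)
    return ret

# Hypothetical 3-step trajectory (x = tumour volume, z = side effects).
xs = [30.0, 27.5, 25.0]
zs = [2.0, 2.5, 3.0]
r = discounted_reward(xs, zs)
```

Averaging this quantity over many sampled trajectories gives the per-policy entries reported in Table \ref{tab:results_cummulative_rewards}.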
\begin{table}[h] \begin{center} \vspace{-0.1cm} \caption{Mean cumulative reward and standard deviation for running learnt policy in the environment.} \label{tab:results_cummulative_rewards} \centering \setlength\tabcolsep{3.0pt} \begin{footnotesize} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|c|c|c|c|c|c} \toprule & \multicolumn{3}{c|}{$\gamma=0.99$} & \multicolumn{3}{c}{$\gamma=0.5$} \\ \midrule \makecell{Reward\\ weights}& \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} & \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} \\ \midrule MB($x$)-IRL & $-3.78 \pm 0.02$ & $-4.42 \pm 0.05$ & $-4.90 \pm 0.05$ & $-4.51 \pm 0.05$ & $-4.53 \pm 0.05$ & $-4.54\pm 0.04$ \\ MB($h$)-IRL & $-3.23 \pm 0.02$ & $-4.10 \pm 0.02$ & $-4.63 \pm 0.04$ & $-4.43 \pm 0.04$ & $-3.54 \pm 0.05$ & $-4.35 \pm 0.03$ \\ DSFN & $-3.56 \pm 0.06$ & $-4.32 \pm 0.04$ & $-3.77 \pm 0.05$ & $-4.11 \pm 0.06$ & $-3.07 \pm 0.03$ & $-4.67 \pm 0.07$ \\ DSFN($h$) & $-3.31 \pm 0.07$ & $-4.33 \pm 0.07$ & $-3.60 \pm 0.07$ & $-3.95 \pm 0.07$ & $-3.05 \pm 0.05$ & $-4.61 \pm 0.05$ \\ CIRL & $- \bm{2.89} \pm \bm{0.02}$ & $- \bm{3.92} \pm \bm{ 0.03}$ & $-\bm{3.41} \pm \bm{0.05}$ & $-\bm{2.79} \pm \bm{0.02}$ & $- \bm{2.91} \pm \bm{0.02}$ & $- \bm{4.27} \pm \bm{0.03}$ \\ \midrule Expert & $-2.72 \pm 0.02$ & $-3.61 \pm 0.02$ & $-2.81 \pm 0.01$ & $-2.65 \pm 0.02$ & $-2.36 \pm 0.01$ & $-3.97\pm 0.03$ \\ \bottomrule \end{tabular} \end{adjustbox} \end{footnotesize} \vspace{-0.4cm} \end{center} \end{table} \begin{table}[b] \begin{center} \vspace{-0.1cm} \caption{Average accuracy and standard deviation for matching the actions in the expert policy.} \label{tab:results_accuracy} \centering \setlength\tabcolsep{3.6pt} \begin{footnotesize} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|c|c|c|c|c|c} \toprule & \multicolumn{3}{c|}{$\gamma=0.99$} & 
\multicolumn{3}{c}{$\gamma=0.5$} \\ \midrule \makecell{Reward\\ weights}& \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} & \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} \\ \midrule MB($x$)-IRL & $62.5 \pm 0.41\%$ & $61.4 \pm 0.81\%$ & $54.6 \pm 0.56\%$ & $52.4 \pm 0.63\%$ & $60.1 \pm 0.39\%$ & $71.8 \pm 0.72\%$ \\ MB($h$)-IRL & $77.8 \pm 0.31\%$ & $70.2 \pm 0.45\%$ & $71.4 \pm 0.69\%$ & $66.3 \pm 0.58\%$ & $70.2 \pm 0.71\%$ & $75.6 \pm 0.52\%$ \\ DSFN & $75.4 \pm 0.32\%$ & $68.4 \pm 0.21\%$ & $73.4 \pm 0.45\%$ & $80.2 \pm 0.37\%$ & $70.8 \pm 0.24\% $ & $69.8 \pm 0.44\% $ \\ DSFN($h$) & $76.3 \pm 0.37\%$ & $67.5\pm 0.32\%$ & $80.6 \pm 0.54\%$ & $80.4\pm 0.56\%$ & $71.0 \pm 0.35\%$ & $70.2 \pm 0.47\%$ \\ CIRL & $\bm{81.8} \pm \bm{0.42\%}$ & $\bm{75.5} \pm \bm{0.51\%}$ & $\bm{83.7} \pm \bm{0.76\%}$ & $\bm{89.5} \pm \bm{0.37\%}$ & $\bm{73.2} \pm \bm{0.43\%}$ & $\bm{80.4} \pm \bm{0.42\%}$ \\ \bottomrule \end{tabular} \end{adjustbox} \end{footnotesize} \vspace{-0.7cm} \end{center} \end{table} \subsection{Case study on real-world dataset: MIMIC III} Suppose we want to explain the decision-making process of doctors assigning antibiotics to patients in the ICU. For this purpose, we consider a dataset with 6631 patients that have received antibiotics during their ICU stay, extracted from the Medical Information Mart for Intensive Care (MIMIC III) database \citep{johnson2016mimic}. 
\begin{wrapfigure}{r}{0.3\textwidth} \centering \vspace{-9mm} \includegraphics[width=0.3\textwidth]{figs/radar_plot_reward_weights.pdf}% \caption{Radar plot of reward weight magnitudes for assigning antibiotics.}% \label{fig:reward_weights_mimic} \vspace{-4mm} \end{wrapfigure} We used CIRL, with $\gamma=0.99$, to recover the policy and the reward weights of doctors administering antibiotics to understand their preferences over the effect of antibiotics on the patient features. The relative magnitude of the reward weights for the counterfactual outcomes of the patient features considered is illustrated in Figure \ref{fig:reward_weights_mimic}. CIRL found that reducing temperature had the highest weight in the reward function of the expert, followed by WBC. This corresponds to known medical guidelines \citep{marik2000fever, palmer2008aerosolized}. {\color{black} Sepsis is a leading cause of morbidity and mortality in the ICU, and several studies have \begin{wraptable}{r}{0.3\columnwidth} \centering \vspace{-60mm} \begin{scriptsize} \setlength\tabcolsep{3.6pt} \begin{tabular}{l|c} \toprule & \makecell{Accuracy} \\ \midrule MB($x$)-IRL & $70.1 \pm 0.11\%$\\ MB($h$)-IRL & $77.5 \pm 0.15\%$ \\ DSFN & $73.5 \pm 0.25\%$ \\ DSFN($h$) & $75.3 \pm 0.19\%$ \\ CIRL & $\bm{83.4} \pm \bm{0.17\%}$ \\ \bottomrule \end{tabular} \caption{Accuracy on matching expert actions. } \vspace{-5mm} \label{tab:accuracy_mimic} \end{scriptsize} \end{wraptable} shown that early administration of antibiotics is crucial to decrease the risk of adverse outcomes \citep{zahar2011outcomes}. Fever and elevated WBC are among a small subset of variables that indicate a systemic inflammatory response, concerning for the development of sepsis \citep{neviere2017sepsis}.
While these findings are not specific to bacterial infection, the risk of failing to treat a potentially serious infection often outweighs the risk of inappropriate antibiotic administration, thereby driving clinicians to prescribe antibiotics in the setting of these abnormal findings. Similarly, the decision to discontinue antibiotics is complex, but it is often supported by signs of a resolution of infection, which include normalization of body temperature and a downtrending WBC. As such, our finding that the two highest reward weights for the administration of antibiotics in the ICU are temperature and WBC is consistent with clinical practice. Moreover, note that the model is simply identifying the factors that are driving the decision-making of the clinicians represented in this dataset.} In Table \ref{tab:accuracy_mimic}, to verify that explainability does not come at the cost of accuracy, we also evaluate the benchmarks on matching the expert actions. \vspace{-3mm} \section{Discussion} In this paper, we propose building interpretable parametrizations of sequential decision-making by explaining an expert's behaviour in terms of their preferences over "what-if" outcomes. To achieve this, we introduce CIRL, a new method that incorporates counterfactual reasoning into batch IRL: counterfactuals are used to define the feature map part of the reward function, but also to tackle the off-policy nature of estimating feature expectations in the batch setting. The reward weights recovered by CIRL indicate the relative preferences of the expert over the counterfactual outcomes of their actions. Our aim is to provide a description of behavior, i.e. how the expert is effectively behaving under our interpretable parameterization of the reward based on counterfactuals. We are not assuming that the experts actually operate under this specific (linear, in our case) model or that they compute the exact counterfactuals.
Instead, our purpose is to show that we can explain an agent’s behavior on the basis of counterfactuals, which is useful in that it allows us to audit them, sanity-check their policies and find variation in practice. Further discussion can be found in Appendix \ref{apx:limitations}. There are several limitations of our method and directions for future work. While our method considers reward functions that are linear in the features, one way of extending it to handle more complex reward functions is to use domain knowledge to define the feature map as a non-linear function over the counterfactual outcomes. Nevertheless, these functions should be defined in a way that still allows us to obtain interpretable explanations of the expert's behaviour. Moreover, although time-invariant reward weights $w$ are standard in the IRL literature \citep{abbeel2004apprenticeship, choi2011inverse, lee2019truly}, time-variant rewards/policies have been considered in the dynamic treatment regimes (reinforcement learning) literature \citep{chakraborty2013statistical, zhang2019near}. Thus, another direction for future work would be to extend our method to consider non-stationary policies and reward weights that can change over time. \section*{Acknowledgments} We would like to thank the reviewers for their valuable feedback. The research presented in this paper was supported by The Alan Turing Institute, under the EPSRC grant EP/N510129/1, by Alzheimer’s Research UK (ARUK), by the US Office of Naval Research (ONR), and by the National Science Foundation (NSF) under grant numbers 1407712, 1462245, 1524417, 1533983, and 1722516. The authors are also grateful to Brent Ershoff for the insightful discussions and help with interpreting the medical results on MIMIC III. \section{Introduction} Consider the problem of explaining sequential decision-making on the basis of demonstrated behavior.
In healthcare, an important goal lies in being able to obtain an interpretable parameterization of the experts' behavior (e.g.~in terms of how they assign treatments) such that we can quantify and inspect policies in different institutions and uncover the trade-offs and preferences associated with expert actions \citep{james2000challenge, westert2018medical, van2016physician, jarrett2020inverse}. Moreover, modeling the reward function of different clinical practitioners can be revealing as to their tendencies \mbox{to treat various diseases more/less aggressively \citep{rysavy2015between}, which} \textemdash in combination with patient outcomes\textemdash has the potential to inform and update clinical guidelines. In many settings, such as medicine, decision-makers can be modeled as reasoning about "what-if" patient outcomes: Given the available information about the patient, what would happen if we took a particular action? \citep{djulbegovic2018rational, mcgrath2009doctors}. As treatments often affect several patient covariates, by having both benefits and side-effects, decision-makers often make choices based on their preferences over these counterfactual outcomes. Thus, in our case, an interpretable explanation of a policy is one where the reward signal for (sequential) actions is parameterized on the basis of preferences over (sequential) counterfactuals (i.e. "what-if" patient outcomes). Given the observations and actions made by an expert, \textit{inverse reinforcement learning} (IRL) offers a principled way for modeling their behavior by recovering the (unknown) reward function being maximized \citep{ng2000algorithms, abbeel2004apprenticeship, choi2011inverse}. Standard solutions operate by iterating on candidate reward functions, solving the associated (forward) reinforcement learning problem at each step.
In many real-world problems, however, we are specifically interested in the challenge of offline learning\textemdash that is, where further experimentation is not possible\textemdash such as in medicine. In this \textit{batch} setting, we only have access to trajectories sampled from the expert policy in the form of an observational dataset\textemdash such as in electronic health records. \textbf{Batch IRL.}~ By their nature, classic IRL algorithms require interactive access to the environment, or full knowledge of the environment's dynamics \citep{ng2000algorithms, abbeel2004apprenticeship, choi2011inverse}. While batch IRL solutions have been proposed by way of off-policy evaluation \citep{klein2011batch, klein2012inverse, lee2019truly}, they suffer from two disadvantages. First, they are limited by the assumption that state dynamics are fully-observable and Markovian. This is hardly true in medicine: treatment assignments generally depend on how patient covariates have evolved over time \citep{futoma2020popcorn}. Second, rewards are often parameterized as uninterpretable representations of neural network hidden states and consequently cannot be used to explain sequential decision making. \begin{figure} \centering \vspace{-5mm} \includegraphics[width=0.80\textwidth]{figs/intro_fig_new.pdf}% \caption{Explaining decision-making behaviour in terms of preferences over "what if" outcomes. Consider the evolution of tumour volume ($U$) and side effects ($Z$) under a binary action. $\bb{E}[U_{t+1}[a_t]\mid h_t]$ and $\bb{E}[Z_{t+1}[a_t]\mid h_t]$ are the counterfactuals for the patient features under action $a_t$ given history $h_t$ of prior actions and covariates. Parameterizing the reward as the weighted sum of these counterfactuals: $R(h_t, a_t) = w_u \bb{E}[U_{t+1}[a_t]\mid h_t] + w_z \bb{E}[Z_{t+1}[a_t]\mid h_t]$, naturally allows us to model the preferences of experts: e.g. 
finding that $|w_u|>|w_z|$ indicates that the expert is treating more aggressively, by placing more weight on reducing tumour volume than on minimizing side effects.}% \label{fig:motivation} \vspace{-7mm} \end{figure} \textbf{"What-if" Explanations.}~ To address these shortcomings and to obtain a parameterizable interpretation of the expert's behavior, we propose explicitly incorporating counterfactual reasoning into batch IRL. In particular, we focus on ``what if'' explanations for modeling decision-making, while simultaneously accounting for the partially-observable nature of patient histories. Under the max-margin apprenticeship framework \citep{abbeel2004apprenticeship, klein2011batch, lee2019truly}, we learn a parameterized reward function $R(h_t, a_t)$ that is defined as a weighted sum over \textit{potential outcomes} \citep{rubin2005causal} for taking action $a_t$ given history $h_t$. As highlighted in Figure \ref{fig:motivation}, consider the decision making process of assigning a binary action given the tumour volume ($U$) and side effects ($Z$). Let $\bb{E}[U_{t+1}[a_t]\mid h_t]$ and $\bb{E}[Z_{t+1}[a_t]\mid h_t]$ be the counterfactual outcomes for the two covariates when action $a_t$ is taken given the history $h_t$ of covariates and previous actions. We define the reward as the weighted sum of these counterfactuals: $R(h_t, a_t) = w_u \bb{E}[U_{t+1}[a_t]\mid h_t] + w_z \bb{E}[Z_{t+1}[a_t]\mid h_t]$, to take into account the effect of actions and to directly model the preferences of the expert. The ideal scenario is when both the tumour volume and the side effects are zero, so the reward weights of a doctor aiming for this are both negative. However, recovering $|w_u| > |w_z|$ means that the doctor is treating more aggressively, as they are focusing more on reducing the tumour volume rather than on the side effects of treatments.
Alternatively, $|w_u| < |w_z|$ indicates that the side effects are more important and the expert is treating less aggressively. Our motivation for using counterfactuals to define the reward comes from the idea that rational decision making considers the potential effects of actions \citep{djulbegovic2018rational}. \textbf{Contributions.}~ Exploring the synergy between counterfactual reasoning and batch IRL for understanding sequential decision making confers multiple advantages. First, it offers a principled approach for parameterizing reward functions in terms of preferences over \textit{what-if} patient outcomes, which enables us to explain the cost-benefit tradeoffs associated with an expert's actions. Second, by estimating the effects of different actions, counterfactuals readily tackle the \textit{off-policy} nature of policy evaluation in the batch setting. Furthermore, we demonstrate that not only does this alleviate the \textit{cold-start} problem typical of conventional batch IRL solutions, but also accommodates settings where the usual assumption of full observability fails to hold. Through experiments in both real and simulated medical environments, we illustrate the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior. \vspace{-3mm} \section{Related works} \vspace{-3mm} In our work, the aim is to explain decision-making by recovering the preferences of experts with respect to the effects of their actions, denoted by the counterfactual outcomes. This goal is fundamentally different from the goal of IRL methods which generally aim to match the performance of experts. We operate under the standard max-margin apprenticeship framework \citep{ng2000algorithms, abbeel2004apprenticeship}, which searches for a reward function that minimizes the margin between feature expectations of the expert and candidate policies. 
However, our approach to recovering and understanding decision policies is uniquely characterized by incorporating counterfactuals to obtain explainable reward functions. To tackle the challenges posed by real-world decision making, our method also operates in an offline and model-free manner, and accommodates partially-observable environments. \begin{table*}[t] \begin{center} \begin{small} \setlength\tabcolsep{0.7pt} \renewcommand{\arraystretch}{0.7} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lcccccc} \toprule Method & Environment & Batch & Feature map for reward & Policy & Feat. expectations \\ \midrule \small{\citet{abbeel2004apprenticeship}} & Model-based & No & $\phi(s_t)$ = \text{basis functions for state} $s_t$ & $\pi(a_t\mid s_t)$ & Model roll-outs \\ \citet{choi2011inverse} & Model-based & No & $\sum_{s}b_{t}(s)\phi(s, a_t)$ = \text{basis for belief} $b_{t}$ & $\pi(a_t\mid b_t)$ & Model roll-outs \\ \citet{klein2011batch} & Model-free & Yes & $\phi(x_t)$ = \text{basis functions for state } $x_t$ & $\pi(a_t\mid x_t)$ & LSTD-Q \\ \citet{lee2019truly} & Model-free & Yes & \smash{$ \phi(x_t, a_t) = \text{concat} (\phi(x_t), a_t)$} & $\pi(a_t\mid x_t)$ & DSFN \\ \midrule Ours & Model-free & Yes & $\phi(h_t, a_t) = \mathbb{E}[Y_{t+1}[a_t] | h_t]$ & $\pi(a_t\mid h_t)$ & \makecell{Counterfactual \\ $\mu$-learning} \\ \bottomrule \end{tabular} \end{adjustbox} \end{small} \end{center} \vspace{-0.5em} \caption{Comparison of our proposed method (batch, counterfactual IRL) with related works in IRL.} \label{tab:related-works} \vspace{-5mm} \end{table*} \textit{Explainability}. By using basis functions \citep{klein2012inverse} or hidden layers of a deep network \citep{lee2019truly} to define the feature map, the learned rewards of either approach are inherently uninterpretable, and cannot be used to explain differences in expert behavior. 
An alternative approach for recovering the expert policy (without reward functions) is imitation learning \citep{hussein2017imitation, osa2018algorithmic, torabi2019recent, jarrett2020strictly}. However, these methods do not allow us to fully model the decision-making process of experts and to uncover the trade-offs behind their actions. \textit{Batch Learning}. \citet{klein2011batch} propose an off-policy evaluation method based on least squares temporal difference (LSTD-$Q$) \citep{lagoudakis2003least} for estimating feature expectations, and \citet{klein2012inverse} use a linear score-based classifier to directly approximate the $Q$-function offline. However, both methods require the constraining assumptions that rewards are direct, linear functions of fully-observable states\textemdash assumptions we cannot afford to make in realistic settings such as medicine. \citet{lee2019truly} propose a deep successor feature network (DSFN) based on $Q$-learning to estimate feature expectations. But their approach similarly assumes fully-observable states, and additionally suffers from the ``cold-start'' problem where off-policy evaluations are heavily biased unless the initial candidate policy is (already) close to the expert. \textit{Partial Observability}. No existing batch IRL method accommodates modeling expert policies that depend on patient histories. While \citet{choi2011inverse} and \citet{makino2012apprenticeship} extend the apprenticeship learning paradigm to partially observable environments by considering policies on beliefs over states, both need to interact with the environment (or a perfect simulator) during learning. To the best of our knowledge, we are the first to propose explaining sequential decisions through counterfactual reasoning and to tackle the batch IRL problem in partially-observable environments.
Our use of the estimated counterfactuals yields inherently interpretable rewards and simultaneously addresses the cold-start problem in \citet{lee2019truly}. Table \ref{tab:related-works} highlights the main differences between our method and the relevant related works. See Appendix \ref{apx:related_works} for additional related works. \vspace{-3mm} \section{Problem formulation} \textbf{Preliminaries.}~ At timestep $t$, let random variable $X_t \in \cl{X}$ denote the observed patient features and let $A_t\in \cl{A}$ denote the action (e.g. treatment) taken, where $\cl{A}$ is a finite set of actions. Let $x_t$ and $a_t$ denote realizations of these random variables. Let $h_t = (x_0, a_0, \dots, x_{t-1}, a_{t-1}, x_{t}) = (x_{0:t}, a_{0:t-1}) \in \cl{H}$ be a realization of the history $H_t\in \cl{H}$ of patient observations and actions until timestep $t$. A stationary stochastic policy represents a mapping: $\pi: \cl{H} \times \cl{A} \rightarrow [0, 1]$, where $\pi(a\mid h)$ indicates the probability of choosing action $a\in \cl{A}$ given history $h\in \cl{H}$ and $\sum_{a\in \cl{A}} \pi(a\mid h) = 1$. Taking action $a_t$ under history $h_t$ results in observing $x_{t+1}$ and obtaining $h_{t+1}$. The reward function is $R:\cl{H} \times \cl{A} \rightarrow \mathbb{R}$ where $R(h, a)$ represents the reward for taking action $a\in \cl{A}$ given history $h\in \cl{H}$. The value function of a policy $\pi$, $V:\cl{H} \rightarrow \mathbb{R}$ is defined as: $V^{\pi}(h) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_0 = h]$, where $\gamma \in[0, 1)$ is the discount factor and $A_t \sim \pi(\cdot \mid H_t)$ for $t\geq 0$. The action-value function $Q:\cl{H} \times \cl{A} \rightarrow \mathbb{R}$ of a policy is defined as $Q^{\pi}(h, a) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_0 = h, A_0 = a]$ where $A_t \sim \pi(\cdot \mid H_t)$ for $t\geq0$. 
A higher $Q$-value indicates that action $a$ will yield better long term returns if taken for history $h$. We assume we know the discount factor $\gamma$ which indicates the importance of future rewards for the current history and action pair. \textbf{Batch IRL.} Let $\mathcal{D} = \{ \zeta^{i} \}_{i=1}^{N}$ be a batch observational dataset consisting of $N$ patient trajectories: $\zeta^{i} =(x_0^{i}, a_0^{i}, \dots x_{T^{i}-1}^{i}, a_{T^{i}-1}^{i}, x_{T^{i}}^{i})$. The trajectory $\zeta^{i}$ for patient $i$ consists of covariates $x^{i}_t$ and actions $a^{i}_t$ observed for $T^i$ timesteps. For simplicity, we drop the superscript $i$ unless explicitly needed. The actions $a_t$ in $\cl{D}$ are assigned according to some expert policy $\pi_E$ such that $a_t \sim \pi_E(\cdot \mid h_t)$. We work in the apprenticeship learning set-up \citep{abbeel2004apprenticeship} and we consider a linear reward function $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$, where the weights $w \in \bb{R}^{d}$ satisfy $\| w \|_1 \leq 1$. The feature map $\phi:\cl{H} \times \cl{A} \rightarrow \bb{R}^{d}$ also satisfies $\| \phi(\cdot) \|_2 \leq 1$ such that the reward is bounded. We assume that the expert policy $\pi_E$ is attempting to optimize, without necessarily succeeding, some unknown reward function $R^{*}(h_t, a_t) = w^{*} \cdot \phi(h_t, a_t)$, where $w^{*}$ are the `true' reward weights. Given $R(h_t, a_t)$, the value of policy $\pi$ can be re-written as: $\mathbb{E}[V^{\pi}(H_0)] = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} w \cdot \phi(H_t, A_t)\mid \pi] = w \cdot \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t)\mid \pi]$, where the expectation is taken with respect to the sequence of histories and action pairs $(H_t,A_t)_{t\geq 0}$ obtained by acting according to $\pi$.
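This linear decomposition of the value can be checked numerically on a single synthetic trajectory. The sketch below uses hypothetical feature vectors and weights (nothing here comes from the paper's experiments); it only verifies that the discounted return under a linear reward equals the dot product of $w$ with the discounted cumulative feature vector:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T, d = 0.99, 50, 3

# Hypothetical per-step feature vectors phi(h_t, a_t) along one trajectory.
phis = rng.normal(size=(T, d))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)   # enforce ||phi||_2 <= 1

w = rng.uniform(-1.0, 1.0, size=d)
w /= np.abs(w).sum()                                  # enforce ||w||_1 <= 1

discounts = gamma ** np.arange(T)

# Discounted return with R(h_t, a_t) = w . phi(h_t, a_t) ...
value = float(discounts @ (phis @ w))

# ... equals w . mu, with mu the discounted cumulative feature vector.
mu = discounts @ phis

assert np.isclose(value, float(w @ mu))
```

The same identity is what lets the max-margin machinery compare policies entirely through their discounted cumulative feature vectors.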
The feature expectation of policy $\pi$, defined as the expected discounted cumulative feature vector obtained when choosing actions according to policy $\pi$ is $\mu^{\pi} = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t) \mid \pi] \in \bb{R}^{d}$ such that: $\mathbb{E}[V^{\pi}(H_0)] = w \cdot \mu^{\pi}$. Our aim is to recover the expert weights $w^{*}$ as well as find a policy $\pi$ that is close to the policy of the expert $\pi_E$. We take the max-margin IRL approach and we measure the similarity between the feature expectations of the expert's policy and the feature expectations of a candidate policy using $\| \mu^{\pi_E} - \mu^{\pi}\|_2$. In this batch IRL setting, we do not have knowledge of transition dynamics and we cannot sample more trajectories from the environment. Note that in this context, we are the first to model expert policies that depend on patient histories and not just current observations. \textbf{Counterfactual reasoning.} To explain the expert's behaviour in terms of their trade-off associated with "what if" outcomes, we use counterfactual reasoning to define the feature map $\phi(h_t, a_t)$ part of the reward $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$. We adopt the potential outcomes framework \citep{neyman1923applications, rubin1978bayesian, robins2008estimation}. Let $Y[a]$ be the potential outcome, either factual or counterfactual, for treatment $a\in \cl{A}$. Using the dataset $\cl{D}$ we learn feature map $\phi(h_t, a_t)$ such that: \begin{equation} \phi(h_t, a_t) = \mathbb{E}[Y_{t+1}[a_t] \mid h_t], \end{equation} where $\mathbb{E}[Y_{t+1}[a_t] \mid h_t]$ is the potential outcome for taking action $a_t$ at time $t$ given the history $h_t$. For the factual action $a_{t}$, assigned under policy $\pi(\cdot\mid h_{t})$, the factual outcome is $x_{t+1}$ and this is the same as the potential outcome $\mathbb{E}[Y_{t+1}[a_t]\mid h_t]$. 
The potential outcomes for the other actions $a_t\in\cl{A}$ are the counterfactual ones and they allow us to understand what would happen to the patient if they receive a different treatment $a_t$. To identify the potential outcomes from the batch data we make the standard assumptions of consistency, positivity and no hidden confounders as described in Appendix \ref{apx:counterfactuals}. No hidden confounders means that we observe all variables affecting the action assignment and potential outcomes. Positivity (overlap) means that at each timestep, every action has a non-zero probability and can be satisfied in this setting by having a stochastic expert policy. These assumptions are standard across methods for estimating counterfactual outcomes \citep{robins2000marginal, schulam2017reliable, bica2020crn}. Note that these assumptions are needed to be able to reliably perform causal inference using observational data. However, they do not constrain the batch IRL set-up. Estimating the potential outcomes from batch data poses additional challenges that need to be considered. The fact that the expert follows policies that consider the history of patient observations when deciding new actions gives rise to time-dependent confounding bias. Standard supervised learning methods for learning $\bb{E}[Y_{t+1}[a_t] \mid h_t]$ from $\cl{D}$ will be biased by the expert policy used in the observational dataset and will not be able to correctly estimate the counterfactual outcomes under alternative policies \citep{schulam2017reliable}. Methods for adjusting for the confounding bias involve using either inverse probability of treatment weighting \citep{robins2000marginal, lim2018forecasting} or building balancing representations \citep{bica2020crn}. Refer to Appendix \ref{apx:counterfactuals} for more details.
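Concretely, any such estimator can be wrapped into the feature map $\phi(h_t, a_t) = \mathbb{E}[Y_{t+1}[a_t] \mid h_t]$. The sketch below shows this interface with a purely illustrative stand-in estimator; the callable signature and the toy dynamics are assumptions for illustration, not the estimators cited above:

```python
import numpy as np

def make_feature_map(counterfactual_model):
    """Build phi(h, a) = E[Y_{t+1}[a] | h], rescaled so that ||phi||_2 <= 1.

    `counterfactual_model` stands in for any estimator of next-step potential
    outcomes fitted on the batch data; its (history, action) -> vector
    interface here is a hypothetical choice for illustration.
    """
    def phi(history, action):
        y = np.asarray(counterfactual_model(history, action), dtype=float)
        norm = np.linalg.norm(y)
        return y / norm if norm > 1.0 else y
    return phi

# Purely illustrative estimator: covariates decay toward zero, plus an
# additive action-specific effect.
def toy_model(history, action):
    return 0.9 * np.asarray(history[-1], dtype=float) + (0.2 if action == 1 else -0.2)

phi = make_feature_map(toy_model)
history = [np.array([1.5, -0.4])]      # history with a single covariate vector
w = np.array([-0.7, -0.3])             # preference weights over the two outcomes
rewards = {a: float(w @ phi(history, a)) for a in (0, 1)}
```

Rescaling keeps the reward bounded, matching the $\|\phi(\cdot)\|_2 \leq 1$ condition in the problem formulation.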
In the sequel, we consider the model for estimating counterfactuals as a black box such that the feature map $\phi(h_t, a_t)$ represents the effect of taking action $a_t$ for history $h_t$. The reward is then: \begin{equation} R(h_t, a_t) = w \cdot \phi(h_t, a_t) = w \cdot \mathbb{E}[Y_{t+1}[a_t] \mid h_t] \end{equation} Defining the reward function using counterfactuals gives an interpretable parameterization of doctor behavior: It allows us to interpret their behavior with respect to the importance weights implicitly assigned to the effects of their actions. This enables describing the relative trade-offs in treatment decisions. Note that we are \textit{not} assuming that the experts themselves actually compute these quantities (nor that they explicitly adopt the same causal inference assumptions); rather, we are simply providing a way to understand how decision-makers are effectively behaving (i.e. in terms of counterfactuals). \vspace{-3mm} \section{Batch inverse reinforcement learning using counterfactuals} Max-margin IRL \citep{abbeel2004apprenticeship} starts with an initial random policy $\pi$ and iteratively performs the following three steps to recover the expert policy and its reward weights: (1) estimate feature expectations $\mu^{\pi}$ of candidate policy $\pi$, (2) compute new reward weights $w$ and (3) find new candidate policy $\pi$ that is optimal for reward function $R(h_t, a_t) = w \cdot \phi(h_t, a_t)$. This approach finds a policy $\Tilde{\pi}$ that satisfies $\| \mu^{\pi_E} - \mu^{\Tilde{\pi}}\|_2 < \epsilon$ such that $\Tilde{\pi}$ has an expected value function close to that of the expert policy. \begin{wrapfigure}{t}{0.465\textwidth} \centering \vspace{-7mm} \includegraphics[width=0.465\textwidth]{figs/counterfactual_irl_fig.pdf}% \caption{Counterfactual inverse reinforcement learning (CIRL).
Counterfactuals are used to define $\phi(h, a)$, to estimate feature expectations $\mu^{\pi}$ of candidate policy $\pi$ in batch setting and to learn optimal policy for reward weights $w$.} \label{fig:counterfactuals_IRL} \vspace{-10mm} \end{wrapfigure} The expert feature expectations can be estimated empirically from the dataset $\cl{D}$ using: \begin{equation}\label{eqn:expert_feature_expectations}\mu^{\pi_E} = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T^{i}} \gamma^{t}\phi(h^{i}_t, a^{i}_t). \end{equation} \noindent In the batch setting, we cannot estimate the feature expectations of candidate policies by taking the sample mean of on-policy roll-outs: $\mu^{\tilde{\pi}} \neq \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T^{i}} \gamma^{t}\phi(h^{i}_t, \tilde{\pi}(h^{i}_t))$. To address this off-policy nature of estimating feature expectations, we introduce a new method that leverages the estimated counterfactuals. We also make use of the counterfactuals to learn optimal policies for different reward weights. Figure \ref{fig:counterfactuals_IRL} illustrates how we integrate "what if" reasoning into batch IRL. \subsection{Counterfactual $\mu$-learning} Similar to the approach proposed by \cite{klein2012inverse, lee2019truly}, we consider a history-action feature expectation defined as follows: $\mu^{\pi}(h, a) = \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} \phi(H_t, A_t)|\pi, H_0 = h, A_0 = a]$, where the first action $a$ can be chosen randomly and for $t\geq 1$, $A_t \sim \pi(\cdot \mid H_t)$. This can be re-written as: \begin{small} \begin{eqnarray} \mu^{\pi}(h, a) &=& \phi(h, a) + \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\sum_{t=1}^{\infty} \gamma^{t} \phi(H_t, A_t) \mid \pi, H_1 = h^{\prime}, A_1 = a^{\prime}] \\ &=& \phi(h, a) + \gamma \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\mu^{\pi}(h^{\prime}, a^{\prime})], \end{eqnarray} \end{small}% where $h^{\prime}$ is the next history.
Notice the analogy between $\mu^{\pi}(h, a)$ and the action-value function: \begin{small} \begin{eqnarray} Q^{\pi}(h, a) &=& R(h, a) + \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [\sum_{t=1}^{\infty} \gamma^{t} R(H_t, A_t) \mid \pi, H_1 = h^{\prime}, A_1 = a^{\prime}] \\ &=& R(h, a) + \gamma \mathbb{E}_{h^{\prime}, a^{\prime} \sim \pi(\cdot\mid h^{\prime})} [Q^{\pi}(h^{\prime}, a^{\prime})], \end{eqnarray} \end{small}% which allows us to use temporal difference learning to estimate feature expectations \citep{sutton1998introduction}. Existing methods for estimating feature expectations fall into two extremes: (1) model-based (online) IRL approaches learn a model of the world and then use the model as a simulator to obtain on-policy roll-outs \citep{abbeel2004apprenticeship} and (2) batch IRL approaches use $Q$-learning (or alternative methods) for off-policy evaluation \citep{lee2019truly}, which can only be used to evaluate policies similar to the expert policy and require a warm start. In our case, the counterfactual model allows us to compute $h^{\prime} = (h, a, \mathbb{E}[Y[a]\mid h])$ for any $h \in \cl{D}$ and any arbitrary action $a$. Thus, we propose counterfactual $\mu$-learning, a novel method for estimating feature expectations that uses these counterfactuals as part of temporal difference learning with $1$-step bootstrapping. This approach falls in-between (1) and (2) and allows us to estimate feature expectations for any candidate policy $\pi$ in the batch IRL setting.
The counterfactual $\mu$-learning algorithm learns the $\mu$-values for policy $\pi$ iteratively by updating the current estimates of the $\mu$-values with the feature map plus the $\mu$-values obtained by following policy $\pi$ in the new counterfactual history $h^{\prime} = (h, a, \mathbb{E}[Y[a]|h])$: \begin{equation} \hat{\mu}^{\pi}(h,a) \gets \hat{\mu}^{\pi}(h,a) + \alpha(\phi(h,a) + \gamma\mathbb{E}_{a'\sim\pi(\cdot|h')}[\hat{\mu}^{\pi}(h',a^{\prime})]- \hat{\mu}^{\pi}(h,a)), \end{equation} where $\alpha$ is the learning rate. We use a recurrent network with parameters $\theta$ to approximate $\hat{\mu}^{\pi}(h,a\mid \theta)$ and we train it by minimizing the sequence of loss functions $\cl{L}_i$, which change at every iteration $i$: \begin{small} \begin{align} \cl{L}_i(\theta_i) = \mathbb{E}_{h\sim \cl{D}} [|| y_i - \hat{\mu}^{\pi}(h,a\mid \theta_i) ||_2] && \theta_{i+1} \gets \theta_{i} - \alpha \nabla \cl{L}_i(\theta_i), \end{align} \end{small}% where the action $a$ can be chosen randomly from $\cl{A}$ and $y_i = \phi(h,a) + \gamma\mathbb{E}_{a'\sim\pi(\cdot|h')}[\hat{\mu}^{\pi}(h',a^{\prime}\mid \theta_{i-1})] $ is the target for iteration $i$. The parameters for the previous iteration $\theta_{i-1}$ are held fixed when optimizing $\cl{L}_i(\theta_i)$. Refer to Appendix \ref{apx:mu_learning} for full details of the counterfactual $\mu$-learning algorithm.
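The update above can be illustrated in tabular form under strong simplifying assumptions: histories collapse to a finite set of states and the counterfactual model is a deterministic lookup (the paper's actual version approximates $\hat{\mu}^{\pi}$ with a recurrent network over full histories). All quantities below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy finite setting: "histories" collapsed to discrete states so the
# mu-values fit in a table.
n_states, n_actions, d = 4, 2, 3
gamma, alpha = 0.5, 0.1

phi = rng.normal(size=(n_states, n_actions, d))
phi /= np.linalg.norm(phi, axis=-1, keepdims=True)      # ||phi||_2 <= 1

# Stand-in counterfactual model: a deterministic next "history" for (s, a).
next_state = rng.integers(n_states, size=(n_states, n_actions))

# Fixed candidate policy pi(a | s) whose feature expectations we evaluate.
pi = rng.dirichlet(np.ones(n_actions), size=n_states)

mu = np.zeros((n_states, n_actions, d))
for _ in range(5000):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))        # first action chosen at random
    s2 = next_state[s, a]                   # counterfactual next history h'
    bootstrap = pi[s2] @ mu[s2]             # E_{a' ~ pi(.|s2)}[mu(s2, a')]
    target = phi[s, a] + gamma * bootstrap
    mu[s, a] += alpha * (target - mu[s, a])  # 1-step TD update
```

Because the counterfactual model supplies the next history for any action, the candidate policy being evaluated need not resemble the behaviour policy that generated the data.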
The feature expectations for the policy $\pi$ are given by $\hat{\mu}^{\pi} = \mathbb{E}_{H_0,A_0\sim \pi(\cdot|H_0)}[\hat{\mu}^{\pi}(H_0, A_0)]$, which can be estimated empirically from the observational dataset $\mathcal{D}$ using $\hat{\mu}^{\pi} = \frac{1}{N}\sum_{i=1}^N\sum_{a\in\mathcal{A}} \hat{\mu}^{\pi}(h_0^i, a)\pi(a\mid h_0^i).$ \begin{algorithm}[t] \begin{algorithmic}[1] \STATE \textbf{Input}: Batch dataset $\cl{D}$, max iterations $n$, convergence threshold $\epsilon$, \\ feature map $\phi(h_t, a_t)$ $=$ $\mathbb{E}[Y_{t+1}[a_t] | h_t]$ \STATE $\mu^{\pi_E} \gets$ compute $\pi_E$'s feature expectations {\small(Equation \ref{eqn:expert_feature_expectations})} \STATE $\hphantom{\mu^{\pi_E}}\mathllap{w_0~} \gets $ random initial reward weights, $\hphantom{\mu^{\pi_E}}\mathllap{\pi_0~} \gets $ compute optimal policy for $R_0 = w_0 \cdot \phi$ \STATE $\hphantom{\mu^{\pi_E}}\mathllap{\mu^{\pi_0}} \gets $ compute $\pi_0$'s feature expectations\hfill{\small(counterfactual $\mu$-learning)} \STATE $\Pi = \{\pi_0\}, \Delta = \{\mu^{\pi_0}\}, \bar{\mu}_0 = \mu^{\pi_0}$ \FOR{$k = 1$ to $n$} \STATE $\hphantom{\mu^{\pi_k}}\mathllap{w_k~}=\mu^{\pi_E}-\bar{\mu}_{k-1}$, $\hphantom{\mu^{\pi_k}}\mathllap{\pi_k~} \gets$ compute optimal policy for $R_k = w_k \cdot \phi$ \STATE $\mu^{\pi_k} \gets$ compute $\pi_k$'s feature expectations\hfill{\small (counterfactual $\mu$-learning)} \STATE $\Pi = \Pi \cup \{\pi_k\}, \Delta = \Delta \cup \{\mu^{\pi_k}\}$ \STATE Orthogonally project $\mu^{\pi_E}$ onto line through $\bar{\mu}_{k-1},\mu^{\pi_k}$:\\ $\medmath{\bar{\mu}_k =\dfrac{(\mu^{\pi_k} -\bar{\mu}_{k-1})^T (\mu^{\pi_E} -\bar{\mu}_{k-1}) }{(\mu^{\pi_k} -\bar{\mu}_{k-1})^T (\mu^{\pi_k} -\bar{\mu}_{k-1})} (\mu^{\pi_k} -\bar{\mu}_{k-1})+ \bar{\mu}_{k-1}},\,\,\,\,\,\,\,\,\,\,\,\,\, t = \lVert \mu^{\pi_E} - \bar{\mu}_k \rVert_2$ \STATE \textbf{if} $t < \epsilon $ \textbf{then} break \ENDFOR \STATE $K = \arg\min_{k:\mu^{\pi_k} \in \Delta} \|\mu^{\pi_E} - \mu^{\pi_k}\|_2$,
$\tilde{R}(h, a) = w_K \cdot \phi(h, a)$ \\ \STATE \textbf{Output}: $\tilde{R}$, $\Delta$, $\Pi$ \caption{(Batch, Max-Margin) CIRL} \label{alg:cirl} \end{algorithmic} \end{algorithm} \vspace{-2mm} \subsection{Finding optimal policy for given reward weights} During each iteration of max-margin IRL, we obtain a candidate policy to evaluate by finding the optimal policy for a given vector of reward weights. We use deep recurrent $Q$-learning \citep{hausknecht2015deep}, a model-free approach for learning $Q$-values for the reward function $R(h, a) = w \cdot \phi(h, a)$. The counterfactuals are used to compute $\phi(h, a)$ and to estimate the next history for the temporal difference updates. See Appendix \ref{apx:recurrent_q_learning} for details. After estimating the $Q$-values $Q(h, a)$ for history $h$ and action $a$, a new candidate policy is obtained using: $\pi(a\mid h)~= \mathbbm{1}_{a=\arg\max_{a'}Q(h, a')}$. \vspace{-2mm} \subsection{Counterfactual inverse reinforcement learning algorithm (CIRL)} Algorithm \ref{alg:cirl} describes our proposed counterfactual inverse reinforcement learning (CIRL) method for the batch setting. CIRL is based on the projection algorithm proposed by \citet{abbeel2004apprenticeship} and iteratively updates the reward weights to minimize the margin between the expert's feature expectations and the feature expectations of intermediate policies. CIRL uses our proposed counterfactual $\mu$-learning algorithm for estimating the feature expectations of intermediate policies in an off-policy manner that is suitable for the batch setting. Compared to the batch IRL algorithm proposed by \citet{lee2019truly}, which requires the initial policy $\pi_0$ to already be similar to the expert policy, CIRL works for any initial policy $\pi_0$ that is optimal for the randomly initialized reward weights $w_0$.
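The weight update $w_k = \mu^{\pi_E} - \bar{\mu}_{k-1}$ and the orthogonal projection step of Algorithm \ref{alg:cirl} amount to a few lines of linear algebra; a NumPy sketch (the variable names are ours):

```python
import numpy as np

def projection_step(mu_expert, mu_bar_prev, mu_new):
    """Orthogonal projection step of max-margin IRL (Algorithm 1).

    Projects mu_expert onto the line through mu_bar_prev and mu_new, and
    returns the updated mu_bar together with the remaining margin t.
    (Assumes mu_new != mu_bar_prev, otherwise the direction d is degenerate.)
    """
    d = mu_new - mu_bar_prev
    coef = (d @ (mu_expert - mu_bar_prev)) / (d @ d)
    mu_bar = mu_bar_prev + coef * d
    t = np.linalg.norm(mu_expert - mu_bar)
    return mu_bar, t

# The next candidate reward weights are then simply w_k = mu_expert - mu_bar.
```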
Similarly to \citet{choi2011inverse}, the CIRL algorithm returns the reward function $\tilde{R}(h, a)$ that results in a policy with feature expectations closest to those of the expert policy. We show experimentally that the reward that yields the closest feature expectations is similar to the true underlying reward function of the expert. CIRL also returns the set of policies tried $\Pi$ and their feature expectations $\Delta$, which allows us to compute a mixing policy that yields similar performance to the expert policy \citep{abbeel2004apprenticeship}. Let $\Tilde{\mu}$ be the closest point to $\mu^{\pi_E}$ in the convex closure of $\Delta = \{\mu^{\pi_0}, \mu^{\pi_1} \dots \mu^{\pi_k}\}$, which can be computed by solving the quadratic programming problem: \begin{small} \begin{align} \min \| \mu^{\pi_E} - \mu \|_2 \text{ s.t. } \mu = \sum_i \lambda_i \mu^{\pi_i}, \lambda_i \geq 0, \sum_i \lambda_i = 1. \end{align} \end{small}% From the termination criterion of Algorithm \ref{alg:cirl}, $\mu^{\pi_E}$ is separated from the points $\mu^{\pi_i}$ by a margin of at most $\epsilon$. Thus, the solution $\Tilde{\mu}$ satisfies $\| \mu^{\pi_E} - \tilde{\mu} \|_2 \leq \epsilon$. To obtain a policy that is close to the performance of the expert policy, we mix together the policies $\Pi = \{\pi_0, \dots \pi_k\}$ returned by Algorithm \ref{alg:cirl}, where the probability of selecting $\pi_i$ is $\lambda_i$. \vspace{-3mm} \section{Experiments} We evaluate the ability of CIRL to recover the preferences of experts over the ``what if'' outcomes of actions. These preferences are reflected in the magnitudes of the recovered reward weights. Since we do not have access to the underlying reward weights of experts in real data, we first validate the method in a simulated environment. To show the applicability of CIRL in healthcare, we also perform a case study on an ICU dataset from the MIMIC III database \citep{johnson2016mimic}.
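The quadratic program above for the mixing weights $\lambda_i$ is a least-squares problem over the probability simplex. A minimal sketch that solves it with the Frank-Wolfe method (our choice of solver for illustration; any standard QP solver works equally well):

```python
import numpy as np

def mixing_weights(mu_expert, mus, iters=1000):
    """Frank-Wolfe sketch of the simplex-constrained least-squares problem:
    find lambda >= 0, sum(lambda) = 1, minimizing ||mu_expert - lambda @ mus||.

    mus: array of shape (k, d), one feature-expectation vector per policy.
    """
    k = len(mus)
    lam = np.full(k, 1.0 / k)          # start at the uniform mixture
    for t in range(iters):
        x = lam @ mus                   # current mixed feature expectations
        grad = 2.0 * mus @ (x - mu_expert)  # gradient w.r.t. lambda
        i = int(np.argmin(grad))        # best simplex vertex
        step = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        lam = (1.0 - step) * lam
        lam[i] += step
    return lam
```

The returned weights stay on the simplex by construction, so they can be used directly as the probabilities of selecting each policy $\pi_i$.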
\noindent\textbf{Benchmarks.} For our proposed CIRL method, we use the Counterfactual Recurrent Network, a state-of-the-art model for estimating counterfactual outcomes in a temporal setting \citep{bica2020crn}. Note that other models for this task are also applicable \citep{lim2018forecasting}. Refer to Appendix \ref{apx:impl_cirl} for details. We benchmark CIRL against MB-IRL: model-based IRL\textemdash i.e. inverse reinforcement learning with model-based policy evaluation (e.g. \citet{yin2016synthesizing, nagabandi2018neural, kaiser2019model, buesing2018woulda}). We consider two versions of this benchmark: MB($h$)-IRL, which uses the patient history, and MB($x$)-IRL, which only uses the current observations to define the policy. For both MB($x$)-IRL and MB($h$)-IRL we define the reward as a weighted sum of counterfactual outcomes, but these methods instead use standard supervised methods to estimate the next history $h^{\prime}$ needed for the counterfactual $\mu$-learning algorithm. These benchmarks are meant to highlight the need to handle the bias from time-dependent confounders using a suitable counterfactual model, as well as the importance of modelling the patient history. We also compare against the deep successor feature networks (DSFN) proposed by \citet{lee2019truly}, which currently represent the state-of-the-art batch IRL method for the MDP setting. To show that their approach for estimating feature expectations in the batch setting is suboptimal, we extend their method to also incorporate histories in the DSFN($h$) benchmark. Implementation details of the benchmarks can be found in Appendix \ref{apx:impl_benchmarks}. \vspace{-2mm} \subsection{Extracting different types of expert behaviour} \noindent\textbf{Simulated environment.} We propose an environment that uses a general data simulation involving $p$-order auto-regressive processes. To analyze different types of expert behaviour (e.g.
treating more/less aggressively), we simulate data for patient features representing disease progression ($x$), e.g. tumour volume, side effects ($z$), and an action ($a$) indicating the binary application of treatment. For time $t$, we model the evolution of the patient covariates under the treatments as follows: \begin{footnotesize} \begin{align} x_t = \frac{1}{p} \sum_{i=1}^{p} x_{t-i} - 2.5\sum_{i=1}^{p} a_{t-i} + 0.5p + \epsilon && z_t = \frac{1}{p} \sum_{i=1}^{p} z_{t-i} + 0.5\sum_{i=1}^{p} a_{t-i} - p + \eta \label{eq:data_sim} \end{align} \end{footnotesize}% where $p=5$ and $\epsilon \sim \cl{N}(0, 0.1^2)$, $\eta \sim \cl{N}(0, 0.1^2)$ are noise terms. The initial values for the features are sampled as follows: $x_0 \sim \cl{N}(30, 5)$ and $z_0 \sim \cl{N}(2, 1)$. We set $x_{max} = 50$ and $z_{max} = 15$. The trajectory of the patient terminates when either $x_t = 0$, $x_t \geq x_{max}$, $z_t \geq z_{max}$ or $t\geq20$. The tumour volume $x_t$, denoting the disease progression, decreases when we give treatment and increases otherwise. Conversely, the side effects $z_t$ increase when we give treatment and decrease otherwise. We define a linear reward for taking action $a_t$ given history $h_t = (x_{0:t}, z_{0:t}, a_{0:t-1})$ as follows: \begin{small} \begin{equation} R(h_t, a_t) = w_1 \frac{x_{t+1}}{x_{max} - x_{min}} + w_2 \frac{z_{t+1}}{z_{max} - z_{min}} \end{equation} \end{small}% where $w = [w_1, w_2]$, $||w||_{1} \leq 1$, and $x_{t+1}$ and $z_{t+1}$ are simulated according to Equation \ref{eq:data_sim} to take into account the effect of action $a_t$ for history $h_t$. The features are normalized to $[0, 1]$. The best scenario for a patient is when both the side effects and the tumour volume are zero, and a doctor attempting to achieve this will have negative reward weights. However, different settings of the reward weights will result in different expert behaviours.
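The covariate dynamics of Equation \ref{eq:data_sim} can be reproduced in a few lines. This sketch assumes that the first $p$ steps average over whatever history is available (the warm-up handling is not specified in the text), and `policy` is any callable returning a binary action:

```python
import numpy as np

def simulate_trajectory(policy, p=5, x_max=50.0, z_max=15.0, t_max=20, rng=None):
    """Simulate one patient trajectory under the AR(p) dynamics above.

    policy(x_hist, z_hist, a_hist) -> binary action (0 or 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = [rng.normal(30.0, 5.0)]   # disease progression, x_0 ~ N(30, 5)
    z = [rng.normal(2.0, 1.0)]    # side effects, z_0 ~ N(2, 1)
    a = []
    for t in range(t_max):
        a.append(policy(x, z, a))
        k = min(p, len(x))        # warm-up: average over available history
        x_next = (np.mean(x[-k:]) - 2.5 * np.sum(a[-p:]) + 0.5 * p
                  + rng.normal(0.0, 0.1))
        z_next = (np.mean(z[-k:]) + 0.5 * np.sum(a[-p:]) - p
                  + rng.normal(0.0, 0.1))
        x.append(x_next)
        z.append(z_next)
        # Termination conditions from the text.
        if x_next <= 0.0 or x_next >= x_max or z_next >= z_max:
            break
    return np.array(x), np.array(z), np.array(a)
```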
For instance, for $w_1 = -0.7$ and $w_2 = -0.3$, the expert policy will focus more on the disease progression and will consequently treat more aggressively, while this behaviour will be reversed for reward weights set to $w_1 = -0.3$ and $w_2 = -0.7$. We used deep recurrent $Q$-learning \citep{hausknecht2015deep} to find a stochastic expert policy that optimizes the reward function for different settings of the weights. The batch dataset $\cl{D}$ consists of 10000 trajectories sampled from the expert policy. Refer to Appendix E for more details. \begin{wrapfigure}{t}{0.42\textwidth} \centering \vspace{-5mm} \includegraphics[width=0.42\textwidth]{figs/reward_weights.pdf}% \caption{Reward weights recovered by benchmarks over 10 runs. The weights of the expert are $w_1 = -0.3$ and $w_2 = -0.7$.}% \label{fig:reward_weights} \vspace{-5mm} \end{wrapfigure} \textbf{Recovering decision making preferences of experts}. We first evaluate the benchmarks on their ability to recover the weights of the reward function optimized by the expert for the experimental setting with $\gamma = 0.99$, $w_1 = -0.3$ and $w_2 = -0.7$. Note that DSFN does not provide interpretable reward weights and thus cannot be used for understanding the trade-offs in the expert behavior. We train each benchmark 10 times and plot in Figure \ref{fig:reward_weights} the reward weights obtained for the different runs. Our proposed CIRL method performs best at recovering the preferences of the expert, which in this case is to treat less aggressively. While the MB$(h)$-IRL method also recovers the correct trade-offs in the expert behavior, the computed weights have a much higher variance. Conversely, the MB$(x)$-IRL method, which does not consider the patient history, fails to recover the underlying weights of the expert policy. \textbf{Matching the expert policy}.
We evaluate the benchmarks' ability to recover policies that match the performance of the expert policy for two settings of the discount factor $\gamma \in \{0.99, 0.5\}$. A lower $\gamma$ indicates that the expert is optimizing for the immediate effect of actions, while a higher $\gamma$ means they consider the long-term effect of actions. For each $\gamma$ we learn expert policies for different reward weights and use these expert policies to generate the batch datasets. We evaluate the policies learned by the benchmarks using two metrics: the cumulative reward for running the policy in the simulated environment and the accuracy in matching the expert policy (computed as described in Appendix \ref{apx:accuracy}). We report in Tables \ref{tab:results_cummulative_rewards} and \ref{tab:results_accuracy} the average results and their standard error over 1000 sampled trajectories from the environment. CIRL recovers a policy that has the closest cumulative reward to the expert policy and that can best match the treatments assigned by the expert.
\begin{table}[h] \begin{center} \vspace{-0.1cm} \caption{Mean cumulative reward and standard deviation for running learnt policy in the environment.} \label{tab:results_cummulative_rewards} \centering \setlength\tabcolsep{3.0pt} \begin{footnotesize} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|c|c|c|c|c|c} \toprule & \multicolumn{3}{c|}{$\gamma=0.99$} & \multicolumn{3}{c}{$\gamma=0.5$} \\ \midrule \makecell{Reward\\ weights}& \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} & \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} \\ \midrule MB($x$)-IRL & $-3.78 \pm 0.02$ & $-4.42 \pm 0.05$ & $-4.90 \pm 0.05$ & $-4.51 \pm 0.05$ & $-4.53 \pm 0.05$ & $-4.54\pm 0.04$ \\ MB($h$)-IRL & $-3.23 \pm 0.02$ & $-4.10 \pm 0.02$ & $-4.63 \pm 0.04$ & $-4.43 \pm 0.04$ & $-3.54 \pm 0.05$ & $-4.35 \pm 0.03$ \\ DSFN & $-3.56 \pm 0.06$ & $-4.32 \pm 0.04$ & $-3.77 \pm 0.05$ & $-4.11 \pm 0.06$ & $-3.07 \pm 0.03$ & $-4.67 \pm 0.07$ \\ DSFN($h$) & $-3.31 \pm 0.07$ & $-4.33 \pm 0.07$ & $-3.60 \pm 0.07$ & $-3.95 \pm 0.07$ & $-3.05 \pm 0.05$ & $-4.61 \pm 0.05$ \\ CIRL & $- \bm{2.89} \pm \bm{0.02}$ & $- \bm{3.92} \pm \bm{ 0.03}$ & $-\bm{3.41} \pm \bm{0.05}$ & $-\bm{2.79} \pm \bm{0.02}$ & $- \bm{2.91} \pm \bm{0.02}$ & $- \bm{4.27} \pm \bm{0.03}$ \\ \midrule Expert & $-2.72 \pm 0.02$ & $-3.61 \pm 0.02$ & $-2.81 \pm 0.01$ & $-2.65 \pm 0.02$ & $-2.36 \pm 0.01$ & $-3.97\pm 0.03$ \\ \bottomrule \end{tabular} \end{adjustbox} \end{footnotesize} \vspace{-0.4cm} \end{center} \end{table} \begin{table}[b] \begin{center} \vspace{-0.1cm} \caption{Average accuracy and standard deviation for matching the actions in the expert policy.} \label{tab:results_accuracy} \centering \setlength\tabcolsep{3.6pt} \begin{footnotesize} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|c|c|c|c|c|c} \toprule & \multicolumn{3}{c|}{$\gamma=0.99$} & 
\multicolumn{3}{c}{$\gamma=0.5$} \\ \midrule \makecell{Reward\\ weights}& \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} & \makecell{$w_1 = - 0.3$\\ $w_2 = -0.7$} & \makecell{$w_1 = - 0.7$ \\ $w_2 = -0.3$}& \makecell{$w_1 = - 0.5$\\ $w_2 = -0.5$} \\ \midrule MB($x$)-IRL & $62.5 \pm 0.41\%$ & $61.4 \pm 0.81\%$ & $54.6 \pm 0.56\%$ & $52.4 \pm 0.63\%$ & $60.1 \pm 0.39\%$ & $71.8 \pm 0.72\%$ \\ MB($h$)-IRL & $77.8 \pm 0.31\%$ & $70.2 \pm 0.45\%$ & $71.4 \pm 0.69\%$ & $66.3 \pm 0.58\%$ & $70.2 \pm 0.71\%$ & $75.6 \pm 0.52\%$ \\ DSFN & $75.4 \pm 0.32\%$ & $68.4 \pm 0.21\%$ & $73.4 \pm 0.45\%$ & $80.2 \pm 0.37\%$ & $70.8 \pm 0.24\% $ & $69.8 \pm 0.44\% $ \\ DSFN($h$) & $76.3 \pm 0.37\%$ & $67.5\pm 0.32\%$ & $80.6 \pm 0.54\%$ & $80.4\pm 0.56\%$ & $71.0 \pm 0.35\%$ & $70.2 \pm 0.47\%$ \\ CIRL & $\bm{81.8} \pm \bm{0.42\%}$ & $\bm{75.5} \pm \bm{0.51\%}$ & $\bm{83.7} \pm \bm{0.76\%}$ & $\bm{89.5} \pm \bm{0.37\%}$ & $\bm{73.2} \pm \bm{0.43\%}$ & $\bm{80.4} \pm \bm{0.42\%}$ \\ \bottomrule \end{tabular} \end{adjustbox} \end{footnotesize} \vspace{-0.7cm} \end{center} \end{table} \subsection{Case study on real-world dataset: MIMIC III} Suppose we want to explain the decision-making process of doctors assigning antibiotics to patients in the ICU. For this purpose, we consider a dataset with 6631 patients that have received antibiotics during their ICU stay, extracted from the Medical Information Mart for Intensive Care (MIMIC III) database \citep{johnson2016mimic}. 
\begin{wrapfigure}{r}{0.3\textwidth} \centering \vspace{-9mm} \includegraphics[width=0.3\textwidth]{figs/radar_plot_reward_weights.pdf}% \caption{Radar plot of reward weight magnitudes for assigning antibiotics.}% \label{fig:reward_weights_mimic} \vspace{-4mm} \end{wrapfigure} We used CIRL, with $\gamma=0.99$, to recover the policy and the reward weights of doctors administering antibiotics, to understand their preferences over the effect of antibiotics on the patient features. The relative magnitudes of the reward weights for the counterfactual outcomes of the patient features considered are illustrated in Figure \ref{fig:reward_weights_mimic}. CIRL found that reducing temperature had the highest weight in the reward function of the expert, followed by WBC. This corresponds to known medical guidelines \citep{marik2000fever, palmer2008aerosolized}. {\color{black} Sepsis is a leading cause of morbidity and mortality in the ICU, and several studies have \begin{wraptable}{r}{0.3\columnwidth} \centering \vspace{-60mm} \begin{scriptsize} \setlength\tabcolsep{3.6pt} \begin{tabular}{l|c} \toprule & \makecell{Accuracy} \\ \midrule MB($x$)-IRL & $70.1 \pm 0.11\%$\\ MB($h$)-IRL & $77.5 \pm 0.15\%$ \\ DSFN & $73.5 \pm 0.25\%$ \\ DSFN($h$) & $75.3 \pm 0.19\%$ \\ CIRL & $\bm{83.4} \pm \bm{0.17\%}$ \\ \bottomrule \end{tabular} \caption{Accuracy on matching expert actions.} \vspace{-5mm} \label{tab:accuracy_mimic} \end{scriptsize} \end{wraptable} shown that early administration of antibiotics is crucial to decrease the risk of adverse outcomes \citep{zahar2011outcomes}. Fever and elevated WBC are among a small subset of variables that indicate a systemic inflammatory response, concerning for the development of sepsis \citep{neviere2017sepsis}.
While these findings are not specific to bacterial infection, the risk of failing to treat a potentially serious infection often outweighs the risk of inappropriate antibiotic administration, thereby driving clinicians to prescribe antibiotics in the setting of these abnormal findings. Similarly, the decision to discontinue antibiotics is complex, but it is often supported by signs of a resolution of infection, which include normalization of body temperature and a downtrending WBC. As such, our finding that the two highest reward weights for the administration of antibiotics in the ICU correspond to temperature and WBC is consistent with clinical practice. Moreover, note that the model is simply identifying the factors that drive the decision-making of the clinicians represented in this dataset.} To verify that explainability does not come at the cost of accuracy, we also evaluate the benchmarks on matching the expert actions in Table \ref{tab:accuracy_mimic}. \vspace{-3mm} \section{Discussion} In this paper, we propose building interpretable parametrizations of sequential decision-making by explaining an expert's behaviour in terms of their preferences over ``what-if'' outcomes. To achieve this, we introduce CIRL, a new method that incorporates counterfactual reasoning into batch IRL: counterfactuals are used to define the feature map part of the reward function, but also to tackle the off-policy nature of estimating feature expectations in the batch setting. The reward weights recovered by CIRL indicate the relative preferences of the expert over the counterfactual outcomes of their actions. Our aim is to provide a description of behavior, i.e. how the expert is effectively behaving under our interpretable parameterization of the reward based on counterfactuals. We are not assuming that the experts actually operate under this specific (linear, in our case) model or that they compute the exact counterfactuals.
Instead, our purpose is to show that we can explain an agent's behavior on the basis of counterfactuals, which is useful in that it allows us to audit experts, sanity-check their policies, and find variation in practice. Further discussion can be found in Appendix \ref{apx:limitations}. There are several limitations of our method and directions for future work. While our method considers reward functions that are linear in the features, one way of extending it to handle more complex reward functions is to use domain knowledge to define the feature map as a non-linear function over the counterfactual outcomes. Nevertheless, these functions should be defined in a way that still allows us to obtain interpretable explanations of the expert's behaviour. Moreover, although time-invariant reward weights $w$ are standard in the IRL literature \citep{abbeel2004apprenticeship, choi2011inverse, lee2019truly}, time-variant rewards/policies have been considered in the dynamic treatment regimes (reinforcement learning) literature \citep{chakraborty2013statistical, zhang2019near}. Thus, another direction for future work would be to extend our method to consider non-stationary policies and reward weights that can change over time. \section*{Acknowledgments} We would like to thank the reviewers for their valuable feedback. The research presented in this paper was supported by The Alan Turing Institute, under the EPSRC grant EP/N510129/1, by Alzheimer's Research UK (ARUK), by the US Office of Naval Research (ONR), and by the National Science Foundation (NSF) under grant numbers 1407712, 1462245, 1524417, 1533983, and 1722516. The authors are also grateful to Brent Ershoff for the insightful discussions and help with interpreting the medical results on MIMIC III.
\section{Introduction} The condensation of water vapor onto solid surfaces is integral to many natural processes, including dew formation\cite{Beysens1995} and fog harvesting by animals, like the Namib Desert Beetle\cite{Parker2001,Park2016} and Litoria caerulea, a green tree frog in Australia,\cite{Tracy2011} and plants, such as the Namib desert plant.\cite{Malik2014} Water condensation is also intrinsic to various technological applications like fog harvesting,\cite{Milani2011,Lee2012} seawater desalination,\cite{Khawaji2008} and heat exchangers for power generation\cite{Beer2007} and refrigeration.\cite{Barbosa2012,Kim2002} In all cases, efficient condensation and removal (or `collection') of the condensed liquid is essential. The entire process consists of a series of steps, namely the nucleation of liquid on an initially dry solid surface, the subsequent growth of the liquid phase in the form of a film or droplets, and finally the removal of the latter. At first glance, hydrophilic surfaces may seem the most natural choice to promote condensation. Yet, it has been known for decades that plain hydrophilic surfaces are actually not the best choice because they promote the formation of a condensed liquid film (for a review, see \cite{Rose2002}). Compared to films, drops are much easier to manipulate and transport in desired directions by suitable topographical and chemical patterns on the surface. Moreover, particularly in heat transfer, thick films of condensed liquid form a barrier of poor thermal conductivity that prevents direct contact of the vapor with the cooled surface of the condenser and thus reduces the overall heat transfer. Hence, it is usually advantageous to use partially wetting solid surfaces, where condensing vapor forms discrete drops that leave parts of the solid surface in direct contact with the to-be-condensed vapor.
As these discrete drops are removed, they expose more bare surface again and thereby free space for a subsequent generation of condensing drops. As in the case of biological or technological fog harvesting surfaces, efficient removal of the condensate drops is therefore essential for the overall performance of the system. Throughout recent years, various efforts have been made to optimize dropwise condensation and the subsequent removal of drops using suitable topographical and chemical surface patterns.\cite{Hou2015,Mondal2015,Ghosh2014,Narhe2004,Boreyko2009,Miljkovic2012,Miljkovic2013b,Weisensee2017,Anand2012,Tsuchiya2017} Such patterns generate an energy landscape in which condensing drops initially form at either random locations or at preferred hydrophilic nucleation sites. As drops grow with time, they experience the imprinted gradients in wettability, hit geometric boundaries, and coalesce with other drops. In each of these situations, the original configuration of the drop typically becomes unstable and the drop moves towards a location of lower energy. Examples of such surface patterns include surfaces with alternating hydrophobic and hydrophilic stripes,\cite{Hou2015,Mondal2015,Ghosh2014} surfaces with conical geometries,\cite{Park2016} superhydrophobic surfaces with grooves\cite{Narhe2004} or nanostructures,\cite{Boreyko2009,Miljkovic2012,Miljkovic2013b} as well as liquid-infused surfaces.\cite{Weisensee2017,Anand2012,Tsuchiya2017} The resulting drop displacements are either driven entirely by capillary and wetting forces or they may be assisted by gravity in case of vertically oriented condenser surfaces.
In all cases, drops only move once the driving forces are strong enough to overcome the pinning due to microscopic heterogeneities.\cite{DeRuiter2015} The latter are usually quantified by specifying the contact angle hysteresis $\Delta \cos\theta=\cos\theta_r-\cos\theta_a$, where $\theta_r$ and $\theta_a$ are the receding and advancing contact angles. This explains the interest in surfaces with low contact angle hysteresis, such as superhydrophobic and liquid-infused surfaces, for heat transfer applications with dropwise condensation. The approaches described above all rely on passive wettability patterns imprinted onto the solid surface upon fabrication. In contrast, electrowetting (EW) allows for active tuning of the wettability and controlled transport of drops of conductive liquids such as water on partially wetting hydrophobic surfaces.\cite{Pollack2000,Cho2003,Mugele2005,MugeleBook} While EW is generically used in combination with a wire that is immersed directly into the liquid, capacitive coupling between the drop(s) and suitably structured co-planar electrodes on the substrate, covered by a thin hydrophobic polymer layer, allows for similarly efficient control of the wettability locally above the activated electrodes.\cite{Yi2006,MugeleBook} By patterning the electrodes, wettability patterns such as simple traps for drops can be generated and switched on and off at will.\cite{Mannetje2013} Drops that were large compared to the width of a gap between two electrodes preferentially aligned with the center of the gap. As usual in EW, this minimum of the electrostatic energy $E_{el} =-C_{tot}U^2/2$ corresponds to the maximum of the total capacitance between the drop and the electrodes. In this manner, 't Mannetje et al.\cite{TMannetje2014} demonstrated controlled capture, release, and steering of rolling drops on an inclined plane.
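In this geometric-overlap picture, the trapping energy scales with the area of the drop footprint above activated electrodes, $E_{el} \approx -\tfrac{1}{2}\,c\,U^2 A_{on}$ with $c$ the dielectric capacitance per unit area. A numerical sketch for a circular footprint on an alternating stripe pattern; the stripe layout and the parameter values (Parylene-C-like $\epsilon_d \approx 3.1$, $d = 2$ $\mu$m) are illustrative assumptions, not the full electrostatic model discussed later:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def overlap_energy(x_c, R, stripe_half_period, U, eps_d=3.1, d=2e-6, n=400):
    """Geometric-overlap estimate E ~ -(1/2) (eps0 eps_d / d) U^2 A_on, where
    A_on is the area of the circular drop footprint (center x_c, radius R)
    lying above activated electrode stripes. All lengths in meters."""
    # Grid over the bounding box of the drop footprint.
    xs = np.linspace(x_c - R, x_c + R, n)
    ys = np.linspace(-R, R, n)
    X, Y = np.meshgrid(xs, ys)
    inside = (X - x_c) ** 2 + Y ** 2 <= R ** 2
    # Activated stripes: alternating bands of width stripe_half_period.
    on = (np.floor(X / stripe_half_period).astype(int) % 2) == 0
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    area_on = np.count_nonzero(inside & on) * cell
    return -0.5 * (EPS0 * eps_d / d) * U ** 2 * area_on
```

Moving the drop center across a stripe edge changes $A_{on}$, and the gradient of this energy is the holding force of the trap.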
Later, de Ruiter et al.\cite{DeRuiter2014} extended the same principle to drops in microfluidic two-phase flow systems for a range of electrode geometries and applied a simple analytical model to calculate the electrical holding force based on the geometric overlap of the trapped drop and the activated electrodes. The idea of manipulating condensing drops by EW was first explored by Kim and Kaviany.\cite{Kim2007} Baratian et al.\cite{Baratian2018} later combined these ideas to study for the first time directly the condensation of water vapor onto EW-functionalized surfaces. For the specific case of parallel interdigitated electrodes aligned along the direction of gravity, they found that the condensation pattern is governed by an electrostatic energy landscape that depends on the size of the condensing drops. While the initial condensation occurred at random locations, subsequent growth by further condensation and EW-induced coalescence led to alignment of the drops along the edges of the electrodes. Later, once their diameter became comparable to the width of the electrodes, the drops accumulated at the centers of the gaps between adjacent electrodes. Analyzing the distribution of drop sizes and locations, they showed that the drops decorate the drop size-dependent minima of the (one-dimensional) electrostatic energy landscape perpendicular to the electrodes. EW-induced coalescence events also lead to faster drop growth.
In combination with the reduced contact angle hysteresis in EW with AC voltage,\cite{Li2008} drop shedding occurs on average for smaller drops, as compared to the reference case without EW.\cite{Baratian2018} According to classical observations in dropwise condensation, such a reduction of the critical shedding radius is accompanied by enhanced heat transfer.\cite{Rose2002} A series of follow-up studies confirmed these basic original observations regarding the evolution of the drop distribution for straight interdigitated electrodes.\cite{Yan2018,Yan2019a,Wikramanayake2019,Wikramanayake2020a,Hognadottir2020} Experiments with slightly more complex electrode geometries with zigzag-shaped edges resulted in preferential alignment of the drops not only perpendicular to but also along the direction of the electrodes, in qualitative agreement with expectations.\cite{Dey2018} That study also indirectly inferred an increased heat transfer from the volume of shed drops as extracted from video microscopy images. Overall, the experiments suggest that it should be possible to optimize the performance of EW-controlled condensation in heat transfer and other applications by systematically varying electrode geometries and/or excitation patterns. Since experimental brute-force optimization of electrode shapes would be very time consuming and costly, it is then essential to extend the existing electrostatic models to arbitrary electrode geometries, and to demonstrate their ability to capture the complex evolution of drop distribution patterns, so as to enable electrode optimization \emph{in silico} prior to experimental testing. The purpose of the present work is therefore twofold: the core of the work consists of a detailed comparison of the distribution of approximately 87 million drops, with sizes between 4.3 and 2000 $\mu$m, extracted using image analysis, with the predictions of a numerical model based on the drop size-dependent minimization of the electrostatic energy.
Experiments and calculations are carried out for the specific case of interdigitated electrodes with zigzag-shaped edges of variable length. The comparison reveals an impressive degree of agreement and correctly reproduces a series of subsequent transitions of preferred drop positions as a function of size. The previously proposed simple analytical model by 't Mannetje et al.\cite{Mannetje2013} reproduces the qualitative behavior but underestimates electrostatic energies and forces for small drops. Following the discussion of these results, we evaluate the present status of the field and discuss aspects that we consider essential for the development of EW-controlled condensation from a physical phenomenon towards a technologically relevant application. \section{Methods} \subsection{Experimental Aspects} The present condensation experiments were performed in the same homemade experimental setup (Figure \ref{fig:experimentalsetup}a) that was used in our previous studies.\cite{Baratian2018,Dey2018} The setup consists of a condensation chamber with two inlets at the bottom and an outlet through a fine grid of holes for vapor at the top side. The transparent sample is mounted vertically on one of the side walls and cooled from the back by cooling water (11.5 $^\circ$C) from a commercial cooler (Haake-F3-K, Thermo Fisher Scientific). The sample is back-illuminated with an LED pad (MB-BL305-RGB-24-Z, Metabright) and imaged from the opposite side through an indium-tin-oxide (ITO)-coated heated window with a camera (Point Grey, FL3-U3) through a 20x zoom lens (Z125D-CH12, EHD). The resulting field-of-view is ${\sim}10 \times 7.5$ mm (see movie in Supporting Information \ref{app:movie}). The temperature inside the chamber is measured by several thermistors (TCS651m AmsTECHNOLOGIES and Thorlabs TSP-TH) using a DAQ card with LabVIEW and the Thorlabs TSP01 application.
Thermistors are located at the vapor inlet, in the vapor close to the sample surface, at the vapor outlet, in the coolant behind the sample, in the heated water on the hot plate, and in the ambient air. Deionized water (Millipore Synergy UV, 18.2 M$\Omega\cdot$cm) is heated on a hot plate (RCT Basic, IKA labortechnik). Ambient air is blown through the water using an aquarium pump (0886-air-550R-plus, Sera) at a flow rate of 3.5 l/min, as monitored by a flow meter (AWM5101VN flowmeter, Honeywell). The condensation chamber is initially kept dry with a steady flow of dry nitrogen. At the start of an experiment, the humidified air is guided into the condensation chamber at the bottom of the chamber at a temperature of 42 $^\circ$C, and at a flow rate of 3.5 l/min. To ensure reproducibility, the subcooling of the surface is kept constant at $\sim$30.5 $^\circ$C throughout all experiments. The recorded images are analyzed using a home-built image analysis routine in MATLAB to evaluate the center locations and radii of all the condensing drops (Supporting Information \ref{app:imageanalysis}). The smallest drop size detectable using this method is $R_{min}\approx 4.3$ $\mu$m. The interdigitated zigzag electrodes are fabricated using photo-lithography on a glass substrate. The electrodes are subsequently coated with a 2 $\mu$m thick dielectric layer of Parylene C (PDS2010, SCS Labcoter) using chemical vapour deposition (CVD), and an ultra-thin top hydrophobic polymer coating (Cytop\textsuperscript{TM}, Asahi Glass Co., Ltd.) using a dip-coating procedure. For the experiments and simulations reported herein, we use interdigitated electrodes with zigzag-shaped edges (Figures \ref{fig:experimentalsetup}b-\ref{fig:experimentalsetup}c). As in ref. \cite{Dey2018}, the minimum and maximum width of the gap between adjacent electrodes are kept fixed at $w_{g,min}=50\mu$m and $w_{g,max}=250\mu$m, and three different lengths $\ell$ of 500, 1000 and 3000 $\mu$m are tested.
For ac-EW, an amplified electrical signal of rms amplitude between $U_{RMS} = 100-150$ V and a fixed frequency of $f = 1$ kHz is applied using a function generator (Agilent 33220A) and a voltage amplifier (Trek PZD700A). Young's contact angle at zero voltage is $\theta_Y\sim110^\circ$ and Lippmann's angle under EW ($U_{RMS} = 150$ V) is $\theta(U_{RMS})\sim90^\circ$.\cite{Baratian2018} The contact angle hysteresis under ac-EW ($U_{RMS} = 100-150$ V) is measured to be $\Delta \cos \theta = 0.06 \pm 0.01$. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figure1_combined.pdf} \caption{Experimental setup (not to scale). (a) Schematic of vapor generator, condensation chamber, cooled sample stage, and optical setup. (b) Top view of vertically oriented sample with zoomed view of unit cell of electrode pattern. (c) Cross-sectional view of a condensed drop on the substrate.} \label{fig:experimentalsetup} \end{figure} \subsection{Numerical Aspects} To explain our experimental observations, we developed a numerical model that allows us to calculate the electrostatic energy of a drop as a function of its size and the $(x,y)$ position of its center of mass within the unit cell of the electrode pattern (see zoomed view in Figure \ref{fig:experimentalsetup}b). To calculate this energy landscape $(E_{el}(x,y; R))$, we solve the Poisson equation for a three-dimensional computational domain consisting of the electrodes, the dielectric layer, a water drop, and the surrounding air. Since $\theta(150 \ \text{V})\sim90^\circ$, we represent the drop by a simple hemisphere with radius $R$ and with a fixed electrical conductivity ($10^{-5}$ S/m) that guarantees (for all practical purposes) complete screening of the electric field from the inside of the drop. Note that this hemispherical approximation neglects slight EW-induced distortions of the drop shape (see below).
Yet, earlier simulations showed that this merely leads to a minor underestimation of the electrostatic trapping strength for rather weakly deformed drops as in the present experiments.\cite{Cavalli2015} The calculation of $E_{el}(x,y;R)$ starts with the calculation of the distribution of the electrostatic potential $\phi(x,y,z)$ within a three-dimensional domain that encloses a single drop of radius $R$ at a fixed position as well as the adjacent electrodes. $\phi$ and the free charge density $\rho_e$ are related according to the Poisson equation as \begin{equation} \label{eq:poisson} \nabla^2 \phi = -\frac{\rho_e}{\epsilon_0 \epsilon}. \end{equation} Here $\epsilon_0$ is the permittivity of free space, and $\epsilon$ is the relative permittivity of the computational domain. $\rho_e$ can be related to the current density $\vec{J}$ using the charge conservation equation as \begin{equation} \label{eq:chargeconservation} \frac{\partial \rho_e}{\partial t} = - \nabla \cdot \vec{J} = \nabla \cdot \sigma \nabla \phi, \end{equation} where $\sigma$ is the electrical conductivity of the computational domain. Taking the time derivative of Equation \ref{eq:poisson}, and subsequently substituting Equation \ref{eq:chargeconservation} in it, we get a second order partial differential equation in $\phi$: \begin{equation} \label{eq:diffequation} \nabla^2 \dot{\phi} = -\nabla \cdot \left ( \frac{\sigma}{\epsilon_0 \epsilon}\nabla \phi \right ). \end{equation} Considering a sinusoidal electrical potential $\phi=\phi_0 \Re \left [ e^{i \omega t} \right ]$, and subsequently, considering its time derivative $\dot{\phi} = \phi_0 \Re \left [ i \omega e^{i \omega t} \right ]$, Equation \ref{eq:diffequation} can be rewritten as \begin{equation} \label{eq:poissonsolution} \nabla \cdot \left [ \left ( \epsilon_0 \epsilon - i \frac{\sigma}{\omega} \right ) \nabla \phi \right ] = 0. 
\end{equation} Equation \ref{eq:poissonsolution} is solved numerically in COMSOL Multiphysics (version 5.4) using the finite element method for a fixed voltage (amplitude) of $150$ V and frequency of $1$ kHz. The discretization (element) order of the modeling domains is varied between quadratic and fifth order in order to achieve the desired accuracy. Since the drop size in our experiments varies from a fraction of the width of a unit cell at early stages to drops covering several adjacent electrodes during later stages, the computational domain is chosen to be sufficiently large to cover the entire drop as well as the immediately adjacent electrodes. (In practice, we chose several domain sizes for different ranges of drop sizes in order to reduce computational efforts.) The geometries of electrodes and dielectric films are chosen according to the experiments. Dirichlet boundary conditions (fixed electrostatic potential) are imposed on the electrode surfaces; Neumann conditions (zero electric field in normal direction) are applied on all other boundaries. Supporting Information \ref{app:numerical_geometry} shows a typical view of a computational domain along with the resulting potential distribution for a specific drop configuration. As mentioned above, these calculations were repeated for 200 values of the drop size $R$ between 0 and 900 $\mu$m, and for each drop size at $30 \times 30$ (large $R$) or $30 \times 60$ (small $R$) equally spaced locations within the unit cell. (For symmetry reasons, it is sufficient to vary the drop positions only within half of a unit cell; see grey shaded area in Supporting Information \ref{app:numerical_geometry}.)
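The role of the complex permittivity $\epsilon_0\epsilon - i\sigma/\omega$ in Equation \ref{eq:poissonsolution} can be illustrated with a minimal one-dimensional finite-difference sketch in Python. This is a stand-in, not the actual three-dimensional COMSOL model: it solves the same frequency-domain equation for an illustrative two-layer stack consisting of the Parylene C film (permittivity $\epsilon_r\approx3.1$ is an assumed value) and a thin water slab standing in for the drop, with the slab thickness chosen purely for illustration.

```python
import numpy as np

eps0 = 8.854e-12               # vacuum permittivity (F/m)
U, f = 150.0, 1e3              # applied voltage (V) and frequency (Hz)
omega = 2 * np.pi * f

# (thickness m, relative permittivity, conductivity S/m); illustrative values
layers = [(2e-6, 3.1, 0.0),    # Parylene C dielectric layer (eps_r assumed)
          (20e-6, 80.0, 1e-5)] # thin water slab standing in for the drop

# complex permittivity per grid cell: eps0*eps - i*sigma/omega (Eq. 4)
n_per_layer = 200
cells = []
for d, er, sig in layers:
    cells += [(d / n_per_layer, eps0 * er - 1j * sig / omega)] * n_per_layer

g = np.array([e / h for h, e in cells])  # cell "conductances" eps_c/h
N = len(cells)                           # nodes 0..N; phi_0 = U, phi_N = 0
A = np.zeros((N - 1, N - 1), dtype=complex)
b = np.zeros(N - 1, dtype=complex)
for i in range(1, N):                    # flux balance at interior node i
    A[i - 1, i - 1] = -(g[i - 1] + g[i])
    if i > 1:
        A[i - 1, i - 2] = g[i - 1]
    if i < N - 1:
        A[i - 1, i] = g[i]
b[0] = -g[0] * U                         # boundary value phi_0 = U folded in
phi = np.linalg.solve(A, b)

# potential at the dielectric/water interface (node n_per_layer)
phi_int = phi[n_per_layer - 1]

# analytic series-impedance divider for the same two-layer stack
z_d = 2e-6 / (eps0 * 3.1)
z_w = 20e-6 / (eps0 * 80.0 - 1j * 1e-5 / omega)
phi_ana = U * z_w / (z_d + z_w)
```

For these parameters the numerical solution coincides with the analytic series-impedance divider, and most of the applied voltage drops across the dielectric layer, i.e., the low-frequency field is largely expelled from the weakly conductive liquid.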
After numerical evaluation of $\phi(x,y,z)$ for all allowed drop sizes and $(x,y)$-location within the unit cell, the total electrostatic energy of the entire system is calculated as \begin{equation} \label{eq:Eel} E_{el}(x,y;R) = - \int_v \frac{1}{2} \vec{E} \cdot \epsilon_0 \epsilon \vec{E} dv = -\frac{1}{2} \int_v \epsilon_0 \epsilon \left ( \left | \frac{\partial \phi}{\partial x} \right |^2 + \left | \frac{\partial \phi}{\partial y} \right |^2 + \left | \frac{\partial \phi}{\partial z} \right |^2 \right ) dv, \end{equation} where $\vec{E} = - \nabla \phi$ is the electric field, and the integration represents the volume integral over the entire computational domain. In the representation of the electrostatic energy landscapes later on (Figure \ref{fig:model}), we make use of symmetries and periodicities to extend the energy landscapes beyond a single unit cell for a more intuitive representation. Finally, note that Equation \ref{eq:poissonsolution} contains both dielectric and purely conductive contributions. However, for the conductivity of pure water and for the applied (low) frequency, the ionic current dominates the displacement current towards screening the electric field (also see \cite{Baratian2018}). \section{Results} \subsection{Evolution of breath figures} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figure2_combined.pdf} \caption{Top view of condensed droplets on a vertically mounted substrate ($l=1000\mu$m). (a-c) Full field of view for $t=159, 369, 768$s illustrating alignment and growth of condensing drops. (d-i) Zoomed view of $\sim$3 unit cells for times as indicated. Note the vertical and horizontal shift of the center of the drops with increasing size. 
(Transparent ITO electrodes are superimposed in red.)} \label{fig:breathfigures} \end{figure} As apparent at first glance, the condensate drops form a pattern (breath figure) with well-defined periodicities along both the lateral ($x$-) and vertical ($y$-) directions upon condensation onto surfaces with zigzag interdigitated electrodes (Figures \ref{fig:breathfigures}a-\ref{fig:breathfigures}c). This is in sharp contrast to breath figures with straight interdigitated electrodes under ac-EW, where no periodicity along the $y$-direction was found.\cite{Baratian2018} While these observations have been reported qualitatively before,\cite{Dey2018} a closer look at the representative Figures \ref{fig:breathfigures}d-\ref{fig:breathfigures}i reveals a number of additional details: Initially, the small condensate drops are essentially randomly distributed; however, as the drops grow and begin to coalesce, they align parallel to the electrode edges, with a slight preferential displacement towards the gap centers (Figures \ref{fig:breathfigures}d-\ref{fig:breathfigures}e). Simultaneously, the drops closer to a gap minimum (i.e. at $(x=w/2; y=0)$ in Figure \ref{fig:experimentalsetup}b) are pulled down towards that minimum and typically grow on their way by coalescence with other drops (Figures \ref{fig:breathfigures}d-\ref{fig:breathfigures}e). As we will see below, drops at these `gap minima' are trapped in electrostatic energy minima; as these continue to grow, their lower edge remains close to $y=0$, whereas their center gradually moves upwards (Figures \ref{fig:breathfigures}d-\ref{fig:breathfigures}g). These trapped, growing condensate drops dominate the visual appearance of the breath figures on a macroscopic scale (Figures \ref{fig:breathfigures}a-\ref{fig:breathfigures}b).
Interestingly, upon reaching some critical size, the center of mass of these trapped drops suddenly translates horizontally from being centered on the gap between two adjacent electrodes to being centered on an electrode (see transition cross marker to diamond marker in Figures \ref{fig:breathfigures}g-\ref{fig:breathfigures}h). Upon growing further, the center of the trapped drops shifts slightly downwards (Figure \ref{fig:breathfigures}i). Eventually, at much larger radii, drops shed under the influence of gravity when the individual drop weight exceeds the electrical trapping force and contact angle hysteresis (Figure \ref{fig:breathfigures}c). For a statistical analysis of the distribution of condensate drops, we project the drop centroid locations of all drops within the field of view onto a single unit cell of the electrode pattern (Figure \ref{fig:experimentalsetup}b) using a mapping procedure that takes into account the distortion of the optical imaging system (see Supporting Information \ref{app:imageanalysis}). Figure \ref{fig:spatialdistributions} shows the resulting spatial distribution of the drops within the unit cell for $\ell = 1000$ $\mu$m binned into ranges of $R=5-15, 40-60, 65-85$, and $90-120$ $\mu$m, where each data point represents the location of a drop center at a particular moment in time. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figure3_combined.pdf} \caption{Center locations of all drops projected into single unit cell and binned to size ranges as indicated ($l=1000\mu$m). Drops with $R>0.5w_g(y)$ are shown in red.
$w_g(y)$ is the $y$-dependent gap width ranging from $w_g(0)=w_{g,min}$ to $w_g(\ell)=w_{g,max}$.} \label{fig:spatialdistributions} \end{figure} While the distribution of the smallest drop sizes (5-15 $\mu$m; Figure \ref{fig:spatialdistributions}a) is almost random, somewhat larger drops (40-60 $\mu$m) preferentially align along the inclined edges of the electrodes (Figure \ref{fig:spatialdistributions}b). As the drops coalesce and grow further, they gradually move from the electrode edges towards the gap center (Figures \ref{fig:spatialdistributions}b-\ref{fig:spatialdistributions}d). Drops with a diameter that exceeds the local width $w_g(y)$ of the gap, i.e. drops with a critical size $R>0.5w_g(y)$ (red data points), are preferentially found in the center of the gap rather than along the electrode edges (Figures \ref{fig:spatialdistributions}c-\ref{fig:spatialdistributions}d), giving rise to a peculiar bi-modal distribution of the drops (Figure \ref{fig:spatialdistributions}c). This bi-modal spatial distribution of drops is unique to the converging electrode geometry. In contrast, for straight electrode edges with a constant gap width, the drop distribution is always uni-modal (i.e. the drops of equal size align either on both sides of the gap center or along the gap center).\cite{Baratian2018} The larger the drop size under consideration, the larger the fraction of drops with a width exceeding the local gap width, i.e. with $R>0.5w_g(y)$. Hence, the largest drops are again largely centered on the gap (red dots in Figure \ref{fig:spatialdistributions}d), concomitant with a depletion of drops from the electrodes including their edges. The evolution of the spatial distribution of the condensate drops (Figures \ref{fig:spatialdistributions}b-\ref{fig:spatialdistributions}d) with increasing drop size is thus reminiscent of a `zipper-like' effect.
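The statistical projection used above can be sketched as a simple periodic folding of drop-center coordinates. The snippet below is a hypothetical minimal version, assuming an electrode pitch $w=300\ \mu$m along $x$ and mirror symmetry of the zigzag pattern about $y=0$ and $y=\ell$; the actual procedure additionally corrects for the distortion of the imaging system (Supporting Information \ref{app:imageanalysis}).

```python
import numpy as np

def fold_to_unit_cell(x, y, w=300e-6, ell=1000e-6):
    """Fold drop-centre coordinates (m) into one unit cell of the pattern.

    Assumptions (illustrative): the pattern is periodic with pitch w
    along x, and mirror-symmetric about y = 0 and y = ell, so all
    points map into 0 <= x < w, 0 <= y <= ell.
    """
    xf = np.mod(x, w)                       # periodic fold along x
    yf = np.mod(y, 2 * ell)                 # fold into one full period in y
    yf = np.where(yf > ell, 2 * ell - yf, yf)  # mirror the upper half back
    return xf, yf
```

A point at $(x,y)=(310,1005)\ \mu$m, for example, folds to $(10,995)\ \mu$m, and a point below the gap minimum at $y=-20\ \mu$m maps onto its mirror image at $y=20\ \mu$m.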
The cluster of data points in the vicinity of the gap minimum always represents electrically trapped droplets (Figures \ref{fig:spatialdistributions}b-\ref{fig:spatialdistributions}d). Furthermore, Figures \ref{fig:spatialdistributions}c-\ref{fig:spatialdistributions}d clearly show that the strong electrical force sweeps up the drops within a characteristic distance ${\sim}2R$ above the gap minimum (note the relative lack of droplets over this region), feeding the bigger trapped droplet, which continues to grow upward. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{4a4b_alignment.pdf} \caption{Horizontal (a) and vertical (b) distribution of experimental drop centers (small dots) within a unit cell for variable drop radius $R$. (a) Horizontal dotted lines indicate critical radii for first central alignment (0.5$w_{g,max}$) and subsequent transitions ($R_1...R_4$) between preferred alignment on the gap center ($x=150\mu$m) and the electrode center ($x=0$ or $300\mu$m), as extracted from Figure \ref{fig:modeltransitions} (data for $l=1000\mu$m). Bright and light shaded regions indicate the minimum and maximum gap widths $w_{g,min}$ and $w_{g,max}$. Inset: illustration of horizontal transitions of drop center upon growth. (b) Normalized vertical position of drop center for all electrode sizes (green: $l=500\mu$m; blue: $l=1000\mu$m; red: $l=3000\mu$m) vs. normalized drop size. Large symbols: vertical position of electrostatic energy minimum vs. drop size extracted from numerical calculations (see Figure \ref{fig:model}): green squares: $l=500\mu$m; blue triangles: $l=1000\mu$m. Solid line: geometric approximation $y=R$.
Inset: illustration of geometric shift of drop center for bottom of drop pinned at minimum gap.} \label{fig:statisticalalignment} \end{figure} Another interesting series of transitions is revealed by plotting the correlation between the average drop size $R$ and the lateral position of the drop center of mass (Figure \ref{fig:statisticalalignment}a). For $R>0.5w_{g,max}$, most drops are preferentially aligned along the gap center ($x=150$ $\mu$m). However, this gap-centered alignment of the condensate drops does not persist as the drops grow further. At another critical size $R_1$ (${\sim}320$ $\mu$m for the present $\ell=1000$ $\mu$m electrode), drops on average undergo a transition from being centered on the middle of the gaps to being centered on the middle of the adjacent electrodes, as already described for the specific individual drop in Figures \ref{fig:breathfigures}g-\ref{fig:breathfigures}h. Beyond that, a series of additional transitions back and forth between the centers of gaps and electrodes is seen at critical radii $R_2, R_3, R_4$, yet increasingly faint due to the decreasing number of larger drops. The positions of the dashed horizontal lines emerge from the numerical model (see below). The same series of transitions is also observed for the other electrode geometries with $\ell=500$ $\mu$m and $\ell=3000$ $\mu$m (see Supporting Information \ref{app:horizontaltransitions_extra}). In order to further visualize the spatial evolution of the trapped drops, Figure \ref{fig:statisticalalignment}b shows the projected vertical ($y$-) locations of all drops normalized by the electrode length $(y / \ell)$ versus $R/\ell$ for all three electrode designs ($\ell=500, 1000, 3000$ $\mu$m). The tail developing from the gap minimum $(y/\ell=0)$ represents the vertical locations of the trapped drops.
Initially, the vertical locations of these trapped drops satisfy $y\approx R$ (solid black line in Figure \ref{fig:statisticalalignment}b), as previously observed from Figures \ref{fig:breathfigures} and \ref{fig:spatialdistributions}. Although for the $\ell=3000$ $\mu$m electrode (red data points) the trapped drops stay aligned at $y\approx R$ until gravity-driven shedding ($R_{shed}/\ell \approx 0.3 $), for the $\ell=1000$ $\mu$m (blue) and $\ell=500$ $\mu$m (green) electrodes the drops subsequently deviate from the $y=R$ trend as these continue to grow by coalescence (Figure \ref{fig:statisticalalignment}b). For the $\ell=1000$ $\mu$m electrode, this deviation begins immediately after the first lateral transition ($R_1/\ell \approx 0.32$), while for the $\ell=500$ $\mu$m electrode it occurs well before the first lateral transition, at $R/\ell \approx 0.5$ (Figure \ref{fig:statisticalalignment}b). Interestingly, once the trapped drops grow bigger, they realign again following $y\approx R$ for both the $\ell=1000$ $\mu$m and $\ell=500$ $\mu$m electrodes above $R/\ell \approx 0.6$ (Figure \ref{fig:statisticalalignment}b). Note that for both the $\ell=1000$ $\mu$m and $\ell=500$ $\mu$m electrodes, the deviation of the trapped drops from the $y = R$ trend occurs well below the critical shedding radius ($R_{shed}\approx1$ mm); hence, gravity is not the cause of the deviation. \subsection{Electrostatic energy landscape controls the evolution of breath figures} \begin{figure}[!htbp] \centering \includegraphics[width=1\linewidth]{figure6_combined_20200713.pdf} \caption{False color representation of 2D energy landscape (blue=low; red=high energy) vs. $(x,y)$ position of drop center for various drop sizes as indicated. Top row (a)-(e): $l=1000\mu$m; bottom row (f)-(j): $l=500\mu$m.
Grey lines: cross sections through electrostatic energy landscapes along dashed lines; crosses and black circles: centers and edges of drops at minimum energy configuration. (Color scales and cross sections are individually rescaled for optimum visual contrast; data in (a) are somewhat compromised by numerical noise.)} \label{fig:model} \end{figure} The details of the drop distributions described above can be understood by considering the 2D electrostatic energy landscape ($E_{el}(x,y)$) emerging from our numerical calculations. Figure \ref{fig:model} illustrates the evolution of these energy landscapes for electrodes with $\ell=1000$ $\mu$m (Figures \ref{fig:model}a-\ref{fig:model}e) and $\ell=500$ $\mu$m (Figures \ref{fig:model}f-\ref{fig:model}j) for a series of drop sizes as indicated in the figure. As noted above, the energy landscape is by construction mirror symmetric about the center of the gap. For the smallest drops, the energy landscape is rather flat with shallow valleys along the electrode edges that become deeper upon approaching the gap minimum at $y=0$ (Figure \ref{fig:model}a). For larger drops (Figures \ref{fig:model}b, \ref{fig:model}c), the two separate minima along the electrode edges first merge into one minimum centered at $x=w/2, y\approx R$, leading to a coexistence of a single minimum close to the gap center in the lower parts of the unit cell and two valleys in the upper parts. Such variation in the $E_{el}(x,y)$ landscape is consistent with the drop distribution shown in Figures \ref{fig:spatialdistributions}b-\ref{fig:spatialdistributions}d and its `zipper-like' evolution with increasing drop size. The $y-$coordinate of the central minimum gradually shifts towards larger $y$ for sizes comparable to those of the trapped drops, consistent with the solid $y=R$ line in Figure \ref{fig:statisticalalignment}b.
As the drop size approaches 300 $\mu$m for $\ell=1000$ $\mu$m, the electrostatic energy minimum eventually moves from the gap center to the electrode center (Figures \ref{fig:model}d-\ref{fig:model}e). We can predict the drop radius for this and subsequent lateral transitions by calculating $E_{el}$ for a drop located either at the gap center or at the electrode center, for a range of drop radii (Figure \ref{fig:modeltransitions}). For small drop sizes ($R<320 \ \mu$m), the total electrostatic energy is smaller when the drop is located at the gap center (black solid line in Figure \ref{fig:modeltransitions}) than when located at the electrode center (red solid line). As the drop size increases further, the location of the lowest electrostatic energy moves alternately between the electrode and the gap centers (compare the relative variations between the black and red solid lines in Figure \ref{fig:modeltransitions}). The characteristic radii at which these transitions occur, $R_1$, $R_2$, ..., $R_n$ in Figure \ref{fig:modeltransitions}, are shown as horizontal dashed lines in Figure \ref{fig:statisticalalignment}a, and provide a good description of the transitions observed experimentally. Note that such horizontal transitions of the center of mass as a function of the drop size are well-known for surfaces with parallel stripes of alternating wettability originating from chemical patterning.\cite{Brandon1997} Tracing the position of the global energy minimum along the $y-$direction reveals that the drop center indeed moves upward with increasing drop size, following slightly below the line $y=R$, as shown in Figure \ref{fig:statisticalalignment}b. The numerical results (squares and triangles in Figure \ref{fig:statisticalalignment}b) reproduce the experimental observations with great accuracy. In some cases, correlations with lateral transitions of the drop position can also be observed.
For the shorter unit cell ($\ell=500 \ \mu$m), the evolution of the energy landscape is qualitatively similar. Nevertheless, the two situations cannot be mapped directly onto each other. For instance, unlike for the long electrodes, we find for $\ell=500 \ \mu$m that the energy minimum in the gap center splits up into two distinct local minima as the drop diameter becomes comparable to $\ell$, between $R\sim250$ and ${\sim}300 \ \mu$m (Figures \ref{fig:model}g-\ref{fig:model}i). This leads to a distinct transition of the drop position along the $y-$direction for $R=280 \rightarrow 300 \ \mu$m (Figures \ref{fig:model}h-\ref{fig:model}i), while the drop remains laterally centered on the gap. This transition is indeed observed in the experiments with short ($\ell=500 \ \mu$m) electrodes (Supporting Information \ref{app:horizontaltransitions_extra}a) but not for $\ell=1000 \ \mu$m, see Figure \ref{fig:statisticalalignment}a. Nevertheless, the slight downward shift of the center-of-mass position for $R/\ell \sim 0.4...0.5$ in Figure \ref{fig:statisticalalignment}b is also correctly reproduced for electrodes of both short and intermediate length. As an alternative to the full numerical calculations, we can also evaluate the energy landscapes by approximating the electrostatic energy using the simple geometric approximation proposed by 't Mannetje et al.\cite{Mannetje2013,TMannetje2014} This analytical calculation approximates the condensate drop-dielectric system as an electrical circuit consisting of two parallel plate capacitors in series, formed by the overlap between the conducting drop and the electrodes. The overall capacitance of the system is approximated as $C(x,y) = \epsilon_0 \epsilon_d / d \cdot A_{cap}$, where $A_{cap}= A_1 A_2 / (A_1 + A_2)$, and $A_1$ and $A_2$ are the spatially varying overlap areas between the drop footprint and the two electrodes (see inset in Figure \ref{fig:modeltransitions}).
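This overlap-capacitance approximation can be sketched numerically. The snippet below is a simplified stand-in that assumes a circular drop footprint over a locally straight gap of width $w_g$ between two semi-infinite electrodes (so it cannot reproduce the alternating transitions, which require the full periodic zigzag geometry); it evaluates $A_1$, $A_2$, the series capacitance, and the standard constant-voltage capacitor energy $-CU^2/2$. The dielectric permittivity $\epsilon_d\approx3.1$ is an assumed value.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def disk_halfplane_area(R, d):
    """Overlap area of a disk of radius R with a half-plane whose boundary
    lies at signed distance d from the disk centre (d < 0: centre inside)."""
    d = np.clip(d, -R, R)
    return R**2 * np.arccos(d / R) - d * np.sqrt(R**2 - d**2)

def capacitor_energy(x0, R, w_g=50e-6, t_d=2e-6, eps_d=3.1, U=150.0):
    """'t Mannetje-style estimate: two overlap capacitors in series.

    x0 is the lateral drop-centre offset from the gap centre; the straight
    two-electrode geometry and eps_d are illustrative assumptions.
    """
    A1 = disk_halfplane_area(R, w_g / 2 + x0)  # electrode at x <= -w_g/2
    A2 = disk_halfplane_area(R, w_g / 2 - x0)  # electrode at x >= +w_g/2
    A_cap = A1 * A2 / (A1 + A2) if (A1 + A2) > 0 else 0.0
    C = EPS0 * eps_d / t_d * A_cap             # series capacitance
    return -0.5 * C * U**2                     # constant-voltage energy
```

For, e.g., $R=100\ \mu$m, a gap-centered drop ($x_0=0$) has a finite negative trapping energy, while a drop shifted fully onto one electrode loses all overlap with the other electrode and its energy vanishes, consistent with the preference of moderately sized drops for gap-centered alignment.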
The associated electrostatic energy of the system upon application of an electrical voltage $(U)$ can be written as $E_{el,cap}=-C(x,y) U^2 / 2$.\cite{Mannetje2013} The dashed lines in Figure \ref{fig:modeltransitions} show the electrostatic energy minimum in this approximation for a drop centered on the gap (black) and on the electrode (red). As in the case of the full numerical model, for small drops (i.e. $R<\sim320 \ \mu$m) it is more favorable for the drop to be centered on the gap, whereas for increasing $R$ there is a succession of transitions between preferred alignment on the electrode center and the gap center. While the energies deviate substantially for the smallest drop sizes (for which the overlap area with one of the electrodes, and hence the total energy, can vanish), the agreement improves for increasing drop size, and the predictions for the various subsequent transitions of the drop positions ($R_2, R_3, R_4$) become remarkably good. While some of the aspects described above are very specific to the present electrode configuration, the overall excellent agreement demonstrates the ability of the numerical model to reproduce the experiments, including even subtle aspects such as the transitions between various competing local minima of the overall energy landscape. For not too small droplets, the simple analytical model based on geometric overlap also provides reasonable predictions of the transitions between various competing drop configurations. \begin{figure}[!htbp] \centering \includegraphics[width=1\linewidth]{figure7.pdf} \caption{Normalized excess electrostatic energy vs. drop size for drops centered on the gap (black) vs. centered on the electrode (red). Solid: full numerical model; dashed: analytical model based on drop-electrode overlap areas $A_1, A_2$. $R_1, ... R_4$: critical radii of transitions from preferential gap-center to electrode-center alignment.
Inset: illustration of competing drop positions.} \label{fig:modeltransitions} \end{figure} \section{Discussion and Perspectives} The results presented here clearly demonstrate the flexibility of electric fields in controlling condensation patterns on solid surfaces with submerged co-planar electrodes. While individual drops are obviously subject to their specific local environment, averaging over large ensembles shows that condensed drops decorate the local minima of electrostatic energy landscapes of remarkable complexity. Consequently, the drops undergo gradual translations as well as discrete transitions as local minima shift or become unstable with increasing drop size. Apparently, the random character of coalescence events with neighboring drops, in combination with the enhanced mobility of drops caused by the reduced contact angle hysteresis in EW with AC voltage,\cite{Li2008} provides sufficient energy for the drops to explore the entire energy landscape, despite the fact that energetic barriers between adjacent minima are obviously substantially larger than thermal energies. While not exploited here explicitly, compared to passive chemical or topographic patterning, EW-functionalization offers the advantage of switchability in addition to the enhanced drop mobility thanks to the reduced effective hysteresis. While drop positions are well-defined and controllable beyond a certain critical size, the random distribution of small drops in Figure \ref{fig:spatialdistributions}a confirms the earlier observation that the nucleation sites for drop condensation seem to be unaffected: no correlation can be observed between the position of the smallest drops and the location of the electrodes. This arises from the fact that the forming liquid nuclei only experience a dielectrophoretic polarization force. Upon nucleation, this electrostatic force competes with surface forces caused by random heterogeneities on the surface.
While the electric force scales with the (very small) volume of the critical nucleus, i.e. $\propto R^3$, the latter scale with the surface area, i.e. $\propto R^2$, and therefore dominate. A control of nucleation rates and locations is therefore possible only if local electric fields and field gradients can be substantially increased, e.g. by generating miniaturized electrode patterns on the nanoscale. From an applied perspective, the key question in both fog harvesting and enhanced heat transfer is how the removal of drops from the surface can be optimized to condense as much liquid as possible. Obviously, this requires a somewhat broader perspective of the entire system than only the control of drop distribution patterns. While the results presented above clearly show that suitable electrode patterns make it possible to control drop positions and to promote faster growth by inducing lateral and vertical translations and coalescence, the same strong electrostatic forces also generate deep energetic traps that can hold back even large drops, as shown in Figure \ref{fig:breathfigures}, and thereby hamper efficient drop removal. To circumvent this problem, the electrically induced wettability patterns should be applied with some form of time-dependent actuation. The easiest approach is to periodically activate the electrodes to induce drop motion and growth, and to subsequently deactivate them such that drops exceeding the relevant critical size can spontaneously shed off the surface under the influence of gravity. While some success of this strategy has been demonstrated,\cite{Dey2018,Wikramanayake2020} the overall performance was not impressive.
In part, this is probably caused by the fact that the pinning forces increase as the EW-induced reduction of the contact angle hysteresis ceases upon switching off the AC voltage; hence the critical shedding radius increases and the shedding frequency decreases.\cite{Mannetje2011a} Alternatively, one could make use of active transport strategies borrowed from EW-based lab-on-a-chip systems, where drops are transported towards activated electrodes.\cite{MugeleBook} Given the nature of drop condensation, it is obviously not desirable to bring the condensing drops into direct contact with electrodes on top of the functionalized surface. Therefore, structured electrodes, possibly in two layers, should be embedded into the substrate and actuated in such a manner that they lead to a conveyor belt-like directed motion. Such strategies are rather straightforward to implement for surfaces that are flat or covered by some `moderate' degree of topographic pattern. For intrinsically three-dimensional structures such as meshes that are frequently used for fog harvesting, the implementation of any form of EW-enhanced condensation and drop removal is much more difficult to realize - notwithstanding initial demonstrations with crossing fibers of switchable wettability.\cite{Eral2011} \\ While the effect of EW on the drop distribution patterns is rather striking, the reported consequences for the total condensation rate and the resulting heat transfer are far less impressive.\cite{Baratian2018,Dey2018,Wikramanayake2019} Applying standard models of dropwise heat transfer,\cite{Zhao2018a} Wikramanayake et al.
pointed out that the majority of previous EW experiments were not very enlightening because they were carried out using water vapor in moist air.\cite{Wikramanayake2019} Under such conditions, it is well-known in the heat transfer community that the overall heat transfer resistance is dominated by the ambient air, which acts as a non-condensable background gas and introduces a diffusive boundary layer at the solid-vapor interface.\cite{Minkowycz1966} The expected beneficial effects of EW-enhanced drop removal on heat transfer are thus overshadowed by mass transport limitations across that boundary layer, and thus the actual potential of the EW-induced enhancement does not become evident. To demonstrate and exploit the benefits of EW, condensation setups should thus be designed in such a manner that drop removal is indeed the dominating factor for the overall heat transfer coefficient. This implies in particular preferential operation in pure vapor. \\ A final essential issue for any practical application of the effects described above is the stability of the EW-functionalized surfaces over extended periods of time. Like many other applications, both fog collection and heat transfer require long continuous operation times of the devices, ideally of the order of years. While proof-of-principle experiments in laboratories on short time scales are often relatively easy to achieve, maintaining cleanliness and hydrophobicity of coatings in the presence of complex and reactive fluids such as condensing water vapor is extremely demanding from a materials perspective.
Recent experiments demonstrated that fluoropolymer surfaces commonly used in EW spontaneously charge up upon contact with water over several hours.\cite{Banpurkar2017} In the presence of electric fields, this effect is even more pronounced and can even be exploited to generate well-controlled charge densities and charge patterns.\cite{Wu2020,Wu2020a} Therefore, the development of reliable hydrophobic fluoropolymer coatings that remain stable throughout the lifetime of various types of devices has been a long-standing challenge in applied EW research. One recommendation from these investigations has been to avoid water as an operating fluid whenever possible.\cite{Raj2009} While this is an interesting option for heat transfer devices, it is obviously not possible for fog harvesting applications. In such cases, novel materials with improved stability\cite{Paxson2014} will be required to achieve the necessary stability of operation. Nevertheless, it is also worth pointing out that extended EW-enhanced drop condensation tests over a period of 40 hours displayed - after some initial degradation within the first 1-2 hours - rather stable operation even for conventional fluoropolymer surfaces (Supporting Information \ref{app:surfacedegradation}). \section{Conclusions} Co-planar electrodes embedded into electrowetting-functionalized surfaces make it possible to control the distribution of drops of condensing water vapor. For interdigitated electrodes with zigzag-shaped edges, drops undergo a series of transitions between different preferred locations as they grow in size upon further condensation and coalescence. Comparison to numerical calculations shows excellent agreement with the experiments and demonstrates that the drops decorate, on average, the minima of the drop size-dependent electrostatic energy landscape, including subtle transitions between preferred locations.
This agreement demonstrates that the existing numerical approach provides a solid basis for future, more sophisticated models that will include time-dependent electrical actuation schemes. Such models can eventually be used by engineers for numerical optimization of electrode designs and operation modes of future EW-enhanced drop condensation systems. A critical assessment of bottlenecks for such applications indicates that mass transfer limitations and - in particular - the development of combinations of long-term stable condenser surface materials and fluids will be essential for the technological success of the approach. \begin{acknowledgments} We thank D. Wijnperlé for sample preparation in the cleanroom and D. Baratian for his contributions in earlier phases of the experiment. We acknowledge financial support by the Dutch Organization for Scientific Research (NWO) within the VICI program (Grant No. 11380). \end{acknowledgments}
\section{Introduction} The so-called neutral component of the interstellar medium, despite being shielded from EUV (13.6 to 124 eV) stellar photons able to ionize hydrogen, retains a small ionization fraction ($x(\mathrm{e}^-)=n(\mathrm{e}^-)/n_\mathrm{H}$). The ionization mechanism depends on the type of region: FUV (6 to 13.6 eV) photons ionizing C and S in the low $A_\mathrm{V}$ surface layer of clouds, or cosmic rays, X rays, shocks, etc., in the densest parts. As a result, ionization fractions range from $\sim10^{-4}$ in low $A_\mathrm{V}$ cloud surfaces down to $\sim10^{-9}$ in dense cores \citep[e.g.][]{Goicoechea2009,Draine2011}. This ionization fraction controls several key aspects of neutral interstellar clouds. It determines the degree of coupling of the gas to the magnetic field: the neutrals, accounting for most of the mass of the fluid, are only indirectly sensitive to the presence of a magnetic field through their friction with the ions that remain coupled to the field, a process called ion-neutral friction. This coupling can provide a significant magnetic support against gravitational collapse of dense cores despite the low ionization fraction values found there, between 10$^{-9}$ and 10$^{-7}$ \citep{Mestel1956,Mouschovias1976,Basu1994}. The ionization fraction also controls the onset of the magneto-rotational instability \citep{Balbus1991}, the main mechanism of angular momentum transport in accretion disks. Moreover, the gas phase chemistry in dense molecular clouds is to a large extent driven by fast ion-neutral reactions \citep{Herbst1973,Oppenheimer1974}. The build-up of chemical complexity thus depends on the ionization fraction of the medium.
Finally, some common molecular tracers with high dipole moments, such as HCN and HCO$^+$, have high inelastic collision cross sections with electrons, and their excitation can be significantly affected by electron collisions for ionization fractions $\gtrsim 10^{-5}$ \citep{Black1991,Liszt2012,Liszt2016,Goldsmith2017}. This makes the interpretation of their emission (e.g. to estimate gas density) sensitive to our knowledge of the local ionization fraction. Direct observational estimation of the ionization fraction in neutral clouds is difficult, except in very specific regions (e.g. \citealt{Goicoechea2009,Cuadrado2019}, at the dissociation front in a photodissociation region). Direct estimation of the total charge accounted for by observable molecular ions in molecular clouds only yields a loose lower limit (e.g. \citealt{Miettinen2011}). Indirect methods based on tracers that are chemically sensitive to the ionization fraction have thus been commonly used. These methods have mostly involved measuring the deuterium fractionation through abundance ratios involving simple molecular ions like DCO$^+$/HCO$^+$ \citep{Guelin1977,Guelin1982,Dalgarno1984,Caselli1998} or N$_2$D$^+$/N$_2$H$^+$ which is less affected by depletion \citep{Caselli2002a}. The idea is that the deuterium enrichment (defined as H$_2$D$^+$/H$_3^+$), initiated by the exchange reaction \begin{equation} \mathrm{H}_3^+ + \mathrm{HD} \rightleftharpoons \mathrm{H}_2\mathrm{D}^+ + \mathrm{H}_2 \label{eq:deut_reac} \end{equation} at low temperature, is limited by electronic dissociative recombination of H$_2$D$^+$, and that the resulting ratio is transmitted (with a known prefactor) to the deuteration fraction of other molecules such as HCO$^+$.
Using such tracers, the ionization fraction is deduced either by using approximate analytical formulae representing simplified networks \citep{Caselli1998,Miettinen2011,Caselli2002a}, or by adjusting an astrochemical model including a full chemical network to the observations, using stationary chemical models \citep{Williams1998,Bergin1999,Caselli2002b,Fuente2016}, time-dependent ones \citep{Maret2007,Shingledecker2016}, or PDR models \citep{Goicoechea2009}. Despite the variety of determination methods, using different deuterated molecules, only very few works have proposed using other tracers than deuterated species. For instance, \citet{Flower2007} proposed the C$_6$H$^-$/C$_6$H ratio as a tracer of the ionization fraction, and \citet{Fosse2001} have investigated the relationship between the cyclic-to-linear ratio of C$_3$H$_2$ and the ionization fraction. Deuteration-based approaches however suffer from several limitations due to the fact that they depend on other physical or chemical parameters that need to be determined independently. The initial deuteration reaction (Eq.~\ref{eq:deut_reac}) is sensitive not only to the gas temperature but also to the essentially unmeasurable ortho-to-para ratio of H$_2$ \citep{Pagani1992,Pagani2011,Shingledecker2016}. Indeed, the endothermicity of the reaction in the backward direction (192 K) is very close to the $J=1$ to $J=0$ energy difference of H$_2$ (170.5 K). Then, even a small fraction of o-H$_2$ ($J=1$) contributes to H$_2$D$^+$ destruction (with a reduced endothermicity of $\simeq$20 K) and restricts the deuteration process. In addition, ratios such as DCO$^+$/HCO$^+$ are linked to H$_2$D$^+$/H$_3^+$ through reactions with neutral species like CO. The estimated deuterium fraction is therefore sensitive to the depletion factors of carbon, oxygen and nitrogen that are not easy to evaluate \citep{Caselli2002a}.
Moreover, deuterated tracers such as DCO$^+$ are typically only detectable in cold dense cores, representing only a tiny fraction of the observable area of a giant molecular cloud (GMC). Deuteration-based approaches are thus inadequate for an unbiased characterization of the conditions in GMCs as a whole. Despite the common use of advanced chemical models computing the abundances of hundreds of species, the observed tracers to which these models are compared to estimate $x(\mathrm{e}^-)$ (deuterated molecules such as DCO$^+$) are still those initially proposed based on analytical reasoning using simplified chemical networks. The wealth of data produced in large chemical model grids remains largely unexploited. Their exploration of wide parameter spaces might reveal less intuitive but more efficient tracers. Based on this approach, we propose here a general and largely automatic method to identify the best observational predictors of the ionization fraction, when other important parameters such as the gas density, temperature or H$_2$ ortho-to-para ratio are unknown. We apply this method to propose new predictors of the ionization fraction as a function of the molecular cloud conditions. We use simple stationary chemical models with a complete up-to-date chemical network \citep{Roueff2015}, and use molecular ratios (column density ratios or integrated line intensity ratios) as observable tracers from which we seek to predict the ionization fraction. We base our investigation on the observed range of physical conditions and detected tracers in the IRAM-30m Large Program ORION-B (Outstanding Radio-Imaging of OrioN B, co-PIs: J. Pety and M. Gerin) \footnote{Information and data related to the ORION-B program can be found at \url{http://www.iram.fr/~pety/ORION-B/}}.
In this program, we imaged 5 square degrees towards the southern part of the Orion B giant molecular cloud over most of the 3 mm atmospheric window~\citep{Pety2017,Gratier2017,Orkisz2017,Bron2018,Orkisz2019,Roueff2020,Gratier2020}. In the context of dense cores, the ionization fraction is linked with the cosmic ray ionization rate (CRIR) and both are often studied from the same molecular ratios (although direct tracers of the cosmic ray ionization rate can also be used in more diffuse medium, in particular H$_3^+$, e.g. \citealt{Indriolo2012,LePetit2016}). In our context of the Orion B giant molecular cloud, where UV illumination controls the ionization fraction in large parts of the cloud, we focus here on the question of estimating the ionization fraction only, independently of the source of ionization. The task of tracing the cosmic ray ionization rate (which has also been attempted using astrochemical model grids, e.g. \citealt{Barger2020}) will be considered in a future application of the method presented here. In this first article, we present a generic method to find the best tracers of an unobservable physical parameter and apply it to the search for new tracers of the ionization fraction among the species that are detectable in the ORION-B dataset. The observational application of the tracers found here to study the ionization fraction in the Orion B molecular cloud will be presented in a second paper \citep{Guzman2020}. In Sect.~\ref{sect:Method}, we describe our general statistical method for determining the best predictors of a given unobservable parameter based on the results of a model grid. In Sect.~\ref{sect:Models}, we present the models used in this study for the search for ionization fraction tracers. We then present the ranking of observable predictors in Sect.~\ref{sect:Tracers}.
For ease of application of our results, we provide in Sect.~\ref{sect:AnalyticalFits} analytical fit formulae to deduce the ionization fraction from each of the proposed best predictors. We finally discuss our results in Sect.~\ref{sect:Discussions} and present our conclusions in Sect.~\ref{sect:Conclusions}. \section{Method} \label{sect:Method} Both observable line intensities (or column densities) on the one hand, and the ionization fraction $x(\mathrm{e}^-)$ on the other hand, depend on multiple, unobservable physical parameters (e.g. gas density, elemental abundances, cosmic ray flux, ...). Our goal is to find reliable relationships between observable quantities and the ionization fraction, despite lacking estimates of these hidden physical parameters. To do this, we first run model grids covering the whole possible parameter space. We then use a flexible regression method to fit $x(\mathrm{e}^-)$ as a function of one of the potential observational tracers through the whole grid of chemical models. This means that we treat the effects of the variations of the hidden parameters as sources of noise on the prediction of the ionization fraction. Finally, we use a quantitative measurement of the fit quality as an estimate of the predictive power of each potential tracer. These estimates are used to rank the tracers and highlight the most powerful predictors of the ionization fraction. The fitted models for the best tracers will provide ready-to-use tools to be applied to observations. \subsection{Predicting the ionization fraction from one ratio of line intensities (or column densities) with Random Forests} Any a priori information could easily be included in the method by sampling the parameters of the model grid according to a specific prior distribution. However, we wish to minimize the amount of a priori information injected in the method and to avoid making assumptions on the shape of the distributions of physical parameters (e.g.
gas density, elemental abundances, cosmic ray flux, ...). We thus build a model grid that samples uniformly the possible range of values (see Sect.~\ref{sect:Models}). Our model grids provide us with a dataset comprising ionization fraction values and corresponding values of observable quantities. We will consider line intensity ratios or column density ratios as our observable quantities in this paper. The hidden physical parameter values introduce a non deterministic aspect to the relationship between $x(\mathrm{e}^-)$ and the observables: models might have identical values of an observable but different $x(\mathrm{e}^-)$ if the underlying physical parameters are different. Learning to predict $x(\mathrm{e}^-)$ from a given observable is then a regression problem, with the uncertainty introduced by the hidden physical parameters playing the role of noise. Determining the best tracers of $x(\mathrm{e}^-)$ is thus equivalent to finding observables for which the relationship to $x(\mathrm{e}^-)$ is least affected by this noise (i.e. by the hidden physical conditions). This means finding the observables for which the most accurate regression model can be found. For this regression problem, we choose to use Random Forests \citep{Breiman2001} because their flexibility makes it possible to fit general non-linear shapes, while their simplicity provides reasonable computational costs. This makes the method presented here very general and applicable to finding tracers of other physical parameters without any assumption on the shape of the relationship between the tracers and the target parameter. We will use RF for Random Forest in the rest of the paper. RF regression models are based on the concept of regression trees \citep{Breiman84}, where a succession of binary decisions are made based on the input variables (e.g. $x_3 < 2$ or $\ge2$) and constant values are predicted in each of the subsets of the partition that the decision tree defines. 
While such decision trees are easily interpretable, they require large tree depths to be flexible but are prone to overfitting if this depth is too large. RF tackle this overfitting problem by using the simple idea that multiple overfitted regression models will, when averaged, give a better prediction as long as the errors they individually make are uncorrelated between models. In a RF, the individual trees are made as independent as possible by introducing randomness in two aspects: 1) the building of each tree only considers a random subset of the input variables, and 2) each tree is given a bootstrapped sample (i.e. drawn by random sampling with replacement from the original dataset, \citealt{Breiman1996}) instead of the original full sample. This way, the datasets seen by the different trees are independent and each tree only sees a subset of the dataset (bootstrapped datasets typically contain only 63\% of the points of the original dataset as repetition is allowed). This provides a very flexible regression model, which retains some of the interpretability of decision trees. RF have thus quickly become a standard method in Machine Learning (see e.g. \citealt{Hastie2001}). In addition, they make it possible to estimate the generalization error of the fit (i.e., the error made when predicting data not seen during training): as each tree has only seen a random bootstrapped sample from the data, it is possible to estimate for each datapoint a partial prediction using only the trees that have not seen this datapoint during training. As the sample seen by a given tree is called a bag, these partial predictions are called out-of-bag (OOB) predictions. \cite{Gratier2020} also use RF in the context of the interstellar medium and introduce the method in detail. We thus train RF regression models for each observable (using only one observable at a time) and estimate the accuracy of the regression models.
This accuracy is taken as an estimate of the predictive power of the observable quantity considered, for the purpose of predicting $x(\mathrm{e}^-)$. The different observables can then be ranked according to this predictive power estimate. The accuracy of the regression model is estimated with the OOB $R^2$ \[ R^2 = 1 - \frac{S\!S_\mathrm{res}}{S\!S_\mathrm{tot}} \quad \mbox{with the sums of squares} \] \[ S\!S_\mathrm{res} = \sum_i \left( y_i^{pred} - y_i^{true}\right)^2 \quad \mbox{and} \quad S\!S_\mathrm{tot} = \sum_i \left( \overline{y^{true}} - y_i^{true} \right)^2, \] where the index $i$ runs across data points (individual model results), $y_i^{true}$ is the true value of the ionization fraction (computed by the chemical model), $y_i^{pred}$ is the OOB prediction value from the RF, and $\overline{y^{true}}$ is the average of the (true) ionization fraction over the model grid. This coefficient $R^2$ gives the fraction of the total ionization fraction variance (across the full model grid) that the RF model can explain from the given observable predictor alone (i.e. it measures the fractional decrease from the initial variance of $x(\mathrm{e}^-)$ to the variance of the residuals). It is thus at most 1, with 1 representing perfect prediction (zero residual variance). Note that it can take a negative value when the model performs worse than predicting a constant value set at the average $x(\mathrm{e}^-)$ of the dataset. A value of 0 indicates a performance equivalent to this constant prediction of the average. This $R^2$ value is used for the ranking of tracers. For information, we also provide below the root mean square error \[ \mathrm{RMSE} = \sqrt{ \frac{1}{N} \sum_i \left( y_i^{true} - y_i^{pred}\right)^2 }, \] where $N$ is the number of chemical models in our grid. The RMSE is uniquely determined by the $R^2$ value (and vice versa, for a given dataset), but is more interpretable in terms of the amplitude of typical errors.
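To make this evaluation scheme concrete, the following sketch fits an RF to a single synthetic observable and computes the OOB $R^2$ and RMSE exactly as defined above. The synthetic data, variable names, and the use of scikit-learn are illustrative assumptions; this is not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the model grid: one observable (a log ratio)
# noisily related to log x(e-); the noise mimics hidden parameters.
log_ratio = rng.uniform(-2, 2, size=2000).reshape(-1, 1)
log_xe = -6 + 1.5 * log_ratio[:, 0] + rng.normal(0, 0.3, size=2000)

# oob_score=True makes sklearn keep out-of-bag predictions:
# each point is predicted only by trees that never saw it in training.
rf = RandomForestRegressor(n_estimators=400, max_depth=4,
                           oob_score=True, random_state=0)
rf.fit(log_ratio, log_xe)
y_pred = rf.oob_prediction_

# OOB R^2, RMSE and max absolute error, following the paper's formulae.
ss_res = np.sum((y_pred - log_xe) ** 2)
ss_tot = np.sum((log_xe.mean() - log_xe) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((log_xe - y_pred) ** 2))
max_abs_err = np.max(np.abs(log_xe - y_pred))
```

Ranking tracers then amounts to repeating this fit once per candidate observable and sorting by the resulting $R^2$.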
We also provide the maximum absolute error \[ \mathrm{max.\,\,abs.\,\,err.} = \max _i \left| y_i^{true} - y_i^{pred}\right|. \] This quantity, estimating the maximum error made by our regression model, is not guaranteed to converge when increasing the size of the dataset. It should thus not be interpreted further than being the largest error we observed in our limited-size sample. The RF model depends on a few internal parameters (number of trees, maximum depth of trees, etc.). Their values can affect the quality of the model and its tendency to overfit. We used a number of trees in the forest $N_\mathrm{trees} = 400$ and a maximum tree depth $d_\mathrm{max} = 4$. The procedure used to select these values is described in Appendix~\ref{app:RF_params_optimization}. Our tests show that this optimization scheme is not critical for our purpose: while the choice of parameter values does affect the quality of the best fit RF model, it does not change significantly the ranking of the predictors that we deduce from it. The pipeline tool implementing this procedure is available at [link to be added before publication]. \section{Chemical models} \label{sect:Models} \TabGridRanges We use here the chemical code presented in \cite{Roueff2015} to study isotopic fractionation of deuterium, carbon and nitrogen compounds. Single zone models with fixed density, temperature, visual extinction, radiation field, cosmic ray ionization rate, ortho-to-para H$_2$ ratio and depletion factors are computed at steady state. We consider for the present study a chemical network including deuterium, and isotopic carbon and oxygen species, where the deuterium, carbon and oxygen fractionation reactions have been introduced following the recent determinations of exothermicities by \cite{Mladenovic2017}. We introduce in particular D$^{13}$CO$^+$.
Apart from these specific fractionation reactions, the chemistry of isotopically substituted species is built automatically from the chemical network of the major components. The chemical reactions involving one single carbon-containing reactant and one single carbon-containing product are duplicated with the same reaction rate coefficient. Simple statistical assumptions are introduced when two carbon-containing molecules are involved in the reaction. Consider for example the case of the reaction $$\mathrm{CX} + \mathrm{CY} \rightarrow \mathrm{CX}' + \mathrm{CY}', $$ taking place with a reaction rate coefficient $k$. The reactions introduced for the isotopically substituted species are the following: \begin{align*} ^{13}\mathrm{CX} + \mathrm{CY} &\rightarrow \vphantom{a}^{13}\mathrm{CX}' + \mathrm{CY}' \quad \mathrm{with} \,\,k/2 \\ ^{13}\mathrm{CX} + \mathrm{CY} &\rightarrow \mathrm{CX}' + \vphantom{a}^{13}\mathrm{CY}' \quad \mathrm{with} \,\,k/2 \\ \mathrm{CX} + \vphantom{a}^{13}\mathrm{CY} &\rightarrow \mathrm{CX}' + \vphantom{a}^{13}\mathrm{CY}' \quad \mathrm{with} \,\,k/2 \\ \mathrm{CX} + \vphantom{a}^{13}\mathrm{CY} &\rightarrow \vphantom{a}^{13}\mathrm{CX}' + \mathrm{CY}' \quad \mathrm{with} \,\,k/2 \\ ^{13}\mathrm{CX} + \vphantom{a}^{13}\mathrm{CY} &\rightarrow \vphantom{a}^{13}\mathrm{CX}' + \vphantom{a}^{13}\mathrm{CY}' \quad \mathrm{with} \,\,k \end{align*} Such a procedure leads to an ensemble of 310 species linked through 8711 chemical reactions. These models allow us to compute observable column density ratios for the hundreds of species included. Although commonly derived by observers, column densities are not the primary observable quantities, and we thus also compute line intensity ratios.
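The statistical duplication rule above can be sketched programmatically. The function and the string labels below are purely illustrative of the bookkeeping (each singly substituted channel receives $k/2$, the doubly substituted one keeps $k$); they are not the actual network-generation code.

```python
def c13_variants(cx, cy, cxp, cyp, k):
    """Generate the 13C-substituted channels of CX + CY -> CX' + CY'
    with rate coefficient k, following the statistical rule in the
    text: single substitution gets k/2, double substitution keeps k.
    Returns (reactants, products, rate) tuples (illustrative labels)."""
    s = lambda sp: "13" + sp  # mark a species as 13C-substituted
    return [
        ((s(cx), cy),    (s(cxp), cyp),    k / 2),
        ((s(cx), cy),    (cxp, s(cyp)),    k / 2),
        ((cx, s(cy)),    (cxp, s(cyp)),    k / 2),
        ((cx, s(cy)),    (s(cxp), cyp),    k / 2),
        ((s(cx), s(cy)), (s(cxp), s(cyp)), k),
    ]

# The five channels generated for a generic CX + CY reaction:
reactions = c13_variants("CX", "CY", "CX'", "CY'", 1.0e-9)
```

Applying such a rule over the whole network of major isotopologues is what yields the 310 species and 8711 reactions quoted above.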
To do so, we post-process the results of our chemical models using a non-LTE excitation and radiative transfer model (RADEX, \citealt{vanderTak2007}), assuming a typical linewidth of 1 km/s (observed linewidths in Orion B are typically of a few km/s, \citealt{Pety2017}). Results based on column density ratios and based on line intensity ratios will be presented separately in the following sections. It is unlikely that a single tracer will provide a good estimate of the ionization fraction $x(\mathrm{e}^-)$ in all possible physical conditions. Either the tracer will lose its relationship with $x(\mathrm{e}^-)$ in some conditions, or it might be too weak to be observable in other conditions. We thus decided to divide the range of possible conditions into subregions corresponding to the different types of environments found in GMCs \citep{Pety2017,Bron2018}. We focus on two kinds of environments: the translucent medium and the cold and dense gas, and we derive separate rankings of tracers for these two environments. The ranges of physical conditions explored for each of these environments are chosen based on our previous studies of Orion B and listed in Table~\ref{tab:grid_ranges}. In both grids, the gas density and gas temperatures were varied covering the typical ranges for the translucent medium ($3\times 10^2 - 3\times 10^3$ cm$^{-3}$, 15-100 K) and the cold and dense medium ($10^3 - 10^6$ cm$^{-3}$, 7-20 K). In the translucent model grid, external FUV photons still play an important role in the chemistry and in controlling the ionization fraction. This FUV illumination is controlled through an external FUV field strength $G_0$ (see e.g. \citealt{Hollenbach1999}, p. 177) and an extinction $A_V$ representing the amount of shielding between the FUV source and the gas under consideration (also used as the depth of the slab when computing line intensities).
We take into account self-shielding of H$_2$ by using the approximate expression of \cite{Draine1996} and also introduce the shielding of CO by H$_2$ from \cite{Heays2017}. We consider lower extinctions and higher $G_0$ values in the translucent medium ($A_V$ in the range 2-6, $G_0$ of the external field in the range 1 - 1000) than in the cold dense medium ($A_V$ in the range 5-20, external $G_0$ set to 1). We explore average to moderately strong FUV illumination values in the low density translucent grid. Regions with both high density and high FUV illumination correspond to dense photodissociation regions (PDR), in which strong chemical and physical stratification on small spatial scales is critical. These regions would thus require the use of complete PDR models, such as the Meudon PDR Code \citep{LePetit2006}. We thus did not explore this type of conditions in the present study. Given the uncertainties about the cosmic ray ionization rate in molecular clouds \citep{Lepp1992,McCall2003,Indriolo2007}, we consider the range of values $10^{-17}-10^{-15}$ s$^{-1}$. In the cold gas conditions, in order to account for the reduced cosmic ray fluxes \citep{Padovani2009}, we limit this range to $10^{-17}-10^{-16}$ s$^{-1}$. As sulfur can be an important contributor of electrons in neutral gas but has a highly uncertain gas-phase elemental abundance \citep{Agundez2013,Goicoechea2006}, we explore in both grids values of [S], the relative sulfur abundance with respect to H, in the range $1.86\times 10^{-8} - 1.86\times 10^{-5}$. We also explore ranges of the H$_2$ ortho-to-para ratio (OPR$_{\mathrm{H}_2}$), which significantly impacts two reactions:
H$_2$D$^+$ + o-H$_2$ $\rightarrow$ H$_3^+$ + HD \citep{Pagani1992}, where the endothermicity is reduced to 61.5 K (compared to 232 K with p--H$_2$), and N$^+$ + o-H$_2$ $\rightarrow$ NH$^+$ + H, which is slightly endothermic ($\sim$ 44 K) whereas N$^+$ + p-H$_2$ $\rightarrow$ NH$^+$ + H is more strongly endothermic ($\sim$ 170 K), as first emphasized by \cite{LeBourlot1991}. For this reaction, we follow the prescription of \cite{Dislaire2012}, which is derived from experimental results. We use higher values of the OPR in the warmer translucent medium ($0.1-3$) than in the cold dense gas ($10^{-4} - 10^{-1}$). Finally, cold dense cores offer conditions where molecules can freeze out on dust grains, depleting the gas phase abundances of elements such as C and O. In our cold dense medium grid, we thus in addition explore depletion factors going from 1 (no depletion) to 10 (C elemental abundance 10 times lower than the reference values) with a constant C/O elemental ratio value of 0.6 (the elemental abundance of carbon is taken to be [C]$=1.32\times 10^{-4}$ when there is no depletion). Other parameters that might have an impact (although second order compared to the parameters considered here), such as variations of the metal elemental abundances or PAH abundance, were not considered in this study. The gas-phase elemental abundances for metals (relative to H) are taken to be [Fe]$=1.5\times 10^{-8}$, [Cl]$=1.8\times 10^{-7}$, [Si]$=8.2 \times 10^{-7}$, [F]$=1.8\times 10^{-8}$, [Ar]$=3.29\times 10^{-6}$. For each medium type, a set of 5000 models was run, sampling randomly and uniformly within the chosen parameter space. The adequacy of this number of models for our purpose is ascertained later when estimating the uncertainties on our results. Given the variation by orders of magnitude both in the parameter values and the computed observables, we choose to work with the logarithm of all quantities.
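A minimal sketch of this log-uniform grid construction for the translucent medium is given below. The parameter names, the random seed, and the use of NumPy are illustrative assumptions; the ranges are those quoted above for the translucent grid (the grid-ranges table holds the authoritative values).

```python
import numpy as np

rng = np.random.default_rng(42)
n_models = 5000

# Translucent-grid parameter ranges quoted in the text.
ranges = {
    "n_H":    (3e2, 3e3),         # gas density [cm^-3]
    "T":      (15.0, 100.0),      # gas temperature [K]
    "A_V":    (2.0, 6.0),         # visual extinction
    "G0":     (1.0, 1000.0),      # external FUV field strength
    "zeta":   (1e-17, 1e-15),     # cosmic ray ionization rate [s^-1]
    "S":      (1.86e-8, 1.86e-5), # sulfur elemental abundance [S]/[H]
    "OPR_H2": (0.1, 3.0),         # H2 ortho-to-para ratio
}

# Sample each parameter uniformly in log10 over its range.
grid = {name: 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), n_models)
        for name, (lo, hi) in ranges.items()}
```

Each of the 5000 rows of such a grid is then fed to the chemical model, and the resulting abundances provide the observable ratios used for the RF fits.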
We sampled uniformly on the logarithm of each parameter within the ranges indicated above. The method described in Sect.~\ref{sect:Method} is applied on the logarithm of all quantities (i.e., training RF models using the logarithm of column density ratios or line intensity ratios to predict the logarithm of $x(\mathrm{e}^-)$). Note that representative error values such as the RMSE on logarithms are equivalent to representative error \emph{factors} on the actual quantity. These corresponding error factors are given in parentheses in the result tables of the following sections. To get rid of possible instrumental, calibration and other source geometry effects, we choose to work only on ratios of observable quantities (either column density ratios or line intensity ratios). In the following, we use the term tracers for ratios of observable quantities. Among all the species computed in our chemical model, we made a selection of species that are detected in the radio observations of the ORION-B project and potentially linked to the ionization fraction. Our search for the best tracers is made among the ratios of these selected species. For the translucent medium condition, we selected $^{13}$CO, C$^{18}$O, HCO$^+$, HCN, HNC, CN, C$_2$H, CS, SO, H$_2$CS, HCS$^+$, CF$^+$. The search for a best ratio was thus done among 66 possible column density ratios (and 66 line intensity ratios). For dense cold medium conditions, we considered the same selection with the addition of DCO$^+$ and N$_2$H$^+$. We thus had 91 possible column density ratios (and 91 line intensity ratios) in this case. For line intensities, the exact transition and frequency considered for each species are listed in Table~\ref{tab:LineIDs}. In the RADEX computations of line intensities, we account for collisional excitation with electrons (using the ionization fraction computed by the chemical model) for species for which collisional data with electrons are available in RADEX (HCO$^+$, HCN and C$_2$H).
We note that for CN, excitation by electrons was not included because, in the current version of RADEX, the collisional dataset that includes the hyperfine structure of CN does not include collisions with electrons. We chose to prioritize accounting for the hyperfine structure here. \TabLineIDs \section{Tracers rankings} \label{sect:Tracers} We applied the method described in Sect.~\ref{sect:Method} to the two chemical model grids (translucent medium conditions and cold dense medium conditions) presented in Sect.~\ref{sect:Models} in order to obtain a ranking of the selected potential tracers according to their usefulness for predicting the ionization fraction. \subsection{Translucent medium} \FigRankingSingleRatiosTranslucent \FigRFmodelBestRatiosTranslucent Figure~\ref{fig:RankingSingleRatiosTranslucent} presents the predictive power (estimated as the $R^2$ of a RF fit) of each tracer for the best 20 tracers. The left panel shows the result when taking column density ratios as observable quantities, and the right panel the results when considering line intensity ratios. We see that the ranking is similar in both cases, suggesting that excitation and radiative transfer only have a moderate effect on the relationship between these tracers and the ionization fraction. A more complete ranking (covering all tracers having $R^2>0.5$) is given in Table~\ref{tab:RatiosRankingColumnDensitiesTranslucent} (due to their sizes, the results tables of this Section and of Sect.~\ref{sect:AnalyticalFits} are placed in Appendix~\ref{app:Rankings})\footnote{Datafiles containing the trained RF models and tables of the rankings presented in this section are available at [the link will be added in the published version].}. In both cases, about ten different ratios are found to be each able to explain more than 80\% of the ionization fraction variance ($R^2 > 0.8$).
We emphasize that this means that an accurate prediction of the ionization fraction is possible from each of these tracers despite not knowing the values of the 7 physical and chemical parameters that have been varied in our model grid (cf. Table~\ref{tab:grid_ranges}). The $R^2$ values are slightly lower when using line intensity ratios rather than column densities ratios, indicating that excitation and radiative transfer effects tend to increase the degeneracy between the ionization fraction and other unknown parameters, but this effect remains moderate. To illustrate the performance of the tracers found with this ranking, we show on the left panel of Fig.~\ref{fig:RFmodelBestRatiosTranslucent} the ionization fraction versus the best ranked column density ratio (C$_2$H/HCN) in our grid of models for translucent medium conditions (blue symbols, with contours in shades of blue indicating iso-PDF contours encompassing 25\%, 50\%, and 75\% of the distribution) and the prediction of the fitted RF model (solid red line), which is found in this case to explain 95.7\% of the ionization fraction variance in our grid. The remaining scatter around the relationship represents the effect of ignoring all other parameters (gas density, temperature, UV field, H$_2$ OPR,...). The best ranked line intensity ratio (C$_2$H (1-0) / HCN (1-0)) is similarly shown on the right panel of Fig.~\ref{fig:RFmodelBestRatiosTranslucent} with the corresponding fitted RF model (solid red line). In translucent gas, the ionization is still dominated by the effect of external FUV photons ionizing carbon (and to a lesser extent sulfur and chlorine), and is slowly decreasing as the total extinction increases. In the conditions covered by our translucent grid, we find $x(\mathrm{e}^-)$ ranging from $2\times10^{-4}$ to $2\times10^{-7}$. 
C$_2$H, which we find in several of the best ratios, is known to be enhanced in FUV illuminated environments \citep{Pety2005,Cuadrado2015,Guzman2015,Gratier2017,Pety2017}: as explained in \cite{Beuther2008}, C$_2$H traces the amount of carbon not locked into CO, and is thus sensitive to the FUV flux through CO photodissociation and the presence of C and C$^+$ at a significant abundance level. In our translucent medium model grid, we indeed find C$^+$ to be the main charge carrier and thus to very strongly correlate with $x(e^-)$. H$^+$ and H$_3^+$ may also contribute to the electronic fraction in environments where the cosmic-ray ionization rate reaches values above $10^{-16}$ s$^{-1}$ \citep{LePetit2016}. However, C$^+$, an open shell ion, is chemically reactive with various molecules, except H$_2$\footnote{Except in strong PDR environments, where a small fraction of vibrationally excited H$_2$ may overcome the endothermicity barrier. Strong PDR environments are not considered here.}, and is at the origin of a complex chemistry with insertion of carbon atoms. C$^+$ itself is not straightforwardly detectable as its fine structure transition at 158 $\mu$m requires spaceborne or airborne observations. But we may expect that molecules involving C$^+$ in the initial chemical steps allow us to probe the electronic fraction. Our finding of ratios involving C$_2$H (relative to e.g. HCN, HNC or CN) as good proxies of the electronic fraction is a natural consequence of the relevance of C$^+$ as one of the main positive charge carriers. The initial step of C$_2$H formation indeed involves the C$^+$ + CH $\rightarrow$ C$_2^+$ + H reaction, followed by subsequent reactions with H$_2$ up to C$_2$H$_2^+$, which recombines to form C$_2$H. Molecules such as HCN, HNC or CO and its isotopologues, on the other hand, are saturated stable molecules whose abundances scale with column density.
As a result, ratios such as C$_2$H/HCN, whose transitions are easily detectable, offer a convenient diagnostic tool of the electronic fraction in translucent medium. The electronic fraction is then, as shown in Fig. \ref{fig:RFmodelBestRatiosTranslucent} (left panel), an increasing function of the C$_2$H/HCN ratio. CF$^+$ is another proxy of C$^+$, as described in \cite{Neufeld2006,Guzman2012}, and ratios involving this ion are also found here to be good tracers of the ionization fraction. However, this ion is relatively scarce since it involves fluorine, which has a low abundance relative to H$_2$, and it has only been detected in PDR environments so far (detectability issues are investigated in Sect. \ref{sec:noise}). In order to estimate the reliability of our results and determine if our 5000-model grid is sufficient to explore the chosen parameter space for our purpose, we compute errorbars on the predictive power estimate (the $R^2$ of the RF fit). To do so, we use 10-fold cross validation: the model grid is randomly split into 10 parts, and for each of these parts, an RF model is trained on the other 9 parts and tested on the remaining part which it has not seen during training. From these 10 estimates of the $R^2$ (all made on samples unseen during training), a reliable estimate of the predictive power on unseen data is made, as well as an estimate of the standard error on this predictive power. The corresponding error bars are also shown in Fig.~\ref{fig:RankingSingleRatiosTranslucent}. For most of the points, the errorbars are smaller than the marker, and the inset on the left panel presents a zoom on the first five ratios, showing the magnitude of the errorbars. This shows that the uncertainty induced by the finite size of our model grid is negligibly small and that our 5000-model grid is sufficient for our purpose.
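The 10-fold cross-validation described above can be sketched as follows; the toy linear relation stands in for an actual tracer-ionization relationship, and \texttt{cross\_val\_score} from scikit-learn handles the splitting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(2000, 1))            # log10 of a tracer ratio (toy)
y = 0.8 * X[:, 0] - 6.0 + rng.normal(0.0, 0.2, 2000)  # log10 x(e-) (toy relation)

# 10-fold cross validation: each fold's R^2 is computed on models unseen
# during training, so the mean estimates the predictive power on new data
# and the standard error gives the error bars on that estimate
scores = cross_val_score(
    RandomForestRegressor(n_estimators=50, random_state=0),
    X, y, cv=10, scoring="r2",
)
r2_mean = scores.mean()
r2_stderr = scores.std(ddof=1) / np.sqrt(len(scores))
```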
However, this conclusion should be taken with some caution as it has been shown that there exists no unbiased estimator of the variance of a cross-validation estimate \citep{Bengio2004} and that a naive estimation of this variance tends to underestimate it by a factor of up to 4 \citep{Varoquaux2018}. \subsection{Cold dense medium} \FigRankingSingleRatiosColdDense \FigRFmodelBestRatiosDense Figure~\ref{fig:RankingSingleColdenRatiosColdDense} presents the ranking of the best tracers in cold dense conditions (cf. Table~\ref{tab:grid_ranges}), for both column density ratios (left panel) and line intensity ratios (right panel). Error bars computed by the cross validation procedure described above are also shown, confirming that the size of our model grid is sufficient for estimating the quality of the fits based on each tracer. We see that the $R^2$ values of the best tracers are slightly lower than in the translucent medium case, indicating slightly stronger degeneracies with unknown parameters in this case (note that depletion was varied in this cold dense medium, in addition to the parameters varied in the translucent medium grid). However, we still find several tracers explaining more than 80\% of the variance in $x(\mathrm{e}^-)$. The best column density ratio is here found to be CN/N$_2$H$^+$. The cold dense environments are essentially ionized through cosmic rays and secondary UV photons induced by cosmic rays. Electrons are primarily produced by cosmic ray ionization of H$_2$ and destroyed in the efficient dissociative recombination reactions of the various molecular ions. He$^+$ ions, produced by cosmic ray ionization of He, are also particularly efficient in ionizing the molecular reservoirs, CO, HCN, N$_2$, H$_2$O. 
This contributes to forming atomic ions, in addition to the H$_3^+$ molecular ion resulting from H$_2$ ionization and the other stable molecular ions resulting from proton transfer from H$_3^+$ to stable molecules, giving ions such as H$_2$D$^+$, HCO$^+$, H$_3$O$^+$, or N$_2$H$^+$. The ionization carriers are then shared amongst several different species, ranging from the simple atomic ions that do not react with H$_2$ (i.e. C$^+$, S$^+$, H$^+$) to closed shell molecular ions such as H$_3^+$, HCO$^+$, H$_3$O$^+$, and N$_2$H$^+$. Molecular ions are principally destroyed by dissociative recombination reactions whereas atomic ions instead react with the neutral molecules present, since radiative recombination is not efficient. One can thus expect molecular ion abundances to be inversely proportional to the electron abundance, as seen with the CN/N$_2$H$^+$ ratio, which is found to increase monotonically with $x(e^-)$ (cf. Fig. \ref{fig:RFmodelBestRatiosDense}, left panel). As in the translucent case, we find slightly lower $R^2$ values for the best line intensity ratios than for the best column density ratio. However, we find a few ratios that have better scores as intensity ratios than as column density ratios. In particular, while the $^{13}$CO/HCO$^+$ column density ratio is found to be a poor predictor of the ionization fraction, the $^{13}$CO (1-0) / HCO$^+$ (1-0) line intensity ratio appears as one of the best tracers. Contrary to the previous cases, this is entirely an excitation effect. The abundance ratio of $^{13}$CO/HCO$^+$ is found to be mostly uncorrelated with $x(e^-)$ and mostly constant in the cold dense medium model grid (ratio of about $10^3$ with a typical scatter of a factor of 2-3).
However, HCO$^+$ and $^{13}$CO have strongly different critical densities: $\sim 2 \times 10^5$ cm$^{-3}$ for HCO$^+$ (in cold dense medium conditions, $x(e^-)$ is too low for electron collisions to play a significant role) in comparison to $\sim 2 \times 10^3$ cm$^{-3}$ for $^{13}$CO. In the range of densities considered in the cold dense medium grid ($10^3 - 10^6$ cm$^{-3}$), $^{13}$CO excitation is thus mostly at local thermodynamic equilibrium (LTE) and its emissivity per molecule is constant with gas density, while HCO$^+$ is transitioning from the sub-thermally excited regime to the LTE regime, and its emissivity per molecule thus increases with density. The $^{13}$CO (1-0) / HCO$^+$ (1-0) ratio thus decreases with gas density. On the other hand, we find the gas density to be very strongly anti-correlated with $x(e^-)$ in these conditions (with a typical scatter of a factor of $\sim 3$), as cosmic ray ionization is the dominant source of ionization here and the recombination rates per ion scale with the gas density. These two effects combine to give a $^{13}$CO (1-0) / HCO$^+$ (1-0) ratio that increases with $x(e^-)$ with a relatively tight correlation (cf. Fig. \ref{fig:AnalyticalmodelBestSixDenseIntensityRatios}, top right panel). Fig.~\ref{fig:RFmodelBestRatiosDense} shows the relation between $x(\mathrm{e}^-)$ and the best column density ratio, CN/N$_2$H$^+$ (left panel), and the best line intensity ratio, CF$^+$ (1-0) / DCO$^+$ (1-0) (right panel). We see a larger scatter than in Fig.~\ref{fig:RFmodelBestRatiosTranslucent}, but a clear relationship is still found. We note that the classical DCO$^+$/HCO$^+$ ratio does not appear among the best tracers found here for cold dense medium conditions. This point is discussed in Sect.~\ref{sec:trad_tracers}.
\section{Analytical fit formulas} \label{sect:AnalyticalFits} If possible, we recommend using the RF models described in the previous section when attempting to estimate $x(\mathrm{e}^-)$ from one of the best tracers listed above. However, the provided datafiles are dependent on a specific implementation of Random Forests (the \texttt{scikit-learn} module for Python, \citealt{Pedregosa2011}). For a simpler and more persistent solution (independent of any external software), we provide in this section simple analytical fit formulae for the best tracers found in Sect.~\ref{sect:Tracers}. While the RF models are flexible enough to make the method described in Sect.~\ref{sect:Method} generally applicable to any model grid and any physical quantity we want to find tracers of, the analytical fits provided here use formulas that have been specifically chosen for the application presented here (finding predictors of the ionization fraction from our chemical model grid). There is no guarantee that these same formulae would perform adequately for analytical fits with other model grids and/or another quantity to predict. \subsection{Prediction formulae} We use simple polynomial formulae (working as before on the logarithm of both the observable ratios and the ionization fraction) to fit the non-linear relationships between the best tracers found in Sect.~\ref{sect:Tracers} and $x(\mathrm{e}^-)$. This is applied to all tracers which were found to have $R^2>0.5$ in the previous RF analysis.
In the cold dense medium conditions, we thus use a simple polynomial of order 5: \begin{equation} f^\mathrm{dense}(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 \label{eq:main_fit_dense} \end{equation} where $x$ is the $\log_{10}$ of the column density ratio or line intensity ratio from which we want to predict $\log_{10}(x(\mathrm{e}^-))$, $f^\mathrm{dense}(x)$ is our fitting function to $\log_{10}\left(x(\mathrm{e}^-)\right)$ in cold dense gas conditions\footnote{We use the notation $\log_{10}$ for the logarithm in base 10, and $\log_e$ for the natural logarithm.}, and the parameters $a_0$ to $a_5$ are our fit parameters. A fit is made for each of the tracers that had $R^2>0.5$ (more than half of the variance explained) in the rankings of Section~\ref{sect:Tracers}. The corresponding fit coefficient values for each column density ratio are given in Table~\ref{tab:RatiosColumnDensitiesDenseFullFitCoeff} and the coefficient values for line intensity ratios are listed in Table~\ref{tab:RatiosIntensitiesDenseFullFitCoeff}. In the translucent medium conditions, $x(\mathrm{e}^-)$ naturally reaches a plateau at the fractional abundance of carbon ($1.32\times10^{-4}$ in our undepleted models). We thus use a modified formula combining a polynomial of order 5 and a saturation: \begin{equation} f^\mathrm{translucent}(x) = f_\mathrm{max} - \log_e \left( 1 + e^{-(a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5)} \right) \label{eq:main_fit_translucent} \end{equation} The fit parameters here are $f_\mathrm{max}$ and $a_0$ to $a_5$. As in the cold dense medium case, $f^\mathrm{translucent}(x)$ is our fitting function to $\log_{10}\left(x(\mathrm{e}^-)\right)$ in translucent gas conditions. The corresponding fit coefficient values for each column density ratio are listed in Table~\ref{tab:RatiosColumnDensitiesTranslucentFullFitCoeff} and the coefficient values for line intensity ratios are listed in Table~\ref{tab:RatiosIntensitiesTranslucentFullFitCoeff}. 
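For illustration, both prediction formulae can be evaluated with a few lines of Python; the coefficients below are placeholders (not values from our tables), and the saturated form uses \texttt{log1p} for numerical stability.

```python
import numpy as np

def f_dense(x, a):
    # Order-5 polynomial in x = log10(observable ratio)
    return sum(a_k * x**k for k, a_k in enumerate(a))

def f_translucent(x, a, f_max):
    # Polynomial with a smooth saturation at f_max: the natural-log term
    # log(1 + e^-p) vanishes when p >> 0, so the prediction tends to f_max
    p = sum(a_k * x**k for k, a_k in enumerate(a))
    return f_max - np.log1p(np.exp(-p))

# Placeholder coefficients for illustration (NOT values from our tables)
a = [-5.0, 1.2, 0.1, 0.0, 0.0, 0.0]
f_max = np.log10(1.32e-4)   # plateau at the carbon fractional abundance
log_xe = f_translucent(np.array([0.5, 1.0, 2.0]), a, f_max)
```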
The order of the polynomial in these functions is chosen so that further increasing it yields only marginal increase in $R^2$ (estimated by cross-validation to avoid overfitting). The $R^2$ values found for the best tracers in each case are close to the values initially found with the RF models, indicating that our analytical fits are not significantly worse than the RF models, at least for the high-$R^2$ tracers. As an example, Figures \ref{fig:RFmodelBestRatiosTranslucent} and \ref{fig:RFmodelBestRatiosDense} also show the analytical fit (solid red line) in comparison to the RF model (solid black line). For a few of the lower $R^2$ tracers, the analytical fits perform significantly worse as can be seen on the tables of Appendix~\ref{app:Rankings} by comparing the $R^2$ values of the RF models with the $R^2$ values of the analytical fits. \subsection{Uncertainty formulae} \FigAnalyticalmodelUncertainties Finally, a key point is to estimate uncertainties on our prediction of the ionization fraction. We separate here two sources of uncertainty. Our best analytical fit is determined on a finite sample of models, so that the fit coefficients are only estimates of the theoretical best fit coefficients. Estimating again these fit coefficients from a different sample of models (drawn from the same distribution) would result in slightly different values, and these uncertainties on the fit coefficients in turn imply an uncertainty on the ionization fraction value predicted by the fit formula at any given value of the observable quantity (intensity ratio or column density ratio). In order to estimate this uncertainty on the predicted value, we proceed by bootstrapping: we repeat the fitting procedure on 100 bootstrapped samples (drawn from the original model grid) and report the standard deviation of the value of the fit function as its uncertainty. 
The left panel of Fig.~\ref{fig:AnalyticalmodelUncertainties} shows the corresponding uncertainty (showing the $3 \sigma$ level in dashed curves around the main prediction curve) in the case of the best column density ratio in cold dense gas conditions (CN/N$_2$H$^+$). We name this uncertainty the fit coefficient uncertainty to distinguish it from the second form of uncertainty below. We define the validity domain of our fit as the range of values of the observable ratio where our best fit is sufficiently constrained by our finite grid of models for this fit coefficient uncertainty to be negligible. In practice, we define it as the range of ratios where the above-defined uncertainty remains lower than 2\% of the predicted value. In the following, we will thus assume this uncertainty to be negligible inside of this validity domain and focus on the second form of uncertainties. The limits of the corresponding validity range are also shown on the left panel of Fig.~\ref{fig:AnalyticalmodelUncertainties} as blue vertical lines. Due to the tendency of high order polynomial fits to diverge quickly outside of the domain of the fitted dataset, the analytical fit formulae should not be used outside of the validity range defined here. The second form of uncertainties comes from the unobservable parameters (density, temperature,...), which induce a scatter in the relationship between any of the line intensity ratios or column density ratios and the ionization fraction. Inside of the validity domain of the fit, this scatter in the residuals is much larger than the fit coefficient uncertainty, as can be seen on the left panel of Fig.~\ref{fig:AnalyticalmodelUncertainties}, and is thus the dominant source of uncertainties.
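A minimal sketch of this bootstrap procedure and of the resulting validity domain, with \texttt{np.polyfit} standing in for our fitting procedure on toy data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-2.0, 2.0, 2000)                # log10 of the observable ratio
y = 0.8 * x - 6.0 + rng.normal(0.0, 0.2, 2000)  # log10 x(e-) (toy relation)

# Bootstrap: refit on resampled grids and take the point-wise standard
# deviation of the fitted curves as the fit coefficient uncertainty
grid = np.linspace(-2.0, 2.0, 50)
curves = []
for _ in range(100):
    idx = rng.integers(0, len(x), len(x))       # resample with replacement
    coeffs = np.polyfit(x[idx], y[idx], deg=5)
    curves.append(np.polyval(coeffs, grid))
sigma_fit = np.std(curves, axis=0)

# Validity domain: where this uncertainty stays below 2% of the absolute
# predicted value (the criterion used in the text)
best = np.polyval(np.polyfit(x, y, deg=5), grid)
valid = sigma_fit < 0.02 * np.abs(best)
```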
When applying the fit formula to real observations, in the ideal case where the chemical model used in this paper were a perfect model of reality, we would expect the value predicted by the fit formula to incur a mean squared error equal to the variance of this scatter of the residuals, and this error represents our lack of information on the underlying physical conditions. As can be seen in the previous figures, however, this residual scatter varies as a function of the observable predictor (the vertical scatter is smaller in some regions of the plot than in others). This residual variance as a function of the predictor can be estimated by a moving average method (shown in the right panel of Fig.~\ref{fig:AnalyticalmodelUncertainties}; we used a window of 0.1 dex). In order to provide a simpler way of estimating this residual variance function, we fitted the simple analytical function \begin{equation} g(x) = \left| b_0 + b_1 x + b_2 x^2 + b_3 x^3 + b_4 x^4 + b_5 x^5 \right| \label{eq:uncertainty_fit} \end{equation} to the squared residuals, thus providing a fit to the local variance. Note that this function provides a fit to the variance on the prediction of $\log_{10}(x(\mathrm{e}^-))$, and, as previously, $x$ is the $\log_{10}$ of the observable ratio. The absolute value in this function was chosen because the residual variance is by definition a positive quantity. The best fit coefficients to this residual variance function are given in Tables~\ref{tab:RatiosColumnDensitiesTranslucentFullFitCoeff}, \ref{tab:RatiosIntensitiesTranslucentFullFitCoeff}, \ref{tab:RatiosColumnDensitiesDenseFullFitCoeff} and \ref{tab:RatiosIntensitiesDenseFullFitCoeff} in Appendix \ref{app:Rankings}.
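The moving-average estimate and the variance fit of Eq.~\ref{eq:uncertainty_fit} can be sketched as follows on toy heteroscedastic residuals (the 0.1 dex window corresponds to a half-width of 0.05 in $\log_{10}$):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2.0, 2.0, 2000)        # log10 of the observable ratio
# Toy residuals whose scatter varies with x (heteroscedastic)
res = rng.normal(0.0, 0.1 + 0.05 * (x + 2.0))

# Moving-average estimate of the local variance (0.1 dex window)
grid = np.linspace(-1.8, 1.8, 40)
var_ma = np.array([np.var(res[np.abs(x - g) < 0.05]) for g in grid])

# Polynomial fit to the squared residuals; the absolute value of the
# evaluated polynomial guarantees a positive variance estimate
b = np.polyfit(x, res**2, deg=5)
var_fit = np.abs(np.polyval(b, grid))
sigma_loc = np.sqrt(var_fit)   # local standard deviation of the prediction
```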
An example of the corresponding residual standard deviation function is shown in the right panel of Fig.~\ref{fig:AnalyticalmodelUncertainties}, where we compare its moving-average estimate (dotted red curves around the main prediction curve) to the square root of the fitted variance function of Eq.~\ref{eq:uncertainty_fit} (dashed black curve), showing in both cases the $3 \sigma$ level, for the best column density ratio in the cold dense medium case (CN/N$_2$H$^+$). The resulting fits are also presented for the best ratio in the different cases in Fig.~\ref{fig:RFmodelBestRatiosTranslucent} and \ref{fig:RFmodelBestRatiosDense}, showing the RF fit, the analytical fit and the scatter fit. Similar figures for each of the 6 best tracers for both the translucent medium and the cold dense medium conditions, and for both column density ratios and line intensity ratios, are presented in Appendix~\ref{app:fit_figures}. \section{Discussions} \label{sect:Discussions} \subsection{Traditional ionization tracers} \label{sec:trad_tracers} \FigDCOpoverHCOp One of the most commonly used ionization fraction tracers is DCO$^+$/HCO$^+$ \citep{Guelin1977,Guelin1982,Dalgarno1984,Caselli1998}. It is used mainly in cold dense cores, where the temperature is low enough to allow sufficient deuterium enrichment, and the column density is large enough to make DCO$^+$ detectable. However, our results show that even in the cold dense gas regime, and despite including DCO$^+$ in our list of potential tracers, the DCO$^+$/HCO$^+$ column density ratio is not ranked among the best tracers of the ionization fraction. In fact, it is ranked as the 38\textsuperscript{th} best tracer in dense cold gas conditions, with an $R^2$ of only 0.57 (cf. Table~\ref{tab:RatiosColumnDensitiesDenseFullFitCoeff}). Note that this ratio is often determined using observations of H$^{13}$CO$^+$ as H$^{12}$CO$^+$ can be optically thick in high-column-density lines of sight.
We leave this aspect aside in this discussion by showing that the column density ratio DCO$^+$/HCO$^+$ itself (however it might be determined observationally) suffers from several limitations as a tracer of the ionization fraction. The relationship between the DCO$^+$/HCO$^+$ abundance ratio and the ionization fraction found in our (cold and dense) model grid is shown in Fig.~\ref{fig:DCOpoverHCOp}. The blue distribution shows the results of the model grid and the black line our best RF model, with a $R^2$ of 0.57. We see that two main problems limit the usability of DCO$^+$/HCO$^+$. First, a large scatter of the ionization values (by up to 3 orders of magnitude) is present at all values of the ratio, despite having a significant fraction of the distribution tightly located around a clear relationship (the outermost blue contour encloses 75\% of the distribution). As a result, the best fit RF model tries to make a compromise between the tightly located part of the distribution, and the scattered points at lower ionization fraction values. We found most of this scatter to be related to variations in the ortho-to-para ratio of H$_2$ (OPR$_\mathrm{H_2}$), an unobservable parameter whose value remains difficult to estimate in observations of dense cold cores. For instance, selecting only the models having OPR$_\mathrm{H_2} < 2.5\times 10^{-3}$, we see in Fig.~\ref{fig:DCOpoverHCOp} (red dashed contour) that we retain only the unscattered part of the distribution. Thus the difficulty of obtaining reliable estimates of the OPR of H$_2$ in dense cores limits the use of DCO$^+$/HCO$^+$ as a tracer of the ionization fraction. Second, even when selecting the low OPR$_\mathrm{H_2}$ models, we see that the relationship presents a very steep slope at high ratio values (low ionization fraction values). As a result, for DCO$^+$/HCO$^+$ ratios above $10^{-1.6}$, a range of ionization fractions of more than two orders of magnitude is possible. 
The ratio would thus only be usable at lower values, corresponding to ionization fractions larger than $10^{-6.5}$. Even if the model grid presented no scatter at all, a steep slope implies that small observational uncertainties on the ratio will induce large uncertainties on the predicted ionization fraction. Thus, relationships with steep slopes are of limited use. These different effects combine to make the DCO$^+$/HCO$^+$ ratio a poor predictor of the ionization fraction, compared to the best ranked tracers found by our method. \subsection{Detectability constraints}\label{sec:noise} \FigTranslucentNoiseImpactAbs \FigTranslucentNoiseImpactSNR So far, detectability constraints have been ignored. Predictive power has been tested from noiseless values of column density or line intensity ratios. However, the different lines considered here have widely different brightnesses and thus differ in terms of detectability with current instruments. We explore here the effect of various noise levels on the predictive power of the different line ratios. We will consider two noise setups corresponding to two observation scenarios: the case of one constant noise level for all lines, corresponding to the typical case of a line survey where faint lines are detected with a lower signal-to-noise ratio than bright ones, and the case of a fixed signal-to-noise ratio (SNR) for all lines, corresponding to observations being designed to reach a set signal-to-noise ratio for a few desired lines. These two cases correspond to the two opposite extremes of possible observation scenarios and will give us a general overview of the possible impact of noise on the performance of the tracers. In both cases, synthetic noise is added to the line integrated intensity values and we consider the SNR on the integrated intensity and not the peak intensity. Note that all noise and SNR values quoted are for individual lines, not for line ratios.
In both cases, in order to measure how the predictive power (measured as the $R^2$ value) is affected by noise, we perform a modified cross-validation. The model grid is randomly split into ten parts. For each of these tenths and for each line ratio: \begin{enumerate} \item An RF model is \emph{trained} from the other nine parts of the model grid, \emph{without any noise added}. \item The trained RF is \emph{tested} on the tenth under consideration, \emph{with added noise} (either with a constant noise variance $\sigma^2_\mathrm{noise}$ for all models and all lines, or with a constant SNR for each model and line). Since the RF models take the $\log_{10}$ of the intensity ratio as input and since the addition of noise can produce negative values, we only apply the RF model when both line integrated intensity values are above 1 $\sigma$. Otherwise, we do not use the RF model and simply take the average ionization fraction value of the grid as our prediction. \end{enumerate} We finally take the average $R^2$ value obtained over the ten noisy tests as the predictive power of the tracer under noisy conditions. We thus avoid estimating $R^2$ from datapoints that have been seen during training, and we estimate the predictive power of a model trained on noiseless data when applied to noisy data (as would be the case when applying the results of this article to real observations). \TabNoiseImpactTranslucent The results for the translucent medium grid, for a few possible line ratios, are presented in Fig.~\ref{fig:TranslucentNoiseImpactAbs} and \ref{fig:TranslucentNoiseImpactSNR}. Figure~\ref{fig:TranslucentNoiseImpactAbs} presents the scenario of a constant noise level $\sigma_\mathrm{noise}$ for all lines and all models, and shows how the $R^2$ of the prediction from different line ratios decreases when the noise level is increased.
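This noisy-test procedure, in the constant noise level scenario, can be sketched as follows; the line intensities, the relation to $x(\mathrm{e}^-)$, and the noise level are illustrative stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n = 1000
I1 = 10.0 ** rng.uniform(-2.0, 0.0, n)   # toy line intensities (K km/s)
I2 = 10.0 ** rng.uniform(-2.0, 0.0, n)
y = 0.8 * np.log10(I1 / I2) - 6.0        # toy log10 x(e-) relation

sigma_noise = 5e-3                       # constant-noise-level scenario
r2_folds = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(I1):
    # 1. train on noiseless data
    X_tr = np.log10(I1[tr] / I2[tr]).reshape(-1, 1)
    rf = RandomForestRegressor(n_estimators=30, random_state=0).fit(X_tr, y[tr])
    # 2. test with added noise; apply the RF only where both lines are
    # above 1 sigma, else fall back to the grid-average value
    I1n = I1[te] + rng.normal(0.0, sigma_noise, len(te))
    I2n = I2[te] + rng.normal(0.0, sigma_noise, len(te))
    pred = np.full(len(te), y.mean())
    ok = (I1n > sigma_noise) & (I2n > sigma_noise)
    if ok.any():
        pred[ok] = rf.predict(np.log10(I1n[ok] / I2n[ok]).reshape(-1, 1))
    r2_folds.append(r2_score(y[te], pred))
r2_noisy = np.mean(r2_folds)   # predictive power under noisy conditions
```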
We show only the three best line ratios (according to the noiseless ranking of Tab.~\ref{tab:RatiosRankingIntensitiesTranslucent}): C$_2$H (1-0) / HCN (1-0), C$_2$H (1-0) / $^{13}$CO (1-0), and C$_2$H (1-0) / C$^{18}$O (1-0), and the four tracers found to be the least sensitive to noise. We define the tracers least sensitive to noise as those having the highest $\sigma_{1/2}$ (the noise level at which the $R^2$ is halved; see below) among tracers with $R_\mathrm{noiseless}\ge 0.7$, thus giving a compromise between a good fit quality (high $R_\mathrm{noiseless}$) and a slow decrease with increasing noise level (high $\sigma_{1/2}$). The four best ratios found according to this definition and shown on the figure are $^{13}$CO (1-0) / C$^{18}$O (1-0), C$^{18}$O (1-0) / CF$^+$ (1-0), $^{13}$CO (1-0) / CF$^+$ (1-0), and HCN (1-0) / CF$^+$ (1-0). We see that the best three tracers, all including C$_2$H, have their predictive power decreasing sharply at a relatively low noise level (their $R^2$ decreases by 50\% at $\sigma_\mathrm{noise} \sim 5\times 10^{-3}$ K~km~s$^{-1}$). This is due to the relatively low brightness of the C$_2$H line in translucent conditions (median integrated intensity of $\sim 7\times 10^{-3}$ K~km~s$^{-1}$ in our translucent medium grid). In comparison, some line ratios built from brighter lines, despite a lower $R^2$ on noiseless data, are found to perform better in the presence of noise: the $R^2$ for the $^{13}$CO (1-0) / C$^{18}$O (1-0) ratio decreases by 50\% at $\sigma_\mathrm{noise} \sim 4\times 10^{-2}$ K~km~s$^{-1}$ (C$^{18}$O (1-0) has a median integrated intensity of $\sim 3\times 10^{-1}$ K~km~s$^{-1}$ in this grid); the C$^{18}$O (1-0) / CF$^+$ (1-0) ratio has its $R^2$ decreased by 50\% at $\sigma_\mathrm{noise} \sim 10^{-2}$ K~km~s$^{-1}$ (CF$^+$ (1-0) has a median integrated intensity of $\sim 1.2 \times 10^{-2}$ K~km~s$^{-1}$ in this grid).
We note, however, that ratios built from CF$^+$ (1-0) actually only perform marginally better than the best ratios involving C$_2$H at high noise levels (see Fig. \ref{fig:TranslucentNoiseImpactAbs}) due to the low brightness of CF$^+$ (1-0). For a more exhaustive comparison of the line ratios, Tab.~\ref{tab:NoiseImpactTranslucent} gives for each line ratio the noise level $\sigma_{1/2}$ at which the $R^2$ is decreased by half. \FigDenseNoiseImpactAbs \FigDenseNoiseImpactSNR Figure~\ref{fig:TranslucentNoiseImpactSNR} similarly shows the results for the scenario of a fixed SNR for all lines and all models, still in the case of translucent medium conditions. The variations of the $R^2$ of the prediction with the SNR are shown for the best seven tracers (according to the noiseless ranking of Table~\ref{tab:RatiosRankingIntensitiesTranslucent}). In this scenario, we find as expected that the predictive power drops at an SNR of order unity. The only exception is the $^{13}$CO (1-0) / C$^{18}$O (1-0) ratio, which decreases slightly earlier. This is due to this ratio spanning a relatively small range of values in our grid of models (approximately one order of magnitude) while the other line ratios span ranges of four to six orders of magnitude. As the ionization fraction spans a range of values of 2.5 orders of magnitude in the grid, this implies that the relationship between the ionization fraction and the $^{13}$CO (1-0) / C$^{18}$O (1-0) ratio has a much steeper slope than for the other line ratios. A steep slope then implies that small errors in the line ratios result in large errors in the predicted ionization fraction. As a result, the $^{13}$CO (1-0) / C$^{18}$O (1-0) ratio requires significantly higher SNRs than the other line ratios. Similarly to $\sigma_{1/2}$, we define SNR$_{1/2}$ as the SNR value at which $R^2$ is half of its value on noiseless data. The SNR$_{1/2}$ values for all line ratios are also given in Table~\ref{tab:NoiseImpactTranslucent}.
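The extraction of $\sigma_{1/2}$ from a curve of $R^2$ versus noise level can be sketched with a simple log-linear interpolation; the $R^2$ curve below is a hypothetical smooth decrease for illustration, not one of our measured curves.

```python
import numpy as np

# Hypothetical R^2 versus noise level curve (smooth monotonic decrease)
sigma = np.logspace(-4, -1, 30)           # noise levels (K km/s)
r2 = 0.9 / (1.0 + (sigma / 5e-3) ** 2)    # toy curve, NOT a measured one

# sigma_1/2: the noise level at which R^2 drops to half its noiseless value
target = 0.5 * r2[0]
i = int(np.argmax(r2 < target))           # first point below half
# log-linear interpolation between the two bracketing noise levels
f = (target - r2[i - 1]) / (r2[i] - r2[i - 1])
sigma_half = 10.0 ** (np.log10(sigma[i - 1])
                      + f * np.log10(sigma[i] / sigma[i - 1]))
# for this toy curve, sigma_half lands near its 5e-3 turnover scale
```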
\TabNoiseImpactDense The results for dense and cold medium conditions are similarly shown in Fig.~\ref{fig:DenseNoiseImpactAbs} and \ref{fig:DenseNoiseImpactSNR} and Table~\ref{tab:NoiseImpactDense}. In the case of a constant noise level, we find that the $R^2$ drop generally occurs at higher noise levels than in the translucent medium case, as expected because the lines are brighter in the cold dense medium due to higher column densities. Figure~\ref{fig:DenseNoiseImpactAbs} shows the decrease of $R^2$ with $\sigma_\mathrm{noise}$ for the best three dense gas tracers (according to the noiseless ranking of Tab.~\ref{tab:RatiosRankingIntensitiesTranslucent}), and for the four ratios least sensitive to noise (according to the same definition as previously). We note that $^{13}$CO (1-0) / HCO$^+$ (1-0), the second best ratio on noiseless data (therefore already shown on the figure), also has the highest $\sigma_{1/2}$ value among ratios with $R_\mathrm{noiseless}\ge 0.7$, so that we show the next four ratios least sensitive to noise on the figure. These four ratios are found to be C$^{18}$O (1-0) / HCO$^+$ (1-0), HCO$^+$ (1-0) / CN (1-0), SO (3-2) / CN (1-0), and HCN (1-0) / CN (1-0). The $\sigma_{1/2}$ values for all line ratios are listed in Table~\ref{tab:NoiseImpactDense}. When considering a constant SNR scenario for the dense cold medium case, we again find that the $R^2$ drop occurs at an SNR value of order unity, as shown in Fig.~\ref{fig:DenseNoiseImpactSNR}. The SNR$_{1/2}$ values for all line ratios are listed in Table~\ref{tab:NoiseImpactDense}. When interpreting the results of this study, one must keep in mind that the decrease in $R^2$ in our two scenarios (constant $\sigma_\mathrm{noise}$ or constant SNR) does not come from the same effect.
In the constant $\sigma_\mathrm{noise}$ scenario, at a given $\sigma_\mathrm{noise}$ level one part of the grid has undetected or very low SNR values for the ratio under consideration, while the other part has high SNR. The $R^2$ decrease is indicative of the growing fraction of the parameter space with undetected/low SNR ratios. As a result, even when finding a low overall $R^2$, there might remain a fraction of the parameter space where the predictor remains very good (usually, high column density, high volume density, high temperature,...), which we did not characterize here. In the constant SNR scenario on the other hand, the SNR is by design constant over all models, independent of the physical parameters. The decrease in overall $R^2$ is then more representative of the decrease in predictive power at any point in the parameter space. As a result, a ratio with a low $\sigma_{1/2}$ value might still be usable in real observations with a higher noise level but would be restricted to high brightness regions of GMCs (in the corresponding lines), while a ratio cannot be used at all in observations with SNR significantly lower than its SNR$_{1/2}$ value. \subsection{Chemical model reliability}\label{sec:chemical_reliability} Independently of the statistical method that we present in this article, the results obtained rely on the chosen chemical model and its limitations. Previous works on ionization fraction tracers have mostly used stationary-state results of single-zone chemical models (although some works have used time-dependent models, e.g., \citealt{Maret2007,Shingledecker2016}). As these previous studies have been mostly limited to deuteration-based tracers, the focus of the present article has been on highlighting the non-deuteration-based tracers that can be found for the ionization fraction from similar chemical models. We discuss here the impacts of our model's limitations on our results.
Our single-zone model cannot include a detailed treatment of UV radiative transfer through the cloud. While most photodissociation rates can be simply estimated based on an assumed optical depth of dust protecting each model from UV photons (parameter $A_\mathrm{V}$ in our models), species such as H$_2$, CO and its isotopologues can be protected from photodissociation by self- or mutual-shielding. While self-shielding of H$_2$ and mutual shielding of CO by H$_2$ are included in our model using approximations \citep{Draine1996,Heays2017}, mutual shielding of $^{13}$CO and C$^{18}$O by H$_2$ and $^{12}$CO are not included. As a result, in our translucent medium models where photodissociation by external UV photons still plays an important role, we expect the abundances of the rarer CO isotopologues to be less reliable than the other species. Note that the observations of $^{13}$CO and C$^{18}$O in our ORION-B dataset indeed present specificities (systematic excitation temperature differences with $^{12}$CO, \citealt{Bron2018,Roueff2020}) that remain unexplained even by more complex 1D PDR models. The only explicit surface reaction in our chemical model is H$_2$ formation; however, we account for the freeze-out of CO through our depletion parameter. The list of species that we consider as possible tracers has been restricted to species that are not strongly affected by surface chemistry beyond the depletion effect. Extension to more complex molecules would require a chemical model including a more complete treatment of surface chemistry. Another source of uncertainties comes from the experimental or theoretical estimates of the reaction rate coefficients used in our chemical network. While we did not directly perform a sensitivity analysis of the reaction rate coefficients, our model grids include temperature variations which subsequently impact the reaction rate coefficients through their temperature dependence.
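The temperature dependence just mentioned enters through the modified Arrhenius form used by astrochemical databases such as KIDA and UMIST. A minimal sketch (the numerical coefficients below are illustrative assumptions, not values from our network):

```python
import numpy as np

def rate_coefficient(T, alpha, beta, gamma):
    """Modified Arrhenius form used by astrochemical databases (KIDA/UMIST):
    k(T) = alpha * (T / 300 K)^beta * exp(-gamma / T), in cm^3 s^-1."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

# Illustrative coefficients for a dissociative recombination reaction
# (assumed values, chosen only to show the temperature scaling).
alpha, beta, gamma = 2.4e-7, -0.7, 0.0
k10 = rate_coefficient(10.0, alpha, beta, gamma)
k100 = rate_coefficient(100.0, alpha, beta, gamma)
# With beta = -0.7, the rate is (10/100)^(-0.7) ~ 5 times larger at 10 K
# than at 100 K, so temperature variations in the grid probe the rates.
```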
This is especially true for the important dissociative recombination reactions, which display significant temperature dependences. Temperature is then considered as an unobserved parameter when searching for good tracers of $x(e^-)$. The discovery of strong relationships between some of the ratios and $x(e^-)$, despite temperature variations in the grid, thus indicates that these relationships are to some extent robust to variations in the reaction rates. A careful analysis of the magnitude and correlations of model uncertainties resulting from reaction rate uncertainties in the chemical network would deserve a separate study. Time-dependent effects are expected to be more important in cold dense medium conditions than in translucent medium conditions as photochemistry has shorter timescales in the latter case. Time-dependent effects that result from the time evolution of some physical parameter (e.g. density and temperature during the contraction of a core) while the chemistry follows in a quasi-stationary way are in part accounted for in our models by exploring a large range of the various physical parameters that can be subject to variations (see Table~\ref{tab:grid_ranges}). In addition, the progressive freeze-out of CO on dust grains is accounted for by considering a range of depletion factors for carbon and oxygen. Similarly, the slow evolution of OPR$_\mathrm{H_2}$ in cold gas can keep parts of the chemistry (deuterium chemistry, nitrogen chemistry) in a time-dependent evolution that depends mainly on the evolution of OPR$_\mathrm{H_2}$. This is also in part accounted for by exploring a large range of OPR$_\mathrm{H_2}$ values in our models. The tracer-finding method presented in this article will be applied to time-dependent chemical models in a future study. The final and most important limitation of our model is that it does not include a spatial dimension: gas at a single value of density, temperature, etc.
is assumed to be exposed to a given radiation field and protected by a given column density. As a result, the possibility that different emission lines originate in separate layers of gas on the line of sight is completely neglected. Variations of the physical conditions along the line of sight can indeed have an important impact on the observables (e.g. \citealt{Levrier2012}). This limitation could be important both in the translucent medium, where physical and chemical gradients are present due to the progressive extinction of the external UV field, and in dense cores with density and temperature gradients. As a result, some caution must be exercised when choosing the line ratios to consider: ratios involving two species expected to emit in completely different regions should be avoided. For instance, the C$_2$H (1-0) / N$_2$H$^+$ (1-0) ratio, found to be the fourth best line ratio in the dense cold medium, should be avoided: C$_2$H is known to be a tracer of UV illumination and thus more likely to be emitted at the external surface of a given clump, while N$_2$H$^+$ is abundant in the inner regions of the core where CO is already significantly depleted. An application of our method to 1D PDR models to better account for this effect in translucent medium conditions will be carried out in a future work. \subsection{Parameter PDF in the model grids} In the model grids used in this study, we sampled the values of the unobservable physical conditions (gas density, temperature, UV field, etc.) uniformly (in logarithm) in a hypercube defined by lower and upper bounds for each of the parameters. The results of our ranking method will depend on this assumed PDF (probability density function) for the physical conditions in the ISM. We made here the choice of making the minimal assumption: knowing only reasonable lower and upper bounds on each of the parameters, the uniform distribution is the maximum entropy PDF (i.e.
the PDF that best represents our assumed state of knowledge). If more a priori knowledge becomes available, then more accurate assumptions for the PDF (in particular for the correlations between the different physical parameters) could reveal additional tracers. We note that this assumption of a uniform PDF over a maximum support (in the sense that any more accurate PDF would have almost all of its weight enclosed in this support) makes it likely that any tracer found to have a very good relationship with the ionization fraction would keep a strong relationship for more accurate PDF choices (if the relationship is strong over the full hypercube, it should with high likelihood stay strong on subregions of this hypercube). In this sense, we expect the tracers found here to remain reliable, but a more accurate PDF choice might strongly increase the performance of some tracers found here to perform poorly and thus reveal additional tracers of the ionization fraction. This argument remains qualitative, however, as pathological PDFs might be constructed that would radically change the rankings of the tracers. As a result, the precise rankings presented in this article could slightly change, but we expect the good tracers highlighted by these rankings to be reliable for more realistic PDFs of the physical conditions in GMCs. \subsection{Final recommendation} Based on the limitations discussed above (detectability and model reliability), we recommend the use of the following integrated line intensity ratios to trace the ionization fraction. \begin{itemize} \item In translucent medium conditions, we recommend the use of C$_2$H (1-0) / HCN (1-0), C$_2$H (1-0) / HNC (1-0), or C$_2$H (1-0) / CN (1-0). If sensitivity is an issue, HCN (1-0) / CF$^+$ (1-0) can sometimes perform as well as the previously listed ratios.
\item In cold dense gas conditions, we recommend the use of CF$^+$ (1-0) / DCO$^+$ (1-0), $^{13}$CO (1-0) / HCO$^+$ (1-0) or CN (1-0) / N$_2$H$^+$ (1-0) if detectability is not an issue for these species, and of $^{13}$CO (1-0) / HCO$^+$ (1-0) or C$^{18}$O (1-0) / HCO$^+$ (1-0) otherwise. \end{itemize} This list is of course not exhaustive, and other ratios can give satisfactory predictions (see Tables \ref{tab:NoiseImpactTranslucent} and \ref{tab:NoiseImpactDense}) if the species listed above are not available. In translucent gas conditions, this recommendation is based on the following points. After eliminating rarer CO isotopologues based on our discussion of mutual-shielding effects on selective photodissociation in low/moderate $A_\mathrm{V}$ regions (cf. Sect.~\ref{sec:chemical_reliability}), and eliminating ratios involving sulfur species that were found to require unreasonably low noise levels of $10^{-4}-10^{-5}$ K km s$^{-1}$, the three remaining best line intensity ratios are C$_2$H (1-0) / HCN (1-0), C$_2$H (1-0) / HNC (1-0), and C$_2$H (1-0) / CN (1-0). If noise sensitivity is critical, the tracer with the best predictive power at high noise levels is found in Sect.~\ref{sec:noise} to be HCN (1-0) / CF$^+$ (1-0), but it is only marginally better than the three previously mentioned ratios at high noise levels. In cold dense gas conditions, the three best ratios are CF$^+$ (1-0) / DCO$^+$ (1-0), $^{13}$CO (1-0) / HCO$^+$ (1-0), and CN (1-0) / N$_2$H$^+$ (1-0). If noise sensitivity is critical, we found in Sect.~\ref{sec:noise} that the ratios with the best predictive power at high noise levels are $^{13}$CO (1-0) / HCO$^+$ (1-0) and C$^{18}$O (1-0) / HCO$^+$ (1-0).
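In practice, once one of the recommended ratios is measured, the ionization fraction is estimated through a fitted calibration, such as the polynomial fits in log-log space described in the conclusions. A minimal sketch, assuming a synthetic linear log-log relation with scatter in place of the actual model grid (slope, offset, and scatter are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a model-grid relation between a recommended
# line ratio and x(e-); all numerical values are assumptions.
log_xe_grid = rng.uniform(-8.5, -6.0, 500)
log_ratio_grid = 1.2 * log_xe_grid + 9.0 + rng.normal(0.0, 0.15, 500)

# Polynomial calibration in log-log space (degree 3, in the spirit of the
# "ad hoc analytical fits" of the conclusions; the degree is arbitrary here).
coeffs = np.polyfit(log_ratio_grid, log_xe_grid, 3)

def predict_log_xe(observed_ratio):
    """Estimate log10 x(e-) from an observed integrated-intensity ratio."""
    return float(np.polyval(coeffs, np.log10(observed_ratio)))
```

A ratio of $10^{0.6}$ then maps back to $\log_{10} x(e^-) \approx -7$ in this synthetic setup, within the assumed scatter.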
\section{Conclusions} \label{sect:Conclusions} We have presented a general statistical method to find the best observable tracers of an unobservable parameter based on a grid of models spanning the range of possible values for all the unknown underlying physical parameters (e.g., gas density, temperature, depletion, etc.). Our method estimates the predictive power of each potential observable tracer by training a flexible, non-linear regression model (a Random Forest model, making no assumption on the non-linear shape of the relationship to be found) on the task of predicting the target quantity from each of the potential tracers. The fit quality on test data, measured as the $R^2$ coefficient by cross-validation and out-of-bag estimation, is used to rank the potential tracers by order of predictive power. In the context of our recent studies of the Orion B GMC \citep{Pety2017, Gratier2017, Orkisz2017, Bron2018, Orkisz2019}, we have applied this method to the important astrophysical question of tracing the ionization fraction in the neutral ISM, with the goal of being able to probe its variations across a whole GMC, from its translucent envelope to its dense cores. We considered grids of single-zone, stationary-state astrochemical models exploring wide ranges of values in gas density, temperature, external UV field, $A_\mathrm{V}$ on the line of sight, cosmic ray ionization rate, ortho-to-para ratio of H$_2$, depletion factor, and sulfur elemental abundance. For a finer exploration of the possible conditions, we considered two grids corresponding to translucent medium conditions and cold dense medium conditions respectively, based on the different types of environments found in the Orion B GMC \citep{Pety2017,Bron2018}. We considered successively column density ratios and line intensity ratios as potential tracers, focusing on species observable in the 100 GHz band of our observations of Orion B.
We find that in both cases and in both types of physical conditions, multiple ratios allow accurate predictions of the ionization fraction, with $R^2 > 0.8$ (and up to 0.96). We investigated the impact of the noise level on the predictive capability of the different ratios. After accounting for detectability and model reliability, we recommend: \begin{itemize} \item for translucent medium conditions, C$_2$H (1-0) / HCN (1-0), C$_2$H (1-0) / HNC (1-0) or C$_2$H (1-0) / CN (1-0), \item for cold dense medium conditions, CF$^+$ (1-0) / DCO$^+$ (1-0), $^{13}$CO (1-0) / HCO$^+$ (1-0) or CN (1-0) / N$_2$H$^+$ (1-0) at low enough noise level, or $^{13}$CO (1-0) / HCO$^+$ (1-0) or C$^{18}$O (1-0) / HCO$^+$ (1-0) if sensitivity is an issue. \end{itemize} In order to simplify the use of these predictors, we constructed ad hoc analytical fits (using polynomials or saturated polynomials) of the relationship of each observable tracer to the ionization fraction. In contrast to the Random Forest models, the choice of the analytical form of these fits is specific to the types of relationships observed in this specific application (different analytical forms might be necessary for other applications). We also provide analytical formulae to estimate the uncertainty on any measurement of the ionization fraction from these tracers. These tracers will be used to study the ionization fraction in the Orion B molecular cloud in a second paper \citep{Guzman2020}. The method presented here is very general and could be easily applied to finding tracers of other related (cosmic ray ionization rate, absolute electron abundance) or unrelated (gas density, temperature, OPR$_{\mathrm{H}_2}$,...) unobservable quantities. This method can also be readily extended to simultaneously use pairs (or more) of line ratios (by training RF models on the possible combinations of line ratios), which would likely further increase the quality of the prediction.
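The core of the ranking procedure summarized above can be sketched in a few lines. In the sketch below, a cross-validated binned-mean regressor stands in for the Random Forest, and two synthetic "tracers" (one with a tight relation to $x(e^-)$, one dominated by scatter from unobserved parameters; all numbers are assumptions) are ranked by their test $R^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic grid: two candidate "line ratios" (in log) as predictors of
# log x(e-); the tightness of each relation is assumed for illustration.
n = 3000
log_xe = rng.uniform(-8.5, -6.0, n)
tracers = {
    "A": 1.5 * log_xe + rng.normal(0.0, 0.1, n),   # tight relation
    "B": 0.2 * log_xe + rng.normal(0.0, 1.0, n),   # scatter-dominated
}

def cv_r2(x, y, n_bins=30, n_folds=5):
    """Cross-validated R^2 of a binned-mean regressor, a simple stand-in
    for the Random Forest models used here."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, n_folds)
    scores = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Quantile bin edges from the training fold only
        edges = np.quantile(x[train], np.linspace(0.0, 1.0, n_bins + 1))
        bins = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
        means = np.full(n_bins, y[train].mean())   # fallback for empty bins
        for b in range(n_bins):
            sel = train[bins[train] == b]
            if sel.size:
                means[b] = y[sel].mean()
        pred = means[bins[test]]
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))

# Rank candidate tracers by cross-validated predictive power
ranking = sorted(((name, cv_r2(x, log_xe)) for name, x in tracers.items()),
                 key=lambda t: -t[1])
```

The tight tracer ends up first with $R^2$ close to 1, while the scatter-dominated one scores near 0, mirroring the separation between good and poor tracers in the tables.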
\begin{acknowledgements} We thank the anonymous referee for his comments that helped improve this article. We thank the CIAS for their hospitality during the many workshops devoted to the ORION-B project. This work was supported in part by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP, co-funded by CEA and CNES. This project has received financial support from the CNRS through the MITI interdisciplinary programs. The authors also acknowledge funding by Paris Observatory through the AF \emph{Astrochimie} program. JRG thanks Spanish MICI for funding support under grant AYA2017-85111-P. \end{acknowledgements} \bibliographystyle{aa} %
\section{Introduction} Important properties of real crystals such as plasticity, melting, growth, etc., are mainly defined by defects of the crystalline structure which are called dislocations. Moreover, many bodies possess a spin structure. For example, ferromagnets are also characterized by the distribution of magnetic moments described by the unit vector field. This unit vector field may also have defects (singularities) which are called disclinations. The description of dislocations and disclinations in elastic media has been a very active field of research for more than a century because of its importance for applications (see, e.g., \cite{LanLif70,Kosevi81}). Real solids usually possess a crystalline structure and are often described by models based on this crystalline structure, especially at the quantum level. At the same time, many properties of solids can also be described by the elasticity theory in the continuous approximation. Discrete and continuous approaches complement each other, and are both needed for our understanding of nature. In this paper, we consider only the continuous approximation. In this approximation, solids without dislocations are described by the displacement vector field within the ordinary elasticity theory. The spin structure of solids without disclinations is described by the unit vector field ($n$-field) satisfying appropriate field equations. In the presence of dislocations and disclinations there is a problem: which variables are to be used? For example, real solids possess many defects, and if we want to use the continuous approximation for defect distributions, then the displacement vector field and $n$-field do not exist because they are singular at each point. The geometric theory of defects aims to resolve this problem. The idea of the geometric theory of defects is simple.
In the continuous approximation, a crystal with a spin structure is considered as an elastic medium (manifold) with a given metric and affine connection with torsion (the Riemann--Cartan geometry). As usual, elastic deformations of the medium and the distribution of the unit vector field are described by the displacement and rotational angle vector fields. The absence of defects means that the displacement and unit vector fields are smooth. If they are not continuous, then we say that the medium has defects. In general, there are two types of defects: dislocations, which are defects of the elastic medium itself (discontinuities of the displacement vector field), and disclinations, corresponding to discontinuities of the unit vector field. If defects are absent, then the geometry is trivial: curvature and torsion are zero. In the presence of defects, the geometry becomes nontrivial. Dislocations give rise to torsion, and disclinations result in nontrivial curvature. The physical meanings of torsion and curvature are the surface densities of the Burgers \cite{Burger39A,Burger39B} and Frank \cite{Frank58} vectors, respectively, \cite{KatVol92,Katana05}. The geometric theory of defects allows one to describe single defects as well as their continuous distribution. For single defects, torsion and curvature are zero everywhere except at some points, lines, or surfaces where defects are located and where they have singularities. In the case of a continuous distribution of dislocations and disclinations, torsion and curvature become nontrivial throughout the whole medium, and instead of the displacement and angular rotation fields we use the triad and $\MS\MO(3)$-connection as the independent variables. The advantage is that these variables exist even in the absence of the displacement and unit vector fields. The history of the geometric theory of defects goes back to the 1950s [8--11] \nocite{Kondo52,Nye53,BiBuSm55,Kroner58} when dislocations were related to torsion for the first time.
The review and earlier references can be found in the book \cite{Kleine08}. In the geometric approach to the theory of defects \cite{KatVol92,Katana05,KatVol99}, we discuss a model which differs from others in two respects. Firstly, we do not have the displacement and unit vector fields as {\em independent} variables because, in general, they are not continuous. Instead, the triad field and the $\MS\MO(3)$-connection are considered as the only independent variables. If defects are absent, then the triad and $\MS\MO(3)$-connection reduce to partial derivatives of the displacement and rotational angle vector fields (a pure gauge, because torsion and curvature vanish). In this case, the latter can be reconstructed. Secondly, the set of equilibrium equations is different. We proposed a purely geometric set which coincides with that of Euclidean three-dimensional gravity with torsion. The nonlinear elasticity equations and the principal chiral $\MS\MO(3)$-model for the unit vector field enter the model through the elastic and Lorentz gauge conditions \cite{Katana03,Katana04,Katana05}, which allow us to reconstruct the displacement and unit vector fields in the absence of defects in full agreement with the classical models. When a new model is proposed, one has to show how to obtain previous results within the new approach. A number of dislocations were described in the geometric theory of defects and shown to be in agreement with the elasticity theory \cite{Katana05}, which corresponds to the linear approximation. Therefore, the geometric theory of defects does not contradict experimental data in the domain where elasticity theory is valid. At the same time, the geometric theory of defects also makes different predictions, for example, for the deformation tensor near the core of a wedge dislocation. As far as we know, there is no experimental confirmation or refutation of the geometric theory of defects. So, the model is still under theoretical development.
In this paper, we consider the possibility of a physical interpretation of the 't Hooft--Polyakov monopole solution \cite{tHooft74,Polyak74} in the geometric theory of defects. The famous 't Hooft--Polyakov solution in the $\MS\MU(2)$ gauge theory interacting with a triplet of scalar fields has attracted much interest in physics and mathematics (for a review see, for example, \cite{ManSut04,Shnir05}). The solution is static and spherically symmetric. Therefore, it reduces to the minimization of a three-dimensional Euclidean energy expression, which can be regarded as a free energy expression in solid state physics. We consider the $\MS\MU(2)$-connection components as an $\MS\MO(3)$-connection because their Lie algebras coincide, the triplet of scalar fields being the source of defects. Moreover, we assume that the $\MS\MO(3)$ group acts not in the isotopic space but in the tangent space to the space manifold $\MR^3$. The metric of the space remains Euclidean. So the 't Hooft--Polyakov monopole corresponds to a Euclidean vielbein and a nontrivial $\MS\MO(3)$-connection, which give rise to a nontrivial Riemann--Cartan geometry of space. Thus, the 't Hooft--Polyakov monopole solution has a natural interpretation in solid state physics, describing an elastic medium with a continuous distribution of disclinations and dislocations. We compute the corresponding densities of the Frank and Burgers vectors. \section{Geometric theory of defects} In this section we give a short review of the geometric theory of defects and introduce the basic geometric notions: the triad field and the $\MS\MO(3)$-connection. More details can be found in \cite{Katana05}. We consider a three-dimensional continuous medium described by a topologically trivial Riemann--Cartan manifold. We use the triad field $e_\mu{}^i$ and the $\MS\MO(3)$-connection $\om_\mu{}^{ij}=-\om_\mu{}^{ji}$, where Greek letters $\mu=1,2,3$ and Latin ones $i,j=1,2,3$ denote world and tangent indices, respectively, as the basic independent variables.
We assume that the metric $g_{\mu\nu}:=e_\mu{}^i e_\nu{}^j\dl_{ij}=\dl_{\mu\nu}$ is the ordinary flat Euclidean metric, but the connection is nontrivial and may have singularities at some points, lines, or surfaces. The simplest and most widespread examples of linear dislocations are shown in Fig.~\ref{fdislo} (see, e.g., \cite{LanLif70,Kosevi81}). They are produced as follows. We cut the medium along the half-plane $x^2=0$, $x^1>0$, move the upper part of the medium located over the cut $x^2>0$, $x^1>0$ by the vector $\Bb$ towards the dislocation axis $x^3$, and glue the cutting surfaces. The vector $\Bb$ is called the Burgers vector. In the general case, the Burgers vector may not be constant on the cut. For the edge dislocation, it varies from zero to some constant value $\Bb$ as one moves away from the dislocation axis. After the gluing, the medium relaxes to an equilibrium state called the edge dislocation, see Fig.~\ref{fdislo}\textit{a}. If the Burgers vector is parallel to the dislocation line, the defect is called a screw dislocation (Fig.~\ref{fdislo}\textit{b}). \begin{figure}[ht] \hfill\includegraphics[width=.8\textwidth]{edge_screw_dislocations} \hfill {} \\ \centering \caption{\label{fdislo} Straight linear dislocations. (\textit{a}) The edge dislocation. The Burgers vector $\Bb$ is perpendicular to the dislocation line. (\textit{b}) The screw dislocation. The Burgers vector $\Bb$ is parallel to the dislocation line.} \end{figure} From the topological standpoint, the medium containing several dislocations or even an infinite number of them is still the Euclidean space $\MR^3$. In contrast to the case of elastic deformations, the displacement vector in the presence of dislocations is no longer a smooth function because of the presence of cutting surfaces where it jumps. The main idea of the geometric approach amounts to the following.
To describe single dislocations in the framework of elasticity theory, we must solve equations for the displacement vector with some boundary conditions on the cuts. This is possible for a small number of dislocations. But, with an increasing number of dislocations, the boundary conditions become so complicated that the solution of the problem becomes unrealistic. Besides, one and the same dislocation can be created by different cuts, which leads to an ambiguity in the displacement vector field. Another shortcoming of this approach is that it cannot be applied to the description of a continuous distribution of dislocations: the displacement vector field does not exist at all in this case, since it would have to be discontinuous at every point. In the geometric approach, we consider the triad field instead of the displacement vector field, which is introduced as follows. Let a point of the medium have Cartesian coordinates $y^i$ in the ground equilibrium state. After an elastic deformation, this point has the coordinates \begin{equation} \label{eeldef} y^i\mapsto x^i(y)=y^i+u^i(x), \end{equation} where $u^i(x)$ is the displacement vector field. We consider its components as functions of the final point position $x$. In the general case with dislocations, we do not have a preferred Cartesian coordinate system in equilibrium because there is no symmetry. Therefore, we consider arbitrary global coordinates $x^\mu$, $\mu=1,2,3$, in $\MR^3$. We use Greek letters for coordinates allowing arbitrary coordinate changes. Then the Burgers vector for a linear dislocation can be expressed as the integral of the displacement vector \begin{equation} \label{eBurge} \oint_Cdx^\mu\pl_\mu u^i(x)=-\oint_Cdx^\mu\pl_\mu y^i(x)=-b^i, \end{equation} where $C$ is a closed contour surrounding the dislocation axis. This integral is invariant under arbitrary coordinate transformations $x^\mu\mapsto x^{\mu'}(x)$ and covariant under global $\MS\MO(3)$-rotations of $y^i$.
Here, components of the displacement vector field $u^i(x)$ are considered with respect to the orthonormal basis in the tangent space, $u=u^i e_i$. If the components of the displacement vector field are considered with respect to the coordinate basis $u=u^\mu\pl_\mu$, the invariance of the integral (\ref{eBurge}) under general coordinate changes is violated. In the geometric approach, we introduce a new independent variable, the triad, instead of the partial derivatives $\pl_\mu u^i$: \begin{equation} \label{edevid} e_\mu{}^i(x):=\begin{cases} \pl_\mu y^i, &\text{outside the cut,}\\ \lim\pl_\mu y^i, &\text{on the cut.}\end{cases} \end{equation} The triad is a smooth function on the cut by construction. We note that if the vielbein were simply defined as the partial derivatives $\pl_\mu y^i$, then it would have a $\dl$-function singularity on the cut because the functions $y^i(x)$ have a jump. The Burgers vector can be expressed through the integral over a surface $S$ having the contour $C$ as its boundary: \begin{equation} \label{eBurg2} \oint_Cdx^\mu e_\mu{}^i=\int\!\!\int_Sdx^\mu\wedge dx^\nu (\pl_\mu e_\nu{}^i-\pl_\nu e_\mu{}^i)=b^i, \end{equation} where $dx^\mu\wedge dx^\nu$ is the surface element. As a consequence of the definition of the vielbein in (\ref{edevid}), the integrand is equal to zero everywhere except at the dislocation axis. For the edge dislocation with a constant Burgers vector, the integrand has a $\dl$-function singularity at the origin. The criterion for the presence of a dislocation is the violation of the integrability conditions for the system of equations $\pl_\mu y^i=e_\mu{}^i$: \begin{equation} \label{eintco} \pl_\mu e_\nu{}^i-\pl_\nu e_\mu{}^i\ne0. \end{equation} If dislocations are absent, then the functions $y^i(x)$ exist and define the transformation to a Cartesian coordinate frame. In the geometric theory of defects, the field $e_\mu{}^i$ is identified with the triad.
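As a concrete check of (\ref{eBurg2}), the contour integral of the triad around a screw dislocation can be evaluated numerically. The sketch below uses the standard elasticity ansatz $y^3 = x^3 - (b/2\pi)\,\varphi$ for the screw dislocation (the sign is a convention) and recovers the Burgers vector magnitude:

```python
import numpy as np

b = 2.5  # Burgers vector magnitude (illustrative value)

def triad_3(x, y):
    """Components e_mu^3 = d y^3 / d x^mu for a screw dislocation with
    y^3 = x^3 - (b / 2 pi) * phi, phi = atan2(y, x)."""
    r2 = x ** 2 + y ** 2
    # grad phi = (-y / r^2, x / r^2)
    return (b / (2.0 * np.pi) * y / r2, -b / (2.0 * np.pi) * x / r2, 1.0)

# Contour integral of e_mu^3 dx^mu around the dislocation axis (unit circle)
t = np.linspace(0.0, 2.0 * np.pi, 4001)
cx, cy = np.cos(t), np.sin(t)
e1, e2, _ = triad_3(cx, cy)
integrand = e1 * (-np.sin(t)) + e2 * np.cos(t)   # e_mu^3 * dx^mu/dt
b_measured = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
# |b_measured| recovers b; its sign depends on contour orientation
```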
Next, we compare the integrand in (\ref{eBurg2}) with the expression for the torsion in Cartan variables \begin{equation} \label{ubbxvs} T_{\mu\nu}{}^i:=\pl_\mu e_\nu{}^i-\pl_\nu e_\mu{}^i-e_\mu{}^j\om_{\nu j}{}^i +e_\nu{}^j\om_{\mu j}{}^i. \end{equation} They differ only by the terms containing the $\MS\MO(3)$-connection $\om_{\mu j}{}^i$. This motivates the following postulate. In the geometric theory of defects, the Burgers vector corresponding to a surface $S$ is defined by the integral of the torsion tensor: \begin{equation*} b^i:=\int\!\!\int_S dx^\mu\wedge dx^\nu T_{\mu\nu}{}^i. \end{equation*} This definition is invariant with respect to general coordinate transformations of $x^\mu$ and covariant with respect to global rotations. Thus, the torsion tensor has a straightforward physical interpretation: it is equal to the surface density of the Burgers vector. If the curvature tensor for the $\MS\MO(3)$-connection \begin{equation} \label{ubncbd} R_{\mu\nu}{}^{ij}:=\pl_\mu\om_\nu{}^{ij}-\pl_\nu\om_\mu{}^{ij} -\om_\mu{}^{ik}\om_{\nu k}{}^j+\om_\nu{}^{ik}\om_{\mu k}{}^j, \end{equation} is zero, then the connection is locally trivial, and there exists an $\MS\MO(3)$ rotation such that $\om_{\mu i}{}^j=0$. In this case, we return to expression (\ref{eBurg2}). Next, we give a physical interpretation of the $\MS\MO(3)$-connection entering the expression for the torsion (\ref{ubbxvs}). To this end, we consider more general solids possessing a spin structure, for example, ferromagnets or liquid crystals. The spin structure is the unit vector field $n^i(x)$ $(n^in_i=1)$. It can be described as follows. We fix some direction $n_0^i$ in the medium. Then the field $n^i(x)$ at a point $x$ can be uniquely defined by the angular rotation field $\theta^{ij}(x)=-\theta^{ji}(x) =\frac12\ve^{ijk} \theta_k$, where $\ve^{ijk}$ is the totally antisymmetric tensor and $\theta_k$ is a covector directed along the rotation axis, its length being the rotation angle.
Here and in what follows, Latin tangent indices are raised and lowered with the help of the flat Euclidean metric $\dl_{ij}$. So, \begin{equation} \label{ubndhy} n^i=n_0^j S_j{}^i(\theta), \end{equation} where $S_j{}^i\in\MS\MO(3)$ is the rotation matrix corresponding to $\theta^{ij}$ and parameterized as \begin{equation} \label{elsogr} S_i{}^j=(e^{(\theta\ve)})_i{}^j=\cos\theta\,\dl_i^j +\frac{(\theta\ve)_i{}^j}\theta\sin\theta +\frac{\theta_i\theta^j}{\theta^2}(1-\cos\theta)\qquad \in\MS\MO(3)\, , \end{equation} where $(\theta\ve)_i{}^j:=\theta_k\ve^k{}_i{}^j$ and $\theta:=\sqrt{\theta^i\theta_i}$. If the unit vector field is continuous, then there are no disclinations. Disclinations arise when the angular rotation field has discontinuities. The simplest examples of linear disclinations are shown in Fig.~\ref{fdiscl}, where the discontinuity of the angular rotation field occurs on a half-plane cut from the $x^3$ axis to infinity, and the vector field $n$ lies in the perpendicular plane $(x^1,x^2)$. \begin{figure} \hfill\includegraphics[width=.75\textwidth]{2p_4p_disclination} \hfill {} \centering\caption{Distribution of the unit vector field in the $x:=x^1,y:=x^2$ plane for straight linear disclinations parallel to the $x^3$ axis, for $|\Theta|=2\pi$ (\textit{a}) and $|\Theta|=4\pi$ (\textit{b}).} \label{fdiscl} \end{figure} A linear disclination is characterized by the Frank vector \begin{equation} \label{etheta} \Th_i:=\frac12\ve_{ijk}\Th^{jk}, \end{equation} where \begin{equation} \label{eomega} \Th^{ij}:=\oint_Cdx^\mu\pl_\mu\theta^{ij}, \end{equation} and the integral is taken along a closed contour $C$ surrounding the disclination axis. The length of the Frank vector is equal to the total angle of rotation of the field $n^i$ as it goes around the disclination. For linear disclinations, it must be a multiple of $2\pi$.
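The parameterization (\ref{elsogr}) can be verified numerically. The sketch below builds $S$ from a rotation covector and checks that it is a proper rotation fixing the rotation axis; the overall sign of the cross-product matrix is a convention that only fixes the sense of rotation.

```python
import numpy as np

def rotation(theta_vec):
    """Rotation matrix from the parameterization of S_i^j in the text:
    S = cos(t) I + (sin(t)/t) (theta eps) + ((1 - cos t)/t^2) theta theta^T,
    with t = |theta| (Rodrigues formula)."""
    t = np.linalg.norm(theta_vec)
    tx, ty, tz = theta_vec
    eps = np.array([[0.0, -tz, ty],
                    [tz, 0.0, -tx],
                    [-ty, tx, 0.0]])
    return (np.cos(t) * np.eye(3) + np.sin(t) / t * eps
            + (1.0 - np.cos(t)) / t ** 2 * np.outer(theta_vec, theta_vec))

S = rotation(np.array([0.0, 0.0, np.pi / 2]))
# S is orthogonal with det 1, leaves the rotation axis fixed, and turns a
# vector in the perpendicular plane by the angle |theta| = pi / 2.
```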
In the presence of disclinations, the rotational angle field $\theta^{ij}(x)$ is no longer continuous, and we must make some cuts for a given distribution of disclinations and impose appropriate boundary conditions in order to define $\theta^{ij}(x)$. In the geometric theory of defects, instead of the rotational angle field, we introduce the $\MS\MO(3)$-connection \begin{equation} \label{edesoc} \om_\mu{}^{ij}:=\begin{cases}\pl_\mu\theta^{ij}, &\text{outside the cut,}\\ \lim\pl_\mu\theta^{ij}, &\text{on the cut,} \end{cases} \end{equation} in a way similar to the introduction of the triad field. Of course, we assume that the limits on both sides of the cut exist and are equal. So, by definition, the $\MS\MO(3)$-connection is less singular than the rotational angle field. Then the Frank vector corresponding to a surface $S$ is given by the integral of the curvature \begin{equation} \label{ufrvec} \Om^{ij}:=\iint_S dx^\mu\wedge dx^\nu R_{\mu\nu}{}^{ij}. \end{equation} If we have a straight linear disclination with rotational symmetry, and the vector $n$ rotates in the perpendicular plane, then the $\MS\MO(3)$ group reduces to the abelian $\MS\MO(2)$ group, the nonlinear terms in the curvature (\ref{ubncbd}) disappear, and we return to the previous expression (\ref{eomega}) by the Stokes theorem. The previous discussion refers to isolated disclinations. If there is a continuous distribution of disclinations, the curvature differs from zero everywhere, and the rotational angle field $\theta^{ij}$ does not exist. Disclinations are said to be absent if and only if the curvature of the $\MS\MO(3)$-connection vanishes, $R_{\mu\nu i}{}^j=0$. In this manner, the geometric theory of defects describes single defects as well as their continuous distributions, for which the notion of a disclination is replaced by the notion of curvature.
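The contour-integral definition (\ref{eomega}) is easy to illustrate numerically in the abelian case. For a straight disclination along the $x^3$ axis with rotation angle $\theta=m\,\mathrm{atan2}(x^2,x^1)$, integrating $\pl_\mu\theta$ along a circle around the axis yields a Frank angle of $2\pi m$, a multiple of $2\pi$, as for the disclinations in Fig.~\ref{fdiscl}. A sketch (winding numbers, contour radius, and discretization are illustrative choices):

```python
import math

def frank_angle(m, N=100000, R=2.0):
    """Line integral of grad(theta) around a circle of radius R enclosing
    a straight disclination of winding number m on the x3 axis.
    theta(x, y) = m * atan2(y, x), so grad(theta) = m * (-y, x) / r^2."""
    total = 0.0
    for k in range(N):
        phi0 = 2.0 * math.pi * k / N
        phi1 = 2.0 * math.pi * (k + 1) / N
        x, y = R * math.cos(phi0), R * math.sin(phi0)
        dx, dy = R * math.cos(phi1) - x, R * math.sin(phi1) - y
        r2 = x * x + y * y
        gx, gy = -m * y / r2, m * x / r2
        total += gx * dx + gy * dy
    return total

# |Theta| = 2*pi and 4*pi, matching the two disclinations of the figure
assert abs(frank_angle(1) - 2.0 * math.pi) < 1e-2
assert abs(frank_angle(2) - 4.0 * math.pi) < 1e-2
```

The integral is insensitive to the choice of contour, as long as it winds once around the defect axis, which is the content of the Stokes-theorem remark above.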
\section{'t Hooft--Polyakov monopole} Let us consider three-dimensional Euclidean space $\MR^3$ with Cartesian coordinates $x^\mu$ and Euclidean metric $\dl_{\mu\nu}$, $\mu,\nu=1,2,3$. The spherically symmetric $\MS\MU(2)$ gauge fields $A_\mu{}^i$, $i=1,2,3$, interacting with a triplet of scalar fields $\vf^i$ in the adjoint representation minimize the three-dimensional energy \cite{ManSut04,Shnir05} \begin{equation} \label{ubsghj} \CE:=\int \!d^3x\left(\frac14F^{\mu\nu i}F_{\mu\nu i} +\frac12\nb^\mu\vf^i\nb_\mu\vf_i+\frac14\lm\big(\vf^2-a^2\big)^2\right), \end{equation} where indices are raised and lowered by the Euclidean metrics $\dl_{\mu\nu}$ and $\dl_{ij}$, \begin{equation} \label{uvvxfd} \begin{split} F_{\mu\nu}{}^i:=&\pl_\mu A_\nu{}^i-\pl_\nu A_\mu{}^i +A_\mu{}^jA_\nu{}^k\ve_{jk}{}^i, \\ \nb_\mu\vf^i:=&\pl_\mu\vf^i+A_\mu{}^j\vf^k\ve_{jk}{}^i \end{split} \end{equation} are the curvature tensor components for the $\MS\MU(2)$-connection and the covariant derivative of the scalar fields, $\lm>0$ and $a>0$ are coupling constants, $\ve_{ijk}$ is the totally antisymmetric tensor, $\ve_{123}:=1$, and $\vf^2:=\vf^i\vf_i$. The spherically symmetric ansatz is \begin{equation} \label{uncbgf} A_\mu{}^i=\frac{\ve_\mu{}^{ij}x_j(K-1)}{r^2},\qquad\vf^i=\frac{x^i H}{r^2}, \end{equation} where $K(r)$ and $H(r)$ are dimensionless functions of the radius $r:=\sqrt{x^2}$. The Euler--Lagrange equations for functional (\ref{ubsghj}) in the spherically symmetric case reduce to \begin{equation} \label{ubvxgd} \begin{split} r^2K''=&K\big(K^2+H^2-1\big), \\ r^2H''=&2HK^2+\lm\left(H^2-a^2r^2\right)H. \end{split} \end{equation} At present, only one exact analytic solution of this system of equations is known, for $\lm=0$: \begin{equation} \label{uvxcse} K=\frac{ar}{\sh(ar)},\qquad H=\frac{ar}{\tanh(ar)}-1, \end{equation} which is called the Bogomol'nyi--Prasad--Sommerfield solution \cite{PraSom75,Bogomo76}. It is easily checked that this solution has finite energy.
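The claim that (\ref{uvxcse}) solves (\ref{ubvxgd}) for $\lm=0$ is easy to check numerically, e.g.\ by comparing finite-difference second derivatives of $K$ and $H$ with the right-hand sides of the equations. A minimal sketch (the value of $a$ and the sample radii are arbitrary):

```python
import math

a = 1.3  # arbitrary value of the coupling constant

def K(r):
    x = a * r
    return x / math.sinh(x)

def H(r):
    x = a * r
    return x / math.tanh(x) - 1.0

def d2(f, r, h=1e-4):
    # central finite difference for the second derivative
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2

for r in (0.5, 1.0, 2.0, 3.5):
    # r^2 K'' = K (K^2 + H^2 - 1)
    assert abs(r**2 * d2(K, r) - K(r) * (K(r)**2 + H(r)**2 - 1.0)) < 1e-5
    # r^2 H'' = 2 H K^2      (the lambda term vanishes for lambda = 0)
    assert abs(r**2 * d2(H, r) - 2.0 * H(r) * K(r)**2) < 1e-5
```

Both residuals vanish to within the finite-difference accuracy at every sampled radius.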
The Lie algebra $\Gs\Gu(2)$ is isomorphic to $\Gs\Go(3)$, and we can consider energy (\ref{ubsghj}) as the three-dimensional Euclidean functional for an $\MS\MO(3)$-connection interacting with the triplet of scalar fields $\vf^i$ in the fundamental representation. We assume that this is the expression for the free energy describing a static distribution of disclinations and dislocations in elastic media with defects, the triplet of scalar fields being the source of defects. The Euclidean metric means that elastic stresses are absent in the medium. The Cartan variables for monopole solutions are \begin{equation} \label{ubbcvl} e_\mu{}^i=\dl_\mu^i,\qquad\om_\mu{}^{ij}=A_\mu{}^k\ve_k{}^{ij} =(\dl_\mu^jx^i-\dl_\mu^ix^j)\frac{K-1}{r^2}, \end{equation} where we use the spherically symmetric gauge field from ansatz (\ref{uncbgf}). The curvature and torsion are expressed through the Cartan variables in the usual way by Eqs.~(\ref{ubncbd}) and (\ref{ubbxvs}). In the case under consideration, a simple calculation yields the following expressions for the curvature and torsion: \begin{align} \label{ubbcvh} R_{\mu\nu}{}^k:=\frac12R_{\mu\nu}{}^{ij}\ve_{ij}{}^k=F_{\mu\nu}{}^k =&\ve_{\mu\nu}{}^k\frac{K'}{r}-\frac{\ve_{\mu\nu}{}^jx_jx^k}{r^3} \left(K'-\frac{K^2-1}r\right), \\ \label{unbgtr} T_{\mu\nu}{}^k=&\left(\dl_\mu^kx_\nu-\dl_\nu^kx_\mu\right)\frac{K-1}{r^2}. \end{align} In the geometric theory of defects, curvature (\ref{ubbcvh}) and torsion (\ref{unbgtr}) have the physical meaning of surface densities of the Frank and Burgers vectors, respectively. That is, they are equal to the $k$-th components of the respective vectors on the surface element $dx^\mu\wedge dx^\nu$.
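Expression (\ref{ubbcvh}) for the curvature can be verified directly by differentiating the gauge field of ansatz (\ref{uncbgf}). The sketch below compares a finite-difference evaluation of $F_{\mu\nu}{}^k$ with the closed-form right-hand side at a generic point, using the Bogomol'nyi--Prasad--Sommerfield profile $K$ (the choice of point and of $a$ is arbitrary):

```python
import math
import numpy as np

a = 1.1
K  = lambda r: a * r / math.sinh(a * r)
Kp = lambda r: a / math.sinh(a * r) - a**2 * r * math.cosh(a * r) / math.sinh(a * r)**2

# totally antisymmetric tensor, eps_{123} = 1
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def A(x):
    # A_mu^i = eps_{mu i j} x_j (K - 1)/r^2
    r = np.linalg.norm(x)
    return np.einsum('mij,j->mi', eps, x) * (K(r) - 1.0) / r**2

def F_numeric(x, h=1e-5):
    # F_{mu nu}^i = d_mu A_nu^i - d_nu A_mu^i + eps_{jk}^i A_mu^j A_nu^k
    dA = np.zeros((3, 3, 3))  # dA[mu, nu, i] = d_mu A_nu^i
    for mu in range(3):
        e = np.zeros(3); e[mu] = h
        dA[mu] = (A(x + e) - A(x - e)) / (2.0 * h)
    Ax = A(x)
    quad = np.einsum('jki,mj,nk->mni', eps, Ax, Ax)
    return dA - np.transpose(dA, (1, 0, 2)) + quad

def F_claim(x):
    # closed-form curvature of the text, indexed as [mu, nu, k]
    r = np.linalg.norm(x)
    c = Kp(r) - (K(r)**2 - 1.0) / r
    return eps * Kp(r) / r - np.einsum('mnj,j,k->mnk', eps, x, x) / r**3 * c

x = np.array([0.4, -0.8, 1.3])
assert np.allclose(F_numeric(x), F_claim(x), atol=1e-6)
```

The same kind of check for the torsion is immediate, since with $e_\mu{}^i=\dl_\mu^i$ the torsion reduces to the antisymmetrized connection.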
If $s^\mu$ is the normal to the surface element, then the densities of the Frank and Burgers vectors are: \begin{align} \label{uvvxgd} f_\mu{}^i:=&\frac12\ve_\mu{}^{\nu\rho}R_{\nu\rho}{}^i =\frac1{3r}\dl_\mu^i\left(2K'+\frac{K^2-1}r\right)-\frac1{r} \left(\hat x_\mu\hat x^i-\frac13\dl_\mu^i\right)\left(K'-\frac{K^2-1}r\right), \\ \label{ubfgry} b_\mu{}^i:=&\frac12\ve_\mu{}^{\nu\rho}T_{\nu\rho}{}^i =\ve_\mu{}^{ij}\hat x_j\frac{K-1}{r}, \end{align} where $\hat x^\mu:=x^\mu/r$ and the tensor $f_\mu{}^i$ is decomposed into irreducible components. For the Bogomol'nyi--Prasad--Sommerfield solution, the functions $K(r)$ and $H(r)$ are given in Eq.\ (\ref{uvxcse}). They have the following asymptotics: \begin{equation} \label{unnvhf} \begin{aligned} K\big|_{r\to0}\approx & 1-\frac{(ar)^2}6+\frac{7(ar)^4}{360}, & \qquad K\big|_{r\to\infty}\approx & 2ar\ex^{-ar}\to0, \\ H\big|_{r\to0}\approx & \frac{(ar)^2}3-\frac{(ar)^4}{45}, & H\big|_{r\to\infty}\approx & ar-1\to\infty. \end{aligned} \end{equation} The corresponding asymptotics of the Frank and Burgers vector densities are \begin{equation} \label{uvnfuy} \begin{split} f_\mu{}^i\big|_{r\to0}\approx&-\frac13\dl_\mu^i\left(a^2-\frac{7a^4r^2}{30} \right)-\frac1{90}x_\mu x^ia^4\to-\frac13\dl_\mu^ia^2, \\ b_\mu{}^i\big|_{r\to0}\approx & -\frac16\ve_\mu{}^{ij}x_j\left(a^2- \frac{7a^4r^2}{60}\right)\to-\frac16\ve_\mu{}^{ij}x_ja^2, \\ \vf^i\big|_{r\to0}\approx & \frac13x^i\left(a^2-\frac{a^4r^2}{15}\right) \to\frac13x^ia^2, \\ f_\mu{}^i\big|_{r\to\infty}\approx & -\frac{x_\mu x^i}{r^4}\to0, \\ b_\mu{}^i\big|_{r\to\infty}\approx & -\ve_\mu{}^{ij}x_j\frac1{r^2}\to0, \\ \vf^i\big|_{r\to\infty}\approx & \frac{x^i}r\left(a-\frac1{r}\right) \to\frac{x^i}ra. \end{split} \end{equation} This implies, in particular, that the total energy (\ref{ubsghj}) is finite. \section{Conclusion} The geometric theory of defects aims at describing dislocations and disclinations in the continuum approximation.
It is well suited for the description of single defects as well as of their continuous distributions. In the present paper, we consider media with a Euclidean metric but a nontrivial $\MS\MO(3)$-connection. The 't Hooft--Polyakov monopole is a static, spherically symmetric solution of the $\MS\MU(2)$ Yang--Mills theory with a triplet of scalar fields. The isomorphism of the $\Gs\Gu(2)$ and $\Gs\Go(3)$ Lie algebras implies that the 't Hooft--Polyakov monopole may have a new physical interpretation in solid state physics. In contrast to the original model, the $\MS\MO(3)$ group now acts not in the isotopic space but in the tangent space, giving rise to nontrivial torsion and curvature. In the geometric theory of defects, these geometrical notions have a physical interpretation as surface densities of the Burgers and Frank vectors, respectively. These densities are computed explicitly for the Bogomol'nyi--Prasad--Sommerfield solution. We do not know what kind of medium should be chosen for experimental observations, or what kind of experiment could be performed to confirm or disprove the geometric theory of defects, but the mere existence of such a possibility seems interesting.
\chapter{Abbreviations} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{Abbreviations}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{Abbreviations}} \fancyfoot[LE,RO]{JFMJ} \begin{flushleft} \vspace{-20mm} \renewcommand{\baselinestretch}{1.5} \normalsize \hspace{20mm} \begin{tabular}{ll} 5G & Fifth Generation \\ 6G & Sixth Generation \\ ABCS & Ambient Backscatter Communication System \\ AF & Amplify-and-Forward \\ AP & Access Point \\ AS & Antenna Switching\\ AWGN & Additive White Gaussian Noise \\ BS & Base Station \\ CRN & Cognitive Radio Network \\ CSI & Channel State Information \\ D.C. & Difference of Two Convex Functions \\ D2D & Device-to-Device \\ DC & Direct Current \\ DF & Decode-and-Forward \\ DL & Downlink \\ EE & Energy Efficiency \\ EH & Energy Harvesting\\ GI & Guard Interval\\ ICT & Information and Communication Technology\\ ID & Information Decoding \\ IFFT & Inverse Fast Fourier Transform \\ IoT & Internet of Things \\ ISI & Inter-Symbol Interference\\ KKT & Karush-Kuhn-Tucker\\ LoS & Line of Sight \\ LTE & Long Term Evolution\\ MEC & Mobile Edge Computing \\ \end{tabular} \end{flushleft} \newpage \begin{flushleft} \renewcommand{\baselinestretch}{1.5} \normalsize \hspace{20mm} \begin{tabular}{ll} MILP & Mixed Integer Linear Programming \\ MIMO & Multiple-Input Multiple-Output \\ MINLP & Mixed Integer Non-Linear Programming \\ MISO & Multiple-Input Single-Output \\ MM & Majorization Minimization \\ NOMA & Non-Orthogonal Multiple Access \\ OFDM & Orthogonal Frequency Division Multiplexing \\ OFDMA & Orthogonal Frequency Division Multiple Access \\ PS & Power Splitting \\ QoS & Quality of Service\\ RF & Radio Frequency \\ RFID & RF Identification \\ SBS & Small Base Station \\ SE & Spectral Efficiency \\ SISO & Single-Input Single-Output \\ SNIR &
Signal-to-Interference plus Noise Ratio \\ SNR & Signal-to-Noise Ratio \\ SWIPT & Simultaneous Wireless Information and Power Transfer \\ SDMA & Spatial Division Multiple Access \\ TDMA & Time Division Multiple Access \\ TS & Time Switching\\ UAV & Unmanned Aerial Vehicle \\ UE & User Equipment \\ UL & Uplink \\ WDT & Wireless Data Transfer \\ WPBC & Wirelessly Powered Backscatter Communication \\ WPCN & Wirelessly Powered Communication Network \\ WPT & Wireless Power Transfer \\ WSN & Wireless Sensor Network \\ \end{tabular} \end{flushleft} \chapter{Introduction}\vspace{5mm} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{\leftmark}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{\leftmark}} \fancyfoot[LE,RO]{JFMJ} \label{chap:intro} \vspace{15mm} Wireless communication is a broad and dynamic field that is enjoying the rapid development of new technologies needed to cope with the massive growth in the number of wireless communication devices and many practical applications. In the past few decades, wireless networks and communication devices have become an indispensable part of modern life. The next generation of wireless networks (called “fifth generation” (5G) by the standardization community) is designed to communicate and access information more efficiently. 5G is being deployed to provide high data-rates for mobile users, massive internet of things (IoT) applications, device-to-device (D2D) communications, and low-latency, real-time tactile communications with high availability and immediate interaction. High data-rate access is extremely important, but the huge amount of power consumed by modern communication applications and networks is a major factor in global warming. This has inspired the notion of green and sustainable radio communication.
New wireless ecosystems and data networks supporting higher system throughput with greater energy efficiency are needed. At the same time, they must provide a variety of services such as wireless power transfer, mobile node positioning, cooperative sensing of the surrounding environment, and distributed processing of wireless audio and video signals. In this regard, a series of procedures, specifications, requirements, and constraints concerning the delivery of each of these services is accomplished through gradual design. At best, this leads to an ad-hoc treatment of just one of the many services, because the lack of structure causes the wireless infrastructure to be poorly managed. In contrast, a single shared infrastructure provides plenty of room to holistically analyze and more systematically design the complete wireless ecosystem and optimize a multi-service wireless network. This systematic approach to multi-service wireless networks is called “X-service design”. In it, all the services and (computational and radio) resources are optimized adaptively and controlled cooperatively. Today's cellular networks, with approximately 6 million macro-cells worldwide, consume a peak rate of almost 12 billion watts – the majority of the world’s wireless information and communication technology (ICT) power consumption \cite{SWIPT_New_Paradigm_for_Green_Communications}. How can we reduce this enormous consumption of power? By remembering Nikola Tesla's late-nineteenth-century dream of a “wirelessly powered world”. Radio signals convey power and can potentially also be used to deliver energy to remote devices. Tesla's seemingly far-fetched idea has recently triggered interest in wireless power transfer (WPT), which, when coupled with information transfer, is referred to as simultaneous wireless information and power transfer (SWIPT)~\cite{4595260}.
WPT can be a by-product of wireless data transfer (WDT) networks in which devices capture ambient power, and thus contribute to minimizing a network's overall power consumption using green energy. This partially answers the urgent question above, although it is far from an ideal solution. If a certain quality of power transfer must be secured, base stations (BSs) can play an active role in power delivery and reserve part or all of their resources for this specific objective of reducing a network's overall power consumption. This is one example of positioning services \cite{6477849}. Let's think for a moment of a hypothetical meeting of the minds between Claude Shannon, the father of information theory, and Nikola Tesla. Tesla tried to build a circuit to deliver power to a load wirelessly; Shannon wanted to use such a circuit just for sending information \cite{5513714}. Path loss, energy harvester sensitivities that require a significant signal level, and the limits of radio transmit power mean that WPT can be effective only over distances like those found in ultra-dense networks \cite{7476821}. Thus, network densification could be a point of convergence for positioning, WPT, and WDT networks – with an ultimate goal that could be described as “zero RF power” SWIPT-enabled networks. This is where Shannon meets Tesla. \section{The Architecture of an RF Energy Harvesting Network} Wireless communication systems equipped with energy harvesting (EH) receivers have increasingly been attracting attention \cite{Proceeding}. A radio frequency (RF) EH device has a sustainable power supply from a radio environment that provides harvestable energy from RF signals for information processing and transmission. A practical example of this is wireless sensor networks (WSNs), in which ambient energy is converted to electrical energy via an EH device – not only to enable a long period of operation in WSNs but also as an alternative to replacing the battery~\cite{Ambient_Harvest}.
However, traditional EH devices may not be appropriate for many applications due to their complex mechanical constraints such as form factor and cost, and they may not always be available in indoor environments. Moreover, conventional EH approaches usually depend on renewable energy sources like solar, tide, wind, thermal, geothermal, and vibrations, which are usually unpredictable and hard to control~\cite{K_S}. Hence, proactive WPT as an EH method that only needs an RF EH circuit and has low cost and small form factor should be considered when studying how to jointly design WPT and WDT in a SWIPT-enabled network~\cite{Fundamental,23}. In fact, WPT presents a viable solution for facilitating sustainable communication networks serving energy-limited communication devices in which wireless devices can communicate through electromagnetic waves in the RF band. In RF energy harvesting, the range from 3~kHz to as high as 300~GHz is designated for radio signals to carry energy as electromagnetic radiation. Thus, by recalling that RF signals carry both information and energy, RF energy radiated by the transmitter(s) can be recycled at the receivers to prolong the network's lifetime~\cite{1}. In order to be able to harvest energy from an RF source, a general network architecture equipped with harvesting modules must be present. Figure (\ref{fig:1}), which was adapted directly from \cite{S_Harvest}, illustrates the block diagram of a network equipped with an RF EH device. As can be seen in figure (\ref{fig:1.1a}), the application module provides the data to be processed by a low-power microcontroller, while the low-power transceiver is employed to either transmit the processed information or to receive data for further processing. The next major component in figure (\ref{fig:1.1a}) is the RF energy harvester, which collects RF signals and converts them into electricity.
The converted signal then goes to a power management module, which either stores the electricity obtained from the RF energy harvester in an energy storage device, such as a rechargeable battery, thus helping the users to save excess energy for future use (\textit{harvest-store-use} mode), or uses it directly to transmit information without saving the energy (\textit{harvest-use} mode) \cite{Modeling}. \begin{figure}[!t] \vspace*{1mm} \centering \begin{subfigure}{0.75\textwidth} \hspace*{-1cm} \includegraphics[width=14.5cm,trim=4 4 4 4,clip]{figures/chap1/fig1-1-F1.pdf} \caption{General network architecture for energy harvesting} \label{fig:1.1a} \end{subfigure} \begin{subfigure}{0.75\textwidth} \hspace*{-2cm} \vspace*{2mm} \includegraphics[width=14.5cm,trim=4 4 4 4,clip]{figures/chap1/fig1-2-F2.pdf} \caption{RF energy harvester} \label{fig:1.1b} \end{subfigure} \caption{The block diagram of an energy harvesting device [10]. } \label{fig:1} \end{figure} Figure (\ref{fig:1.1b}) illustrates an RF harvester device with input from an RF antenna, an impedance matching circuit, a voltage multiplier, and a capacitor to create the output. It is worth mentioning that an RF energy harvester typically operates over a range of frequencies: The RF antenna that provides input to an RF EH unit can be designed to work on either single or multiple frequency bands, facilitating energy harvesting from single or multiple sources at the same time. To maximize the power transferred from the antenna to the voltage multiplier, impedance matching, in the form of a resonator circuit operating at the design frequency, is utilized.
Figure~(\ref{fig:1.1b}) shows the diodes of the rectifying circuit, which are the main component of the voltage multiplier that converts RF signals into direct current (DC) voltage levels able to power an electronic load; the capacitor ensures that the generated energy is smoothly delivered to the load~\cite{1435362,Fully_integrated,Recycling_ambient,Rate_Energy}. Since RF signals carry both energy and information, an RF energy harvesting device like that shown in figure (\ref{fig:1}) could theoretically also simultaneously perform information decoding from the same RF signal input using the same antenna or antenna array. This concept has been defined as SWIPT. \section{Energy Harvesting Modes} Future generation wireless networks not only have limited spectrum resources but also must operate with low-power batteries. Recently, energy-efficient communication systems, or “green radios”, have been increasingly attracting attention from the research community due to their ability to improve system performance while simultaneously diminishing the energy consumption of the communication devices \cite{An_Overview}. Reducing wireless network energy consumption is not only essential for prolonging battery lives but also crucial for the environment. Although ICT is to blame for more than $2$ percent of CO$_2$ emissions worldwide, it also presents solutions for drastically diminishing the remaining $98$ percent of CO$_2$ emissions~\cite{webb2008smart}. To significantly minimize ICTs' carbon footprint and environmental impact, we need new and efficient communication techniques~\cite{Green_radio}. In recent literature, energy efficiency (EE), which measures the number of bits communicated per unit of energy consumed (bits per joule delivered to the receivers), has emerged as the performance metric to evaluate a communication system's energy consumption and guarantee green communication \cite{Tradeoff_Green,Toward_dynamic}.
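As a toy illustration of the bits-per-joule EE metric just defined, the sketch below computes EE for a single AWGN link using the Shannon capacity, with transmit power plus a fixed circuit power in the denominator. All numerical values (bandwidth, powers, channel gain, noise density) are illustrative assumptions, not values taken from the cited works:

```python
import math

def energy_efficiency(bandwidth_hz, tx_power_w, circuit_power_w,
                      channel_gain, noise_psd_w_per_hz):
    """EE in bits/joule: Shannon rate divided by total consumed power."""
    noise_w = noise_psd_w_per_hz * bandwidth_hz
    snr = tx_power_w * channel_gain / noise_w
    rate_bps = bandwidth_hz * math.log2(1.0 + snr)
    return rate_bps / (tx_power_w + circuit_power_w)

# Illustrative link: 10 MHz bandwidth, 1 W transmit power, 0.5 W circuit
# power, 100 dB path loss, -174 dBm/Hz thermal noise density
g = 10 ** (-100 / 10)
n0 = 10 ** ((-174 - 30) / 10)
ee = energy_efficiency(10e6, 1.0, 0.5, g, n0)
assert ee > 0.0
# The rate grows only logarithmically with power, so beyond a point
# raising the transmit power lowers the bits-per-joule efficiency:
assert energy_efficiency(10e6, 10.0, 0.5, g, n0) < ee
```

This logarithmic rate versus linear power cost is exactly the tension between spectral efficiency and energy efficiency that motivates the green-radio literature cited above.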
However, today's galloping development of wireless communication technologies is increasing energy consumption and carbon emissions at full tilt, further aggravating environmental concerns. According to \cite{Global_footprint}, the percentage of global carbon emissions due to ICTs is estimated to reach $5$ percent by the end of 2020, and the situation will only be exacerbated in coming years with the arrival of beyond 5G- and sixth generation (6G)-enabled networks. In this regard, networks that harvest energy can definitely decrease the carbon footprint of high data-rate wireless systems by exploiting energy from the environment \cite{4}. In communication devices, energy harvested from RF signals is a random parameter that depends on the channel fading coefficients, using circuitry like that discussed in the previous section~\cite{5513714,3}. Wireless communication energy harvesting by means of an RF EH device can be explored in a variety of ways, among which the four most common configurations – WPT, WPCN, WPBC, and SWIPT – will be discussed below. \subsection{WPT} As shown in figure (\ref{fig:1.WPT}), in this model RF energy is transferred in the downlink (DL) direction. A WPT-enabled network includes a transmitter connected to the main power system, such as a base station (BS) or an access point (AP), that supplies power to an electrical component (or a portion of a circuit that consumes electrical power) without any wired interconnections. The WPT-enabled network uses electromagnetic waves in the surrounding environment that can be obtained from \textit{near-field} (non-radiative) or \textit{far-field} (radiative) regions to power electrical components \cite{Radio_Science}. In general, near-field techniques do not support the mobility of an energy harvesting device.
Therefore, it is preferable to transfer energy through a far-field RF band, in which the distance is much greater than the diameter of the transmit antenna, instead of using methods such as inductive coupling, capacitive coupling, or magnetic resonant coupling, which operate in the near-field region corresponding to roughly one wavelength of the transmitting antenna \cite{Near_field}. WPT-enabled RF signals are anticipated to enable many applications and opportunities by providing cost-efficient, predictable, dedicated, on-demand, perpetual, and reliable energy supplies to energy-constrained wireless networks, where no wires, contacts, or batteries are needed. Numerous research activities are laying the groundwork for the future of wireless networking that transcends conventional communication-centric transmission. For instance, the authors in \cite{3} proposed a randomly deployed power-beacon-based hybrid cellular network that wirelessly powers mobile users as they recharge their devices. In \cite{7}, the total outage probability of an ad-hoc network overlaid with power-beacons was analyzed. The authors employed the stochastic geometry method in a harvest-store-use mode to study network performance in terms of the power and channel outage probability. In \cite{One-Bit-Feedback}, multi-user multiple-input multiple-output (MIMO) WPT was considered with a new channel learning method that requires only one feedback bit from each EH receiver to the power transmitter per feedback interval. The power transmitter uses the feedback information to coordinate the transmit beamforming in subsequent training intervals and concurrently obtains updated estimates of the MIMO channels to different EH users by solving an optimization problem. Building on that, \cite{Adaptively_Directional} presented an adaptive directional WPT methodology for prolonging the lifetime of a WSN by offering a sustainable power supply to the distributed sensor nodes.
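A rough feel for far-field WPT link budgets can be obtained from the free-space Friis equation combined with an RF-to-DC conversion efficiency, a common first-order estimate of harvested power. The antenna gains, frequency, distances, and rectifier efficiency below are illustrative assumptions, not parameters from the works cited above:

```python
import math

def harvested_power_w(tx_power_w, gain_tx, gain_rx,
                      freq_hz, distance_m, rectifier_eff):
    """Free-space Friis received power times an RF-to-DC conversion
    efficiency: a first-order estimate of far-field harvested power."""
    wavelength = 3e8 / freq_hz
    received = (tx_power_w * gain_tx * gain_rx
                * (wavelength / (4.0 * math.pi * distance_m)) ** 2)
    return rectifier_eff * received

# 3 W transmitter at 915 MHz with unit-gain antennas, 50% rectifier efficiency
p5 = harvested_power_w(3.0, 1.0, 1.0, 915e6, 5.0, 0.5)
p10 = harvested_power_w(3.0, 1.0, 1.0, 915e6, 10.0, 0.5)
# free-space harvested power falls off as 1/d^2
assert abs(p5 / p10 - 4.0) < 1e-9
# only a tiny fraction of the transmitted power reaches the harvester
assert p5 < 1e-3 * 3.0
```

The quadratic (or faster, in practice) distance decay is why far-field WPT is regarded as effective only over the short ranges found in ultra-dense deployments.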
Although today's wireless networks were designed solely for communication purposes, with their adoption of new technologies, they are evolving toward 6G. Nonetheless, WPT development is still in its infancy and has not even entered its first generation: No single standard has yet been released on far-field WPT. The WPT framework will be particularly suitable for future wireless networks with ubiquitous and autonomous low-power and energy-limited devices, massive IoT connections, and D2D {\text{communications}}. \begin{figure}[!t] \centering \includegraphics[width=16cm,trim=4 4 4 4,clip]{figures/chap1/WPT-F1.pdf} \caption{The WPT system model. } \label{fig:1.WPT} \end{figure} \subsection{WPCN} Wirelessly powered communication networking (WPCN) is a new networking paradigm in which batteries for wireless communication devices can be remotely provisioned through microwave WPT technology. Figure (\ref{fig:1.wpcn}) shows the WPCN approach of transferring energy in the downlink (DL) direction while information is communicated in the uplink (UL) direction. The receiver is a low-power device that harvests energy in the DL, and then uses the harvested energy to transfer data in the UL. WPCN can effectively reduce cost and enhance communication performance by eliminating the battery charge limit. In addition, WPCN enjoys full control over its power transfer: Transmit power waveforms can be adjusted to provide a permanent energy supply under varying physical conditions and service requirements \cite{Wireless_powered}. WPCN is intended to strike a balance between energy supply limitations and data transmission in order to optimize communication network performance. WPCN could be suitable for a variety of low-power applications (up to several milliwatts) such as WSNs, IoT, and RF Identification (RFID) networks.
WPCN enables low-power applications to operate longer while actively transmitting at much larger data-rates and from a greater distance than conventional backscatter-based communications \cite{8766912}. \begin{figure}[!b] \centering \includegraphics[width=16cm,trim=4 4 4 4,clip]{figures/chap1/WPCN-F1.pdf} \caption{The WPCN system model. } \label{fig:1.wpcn} \end{figure} WPCNs promise to significantly enhance performance, but building an efficient WPCN is challenging. In \cite{Throughput_WPCN}, the authors considered a hybrid AP based on the harvest-store-use mode with a constant power supply coordinating the wireless energy transmissions in the DL direction to a set of distributed users that have no other energy sources. Once the users have harvested enough energy, they send their independent information signals to the hybrid AP in the UL direction using time-division-multiple-access (TDMA). Because of WPCN's distance-dependent signal attenuation in both the DL WPT and the UL WDT, a user who is far from the hybrid AP harvests less wireless energy in the DL than a closer user. To achieve reliable information transmission, the distant user must therefore transmit at higher power in the UL direction. This phenomenon is known as the \textit{doubly near-far} problem. The authors of \cite{Optimal_WPCN} studied an optimal resource allocation policy for a WPCN with simultaneous WPT in the DL and WDT in the UL, addressing the doubly near-far problem with a hybrid AP operating in full-duplex. The authors in \cite{8580593} considered multiple-input single-output (MISO) WPCN, in which the single-antenna users harvest energy from a multi-antenna AP in the DL direction and then retransmit information to the AP in the UL direction – using the TDMA or the spatial division multiple access (SDMA) technique.
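The doubly near-far effect can be quantified with a simple harvest-then-transmit sketch: with path loss $d^{-\alpha}$, a user harvests energy proportional to $d^{-\alpha}$ in the DL, and its UL signal is attenuated by $d^{-\alpha}$ again on the way back, so the UL SNR at the AP scales as $d^{-2\alpha}$. The numbers below (path-loss exponent, powers, durations, noise) are illustrative assumptions in the spirit of the harvest-then-transmit protocol of \cite{Throughput_WPCN}:

```python
def uplink_snr(distance_m, alpha=3.0, p_ap_w=1.0, eta=0.6,
               t_harvest_s=0.5, t_transmit_s=0.5, noise_w=1e-12):
    """Harvest-then-transmit: energy harvested in the DL is spent on the
    UL transmission; both links suffer the same d^-alpha path loss."""
    gain = distance_m ** (-alpha)         # simple distance-based path loss
    harvested_j = eta * p_ap_w * gain * t_harvest_s
    p_ul_w = harvested_j / t_transmit_s   # all harvested energy used in UL
    return p_ul_w * gain / noise_w        # SNR seen at the AP

near, far = uplink_snr(5.0), uplink_snr(10.0)
# doubling the distance with alpha = 3 costs a factor 2^(2*alpha) = 64 in SNR
assert abs(near / far - 2.0 ** 6) < 1e-6
```

The doubled path-loss exponent is what makes the far user disproportionately disadvantaged, motivating the unequal time allocations and full-duplex designs discussed above.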
With multiple antennas, the AP can utilize energy beamforming in the DL and employ the multiplexing gain or receive beamforming gain in the UL direction \cite{Energy_Beamforming_WPCN}. In addition to all these techniques in various directions of research, WPCNs continue to present new research problems for future applications. \subsection{WPBC} Although WPT and WPCN offer many advantages, these EH schemes still face critical limitations when adopted in low-power, low-cost networks, such as WSN, RFID, IoT, and D2D applications. In WPT, users only harvest energy, with no RF data transmission or reception, whereas in WPCNs users may need a long time to harvest the RF energy needed for data transmission, which limits the system's performance. To overcome these deficiencies, wirelessly powered backscatter communication (WPBC) networks, also known as ambient backscatter communication systems (ABCSs), are proposed as an alternative that can significantly improve network performance. This innovative technique facilitates ubiquitous communication: Devices can interact among themselves at unprecedented scales and in previously inaccessible locations by using existing ambient RF signals, rather than generating their own radio waves \cite{8368232}. In WPBCs, energy is transferred to a tag in the DL direction, and information is transferred to a reader in the UL direction, as shown in figure~(\ref{fig:1.WPBC}). More specifically, the information on a tag (a tagged object) is communicated to a nearby reader (e.g., an RFID reader) via backscatter modulation (tag-to-reader UL). Since tags do not require oscillators to generate carrier signals and do not have any dedicated power infrastructure, backscatter communications – a passive method of communication – benefit from much less power consumption than conventional radio communications.
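The price of this passivity is that backscatter links pay the path loss twice: the carrier reaching the tag is attenuated by $d_1^{-\alpha}$, and the reflected, modulated signal is attenuated by $d_2^{-\alpha}$ again on its way to the reader, so the received power scales as $(d_1 d_2)^{-\alpha}$. A small sketch of this round-trip budget (the constant lumping antenna gains and wavelength factors, the reflection coefficient, and all distances are illustrative assumptions):

```python
def backscatter_rx_power(p_tx_w, d_forward_m, d_back_m,
                         alpha=2.0, reflection_coeff=0.5, c0=1e-3):
    """Round-trip link budget of a backscatter link. c0 lumps antenna
    gains and wavelength factors into a single illustrative constant."""
    forward = c0 * d_forward_m ** (-alpha)    # carrier reaching the tag
    backward = c0 * d_back_m ** (-alpha)      # reflection reaching reader
    return p_tx_w * forward * reflection_coeff * backward

p_near = backscatter_rx_power(1.0, 2.0, 2.0)
p_far = backscatter_rx_power(1.0, 4.0, 4.0)
# doubling both distances with alpha = 2 costs a factor (2*2)^2 = 16 in power
assert abs(p_near / p_far - 16.0) < 1e-9
```

This doubled attenuation is one reason backscatter communication is limited in range and data-rate compared with the active UL transmissions of a WPCN.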
\begin{figure}[!t] \centering \vspace*{5mm} \includegraphics[width=16cm,clip]{figures/chap1/WPBC-F1.pdf} \caption{The WPBC system model, where the backscatter modulation on a tag is used to reflect and modulate the incoming RF signal in order to communicate with a reader (e.g., a wireless router).} \label{fig:1.WPBC} \end{figure} Quite a lot of research has been conducted on the topic of WPBC networks. In \cite{8863924}, optimal resource allocation for WPBC is investigated, where energy-constrained users harvest energy from RF signals transmitted by a multi-antenna source to power their subsequent active wireless information backscatter transmission to an AP. A theoretical trade-off in WPBC networks~–~between the reliability of the backscatter communication and the harvested energy at the tag, measured in terms of the signal-to-noise ratio (SNR) at the reader – was studied in \cite{7950915}. The authors in \cite{7876867} proposed a novel network architecture that enables D2D communication to be modeled as a WPBC. In \cite{6685977}, the authors aimed to integrate WPBC and RFIDs to study perceptions of power and spectral efficiency with respect to energy constraints. They did this by deriving the diversity-multiplexing trade-off for RFID MIMO channels. Notwithstanding the progress gained through implementing WPBC networks, diverse issues such as security, reliability, the data-rate of communications, and the development of a global standard still need to be addressed; that is beyond the scope of this thesis. \subsection{SWIPT}\vspace{-3mm} Because RF signals can simultaneously carry information and energy, an intriguing new research domain has recently developed from multiple WPT technologies: simultaneous wireless information and power transfer (SWIPT), a promising technique for wireless communication systems~\cite{K_S}.
The SWIPT paradigm enables energy-constrained wireless user equipments (UEs) to harvest energy and process information simultaneously by utilizing RF signals transmitted from a BS, mobile BS (e.g., a drone), or AP. In this model, energy and information signals are simultaneously transferred in the DL direction from one or multiple BSs or APs to one or multiple receivers for simultaneous information decoding (ID) and energy harvesting (EH). The ideal receiver for enabling SWIPT has circuitry to perform ID and EH at the same time \cite{4595260}, rather than two separate types of circuitry to perform EH and ID individually at the receiver~\cite{6133872}. It is also worth noting that performing EH and ID operations at the same time does not necessarily mean that these operations are carried out on the same received signal: That is practically impossible, since the information content of the RF-domain signals would be entirely destroyed by harvesting energy from them. Furthermore, a single-antenna receiver may not be able to create a reliable energy supply due to its limited resources for collecting energy. Enabling SWIPT requires using separate antennas for the EH and ID receivers, or splitting the received RF signal into two separate parts, one for EH and the other for ID operations, by using a \textit{splitter}. EH and ID receivers for enabling SWIPT can be classified into two broad architectural categories: separated or co-located receivers. In a separated architecture, as shown in figure (\ref{fig:1.seperated}), the EH and ID receivers are two distinct, spatially separated devices with separate antennas that experience different channels from the transmitter. The EH receiver is a low-power device capable of harvesting energy, while the ID receiver is only able to process data. Since the ability to harvest energy deteriorates with distance, EH receivers ought to be closer to the BS or AP than ID receivers. 
This explains why an inner radius and outer radius are used to distinguish between EH and ID receivers in figure (\ref{fig:1.seperated}). On the other hand, with the co-located SWIPT architecture, each receiver experiences the same channel from the transmitter and is a low-power device that can perform EH and ID at the same time, as illustrated in figure (\ref{fig:1.colocated}). \begin{figure}[p] \centering \includegraphics[width=15.2cm,trim=4 4 4 4,clip]{figures/chap1/Seperated_SWIPT-F1.pdf} \caption{The separated SWIPT system model. } \label{fig:1.seperated} \end{figure} \begin{figure}[p] \centering \includegraphics[width=15.2cm,trim=4 4 4 4,clip]{figures/chap1/co-located_SWIPT-F1.pdf} \caption{The co-located SWIPT system model. } \label{fig:1.colocated} \end{figure} \begin{figure}[p] \vspace{-5mm} \centering \begin{subfigure}{0.75\textwidth} \includegraphics[width=9.8cm,trim=4 4 4 4,clip]{figures/chap1/fig1-4.pdf} \caption{Separated receiver architecture.} \label{fig:1.4} \end{subfigure} \begin{subfigure}{0.75\textwidth} \includegraphics[width=9.8cm,trim=4 4 4 4,clip]{figures/chap1/fig1-5.pdf} \caption{Time switching (TS) approach to realize co-located SWIPT architecture.} \label{fig:1.5} \end{subfigure} \begin{subfigure}{0.75\textwidth} \includegraphics[width=9.8cm,trim=4 4 4 4,clip]{figures/chap1/fig1-6.pdf} \caption{Power splitting (PS) approach to realize co-located SWIPT architecture.} \label{fig:1.6} \end{subfigure} \begin{subfigure}{0.75\textwidth} \includegraphics[width=9.8cm,trim=4 4 4 4,clip]{figures/chap1/fig1-7.pdf} \caption{{Antenna switching (AS) approach to realize~co-located SWIPT~architecture.}} \label{fig:1.7} \end{subfigure} \caption{Integrated receiver architecture designs for SWIPT. } \label{chap1:fig:co-located:methods} \end{figure} Three practical approaches to designing the co-located receiver architecture for SWIPT are time switching (TS), power splitting (PS), and antenna switching (AS). 
The EH and ID receivers share the same antennas to realize the co-located receiver architecture, as shown in figure (\ref{chap1:fig:co-located:methods}). The receiver in the TS approach in figure (\ref{fig:1.5}) includes an EH module, an ID module, and a switch to periodically adjust the receiving antenna for particular operations. The receiver switches between EH and ID modes based on a pre-defined, but optimizable, time factor or TS sequence. The TS approach necessitates careful information/energy scheduling and accurate time synchronization. In the PS approach in figure (\ref{fig:1.6}), the receiver divides the received signal into two streams of different power levels for EH and ID operations based on an optimizable PS ratio. Finally, figure (\ref{fig:1.7}) shows how the receiver is equipped with independent antennas for EH and ID operations in the AS approach to enable SWIPT by means of a low-complexity AS algorithm. In general, an antenna array is configured at the receiver end to take advantage of spatial multiplexing, which divides the antennas into two subsets for EH and ID operations. One subset of antennas operates in the EH mode while the rest executes the ID operation. It should be stressed that the AS approach is somewhat easier and more suitable for practical SWIPT architecture designs than the TS and PS approaches \cite{6804407}. In addition, the AS approach can similarly be adopted to optimize the separated receiver architecture shown in figure (\ref{fig:1.4}) \cite{6781609}. \section{A SWIPT Literature Review} In this section, we review some of the relevant work in the literature on SWIPT, including multi-carrier SWIPT systems, SWIPT in cognitive radio networks, cooperative relaying in SWIPT networks, and multiple antenna communication in SWIPT systems. All the application areas associated with SWIPT apply the far-field WPT technique for transferring power within communication systems. 
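Before turning to specific works, the rate-energy trade-off governed by the PS ratio described above can be made concrete with a minimal numeric sketch. It assumes a textbook logarithmic rate expression and a linear EH model with a fixed conversion efficiency; all parameter values below are illustrative assumptions, not taken from any system model in this thesis:

```python
import math

def ps_receiver(rho, p_rx_w=1e-3, channel_gain=0.5, noise_w=1e-9,
                bandwidth_hz=1e6, eh_efficiency=0.6):
    """Rate and harvested power of a power-splitting (PS) receiver.

    A fraction rho of the received power feeds the ID chain; the
    remaining (1 - rho) feeds the EH chain (linear EH model).
    All parameter values are illustrative, not from the thesis.
    """
    signal_w = rho * p_rx_w * channel_gain
    rate_bps = bandwidth_hz * math.log2(1.0 + signal_w / noise_w)
    harvested_w = eh_efficiency * (1.0 - rho) * p_rx_w * channel_gain
    return rate_bps, harvested_w

# Sweeping rho traces out the rate-energy trade-off:
for rho in (0.1, 0.5, 0.9):
    rate, eh = ps_receiver(rho)
    print(f"rho={rho:.1f}: rate={rate / 1e6:.2f} Mbit/s, EH={eh * 1e6:.1f} uW")
```

Increasing the PS ratio monotonically raises the achievable rate and lowers the harvested power; the resource allocation problems surveyed below choose this ratio (jointly with other variables) to balance the two objectives.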
\subsection{Multi-Carrier SWIPT Systems} The basic idea of multi-carrier modulation is to divide the transmitted bitstream into several smaller blocks that are sent in parallel over many different subcarriers. Under ideal propagation conditions, the subcarriers are orthogonal. In order to alleviate the effect of inter-symbol interference (ISI) on each subcarrier, each subcarrier should have a bandwidth lower than the channel coherence bandwidth in a multi-carrier network. Orthogonal frequency division multiplexing (OFDM) is one of the well-established, discrete implementations of a multi-carrier scheme for high data-rate wireless communications that is embraced in various standards such as IEEE 802.11a/g/n, IEEE 802.15 ultrawideband, WiMAX, and 3GPP-Long Term Evolution (LTE). This stimulated us to study how SWIPT can manage high data-rate interference-free communication in multi-carrier-based systems. In this regard, \cite{5513714} investigated SWIPT over a single-user OFDM channel in order to obtain the optimal trade-off between the achievable data-rate and the transferred power given a specific total available power. The authors studied a power allocation algorithm design assuming that the receiver is capable of using one received signal to simultaneously perform EH and ID operations. A heuristic algorithm was proposed in \cite{IA} in a study of resource allocation policies for maximizing the harvested energy for a single user in an OFDM SWIPT system. In \cite{8548551}, the EE optimization problem for OFDM-based 5G wireless networks with SWIPT was studied, with subcarrier and power allocation jointly optimized to maximize the system EE for single-user and multi-user cases using the Dinkelbach iterative and the Lagrange dual methods. 
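Since the Dinkelbach method recurs throughout this review, a minimal scalar sketch of the iteration may be helpful: A fractional objective $f(x)/g(x)$ is maximized by repeatedly solving the parametric subproblem $\max_{x} f(x)-\lambda g(x)$ and updating $\lambda$ to the current ratio. The example function and search grid below are illustrative assumptions, not a system model from the cited works:

```python
# Minimal Dinkelbach iteration for max_x f(x)/g(x) over a finite grid
# (illustrative scalar example; g must be positive on the grid).
def dinkelbach(f, g, xs, tol=1e-9, max_iter=100):
    lam = 0.0
    x = xs[0]
    for _ in range(max_iter):
        # Parametric subproblem: max_x f(x) - lam * g(x).
        x = max(xs, key=lambda v: f(v) - lam * g(v))
        gap = f(x) - lam * g(x)
        if gap < tol:
            return x, lam          # lam equals the optimal ratio
        lam = f(x) / g(x)          # Dinkelbach update
    return x, lam

# Example: maximize (x + 1) / (x^2 + 1) on a grid over [0, 3];
# the analytic maximizer is x = sqrt(2) - 1 with ratio (1 + sqrt(2)) / 2.
xs = [i * 0.001 for i in range(3001)]
x_star, ratio = dinkelbach(lambda x: x + 1.0, lambda x: x * x + 1.0, xs)
```

At convergence the subproblem's optimal value reaches zero, at which point $\lambda$ equals the maximum of the fractional objective; this is the property the EE-maximization works cited here exploit.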
The authors in \cite{chap4:6555184} investigated the sum data-rate maximization problem in a multi-user OFDM system with SWIPT capability that allows a user either to perform an ID operation when it is active (served by the transmitter) or to be in an EH mode when it is idle – but not simultaneously. In order to realize SWIPT in a broadband system with transmit beamforming and a PS-receiver architecture, \cite{Simultaneous_Information} presented a novel strategy for designing power control algorithms for both single-user and multi-user OFDM systems: These exploit channel diversity to concurrently enhance throughput and WPT efficiency in the DL and UL directions with variable and fixed coding rates. In \cite{18}, the weighted sum data-rate optimization problem was studied, taking into account the TS and PS approaches in a co-located architecture; specifically, the optimal design for SWIPT in DL OFDM systems was investigated with users performing EH and ID operations on the same signals received from a fixed AP. Studying a system model very similar to that in \cite{18}, the authors of \cite{9Xu} sought to minimize the fraction of users in outage when constrained by both minimum harvested energy and total transmit power. Two types of multiple access schemes – time division multiple access (TDMA) and orthogonal frequency division multiple access (OFDMA) – are studied for transmitting information in SWIPT networks in particular. TDMA permits several users to share the same frequency channel by dividing the signal into different non-overlapping time slots. Since TDMA occupies the entire available bandwidth, ISI mitigation techniques are required to handle interference. On the other hand, OFDMA technology is a promising solution for effectively dividing the available bandwidth into orthogonal sub-channels so they can be flexibly allocated among existing users. 
Although in OFDMA-based networks “intra-cell” interference does not exist, “inter-cell” interference arising from neighboring cells deteriorates the OFDMA networks' overall performance. In this respect, resource allocation is highly significant in TDMA/OFDMA-based SWIPT cellular wireless networks: It mitigates the effect of interference in the face of limited bandwidth, stringent power constraints, and the growing demand for wireless communication services. In \cite{18}, both TDMA and OFDMA were examined for information transmission. For TDMA-based information transmission, each user applies TS so that the ID operation is used throughout the user's scheduled information time slot and the EH operation is applied in all other time slots. For the OFDMA-based transmission strategy, the authors assumed that the PS approach was employed at each receiver, with all subcarriers sharing the same PS ratio at each receiver. These transmission scenarios were employed to address the problem of maximizing the weighted sum data-rate for all users by varying the power allocation in time and/or frequency as well as the TS/PS ratios, subject to satisfying a maximum transmit power allowance and a minimum harvested energy constraint for each user. Following Dinkelbach's method, the authors in \cite{8183429} investigated the secrecy EE maximization problem by proposing power allocation and PS optimization algorithms in an OFDMA SWIPT network with two users, where each user uses the PS receiver approach to enable SWIPT. The resource allocation algorithm design for EE optimization was studied in \cite{Wireless_Information}, in which an EE maximization problem jointly optimizes subcarrier and power allocation as well as the PS ratios of PS hybrid receivers in an OFDMA system with SWIPT. 
The underlying problem in \cite{Wireless_Information}, which can be solved iteratively using the Dinkelbach algorithm, was expressed as a non-convex optimization problem that had to take into account the maximum transmit power, the receivers' minimum amount of EH power, and the minimum data-rate requirements of both delay-constrained services and the network. Multi-carrier SWIPT is flourishing, with many exciting research directions yet to be explored. \subsection{SWIPT in Cognitive Radio Networks} Conventional spectrum allocation leads to radio spectrum underutilization; the cognitive radio network (CRN) was introduced to enable highly reliable and efficient opportunistic spectrum access as the number of wireless devices grows. A CRN allows secondary users to operate in the frequency bands allocated to primary users, as long as they cause no harmful uncontrollable interference to primary users \cite{hossain2009dynamic}. OFDMA technology combined with a CRN can help allocate radio resources to secondary users more efficiently because it supports fine-grained spectrum allocation by dividing the entire bandwidth into a set of subcarriers. Two different types of cognitive radio, based on spectrum sensing and full coordination, are recognized in the literature~–~spectrum sensing to identify the channels in the radio frequency spectrum, and full coordination to assess all the attributes that a wireless network or node can be aware of during the communication process. A CRN is subdivided into two main architectural designs: infrastructure-based and infrastructure-less. In an infrastructure-based CRN, every unlicensed user transmits their data or routing parameters through a central BS, AP, or a relay, while in an infrastructure-less CRN, unlicensed users communicate with each other directly using the existing communication protocols (e.g., in a peer-to-peer fashion) that require no central entity. 
The combination of infrastructure-based and infrastructure-less architectures is called a “hybrid” CRN. In order to achieve both spectrum (spectral) efficiency (SE) and EE via dynamic spectrum access in a CRN, secondary users can be equipped with the RF EH capability, as described in~\cite{Dynamic_spectrum}. Focusing on the trade-off between spectrum sensing, data transmission, and RF energy harvesting, that study provides a detailed discussion of the dynamic channel selection problem in a multi-channel EH-based CRN. The authors of \cite{7848925} investigated how hybrid EH cooperative spectrum sensing in heterogeneous CRNs can enable self-sustaining green communications by reducing energy cost while capitalizing on the idle spectrum. The work in \cite{8874991} analyzed throughput maximization for cooperative CRNs with EH, obtaining the optimal time allocation between primary and secondary users and balancing the trade-off between EH and packet transmission. The secrecy outage performance of MISO CRNs with EH, with the aim of maximizing secrecy and EE, was studied in \cite{8910443}. The secrecy outage performance of an underlay MIMO CRN employing transmit antenna selection was investigated in \cite{7849037,7882641}. In both studies, EH from the primary transmitter powers the secondary users' transmitter in order to improve both EE and SE. In \cite{8693979}, a joint optimization framework to distribute power among users at the secondary BS, allocate power for cooperative transmission, assign a time slot for the transmission of each user, and find the PS ratio for EH and ID receivers was explored for SWIPT-based cooperative CRNs. The authors of \cite{7880677} tried to maximize the data-rate for underlay multi-hop CRNs using TS receivers. 
The problem of maximizing EH by taking into account an optimal resource allocation policy while also satisfying constraints on the minimum data-rate, transmit power, and subcarrier assignment was studied in \cite{Wideband_cognitive} with respect to max-min fairness in a wideband CRN with SWIPT. \subsection{Cooperative Relaying in SWIPT Networks} Relay techniques are proposed for facilitating communication between the source and the destination \cite{Hierarchical}. They offer a low-cost way of expanding coverage, gaining diversity, and enhancing data-rates. However, since a relay's power supply might be insufficient, it needs an additional, possibly green, energy source to cooperate. Fortunately, WPT techniques have the potential to assist cooperative devices by recharging energy-limited relays in exchange for future cooperation. Recent research on using SWIPT for self-powered relaying has concentrated on two conventional cooperative relaying systems: amplify-and-forward (AF) and decode-and-forward~(DF)~\cite{8628978}. The past few years have witnessed new research on cooperative relaying integrated with SWIPT networks. The authors of \cite{Bidirectional_Wireless} studied a cooperative relaying SWIPT network with a bidirectional relay capable of transferring wireless power in the DL direction and relaying information by AF in the UL direction. Two approaches were examined, with PS and TS at the relay, to maximize the user's data-rate under energy harvesting and power consumption constraints at both the relay and the user. In \cite{7362039}, the authors considered a cooperative PS-based SWIPT network with one source-destination pair and multiple EH DF relays, with the aim of obtaining a closed-form formulation of the outage probability achieved by the multi-relay cooperative protocol as well as its approximation at high SNR. 
PS-based SWIPT in cooperative networks with spatially random DF relays was studied in \cite{6779694} in order to characterize the outage probability and diversity gain by applying stochastic geometry principles. The optimization problems of the PS ratios at the relays were formulated in \cite{7419826} for both DF and AF relaying protocols in a multi-relay two-hop SWIPT system. Joint relay selection and resource allocation optimization for EE maximization was studied in \cite{8954679} in order to provide a performance comparison between DF and AF relays without external power supplies in PS-enabled SWIPT. The author of \cite{7953588} investigated a joint resource allocation to maximize the achievable data-rate in PS-based and AS-based SWIPT for a two-hop cooperative transmission in which a half-duplex multi-antenna relay adopts the DF relaying strategy. Relay selection strategies for both PS- and TS-based SWIPT in a cooperative AF relaying network were studied in \cite{8292374}. The authors tried to maximize either the amount of EH at the user's end under the constraint of a minimum available data-rate in a PS-based SWIPT network, or the overall user data-rate while guaranteeing a minimum EH in a TS-based SWIPT network. A novel relay selection and resource allocation scheme in a two-hop relay-assisted multi-user PS-based SWIPT OFDMA network was analyzed in \cite{8668723} to optimize the subcarrier assignment, power allocation, and users' PS ratios as well as the relays, so as to maximize the system's data-rate while also satisfying minimum EH and maximum transmit power constraints. \subsection{Multiple Antenna Communication in SWIPT Systems} Nowadays, new communication technologies can incorporate multiple antennas to increase the data-rate through multiplexing, improve performance through diversity, and use WPT techniques to generate sufficient power for reliable communication. 
These approaches require centralized or distributed antenna array deployment at the transmitter and/or receivers to achieve substantial array/capacity gains over single-input single-output (SISO) systems through spatial beamforming/multiplexing \cite{935916}. Multiple-input single-output (MISO) and MIMO are regarded as effective ways to boost the reliability and capacity of SWIPT-enabled communication networks. Most work that has tried to integrate SWIPT in multiple antenna wireless networks assumes that two defined user groups need to be served, one for receiving information and the other for receiving power to recharge their power sources. However, there are also cases of co-located receiver architecture. The following paragraphs present interesting research studies conducted on SWIPT in MISO and MIMO systems to help the reader understand the challenges of multiple antenna configurations in SWIPT. In \cite{17}, the capacity region of a MISO broadcast channel featuring SWIPT was evaluated. The approach used was based on solving a sequence of weighted sum data-rate maximization problems subject to a maximum transmit power constraint for the AP, and a set of minimum harvested energy constraints for individual EH receivers. In this study, a multi-antenna AP simultaneously delivers information and energy via RF signals to multiple single-antenna receivers in a separated receiver architecture. The authors of \cite{8116441} studied joint power allocation and PS ratio optimization for a multi-user MISO SWIPT network with the aim of maximizing the users' minimum signal-to-interference-plus-noise ratio (SINR) under maximum transmit power and minimum energy harvesting constraints. In \cite{8120246}, a multi-user MISO full-duplex system with PS-based SWIPT was proposed for a resource allocation policy design that jointly optimizes the PS ratios, the beamforming matrix, and the transmit power – subject to satisfying maximal SINR and harvested power constraints. 
A novel beamforming design to minimize the transmission power in a multi-user MISO SWIPT system was proposed in \cite{8422097} for TS and PS receiver architectures. Joint transceiver optimization of beamforming vectors and TS ratios for MISO SWIPT systems was examined in \cite{8306833}. In \cite{8481704}, the joint design of the beamforming vector and the artificial noise covariance matrix for a MISO SWIPT with multiple eavesdroppers was studied by analyzing a proportional secrecy EE maximization problem. Two different scenarios for the MIMO broadcast system were investigated in \cite{6489506}, namely, the separated and co-located receiver architectures, in which all the transmitters and receivers were equipped with multiple antennas. In the first scenario (separated receivers), the best transmission strategy for MIMO SWIPT systems was designed to attain different trade-offs between the maximum information data-rate and energy transfer on the boundary of a so-called rate-energy region. For the other scenario (co-located receivers), TS and PS approaches were applied to determine an outer bound for the achievable rate-energy region. The authors of \cite{7934322} similarly described the trade-off between maximum energy transfer and maximum information data-rate under the nonlinear EH model. A novel power allocation algorithm for SWIPT-enabled multi-user MIMO DL systems with a separated receiver architecture was proposed in \cite{6692369}, based on the block diagonalization precoding technique that completely suppresses multi-user interference while maximizing the network's data-rate. In \cite{6777403}, the problem of antenna selection (at the transmitter) and transmit covariance matrix design was studied in the effort to jointly maximize the data-rate of a MIMO broadcast system with SWIPT, given the multidimensional trade-off between the minimum data-rate requirement of ID users and the minimum energy harvesting of EH users in a separated architecture. 
A general multi-objective optimization problem, in which the data-rate and harvested power are simultaneously optimized for each user, was proposed in \cite{7581107} for a multi-user MIMO network implementing SWIPT. A secrecy data-rate maximization problem for SWIPT in cognitive MIMO networks was studied in \cite{7132724}: An interference power constraint was imposed to protect the primary user while the secondary EH receiver had a minimum energy harvesting constraint. Unlike the MIMO SWIPT studies above, the weighted minimum mean squared error criterion – rather than the data-rate of the network – was investigated in \cite{7063588} for a separated SWIPT architecture. The secure transmission issue for PS-enabled SWIPT in a multi-user MIMO system with multiple external eavesdroppers was presented in \cite{8171731}: Robust beamforming with imperfect channel state information (CSI) was designed, with the maximum transmission power optimized subject to a minimum achievable secrecy data-rate and EH. EE maximization in SWIPT-based MIMO broadcast channels for IoT applying the TS approach was studied in \cite{8233108} as a way of obtaining resource allocation policies that consider per-user minimum harvested energy constraints. \section{Thesis Overview and Contributions} Although studies have thus far concentrated on solving problems in wireless communication networks empowered by WPT technologies, various problems in the field remain open. In this thesis, we attempt to fill several crucial gaps in the literature by designing, analyzing, and optimizing resource allocation problems with different objectives for SWIPT in multi-service energy-constrained wireless networks. The overview and specific contributions of the main chapters are: \textbf{Chapter \ref{CHAP2} {Optimization Techniques}} In our study, we refer to several basic optimization techniques. Chapter \ref{CHAP2} discusses them and provides some approaches to optimization. 
\textbf{Chapter \ref{CHAP3} {SWIPT in Single Small-Cell Networks}} In chapter \ref{CHAP3}, we provide a novel architecture for harvesting energy from an AP without needing a splitter at the receiver. We propose a new system model in which a designated portion of the spectrum is used for the ID operation while another portion is exploited for the EH operation, and investigate how much performance is gained. The main objective of this chapter is to design a resource allocation policy that maximizes the harvested energy of a multi-user DL OFDMA network with SWIPT while satisfying a minimum data-rate requirement for all users. We then use optimization tools to obtain a locally optimal solution for the underlying problem, which is essentially non-convex. The results of this chapter, with slight improvements to the system model as presented in the extended abstract, will be submitted to \textit{IEEE Communications Letters}: \begin{addmargin}[2.5em]{0em} ——, “Optimal Resource Allocation for MC-NOMA in SWIPT-enabled Networks,” to be submitted to \textit{IEEE Communications Letters}. \end{addmargin} \textbf{Chapter \ref{CHAP4} {SWIPT in Multi-Cell Networks}} In chapter \ref{CHAP4}, we study allocating resources to maximize the data-rate based on the separated receiver architecture in a SWIPT OFDMA multi-user multi-cell system. We distribute the users between the inner and outer cells. Nearby users can harvest energy from the AP, while users at a distance receive their wireless information from the nearest AP. Although it seems that the AP supports EH users by consuming some power, in this chapter, we discuss whether this is justifiable, given that it might degrade the performance of a multi-user multi-cell OFDMA network. The resulting problem, which jointly optimizes the subcarrier assignment and the power allocation, is an intractable mixed-integer non-linear problem. 
Because of that, we use a minorization-maximization approach based on difference-of-convex-functions programming with a surrogate function to approximate the non-convex objective function. \textbf{Chapter~\ref{CHAP5} {Antenna Selection Technique in SWIPT}} Finally, in chapter~\ref{CHAP5}, we describe a new harvesting technique at the receiver that is based on receiver antenna selection for a multi-user multi-cell SWIPT OFDMA system with a co-located architecture. We call this a “generalized antenna switching technique”. As a performance metric, we optimize the EE, which is a key criterion for resource allocation in the next generation of wireless communication. The underlying problem in this chapter is non-convex because it incorporates both interference and integer variables. We relax the integer variables, and then apply the big-M formulation to make sure that the relaxed variables take binary values. After that, we use the minorization-maximization approach employing a first-order Taylor approximation. As a last step to convexify the EE optimization problem, we apply the Dinkelbach algorithm to transform the objective function into a non-fractional function. The simulation results quantify the performance gain that can be obtained through the generalized antenna switching architecture. The results of this chapter, with an improvement in the system model, will be submitted to \textit{IEEE JSTSP} (Special Issue on Signal Processing Advances in Wireless Transmission of Information and Power). A very simplified version of the system model without SWIPT has already been accepted by the VTC 2020 conference: \begin{addmargin}[2.5em]{0em} \textbf{Jalal Jalali}, Ata Khalili, and Heidi Steendam, “Antenna Selection and Resource Allocation in Downlink MISO OFDMA Femtocell Networks,” accepted for presentation at \textit{IEEE VTC} 2020. 
\end{addmargin} \begin{addmargin}[2.5em]{0em} ——, “Simultaneous Wireless Information and Power Transfer via a Joint Resource Allocation and Generalized Antenna Switching Strategy,” to be submitted to \textit{IEEE JSTSP}. \end{addmargin} \textbf{Chapter \ref{CHAP6} {Conclusion and Future Work}} Chapter~\ref{CHAP6} summarizes the results and provides suggestions for future research. \chapter{Optimization Techniques} \label{CHAP2} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{\leftmark}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{\leftmark}} \fancyfoot[LE,RO]{JFMJ} \vspace{15mm} In this chapter, we introduce some basic properties of convex functions that can be useful for better understanding this thesis \cite{boyd2004convex}. \section{Convex Analysis} \subsection{Definitions } Let $f(\text{·}):\mathbb{R}^{n} \rightarrow\mathbb{R}$ be a real-valued function. 
Then, $f(\text{·})$ is convex if, for each $\lambda \in [0,1]$, we have \begin{equation}\label{F2:1} f(\lambda \mathbf{x}_{1}+ (1-\lambda)\mathbf{x}_{2})\leq \lambda f(\mathbf{x}_{1})+ (1-\lambda)f(\mathbf{x}_{2}), \end{equation} for all $\mathbf{x}_{1}$, $\mathbf{x}_{2} \in \mathbb{R}^{n}$.~Geometrically speaking, the above inequality states that the line segment between $(\mathbf{x}_1, f(\mathbf{x}_1))$ and $(\mathbf{x}_2, f(\mathbf{x}_2))$, i.e., the chord from $\mathbf{x}_1$ to $\mathbf{x}_2$, lies above the graph of $f(\text{·})$.~A function $f(\text{·})$ is said to be \textit{strictly convex} if (\ref{F2:1}) holds with strict inequality for all $\mathbf{x}_1 \neq \mathbf{x}_2$ and $\lambda \in (0,1)$.~Moreover, supposing that $f(\text{·})$ is differentiable, i.e., its gradient exists, then every convex function satisfies the following inequality \begin{equation}\label{F2:2} f(\mathbf{{x}})\geq f(\tilde{\mathbf{{x}}}) + \nabla_{\mathbf{x}} f(\tilde{\mathbf{x}})^T (\mathbf{x}-\tilde{\mathbf{x}}), \end{equation} where $\nabla_{\mathbf{x}} f(\tilde{\mathbf{x}})$ is the gradient of $f(\text{·})$ evaluated at $\tilde{\mathbf{x}}$, and ${\square}^T$ is the transpose operation on ${\square}$. The inequality (\ref{F2:2}) asserts that the first-order Taylor approximation of a convex function is always a global underestimator of the function. This inequality additionally confirms that global information about a convex function can be obtained from its local information, i.e., its value and derivative at a point. 
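The global-underestimator property in (\ref{F2:2}) can be checked numerically for a simple convex function; the choice $f(x)=x^{2}$ below is purely illustrative:

```python
def f(x):
    return x * x        # a simple convex function

def grad_f(x):
    return 2.0 * x      # its exact gradient

# First-order condition for a convex f:
#   f(x) >= f(x0) + grad_f(x0) * (x - x0)   for all x, x0.
def tangent(x, x0):
    return f(x0) + grad_f(x0) * (x - x0)

x0 = 1.5
points = [-3.0, -0.5, 0.0, 1.5, 2.0, 5.0]
# The tangent line at x0 underestimates f everywhere (with equality at x0).
assert all(f(x) >= tangent(x, x0) - 1e-12 for x in points)
```

The same check fails for a non-convex function (e.g., $f(x)=-x^{2}$ at any $x \neq x_0$), which is precisely why this inequality characterizes convexity for differentiable functions.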
\section{Duality Theorem} We consider the following optimization problem, also known as the \textit{primal problem}, written in its general form as \begin{align} \label{F(2-3)} &\min_{\mathbf{x} \in \mathcal{X}}~f(\mathbf{x}) \\s.t.:~ & g_{i}(\mathbf{x})\leq 0,~ \forall i=1,...,I,\nonumber\\ & h_{l}(\mathbf{x})=0,~ \forall l=1,...,L,\nonumber \end{align} where $f(\text{·}):~\mathbb{R}^{n} \rightarrow\mathbb{R}$ is the objective function, and $\mathbf{x} \in \mathbb{R}^{n}$ is the vector of optimization variables inside the feasible set $\mathcal{X}$. This optimization problem has $I$ inequality constraints and $L$ equality constraints. Furthermore, we refer to $p^*$ as the optimal value of the optimization problem in (\ref{F(2-3)}). The Lagrangian associated with problem (\ref{F(2-3)}) is given by \begin{equation} \mathcal{L} (\mathbf{x},\boldsymbol{\mu},\boldsymbol{\nu}) = f(\mathbf{x}) + \sum_{i=1}^{I}\mu_{i}g_{i}(\mathbf{x}) + \sum_{l=1}^{L}\nu_{l}h_{l}(\mathbf{x}), \end{equation} where $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$ are called the vectors of \textit{Lagrange multipliers} or \textit{dual variables} associated with the inequality and equality constraints of problem (\ref{F(2-3)}), with elements $\mu_{i}$ and $\nu_{l}$, respectively. The essential purpose of Lagrangian duality is to relax the constraints in (\ref{F(2-3)}) by adding a weighted sum of the constraint functions to the objective function. We can now define the corresponding \textit{Lagrange dual function} (or just \textit{dual function}), which is formally stated as \begin{equation}\label{F2:5} \mathcal{D}(\boldsymbol{\mu},\boldsymbol{\nu}) = \inf_{\mathbf{x}} \mathcal{L} (\mathbf{x},\boldsymbol{\mu},\boldsymbol{\nu}). \end{equation} Note that even though the primal problem may be non-convex, the dual function is always concave, since it is a point-wise infimum of a family of affine functions of $(\boldsymbol{\mu},\boldsymbol{\nu})$. 
This infimum can be seen as the greatest lower bound of a family of affine functions with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$. For $\boldsymbol{\mu}\succeq \mathbf{0}$, the Lagrange dual function in (\ref{F2:5}) gives a lower bound on the optimal value $p^*$ of the primal problem (\ref{F(2-3)}). In order to find the best such lower bound, the following optimization problem can be defined from the Lagrange dual function \begin{equation}\label{F2:6} \max_{\boldsymbol{\mu}\succeq \mathbf{0},\boldsymbol{\nu}} ~ \mathcal{D} (\boldsymbol{\mu},\boldsymbol{\nu}). \end{equation} This problem is known as the \textit{Lagrange dual problem} corresponding to the primal problem. Moreover, if $\boldsymbol{\mu}^*$ and $\boldsymbol{\nu}^*$ are optimal for the Lagrange dual problem in (\ref{F2:6}), they are called \textit{dual optimal} or \textit{optimal Lagrange multipliers}. Since the objective to be maximized in (\ref{F2:6}) is concave and the constraint set is convex, the Lagrange dual problem is a convex optimization problem regardless of whether the primal problem in (\ref{F(2-3)}) is convex. \subsection{Weak Duality and Duality Gap} Let $p^*$ denote the optimal value of the primal problem and $d^*$ the optimal value of the dual problem. According to weak duality, the following inequality holds for a general (possibly non-convex) problem \begin{equation}\label{F2:7} d^* \leq p^*. \end{equation} The weak duality inequality also holds when $d^*$ and $p^*$ are infinite. The difference between the primal and dual optimal values, i.e., $p^*-d^*$, is called the \textit{optimal duality gap}, and it is always non-negative. 
Since the dual problem is always convex, and can often be solved efficiently to determine $d^*$, the inequality in (\ref{F2:7}) is quite useful for finding a lower bound on the optimal value of a problem that is difficult to solve. \subsection{Strong Duality and Slater Condition} If the duality gap is zero, i.e., $p^* = d^*$, \textit{strong duality} holds.~Strong duality indicates that the best bound obtainable from the Lagrange dual function is tight. Moreover, under strong duality, since the gap between the primal and dual problems is zero, solving the dual problem is equivalent to solving the primal problem. A sufficient condition for strong duality to hold for a convex optimization problem is the \textit{Slater condition}: there exists a strictly feasible point $\tilde{\mathbf{x}}$, i.e., one with $g_{i}(\tilde{\mathbf{x}})<0$ for all $i=1,...,I$ and $h_{l}(\tilde{\mathbf{x}})=0$ for all $l=1,...,L$. In particular, if the Slater condition holds for a convex primal problem, then the duality gap is zero; moreover, if the dual optimal value is finite, then it is attained, i.e., there exists a dual feasible $(\boldsymbol{\mu}^*,\boldsymbol{\nu}^*)$ that satisfies ${\mathcal{D}(\boldsymbol{\mu}^*,\boldsymbol{\nu}^*) = d^* = p^*}$. In general, many results establish conditions on the optimization problem that yield strong duality. These conditions are called \textit{constraint qualifications}, of which the Slater condition is only one simple example. \section{Abstract Lagrangian Duality} Historically, two approaches exist for studying duality in optimization models, referred to as: i) \textit{classical Lagrangian} duality and ii) \textit{abstract Lagrangian} duality. Of the two, the classical Lagrangian form is more extensively used in the literature.~What we have discussed so far is indeed the classical Lagrangian form of duality. 
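A one-dimensional toy problem (chosen here purely for illustration) makes weak and strong duality concrete:

```python
# Toy primal problem:  minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Lagrangian: L(x, mu) = x^2 + mu*(1 - x).  Minimizing over x (stationary
# point x = mu/2) yields the concave dual function D(mu) = mu - mu^2/4.

def lagrangian(x, mu):
    return x * x + mu * (1.0 - x)

def dual(mu):
    x_min = mu / 2.0              # unconstrained minimizer of L(., mu)
    return lagrangian(x_min, mu)

p_star = 1.0                      # primal optimum, attained at x* = 1

# Weak duality: every dual value with mu >= 0 lower-bounds p*.
for mu in (0.0, 0.5, 1.0, 2.0, 4.0, 10.0):
    assert dual(mu) <= p_star + 1e-12

# Slater's condition holds (x = 2 is strictly feasible: g(2) = -1 < 0),
# so strong duality holds: the dual optimum mu* = 2 closes the gap.
mu_star = 2.0
assert abs(dual(mu_star) - p_star) < 1e-12
```

The dual maximum $\mathcal{D}(2)=1$ equals $p^*=1$, so the optimal duality gap is zero, exactly as the Slater condition predicts for this convex problem.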
As seen, classical Lagrangian duality starts from a primal problem, from which the Lagrangian and the Lagrange dual problem are established.~At a more abstract level, however, an abstract Lagrangian function is used to derive both the primal and dual optimization problems. Here, we briefly discuss an abstract version of Lagrangian duality that is elaborated in greater detail in~\cite{goh2002duality}. In this version, the primal and dual costs are defined through a real-valued abstract Lagrangian function, such that \begin{align*} (\textrm {Primal problem})&~~ \min_{\mathbf{x}\in \mathcal{X}}~ \mathcal{F}(\mathbf{x})~~\textrm{where}~ \mathcal{F}(\mathbf{x})=~\sup_{\mathbf{y}\in \mathcal{Y}}~ \mathcal{L}(\mathbf{x},\mathbf{y}),\\ (\textrm {Dual problem})~&~~ \max_{\mathbf{y}\in \mathcal{Y}}~ \mathcal{G}(\mathbf{y})~~\textrm{where}~ \mathcal{G}(\mathbf{y})=~ \inf_{\mathbf{x}\in \mathcal{X}}~ \mathcal{L}(\mathbf{x},\mathbf{y}), \end{align*} where $\mathcal{L}:\mathcal{X}\times \mathcal{Y} \longrightarrow \mathbb{R}$ is the abstract Lagrangian function, and $\mathcal{X}$ and $\mathcal{Y}$ are appropriate domains defined in some primal and dual spaces, respectively. The supremum can be seen as the least upper bound of a family of affine functions with respect to $\mathbf{x}$ and $\mathbf{y}$. This approach to duality is based on conjugate duality, where a convexity assumption is always made~\cite{rockafellar1974conjugate}.~It also puts a strong emphasis on the minimax and saddle point theorems, which are given below. $\bullet$~\textbf{Minimax Theorem}: This theorem provides the condition that guarantees the \textit{strong max-min property} or the \textit{saddle point property} as follows \begin{equation}\label{F2:8} \sup_{\mathbf{y}\in \mathcal{Y}}~\inf_{\mathbf{x}\in \mathcal{X}}~ \mathcal{H}(\mathbf{x},\mathbf{y})= \inf_{\mathbf{x}\in \mathcal{X}}~ \sup_{\mathbf{y}\in \mathcal{Y}}~ \mathcal{H}(\mathbf{x},\mathbf{y}). 
\end{equation} It should be noted that the above equality, the strong max-min property, holds only in special cases. It holds, for example, when $\mathcal{H}:\mathcal{X}\times \mathcal{Y} \longrightarrow \mathbb{R}$ is the Lagrangian of a problem for which strong duality holds. $\bullet$~\textbf{Saddle Point Theorem}: Under suitable conditions, there exists a \textit{saddle point} of $\mathcal{S}(\cdot)$, i.e., a pair $(\mathbf{x}^{*},\mathbf{y}^{*})\in \mathcal{X}\times \mathcal{Y}$ such that for all~$(\mathbf{x},\mathbf{y})\in \mathcal{X}\times \mathcal{Y}$, \begin{equation}\label{F2-9} \mathcal{S}(\mathbf{x}^{*},\mathbf{y})\leq \mathcal{S}(\mathbf{x}^{*},\mathbf{y}^{*})\leq \mathcal{S}(\mathbf{x},\mathbf{y}^{*}). \end{equation} In (\ref{F2-9}), $\mathcal{S}:\mathcal{X}\times \mathcal{Y} \longrightarrow \mathbb{R}$ is the Lagrangian of a problem for which strong duality holds. In other words, $\mathcal{S}(\textbf{x}^{*},\textbf{y}^{*}) = \sup_{\textbf{y}\in \mathcal{Y}} ~\mathcal{S}(\textbf{x}^*,\textbf{y})$, and $\mathcal{S}(\textbf{x}^{*},\textbf{y}^{*}) = \inf_{\textbf{x}\in \mathcal{X}}~\mathcal{S}(\textbf{x},\textbf{y}^*)$. This indicates that the strong max-min property (\ref{F2:8}) holds with the common value $\mathcal{S}(\textbf{x}^{*},\textbf{y}^{*})$. \section{Complementary Slackness and KKT Optimality Conditions} Suppose that both the primal and dual optimal values exist and are equal, i.e., strong duality holds. Let $\mathbf{x^*}$ and $(\boldsymbol{\mu}^*,\boldsymbol{\nu}^*)$ be a primal optimal and a dual optimal point, respectively. Then we have \begin{align}\label{F2:10} f(\mathbf{x^*})= \mathcal{D}(\boldsymbol{\mu}^*,\boldsymbol{\nu}^*) \leq f(\mathbf{x^*}) + \sum_{i=1}^{I}\mu^*_{i}g_{i}(\mathbf{x^*}) + \sum_{l=1}^{L}\nu^*_{l}h_{l}(\mathbf{x^*}) \leq f(\mathbf{x^*}). 
\end{align} The first inequality in (\ref{F2:10}) holds since the infimum of the Lagrangian over $\mathbf{x}$ is less than or equal to its value at $\mathbf{x} = \mathbf{x^*}$. The last inequality follows from {${\mu_{i}^* \geq 0,~g_{i}(\mathbf{x^*})\leq0 ,~ \forall i=1,...,I}$}, and {${h_{l}(\mathbf{x^*})=0,~ \forall l=1,...,L}$}. An important conclusion that can be drawn from (\ref{F2:10}) is that \begin{equation} \mu^*_{i}g_{i}(\mathbf{x^*})=0,~~~~ \forall i=1,...,I. \end{equation} This condition is called \textit{complementary slackness}.~It confirms that, under strong duality, one can go from the optimal primal solution to the optimal dual solution, and vice versa. Moreover, complementary slackness can be used to certify that a primal solution is optimal by exhibiting a corresponding dual solution. We now introduce the Karush-Kuhn-Tucker (KKT) conditions, assuming that the objective and all the constraint functions in (\ref{F(2-3)}) are differentiable. As in (\ref{F2:10}), suppose that the primal and dual variables at the optimum, for which strong duality holds, are $\mathbf{x}^{*}$ and $(\boldsymbol{\mu}^{*},\boldsymbol{\nu}^{*})$, respectively. The KKT conditions then read \begin{subequations} \begin{align} g_{i}(\mathbf{x}^{*})&\leq 0,~ \forall i=1,...,I,\\ h_{l}(\mathbf{x}^{*})&=0,~ \forall l=1,...,L,\\ \mu_{i}^{*}&\geq 0, ~ \forall i=1,...,I,\\ \mu_{i}^{*}g_{i}(\mathbf{x}^{*})&=0,~ \forall i=1,...,I,\\ \nabla_{\mathbf{x}}f(\mathbf{x}^{*}) + \sum_{i=1}^{I}\mu_{i}^{*} \nabla_{\mathbf{x}}g_{i}(\mathbf{x}^{*}) + \sum_{l=1}^{L}\nu_{l}^{*} \nabla_{\mathbf{x}}h_{l}(\mathbf{x}^{*})&=0,\label{2-12e} \end{align} \end{subequations} where $\mu_{i}^{*}$ and $\nu_{l}^{*}$ are the elements of the Lagrange multiplier vectors $\boldsymbol{\mu}^{*}$ and $\boldsymbol{\nu}^{*}$,~respectively, and $\nabla_{\textbf{x}}$ denotes the gradient of a function with respect to \textbf{x} in (\ref{2-12e}). 
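The four KKT properties can be verified numerically on a small example (the problem and its multiplier below are illustrative choices, not taken from the text):

```python
# KKT check for the toy problem  min x^2  s.t.  g(x) = 1 - x <= 0,
# whose optimum is x* = 1 with Lagrange multiplier mu* = 2.

x_star, mu_star = 1.0, 2.0

g = 1.0 - x_star                  # inequality constraint value at x*
grad_f = 2.0 * x_star             # gradient of the objective x^2
grad_g = -1.0                     # gradient of the constraint (1 - x)

assert g <= 0.0                                  # primal feasibility
assert mu_star >= 0.0                            # dual feasibility
assert abs(mu_star * g) < 1e-12                  # complementary slackness
assert abs(grad_f + mu_star * grad_g) < 1e-12    # stationarity, cf. (2.12e)
```

Here the constraint is active ($g(x^*)=0$), so complementary slackness permits a strictly positive multiplier; at an inactive constraint the multiplier would have to vanish.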
Note that the KKT conditions are necessary and sufficient for optimality of a convex optimization problem with differentiable objective and constraint functions. If the problem is non-convex, the KKT conditions only provide necessary conditions for optimality, again provided that the objective and constraints are differentiable. \section{Interior-Point Methods} The literature on interior-point methods is very extensive, and research is still flourishing; this paragraph can only serve as a very condensed introduction. Interior-point methods form one branch in the classification of convex optimization algorithms, solving linear and nonlinear convex optimization problems. For some problems, such as quadratic optimization problems with linear equality constraints, the KKT conditions reduce to a collection of linear equations that can be solved analytically. Interior-point methods, on the other hand, solve an optimization problem with linear equality and inequality constraints by reducing it to a sequence of relaxed problems with only linear equality constraints. Such methods are called ``interior-point methods'' because they begin their search for an optimal solution in the interior of the feasible region and travel along a path towards the boundary, converging at the optimum. \section{MM Approach and D.C. Programming} MM algorithms are an appropriate tool for reducing a given optimization problem to a series of simpler problems. In this sense, an MM algorithm is not a single algorithm, but rather a principled way of designing optimization algorithms for high-dimensional settings, where the classical methods of optimization do not work well. MM algorithms are not new. The celebrated Expectation Maximization algorithm is a particular case of MM algorithms that is extensively used in electrical engineering applications and in other fields. 
The reason for the MM acronym is two-fold. An MM algorithm operates on a more straightforward surrogate function that majorizes/minorizes (the first M of MM) the objective function in a minimization/maximization (the second M of MM) optimization problem. Thus, MM stands for either \textit{Majorization Minimization} or \textit{Minorization Maximization}, depending on the application. In the next few paragraphs, we consider a majorization minimization problem to explain how the algorithm works. Consider the following optimization problem \begin{align} \min_{\textbf{x}\in \mathcal{X}}~&f(\textbf{x}), \end{align} where $\textbf{x}$ is the optimization variable vector belonging to the feasible set ${\mathcal {X}}$. A surrogate function $g(\textbf{x}|\textbf{x}^n)$ majorizes the function $f(\textbf{x})$ at $\textbf{x}^n$ if it satisfies the two conditions \begin{align} f(\textbf{x}^n) &= g(\textbf{x}^n|\textbf{x}^n), \label{F2:14}\\ f(\textbf{x}) & \leq g(\textbf{x}|\textbf{x}^n), ~~~\textbf{x} \neq \textbf{x}^n. \label{F2:15} \end{align} The first condition (\ref{F2:14}) is called the tangency condition at the current iterate: it guarantees that $g(\textbf{x}|\textbf{x}^n)$ touches $f(\textbf{x})$ at $\textbf{x}^{n}$.~The second condition (\ref{F2:15}), on the other hand, ensures that $g(\textbf{x}|\textbf{x}^n)$ dominates $f(\textbf{x})$, in the sense that it always lies above the graph of $f(\textbf{x})$ except at $\textbf{x}^{n}$. Besides, if a function $g(\textbf{x}|\textbf{x}^n)$ majorizes the function $f(\textbf{x})$ at $\textbf{x}^{n}$, it is easy to see that $-g(\textbf{x}|\textbf{x}^n)$ minorizes $-f(\textbf{x})$. Another very important property of MM algorithms is the descent property. Starting from an initial point $\textbf{x}^{0}\in \mathcal{X}$ in the feasible set ${\mathcal {X}}$, an MM algorithm generates a sequence of feasible points $\textbf{x}^{n}$. 
At point $\textbf{x}^{n}$ in the majorization step, a continuous surrogate function is constructed that satisfies the domination condition in (\ref{F2:15}) \begin{equation}\label{F2:16} g(\textbf{x}|\textbf{x}^{n})\geq f(\textbf{x}) + g(\textbf{x}^{n}|\textbf{x}^{n})- f(\textbf{x}^{n}),~\textbf{x} \neq \textbf{x}^{n}. \end{equation} Then, in the minimization step, the following update rule is applied \begin{equation}\label{F2:17} \textbf{x}^{n+1}\in \arg\min_{\textbf{x}\in \mathcal{X}} g(\textbf{x}|\textbf{x}^{n}). \end{equation} It is easy to show that the generated sequence $f(\textbf{x}^{n})$ is non-increasing. Indeed, we have \begin{equation}\label{F2:18} f(\textbf{x}^{n+1}) \leq g(\textbf{x}^{n+1}|\textbf{x}^{n}) - g(\textbf{x}^{n}|\textbf{x}^{n}) + f(\textbf{x}^{n}) \leq g(\textbf{x}^{n}|\textbf{x}^{n})- g(\textbf{x}^{n}|\textbf{x}^{n}) + f(\textbf{x}^{n}) = f(\textbf{x}^{n}), \end{equation} where the first inequality comes from (\ref{F2:16}) evaluated at $\textbf{x}^{n+1}$, and the second inequality is a direct consequence of (\ref{F2:17}). The property in (\ref{F2:18}), the descent property, gives remarkable numerical stability to MM algorithms. Hence, instead of minimizing the cost function $f(\textbf{x})$ directly, MM algorithms stably optimize a sequence of tractable surrogate objective functions $g(\textbf{x}|\textbf{x}^{n})$ that majorize $f(\textbf{x})$ as tightly as possible. MM algorithms can easily be connected to other algorithmic frameworks \cite{MM,Sequential_convex,yuille2003concave}. One application area of MM algorithms is \textit{Difference of Convex functions} (D.C.) programming problems. The general form of D.C. 
programs is \begin{align}\label{F2:19} &\min_{\textbf{x}}~ f_{0}(\textbf{x})- h_{0}(\textbf{x})\\ \textit{s.t.:}~ &f_{i}(\textbf{x})- h_{i}(\textbf{x})\leq 0, ~\forall i=1,...,m, \end{align} where the $f_{i}$'s and $h_{i}$'s are all convex functions.~We further assume that the $f_{i}$'s and $h_{i}$'s are twice differentiable and, without loss of generality, strictly convex in the sense of (\ref{F2:1}). Among the various algorithms with desirable properties for the solution of D.C. problems, the MM scheme, which solves a sequence of convex problems obtained by linearizing the non-convex parts in the objective function as well as in the constraints, is preferred. Accordingly, an approximate solution of (\ref{F2:19}) can be found by iteratively solving the following convex subproblem \begin{align} &\min_{\textbf{x}}~ g_{0}(\textbf{x}|\textbf{x}^{n})\\ \textit{s.t.:}~ &g_{i}(\textbf{x}|\textbf{x}^{n})\leq 0, ~\forall i=1,...,m, \end{align} where \begin{equation} g_{i}(\textbf{x}|\textbf{x}^{n})= f_{i}(\textbf{x})- \Big( h_{i}(\textbf{x}^{n}) + \nabla_{\textbf{x}} h_{i} (\textbf{x}^{n})^{T}(\textbf{x}-\textbf{x}^{n}) \Big),~~~ \forall i \in \{0,...,m\}. \end{equation} This approximation satisfies the MM principle and is a tight upper bound of $f_{i}-h_{i}$, with equality achieved at $\textbf{x}=\textbf{x}^{n}$; it follows directly from applying the first-order lower bound (\ref{F2:2}) to each convex $h_{i}$. This technique is used several times throughout the thesis. The solution methodology for the MM algorithm is summarized in \text{\textbf{Algorithm~\ref{Alg:chap2}}}. A valid question at this point is how good the convergence behavior of MM algorithms is; for the answer, the interested reader is referred to \cite{SCA2014MR,General_inner,Local_convergence}. 
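The D.C. linearization above can be run on a one-dimensional example; the choice $f_0(x)=x^4$, $h_0(x)=2x^2$ (both convex) is illustrative, not from the text. Linearizing $h_0$ at $x^n$ gives the surrogate $g(x|x^n)=x^4-2(x^n)^2-4x^n(x-x^n)$, whose minimizer satisfies $4x^3=4x^n$, i.e., the closed-form update $x^{n+1}=(x^n)^{1/3}$:

```python
# MM applied to the D.C. program  min_x f0(x) - h0(x)  with
# f0(x) = x^4, h0(x) = 2x^2, i.e., the double-well f(x) = x^4 - 2x^2
# whose minima are at x = +/-1 with value -1.

def f(x):
    return x ** 4 - 2.0 * x * x

x = 0.2                           # feasible starting point x^0
values = [f(x)]
for _ in range(40):
    x = x ** (1.0 / 3.0)          # closed-form minimizer of the surrogate
    values.append(f(x))

# Descent property (2.18): the objective sequence never increases.
for prev, curr in zip(values, values[1:]):
    assert curr <= prev + 1e-12

# The iterates converge to the stationary point x = 1, f(1) = -1.
assert abs(x - 1.0) < 1e-6
assert abs(f(x) - (-1.0)) < 1e-6
```

Each surrogate is convex even though $f$ is not, and the monotone decrease of `values` is exactly the descent property guaranteed by (\ref{F2:18}).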
\begin{algorithm*}[t] \caption{The MM Approach} \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\{ \begin{addmargin}[1em]{0em} {iteration index $n=0$ with the maximum number of iterations $N_{\max}$ \\ and find a feasible point $\mathbf{x}^{0}$.} \end{addmargin}} \STATE \textbf{repeat} \STATE{ \begin{addmargin}[1em]{0em} Solve the optimization problem (\ref{F2:17}) given $\mathbf{x}^{n}$ and store the solution as $\mathbf{x}$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set $n=n+1$ and $\mathbf{x}^{n}=\mathbf{x}$. \end{addmargin}} \STATE \textbf{until} some convergence criterion is met or $n=N_{\max}$ \STATE \textbf{return} the final iterate $\mathbf{x}$ \end{algorithmic} \label{Alg:chap2} \end{algorithm*} \section{Optimization Packages} Many optimization tools and packages exist for solving a given optimization problem. This paragraph introduces a few of the most popular ones. GLPK is a package designed for solving large-scale linear programming (LP), mixed-integer programming (MIP), and related problems. The Gurobi optimizer is a well-known commercial solver for LP, quadratic programming (QP), and MIPs (including mixed-integer linear programming (MILP), mixed-integer quadratic programming (MIQP), and mixed-integer quadratically constrained programming (MIQCP)). The Mosek solver is another widely used optimization package that handles LP, QP, MIP, second-order cone programming (SOCP), and semi-definite programming (SDP). SeDuMi and SDPT3 are two other leading solvers for SDPs. The list of packages goes on, with new packages continually being added and the well-known ones continuously updated to meet the ever-rising demand for solving problems of very large dimensions at high speed. The purpose of this thesis is not to develop or improve such optimization solvers, but rather to use the existing ones to solve the problems at hand. 
A whole different research domain investigates and designs optimization solvers for specific needs. \chapter{SWIPT in Single Small-Cell Networks} \label{CHAP3} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{\leftmark}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{\leftmark}} \fancyfoot[LE,RO]{JFMJ} \vspace{15mm} As explained in the introductory chapter, wireless power transfer (WPT) provides wireless devices with continuous and stable energy. Radio frequency (RF)-enabled simultaneous wireless information and power transfer (SWIPT) makes it possible to transfer information and power simultaneously over a single radio waveform. Consequently, SWIPT-based networks offer exceptional benefits to users by utilizing radio signals to transfer both energy and information. SWIPT is attracting ever more attention due to its ability to provide green communication services more efficiently by broadcasting information and power on orthogonal and non-orthogonal resources. Furthermore, unlike traditional wireless systems, resource allocation for SWIPT-enabled networks must carefully take into account the performance of both information and power transfer. In this sense, resource allocation in SWIPT systems can improve network performance and make maximum use of network resources while satisfying quality-of-service requirements by flexibly allocating and dynamically adjusting the network's available resources. A great deal of research has been conducted on resource allocation for SWIPT in various types of wireless communication networks. 
In \cite{Simultaneous_Information}, for instance, the focus was on resource allocation based on orthogonal frequency division multiplexing (OFDM) for different system configurations in a SWIPT-enabled mobile network, maximizing throughput using a separated receiver architecture. The optimal design for maximizing the weighted sum data-rate over all users for SWIPT in downlink (DL) OFDM systems was explored in \cite{18}. There, users harvest energy and decode information using the same received signals that carry both energy and information from a fixed access point (AP), and each user applies either a power splitting (PS) or time switching (TS) receiver architecture to coordinate the energy harvesting (EH) and information decoding (ID) operations. The authors in \cite{chap3:8276564} investigated resource allocation for maximizing the effective capacity and effective energy efficiency (EE) of a DL multi-user OFDM system by considering both PS and TS architectures for receivers using SWIPT. In \cite{Wireless_Information}, the optimal resource allocation algorithm design to maximize the EE of data transmission was studied by considering PS hybrid receivers in an orthogonal frequency division multiple access (OFDMA) network. Most of the previous works investigated SWIPT systems based on a splitter using the PS or TS architecture to separate the received signal for either the EH or the ID operation. However, \cite{IA} considered SWIPT-assisted joint subcarrier and power allocation based on an OFDM system using a heuristic solution methodology in which neither a time nor a power splitter was used to maximize EH. In this chapter, we aim to fill the knowledge gap in the existing literature by improving the paradigm of resource allocation policy in SWIPT networks without the need for a splitter in a co-located architecture. 
In this chapter, we study a simple scenario often found in the literature that captures the essential characteristics of SWIPT-enabled networks: the optimal design of resource allocation in a multi-user OFDMA network with SWIPT, where the same signals received from a fixed AP are used to perform both the EH and ID operations. The complexity of the receiver is significantly reduced because no time or power splitter is needed. The resource allocator only needs to know which group of subcarriers is allocated for EH and which for ID; this information is derived from the channel state information (CSI) and the relevant algorithm. We do not address the effect of interference in this chapter; instead, we investigate the problem of resource allocation design in a single small-cell network, which allows us to focus on maximizing the energy harvested by all users. More specifically, the problem of joint power allocation and subcarrier assignment is investigated in order to maximize the harvested energy while fulfilling the minimum data-rate requirement of each user. Since each subcarrier can be configured independently in OFDMA systems, information and power are transferred separately on different subcarriers with different waveforms. We thus determine a resource allocation policy that includes joint power allocation and subcarrier assignment algorithms for the DL of a single small-cell multi-user OFDMA network with SWIPT. The underlying optimization problem is a non-convex mixed-integer program. To tackle it, we employ the minorization maximization (MM) optimization approach to obtain a close-to-optimal resource allocation policy. Simulation results demonstrate that our proposed algorithm achieves excellent performance compared to other possible solutions presented in the literature. 
\section{System Model} We consider the DL of an OFDMA network in a single small-cell scenario consisting of an indoor AP and $K$ co-located users, as shown in Figure~\ref{fig:3.1}. In particular, users receive the intended signal from the AP for the EH and ID operations simultaneously. Furthermore, the AP and all the receivers are equipped with a single antenna in this configuration. We additionally assume that the entire frequency band $\mathscr{B}$ is partitioned into $N$ subcarriers, each having a bandwidth of $\mathscr{W}$. A portion of the spectrum is used for ID while the remaining portion is exploited for EH, which requires two separate filters at the receivers~\cite{4623916}. Moreover, the sets of users and subcarriers are denoted by $\mathcal{K}=\{1,2,...,K\}$ and $\mathcal{N}=\{1,2,...,Z,Z+1,...,N\}$, respectively. Let $\mathcal{N}_i=\{1,2,...,Z\}$ denote the set of subcarriers for ID, whereas the remaining subcarriers {$\mathcal{N}_{e}=\mathcal{N}-\mathcal{N}_{i}=\{Z+1,Z+2,...,N\}$} form the set of subcarriers for EH. Note that the optimal value of $Z$, i.e., the cardinality of the set $\mathcal{N}_i$, can be obtained as in \cite{IA}. However, we choose not to include the problem of finding the optimal $Z$ in our proposed problem design for the sake of avoiding repetition. \begin{figure}[h] \centering \includegraphics[width=17cm,trim=4 4 4 4,clip]{figures/chap3/SWIPT_single_cell_F1.pdf} \caption{SWIPT in a DL of a co-located multi-user small single-cell OFDMA network.} \label{fig:3.1} \end{figure} We further assume that all subcarriers are perfectly orthogonal to one another, so that no inter-subcarrier interference exists. Hence, the subcarrier assignment variable is given by \begin{equation*} a_{n,k} = \begin{cases} 1, & \text{if subcarrier $n$ is assigned to user \textit{k}}, \\ 0, & \text{otherwise}. 
\end{cases} \end{equation*} Let $h_{n,k}$ denote the DL channel coefficient from the AP to the $k^{th}$ user over subcarrier $n$. We assume that perfect CSI is available at a centralized resource allocator to design the resource allocation policy. Specifically, it is presumed that the AP broadcasts orthogonal preambles (pilot signals) in the DL to the users. Then, through a feedback channel, each user estimates the CSI and transfers this information back to the AP. Afterward, the AP listens to the sounding reference signals communicated by the users and sends the CSI to the centralized controller for resource allocation design. Now, by denoting $p_{n,k}$ as the DL transmit power of the AP to the $k^{th}$ user over subcarrier $n$, the DL signal-to-noise ratio (SNR) of user $k$ on subcarrier $n$ is defined as \begin{equation} \label{F3-1} \gamma_{n,k}= \frac{a_{n,k}p_{n,k}|h_{n,k}|^2} {\sigma^{2}_{n,k}}, \end{equation} where $\sigma^{2}_{n,k}$ denotes the additive noise power.~More specifically, at receiver $k$, the received signal on each subcarrier is corrupted by noise $n_{n,k}$, modeled as additive white Gaussian noise (AWGN) with a circularly symmetric complex Gaussian distribution of zero mean and variance $\sigma^{2}_{n,k}$, i.e., $n_{n,k} \sim \mathcal{CN}(0,\sigma^{2}_{n,k})$. For the sake of simplicity, we set $\sigma^{2}_{n,k}$ = $\sigma^{2}$ throughout this chapter, meaning that the noise variance is the same over all subcarriers and users. According to the Shannon capacity formula, the data-rate of the $k^{th}$ user over subcarrier $n$ can be expressed as \begin{equation}\label{F3-2} R_{n,k}= \log_2 \bigg(1+\gamma_{n,k}\bigg). \end{equation} To facilitate the presentation, we denote by $\textbf{p} \in \mathbb{R}^{1\times KN}$ and $\textbf{a} \in \mathbb{Z}^{1\times KN}$ the vectors of power allocation and subcarrier assignment variables, respectively. 
Consequently, the data-rate of the $k^{th}$ user in the DL is given as \begin{equation} \label{F3-3} R_{k}(\textbf{a},\textbf{p})= \sum_{n\in\mathcal{N}_i}R_{n,k}. \end{equation} Furthermore, to guarantee the quality of service (QoS) of the users, a minimum data-rate, denoted by $R_{min}$, should be provided for each user. That is, \begin{equation} \label{F3-5} R_{k}(\textbf{a},\textbf{p})\geq R_{min}, ~ \forall k \in \mathcal{K}. \end{equation} Moreover, the amount of harvested energy can be stated as \begin{equation} \label{F3-4} \textrm{EH}(\textbf{a},\textbf{p})= \sum_{k\in\mathcal{K}} \epsilon_k \bigg( \sum_{n\in\mathcal{N}_e} a_{n,k}p_{n,k}|h_{n,k}|^{2} \bigg), \end{equation} where $\epsilon_k$ is the energy-harvesting efficiency of the $k^{th}$ user, which takes its value in the interval $0 < \epsilon_k < 1$. It should be noted that $\sigma^{2}$ also contributes to the harvested energy in (\ref{F3-4}); however, since its value is very small, it is neglected in the EH formula. \section{Optimization Problem Formulation} The main objective pursued in this chapter is to assign subcarrier(s) and set the transmit power(s) for each user such that the total harvested energy in (\ref{F3-4}) is maximized. Thus, we can formulate the optimization problem as \vspace{-5mm} \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p}} ~ \textrm{EH}(\textbf{a},\textbf{p}) \label{F3-6} \\ s.t.: &~C_{1}: \sum_{k \in \mathcal{K}} a_{n,k}\leq 1, ~~~~~~ \forall n \in \mathcal{N}_i, \label{F3-7} \\ &~C_{2}: \sum_{k \in \mathcal{K}} a_{n,k}\leq 1, ~~~~~~ \forall n \in \mathcal{N}_e, \label{F3-7-1} \\ &~C_{3}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} a_{n,k}p_{n,k}\leq p_{max} , \label{F3-8} \\ &~C_{4}: R_{k}(\textbf{a},\textbf{p})\geq R_{min}, ~ \forall k \in \mathcal{K}, \label{F3-9} \\ &~C_{5}: a_{n,k}\in\{0,1\} , ~~~~~~ \forall k \in \mathcal{K} ,~ \forall n \in \mathcal{N} . 
\label{F3-10} \end{align} \label{F3-6new}% \end{subequations} In this optimization problem, $C_{1}$ and $C_{2}$ indicate that each subcarrier can be assigned to at most one user for the ID and EH operations, respectively. $C_{3}$ states the power constraint for the AP with a maximum transmit power allowance of $p_{max}$. The fourth constraint, $C_{4}$, guarantees the QoS of each user.~Finally, the last constraint, $C_{5}$, restricts the subcarrier assignment variables to binary values, i.e., each subcarrier is either selected for a given user or not in the maximization of the harvested energy. One can readily conclude that the optimization problem in~(\ref{F3-6new}) is a non-convex mixed-integer nonlinear programming (MINLP) problem \cite{MINLP} due to the binary constraint for the subcarrier assignment in $C_{5}$ and the non-linearity of the QoS constraint. In general, it is impossible to find an optimal solution of such a problem in polynomial time. However, in the next section, we develop an approach to find a locally optimal solution for the considered system. Furthermore, we propose a suboptimal resource allocation algorithm with polynomial-time computational complexity to strike a balance between complexity and system performance. \section{Solution to the Optimization Problem} In this section, we solve the problem in (\ref{F3-6new}) using the MM approach by constructing a sequence of surrogate functions to approximate the non-convex problem. The MM procedure in our setting, i.e., for problem (\ref{F3-6new}), consists of two major steps. In the first, the minorization step, a surrogate function is found that locally approximates the transformed objective function from below, with equality at the current point. 
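Before turning to the solution, the quantities in (\ref{F3-1})--(\ref{F3-4}) and the constraints of (\ref{F3-6new}) can be evaluated on a small instance; every value below (user count, channel gains, powers, thresholds) is an illustrative assumption, and indices are zero-based for convenience:

```python
import math

# Tiny instance: K = 2 users, N = 4 subcarriers, with N_i = {0, 1} used
# for ID and N_e = {2, 3} used for EH (0-indexed, illustrative values).
K, N = 2, 4
N_i, N_e = [0, 1], [2, 3]
sigma2 = 1e-3                                  # noise power sigma^2
eps = [0.6, 0.8]                               # EH efficiencies, 0 < eps_k < 1
h2 = [[0.9, 0.2, 0.5, 0.1],                    # |h_{n,k}|^2 for user 0
      [0.3, 0.8, 0.2, 0.7]]                    # |h_{n,k}|^2 for user 1

# One feasible assignment: each subcarrier goes to at most one user (C1, C2).
a = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
p = [[0.5, 0.0, 1.0, 0.0],
     [0.0, 0.5, 0.0, 1.0]]

def rate(k):
    # Data-rate of user k, eqs. (3.1)-(3.3): sum over the ID subcarriers.
    return sum(math.log2(1.0 + a[k][n] * p[k][n] * h2[k][n] / sigma2)
               for n in N_i)

# Total harvested energy, eq. (3.4): sum over the EH subcarriers.
EH = sum(eps[k] * sum(a[k][n] * p[k][n] * h2[k][n] for n in N_e)
         for k in range(K))

p_max, R_min = 4.0, 5.0
assert sum(a[k][n] * p[k][n] for k in range(K) for n in range(N)) <= p_max  # C3
assert all(rate(k) >= R_min for k in range(K))                              # C4
assert EH > 0.0
```

This only checks feasibility of one hand-picked $(\textbf{a},\textbf{p})$; the optimization problem asks for the pair that maximizes `EH` over all feasible choices.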
That is to say, the transformed objective function is lower-bounded by the surrogate function. The surrogate function is then maximized in the maximization step. Following this approach, we construct a convex approximation that can be solved globally and efficiently using optimization packages within the MM algorithm. Nonetheless, finding a proper surrogate function that yields a low-complexity algorithm is not a straightforward task. On the one hand, a surrogate function that mimics the form of the objective function and achieves a fast convergence speed is preferable. On the other hand, the surrogate function should be easy to maximize so that the computational cost per iteration remains low. Realizing the appropriate trade-off between these conflicting goals requires experience in applying inequalities to particular problems, as becomes evident in the next subsection. For more details, please refer to~\cite{MM,SCA,ASM,SCA2,SCA3} and the references therein. \subsection{Joint Power Allocation and Subcarrier Assignment Algorithm} In this subsection, we propose a locally optimal solution for the optimization problem in (\ref{F3-6new}). It is worth mentioning that the products of two variables in the objective function of (\ref{F3-6new}), as well as the constraint $C_4$, are the obstacles to the design of a computationally efficient resource allocation algorithm. Since the product of two variables in (\ref{F3-8}), i.e., $a_{n,k}p_{n,k}$, is non-convex, we define the product terms as $\tilde{p}_{n,k}=a_{n,k}p_{n,k}$. To handle this non-convexity, we adopt the big-M formulation \cite{big_M} to decouple the product terms. 
Therefore, the following additional constraints are imposed accordingly as \begin{align} &C_{6}: \tilde{p}_{n,k}\leq p_{max}a_{n,k},~~~~~~~~~~~~~~~~~ \forall k \in \mathcal{K},~ \forall n \in \mathcal{N}, \\ &C_{7}: \tilde{p}_{n,k}\leq p_{n,k},~~~~~~~~~~~~~~~~~~~~~~~ \forall k \in \mathcal{K},~ \forall n \in \mathcal{N}, \\ &C_{8}: \tilde{p}_{n,k}\geq p_{n,k}-(1-a_{n,k})p_{max}, ~ \forall k \in \mathcal{K},~ \forall n \in \mathcal{N}, \\ &C_{9}: \tilde{p}_{n,k}\geq 0,~~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall k \in \mathcal{K},~ \forall n \in \mathcal{N}, \end{align} where $\tilde{\textbf{p}}\in \mathbb{R}^{1\times KN}$ is the collection of all $\tilde{p}_{n,k}$'s.~Accordingly, the original optimization problem in (\ref{F3-6new}) can be equivalently recast as \begin{subequations} \begin{align} &\max_{{\textbf{a}},\textbf{p},\tilde{\textbf{p}}} ~\widehat{\overline{\textrm{EH}}}(\textbf{a},\textbf{p},\tilde{\textbf{p}}) \\ s.t.: &~C_{1}-C_2,C_5-C_9, \\ &~C_{3}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} \tilde{p}_{n,k}\leq p_{max}, \label{F3-22} \\ &~C_{4}: \sum_{ n \in \mathcal{N}}\log_2(1+\frac{\tilde{p}_{n,k}|h_{n,k}|^2}{\sigma^{2}})\geq R_{min},~ \forall k \in \mathcal{K}, \end{align} \label{F3-new20}% \end{subequations} where, by revisiting the definition of the harvested energy in (\ref{F3-4}), the objective function of (\ref{F3-new20}) is as follows \begin{equation} \widehat{\overline{\textrm{EH}}}(\textbf{a},\textbf{p},\tilde{\textbf{p}})= \sum_{k\in\mathcal{K}} \epsilon_k\bigg( \sum_{n\in\mathcal{N}_e} \tilde{p}_{n,k}|h_{n,k}|^{2}\bigg). \end{equation} The optimization problem of (\ref{F3-new20}) is still a non-convex mixed-integer problem, which is complicated to solve. 
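As a quick numerical sanity check (not part of the derivation), the following Python sketch verifies that for a binary $a_{n,k}$ the big-M constraints $C_{6}-C_{9}$ admit only $\tilde{p}_{n,k}=a_{n,k}p_{n,k}$; the function name and tolerances are illustrative.

```python
import numpy as np

def bigM_feasible(p_tilde, a, p, p_max):
    """Check the big-M constraints C6-C9 for a single (n, k) pair."""
    tol = 1e-9
    return (p_tilde <= p_max * a + tol and              # C6
            p_tilde <= p + tol and                      # C7
            p_tilde >= p - (1 - a) * p_max - tol and    # C8
            p_tilde >= -tol)                            # C9

p_max = 1.0
for a in (0, 1):
    for p in np.linspace(0.0, p_max, 5):
        # p_tilde = a*p is always feasible...
        assert bigM_feasible(a * p, a, p, p_max)
        # ...and any value differing by 0.1 violates some constraint.
        for delta in (-0.1, 0.1):
            cand = a * p + delta
            if 0.0 <= cand <= p_max:
                assert not bigM_feasible(cand, a, p, p_max)
print("big-M constraints pin p_tilde to a*p whenever a is binary")
```

This is exactly why the reformulation (\ref{F3-new20}) is equivalent to the original problem: once $a_{n,k}\in\{0,1\}$, the linear constraints $C_{6}-C_{9}$ leave no freedom in $\tilde{p}_{n,k}$.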
To facilitate the solution design, we restate the integer variable in constraint $C_{5}$ as the intersection of the following regions \cite{TWC_Ata,che2014joint} \begin{align} &\dot{C}_{5}: 0\leq a_{n,k}\leq 1,~ \forall k \in \mathcal{K},~ \forall n \in \mathcal{N}, \\ &\ddot{C}_{5}: \sum_{k \in \mathcal{K}} \sum_{n \in \mathcal{N}} a_{n,k}-(a_{n,k})^{2}\leq 0. \end{align} The above transformations make the integer optimization variables continuous, with values between zero and one. Therefore, the original problem in (\ref{F3-new20}) can be rewritten as \begin{align} &\max_{{\textbf{a}},\textbf{p},\tilde{\textbf{p}}} ~ \widehat{\overline{\textrm{EH}}}(\textbf{a},\textbf{p},\tilde{\textbf{p}}) \label{F3-30} \\ s.t.: &~C_1-C_4,\dot{C}_{5},\ddot{C}_{5},C_6-C_9. \nonumber \end{align} Note that the optimization problem in (\ref{F3-30}) is a continuous optimization problem with respect to all variables. However, our concern is to obtain integer solutions for $a_{n,k}$. To this end, we add a penalty term to the objective function. The integer variable is now relaxed to take any value between zero and one. Thereby, the problem can be restated as follows \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda) \label{F3-31} \\ s.t.: &~C_1-C_4,\dot{C}_{5},C_6-C_9, \nonumber \end{align} where $ \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda)$ is the \textit{abstract Lagrangian duality} \cite{rockafellar1974conjugate} associated with (\ref{F3-30}), and is defined as \begin{align} \label{F3-32} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda)= \widehat{\overline{\textrm{EH}}}(\textbf{a},\textbf{p},\tilde{\textbf{p}})- \lambda\bigg(\sum_{k \in \mathcal{K}}\sum_{n\in \mathcal{N}}a_{n,k}-(a_{n,k})^{2}\bigg). \end{align} In (\ref{F3-32}), $\lambda$ acts as a penalty factor to penalize the objective function when $a_{n,k}$ is not binary. 
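To see how the penalty operates, consider a minimal numeric sketch (illustrative only, with a hypothetical helper name): the term $\sum_{k,n} a_{n,k}-(a_{n,k})^{2}$ vanishes exactly at binary points and is strictly positive for any fractional entry in $(0,1)$, so maximizing (\ref{F3-31}) with large $\lambda$ pushes $a_{n,k}$ toward $\{0,1\}$.

```python
import numpy as np

def penalty(a):
    """Penalty term sum(a - a^2) over all entries.

    For entries in [0, 1] it is non-negative, and it is zero
    if and only if every entry is exactly 0 or 1.
    """
    a = np.asarray(a, dtype=float)
    return float(np.sum(a - a**2))

assert penalty([0, 1, 1, 0]) == 0.0           # binary point: no penalty
assert penalty([0.5]) == 0.25                 # worst case at a = 0.5
assert penalty(np.linspace(0.1, 0.9, 9)) > 0  # any fractional entry is penalized
```

Because the penalty is zero on binary points, the penalized objective $\mathcal{L}$ coincides with $\widehat{\overline{\textrm{EH}}}$ whenever the relaxed solution happens to be binary.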
\begin{proposition}\label{weak_duality} For sufficiently large values of $\lambda$, the optimization problem in (\ref{F3-31}) is equivalent to (\ref{F3-30}), with both problems yielding the same optimal results. \end{proposition} \vspace{-2mm} \begin{proof} The proof of~\textbf{Proposition~\ref{weak_duality}} relies on the abstract Lagrangian duality. Accordingly, the primal and dual problems of (\ref{F3-30}) can be written respectively as \vspace{-2mm} \begin{align} (\textrm {Primal problem})& ~~ p^{*}= \max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}}~ \min_{\lambda} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda),\\ (\textrm {Dual problem})~& ~~ d^{*}= \min_{\lambda}~\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda). \end{align} Now, by defining $\mathcal{G}(\lambda)\triangleq\max\limits_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda)$,~the following inequality holds according to the weak duality theorem \begin{eqnarray}\label{F3-36} p^*= \max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}}\min_{\lambda} \mathcal{L}(\textbf{a},\textbf{p},\tilde{\textbf{p}},\lambda) \leq \min_{\lambda\geq 0}\mathcal{G}(\lambda)= d^*. \end{eqnarray} Additionally, it should be pointed out that for $\textbf{a},\textbf{p},\tilde{\textbf{p}}~s.t.:C_1-C_4,\dot{C}_{5},\ddot{C}_{5},C_6-C_9$, two cases can be observed. \textbf{\quad\emph{Case~1}}: In the first case, it is assumed that at the optimal subcarrier allocation point, we have \begin{equation} \ddot{C}_{5}:~ \sum_{k \in \mathcal{K}} \sum_{n\in \mathcal{N}} a_{n,k}-(a_{n,k})^{2}= 0. \end{equation} Consequently, $d^*$ becomes a feasible solution of (\ref{F3-30}). 
Afterward, substituting the optimal value of $\lambda$, i.e., $\lambda^{*}$, into the abstract Lagrangian duality problem of (\ref{F3-31}) results in \begin{equation} d^*= \mathcal{G}(\lambda^*) = \max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \widehat{\overline{\textrm{EH}}}(\textbf{a},\textbf{p},\tilde{\textbf{p}})= p^*. \end{equation} Furthermore, considering the optimization problem in (\ref{F3-31}) and assuring that the subcarrier allocation variable, $a_{n,k}$, takes its values in the region \textbf{a}$~\in \dot{C}_5,\ddot{C}_5$, one can conclude that $\mathcal{G}(\lambda)$ is a monotonically decreasing function with respect to $\lambda$. On the other hand, it can be asserted that $d^*$=$\min_{\lambda\geq 0} \mathcal{G}(\lambda)$. Therefore, we have \begin{equation} \mathcal{G}(\lambda)= d^*, \forall ~\lambda\geq \lambda_{0}, \end{equation} where $\lambda_0$ is a given threshold value. This confirms that for any value of $\lambda\geq\lambda_{0}$, the solution of (\ref{F3-31}) returns the optimal solution of (\ref{F3-30}). \textbf{\quad\emph{Case~2}}: In the second case, it is assumed that the subcarrier allocation variable, $a_{n,k}$, takes some value strictly between zero and one. This is equivalent to satisfying the following inequality \begin{equation} \ddot{C}_{5}:\sum_{k\in \mathcal{K}}\sum_{n\in \mathcal{N}}a_{n,k}-(a_{n,k})^{2}> 0. \end{equation} In this regard, by referring to (\ref{F3-31}) and (\ref{F3-36}), one may conclude that $\mathcal{G}(\lambda^*)$ tends to $-\infty$ at the optimal point.~Nevertheless, this cannot happen: according to (\ref{F3-36}), $\mathcal{G}(\lambda^*)$ is bounded from below by the primal optimal value of (\ref{F3-30}), which is non-negative, leading to a contradiction. Hence, $\sum_{k \in \mathcal{K}}\sum_{n\in \mathcal{N}}a_{n,k}-(a_{n,k})^{2}= 0$.~Therefore, the solution of (\ref{F3-31}) yields the optimal solution of (\ref{F3-30}). 
\end{proof} \vspace{-5mm} Now, we can express the optimization problem in (\ref{F3-30}) in terms of difference of convex functions (D.C.) as follows \begin{align}\label{F3-41} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}}~ \widehat{\overline{\textrm{EH}}} (\textbf{a},\textbf{p},\tilde{\textbf{p}})- \lambda\big(\mathcal{V}(\textbf{a})-\mathcal{W}(\textbf{a})\big) \\ s.t.: &~C_1-C_4,\dot{C}_{5},C_6-C_9, \nonumber \end{align} where \vspace{-5mm} \begin{align} \mathcal{V}(\textbf{a})&= \sum_{k\in\mathcal{K}} \sum_{n\in\mathcal{N}} a_{n,k}, \\ \mathcal{W}(\textbf{a})&= \sum_{k\in\mathcal{K}} \sum_{n\in\mathcal{N}} (a_{n,k})^2, \end{align} are both convex functions, and $\lambda$ is the penalty factor that penalizes the objective function when $a_{n,k}$ is not binary.~Although all terms in the objective function of (\ref{F3-41}) are convex, the difference of two convex functions is not necessarily convex \cite{yuille2003concave,che2014joint,DC}.~To make a convex approximation for the objective function, we adopt the MM algorithm by constructing a surrogate function via a first-order Taylor approximation as \begin{align}\label{F3-42} \mathcal{W}(\textbf{a}) \simeq \mathcal{W}(\textbf{a}^{t-1})+ \nabla_{\textbf{a}}\mathcal{W} (\textbf{a}^{t-1})^{T}(\textbf{a}-\textbf{a}^{t-1}) \triangleq \tilde{\mathcal{W}}(\textbf{a}), \end{align} where $t$ denotes the iteration number, $\textbf{a}^{t-1}$ is the solution of the problem at the $(t-1)^{th}$ iteration, and $\nabla_{\square}$ represents the gradient with respect to ${\square}$.~Approximation (\ref{F3-42}) satisfies the MM criteria and, by the convexity of $\mathcal{W}$, is a tight lower bound of $\mathcal{W}(\textbf{a})$ \cite{MM,DC}.~Therefore, using the MM approach while constructing a sequence of surrogate functions at the $t^{th}$ iteration, we can solve the following convex problem instead of dealing with the non-convex optimization problem in (\ref{F3-41}). 
Thus, we have \begin{align}\label{F3-43} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}}~ \widehat{\overline{\textrm{EH}}} (\textbf{a},\textbf{p},\tilde{\textbf{p}})- \lambda\big( \mathcal{V}(\textbf{a})- \tilde{\mathcal{W}}(\textbf{a}) \big)\\ s.t.: &~C_1-C_4,\dot{C}_{5},C_6-C_9. \nonumber \end{align} It is easy to demonstrate that the optimization problem~(\ref{F3-43}) is convex and can be solved efficiently via the D.C. approximation based on interior-point methods~\cite{MM}.~As a consequence, the solution of (\ref{F3-43}) is an approximation to the solution of the original problem given~in~(\ref{F3-41}). However, in D.C. programming, the iteration begins from a feasible initial point and solves the optimization problem iteratively until it eventually approaches a close-to-optimal solution~\cite{che2014joint,DC,CL_Ata}.~Besides, it must be noticed that the MM approach produces a sequence of improved feasible solutions with the adopted D.C. approximation, which ultimately converges to a locally optimal solution $(\textbf{a}^{*},\textbf{p}^*,\tilde{\textbf{p}}^*)$ using standard convex program solvers such as CVX. \begin{proposition} By incorporating the D.C. approximation, the solution of problem~(\ref{F3-43}) provides a tight lower bound on the optimal value of the original problem (\ref{F3-41}), and this bound is improved at the end of each iteration. \end{proposition} \vspace{-2mm} \begin{proof} In the $t^{th}$ iteration, the objective function of (\ref{F3-41}) is \begin{equation*} \widehat{\overline{\textrm{EH}}} (\textbf{a}^t,\textbf{p}^t,\tilde{\textbf{p}}^t)- \lambda\big(\mathcal{V}(\textbf{a}^t)- \mathcal{W}(\textbf{a}^t)\big). 
\end{equation*} Subsequently, in the next iteration, we have \begin{align} \widehat{\overline{\textrm{EH}}} (\textbf{a}^{t+1},&\textbf{p}^{t+1},\tilde{\textbf{p}}^{t+1})- \lambda\big(\mathcal{V}(\textbf{a}^{t+1})- \mathcal{W}(\textbf{a}^{t+1})\big) \nonumber\\& \geq \widehat{\overline{\textrm{EH}}} (\textbf{a}^{t+1},\textbf{p}^{t+1},\tilde{\textbf{p}}^{t+1})- \lambda\big(\mathcal{V}(\textbf{a}^{t+1})- \mathcal{W}(\textbf{a}^{t})- \nabla_{\textbf{a}}\mathcal{W} (\textbf{a}^{t})^{T}(\textbf{a}^{t+1}-\textbf{a}^{t})\big) \nonumber\\ \nonumber&= \max_{{\textbf{a}},{\textbf{p}}, \tilde{\textbf{p}}} \widehat{\overline{\textrm{EH}}} (\textbf{a},\textbf{p},\tilde{\textbf{p}})- \lambda\big(\mathcal{V}(\textbf{a})- \mathcal{W}(\textbf{a}^{t})- \nabla_{\textbf{a}}\mathcal{W} (\textbf{a}^{t})^{T}(\textbf{a}-\textbf{a}^{t})\big) \\ &\geq \widehat{\overline{\textrm{EH}}}(\textbf{a}^{t},\textbf{p}^{t},\tilde{\textbf{p}}^{t})- \lambda\big(\mathcal{V}(\textbf{a}^{t})- \mathcal{W}(\textbf{a}^{t})- \nabla_{\textbf{a}}\mathcal{W} (\textbf{a}^{t})^{T}(\textbf{a}^{t}-\textbf{a}^{t})\big) \nonumber\\ \nonumber & = \widehat{\overline{\textrm{EH}}} (\textbf{a}^{t},\textbf{p}^{t},\tilde{\textbf{p}}^{t})- \lambda\big(\mathcal{V}(\textbf{a}^{t})- \mathcal{W}(\textbf{a}^{t})\big). \end{align} Here, the first inequality follows from the convexity of $\mathcal{W}(\textbf{a})$, the first equality holds since $(\textbf{a}^{t+1},\textbf{p}^{t+1},\tilde{\textbf{p}}^{t+1})$ is the maximizer of the surrogate problem (\ref{F3-43}), and the second inequality follows by evaluating the surrogate objective at $(\textbf{a}^{t},\textbf{p}^{t},\tilde{\textbf{p}}^{t})$. This completes the proof. \end{proof} \vspace{-5mm} One can readily verify that the objective function of (\ref{F3-43}) takes non-decreasing values as the iterations continue. Hence, we adopt an iterative solution to tighten the obtained lower bound based on \textbf{Algorithm~\ref{alg2D.C.}}. \begin{algorithm}[H] \caption{Proposed Iterative Method via D.C. Programming Based on the MM Approach} \label{alg2D.C.} \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\{ \begin{addmargin}[1em]{0em} {MM iteration index $t=0$ with maximum number of MM iterations $T_{max}$, \\ feasible set vector $\mathbf{a}^{0}$, $\mathbf{p}^{0}$, and $\tilde{\mathbf{p}}^{{0}}$},\\ and the penalty factor $\lambda\gg1$. 
\end{addmargin}} \STATE{\textbf{repeat} } \STATE{ \begin{addmargin}[1em]{0em} Update $\tilde{\mathcal{W}}(\textbf{a})$ based on (\ref{F3-42}). \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Solve the optimization problem of (\ref{F3-43}) and store the intermediate resource allocation policy $\textbf{a}^{t}$, $\textbf{p}^{t}$, and $\tilde{\textbf{p}}^t$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set $t=t+1$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set \{$\mathbf{a}^t,\mathbf{p}^t$,$\tilde{\mathbf{p}}^t$\}= \{$\mathbf{a},\mathbf{p}$,$\tilde{\mathbf{p}}$\}. \end{addmargin}} \STATE \textbf{until} Convergence or $t=T_{max}$ \STATE \textbf{return} \{$\mathbf{a}^{*},{\mathbf{p}}^{*}$,$\tilde{\mathbf{p}}^*$\} $=$ \{$\mathbf{a}^{t},{\mathbf{p}}^{t}$,$\tilde{\mathbf{p}}^t$\} \end{algorithmic} \end{algorithm} \section{Complexity Analysis} As can be observed, the joint optimization problem in (\ref{F3-43}) involves $KN$ variables and $N+K+5NK$ linear constraints. Consequently, it can be concluded that the overall computational complexity of the optimization problem is $\mathcal{O}\big((NK)^{2}(N+K+5NK)\big)$. This is asymptotically equal to $\mathcal{O}\big((NK)^{3}\big)$, exhibiting a polynomial-time complexity. 
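The mechanics of \textbf{Algorithm~\ref{alg2D.C.}} can be illustrated on a toy instance. The sketch below (not the thesis implementation) keeps only the box constraint $\dot{C}_{5}$ and the penalty, and drops all coupling constraints and the CVX solver, so that each surrogate maximization, being linear in $\textbf{a}$, has a closed-form per-coordinate solution; the cost vector \texttt{c} and all names are hypothetical.

```python
import numpy as np

def W(a):
    # Convex part of the penalty, W(a) = sum a^2.
    return float(np.sum(a**2))

def W_tilde(a, a_t):
    # First-order Taylor expansion of W around a_t; by convexity of W,
    # this is a global under-estimator, tight at a_t.
    return W(a_t) + float(2 * a_t @ (a - a_t))

def mm_binary_relaxation(c, lam=10.0, T_max=50):
    """Toy MM loop: maximize c@a - lam*(sum(a) - W(a)) over a in [0,1]^n.

    Replacing W by W_tilde makes each iteration a linear program whose
    per-coordinate optimum on [0, 1] is bang-bang (0 or 1).
    """
    a = np.full(len(c), 0.5)                # fractional starting point
    objs = []
    for _ in range(T_max):
        coeff = c - lam + 2 * lam * a       # surrogate coefficient of a_i
        a_new = (coeff > 0).astype(float)   # maximize linear fn over [0,1]^n
        objs.append(float(c @ a_new - lam * (np.sum(a_new) - W(a_new))))
        if np.array_equal(a_new, a):
            break
        a = a_new
    return a, objs

rng = np.random.default_rng(0)
a_t = rng.uniform(0, 1, 8)
for a in rng.uniform(0, 1, (50, 8)):
    assert W_tilde(a, a_t) <= W(a) + 1e-12   # minorization property of (F3-42)

a_star, objs = mm_binary_relaxation(np.array([3.0, -1.0, 0.5, -2.0]))
assert set(a_star) <= {0.0, 1.0}                           # binary at convergence
assert all(x <= y + 1e-9 for x, y in zip(objs, objs[1:]))  # monotone ascent
```

With a large penalty factor the iterates snap to a binary point, mirroring the role of $\lambda\gg1$ in the full algorithm; the real problem additionally requires a convex solver to handle $C_1-C_4$ and $C_6-C_9$ at each iteration.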
\section{Simulation Results} \begin{table}[t] \centering \caption{Simulation Parameters} \label{chap3:Simulation_Parameters} \begin{tabular}{|c|c|}\hline {\bf Parameter} & {\bf Value} \\ \hline \hline {Cell coverage ($d_{max}$)} & {$20$ m}\\ {Reference distance ($d_{0}$)}&{$5$ m}\\ {The number of users ($K$)} & {$4$}\\ {The number of subcarriers ($N$)} & {$16$} \\ {Noise power ($\sigma^2$)} & {$-120$} dBm \\ {The bandwidth of each subcarrier} & {$180$} kHz \\ {Path loss exponent ($\alpha$)} & {$2.76$} \\ {Path loss model for cellular links} & {$31.7+27.6\log(\frac{d}{d_0})$} \\ {Multi-path fading distribution} & {Rician fading with factor $3$ dB} \\ {Power conversion efficiency ($\epsilon$)} & {$30\%$} \\ {The maximum transmit power of the AP ($p_{max}$)} & {$30$ dBm} \\ {The minimum data-rate requirement for the $k^{th}$ user (${R}_\textnormal{min}$)} & {$1$ bps$/$Hz} \\ {Channel realization number} & $100$\\ \hline \end{tabular} \end{table} In this section, the performance gain of the proposed joint subcarrier assignment and power allocation algorithm for SWIPT in the DL direction of a single-cell multi-user OFDMA system is evaluated through extensive simulations. The radius of the cell, $d_{max}$, is 20 meters, with a reference distance, $d_{0}$, of 5 meters. Moreover, there are $K = 4$ users, uniformly and randomly located between the reference distance, $d_{0}$, and the maximum coverage of the small-cell, $d_{max}$. We further assume a frequency-selective fading channel, with the central carrier frequency set to 3 GHz and the bandwidth of each subcarrier being {\text{180 kHz}}. The number of subcarriers is $N=16$, where the optimal set cardinality of subcarriers for ID and EH is determined based on~\cite{IA}. 
The variance of the background noise at the receiver is equal to ${\sigma_{n,k}^2=\sigma^2=-120}$ dBm throughout the simulations.~Since a line-of-sight (LoS) signal is expected in the received signal, the small-scale fading channel is modeled as Rician fading with Rician factor $\rho=3$ dB. Besides, the Rician flat fading channel gains include a distance-dependent path loss model of $31.7+10 \alpha \log(\frac{d}{d_0})$ [dB] (where $d$ is the distance between the transmitter and the receiver) and a log-normal shadowing component with~$8$ dB standard deviation, where the path loss exponent is equal to $\alpha = 2.76$~\cite{339880}. These parameters for propagation modeling and simulations follow the suggestions in the 3GPP evaluation methodology~\cite{chap3:3GPP}.~The power conversion efficiency of all users, $\epsilon_k$, is assumed to be the same and is equal to $\epsilon_k = \epsilon = 0.3 $.~The target transmission rate is {\text{$R_{min}=1$ bit/second/Hz (bps/Hz)}} unless otherwise stated.~Moreover, we conduct Monte Carlo simulations by generating random realizations of the channel gains to obtain the average harvested energy of the network. Finally, the settings summarized in \textbf{Table}~(\ref{chap3:Simulation_Parameters}) are used unless otherwise specified. \subsection{Total Harvested Energy versus the Maximum Transmit Power} Figure~(\ref{plot:3.1}) shows the total harvested energy versus the maximum transmit power $p_{max}$. As can be observed, the average total harvested energy grows monotonically as the maximum transmit power increases. Besides, the harvested energy at lower values of the maximum transmit power is low compared with that at higher values. This is due to the inability of the AP to contribute to energy harvesting, as it is forced to ensure the QoS requirements. 
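For reproducibility, the link-budget numbers in Table~(\ref{chap3:Simulation_Parameters}) can be sanity-checked with a short script; the helper names below are illustrative, and small-scale Rician fading is omitted for brevity.

```python
import numpy as np

def path_loss_db(d, d0=5.0, alpha=2.76):
    """Distance-dependent path loss 31.7 + 10*alpha*log10(d/d0) [dB].

    With alpha = 2.76 this matches the tabulated model
    31.7 + 27.6*log10(d/d0).
    """
    return 31.7 + 10 * alpha * np.log10(d / d0)

# At the reference distance d0 only the constant term remains.
assert abs(path_loss_db(5.0) - 31.7) < 1e-12
# At the cell edge (d_max = 20 m): 31.7 + 27.6*log10(4) dB.
assert abs(path_loss_db(20.0) - (31.7 + 27.6 * np.log10(4))) < 1e-9

def channel_gain(d, sigma_shadow_db=8.0, rng=np.random.default_rng(1)):
    """Linear channel gain with log-normal shadowing (8 dB std)."""
    pl_db = path_loss_db(d) + rng.normal(0.0, sigma_shadow_db)
    return 10 ** (-pl_db / 10)

g = channel_gain(10.0)
assert g > 0.0
```

Averaging such gains over many random realizations is what the 100 channel realizations of Table~(\ref{chap3:Simulation_Parameters}) accomplish in the Monte Carlo simulations.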
Consequently, for the higher values of the maximum transmit power, yet with the same data-rate requirement, the AP can help users to harvest more energy since fewer subcarriers are assigned to ID and more to EH. Hence, excess subcarriers are utilized to provide more harvested energy. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap3/Fig_3_1.pdf} \caption{Total harvested energy versus the maximum transmit power.} \label{plot:3.1} \end{figure} For comparison, the performance of the proposed joint optimization algorithm, Method~A, is compared with four benchmark algorithms, Methods~B-E. Method B is the proposed method in \cite{IA}.~In this method, a subset of subcarriers is assigned to EH to maximize the harvested energy based on the resource allocation design. On the other hand, the remaining ones are allocated for ID with an imposed QoS requirement. Method~C examines the proposed method in~\cite{18}, in which each subcarrier set is divided into two groups. Specifically, one group is employed for EH, and the other one is utilized for ID while considering fixed PS ratios. Method~D is our proposed method with equal power allocation. This method is based on the proposed algorithm for the subcarrier assignment, while equal power is allocated across subcarriers for each user. Method~E is our proposed method with an equal power allocation alongside a random subcarrier assignment. This method considers equal power, where the subcarrier assignment is done randomly to meet the data-rate requirement. Once the minimum data-rate requirement is satisfied, the remaining subcarriers are assigned to EH. It can be seen that our proposed method outperforms the other benchmark algorithms due to the joint optimization framework. As can be seen, Method B performs better than Method~C. 
This is because Method B considers the power allocation and subcarrier assignment based on a heuristic search, while in Method~C, each subcarrier is split into two streams with a fixed PS ratio. In particular, in Method C, the receiver does not adapt to the channel condition of each subcarrier, and each subcarrier splits the same ratio of power for ID and EH. This results in the degradation of the performance gain and comparatively less harvested energy. Another interesting observation is that considering a joint subcarrier assignment and power allocation in the optimization problem leads to a remarkable performance enhancement, since the resource allocator carefully mitigates the harmful effect of the deep fades experienced in the wireless channel. It should be noted that the reason Method~E performs worse than the rest is that the subcarriers are assigned randomly, together with an equal power assignment to users. The performance improves, however, if the subcarriers are assigned according to the designed resource allocation policy, as is the case in Method~D. \subsection{Average Harvested Energy versus Distance} Figure~\ref{plot:3.2} depicts the harvested energy versus the distance between the transmitter and the receiver. It is observed that as the distance increases, the harvested energy decreases. The reason for this is that, by increasing the distance, the channel becomes weaker, and subsequently, more subcarriers with more power need to be assigned to meet the minimum required data-rate. Hence, less energy would be harvested by the users, since the AP first needs to consume power to secure the quality of experience provisioning. Once the minimum data-rate is met, the rest of the subcarriers would be exploited for EH. It should be noted that the Methods A-E are the same as defined in the last subsection. 
The explanation for the superiority of Method A remains the same: the joint optimization of the network resource allocation. \begin{figure}[!h] \centering \includegraphics[width=12cm]{figures/chap3/Fig_3_2.pdf} \caption{Average harvested energy versus distance.} \label{plot:3.2} \end{figure} \subsection{Average Harvested Energy versus Number of Iterations} Figure~(\ref{plot:3.3}) illustrates the convergence of the proposed algorithm for different initializations of the power allocation. It can be observed that the average harvested energy rises gradually as the number of iterations increases and tends to a stationary point, no matter how the power is initially allocated among users over all subcarriers. This is because a more accurate, locally optimal solution can be found by increasing the number of iterations. Likewise, this is equivalent to saying that the algorithm is getting closer to satisfying the convergence requirements. In the figure, we compare three initial selections for the power control: $\mathbf{p}^0(i) = \frac{p_{max}}{N}$, i.e., equal power is assigned to all users over all subcarriers, be it for an EH or an ID operation; $\mathbf{p}^0(i) = p_{max}$, i.e., all subcarriers have the maximum power; and $\mathbf{p}^0(i) = 0$, i.e., no power is assigned to the subcarriers to start with. Note that the last two power assignments do not satisfy the constraints of the problem, e.g., the power constraint ${C_{3}}$ in (\ref{F3-22}). Moreover, even though these curves are obtained for different initial power allocations, they all converge to almost the same value. 
However, the convergence rate differs significantly across the different power allocation settings: while the initial setting $\mathbf{p}^0(i) = 0$ converges the slowest to the optimal harvested energy, the setting $\mathbf{p}^0(i)=\frac{p_{max}}{N}$, which satisfies the constraints of the optimization problem, converges the fastest, i.e., fewer than nine iterations are required.\\ \begin{figure}[!t] \centering \includegraphics[width=12cm]{figures/chap3/Fig_3_3.pdf} \caption{Average harvested energy versus number of iterations.} \label{plot:3.3} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=12cm]{figures/chap3/Fig_3_4.pdf} \caption{Harvested energy versus sum transmit power at different target data-rates.} \label{plot:3.4} \end{figure} \subsection{Harvested Energy versus Maximum Transmit Power at Different Target Data-rates} Figure (\ref{plot:3.4}) illustrates the harvested energy versus the maximum transmit power, $p_{max}$, for different data-rate requirements. It can be seen that less energy would be harvested by increasing the minimum data-rate requirement, $R_{min}$. The reason behind this is quite evident. Considering a fixed maximum transmit power at the AP, more subcarriers with more power would need to be assigned to each user when the minimum data-rate requirement becomes larger. This means that fewer subcarriers, with lower allocated power, would be assigned to the EH operation. This, in turn, causes less energy to be harvested by the users over the EH subcarriers as the minimum data-rate requirement increases. \section{Summary} In this chapter, we investigated the problem of joint subcarrier assignment and power control for SWIPT in a multi-user single small-cell network to maximize the harvested energy while respecting the minimum required data-rate for each user. 
We reduced the complexity of the receiver in our proposed algorithm, as the receiver does not need a splitter to perform appropriately; that is, neither a time nor a power splitter is utilized at the receiver. The only knowledge the resource allocator needs is which group of subcarriers is allocated for the EH and which group for the ID operation, where this knowledge is derived based on the CSI and the designed resource allocation policy. The problem considered in this chapter was a non-convex mixed-integer non-linear programming problem, which is generally difficult to solve. In order to circumvent this difficulty, we relaxed the integer variables and employed the MM approach to convexify the problem. Afterward, the problem was solved in an iterative manner to obtain a locally optimal solution. Simulation results demonstrated that the designed algorithms performed better than other algorithms that have been addressed in the literature. \chapter{SWIPT in Multi-Cell Networks} \label{CHAP4} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{\leftmark}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{\leftmark}} \fancyfoot[LE,RO]{JFMJ} \vspace{15mm} Most of the literature on the subject of simultaneous wireless information and power transfer (SWIPT) considers a single-cell network, in which users’ data-rate is a function of their signal-to-noise ratio (SNR). Although the absence of interference simplifies problem solving, it is clear that these kinds of system models do not accurately represent real-world networks. In contrast to Chapter \ref{CHAP3}, here we examine a multi-cell network that incorporates interference in the data-rate function. It is not always the best practice to maximize energy harvesting when it could result in the network having a lower total data-rate or throughput. 
Data-rate is conventionally defined as the number of delivered information bits per channel use measured in~bit/second/Hz~(bps/Hz). In this chapter, we use a smart resource allocation policy to maximize the data-rate, subject to a minimum harvested energy constraint for users. The problem of resource allocation in multi-cell networks has already been addressed by other researchers; for instance, the authors of~\cite{06294504} studied energy-efficient communication in the downlink (DL) of an orthogonal frequency division multiple access (OFDMA) multi-cell network with large numbers of single antenna cooperative base stations (BSs). They based their work on joint BS zero-forcing beamforming with full channel state information (CSI) while looking at the trade-off between energy efficiency, backhaul capacity, and network capacity. The authors of \cite{08885781} studied an adaptive fairness scheduling algorithm in co-located SWIPT-enabled multi-cell DL networks that employs an adjustable weighted sum method as the utility function of each user to study fairness among users. With the aim of maximizing energy harvesting efficiency, the authors of~\cite{08690892} investigated the beamforming design for multi-cell multi-user networks with SWIPT in a co-located receiver architecture. A max-min signal-to-interference-plus-noise ratio (SINR) problem in the DL of a dense multi-cell SWIPT-enabled network was explored in~\cite{7552733}, in which the users near their serving BS can perform both energy harvesting (EH) and information decoding (ID) using the power splitting receiver approach, whereas far-field users can only perform ID. In this chapter, we present a resource allocation policy in a {\text{multi-cell multi-user}} SWIPT-enabled network with a separated receiver architecture. 
In our system model design, each cell has two ring-shaped boundary regions, in which EH users are placed inside the inner boundary near small base stations (SBSs) and ID users are located in the outer region. As described in the first chapter, the literature presents several interesting receiver designs for enabling SWIPT, of which the four most viable designs are time switching~(TS), power splitting~(PS), antenna switching~(AS), and the separated receiver architecture. The complex circuitries associated with the TS, PS, and AS receiver approaches for SWIPT add one or more additional optimization parameter(s) to the design of a proper resource allocation policy in order to segregate the received signal so it can carry out distinct or simultaneous ID and EH operations. The separated receiver architecture as an enabler of SWIPT networks could be very useful for dramatically reducing the architectural requirements within the transmit-receive operations. The separated design architecture in itself effectively reduces the design complexity of a receiver capable of SWIPT. Here, we investigate resource allocation for a SWIPT-enabled multi-cell multi-user OFDMA network with a separated receiver architecture. More precisely, we focus on designing a resource allocation algorithm for an OFDMA scheme with SWIPT, in which each SBS serves multiple ID and EH users with a single antenna. For this configuration, we propose a joint subcarrier assignment and power allocation optimization problem that maximizes the total data-rate of the network with the aim of satisfying a minimum data-rate requirement, fulfilling a minimum amount of harvested energy, and respecting the maximum transmit power allowance constraints. The users’ data-rate is proportional to their SINR due to the shared frequency spectrum between the cells. Interference in the data-rate function makes the optimization problem non-linear and non-convex, and therefore very challenging. 
Next, we propose an efficient algorithm via the majorization-minimization (MM) approach based on difference of convex functions (D.C.) programming and variable relaxation. This algorithm is intended to deal with these issues and effectively help to obtain a locally optimal solution for the original problem. Simulation results confirm that our proposed algorithm achieves excellent performance as compared to other studies described in the literature. \section{System Model} In this section, we aim at extending the scenario of having only one small-cell, as in \text{Chapter \ref{CHAP3}}, to a multi-cell case in an indoor application. As shown in Figure~(\ref{fig:4-1}), we consider a DL OFDMA-based network with $J$ cells, each having one serving SBS. The set of all SBSs is denoted by ${j \in \mathcal{J}=\{1,2,...,J\}.}$ We further assume that each SBS is equipped with a single antenna.~The single antenna assumption also holds for all receivers. Furthermore, two sets of users are distinguished in each cell. Let us define the set of ID and EH users at the $j^{th}$ cell as $\mathcal{K}^{ID}_{j}=\{1,2,...,K^{ID}_{j}\}$ and $\mathcal{K}^{EH}_{j}=\{1,2,...,K^{EH}_{j}\}$,~respectively, where~$K_{I}=\sum_{j\in\mathcal{J}}K_j^{ID}$ indicates the total number of ID users, {$K_{E}=\sum_{j\in\mathcal{J}}K_j^{EH}$} the total number of EH users, and $K=K_{I}+K_{E}$ the total number of users in the network, collected in the set $\mathcal{K}=\{1,2,...,K\}$. The receivers in this chapter are particularly configured based on the separated receiver topology in a multi-cell system using SWIPT, where each cell has two ring-shaped boundary regions. The near users inside the inner boundary can only harvest energy from the SBSs, whereas the far users in the outer region only receive information signals from the SBSs. 
We additionally assume that the entire frequency band $\mathscr{B}$ is divided into $N$ orthogonal subcarriers, each having a bandwidth of $\mathscr{W}$. Furthermore, we consider that each subcarrier is assigned to at most one ID user. Moreover, we assume that perfect CSI is available at the resource allocator to design the resource allocation policy, so as to unveil the performance upper bound of the considered network. For the sake of readability, we first introduce some of the essential parameters that are used to describe the system model: \begin{itemize} \item $h^{ID}_{j,n,k}$: The DL channel gain for the wireless information transfer from the $j^{th}$ SBS to the $k^{th}$ ID user over the $n^{th}$ subcarrier. \item $g^{EH}_{j,n,k}$: The DL channel gain for the wireless power transfer from the $j^{th}$ SBS to the $k^{th}$ EH user over the $n^{th}$ subcarrier. \item $a_{j,n,k}$: The binary subcarrier indicator from the $j^{th}$ SBS corresponding to the $k^{th}$ user when the $n^{th}$ subcarrier is selected. \item $p_{j,n,k}$: The corresponding transmit power from the $j^{th}$ SBS to the $k^{th}$ user over the $n^{th}$ subcarrier.
\end{itemize} \begin{figure}[p] \centering \includegraphics[width=21cm,trim=4 4 4 4,clip,angle=90]{figures/chap4/Chap4-system-model1-F1.pdf} \caption{SWIPT in a DL of a multi-user multi-cell OFDMA network with a separated receiver architecture.} \label{fig:4-1} \end{figure} Then, the received signal at the $k^{th}$ ID user from the $j^{th}$ SBS over the $n^{th}$ subcarrier is given by \begin{equation}\label{F4-1} y^{ID}_{j,n,k}= a_{j,n,k}\sqrt{p_{j,n,k}}h^{ID}_{j,n,k}+ \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} a_{j',n,k'}\sqrt{p_{j',n,k'}}h^{ID}_{j',n,k}+ z^{ID}_{j,n,k}, \end{equation} where $z^{ID}_{j,n,k}$ is additive white Gaussian noise (AWGN) at the ID user, modeled as a circularly symmetric complex Gaussian random variable with zero mean and variance $|\sigma_{j,n,k}^{{ID}}|^{^2}$, i.e., $z^{ID}_{j,n,k} \sim \mathcal{CN}(0,|\sigma_{j,n,k}^{{ID}}|^{^2})$. Furthermore,~the harvested signal from the $j^{th}$ SBS for the EH user $k$ over the subcarrier $n$ is given by \begin{equation} y^{EH}_{j,n,k}= \sqrt{p_{j,n,k}}g^{EH}_{j,n,k} + z^{EH}_{j,n,k}, \end{equation} where $z^{EH}_{j,n,k}$ is the AWGN at the EH user with a circularly symmetric Gaussian distribution, i.e., $z^{EH}_{j,n,k} \sim \mathcal{CN}(0,|\sigma_{j,n,k}^{{EH}}|^{^2})$.~According to the Shannon capacity formula, the data-rate of the $k^{th}$ ID user over the subcarrier $n$ inside the cell $j$ can be written as \begin{equation}\label{F4-3} R_{j,n,k}=\log_2 \Bigg(1+ \frac{a_{j,n,k}p_{j,n,k}|h_{j,n,k}^{ID}|^2}{|\sigma_{j,n,k}^{{ID}}|^{^2}+I_{j,n,k}} \Bigg), \end{equation} where \begin{equation} I_{j,n,k}= \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} a_{j',n,k'}{p_{j',n,k'}}|h^{ID}_{j',n,k}|^2, \end{equation} is the interference term arising from the co-channel effect on the subcarrier $n$ that is emitted by
unintended transmitters sharing the same frequency channel. On the other hand, the transferred wireless energy can be harvested at each EH user when a simple linear EH model is considered. Therefore, the amount of harvested energy of the $k^{th}$ EH user in the cell $j$ can be calculated as \begin{equation} \label{F4-4} \textrm{EH}_{j,k}= \epsilon_{j,k} \sum_{n\in \mathcal{N}}a_{j,n,k}{p_{j,n,k}}|g^{EH}_{j,n,k}|^{2}, \end{equation} where $\epsilon_{j,k}$ is the power conversion efficiency introduced in the previous chapter.~It should be noted that the contribution of the noise power, i.e., $|\sigma_{j,n,k}^{{ID}}|^{^2}$, to the EH formula in (\ref{F4-4}) is ignored, since it is negligibly small compared to the other term in (\ref{F4-4}). Let us now define $\textbf{p}_{kj}=[p_{j,1,k},...,p_{j,N,k}]$ as a row vector encompassing the transmit power values assigned to the $k^{th}$ user in the $j^{th}$ cell over the subcarriers, and $\textbf{a}_{kj}=[a_{j,1,k},...,a_{j,N,k}]$ as a binary row vector for the same user indicating the selected subcarriers in the $j^{th}$ cell.~Furthermore, $\textbf{p}=[\textbf{p}_{1,1},...,\textbf{p}_{|\mathcal{K}|,J}]^T$ and $\textbf{a}=[\textbf{a}_{1,1},...,\textbf{a}_{|\mathcal{K}|,J}]^T$ denote the vector containing the transmit powers of all users in all cells and the binary vector representing the assigned subcarrier(s) to each user,~respectively. Therefore, we can define the total data-rate as \begin{equation} R^{\text{Total}}(\textbf{a},\textbf{p}) = \sum_{j\in \mathcal{J}} \sum_{k\in\mathcal{K_{I}}} \sum_{n\in \mathcal{N}} R_{j,n,k}. \label{chap4:F4:rate} \end{equation} Similarly, we can define the total harvested energy in the network as \begin{equation}\label{4F4-7} \textrm{EH}^{\text{Total}}(\textbf{a},\textbf{p})= \sum_{j\in \mathcal{J}} \sum_{k\in \mathcal{K_E}}\text{EH}_{j,k}.
\end{equation} \begin{figure}[p] \centering \includegraphics[width=21cm,trim=4 4 4 4,clip,angle=90]{figures/chap4/Chap4-system-model2-F1.pdf} \caption{SWIPT in a DL of an OFDMA network consisting of $J=2$ small cells, where there is one user of each type in each cell, i.e., $K^{ID}_1=K^{ID}_2=K^{EH}_1=K^{EH}_2=1$.} \label{fig:4-2} \end{figure} Furthermore, to guarantee quality of service (QoS), a minimum data-rate, denoted by $R_{min}$, should be provided to the ID users. That is, \begin{align} \sum_{n \in \mathcal{N}}R_{j,n,k}\geq R_{min}, ~\forall j \in \mathcal{J}, k \in \mathcal{K_I}. \end{align} Moreover, a minimum harvested energy, referred to as EH$_{min}$, is also required for each EH user: \begin{align} \text{EH}_{j,k}\geq \text{EH}_{min}, ~\forall j \in \mathcal{J}, k \in \mathcal{K_E}. \end{align} \section{Optimization Problem Formulation} In this section, we aim at finding a subcarrier assignment and power allocation policy by formulating the system throughput maximization problem, while fulfilling a minimum harvested energy requirement for the EH receivers and a minimum data-rate requirement for the ID receivers.
Consequently, we introduce the following optimization problem to maximize the total data-rate of the network \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p}}R^{\text{Total}}(\textbf{a},\textbf{p})\label{F4-5}\\ s.t.: &~C_{1}: \sum_{k\in \mathcal{K_I}} a_{j,n,k}\leq 1,~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N}, \label{F4-6}\\ &~C_{2}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} a_{j,n,k}\ p_{j,n,k} \leq p_{max},~ \forall j \in \mathcal{J}, \label{F4-7}\\ &~C_{3}: \sum_{n\in \mathcal{N}} R_{j,n,k}\geq R_{min},~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{I}}, \label{F4-8}\\ &~C_{4}: \text{EH}_{j,k}\geq \text{EH}_{min},~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{E}}, \label{F4-8-}\\ &~C_{5}:a_{j,n,k}\in\{0,1\},~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K}. \label{F4-9} \end{align} \label{chap4:f6:main}% \end{subequations} \vspace{-8mm} In the optimization problem (\ref{chap4:f6:main}),~$C_{1}$ indicates that each subcarrier can be allocated to at most one ID user in each cell. $C_{2}$ ensures that the total transmit power of each SBS does not exceed its maximum threshold, denoted by $p_{max}$. In $C_{3}$, a minimum rate requirement, $R_{min}$, is guaranteed for each ID user in each cell, and $C_{4}$ ensures that a minimum harvested energy, EH$_{min}$, is satisfied for each EH user. Lastly, $C_{5}$ states that the subcarrier indicator variable takes only binary values. Due to the multiplication of two variables, the binary subcarrier allocation variables, and the interference included in the data-rate function, the problem (\ref{chap4:f6:main}) is a mixed-integer non-linear program (MINLP), which is generally difficult to solve.
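Before dissecting these difficulties, the following Python sketch evaluates the objective and the constraint quantities of $C_3$ and $C_4$ for a toy two-cell instance. All channel gains, noise powers, conversion efficiencies, and the allocation itself are hypothetical values chosen only for illustration, not part of the proposed design.

```python
import math
import random

random.seed(0)
J, N, K = 2, 4, 2          # toy sizes: cells, subcarriers, ID users per cell
noise = 1e-3               # |sigma|^2, taken identical for every link (assumption)
eps = 0.8                  # EH conversion efficiency epsilon_{j,k} (assumption)
p_eh = 0.5                 # power beamed to each cell's single EH user (assumption)

# H2[jt][jr][n][k]: |h^ID|^2 from SBS jt to ID user k of cell jr on subcarrier n.
H2 = [[[[random.expovariate(1.0) for _ in range(K)] for _ in range(N)]
       for _ in range(J)] for _ in range(J)]
G2 = [[random.expovariate(1.0) for _ in range(N)] for _ in range(J)]

# C1: at most one ID user per subcarrier per cell; uniform transmit powers.
a = [[[1.0 if k == n % K else 0.0 for k in range(K)] for n in range(N)]
     for j in range(J)]
p = [[[0.25] * K for _ in range(N)] for _ in range(J)]

def total_rate():
    """Total data-rate: sum of log2(1 + SINR) over all active (j, n, k)."""
    R = 0.0
    for j in range(J):
        for n in range(N):
            for k in range(K):
                if a[j][n][k] == 0.0:
                    continue
                I = sum(a[jp][n][kp] * p[jp][n][kp] * H2[jp][j][n][k]
                        for jp in range(J) if jp != j for kp in range(K))
                R += math.log2(1.0 + p[j][n][k] * H2[j][j][n][k] / (noise + I))
    return R

def total_eh():
    """Total harvested energy under the linear EH model (noise power ignored)."""
    return sum(eps * p_eh * g for row in G2 for g in row)
```

Note how the interference term couples the cells: the rate of user $(j,n,k)$ depends on the powers chosen in every other cell on the same subcarrier, which is precisely what makes the problem non-convex.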
The challenges that make the above optimization problem difficult are explained below: \begin{itemize} \item Multiplication of two variables: Since the product of two variables is non-convex, the term~$a_{j,n,k}p_{j,n,k}$ poses a challenge in tackling the optimization problem in (\ref{chap4:f6:main}). More specifically, the maximum total transmit power constraint $C_2$, the minimum data-rate requirement constraint $C_3$, the minimum harvested energy requirement constraint $C_4$, and the total data-rate objective function all involve the product of the transmit power and subcarrier allocation variables (as given in (\ref{F4-7}), (\ref{F4-8}), (\ref{F4-8-}), and (\ref{F4-5})), which renders these constraints and the objective function non-convex. \item Interference: The inter-cell interference incorporated in the data-rate functions makes both the constraint $C_3$ and the objective function non-convex. \item Binary subcarrier assignment variable: The discrete subcarrier assignment in $C_5$ turns (\ref{chap4:f6:main}) into a complex MINLP problem. \end{itemize} In the following section, we first restate the problem (\ref{chap4:f6:main}) in a mathematically tractable form to maximize the total data-rate while accounting for the interference in the data-rate function. We also guarantee a minimum data-rate for the ID users and a minimum harvested energy for the EH users. Furthermore, we propose a suboptimal resource allocation algorithm with polynomial-time computational complexity that strikes a balance between complexity and system performance. \section{Solution to the Optimization Problem} In order to address the mixed non-convex and combinatorial optimization problem (\ref{chap4:f6:main}), we first deal with the multiplication of variables in the constraints $C_{2}$, $C_{3}$, and $C_{4}$. To handle this difficulty, we adopt the big-M formulation \cite{big_M} to decouple the product terms.
Therefore, we impose the following additional constraints \begin{align} &C_{6}: \tilde{p}_{j,n,k}\leq p_{max}a_{j,n,k},~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~\forall n \in \mathcal{N},~\forall k \in \mathcal{K}, \\ &C_{7}: \tilde{p}_{j,n,k}\leq p_{j,n,k},~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~\forall n \in \mathcal{N},~\forall k \in \mathcal{K}, \\ &C_{8}: \tilde{p}_{j,n,k}\geq p_{j,n,k}-(1-a_{j,n,k})p_{max},~ \forall j \in \mathcal{J},~\forall n \in \mathcal{N},~\forall k \in \mathcal{K}, \\ &C_{9}: \tilde{p}_{j,n,k}\geq 0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~\forall n \in \mathcal{N},~\forall k \in \mathcal{K}, \end{align} where $\tilde{\textbf{p}}\in \mathbb{R}^{1\times JNK}$ is the collection of all $\tilde{p}_{j,n,k}$'s, and ${\textbf{a}}\in \mathbb{Z}^{1\times JNK}$ is the collection of all $a_{j,n,k}$'s. Now, by revisiting the definition of the data-rate function, the data-rate of the $k^{th}$ ID user over the subcarrier $n$ inside the $j^{th}$ cell in (\ref{F4-3}), and also, the total data-rate of the network in (\ref{chap4:F4:rate}) can be rewritten respectively as \begin{align} &\overline{R}_{j,n,k} = \log_2 \Bigg(1+\frac{\tilde{p}_{j,n,k}|h^{ID}_{j,n,k}|^2} {|\sigma_{j,n,k}^{{ID}}|^{^2}+ \sum\limits_{\substack{j'\neq j \\ j' \in \mathcal{J}}}~\sum\limits_{\substack{k'\neq k \\ k' \in \mathcal{K}}}~{\tilde{p}_{j',n,k'}}|h^{ID}_{j',n,k}|^{2}}\Bigg),\\ &\widehat{\overline{R}}^{\text{Total}}(\textbf{a},\tilde{\textbf{p}}) = \sum_{j\in \mathcal{J}}\sum_{k\in\mathcal{K_{I}}}\sum_{n\in \mathcal{N}} \overline{R}_{j,n,k}. 
\end{align} Furthermore, the amount of the harvested energy of the $k^{th}$ EH user in the cell $j$ in (\ref{F4-4}) and the total amount of the harvested energy of the network in (\ref{4F4-7}) can be restated as \begin{align} &\overline{\textrm{EH}}_{j,k}= \epsilon_{j,k} \sum_{n\in \mathcal{N}}\tilde{p}_{j,n,k}|g^{EH}_{j,n,k}|^{2},\\ &\widehat{\overline{\textrm{EH}}}^{\text{Total}}(\textbf{a},\tilde{\textbf{p}})= \sum_{j\in \mathcal{J}} \sum_{k\in \mathcal{K_E}} \overline{\textrm{EH}}_{j,k}. \end{align} Hence, the original optimization problem in (\ref{chap4:f6:main}) can be modified as \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \widehat{\overline{R}}^{\text{Total}} (\textbf{a},\tilde{\textbf{p}})\\ s.t.: &~C_{1},C_{5}-C_{9},\\ &~C_{2}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} \tilde{p}_{j,n,k} \ \leq p_{max},~ \forall j \in \mathcal{J},\\ &~C_{3}: \overline{R}_{j,n,k}\geq R_{min},~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{I}},\\ &~C_{4}: \overline{\textrm{EH}}_{j,k} \geq \text{EH}_{min},~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{E}}. \end{align} \label{chap4:f6:main2}% \end{subequations} Through this method, we have dealt with the non-convex constraints $C_{2}-C_{4}$ efficiently by using their equivalent convex forms. Another challenge in solving the above optimization problem is the interference incorporated in the data-rate functions in the constraint $C_{3}$ and in the objective function, which leaves the resulting optimization problem in (\ref{chap4:f6:main2}) non-convex. To facilitate the solution design, we first rewrite the optimization problem in terms of D.C. functions.
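Before moving on, the big-M decoupling introduced above can be checked numerically: for a binary $a_{j,n,k}$, constraints $C_6$--$C_9$ admit exactly $\tilde{p}_{j,n,k}=a_{j,n,k}p_{j,n,k}$ and nothing else. A minimal scalar sketch (values are illustrative):

```python
def bigM_feasible(p_tilde, p, a, p_max, tol=1e-12):
    """Constraints C6-C9 of the big-M reformulation, for one (j, n, k) triple."""
    return (p_tilde <= p_max * a + tol and            # C6
            p_tilde <= p + tol and                    # C7
            p_tilde >= p - (1 - a) * p_max - tol and  # C8
            p_tilde >= -tol)                          # C9

p_max, p = 1.0, 0.7
# a = 0: C6 and C9 pin p_tilde to 0, regardless of p.
assert bigM_feasible(0.0, p, 0, p_max) and not bigM_feasible(0.3, p, 0, p_max)
# a = 1: C7 and C8 pin p_tilde to p.
assert bigM_feasible(p, p, 1, p_max) and not bigM_feasible(0.5, p, 1, p_max)
```

The assertions confirm that, once $a_{j,n,k}$ is binary, the linear constraints reproduce the non-convex product exactly.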
Mathematically speaking, the problem can be restated as \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \ \sum_{j \in \mathcal{J}} \sum_{ k \in \mathcal{K_I}} \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) \label{F4-17}\\ s.t.:~& \dot{C}_3: \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) \geq R_{min},~\forall j \in \mathcal{J},~\forall k \in \mathcal{K_{I}}, \label{F4-20b}\\ & C_{1}-C_{2},C_{4}-C_{9}. \end{align} \label{F2:20}% \end{subequations} where $\mathscr{U}(\textbf{a},\tilde{\textbf{p}})$~and $\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ are defined as \begin{align} \mathscr{U}(\textbf{a},\tilde{\textbf{p}}) & = \sum_{n \in \mathcal{N}}\log_2\Big( \tilde{p}_{j,n,k}|h^{ID}_{j,n,k}|^2+{|\sigma_{j,n,k}^{{ID}}|^{^2}}+ \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \tilde{p}_{j',n,k'}{|h^{ID}_{j',n,k}|^{2}}\Big), \label{4F4:21}\\ \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) & = \sum_{n \in \mathcal{N}} \log_2\Big({{|\sigma_{j,n,k}^{{ID}}|^{^2}}+ \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \tilde{p}_{j',n,k'}|h^{ID}_{j',n,k}|^{2}}\Big). \label{4F4:22} \end{align} It should be emphasized that $-\mathscr{U}(\textbf{a},\tilde{\textbf{p}})$ and $-\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ are now convex functions. In addition, the problem in (\ref{F2:20}) belongs to the class of D.C. programming problems. Consequently, the first-order Taylor approximation can be applied to approximate the D.C. components. This facilitates the design of a computationally efficient iterative resource allocation algorithm for obtaining a locally close-to-optimal solution.
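As a quick numerical sanity check of this decomposition (with arbitrary illustrative values for the signal power $S$, interference $I$, and noise): $\mathscr{U}-\mathcal{V}$ recovers the per-subcarrier rate exactly, and $\mathcal{V}$ is concave in the interfering powers, so $-\mathcal{V}$ is convex.

```python
import math

sigma2 = 1e-3                              # noise power |sigma|^2 (illustrative)

def rate(S, I):                            # per-subcarrier rate log2(1 + SINR)
    return math.log2(1.0 + S / (sigma2 + I))

def U_term(S, I):                          # concave U part: log2(signal + noise + I)
    return math.log2(S + sigma2 + I)

def V_term(I):                             # concave V part subtracted from U
    return math.log2(sigma2 + I)

S, I1, I2 = 0.4, 0.05, 0.25
# U - V is exactly the rate ...
assert abs(rate(S, I1) - (U_term(S, I1) - V_term(I1))) < 1e-9
# ... and V satisfies the midpoint concavity inequality, so -V is convex.
assert V_term((I1 + I2) / 2) >= (V_term(I1) + V_term(I2)) / 2
```

This is the structural fact the MM algorithm exploits: only the concave $\mathcal{V}$ needs to be linearized.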
Thus, for any feasible point ${\textbf{a}}^{t-1}$ and $\tilde{\textbf{p}}^{t-1}$, the following approximation holds \cite{SCA,DC} \begin{align} \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) \simeq \mathcal{V}(\textbf{a}^{t-1},\tilde{\textbf{p}}^{t-1}) + \nabla_{\tilde{\textbf{p}}}\mathcal{V} (\textbf{a}^{t-1},\tilde{\textbf{p}}^{t-1})^T.(\tilde{\textbf{p}}- \tilde{\textbf{p}}^{t-1}) \triangleq \tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}}), \label{F4-25} \end{align} where $\tilde{\textbf{p}}^{t-1}$ is the solution of the problem at the $(t-1)^{th}$ iteration, and $\nabla_{\square}$ denotes the gradient with respect to ${\square}$. Accordingly, we rewrite the optimization problem in (\ref{F2:20}) as follows \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \ \sum_{j \in \mathcal{J}} \sum_{ k \in \mathcal{K_I}} \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}}) \\ s.t.:~& \dot{C}_3: \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})\geq R_{min},~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{I}}, \\ ~& C_{1}-C_{2},C_{4}-C_{9}. \end{align} \label{F4-26}% \end{subequations} The optimization problem in (\ref{F4-26}) is still non-convex due to the integer subcarrier allocation variable, i.e., $a_{j,n,k}$. This binary variable turns (\ref{F4-26}) into an MINLP problem, which makes it challenging to solve with polynomial-time complexity. To address this issue, we adopt an approach similar to that of chapter \ref{CHAP3} and replace the constraint $C_{5}$ with the following inequalities \begin{align} &\dot{C}_{5}: 0\leq a_{j,n,k}\leq 1,~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &\ddot{C}_{5}: \sum_{j\in \mathcal{J}} \sum_{k\in\mathcal{K}} \sum_{n\in \mathcal{N}} a_{j,n,k}-(a_{j,n,k})^{2}\leq 0 \label{F4:26}.
\end{align} In this regard, the constraint $\dot{C}_{5}$ relaxes the binary variable $a_{j,n,k}$ into a continuous variable taking values in the closed interval $[0,1]$. The constraint $\ddot{C}_{5}$, in turn, restricts $a_{j,n,k}$ to the only two values in $[0,1]$ that satisfy it, namely zero and one, so that together the two constraints enforce $a_{j,n,k}\in\{0,1\}$. As before, we can now write the constraint $\ddot{C}_{5}$ in the D.C. format as $\nu(\textbf{a})-\mu(\textbf{a})\leq 0$, where \begin{align} &\nu(\textbf{a})= \sum_{j\in \mathcal{J}} \sum_{k\in\mathcal{K}} \sum_{n\in \mathcal{N}}a_{j,n,k},\\ &\mu(\textbf{a})= \sum_{j\in \mathcal{J}} \sum_{k\in\mathcal{K}} \sum_{n\in \mathcal{N}} (a_{j,n,k})^{2}. \end{align} Similar to the approach used for the data-rate functions in (\ref{4F4:21}) and (\ref{4F4:22}), we employ the MM approach to convexify the constraint $\ddot{C}_{5}$. This is done by taking the first-order Taylor approximation of $\mu(\mathbf{a})$. Consequently,~$\mu(\mathbf{a})$ can be approximated as \begin{align}\label{mu} \mu(\mathbf{a}) \simeq \mu(\mathbf{a}^{t-1}) + \nabla_{\mathbf{a}}{\mu} (\mathbf{a}^{t-1})^T(\mathbf{a}-\mathbf{a}^{t-1}) \triangleq \tilde{\mu}(\mathbf{a}). \end{align} Therefore, the constraint $\ddot{C}_5$ can be restated as $\nu(\mathbf{a})-\tilde{\mu}(\mathbf{a})\leq 0$, which is now convex.
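Both properties used here can be verified in a scalar toy case: the linearization $\tilde{\mu}$ globally under-estimates the convex $\mu$, and on $[0,1]$ the pair $\dot{C}_5$, $\ddot{C}_5$ leaves only the endpoints feasible. A minimal sketch (all values illustrative):

```python
def mu(a):                      # a single a^2 term of mu (scalar case)
    return a * a

def mu_tilde(a, a_prev):        # first-order Taylor expansion of mu around a_prev
    return mu(a_prev) + 2.0 * a_prev * (a - a_prev)

# Convexity of mu: its linearization is a global under-estimator,
# so enforcing nu - mu_tilde <= 0 also enforces nu - mu <= 0.
for a_prev in (0.2, 0.5, 0.9):
    for a in (0.0, 0.3, 0.7, 1.0):
        assert mu(a) >= mu_tilde(a, a_prev) - 1e-12

# On [0, 1], a - a^2 >= 0 with equality only at the endpoints, so requiring
# a - a^2 <= 0 together with 0 <= a <= 1 forces a into {0, 1}.
feasible = [a / 100 for a in range(101) if (a / 100) - (a / 100) ** 2 <= 1e-12]
assert feasible == [0.0, 1.0]
```

The same under-estimator property is what makes the surrogate constraint a safe (conservative) replacement for $\ddot{C}_5$.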
Finally, the optimization problem at hand can be reformulated as \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{p}}} \ \sum_{j \in \mathcal{J}} \sum_{ k \in \mathcal{K_I}} \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \tilde{\mathcal{V}} (\textbf{a},\tilde{\textbf{p}})\\ s.t.:~ & \dot{C}_3: \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})\geq R_{min} \label{F4-18},~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{I}},\\ & \ddot{C}_5: \nu(\mathbf{a})-\tilde{\mu}(\mathbf{a}) \leq 0,\label{F4-19}\\ & C_{1}-C_{2},C_{4},\dot{C}_5,C_{6}-C_{9}. \end{align} \label{F4-26-}% \end{subequations} The optimization problem (\ref{F4-26-}) is convex and can be solved efficiently via interior-point methods~\cite{MM}.~As a result, the solution of (\ref{F4-26-}) is an approximation to the solution of the original problem given in (\ref{chap4:f6:main}). In D.C. programming, the iteration begins from a feasible initial point and the optimization problem is solved iteratively until it approaches a close-to-optimal solution~\cite{che2014joint,DC,CL_Ata}. Besides, it is worth mentioning that the MM approach produces a sequence of improved feasible solutions under the adopted D.C. approximation, which ultimately converges to a locally optimal solution $(\textbf{a}^{*},\textbf{p}^*,\tilde{\textbf{p}}^*)$ using standard convex program solvers such as CVX. \begin{proposition}\label{proposition3} The solution obtained from (\ref{F4-26-}) satisfies the constraints $\dot{C}_3$ and $\ddot{C}_5$ in (\ref{F4-20b}) and (\ref{F4:26}), respectively, by using the first-order approximation $\tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})$ of the concave function $\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ and the first-order approximation $\tilde{\mu}(\textbf{a})$ of the convex function ${\mu}(\textbf{a})$. \end{proposition} \begin{proof} It is straightforward to show that $\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ is a concave function.
Hence, the gradient of $\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ is a supergradient~\cite{KENNETH-LANGE}.~This yields \begin{equation} \label{F4-29} \mathcal{V}(\textbf{a},\tilde{\textbf{p}})\leq\tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}}). \end{equation} Subsequently, one may easily conclude from the inequality in (\ref{F4-29}) that \begin{equation} \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) \geq \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}}). \end{equation} Similarly, since $\mu(\textbf{a})$ is a convex function, its gradient is a subgradient~\cite{KENNETH-LANGE}.~Thus, we have \begin{equation} \nu(\textbf{a})-\mu(\textbf{a}) \leq \nu(\textbf{a})-\tilde{\mu}(\textbf{a}). \end{equation} Nevertheless, it should be noted that the existence of subgradients and supergradients is closely tied to the fact that $\mu(\textbf{a})$ and $-\mathcal{V}(\textbf{a},\tilde{\textbf{p}})$ are locally Lipschitz \cite{KENNETH-LANGE}. Accordingly, the constraint $\mathscr{U}(\textbf{a},\tilde{\textbf{p}})-{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})\geq R_{min}$ is satisfied if $\mathscr{U}(\textbf{a},\tilde{\textbf{p}})-\tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})$ is at least $R_{min}$ for each user. In the same way, if $\nu(\textbf{a})-\tilde{\mu}(\textbf{a}) \leq 0$, the inequality $\nu(\textbf{a})-{\mu}(\textbf{a}) \leq 0$ holds as well. This completes the proof. \end{proof} \begin{proposition} The surrogate functions employed in (\ref{F4-26-}) provide a tight lower bound for the objective function of the non-convex problem in (\ref{F2:20}).
\end{proposition} \begin{proof} Borrowing the first part of the proof of \textbf{Proposition~\ref{proposition3}}, the following inequality holds for the objective function in (\ref{F4-26-}) \begin{equation} \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a},\tilde{\textbf{p}}) \geq \mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a}^{t-1},\tilde{\textbf{p}}^{t-1}) - \nabla_{\tilde{\textbf{p}}}\mathcal{V}(\textbf{a}^{t-1},\tilde{\textbf{p}}^{t-1})^T.(\tilde{\textbf{p}}-\tilde{\textbf{p}}^{t-1}), \end{equation} where equality holds when $\textbf{a}=\textbf{a}^{t-1}$ and $\tilde{\textbf{p}}=\tilde{\textbf{p}}^{t-1}$.~Since the lower bound is attained at the expansion point, it is tight.~This completes the proof. \end{proof} \begin{proposition} By incorporating the D.C. approximation, the solution of (\ref{F4-26-}) improves after each iteration. \end{proposition} \begin{proof} For the objective function in (\ref{F2:20}), we have the following value in the $t^{th}$ iteration \begin{equation} \mathscr{U}(\textbf{a}^t,\tilde{\textbf{p}}^t)-\mathcal{V}(\textbf{a}^t,\tilde{\textbf{p}}^t). \end{equation} Subsequently, in the next iteration, we have \begin{align} \mathscr{U}(\textbf{a}^{t+1},\tilde{\textbf{p}}^{t+1})- \mathcal{V}(\textbf{a}^{t+1},&\tilde{\textbf{p}}^{t+1}) \nonumber\\ & \geq \mathscr{U}(\textbf{a}^{t+1},\tilde{\textbf{p}}^{t+1})- \mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})- \nabla_{\tilde{\textbf{p}}}\mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})^{T}.(\tilde{\textbf{p}}^{t+1}-\tilde{\textbf{p}}^{t}) \nonumber \\ &=\max_{\textbf{a},\tilde{\textbf{p}}}\mathscr{U}(\textbf{a},\tilde{\textbf{p}})- \mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})- \nabla_{\tilde{\textbf{p}}}\mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})^{T}.(\tilde{\textbf{p}}-\tilde{\textbf{p}}^{t}) \nonumber\\ &\geq \mathscr{U}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})-
\mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t}) - \nabla_{\tilde{\textbf{p}}}\mathcal{V}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})^{T}.(\tilde{\textbf{p}}^{t}-\tilde{\textbf{p}}^{t}) \nonumber\\ &= \mathscr{U}(\textbf{a}^{t},\tilde{\textbf{p}}^{t})- {\mathcal{V}}(\textbf{a}^{t},\tilde{\textbf{p}}^{t}). \nonumber \end{align} This completes the proof. \end{proof} One can readily verify that the objective function of (\ref{F4-26-}) takes non-decreasing values as the iterations proceed.~Therefore, the solution to the optimization problem improves gradually. Hence, we adopt an iterative procedure to tighten the obtained lower bound, as summarized in \textbf{Algorithm~\ref{euclid}}. In this way, the proposed iterative resource allocation scheme generates a monotonically non-decreasing sequence of feasible solutions, i.e., $\textbf{a}^{t+1}$, $\textbf{p}^{t+1}$, and $\tilde{\textbf{p}}^{t+1}$, by solving the convex problem in (\ref{F4-26-}). \begin{algorithm}[t] \caption{Proposed Iterative Method via D.C. Programming Based on the MM Approach} \label{euclid} \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\{ \begin{addmargin}[1em]{0em} {MM iteration index $t=0$ with maximum number of MM iterations $T_{max}$\\ and feasible initial vectors $\mathbf{a}^{0}$, $\mathbf{p}^{0}$, and $\tilde{\mathbf{p}}^0$}. \end{addmargin}} \STATE{\textbf{repeat}} \STATE{ \begin{addmargin}[1em]{0em} Update $\tilde{\mathcal{V}}(\textbf{a},\tilde{\textbf{p}})$ and $\tilde{\mu}(\mathbf{a})$ as presented in (\ref{F4-25}) and (\ref{mu}),~respectively. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Solve the optimization problem (\ref{F4-26-}) and store the intermediate resource allocation policy $\mathbf{a}^t$, $\mathbf{p}^t$, and $\tilde{\mathbf{p}}^t$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set $t=t+1$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set \{$\mathbf{a}^t,\mathbf{p}^t$,$\tilde{\mathbf{p}}^t$\} $=$ \{$\mathbf{a},\mathbf{p}$,$\tilde{\mathbf{p}}$\}.
\end{addmargin}} \STATE \textbf{until} Convergence or $t=T_{max}$ \STATE \textbf{return} \{$\mathbf{a}^{*},{\mathbf{p}}^{*}$,$\tilde{\mathbf{p}}^*$\} $=$ \{$\mathbf{a}^{t},{\mathbf{p}}^{t}$,$\tilde{\mathbf{p}}^t$\} \end{algorithmic} \end{algorithm} \section{Computational Complexity} In this section, we investigate the computational complexity of the proposed algorithm. The optimization problem (\ref{F4-26-}) includes $NJK$ variables and $J(1+N+K)+5JNK$ linear convex constraints. Therefore, the computational complexity is of order $\mathcal{O}\big((NJK)^{3}\big(J(1+N+K)+5JNK\big)\big)$. Despite the polynomial-time computational complexity of our proposed algorithm, the computation cost is still high and may become unaffordable for resource allocators with predefined capabilities. However, it should not be neglected that the computational complexity of \textbf{Algorithm~\ref{euclid}} is lower than that of exhaustive search approaches. Moreover, it is worth mentioning that \textbf{Algorithm~\ref{euclid}} provides a locally optimal solution that closely approaches the optimal one. Nevertheless, in what follows, we provide a low-complexity algorithm for designing the resource allocation policy. \section{Low Complexity Algorithm Design (Lower Bound)}\vspace{-3mm} In this section, we introduce another algorithm with even lower computational complexity to improve the practicality of \textbf{Algorithm~\ref{euclid}}. In order to handle the non-convexity of the data-rate functions, we first assume that there is an upper bound on the interference term and impose the following constraint on the optimization problem \begin{align} I_{j,n,k} \leq I^{n}_{max}, \end{align} where $I^{n}_{max}$ is the maximum tolerable inter-cell interference parameter. In this way, we can derive an efficient resource allocation algorithm by simplifying the solution design.
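The "lower bound" in the section title can be seen directly: on any feasible allocation the actual interference never exceeds $I^{n}_{max}$, so replacing $I_{j,n,k}$ by $I^{n}_{max}$ can only shrink each SINR. A one-line numerical check with illustrative values for the noise, signal power, and cap:

```python
import math

sigma2, S = 1e-3, 0.4      # noise power and received signal power (toy values)
I_max = 0.3                # interference cap I^n_max (assumed)

def rate(I):               # per-subcarrier rate log2(1 + SINR)
    return math.log2(1.0 + S / (sigma2 + I))

# For every admissible interference level I <= I_max, the capped rate is
# a pessimistic (lower-bound) estimate of the true rate.
for I in (0.0, 0.1, 0.2, 0.3):
    assert rate(I_max) <= rate(I) + 1e-12
```

Optimizing this pessimistic rate therefore yields a guaranteed-achievable, if conservative, total throughput.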
To improve performance, the resource allocator can control the interference level on each subcarrier by varying the value of $I^{n}_{max}$~\cite{6294504}. Moreover, replacing $I_{j,n,k}$ by $I^{n}_{max}$ in the data-rate functions makes them concave and therefore tractable. At this point, we describe a tractable solution methodology for the original problem in the following subsection. \vspace{-3mm} \subsection{Low Complexity Power Control and Subcarrier Assignment}\vspace{-2mm} In this subsection, we seek a low-complexity suboptimal subcarrier assignment and power allocation algorithm. To derive a cost-efficient resource allocation design, the binary subcarrier assignment constraint must be relaxed. This is done by transforming the subcarrier assignment variable $a_{j,n,k}$ into a continuous variable with values within the closed interval~$[0,1]$. The continuous $a_{j,n,k}$ can then be interpreted physically as the fraction of time that subcarrier $n$ is assigned to user $k$. Notwithstanding the non-convexity of the original optimization problem, strong duality still holds as a consequence of the time-sharing condition addressed in \cite{Time,6825834}.
Finally, by defining a new power allocation variable as $\tilde{q}_{j,n,k}=a_{j,n,k} p_{j,n,k}$, the modified optimization problem can be expressed as \vspace{-2mm} \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{p},\tilde{\textbf{q}}} \underline{\mathcal{R}}^{\text{Total}}(\textbf{a},\tilde{\textbf{q}}) \label{F4-32} \\ s.t.: &~C_{1}: \sum_{k\in \mathcal{K_{I}}} a_{j,n,k}\leq 1,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N}, \\ &~C_{2}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}}\tilde{q}_{j,n,k}\leq p_{max},~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},\\ &~C_{3}: \sum_{n\in \mathcal{N}} \log_2 \Bigg(1+\frac{\tilde{q}_{j,n,k}\ |h^{ID}_{j,n,k}|^2}{{|\sigma_{j,n,k}^{{ID}}|^{^2}}+I^{n}_{max}}\Bigg) \geq R_{min},~ \forall j \in \mathcal{J}, k \in \mathcal{K_{I}}, \\ &~C_{4}: \epsilon_{j,k} \sum_{n \in \mathcal{N}}\tilde{q}_{j,n,k}|g^{EH}_{j,n,k}|^{2}\geq \text{EH}_{min},~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall k \in \mathcal{K_{E}},\\ &~C_{5}: 0\leq a_{j,n,k}\leq 1,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &~C_{6}: \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \tilde{q}_{j',n,k'}|h^{ID}_{j',n,k}|^2\leq I^{n}_{max},~~~~~~~~~~~~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K}, \end{align} \label{F4-32:main}% \end{subequations} where \vspace{-4mm} \begin{align} \underline{\mathcal{R}}^{\text{Total}}(\textbf{a},\tilde{\textbf{q}}) = \sum_{j\in \mathcal{J}}\sum_{k\in\mathcal{K_{I}}}\sum_{n\in \mathcal{N}} \log_2 \Bigg(1+\frac{\tilde{q}_{j,n,k}|h^{ID}_{j,n,k}|^2} {|\sigma_{j,n,k}^{{ID}}|^{^2}+ I^{n}_{max}}\Bigg). \end{align} It is easy to verify that Slater's condition holds for the above convex optimization problem. Therefore, solving the dual problem is equivalent to solving the primal problem due to strong duality.
In order to obtain the corresponding resource allocation policy, the Lagrangian method is applied to the convex optimization problem in (\ref{F4-32:main}). Hence, the {\text{Lagrangian function is as follows}} \vspace{-2mm} \begin{align} \mathcal{L}(\textbf{a},\mathbf{p},\tilde{\textbf{q}},\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta})= & \quad \underline{\mathcal{R}}^{\text{Total}}(\textbf{a},\tilde{\textbf{q}}) \nonumber \label{F4-38}\\ &-\boldsymbol{\chi}\bigg(\sum_{k\in \mathcal{K_{I}}} a_{j,n,k}- 1\bigg) \nonumber \\ &-\boldsymbol{\phi}\bigg(\sum_{k\in \mathcal{K}}~\sum_{n\in \mathcal{N}}\tilde{q}_{j,n,k} - p_{max}\bigg)\nonumber\\ &+\boldsymbol{\zeta}\bigg(\sum_{n\in \mathcal{N}}\log_2\big(1+\frac{\tilde{q}_{j,n,k}\ |h^{ID}_{j,n,k}|^2}{{|\sigma_{j,n,k}^{{ID}}|^{^2}}+I^{n}_{max}}\big)-R_{min}\bigg) \nonumber \\ \nonumber &+\boldsymbol{\tau}\bigg(\epsilon_{j,k}\sum_{n \in \mathcal{N}} \tilde{q}_{j,n,k}|g^{EH}_{j,n,k}|^{2}-\text{EH}_{min}\bigg)\\ &-\boldsymbol{\theta}\big(\sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}}\sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}}\tilde{q}_{j',n,k'}|h^{ID}_{j',n,k}|^2- I^{n}_{max}\big), \end{align} where $\boldsymbol{\chi},~\boldsymbol{\phi},~\boldsymbol{\zeta},~\boldsymbol{\tau},$ and $\boldsymbol{\theta}$ are the Lagrange multiplier vectors associated with the constraints. Specifically, the Lagrange multiplier vector with respect to the OFDMA constraint $C_{1}$ has elements $\chi_{j,n}$, where $j \in \{1,2,...,J\}$ and $n \in \{1,2,..., N\}$. $\boldsymbol{\phi}$ is the Lagrange multiplier vector accounting for the maximum transmit power constraint $C_{2}$, with elements $\phi_{j}$, where $j \in \{1,2,...,J\}$. The Lagrange multiplier vector $\boldsymbol{\zeta}$ corresponding to the data-rate constraint $C_{3}$ possesses elements $\zeta_{j,k}$, where $j \in \{1,2,...,J\}$ and $k \in \mathcal{K}^{ID}_{j}$.
The vector $\boldsymbol{\tau}$ is the Lagrange multiplier vector for the constraint $C_{4}$ with elements $\tau_{j,k}$, where {$j \in \{1,2,...,J\}$ and $k \in \{1,2,..., \mathcal{K_{E}}\}$}. Finally, the Lagrange multiplier vector for the interference threshold constraint $C_{6}$ is $\boldsymbol{\theta}$, which has components $\theta_{j,n,k}$, where ${j \in \{1,2,...,J\}}$,~${n \in \{1,2,..., N\}}$, and ${k \in \{1,2,..., \mathcal{K_{I}}\}}$. It should be pointed out that the boundary constraints are absorbed into the Karush-Kuhn-Tucker (KKT) conditions when deriving the resource allocation policy. Thus, the dual problem of (\ref{F4-32:main}) is given by \begin{align} \min_{\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta}} \ \ \max_{\textbf{a},\mathbf{p},\tilde{\textbf{q}}}\ \ \mathcal{L}(\textbf{a},\mathbf{p},\tilde{\textbf{q}},\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta}). \label{F4-39} \end{align} In the following, we solve the above dual problem iteratively by decomposing it into two layers: Layer 1 consists of subproblems with identical structures, while Layer 2 is the master dual problem, solved with the gradient method. \textbf{\textit{Dual Decomposition and Layer 1 Solutions}:} By dual decomposition, the first layer can be written as \begin{align} \mathcal{D}(\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta})= \max_{\textbf{a},\mathbf{p},\tilde{\textbf{q}}} \ \ \mathcal{L}(\textbf{a},\mathbf{p},\tilde{\textbf{q}},\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta}). \label{F4-40} \end{align} For a fixed set of Lagrange multipliers, (\ref{F4-40}) is a convex optimization problem, for which a unique optimal solution can be obtained.
Taking the derivative of the Lagrangian with respect to $\tilde{\textbf{q}}$ and setting it equal to zero yields the transmit power allocation. Using standard optimization techniques and the KKT conditions, the power allocation for user $k$ on subcarrier $n$ in cell $j$ is obtained as \begin{align} \tilde{q}^*_{j,n,k}= a_{j,n,k}{p}^*_{j,n,k}=a_{j,n,k} \Bigg[\frac{1}{\ln{(2)}} \Bigg( \frac{1+\zeta_{j,k}}{\phi_{j} + \theta_{j,n,k}|h^{ID}_{j,n,k}|^2- \tau_{j,k}\epsilon_{j,k}|g^{EH}_{j,n,k}|^2} \Bigg)- \frac{{|\sigma_{j,n,k}^{{ID}}|^{^2}}+I^{n}_{max}}{|h^{ID}_{j,n,k}|^2}\Bigg]^{+}. \label{F4-41} \end{align} The power allocation has the form of multilevel water-filling. It can be seen that the data-rate constraint prevents energy-inefficient transmission by truncating the water levels~\cite{David-tse}. In order to obtain the optimal subcarrier allocation, we take the derivative of the subproblem's objective function with respect to $a_{j,n,k}$, that is \begin{equation} \frac{\partial{}\mathcal{L} (\textbf{a},\mathbf{p},\tilde{\textbf{q}},\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta})}{\partial a_{j,n,k}} \bigg|_{\tilde{q}^*_{j,n,k}}= \mathscr{S}_{j,n,k}, \end{equation} where $\mathscr{S}_{j,n,k}\geq 0$ can be interpreted as the marginal benefit, as discussed in \cite{975766}, for allocating subcarrier $n$ to user $k$ and is given by \begin{align} \mathscr{S}_{j,n,k}=~ & (1+ \zeta_{j,k}) \Bigg[ \log_2\Bigg(1+\frac{ {p}^*_{j,n,k} |h_{{j,n,k}}|^2} {{|\sigma_{j,n,k}^{{ID}}|^{^2}}+I^{n}_{max}}\Bigg) - \frac{1}{\ln(2)} \Bigg(\frac{ {p}^*_{j,n,k} |h_{j,n,k}|^2}{\tilde{p}^*_{j,n,k} |h_{{j,n,k}}|^2 +{|\sigma_{j,n,k}^{{ID}}|^{^2}}+I^{n}_{max}}\Bigg) \Bigg]\nonumber\\&+ \tau_{j,k}\epsilon_{j,k} {p}^*_{j,n,k} |g^{EH}_{j,n,k}|^2- \theta_{j,n,k}{p}^*_{j,n,k} |h^{ID}_{j,n,k}|^2- \phi_{j}{p}^*_{j,n,k}- \chi_{j,n}.
\label{F4-42} \end{align} The condition $\mathscr{S}_{j,n,k}\geq 0$ has a clear physical meaning: users with a negative marginal benefit on subcarrier $n$ are not selected, as they can only degrade the system performance. Subsequently, the subcarrier allocation should satisfy the following rule \begin{align} \label{F4-43} a^{*}_{j,n,k} = \left\{ \begin{array}{ll} 1,~~~\textrm{if}~~\mathscr{S}_{j,n,k}\geq \chi_{j,n},\\ 0,~~~ \textrm{otherwise}. \end{array} \right. \end{align} \textbf{\textit{Solution of Layer 2 Master Problem}:} To find the optimal subcarrier assignment in (\ref{F4-43}), we must first evaluate the marginal benefit $\mathscr{S}_{j,n,k}$, which depends on the Lagrange multipliers. Therefore, we employ the subgradient method to update the Lagrange multipliers for a given $\tilde{\textbf{q}}$. Hence, we have \begin{align} \phi_{j}^{i+1}&= \bigg[\phi_{j}^{i}+ \alpha_{1} \bigg(\sum_{k\in \mathcal{K}}~\sum_{n\in \mathcal{N}}\tilde{q}_{j,n,k}- p_{max}\bigg)\bigg]^{+}, \label{F4-44}\\ \zeta_{j,k}^{i+1}&= \bigg[\zeta_{j,k}^{i}- \alpha_{2} \bigg(\sum_{n \in \mathcal{N}} \log_2\Big(1+\frac{\tilde{q}_{j,n,k}\ |h_{j,n,k}|^2}{|\sigma_{j,n,k}^{{ID}}|^{^2}+ I^{n}_{max}}\Big)-R_{min}\bigg)\bigg]^{+}, \label{F4-45}\\ \tau^{i+1}_{j,k}&= \bigg[\tau_{j,k}^{i}+ \alpha_{3} \bigg(\epsilon_{j,k}\sum_{n \in \mathcal{N}}{\tilde{q}_{j,n,k}}|g^{EH}_{j,n,k}|^{2}- \text{EH}_{min}\bigg)\bigg]^{+}, \label{F4-46-1}\\ \theta_{j,n,k}^{i+1}&= \bigg[\theta_{j,n,k}^{i}- \alpha_{4} \bigg(\sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}}\tilde{p}_{j',n,k'}|h^{ID}_{j',n,k}|^2- I^{n}_{max}\bigg)\bigg]^{+}, \label{F4-46} \end{align} where $i\geq 0$ is the iteration index and the $\alpha_{q}$'s, $q \in \{1,2,3,4\}$, are positive step sizes. The details of the low complexity algorithm are sketched in \textbf{Algorithm \ref{A_Low_Complexity_Algorithm_Design}}.
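The closed-form power level (\ref{F4-41}) and the projected subgradient steps (\ref{F4-44})--(\ref{F4-45}) can be illustrated with a minimal Python sketch. The function names and the scalar, per-$(j,n,k)$ interface are ours for illustration only; they are not part of the derivation.

```python
import math

def waterfilling_power(h2, sigma2, i_max, zeta, phi, theta, tau, eps, g2):
    """Closed-form power of eq. (F4-41): multilevel water-filling with
    the [.]^+ projection. All arguments are scalars for one (j,n,k):
    h2 = |h|^2, g2 = |g|^2, sigma2 = noise power, i_max = interference cap."""
    denom = phi + theta * h2 - tau * eps * g2
    if denom <= 0:  # this multiplier combination gives no finite water level
        return 0.0
    level = (1.0 + zeta) / (math.log(2) * denom)   # water level set by multipliers
    floor = (sigma2 + i_max) / h2                  # inverse effective channel gain
    return max(level - floor, 0.0)                 # the [.]^+ truncation

def update_phi(phi, step, q_total, p_max):
    """Eq. (F4-44): projected subgradient ascent on the power-budget violation."""
    return max(phi + step * (q_total - p_max), 0.0)

def update_zeta(zeta, step, rate, r_min):
    """Eq. (F4-45): the multiplier shrinks when the rate constraint has slack."""
    return max(zeta - step * (rate - r_min), 0.0)
```

A stronger channel lowers the floor below the water level and receives positive power, while a deeply faded channel is truncated to zero, which is exactly the multilevel water-filling behavior described above.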
\begin{algorithm}[H] \caption{Low Complexity Power Control and Subcarrier Assignment} \label{A_Low_Complexity_Algorithm_Design} \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\{ \begin{addmargin}[1em]{0em} {iteration index $i=0$ with the maximum number of iterations $\mathcal{I}_{max}$\\ and Lagrange multiplier vectors $\boldsymbol{\chi},\boldsymbol{\phi},\boldsymbol{\zeta},\boldsymbol{\tau},\boldsymbol{\theta}$} for a feasible set vector \{$\mathbf{a}^0,{\mathbf{p}}^0$,$\tilde{\mathbf{q}}^0$\}. \end{addmargin}} \STATE{\textbf{repeat} } \STATE{ \begin{addmargin}[1em]{0em} Update power allocation policy using (\ref{F4-41}). \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Calculate $\mathscr{S}_{j,n,k}$ based on (\ref{F4-42}) and find optimal subcarrier assignment using (\ref{F4-43}). \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Update Lagrange multiplier vectors based on (\ref{F4-44})-(\ref{F4-46}). \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set $i=i+1$. \end{addmargin}} \STATE \textbf{until} Convergence or $i=\mathcal{I}_{max}$ \STATE{\textbf{return}~\{$\mathbf{a}^{*},{\mathbf{p}}^{*}$,$\tilde{\mathbf{q}}^*$\}} \end{algorithmic} \end{algorithm} \section{Simulation Results} In this section, the performance gain of the proposed subcarrier and power allocation algorithms for SWIPT in the DL direction of a multi-cell multi-user OFDMA system is evaluated through extensive simulations. There are $J=3$ cells in the network topology with $K_{j}=4$ users in each cell. Of the four users in each cell, two are uniformly and randomly located inside the inner circle while two are in the outer zone of the ring-shaped boundary region, i.e., $K^{EH}_j=K^{ID}_j=2$, $\forall j \in \mathcal{J} = \{1,2,3\}$. We set the radius of a cell, $d_{max}$, as 20 meters, with a reference distance, $d_{0}$, of 5 meters, where the EH users are placed in the interval $(0, d_{0}]$ while the ID users lie in $(d_{0}, d_{max})$.
Additionally, we consider a frequency-selective fading channel and further assume the central carrier frequency is set to 3 GHz. The number of subcarriers is $N = 16$, where the bandwidth of each subcarrier is set to 180 kHz. It should be noted that as the power of the background noise for both EH and ID receivers is rather small compared to the maximum transmit power $p_{max}$, we set {$|\sigma_{j,n,k}^{{EH}}|^{^2} = |\sigma_{j,n,k}^{{ID}}|^{^2} = \sigma^{2} = $ -120 dBm} in all simulations. Since a line-of-sight (LoS) signal is expected in the received signal, the small-scale fading channel is modeled as Rician fading with Rician factor $\rho=3$ dB. \begin{table}[!b] {\caption{Simulation Parameters} \label{chap:4:Simulation_Parameters} \centering \begin{tabular}{|c|c|}\hline {\bf Parameter} & {\bf Value} \\ \hline \hline {Cell coverage ($d_{max}$)} & {$20$ m} \\ {Reference distance ($d_{0}$)} & {$5$ m} \\ {Number of cells ($J$)} & {$3$}\\ {Number of ID users per cell ($K^{ID}_{j}$)} & {$2$}\\ {Number of EH users per cell ($K^{EH}_{j}$)} & {$2$}\\ {Number of subcarriers ($N$)} & {$16$} \\ {Noise power ($\sigma^{2}$)} & {$-120$ dBm} \\ {Subcarrier bandwidth} & {$180$ kHz}\\ {Path loss exponent ($\alpha$)} & {$2.76$} \\ {Path loss model for cellular links} & {$31.7+27.6 \log(\frac{d}{d_0})$ dB} \\ {Multi-path fading distribution}& {Rician fading with factor 3~dB}\\ {Power conversion efficiency ($\epsilon$)}& {30\%}\\ Maximum SBS transmit power ($p_{\textrm{max}}$) & {$30$ dBm} \\ Minimum data-rate requirement per ID user~($R_{\textrm{min}}$) & $1$ bps/Hz \\ {Minimum harvested energy requirement ($\text{EH}_{min}$)}& {$0$~dBm}\\ {Maximum interference threshold ($I^n_{max}$)}&$-70$~dBm\\ Number of channel realizations & $100$\\\hline \end{tabular}} \end{table} Moreover, the Rician flat fading channel gains include a distance-dependent path loss component of $31.7+10 \alpha \log(\frac{d}{d_0})$ [dB] (where
$d$ is the distance between the transmitter and the receiver) and a log-normal shadowing component with~$8$ dB standard deviation, where the path loss exponent is equal to $\alpha = 2.76$~\cite{339880}. These parameters for propagation modeling and simulations follow the suggestions in 3GPP evaluation methodology~\cite{chap3:3GPP}. The power conversion efficiency of all EH users, $\epsilon_{j,k}$, is assumed to be the same and is equal to $\epsilon_{j,k} = \epsilon = 0.3 $. The target transmission rate is $R_{min}=1$ bps/Hz for each ID user unless otherwise stated. Each EH user must harvest at least EH$_{min}$. Besides, a maximum interference threshold of -70 dBm is considered for the low-complexity algorithm~\cite{6294504}. Furthermore, we conduct Monte Carlo simulations by generating random realizations of the channel gains to obtain the average data-rate of the network. The channel gain between a transmitter and a receiver is drawn from independent and identically distributed Rician flat fading, and the figures shown in this section average the results over different realizations of path loss as well as multi-path fading. The rest of the simulation parameters are given in \textbf{Table}~(\ref{chap:4:Simulation_Parameters}) unless otherwise specified. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap4/Fig_4_1.pdf} \caption{Average sum data-rate versus minimum data-rate requirement.} \label{plot:4.1} \end{figure} \subsection{Average Sum Data-rate versus Minimum Data-rate Requirement} Figure (\ref{plot:4.1}) depicts the average sum data-rate versus the minimum data-rate requirement. As $R_{min}$ increases, the average sum data-rate decreases.
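For reproducibility, the propagation model above can be sketched as a single-draw channel generator. This is a simplified, stdlib-only illustration under the stated parameters (31.7 + 27.6 log10(d/d0) dB path loss, 8 dB log-normal shadowing, Rician fading with a 3 dB factor); the function name and its defaults are ours, not from the thesis.

```python
import math
import random

def channel_gain(d, d0=5.0, rician_k_db=3.0, shadow_db=8.0, rng=random):
    """One draw of the squared channel magnitude |h|^2 for a link of length d:
    distance-dependent path loss, log-normal shadowing, Rician small-scale
    fading with unit mean power."""
    pl_db = 31.7 + 27.6 * math.log10(d / d0)   # distance-dependent path loss
    pl_db += rng.gauss(0.0, shadow_db)         # log-normal shadowing (in dB)
    k = 10 ** (rician_k_db / 10)               # Rician factor in linear scale
    los = math.sqrt(k / (k + 1))               # deterministic LoS component
    nlos = math.sqrt(1 / (k + 1)) * complex(rng.gauss(0, math.sqrt(0.5)),
                                            rng.gauss(0, math.sqrt(0.5)))
    fading = abs(los + nlos) ** 2              # unit-mean Rician power gain
    return fading * 10 ** (-pl_db / 10)
```

Averaged over many draws, a user near the reference distance sees a far larger gain than one at the cell edge, which is what drives the EH/ID user placement in the simulations.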
The reason is that when $R_{min}$ is high, more subcarriers must be assigned to ID users to satisfy the minimum data-rate requirement, especially to users whose channels are in a deep fade. Higher transmit power is also required to reach the target data-rate. However, more transmit power also means more interference, which obstructs data transmission: as the maximum transmit power increases, the co-channel interference becomes more severe, degrading the received signals and substantially decreasing the achievable data-rate. We also observe that \textbf{Algorithm~\ref{euclid}} outperforms both \textbf{Algorithm~\ref{A_Low_Complexity_Algorithm_Design}} and the alternative search method (ASM). The proposed \textbf{Algorithm~\ref{euclid}} performs considerably better not only because it carries out a joint resource allocation, but also because it treats the interference term as a variable in the data-rate functions, which captures the dependency between power and subcarrier allocation. For the ASM, we employ a heuristic search in which the joint subcarrier assignment and power allocation problem is decoupled to maximize the system throughput, based on~\cite{jalal}. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap4/Fig_4_2.pdf} \caption{Average sum data-rate versus maximum transmit power.} \label{plot:4.2} \end{figure} \subsection{Average Sum Data-rate versus Maximum Transmit Power} Figure (\ref{plot:4.2}) shows the average system data-rate versus the maximum allowed transmit power, $p_{max}$, of the SBSs. It can be seen from this figure that the average sum data-rate increases as the maximum transmit power is raised. We also observe that the average sum data-rate grows monotonically up to 35 dBm.
Nevertheless, the slope of the average sum data-rate curve declines as the maximum transmit power takes larger values. In particular, the average sum data-rate starts to saturate when $p_{max}\geq 35$ dBm. An important point should be kept in mind: increasing the transmit power does not improve the data-rate indefinitely. More transmit power also intensifies the interference that hampers information transmission, and these excess co-channel interference terms enter the data-rate functions negatively. Therefore, the slope of the sum data-rate eventually decreases, which bounds the maximum achievable data-rate to an almost constant value. Furthermore, for comparison and evaluation of our proposed method, we consider two baseline schemes. Baseline scheme 1 is based on decoupling the subcarrier assignment and power allocation variables, dividing the original problem into two disjoint optimization problems~\cite{jalal}. In baseline scheme~2, only power allocation is performed while the subcarrier assignment is done randomly. It can be seen that the proposed \textbf{Algorithm~\ref{euclid}} outperforms the other methods because it solves the optimization problem jointly via the MM approach, which yields a close-to-optimal solution. \subsection{Average Sum Data-rate Versus Number of Iterations} In figure (\ref{plot:4.4}), we examine the convergence behavior of our proposed iterative method via the MM approach under different power initializations. It can be observed that \textbf{Algorithm~\ref{euclid}} converges quickly under equal power allocation over all subcarriers, that is $\mathbf{p}^0(i) = \frac{p_{max}}{N}$, whereas it requires a few more iterations for the extreme case of zero initial power, i.e., $\mathbf{p}^0(i) = 0$.
This figure also demonstrates that even though the speed of convergence differs from one case to another, our proposed method converges to a stationary point after only a small number of iterations. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap4/Fig_4_4.pdf} \caption{Average sum data-rate versus number of iterations under different power initializations.} \label{plot:4.4} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=12cm]{figures/chap4/Fig_4_3.pdf} \caption{Average sum data-rate versus number of iterations under different numbers of subcarriers.} \label{plot:4.3} \end{figure} To give another perspective, figure (\ref{plot:4.3}) shows the average sum data-rate versus the number of iterations for different numbers of subcarriers. It can be seen that as the number of subcarriers increases, the average sum data-rate increases as well. This is because, with more subcarriers, an ID user is more likely to find subcarriers with high channel gains, leading to higher system throughput in agreement with the channel diversity effect. Another interesting observation in this figure is the convergence behavior of our proposed algorithm. It should be emphasised that the number of subcarriers affects the convergence behavior of \textbf{Algorithm~\ref{euclid}}: the larger the number of subcarriers, the more iterations our proposed algorithm needs to converge. The reason can be attributed to an expansion of the search region of the optimization problem in (\ref{F4-26-}). With more binary subcarrier allocation variables to be assigned to ID users, extra iterations are required for the MM-based algorithm to settle at a stationary point.
\subsection{Average Sum Data-rate versus Number of Cells} Figure (\ref{plot:4.5}) investigates the sum data-rate of $J$ small-cells when the number of small-cells varies from 3 to 7. As the number of small-cells grows, the average sum data-rate improves, although the total number of subcarriers is fixed in the entire network. This is because each subcarrier has more candidate users across the small-cells to choose from (according to the channel quality between the SBS and each ID user) when the number of small-cells increases. This is known as multi-user diversity. Consequently, each small-cell's throughput improves, which enhances the average sum data-rate of the whole network. Moreover, we can further conclude from the same figure that \textbf{Algorithm~\ref{euclid}} performs better than the other algorithms thanks to its joint resource allocation design. \begin{figure}[!t] \centering \includegraphics[width=12cm]{figures/chap4/Fig_4_5.pdf} \caption{Average sum data-rate versus number of cells.} \label{plot:4.5} \end{figure} \section{Summary} In this chapter, we studied throughput, or data-rate, maximization for indoor SWIPT-enabled OFDMA multi-user multi-cell networks. In particular, we considered the separated receiver architecture in which the ID and EH users are separated in the coverage area of an SBS and located in two distinct regions. Taking into account the subcarrier assignment and power allocation, a resource allocation problem was formulated to maximize the data-rate while respecting the minimum required data-rate for each ID user and the minimum harvested energy for each EH user. The underlying problem was a non-convex MINLP. We employed the MM approach, where a surrogate function serves to approximate the non-convex term. Since the computational complexity of the MM approach was high, we also proposed a suboptimal subcarrier assignment and power allocation algorithm to solve the problem with lower complexity.
Through simulation results, we demonstrated the excellent performance of our proposed algorithms compared to state-of-the-art algorithms from the literature. Furthermore, numerical results clearly demonstrated that our proposed scheme substantially improves the system throughput while converging to a stationary point after only a small number of iterations. \chapter{Generalized Antenna Switching Technique in SWIPT} \label{CHAP5} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{\leftmark}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{\leftmark}} \fancyfoot[LE,RO]{JFMJ} \vspace{15mm} Most works on simultaneous wireless information and power transfer (SWIPT) focus either on maximizing the harvested energy, as discussed in chapter~\ref{CHAP3}, or on maximizing the throughput, as in chapter~\ref{CHAP4}. Nevertheless, deploying algorithms to reap more harvested energy across the network topology adversely affects information transfer, which in turn causes the quality of service (QoS) of the system to degrade. Besides, global commitments to sustainable development are contravened if the system design merely seeks to maximize the spectral efficiency (SE) by improving throughput in light of the inexorable increase of network power consumption. Therefore, energy efficiency (EE) maximization is the focus of this chapter. It should be noted that EE, conventionally defined as the quantity of transmitted information bits per unit energy (bits/joule), and the need for a high SE are key performance indicators in communication networks. More features can be added to SWIPT networks by employing multiple antennas at the transmitter and receiver – commonly referred to as multiple-input multiple-output (MIMO) systems.
With respect to receivers, multiple receive antennas can help users harvest more energy because of the broadcast nature of wireless transmission. Furthermore, the efficiency of information and energy transfer can be significantly improved by using multiple transmit antennas. It is therefore not surprising that MIMO is regarded as a promising technology – not only for improving network throughput and radio communication system reliability, but also because of the distinct features of SWIPT systems. However, pre-processing and post-processing are required at both transmitters and receivers in multiple antenna systems, which increases the cost and complexity of system designs. Various antenna selection methods have been proposed in~\cite{jalal,13,21} as low-cost, simpler solutions that exploit the performance gain promised by multiple antenna systems by achieving a diversity gain. \begin{figure}[!t] \vspace*{10mm} \centering \hspace*{-7mm} \includegraphics[width=17.1cm,trim=4 4 4 4,clip] {figures/chap5/Chap5-system-model0-F1.pdf} \caption{Antenna selection architecture.} \label{fig:5-1} \end{figure} The fundamental idea of antenna selection is to assign the limited available radio frequency (RF) chains to the transmit and receive antennas that provide the wireless links with the most reliable signal-to-noise ratio (SNR). The building block of an antenna selection unit is depicted in figure (\ref{fig:5-1}). Since antenna selection provides higher flexibility to the system operator while also offering a better policy for resource allocation, we briefly explain its basic working principle below. As can be seen, the input data stream is mapped to the corresponding symbol data at the transmitter by the symbol mapping block.
A symbol data frame, generated by the symbol mapping block, goes to the subcarrier allocation block, which allocates the data to the antenna selected by the RF switch associated with a specific subcarrier. Then the output sequences from the subcarrier allocation block are applied to inverse fast Fourier transform (IFFT) blocks, and a guard interval (GI) is added to each time-domain signal being transmitted by its respective transmit antenna. There are two basic approaches to deploying antenna selection in OFDMA systems: bulk selection and per-subcarrier selection. In bulk selection, the same antennas are chosen for all subcarriers, whereas in per-subcarrier selection the antennas for each subcarrier are selected independently~\cite{6111188}. The same principle holds for antenna selection at the receiver. This subject is developed in \cite{7100915}. The uplink (UL) of the fourth generation (4G) standard of long term evolution, known as LTE-Advanced, uses the antenna selection technique because of its low implementation cost and the small amount of feedback required – compared to existing techniques such as beamforming and precoding \cite{6316788}. To the authors' best knowledge, antenna selection in SWIPT networks with a focus on resource allocation design has not yet been studied. We therefore present a theoretical study of an EE optimization problem that considers an antenna selection approach in a SWIPT system. In the introduction we defined the antenna switching (AS) scheme as a SWIPT enabler, whereby the user is equipped with independent antennas for energy harvesting (EH) and information decoding (ID) operations. The antenna selection technique at the receiver can be viewed as a generalization of the AS scheme in a co-located SWIPT network with distinct subsets of antennas that could be assigned to EH and ID operations if permitted by channel quality and the channel state information (CSI). We refer to this observation as a “generalized AS technique” in SWIPT. 
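The difference between bulk and per-subcarrier selection can be made concrete with a small sketch. The helper names are ours and the channel gains are abstracted to a simple antennas-by-subcarriers table of values; this is an illustration of the two selection rules, not the thesis's resource allocation algorithm.

```python
def bulk_selection(gains):
    """Bulk selection: one antenna for all subcarriers, chosen as the
    antenna with the largest total gain. gains[m][n] is the channel
    gain of antenna m on subcarrier n."""
    best = max(range(len(gains)), key=lambda m: sum(gains[m]))
    return [best] * len(gains[0])

def per_subcarrier_selection(gains):
    """Per-subcarrier selection: pick the best antenna independently
    on each subcarrier."""
    n_sub = len(gains[0])
    return [max(range(len(gains)), key=lambda m: gains[m][n])
            for n in range(n_sub)]
```

Since per-subcarrier selection optimizes each subcarrier separately, its total selected gain can never fall below that of bulk selection, at the cost of faster RF switching.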
By this we mean that the generalized AS acts as a “switch” in the operation mode of the antenna; each antenna is capable of both ID and EH operations. To further explore the EE of the generalized AS-based SWIPT system, we study this scheme in a downlink (DL) of a multi-user multi-cell orthogonal frequency division multiple access (OFDMA) network. Intuitively, a higher data-rate and more harvested energy would be expected if more receive antennas were activated. The optimization problem of this chapter maximizes EE subject to the minimum data-rate requirement and the maximum transmit power constraint. This is achieved by jointly optimizing the subcarrier assignment and power allocation together with the selection of the active receive antenna set. This particular EE optimization problem is complicated because it is non-convex and fractional-combinatorial. We begin by dividing the main problem into two subproblems. The first subproblem concerns the joint small base station (SBS)-subcarrier assignment and power allocation, while the second subproblem seeks to determine, based on the chosen scheduling (joint SBS-subcarrier assignment and power allocation), the best antennas for ID or EH operations. We confirm the validity of our theoretical findings for the generalized AS-based SWIPT systems by providing simulation results to draw design insights and demonstrate how our proposed algorithm achieves excellent performance while also reducing the computational cost at the receiver. \section{System Model} In this section, we consider a DL of an OFDMA network in a multi-user multi-cell network using SWIPT, as shown in figure (\ref{fig:5.1}). We assume that the coverage area of the specific region is provided via a set of $j \in {\mathcal{J} = \{1,2,...,J\}}$ cells with one serving SBS in each cell.
Moreover, only a single transmit antenna serves the users in the associated cell, even though all SBSs are equipped with multiple antennas. We additionally assume that the entire frequency band $\mathscr{B}$ is divided into $N$ orthogonal subcarriers, each having a bandwidth of $\mathscr{W}$. All cells share the subcarrier set $\mathcal{N}=\{1,2,..,N\}$, where $|\mathcal{N}|=N$ indicates the total number of subcarriers. The set of all users in a given cell $j$ is represented by $\mathcal{K}_{j}=\{1,2,...,K_j\}$, where the total number of users in that cell is $|\mathcal{K}_{j}|=K_j$. Furthermore, $K=\sum_{j\in\mathcal{J}}K_j$ gives the total number of users in the network. Besides, each user is equipped with multiple antennas, where the set of antennas is represented by $m \in \mathcal{M}=\{1,2,...,M\}$ with $|\mathcal{M}|=M$ for each user. It is also assumed that perfect CSI is available at the resource allocator to design the resource allocation policy. Note that there is a centralized controller that is connected to all the SBSs. Specifically, it is presumed that each SBS broadcasts orthogonal preambles (pilot signals) in the DL to the users. Then, through a feedback channel, each user estimates the CSI and transfers this information back to the associated SBS. Afterward, the corresponding SBS listens to the sounding reference signals communicated by the users and sends the CSI to the centralized controller for resource allocation design. \begin{figure}[p] \centering \includegraphics[width=21cm,trim=4 4 4 4,clip,angle=90] {figures/chap5/Chap5-system-model1-F1.pdf} \caption{SWIPT in a DL of a multi-user multi-cell OFDMA network with generalized AS receivers.} \label{fig:5.1} \end{figure} Furthermore, with multiple antennas at each user, the best antenna can be selected for either ID or EH operation on each subcarrier based on the optimization problem.
However, these two operation modes cannot run on the same antenna at the same time. Besides, we assume a per-subcarrier selection method for the generalized AS approach in this chapter. In this regard, the assigned subcarriers can receive signals from different antennas, providing additional degrees of freedom and increasing the throughput of the system.~For the sake of readability, we first introduce some of the essential parameters that are used to describe the system model: \begin{itemize} \item $h^{m}_{j,n,k}$: The DL channel gain for the wireless information transfer from the $j^{th}$ SBS to user $k$ using its $m^{th}$ antenna over the $n^{th}$ subcarrier. \item $g^{m}_{j,n,k}$: The DL channel gain for the wireless power transfer from the $j^{th}$ SBS to user $k$ using its $m^{th}$ antenna over the $n^{th}$ subcarrier. \item $x^{m}_{j,n,k}$: Binary antenna selection indicator from the $j^{th}$ SBS to the $k^{th}$ user over the $n^{th}$ subcarrier when the $m^{th}$ antenna is selected for harvesting energy. \item $a_{j,n,k}$: Binary subcarrier indicator from the $j^{th}$ SBS to the $k^{th}$ user when the $n^{th}$ subcarrier is selected. \item $p_{j,n,k}$: The corresponding transmit power from the $j^{th}$ SBS to the $k^{th}$ user on the $n^{th}$ subcarrier. \end{itemize} The generalized AS technique is performed to distinguish between the information and power transfer signals. Through this methodology, the receiver antennas can be separated into two groups.
One group of antennas is used for harvesting energy, whereas the other group handles wireless information reception.~In fact, each antenna of each user can switch its operation mode on different subcarriers for either ID or EH.~Then, the received ID signal from the $j^{th}$ SBS to the $k^{th}$ user over the $n^{th}$ subcarrier is given by \begin{align} y^{\text{ID}}_{j,n,k} & = \sum_{m \in \mathcal{M}} (1-x^{m}_{j,n,k})a_{j,n,k}\sqrt{p_{j,n,k}}h^{m}_{j,n,k} \nonumber \\ & \quad + \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \sum_{\substack{m'\neq m \\ m' \in \mathcal{M}}} (1-x^{m'}_{j,n,k})a_{j',n,k'}\sqrt{p_{j',n,k'}}h^{m'}_{j',n,k} +z^{\text{ID}}_{j,n,k}, \end{align} where $z^{ID}_{j,n,k}$ is the additive white Gaussian noise (AWGN) at the $k^{th}$ user when its ID antennas are switched on, a random variable with zero mean and variance $|\sigma_{j,n,k}^{{ID}}|^{^2}$ denoted by $z^{ID}_{j,n,k} \sim \mathcal{CN}(0,|\sigma_{j,n,k}^{{ID}}|^{^2})$. Moreover,~the EH signal from the $j^{th}$ SBS for user $k$ over subcarrier $n$ is given by \vspace{5mm} \begin{align} &y^{EH}_{j,n,k}= \sum_{m \in \mathcal{M}}(x^{m}_{j,n,k}) \sqrt{p_{j,n,k}}g^{m}_{j,n,k} \nonumber\\& \quad \quad \quad + \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \sum_{\substack{m'\neq m \\ m' \in \mathcal{M}}} (x^{m'}_{j,n,k})\sqrt{p_{j',n,k'}}g^{m'}_{j',n,k}+z^{EH}_{j,n,k}, \label{F5-2} \end{align} where $z^{EH}_{j,n,k}$ is the additive white Gaussian noise with a circularly symmetric Gaussian distribution, referred to as $z^{EH}_{j,n,k} \sim \mathcal{CN}(0,|\sigma_{j,n,k}^{{EH}}|^{^2})$, at the $k^{th}$ user when its EH antennas are activated.
According to the Shannon capacity formula, the data-rate of the user $k$ using its $m^{th}$ antenna over the subcarrier $n$ inside the cell $j$ can be written as \begin{equation} R^{m}_{j,n,k}= \log_2 \bigg( 1+ \frac{a_{j,n,k}p_{j,n,k}|h^{m}_{j,n,k}|^{2}} {{|\sigma^m_{j,n,k}|}^2+I_{j,n,k}} \bigg), \end{equation} where \begin{align} I_{j,n,k}= \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} a_{j',n,k'}{p_{j',n,k'}}|h^{m}_{j',n,k}|^{2}, \end{align} is the co-channel interference on the subcarrier $n$, emitted by unintended transmitters sharing the same frequency channel. To facilitate the presentation, we denote by $\textbf{p} \in \mathbb{R}^{1\times JNK}$, $\textbf{a} \in \mathbb{Z}^{1\times JNK}$, and $\textbf{x} \in \mathbb{Z}^{1\times JNK}$ the optimization variable vectors for power allocation, subcarrier assignment, and antenna selection, respectively.~Consequently, the data-rate of the user $k$ when using all its active ID antennas can be stated as \begin{align} R_k(\mathbf{a},\mathbf{x},\mathbf{p}) = \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} (1-x^{m}_{j,n,k})R^{m}_{j,n,k}. \end{align} \begin{figure}[p] \centering \vspace{-5mm} \includegraphics[width=20cm,trim=4 4 4 4,clip,angle=90] {figures/chap5/Chap5-system-model2-F1.pdf} \caption{SWIPT in a DL of an OFDMA network consisting of $J=2$ small cells, where there is one user in the intersection of the two cells, i.e., $\mathcal{K}_1=1$. The user has four antennas; two of which are employed for ID and the rest for EH.
Also, the user is served in the first cell and receives interference from the second cell.} \label{fig:5.2}% \end{figure} Hence, the total system throughput, denoted by $R^{\textrm{Total}}(\mathbf{a},\mathbf{x},\mathbf{p})$, is obtained as \begin{equation} R^\textrm{Total}(\mathbf{a},\mathbf{x},\mathbf{p}) = \sum_{j \in \mathcal{J}} \sum_{k\in \mathcal{K}} R_k(\mathbf{a},\mathbf{x},\mathbf{p}). \end{equation} On the other hand, to compute the total power consumption of the network, we use the following linear model, in which the transmit power consumption, the circuit power consumption, and the harvested power are taken into account. This model contains coefficients that represent the efficiency of the power amplifiers in network devices as well as the power conversion efficiency of the EH antennas, as explained below. In particular, the total power consumption of the considered system, $P^{\textrm{Total}}(\textbf{a},\textbf{x},\textbf{p})$, consists of three major terms and can be expressed as \begin{align} P^{\textrm{Total}}(\textbf{a},\textbf{x},\textbf{p}) = \bigg( \sum_{j \in \mathcal{J}} \sum_{k\in \mathcal{K}} \sum_{n\in \mathcal{N}} \big( \frac{a_{j,n,k}p_{j,n,k}} {\kappa_j}+P_c^\mathrm{SBS} \big) \bigg) -P^{\textrm{EH}}(\textbf{x},\textbf{p}), \end{align} where \begin{equation}\label{Chap5:5-5} P^{\textrm{EH}}(\textbf{x},\textbf{p})= \sum_{j \in \mathcal{J}} \sum_{k\in \mathcal{K}} \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} \epsilon^m_{j,k}x^{m}_{j,n,k}{p_{j,n,k}}|g^{m}_{j,n,k}|^{2}, \end{equation} is the total harvested power in the network, collected by the active EH antenna set of each user. In the above equation, $P^\mathrm{SBS}_\mathrm{c}$ denotes the circuit power consumption of the SBSs, and $\kappa_j \in (0,1)$ is the power amplifier efficiency of the $j^{th}$ SBS.
Moreover, $0<\epsilon^m_{j,k}<1$ is the power conversion efficiency for the $m^{th}$ active EH antenna of the $k^{th}$ receiver in cell $j$, as introduced in the previous chapters. It should also be noted that the contribution of the noise power, i.e., ${|\sigma^m_{j,n,k}|}^2$, to the $P^{\text{EH}}$ formula in (\ref{Chap5:5-5}) is ignored, since its value is very small compared to the other term in (\ref{Chap5:5-5}). In this chapter, we define the EE as the ratio of the system throughput to the corresponding network energy consumption in bits/joule, and denote it by $\mathscr{E}_{eff}(\mathbf{a},\mathbf{x},\mathbf{p})$, where \begin{align} \label{F5-6} \mathscr{E}_{eff}(\mathbf{a},\mathbf{x},\mathbf{p}) = \frac{R^\textrm{Total}(\mathbf{a},\mathbf{x},\mathbf{p})} {P^\textrm{Total}(\mathbf{a},\mathbf{x},\mathbf{p})}. \end{align} In what follows, we first formulate the problem to maximize the EE while considering the feasibility of the transmit power, the minimum data-rate requirement, and the OFDMA constraint in a multi-user multi-cell network with SWIPT. Then, we propose a solution to the resulting optimization problem. \section{Optimization Problem Formulation} Although EE is an effective resource allocation metric in cellular networks, it has limitations that may make it undesirable in some modern applications. In energy-efficient algorithms, the main goal is to maximize the system throughput and minimize the corresponding energy consumption simultaneously, without differentiating between the priorities of these competing objectives. In this section, we first formulate the optimization problem of joint SBS-subcarrier assignment and power allocation together with antenna selection for the EE maximization of a multi-user multi-cell OFDMA network with a generalized AS-based SWIPT framework. Afterward, we present our proposed algorithm to solve the stated problem.
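Before the formal problem statement, the EE metric in (\ref{F5-6}) can be evaluated numerically. The following Python sketch (a single cell, user, subcarrier, and ID antenna; all parameter values are hypothetical) computes the rate, the linear power model, and their ratio:

```python
import math

# Minimal numeric sketch of the EE metric in bits/joule (per Hz): a single
# cell, user, subcarrier, and ID antenna, with hypothetical toy values.
a, p = 1, 2.0                  # subcarrier assigned, transmit power (W)
h2 = 0.5                       # |h|^2, channel power gain (assumed)
sigma2, I = 0.1, 0.15          # noise power and co-channel interference (W)
kappa, P_c = 0.4, 0.5          # amplifier efficiency, SBS circuit power (W)
P_eh = 0.2                     # harvested power credited back to the network

R = math.log2(1 + a * p * h2 / (sigma2 + I))   # Shannon rate, bits/s/Hz
P = a * p / kappa + P_c - P_eh                 # linear power model, W
ee = R / P                                     # energy efficiency
print(round(ee, 4))
```

Raising $p$ increases $R$ only logarithmically while the denominator grows linearly, which is why the EE-optimal power is generally strictly below $p_{max}$.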
Hence, we introduce the following optimization problem to maximize the system EE \begin{subequations} \begin{align} &\max_{\textbf{a},\textbf{x},\textbf{p}} \mathscr{E}_{eff}(\mathbf{a},\mathbf{x},\mathbf{p}) \label{F5-7}\\ s.t.: &~C_{1}: \sum_{k\in \mathcal{K}} a_{j,n,k}\leq 1,~~~~~~~~~~~~~~~~~~~~ \forall {j \in \mathcal{J},~ \forall n\in \mathcal{N}}, \label{F5-8}\\ &~C_{2}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} a_{j,n,k}\ p_{j,n,k}\leq p_{max},~ \forall j \in \mathcal{J}, \label{F5-9}\\ &~C_{3}: \sum_{j\in \mathcal{J}} R_k(\mathbf{a},\mathbf{x},\mathbf{p}) \geq R_{min},~~~~~~~~ \forall k \in \mathcal{K},\label{F5-10}\\ &~C_{4}: \sum_{m\in \mathcal{M}} x^{m}_{j,n,k}=1,~~~~~~~~~~~~~~~~~~ \forall j\in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K}, \label{F5-11}\\ &~C_{5}: a_{j,n,k}\in\{0,1\} ,~~~~~~~~~~~~~~~~~~~~ \forall {j \in \mathcal{J},~ \forall n\in \mathcal{N},~ \forall k\in \mathcal{K}}, \label{F5-12}\\ &~C_{6}: x^{m}_{j,n,k}\in\{0,1\},~~~~~~~~~~~~~~~~~~~~ \forall {j \in \mathcal{J},~ \forall n\in \mathcal{N},~ \forall k\in \mathcal{K}, ~ \forall m\in \mathcal{M}}, \label{F5-13}\\ &~C_{7}: a_{j,n,k} + a_{j',n',k}\leq 1,~~~~~~~~~~~~~~ \forall {j\neq j' \in \mathcal{J},~ \forall n,n' \in \mathcal{N}},~ \forall k\in \mathcal{K}. \end{align} \label{chap5:5:main}% \end{subequations} In the optimization problem (\ref{chap5:5:main}), constraint $C_1$ indicates that each subcarrier can be allocated to at most one user. Constraint $C_2$ ensures that the total transmit power of each SBS does not exceed its maximum threshold, denoted by $p_{max}$.~Constraint $C_3$ guarantees the minimum data-rate requirement, $R_{min}$, for each user in the DL of each cell. Constraint $C_4$ indicates that each user utilizes only one antenna in each subcarrier. $C_5$ and $C_6$ indicate that the subcarrier assignment and antenna selection variables take only binary values.
Finally,~$C_7$ states that each user can be assigned to at most one SBS in each subcarrier. It is worth mentioning that this last constraint is the so-called user association in the literature \cite{6845056}. Due to the binary subcarrier assignment and antenna selection variables, the interference included in the data-rate function, and the fractional form of the objective function, the optimization problem~(\ref{chap5:5:main}) is a mixed-integer non-linear programming (MINLP) problem, which is difficult to solve in general. The challenges that make the above optimization problem hard to solve are further explained below: \begin{itemize} \item Fractional form of the objective function: The fractional objective function $\mathscr{E}_{eff}(\mathbf{a},\mathbf{x},\mathbf{p})$ is non-convex. \item Multiplication of two variables: Since the multiplication of two variables is non-convex, the term $a_{j,n,k}p_{j,n,k}$ in constraint $C_2$ poses a challenge in tackling the optimization problem in (\ref{chap5:5:main}). Because the maximum total transmit power constraint $C_2$, the minimum data-rate QoS constraint $C_3$, and the objective function all contain products of the joint SBS-subcarrier assignment and transmit power variables (as given in (\ref{F5-9}), (\ref{F5-10}), and (\ref{F5-7})), these constraints together with the objective function are non-convex. \item Interference: The inter-cell interference incorporated in the data-rate functions makes both constraint $C_3$ and the objective function non-convex. \item Binary antenna selection and SBS-subcarrier assignment variables: The discrete antenna selection and SBS-subcarrier assignment variables turn (\ref{chap5:5:main}) into a complex MINLP problem.
\end{itemize} \section{Solution to the Optimization Problem} To cope with the complexity of the MINLP problem, we decompose the optimization problem (\ref{chap5:5:main}) into two subproblems: 1) joint SBS-subcarrier assignment and power allocation, and 2) antenna selection. In order to find a locally optimal SBS-subcarrier allocation, power assignment, and antenna selection, the following iterative procedure is employed. At the beginning of each iteration, the optimal SBS-subcarrier allocation and power assignment are obtained from the optimal antenna selection of the previous iteration, i.e., $\mathbf{x}^{t-1}$, using the results of layer \ref{Chap5:layer1}.\textbf{A}. Knowing the best SBS-subcarrier allocation and power assignment, the best antenna for either the EH or ID operation is selected by incorporating the results of layer \ref{Chap5:layer2}.\textbf{B}. The corresponding update rule is summarized as follows \begin{align}\label{update} & \underbrace{(\mathbf{a}^{0},\mathbf{p}^{0}) \rightarrow \mathbf{x}^{0}}_{\text{Initialization}} \rightarrow ...\rightarrow \underbrace{(\mathbf{a}^{t-1},\mathbf{p}^{t-1}) \rightarrow\mathbf{x}^{t-1}}_{\text{Iteration \textit{t-}1}} \nonumber\\ \rightarrow & \underbrace{(\mathbf{a}^{t},\mathbf{p}^{t}) \rightarrow\mathbf{x}^{t}}_{\text{Iteration \textit{t}}} \rightarrow ...\rightarrow \underbrace{(\mathbf{a}^{opt},\mathbf{p}^{opt}) \rightarrow\mathbf{x}^{opt}}_{\text{Optimal Solution}}. \end{align} After solving each subproblem by the algorithms discussed in the following two-layer methodology, an iterative procedure is employed in which the solution of the previous subproblem is used as the input of the current one. Through this iterative procedure, we are able to improve the quality of the obtained solution. The iterative algorithm in (\ref{update}) needs an initial setting for $\mathbf{a,x}$, and $\mathbf{p}$.
In order to converge to a locally optimal solution, these initial settings must be appropriately selected. To this end, we first note that the optimization problem (\ref{chap5:5:main}) is feasible, as we can find initial settings $\mathbf{a}^0, \mathbf{p}^0$, and $\mathbf{x}^0$ satisfying the constraints. Such a setting that respects all the constraints can be found as follows. First, for the initial SBS-subcarrier assignment $\mathbf{a}^0$ and power allocation $\mathbf{p}^0$, we assign each SBS-subcarrier to the small-cell user with the highest channel gain, and allocate equal power to all small-cell users across all subcarriers. Then, for the antenna selection $\mathbf{x}^0$, a set of antennas is selected for each small-cell user based on quasi-stationary channel statistics. The selected antennas of each user can then perform either the EH or ID operation accordingly. Equipped with the necessary background, a tractable solution procedure for the original problem (\ref{chap5:5:main}) is described in a two-layer format in the following subsections. \subsection{A. Joint SBS-subcarrier Assignment and Power Allocation}\label{Chap5:layer1} In this subsection, we address the subproblem of SBS-subcarrier assignment and power allocation in order to maximize the EE. Assuming the antenna set selection is given from the previous iteration, we aim to tackle the non-convexity caused by the multiplication of a binary variable with the transmit power in constraints $C_2$ and $C_3$ as well as in the objective function. In order to handle this issue,~we adopt the big-M formulation \cite{big_M} to decouple the product terms.
Therefore, we introduce the following additional constraints into the optimization problem in (\ref{chap5:5:main}) \begin{align} &C_{8}: \tilde{p}_{j,n,k}\leq p_{max}a_{j,n,k},~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &C_{9}: \tilde{p}_{j,n,k}\leq p_{j,n,k},~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &C_{10}: \tilde{p}_{j,n,k}\geq p_{j,n,k}-(1-a_{j,n,k})p_{max},~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &C_{11}: \tilde{p}_{j,n,k}\geq 0,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \forall j \in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K}, \end{align} where $\tilde{\textbf{p}} \in \mathbb{R}^{1\times JNK}$ is the collection of all $\tilde{p}_{j,n,k}$'s. Through this method, we can easily deal with the non-convexity of the multiplication of two variables, both in the objective and in the constraints. Next, we relax the integer SBS-subcarrier assignment variable by converting it into a continuous variable between zero and one as \begin{equation} \dot{C}_{5}:~ 0\leq a_{j,n,k} \leq 1,~\forall j \in \mathcal{J},~\forall n \in \mathcal{N},~\forall k \in \mathcal{K}. \end{equation} Moreover,~inspired by the approach of the previous chapters, we impose the following region on the optimization problem \begin{align} \ddot{C}_{5}:~ \sum_{j \in \mathcal{J}} \sum_{ n \in \mathcal{N}} \sum_{ k \in \mathcal{K}} \left( a_{j,n,k} - (a_{j,n,k})^2 \right) \leq 0.
\end{align} Subsequently, the original optimization problem in (\ref{chap5:5:main}) can be reformulated as follows \begin{subequations} \begin{align} (\mathbf{a}^{t},\tilde{\mathbf{p}}^{t}) = & \arg \max_{\mathbf{a},\mathbf{p},\tilde{\mathbf{p}}} \frac{{\overline{{R}}}^\textnormal{Total} (\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t})} {{\overline{{P}}}^\textnormal{Total} (\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t},\textbf{p}^{t})} \label{F5-13_new}\\ s.t.: &~C_{1},C_{4},\dot{C}_{5},\ddot{C}_{5},{C}_{7}-C_{11},\\ &~C_{2}: \sum_{k\in \mathcal{K}}~ \sum_{n\in \mathcal{N}} \tilde{p}_{j,n,k}\leq p_{max},~~ \forall j \in \mathcal{J}, \\ &~C_{3}: \sum_{j\in \mathcal{J}} \overline{R}_k(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) \geq R_{min},~ \forall k \in \mathcal{K}. \end{align} \label{F5:18}% \end{subequations} Furthermore, the numerator and denominator of the objective function in the above optimization problem (\ref{F5:18}) are as follows \begin{equation} {\overline{R}}^\textrm{Total}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) = \sum_{j\in \mathcal{J}} \sum_{k\in \mathcal{K}} \underbrace{ \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}}(1-x^{m}_{j,n,k}) \log_2 \bigg( 1+\frac{\tilde{p}_{j,n,k}|h^{m}_{j,n,k}|^{2}}{{|\sigma^m_{j,n,k}|}^2+I_{j,n,k}} \bigg)}_{\text{\normalsize $\overline{R}_k(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}})$}}, \end{equation} \begin{align} {\overline{P}}^{\textrm{Total}}(\textbf{a},\textbf{x},\tilde{\mathbf{p}},\textbf{p}) = & \bigg( \sum_{j \in \mathcal{J}} \sum_{k\in \mathcal{K}} \sum_{n\in \mathcal{N}} (\frac{\tilde{p}_{j,n,k}}{\kappa_j}+ P_c^\mathrm{SBS})\bigg)- P^{\textrm{EH}}(\textbf{x},\textbf{p}), \end{align} where \begin{equation} I_{j,n,k}= \sum_{\substack{j'\neq j \\ j' \in \mathcal{J}}} \sum_{\substack{k'\neq k \\ k' \in \mathcal{K}}} \tilde{p}_{j',n,k'}|h^{m}_{j',n,k}|^{2}.
\end{equation} Although we addressed the issue of the coupling variables in constraints $C_2$ (\ref{F5-9}), $C_3$ (\ref{F5-10}), and the objective function, the main problem in (\ref{F5:18}) is still non-convex due to the existence of interference in the logarithmic data-rate functions. In order to handle this problem, we employ the majorization-minimization (MM) algorithm \cite{MM} to make the constraint $C_{3}$ in (\ref{F5-13_new}) and the data-rate function in the objective of (\ref{F5-13_new}) convex, as becomes clear in what follows. To do so, we first restate the constraint $C_3$ as a difference of convex (D.C.) functions \cite{SCA,DC} as \begin{equation}\label{F5-17} \dot{C}_{3}:~ \sum_{j\in \mathcal{J}} \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} \mathcal{U}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) - \sum_{j\in \mathcal{J}} \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} \mathcal{V}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) \geq R_{min},~ \forall k \in \mathcal{K}, \end{equation} where $\mathcal{U}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}})$ and $\mathcal{V}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}})$ are \begin{align} \mathcal{U}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) &= (1-x^{m}_{j,n,k})\log_2\bigg(\tilde{p}_{j,n,k}|h^{m}_{j,n,k}|^{2} + {|\sigma^m_{j,n,k}|}^2+I_{j,n,k}\bigg), \\ \mathcal{V}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) &= (1-x^{m}_{j,n,k})\log_2\bigg({|\sigma^m_{j,n,k}|}^2+I_{j,n,k}\bigg).
\end{align} To obtain a concave approximation for the constraint $\dot{C}_3$, we apply the MM algorithm \cite{MM} and construct a surrogate function for $\mathcal{V}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})$ using the first-order Taylor approximation as \begin{align} \label{F5-18} {{\mathcal{V}}}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) \simeq & \sum_{j\in \mathcal{J}} \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} \mathcal{V}(\mathbf{a}^{(s-1)},\mathbf{x}^{(t-1)},\tilde{\mathbf{p}}^{(s-1)}) \nonumber \\+ & \sum_{j\in \mathcal{J}} \sum_{n\in \mathcal{N}} \sum_{m\in \mathcal{M}} \nabla_{\tilde{\mathbf{p}}}{\mathcal{V}^T}(\mathbf{a}^{(s-1)},\textbf{x}^{(t-1)},\tilde{\mathbf{p}}^{(s-1)}) (\tilde{\mathbf{p}}-\tilde{\mathbf{p}}^{(s-1)}) \triangleq \tilde{{\mathcal{V}}}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}), \end{align} where ${\tilde{\mathbf{p}}}^{(s-1)}$ is the solution of the problem at the $(s-1)^{th}$ iteration, and $\nabla_{\square}$ represents the gradient operator with respect to ${\square}$. In a similar manner, we handle the total data-rate function in the numerator of the objective function of the optimization problem (\ref{F5-13_new}) as \begin{align}\label{F5-16_new} \overline{R}^\textrm{Total}(\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) = \mathscr{U}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})- \mathscr{V}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}}), \end{align} where \begin{equation} \mathscr{U}(\mathbf{a},\mathbf{x}, {\tilde{\mathbf{p}}})= \sum_{j \in \mathcal{J}} \sum_{n \in \mathcal{N}} \sum_{k \in \mathcal{K}} \sum_{m\in \mathcal{M}} \mathcal{U}(\mathbf{a},\mathbf{x},\tilde{{\mathbf{p}}}), \end{equation} and \begin{align} \mathscr{V}(\mathbf{a},\mathbf{x}, {\tilde{\mathbf{p}}})= \sum_{j \in \mathcal{J}} \sum_{n \in \mathcal{N}} \sum_{k \in \mathcal{K}} \sum_{m\in \mathcal{M}} \mathcal{V}(\mathbf{a},\mathbf{x},\tilde{{\mathbf{p}}}).
\end{align} With the same reasoning as before, it can be concluded that the numerator of the objective function in (\ref{F5-16_new}) is not a concave function. However, equation (\ref{F5-16_new}) belongs to the class of D.C. functions. To obtain a concave approximation of (\ref{F5-16_new}), the MM approach is applied again via the first-order Taylor approximation as \begin{align}\label{F5-30} {\mathscr{V}}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}}) & \simeq \mathscr{V}\big(\mathbf{a}^{(s-1)},\mathbf{x}^{(t-1)} , {\tilde{\mathbf{p}}}^{(s-1)}\big) \nonumber \\& + \nabla_{\tilde{\mathbf{p}}} {\mathscr{V}^T\!\big(\mathbf{a}^{(s-1)},\mathbf{x}^{(t-1)} , {\tilde{\mathbf{p}}}^{(s-1)}\big)} \big({\tilde{\mathbf{p}}} - {\tilde{\mathbf{p}}}^{(s-1)}\big) \triangleq \tilde{\mathscr{V}}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}}), \end{align} where the total data-rate of the network can be rewritten as \begin{equation} \widehat{\overline{R}}^\textnormal{Total} (\mathbf{a},\mathbf{x},\tilde{\mathbf{p}}) = \mathscr{U}(\mathbf{a},\mathbf{x}, \tilde{\mathbf{p}})- \tilde{\mathscr{V}}(\mathbf{a},\mathbf{x}, \tilde{\mathbf{p}}). \end{equation} By using the approximation in (\ref{F5-30}), the MM principle is satisfied, which yields a tight lower bound on (\ref{F5-16_new}) \cite{che2014joint,DC}. Now, the optimization problem in (\ref{chap5:5:main}) can be restated as \begin{align}\label{F5-20} (\mathbf{a}^{t},\mathbf{p}^{t}) = & \arg \max_{\mathbf{a},\mathbf{p},\tilde{\mathbf{p}}} \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t}) } {\overline{P}^\textnormal{Total}(\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t},\mathbf{p}^t)} \\ s.t.: &~C_{1}-C_{2},\dot{C}_{3},C_{4},\dot{C}_{5},\ddot{C}_{5},{C}_{7}-C_{11}. \nonumber \end{align} However, the optimization problem in (\ref{F5-20}) is still non-convex due to the non-convexity of $\ddot{C}_{5}$.
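The effect of the first-order Taylor surrogates in (\ref{F5-18}) and (\ref{F5-30}) can be sanity-checked numerically: for a concave term such as $\log_2(\sigma^2 + I)$, the tangent at the expansion point over-estimates the function everywhere, so subtracting the surrogate yields a global lower bound on the D.C. objective. A minimal Python sketch with hypothetical values:

```python
import math

# Sketch of the first-order Taylor (MM) surrogate used for the D.C. terms:
# V(q) = log2(sigma2 + q) is concave in the interference power q, so its
# tangent at q0 over-estimates V everywhere; subtracting the tangent hence
# lower-bounds U - V globally and is tight at q0. Values are hypothetical.
sigma2 = 0.1

def V(q):
    return math.log2(sigma2 + q)

def V_tilde(q, q0):
    slope = 1.0 / ((sigma2 + q0) * math.log(2))   # dV/dq evaluated at q0
    return V(q0) + slope * (q - q0)

q0 = 0.5
assert abs(V_tilde(q0, q0) - V(q0)) < 1e-12       # tight at expansion point
for q in (0.01, 0.2, 1.0, 3.0):
    assert V_tilde(q, q0) >= V(q)                  # global over-estimator
print("tangent dominates V and touches it at q0")
```

The tightness at the expansion point is what guarantees the monotone improvement of the MM iterations.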
Nevertheless, let us rewrite $\ddot{C}_{5}$ as the difference of two convex functions, $\mu(\mathbf{a})-\nu(\mathbf{a})$, where \begin{align} &\mu(\mathbf{a})=\sum_{j \in \mathcal{J}} \sum_{ n \in \mathcal{N}}\sum_{ k \in \mathcal{K}}( a_{j,n,k}),\\ &\nu(\mathbf{a})=\sum_{j \in \mathcal{J}} \sum_{ n \in \mathcal{N}}\sum_{ k \in \mathcal{K}}(a_{j,n,k})^2. \end{align} Similar to the approach used for the data-rate functions in (\ref{F5-17}) and (\ref{F5-16_new}), we employ the same MM-based methodology to convexify the constraint by taking the first-order Taylor approximation of $\nu(\mathbf{a})$. Consequently,~$\nu(\mathbf{a})$ can be approximated as \begin{align} \nu(\mathbf{a}) \simeq {\nu}({\mathbf{a}}^{(s-1)}) + \nabla_{{\mathbf{a}}}{\nu^T}({\mathbf{a}}^{(s-1)}) ({\mathbf{a}}-{\mathbf{a}}^{(s-1)}) \triangleq \tilde{\nu}(\mathbf{a}). \label{F5-34} \end{align} Hence, the constraint $\ddot{C}_{5}$ can be restated as $\mu(\mathbf{a})-\tilde{\nu}(\mathbf{a}) \leq 0$, which is a convex constraint.~Finally, the optimization problem at hand can be recast as \begin{subequations} \begin{align} (\mathbf{a}^{t},\mathbf{p}^{t}) = & \arg \max_{\mathbf{a},\mathbf{p},\tilde{\mathbf{p}}} \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t}) } {\overline{P}^\textnormal{Total}(\mathbf{a}^{t},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{t},\mathbf{p}^t)} \\s.t.: &~\ddot{C}_{5}:~\mu(\mathbf{a})-\tilde{\nu}(\mathbf{a}) \leq 0,\\ &~C_{1}-C_{2},\dot{C}_{3},C_{4},\dot{C}_{5},{C}_{7}-C_{11}. \end{align} \label{F5-26_new}% \end{subequations} So far, we have tackled the issues with the multiplication of two variables, the interference in the data-rate functions, and the binary SBS-subcarrier assignment variable. As the last step in our solution design, we must address the fractional form of the objective function, which makes the optimization problem non-convex.
Thus, we now address the fractional objective function of the optimization problem in (\ref{F5-26_new}) by describing a technique to treat fractional programming problems. The optimization problem in (\ref{F5-26_new}) can be solved via a well-known algorithm, namely, the Dinkelbach method \cite{Dinkelbach}. For this matter, let us denote by $\mathscr{E}_{eff}^{^{*}}$ the optimal EE of the optimization problem (\ref{F5-26_new}) over the set of feasible solutions spanned by its constraints. Therefore, we can solve the following non-fractional optimization problem given the data from the previous round of the main loop for the chosen set of antennas, i.e., $t-1$, and the $i^{th}$ iteration of the Dinkelbach algorithm. Thus, by considering \textbf{Algorithm~\ref{Dinkelbach-algorithm}}, an optimization problem with a transformed objective function is introduced to obtain the resource allocation policy as follows \begin{align}\label{F5-36_new} (\mathbf{a}^{i},\tilde{\mathbf{p}}^{i}) = & \arg \max_{\mathbf{a},\mathbf{p},\tilde{\mathbf{p}}} \widehat{\overline{R}}^\textnormal{Total} (\mathbf{a},\mathbf{x}^{{t-1}},\tilde{\mathbf{p}})- \mathscr{E}_{eff}^{^i} \overline{P}^\textnormal{Total} (\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}},\mathbf{p}) \\ s.t.: &~C_{1}-C_{2},\dot{C}_{3},C_{4},\dot{C}_{5},\ddot{C}_{5},{C}_{7}-C_{11}. \nonumber \end{align} In the above optimization problem, we have {$\mathscr{E}_{eff}^{^i} = \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^i,\mathbf{x}^{{t-1}},\tilde{\mathbf{p}}^i)}{\overline{P}^\textnormal{Total}(\mathbf{a}^i,\mathbf{x}^{{t-1}},\tilde{\mathbf{p}}^i,{\mathbf{p}}^i)}$} in which {$\{\mathbf{a}^i,\mathbf{x}^{{t-1}},\tilde{\mathbf{p}}^i,{\mathbf{p}}^i\}$} are the corresponding resource allocation parameters. It is easy to show that the optimization problem in (\ref{F5-36_new}) is now convex with respect to all variables.
Therefore, \textbf{Algorithm \ref{Dinkelbach-algorithm}} terminates when $\mathscr{E}_{eff}^{^i}$ converges, and so does the solution to problem (\ref{F5-36_new}); that is, \{$\mathbf{a}^{*},\tilde{\mathbf{p}}^{*}$,$\mathbf{p}^*$\} is eventually obtained. \begin{proposition}\label{fractional_programming} The optimal EE, i.e., $\mathscr{E}_{eff}^{^*}$, can be used to obtain the resource allocation policy if and only if \begin{align} \max_{\mathbf{a},\tilde{\mathbf{p}},\mathbf{p} \in \mathcal{S_F}}\quad \widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{i}, \mathbf{x}^{t-1}, \tilde{\mathbf{p}}^{i})-& \mathscr{E}_{eff}^{^i}\overline{P}^\textnormal{Total}(\mathbf{a}^{i}, \mathbf{x}^{t-1}, \tilde{\mathbf{p}}^i,{\mathbf{p}}^i) = \nonumber \\ \widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*})-& \mathscr{E}_{eff}^{^*}\overline{P}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^*,{\mathbf{p}}^*) =0, \end{align} for $\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*}) \geq 0$ and $\overline{P}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^*,{\mathbf{p}}^*) \geq 0$, where $\mathbf{a^*}$,~$\mathbf{x}^{t-1}$, $\tilde{\mathbf{p}}^*$,~and $\mathbf{{p}^*}$ yield the optimal solution to the convex optimization problem in (\ref{F5-36_new}).
\end{proposition} \begin{proof} We define $\mathscr{E}_{eff}^{^*}$ and $\{\mathbf{a}^{*},\mathbf{x}^{t-1},\mathbf{p}^{*},\tilde{\mathbf{p}}^{*}\}\in \mathcal{S_F}$ as the optimal EE and the optimal resource allocation policy of the original objective function in (\ref{chap5:5:main}), respectively, which can be obtained as \begin{align} \label{F5-14} \mathscr{E}_{eff}^{^*} & = \max_{\mathbf{a},\mathbf{{p},\tilde{\mathbf{p}}}\in \mathcal{S_F}} { \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a},\mathbf{{x}}^{t-1},\tilde{\mathbf{p}})} {\overline{P}^\textnormal{Total}(\mathbf{a},\mathbf{{x}}^{t-1},\tilde{\mathbf{p}},\mathbf{p})} }. \end{align} Then, the optimal EE would be \begin{align} \mathscr{E}_{eff}^{^*} = & \frac{\widehat{\overline{R}}^\textnormal{Total} (\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*})} {\overline{P}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*},\mathbf{p}^{*})} \nonumber \\ \geq & \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}})} {\overline{P}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}},\mathbf{p})}, \quad \forall \{\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}},\mathbf{p}\}\in \mathcal{S_F}. \end{align} Therefore, it can easily be seen that \begin{align} & \widehat{\overline{R}}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}})- \mathscr{E}_{eff}^{^*}\overline{P}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}},\mathbf{p}) \leq 0. \end{align} Hence, one can conclude that \begin{align} \widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*})- \mathscr{E}_{eff}^{^*}\overline{P}^\textnormal{Total}(\mathbf{a}^{*},\mathbf{x}^{t-1},\tilde{\mathbf{p}}^{*},\mathbf{p}^{*})=0.
\end{align} Thus, we have \begin{align} \max_{\mathbf{a},\tilde{\mathbf{p}},\mathbf{{p}} \in \mathcal{S_F}} \quad {\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}}) -\mathscr{E}_{eff}^{^*}\overline{P}^\textnormal{Total}(\mathbf{a},\mathbf{x}^{t-1},\tilde{\mathbf{p}},\mathbf{p})}=0, \end{align} which is achieved by the optimal resource allocation policy. This completes the proof. \end{proof} Since the problem in (\ref{F5-36_new}) is a convex optimization problem at each iteration, it can be solved efficiently using standard optimization packages such as CVX \cite{boyd2004convex}, which rely on interior-point methods. Moreover, it is worth mentioning that the proposed \textbf{Algorithm \ref{Dinkelbach-algorithm}} obtains an optimal solution to the problem (\ref{chap5:5:main}) if the inner loop of the optimization problem in (\ref{F5-36_new}) is solved optimally in each iteration. To solve the inner problem of (\ref{F5-36_new}), we have to verify the convergence of the approximations of the data-rate functions. Let us denote the data-rate approximation at the $s^{th}$ iteration of the MM method by $\widehat{\overline{R}}^{(s)}$. In order to make sure that the Taylor approximation is a tight lower bound, we investigate the difference of the data-rate functions over two consecutive iterations of the MM method. The proof of convergence of the MM procedure in the inner loop of \textbf{Algorithm~\ref{Dinkelbach-algorithm}} is similar to the proof given in {\text{\textbf{Proposition~\ref{proposition3}}}} of chapter~\ref{CHAP4}. We also note that the optimal solution of the Dinkelbach algorithm is taken as the solution of the $(t-1)^{th}$ iteration of the main loop for obtaining the antenna selection set for either an ID or EH operation. In the next subsection, we discuss the solution methodology for solving the antenna selection problem.
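The outer loop of \textbf{Algorithm~\ref{Dinkelbach-algorithm}} can be illustrated on a one-dimensional analogue. The Python sketch below uses hypothetical stand-ins for $\widehat{\overline{R}}^\textnormal{Total}$ and $\overline{P}^\textnormal{Total}$ (all constants assumed) and iterates the Dinkelbach update $\mathscr{E}_{eff}^{^i} \leftarrow R(p^{i})/P(p^{i})$:

```python
import math

# One-dimensional sketch of the Dinkelbach loop (all models and constants
# are hypothetical stand-ins): maximize EE(p) = R(p)/P(p) over p in
# [0, p_max] with R(p) = log2(1 + c*p) and P(p) = p/kappa + P_c.
c, kappa, P_c, p_max = 2.0, 0.5, 0.3, 4.0

def R(p):
    return math.log2(1 + c * p)

def P(p):
    return p / kappa + P_c

def inner_argmax(lam):
    # closed-form maximizer of the parametric problem R(p) - lam*P(p),
    # obtained by setting its derivative to zero and clipping to [0, p_max]
    p = kappa / (lam * math.log(2)) - 1.0 / c
    return min(max(p, 0.0), p_max)

lam = 0.0
for _ in range(100):
    p_star = inner_argmax(lam) if lam > 0 else p_max
    lam_new = R(p_star) / P(p_star)     # Dinkelbach update of the EE guess
    if abs(lam_new - lam) < 1e-10:
        break
    lam = lam_new
print(round(lam, 4), round(p_star, 4))
```

At convergence the parametric objective vanishes at its maximizer, i.e., $R(p^{*})-\mathscr{E}_{eff}^{^*}P(p^{*})=0$, mirroring Proposition~\ref{fractional_programming}.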
\begin{algorithm}[ht] \caption{Resource Allocation Algorithm for Solving Joint Subcarrier and Power Assignment} \label{Dinkelbach-algorithm} \centering \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\ {\begin{addmargin}[1em]{0em} {iteration index of resource allocation policy $i=0$ with maximum allowed tolerance $\Theta>0$, \\ MM iteration index $s=0$ with maximum number of MM iteration $T_{max}$ and $\mathbf{\Psi}>0$, \\ feasible set vector $\mathbf{a}^{0}$, $\mathbf{x}^{t-1}$, $\tilde{\mathbf{p}}^{{0}}$, and $\mathbf{p}^{0}$},\\ and the penalty factor $\lambda \gg 1$. \end{addmargin}} \STATE Set maximum EE for resource allocation policy $\mathscr{E}_{eff}^{^0}=0$. \STATE \textbf{while} {$|\mathscr{E}_{eff}^{^i}- \mathscr{E}_{eff}^{^{i-1}}| > \Theta$} \textbf{do} \STATE{ \begin{addmargin}[1em]{0em} {\textbf{repeat}} \end{addmargin}} \STATE{ \begin{addmargin}[2em]{0em} {Update $\tilde{\mathcal{V}}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})$, $\tilde{\mathscr{V}}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})$, and $\tilde{\nu}(\mathbf{a})$ using equations (\ref{F5-18}), (\ref{F5-30}), and (\ref{F5-34}), respectively.} \end{addmargin}} \STATE{ \begin{addmargin}[2em]{0em} {Set $s=s+1$}. \end{addmargin}} \STATE{ \begin{addmargin}[2em]{0em} {Solve $\mathscr{U}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})- \tilde{\mathscr{V}}(\mathbf{a},\mathbf{x},{\tilde{\mathbf{p}}})$ to obtain the data-rate as well as $\mathbf{a}^{s}$~and $\tilde{\mathbf{p}}^{{s}}$}. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} \textbf{until} $|\widehat{\overline{R}}^{(s)}-\widehat{\overline{R}}^{(s-1)}| \leq \mathbf{\Psi}$ or $s =T_{max}$ \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set \{$\mathbf{a}^i,\tilde{\mathbf{p}}^i$,$\mathbf{p}^i\} =$ $\{\mathbf{a}^{{s}^*},\tilde{\mathbf{p}}^{{s}^*},\mathbf{p}^{{s}^*}\}$. \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set $i=i+1$. 
\end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} Set {$\mathscr{E}_{eff}^{^i} = \frac{\widehat{\overline{R}}^\textnormal{Total}(\mathbf{a}^i,\mathbf{x}^{{t-1}},\tilde{\mathbf{p}}^i)}{\overline{P}^\textnormal{Total}(\mathbf{a}^i,\mathbf{x}^{{t-1}},\tilde{\mathbf{p}}^i,{\mathbf{p}}^i)}$}. \end{addmargin}} \STATE \textbf{end while} \STATE Set \{$\mathbf{a}^{*},\tilde{\mathbf{p}}^{*},\mathbf{p}^*\} =$ $\{\mathbf{a}^{i-1},\tilde{\mathbf{p}}^{i-1},\mathbf{p}^{i-1}\}$. \STATE \textbf{return} \{$\mathbf{a}^{*},\tilde{\mathbf{p}}^{*}$,$\mathbf{p}^*$\} \end{algorithmic} \end{algorithm} \subsection{B. Antenna selection}\label{Chap5:layer2} Since the system throughput is itself a function of the transmit power, the total energy consumption generally has a much more substantial effect on the EE than the system throughput does. Here, we aim to find the optimal antenna selection for each user among all available antennas, so that ID and EH are performed appropriately. Assuming the SBS-subcarrier assignment and the power allocation ($\mathbf{a}^{t-1}$, $\mathbf{p}^{t-1}$) are obtained from a previous iteration in the first layer, we formulate the following optimization problem to find the optimal antenna selection set for a generalized AS-based SWIPT in the DL direction of a multi-user multi-cell OFDMA network \begin{subequations} \begin{align}\label{max_prob.6} \mathbf{x}^t = & \arg \max_{\mathbf{x}} \frac{R^\textnormal{Total}(\mathbf{a}^{t-1},\mathbf{x}^t,\mathbf{p}^{t-1})} {P^\textnormal{Total}(\mathbf{a}^{t-1},\mathbf{x}^t,\mathbf{p}^{t-1})} \\ \propto & \arg \min_{\textbf{x}}P^{\textrm{Total}}(\textbf{a}^{t-1},\textbf{x}^{t},\textbf{p}^{t-1}) \\ = & \arg \min_{\textbf{x}}\bigg( \sum_{j \in \mathcal{J}} \sum_{k\in \mathcal{K}} \sum_{n\in \mathcal{N}} \big(\frac{{a}_{j,n,k}{p}_{j,n,k}}{\kappa_j}+ P_c^\mathrm{SBS}\big)\bigg)-P^{\textrm{EH}}(\textbf{x}^{t},\textbf{p}^{t-1})\\ s.t.: &~C_{3}: \sum_{j\in \mathcal{J}}
R_k(\mathbf{a},\mathbf{x},\mathbf{p}) \geq R_{min},~ \forall k \in \mathcal{K},\\ &~C_{4}: \sum_{m\in \mathcal{M}} x^{m}_{j,n,k}=1,~~~~~~~~~~~~ \forall j\in \mathcal{J},~ \forall n \in \mathcal{N},~ \forall k \in \mathcal{K},\\ &~C_{6}: x^{m}_{j,n,k}\in\{0,1\},~~~~~~~~~~~~~ \forall {j \in \mathcal{J}},~ \forall {n\in \mathcal{N}},~ \forall {k\in \mathcal{K}},~ \forall {m\in \mathcal{M}}. \end{align} \label{chap5:F5_35-new}% \end{subequations} It can be noticed that the first term in (\ref{chap5:F5_35-new}) is constant. Hence, the objective function in (\ref{chap5:F5_35-new}) can be restated as follows \begin{align}\label{Chap5:F5-35} \mathbf{x}^t= & \arg \max_{\textbf{x}}P^{\textrm{EH}} (\textbf{x}^{t},\textbf{p}^{t-1})\\ s.t.: &~C_{3}-C_{4},C_{6}. \nonumber \end{align} It is worth mentioning that when the power allocation and the corresponding SBS-subcarrier assignment are fixed, the problem can be decomposed into $M$ subproblems, where each subproblem in (\ref{Chap5:F5-35}) can be solved using well-established optimization packages, including CVX. One may object that solving (\ref{Chap5:F5-35}) requires the knowledge of all branch SNRs, which is difficult to obtain simultaneously. However, various techniques based on the quasi-stationary property of the channel gains address this issue. For instance, one may use a training signal in a preamble: while the receiver scans the antennas during this preamble, the antenna with the highest channel gain is selected for receiving the next data burst or power signal. Finally, the pseudo-code of the iterative solution for the antenna selection with a joint SBS-subcarrier assignment and power control is given in \textbf{Algorithm~\ref{last-alg}}. \begin{algorithm}[t] \caption{Proposed Iterative Method} \label{last-alg} \begin{algorithmic}[1] \STATE {$\mathbf{Initialize}$} \\ {\begin{addmargin}[1em]{0em} {iteration index $t = 1$ with the maximum number of iterations $\Delta_{max}$.
} \end{addmargin}} \STATE{\textbf{repeat} \{Main Loop\} \\ \textbf{\quad Joint SBS-Subcarrier Assignment and Power Control:}} \STATE{ \begin{addmargin}[1em]{0em} For a given antenna set $\textbf{x}^{t-1}$, find the optimal subcarrier assignment $\textbf{a}^{t}$ and power allocation based on (\ref{F5-36_new}) using \textbf{Algorithm \ref{Dinkelbach-algorithm}}. \\ \textbf{Antenna Selection Policy:} \end{addmargin}} \STATE{ \begin{addmargin}[1em]{0em} For a fixed subcarrier assignment $\textbf{a}^{t-1}$ and power allocation $\textbf{p}^{t-1}$,~determine the best antenna based on~(\ref{Chap5:F5-35}). \end{addmargin}} \STATE {\quad Set $t = t + 1$.} \STATE \textbf{until} Convergence with $\mathscr{E}_{eff}^{^*}$ or $t = \Delta_{max}$ \STATE \textbf{return} \{$\mathbf{a}^{*},{\mathbf{x}}^{*}$,$\mathbf{p}^*$\} \end{algorithmic} \end{algorithm} \section{Complexity Analysis} Our proposed iterative algorithm consists of two subproblems: 1) joint SBS-subcarrier assignment and power allocation and 2) antenna selection. The solution of the first subproblem is a two-layer approach: the outer layer provides a solution according to the Dinkelbach algorithm, whereas the inner layer's solution is based on the MM algorithm. The inner optimization problem (\ref{F5-36_new}) includes $JNK$ variables and $J+K+JN+6JNK+J^{2}N^{2}K$ linear convex constraints. As a result, the computational complexity of the first subproblem is $\mathcal{O}\big((JNK)^{2}(J+K+JN+6JNK+J^{2}N^{2}K)\big)$. This can be asymptotically estimated as $\mathcal{O}\big((JN)^{4}K^{3}\big)$, showing a polynomial-time complexity. Moreover, the complexity of the outer layer is $\mathcal{O}(T_{\text{Dinkelbach}})$, where $T_{\text{Dinkelbach}}$ is the number of iterations needed for the convergence of the outer layer.
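The variable and constraint counts above can be restated as a small bookkeeping sketch. The following Python fragment is purely illustrative (the function names are our own): it restates the per-iteration cost $\mathcal{O}\big((JNK)^{2}(J+K+JN+6JNK+J^{2}N^{2}K)\big)$ and checks numerically that the $J^{2}N^{2}K$ constraint term dominates, so the cost behaves like $(JN)^{4}K^{3}$ as $J$, $N$, and $K$ grow.

```python
# Illustrative bookkeeping for the complexity analysis in this section; the
# formulas restate the text, and the function names are our own, not the
# thesis notation.

def inner_problem_cost(J, N, K):
    """Per-iteration cost of the inner problem: (#variables)^2 * #constraints."""
    variables = J * N * K
    constraints = J + K + J * N + 6 * J * N * K + J**2 * N**2 * K
    return variables**2 * constraints

def dominant_term(J, N, K):
    """Asymptotic estimate O((JN)^4 K^3) from the dominant constraint term."""
    return (J * N) ** 4 * K**3
```

For growing problem sizes, the ratio `inner_problem_cost / dominant_term` tends to one, which is what the asymptotic estimate in the text expresses.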
Consequently, the overall complexity of our proposed scheme becomes $\mathcal{O}\big(T_{\text{Dinkelbach}}T_{\text{MM}}(JN)^{4}K^{3}\big)$, where $T_{\text{MM}}$ is the number of iterations required for reaching convergence in the MM method. We now calculate $T_{\text{MM}}$ for the D.C. programming approach, which relies on the interior-point method. It should be noted that when CVX is adopted, based on the interior-point method, to solve the optimization problem (\ref{F5-36_new}), it requires $\log \frac{JN+J+K+2JNK+JNK+J^{2}N^{2}K}{t^{0}\phi \xi}$ iterations, where $t^{0}$ is the initial point for approximating the accuracy of the interior-point method, $0<\phi\ll 1$ is the stopping criterion, and $\xi$ denotes the accuracy of the method \cite{Ata_2020}. For the antenna selection subproblem in (\ref{Chap5:F5-35}), the computational complexity is $\mathcal{O}(JNKM)$, which is substantially lower than that of the algorithm for the joint power allocation and subcarrier assignment subproblem. Note that this subproblem can be solved via the MOSEK or Gurobi solvers, which provide polynomial-time complexity. Besides, it is worth noting that this algorithm reduces the number of variables by half in each optimization subproblem and turns the original problem (\ref{chap5:5:main}) into a mathematically tractable form. Furthermore, the number of iterations for this subproblem is $T_{AS}=\log \frac{K+NK+JNKM}{t^{0}\phi \xi}$. Finally, the total complexity order of the optimization problem, including the two-layer approach, is $\mathcal{O}\big(T_{\text{Dinkelbach}}T_{\text{MM}}T_{\text{AS}}(JN)^{4}K^{3}\big)$, which denotes the total number of iterations required for the convergence of the EE optimization problem. \section{Simulation Results} In this section, the performance gain of the proposed scheme for a generalized AS-based SWIPT in the DL direction of a multi-user multi-cell OFDMA system is evaluated under various system parameters.
There are $J=3$ cells in the network topology. The radius of a cell, $d_{max}$, is 20 meters, with a reference distance, $d_{0}$, of 5 meters. Moreover, there are $K_j = 4$ users in each cell, uniformly located between the reference distance, $d_{0}$, and the maximum coverage of the cell, $d_{max}$. Also, each user is equipped with two antennas ($M=2$), where the receiver antennas are capable of both ID and EH operations. Additionally, we consider a frequency-selective fading channel with a central carrier frequency of 3 GHz. The number of subcarriers is $N = 16$, where the bandwidth of each subcarrier is set to 180 kHz. Since the background noise power on all antennas of each receiver is rather small compared to the maximum transmit power, $p_{max}$, we assume $(\sigma_{j,n,k}^{m})^{2} = \sigma^{2} = -120$ dBm in all simulations. Since a line-of-sight (LoS) component is expected in the received signal, the small-scale fading channel is modeled as Rician fading with Rician factor $\rho=3$ dB. Moreover, the Rician flat fading channel gains include a distance-dependent path loss component of $31.7+10 \alpha \log(\frac{d}{d_0})$ [dB] (where $d$ is the distance between the transmitter and the receiver) and a log-normal shadowing component with~$8$ dB standard deviation, where the path loss exponent is equal to $\alpha = 2.76$~\cite{339880}. These parameters for propagation modeling and simulations follow the suggestions in the 3GPP evaluation methodology~\cite{chap3:3GPP}. The power conversion efficiency of all active EH antennas, $\epsilon^m_{j,k}$, is assumed to be the same and is equal to $\epsilon^m_{j,k} = \epsilon = 0.3$. For the power consumption model, a constant circuit power consumption, $P_{\textrm{c}}^{\textrm{SBS}}$, is considered for all SBSs and is equal to~23 dBm. The power amplifier efficiency of all SBSs is also assumed to be the same and is $\kappa_j = \kappa = 0.2$.
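For concreteness, one realization of such a channel power gain can be generated as in the following Python sketch, assuming a base-10 logarithm in the path-loss model and a complex Gaussian Rician small-scale component; the helper names are illustrative and not from the thesis.

```python
import math
import random

# Hedged sketch of one channel-gain realization under the simulation
# parameters above: path loss 31.7 + 10*alpha*log10(d/d0) dB, 8 dB log-normal
# shadowing, and Rician small-scale fading with a 3 dB K-factor.  The function
# name and structure are illustrative assumptions, not the thesis code.

def channel_gain(d, d0=5.0, alpha=2.76, shadow_db=8.0, rician_k_db=3.0):
    """Return a linear-scale channel power gain at distance d (meters)."""
    path_loss_db = 31.7 + 10.0 * alpha * math.log10(d / d0)
    shadowing_db = random.gauss(0.0, shadow_db)        # log-normal shadowing
    large_scale = 10.0 ** (-(path_loss_db + shadowing_db) / 10.0)

    # Rician small-scale fading: deterministic LoS part plus scattered part,
    # normalized so that E[|h|^2] = 1.
    k = 10.0 ** (rician_k_db / 10.0)                   # K-factor, linear scale
    los = math.sqrt(k / (k + 1.0))
    scatter_std = math.sqrt(1.0 / (2.0 * (k + 1.0)))
    re = los + scatter_std * random.gauss(0.0, 1.0)
    im = scatter_std * random.gauss(0.0, 1.0)
    small_scale = re * re + im * im                    # fading power |h|^2

    return large_scale * small_scale
```

Averaging many such realizations (with shadowing disabled) recovers the pure path-loss value, since the small-scale component has unit mean power, mirroring the Monte Carlo averaging described below.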
The target transmission rate is $R_{min}=1$ bit/s/Hz (bps/Hz) for each user. Furthermore, we conduct Monte Carlo simulations by generating random realizations of the channel gains to obtain the average data-rate of the network. In fact, the channel gain between a transmitter and a receiver is calculated using independent and identically distributed Rician flat fading, and the figures shown in this section are obtained by averaging the results over different realizations of the path-loss and the multi-path fading. The rest of the simulation parameters are given in \textbf{Table}~(\ref{chap:5:Simulation_Parameters}) unless otherwise specified. \begin{table}[t] {\caption{Simulation Parameters} \label{chap:5:Simulation_Parameters} \centering \begin{tabular}{|c|c|}\hline {\bf Parameter} & {\bf Value} \\ \hline \hline {Cell coverage radius ($d_{max}$)} & {$20$ m} \\ {Reference distance ($d_{0}$)} & {$5$ m} \\ {The number of cells ($J$)} & {$3$}\\ {The number of users in each cell ($K_{j}$)} & {$4$}\\ {The number of antennas of each user ($M$)} & {$2$}\\ {The number of subcarriers ($N$)} & {$16$} \\ {Noise power ($\sigma^{2}$)} & {$-120$ dBm} \\ {The bandwidth of each subcarrier} & {$180$ kHz}\\ {Path loss exponent ($\alpha$)} & {$2.76$} \\ {Path loss model for cellular links} & { $31.7+27.6 \log(\frac{d}{d_0})$ [dB]} \\ {Multi-path fading distribution}& {Rician fading with factor 3~dB}\\ {Power conversion efficiency of EH antennas ($\epsilon$)}& {30\%}\\ {Power amplifier efficiency of SBSs ($\kappa$)}& {20\%}\\ The maximum transmit power of the SBS ($p_{\textrm{max}}$) & {$30$ dBm} \\ The circuit power consumption of SBSs ($P_{\textrm{c}}^{\textrm{SBS}}$) & {$23$ dBm} \\ The minimum data-rate requirement for each user~($R_{\textrm{min}}$) & $1$ bps/Hz \\ Number of channel realizations & $100$\\\hline \end{tabular}} \end{table} \subsection{Convergence Speed} Figure (\ref{plot:5.1}) depicts the average system EE versus the number of iterations of the proposed algorithm under
different initializations of the power control: $\mathbf{p}^0(i) = \frac{p_{max}}{N}$, i.e., equal power is assigned to all small-cell users over all subcarriers; $\mathbf{p}^0(i) = p_{max}$, i.e., all subcarriers have the maximum power; and $\mathbf{p}^0(i) = 0$, i.e., no power is assigned to the subcarriers initially. As can be seen, our proposed iterative algorithm runs until it converges to a fixed value. Moreover, although the speed of convergence differs from one case to another, in all cases the proposed algorithm converges to a stationary point after only a small number of iterations. \begin{figure}[t] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_1.pdf} \caption{Convergence speed.} \label{plot:5.1} \end{figure} \subsection{Energy Efficiency versus Maximum Allowed Transmit Power} In figure (\ref{plot:5.2}), we present the average EE versus the maximum transmit power $p_{max}$. As can be observed from this figure, the average system EE for the resource allocation schemes is monotonically non-decreasing with the maximum allowed transmit power. This is because the received SINR at the users can be enhanced by allocating the additional available transmit power via the solution of the problem, which leads to an improvement of the system EE. However, there is a diminishing return in the average system EE when $p_{max}$ is higher than $30$ dBm. As a matter of fact, with an increase in maximum transmit power, the interference power level arising from the other SBSs becomes more severe, degrading the received users' signals. Consequently, the throughput of the users deteriorates, which causes the EE to saturate.
In particular, by increasing the value of $p_{max}$, the system EE quickly increases at first and then starts to saturate when $p_{max}$ is higher than $30$ dBm. The reason is that the resource allocator does not consume more power once the maximum EE is attained. This figure also includes four baseline schemes for EE maximization, i.e., Methods B-E, and compares their performance with the proposed iterative \textbf{Algorithm~\ref{last-alg}}, which we call Method A. Method B solves the EE maximization problem with a power splitting receiver architecture. In particular, Method B considers receivers with two antennas ($M=2$), where each antenna is capable of both EH and ID operations at the same time through a power splitting method with fixed power splitting ratios. Moreover, Method~C is the method proposed in~\cite{TWC_Ata}, which considers EE maximization with respect to the subcarrier assignment, power allocation, and antenna selection for users without energy harvesting capability. Furthermore, Method D is our proposed algorithm with only the power allocation optimized, while the subcarrier assignment and antenna selection variables are scheduled randomly to obtain the resource allocation policy. Finally, Method E is the full-power approach, in which equal power is allocated across subcarriers for each user. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_2.pdf} \caption{Energy efficiency versus maximum allowed transmit power.} \label{plot:5.2} \end{figure} It can be concluded that our proposed iterative algorithm outperforms the other methods because we optimize the resource allocation jointly and use a generalized AS-based harvesting technique at the receiver based on the antenna selection architecture. We also observe that power control has a significant impact on system EE.
As can be seen, in the low transmit power regime the received power of the desired signal at the receivers may not be sufficiently large for simultaneous information decoding and energy harvesting. We also note that for higher values of the maximum allowed transmit power, the maximum EE achieved by the system via active EH antennas or EH users cannot be attained by a system without energy harvesting capabilities through increasing the transmit power. This confirms that energy harvesting contributes to the average system EE. \begin{figure}[!b] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_3.pdf} \caption{Energy efficiency versus minimum data-rate requirement.} \label{plot:5.3} \end{figure} \subsection{Energy Efficiency versus Minimum Data-rate Requirement} In this section, we show the maximum average EE under different data-rate requirements for different values of circuit power consumption, $P_{\textrm{c}}^{\textrm{SBS}}$, in figure (\ref{plot:5.3}). It can be seen that, as the minimum data-rate requirement increases, the system EE stays almost the same up to a specific value but starts to decline afterward. This is because only a low transmit power is required to satisfy the QoS provisioning when the minimum data-rate requirement is low. However, as the data-rate requirement increases, more subcarriers are needed and proportionally more ID antennas have to be activated for users with poorer channel quality to meet the QoS requirement. In addition, the energy-efficient design must operate at a higher transmit power to achieve the optimal system EE.
Moreover, we can observe that our proposed iterative \textbf{Algorithm~\ref{last-alg}} outperforms the baseline algorithm in \cite{jalal} because it performs joint resource allocation and obtains a locally optimal solution. Figure~(\ref{plot:5.3}) also compares the effect of static circuit power consumption on the system EE. From there, we can see that the EE decreases with increased circuit power due to the higher total power consumption in the network. \subsection{Energy Efficiency versus Distance} We investigate the average EE versus different reference distances. As can be observed from figure~(\ref{plot:5.5}), the system EE decreases with increasing distance. As the channel strength deteriorates with increasing distance between the transmitter and the receiver due to the path-loss, more ID antennas have to be activated, and more power must be allocated to users to meet the minimum required data-rate. Hence, the average system EE declines because the total power consumption increases while the total data-rate of the network decreases. However, the system EE achieved by our proposed iterative \textbf{Algorithm~\ref{last-alg}} is still superior to that of the other algorithms. It should be noted that Methods A-D are the same as defined earlier in figure (\ref{plot:5.2}). \begin{figure}[t] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_5.pdf} \caption{Energy efficiency versus distance.} \label{plot:5.5} \end{figure} \subsection{Average System Throughput versus Maximum Allowed Transmit Power} In figure (\ref{plot:5.6}), we plot the average throughput or data-rate of the network versus the maximum allowed transmit power, $p_{max}$, for different schemes.
For $p_{max}\leq 30~\text{dBm}$, it can be perceived that the average system throughput of the proposed iterative \textbf{Algorithm~\ref{last-alg}}, i.e., Method A in the figure, rises with the maximum transmit power allowance. However, the slope of the system throughput curve declines and reaches saturation in the high transmit power regime, i.e.,~$p_{max}\geq 30~\text{dBm}$. In other words, there is a diminishing return in the average system throughput when the maximum transmit power is higher than 30 dBm. In fact, as the maximum transmit power increases, the interference power level arising from the other SBSs becomes more severe, which degrades the received users' signal quality. For comparison, we also plot the curve based on the alternating search method (ASM), i.e., Method B, in which the original problem is divided into three disjoint subproblems. In this method, the SBS-subcarrier assignment, power allocation, and antenna selection are selected based on the values determined in the previous round \cite{jalal}. Method C is our proposed algorithm with only the power allocation optimized, while the subcarrier allocation and antenna selection variables are scheduled randomly to obtain the resource allocation policy. Method D is the full power allocation approach with equal power across subcarriers for each user, in which the SBS-subcarrier assignment and antenna selection variables are chosen based on our proposed algorithm. It can be perceived that power allocation can significantly increase the performance gain because it controls the interference term arising from the other SBSs.
\begin{figure}[!t] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_6.pdf} \caption{Average system throughput versus maximum allowed transmit power.} \label{plot:5.6} \end{figure} \subsection{Average Harvested Power versus Maximum Allowed Transmit Power} Figure (\ref{plot:5.7}) illustrates the average harvested power versus the maximum allowed transmit power, $p_{max}$. As $p_{max}$ increases, the harvested energy also increases in all considered Methods~A-D. However, it is noticeable that for large values of $p_{max}$, the average harvested power saturates. The reason for this trend is that the transmitter stops increasing the transmit power once the system EE is maximized. In order to evaluate our performance gain, we also compare the results of the proposed iterative \textbf{Algorithm~\ref{last-alg}}, i.e., Method A, with three baseline Methods B-D. In Method B, we consider the algorithm proposed in \cite{Wireless_Information}. Method C splits the received signal into two power streams for a finite discrete set of power splitting ratios. Method A performs better than Methods B and C because it can use a different antenna on each subcarrier, which provides more degrees of freedom in the network. Finally, Method D is our proposed algorithm with a different objective: specifically, it maximizes the system throughput subject to the same constraints and additionally considers a minimum energy harvesting requirement for each user. It can be observed that our proposed algorithm achieves a higher performance gain by employing the antenna selection strategy, which exploits the multiplexing gain and increases flexibility in the resource allocation design.
\begin{figure}[t] \centering \includegraphics[width=12cm]{figures/chap5/Fig_5_7.pdf} \caption{Average harvested power versus maximum allowed transmit power.} \label{plot:5.7} \end{figure} \section{Summary} In this chapter, we addressed the EE optimization problem for the DL of a multi-user multi-cell OFDMA network with generalized AS-based co-located receivers using SWIPT. Considering a practical linear power model in which the transmit power consumption, the circuit energy consumption, and the energy harvested by active receiver EH antennas are taken into account, our goal was to maximize the EE whilst satisfying the minimum data-rate requirement for each user. The EE optimization problem, which involves a joint optimization of the SBS-subcarrier assignment and power allocation along with an optimal antenna selection, was non-convex and non-linear, which made it extremely difficult to tackle directly. Hence, to obtain a feasible solution, we employed the MM approach by constructing a sequence of surrogate functions to approximate the non-convex problem. In particular, based on the Dinkelbach method, an optimization problem with a transformed objective function was designed that uses the MM method in its inner loop. Simulation results revealed the superiority of our proposed method over existing works. Furthermore, the proposed antenna selection scheme demonstrated that our algorithm provides a good balance between throughput and EE. \chapter{Conclusion and Future Work} \label{CHAP6} \vspace{15mm} In this chapter, we summarize our overall conclusions.~We also propose some of the future research directions that emerge from this work. \section{Conclusion} In chapter \ref{CHAP3}, we introduced a new approach to harvesting ambient energy.
In this approach, a designated portion of the spectrum was used for information decoding (ID) and the rest for energy harvesting (EH), with two separate filters used at the receivers. We used neither splitters nor switches, which significantly reduced the complexity of the receiver. Furthermore, we formulated an optimization problem to maximize the harvested energy via a joint subcarrier assignment and power allocation using the simultaneous wireless information and power transfer (SWIPT) scheme for the downlink (DL) of a multi-user single small-cell orthogonal frequency division multiple access (OFDMA) network, fulfilling each user's minimum data-rate requirement. Our extensive simulation results indicated that our proposed resource allocation policy outperformed other algorithms in the literature. In chapter \ref{CHAP4}, we extended the SWIPT-enabled single-cell OFDMA system model proposed in the previous chapter to a multi-cell network based on a separated receiver architecture, with the goal of maximizing system throughput while respecting the maximum transmit power allowed, the minimum energy harvested by EH receivers, and the minimum data-rate required by ID receivers. The resulting problem, which jointly optimizes subcarrier assignment and power allocation, was a mixed-integer non-linear program (MINLP). This is intractable because of the multiplication of two variables, the binary subcarrier assignment variable, and the intertwined interference term in the data-rate function. We applied the majorization minimization (MM) approach to manage the resource allocation policy in this complex problem. We also analyzed the design of a low-complexity algorithm with an upper bound for the interference term by imposing a limiting interference threshold on each subcarrier that can be controlled by the resource allocator to improve system performance.
Simulation results showed the excellent performance gain of our suggested algorithms. Finally, in chapter \ref{CHAP5}, we described a novel harvesting technique at the receiver that is based on receiver antenna selection for a multi-user multi-cell SWIPT OFDMA system with a co-located architecture. This we named a “generalized antenna switching technique”. We then maximized energy efficiency (EE) as a performance metric in a two-layered approach to determine a resource allocation policy that optimizes subcarrier assignment, power allocation, and antenna selection subject to several constraints. The underlying problem in this chapter was neither linear nor convex due to the fractional form of the objective function, the multiplication of variables, the interference term, and the integer variables. We relaxed the integer variables and applied the big-M formulation to make sure that the relaxed variables take binary values. After that, we used the MM approach based on difference of two convex functions (D.C.) programming to find a feasible solution for the inner problem, employing a first-order Taylor approximation to convexify the non-convex functions. Next, we applied the Dinkelbach algorithm to transform the objective function into a non-fractional function. We observed that generalized antenna switching, also known as the “antenna selection strategy”, increases the EE of the system by providing more degrees of freedom as a result of assigning resources via different antennas. We concluded the chapter with simulation results that demonstrate the superiority of our proposed method. \vspace{-3mm} \section{Future Work} \vspace{-3mm} The field of SWIPT in energy-constrained communication systems is a remarkably rich research discipline with great potential. The following fundamental research directions could be pursued in future work.
\textbf{NOMA-based SWIPT:} Non-orthogonal multiple access (NOMA) has been suggested as one of the fundamental techniques for beyond fifth generation (5G) and the forthcoming sixth generation (6G) networks, in order to enhance spectral efficiency (SE) while permitting some degree of multiple access interference at receivers by having users share the same spectrum. NOMA schemes are designed to concurrently serve two or more users at the same base station (BS) or access point (AP) in a single orthogonal resource block. They can be categorized into two main classes: single-carrier NOMA and multi-carrier NOMA. The primary principle of single-carrier NOMA is allocating the same time/frequency resources to multiple users by utilizing distinct power levels (“power-domain” NOMA). In the multi-carrier NOMA scheme, multiple users are multiplexed on different subcarriers by using different codes (sparse code multiple access) and patterns (pattern division multiple access) for each subcarrier~\cite{8452949}. The successive interference cancellation technique is employed at the receiver end to eliminate the expected interference and guarantee improved overall fairness and throughput. Most importantly, significant SE gains can be achieved in NOMA networks. It is worth noting that a resource allocation design for SWIPT-enabled NOMA cellular networks could provide both SE improvement through NOMA and EE improvement via SWIPT. The very first attempt to maximize EE under multiple constraints and achieve energy-efficient resource management in SWIPT-enabled NOMA was conducted in \cite{8891923}. Research on this topic has just begun to flourish, with much potential awaiting discovery~\cite{jalal2}. \textbf{Massive MIMO- and mmWave-based SWIPT}: Propagation losses of broadcasted radio waves may deteriorate SWIPT efficiency, which makes SWIPT particularly applicable to short-distance uses.
Massive multiple-input multiple-output (MIMO) and millimeter-wave (mmWave) technologies would enable a BS or an AP to reliably transfer power to energy-constrained users through ultra-sharp energy beams (that concentrate transmission energy only at certain points) as well as provide diversity and multiplexing gain~\cite{6736761,6894453,7593259}. Integrating SWIPT with massive MIMO and mmWave technologies can therefore help overcome SWIPT's deficiencies and further enhance performance in terms of achievable data-rate, overall SE, and EE. Since the beam-width of a massive MIMO and mmWave based SWIPT system is very small due to the shorter wavelengths (and wider bandwidths), effective initial beam association, beam selection, and beam alignment algorithms are desirable. These topics were investigated in~\cite{7491259,8680660,8718514,8828094,8485639,8907878}. Moreover, considering that the antenna selection technique in multiple antenna systems would significantly decrease the system's power consumption, an exciting research direction would be verifying its applicability to massive MIMO- and mmWave-based SWIPT-enabled networks. \textbf{UAV-based SWIPT}: Unmanned Aerial Vehicles (UAVs) or remotely piloted aircraft have attracted significant attention.~UAVs can improve traditional cellular communication as relays and BSs. Compared to conventional cellular communication, some of the advantages of UAVs include increased wireless connectivity, high maneuverability, extensive coverage, simple implementation, and cost-effective communication, with increasing affordability. An unobstructed line-of-sight (LoS) link can help further improve the reliability of wireless communication systems. UAVs could be effectively integrated with SWIPT by virtue of the dominant presence of LoS connections, acting as flying BSs or APs, to provide a variety of services in areas lacking infrastructure.
However, a major drawback to UAV-based applications is that UAV devices are typically power-hungry devices with limited energy storage performing operations in flight. Deployment, trajectory design, and resource allocation must be improved for the efficient utilization of UAVs within a reasonable range of energy-constrained receivers~\cite{8876702,8892608,8915758,8918344,8941314,8976230,9045325}. \textbf{MEC-based SWIPT}: Mobile edge computing (MEC) is a new paradigm for facilitating the real-time implementation of computationally heavy tasks for massive low-power devices (e.g., sensors) by providing cloud-like computing at the edge of mobile networks. MEC provides resource-limited wireless devices with high-quality wireless services and multimedia applications because it can offload some or all of the computing tasks of these devices to nearby APs or BSs, where integrated MEC servers handle their tasks remotely~\cite{8016573}. Although MEC covers the computational aspect of massive computation-intensive devices via local computing or offloading, one big challenge remains: providing a sustainable and cost-effective energy supply to energy-constrained wireless devices. Implementing resource allocation policies for mobile users with heavy computing tasks via MEC and SWIPT would be an interesting topic to explore, with MEC-based SWIPT permitting ubiquitous computing~\cite{8642372,8845134,8904282,8907790,8952620,8970293,9057476}. \chapter*{Acknowledgments} After an intensive year, I put the finishing touches on this Master's thesis. I have been able to discover new elements in various areas of the beautiful world of telecommunication. For this, I would like to sincerely thank a number of people who have supported me during this long journey.\\ First of all, I want to thank my supervisor, Prof. Dr. Ir. Heidi Steendam, for giving me the opportunity to work on this topic for my thesis.
Her trust, guidance, and support helped me to successfully complete it. I was fortunate to have her supervision, especially her patience, understanding, and availability for discussing how to write scientific papers.\\ In addition, I express my gratitude to many friends and colleagues for our fruitful discussions and friendship. Among them, my special appreciation goes to my best friend Ata for his comments and invaluable knowledge, which yielded significant research outputs. I am so lucky and honored to have amazing friends, including Pandidan, Sirus, Khasmakhu, Mamadu, Hani, Arakani, Mahboubeh, Keyvan, and Dennis, who gave me crucial feedback. Tinus, Wim, Jacques, and Junior count among those amazing friends whose help and concern I will always remember.\\ Finally, I would like to express my most profound gratefulness to my beloved parents, Afsaneh and Nouri, who are always sources of strength and perseverance. I also want to thank my spiritual mother, Nancy, for being part of my life, giving me guidance and offering me help far from my home country. Lastly, I would like to thank my partner, Mary Grace, for her unconditional trust, care, admiration, and love. \vspace{2cm} \noindent \rightline{Jalal Jalali} \rightline{ June 2020} \chapter*{Permission to Use Content} The author grants permission for this Master's thesis to be used for consultation and for parts of it to be copied for personal use. For all other uses, however, the copyright must be respected, particularly with regard to the obligation to explicitly credit the source when quoting the results.
\vspace{2cm} \noindent \rightline{Jalal Jalali} \rightline{ June 2020} \newpage \thispagestyle{plain} \mbox{} \chapter{Operators and Symbols} \fancyhf{} \renewcommand{\headrulewidth}{2pt} \fancyhead[LE,RO]{\thepage} \fancyhead[RE]{\textit{ \nouppercase{Operators and Symbols}} } \fancyhead[LO]{\textit{ \nouppercase{\rightmark}} } \renewcommand{\footrulewidth}{0.1pt} \fancyfoot[CE,CO]{\nouppercase{Operators and Symbols}} \fancyfoot[LE,RO]{JFMJ} \textbf{Operators} \begin{flushleft} \renewcommand{\baselinestretch}{1.5} \normalsize \hspace{5mm} \begin{tabular}{ll} $|\mathcal{A}|$ & Cardinality of set $\mathcal{A}$\\ $|x|$ & Absolute value of $x$\\ $[x]^+$ & $\max\{0,x\}$ \\ $[x]^{-1}$ & Matrix inverse\\ $[x]^{T}$ & Matrix transpose \\ $(\text{·})^{(t)}$ & Value of a variable at the $t^{th}$ iteration\\ $f(\text{·}): \mathcal{A} \rightarrow \mathcal{B}$ & $f$ is a function from set $\mathcal{A}$ into $\mathcal{B}$ \\ $\inf f$ & Infimum of function $f$ \\ $\sup f$ & Supremum of function $f$ \\ $\min f$ & Minimum of function $f$ \\ $\max f$ & Maximum of function $f$ \\ $\nabla f$ & Gradient of function $f$ \\ $\mathcal{L}(\text{·})$ & Lagrangian function \\ $\mathcal{D}(\text{·})$ & Dual function \\ $\sum$ & Summation operator\\ $\mathcal{CN}(\mu,\sigma^{2})$ & Circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$\\ \end{tabular} \end{flushleft} \newpage \textbf{Symbols} \begin{flushleft} \renewcommand{\baselinestretch}{1.5} \normalsize \hspace{10mm} \begin{tabular}{ll} $p^*$ & Primal optimal value \\ $d^*$ & Dual optimal value \\ $\mathscr{B}$ & Bandwidth \\ $\mathcal{N}$ & Number of subcarriers \\ $\mathcal{K}$ & Number of users in the network \\ $\mathcal{J}$ & Number of cells in the network \\ $h$ & Downlink channel gain for the wireless information transfer\\ $g$ & Downlink channel gain for the wireless power transfer\\ $a$ & Binary subcarrier indicators\\ $x$ &
Binary antenna selection indicators\\ $\textrm{EH}$ & Harvested energy \\ $R$ & Data-rate \\ $\eta$ & Energy efficiency\\ $\epsilon$ & Power conversion efficiency of an energy harvesting device\\ $\kappa$ & Power amplifier efficiency\\ $\sigma^2$ & Noise power\\ $p_{max}$ & Maximum transmit power \\ $R_{min}$ & Minimum data-rate requirement \\ EH$_{min}$ & Minimum harvested energy requirement \\ \end{tabular} \end{flushleft} \newpage\thispagestyle{empty}\mbox{} \newpage\thispagestyle{empty}\mbox{} \chapter*{} { \setlength{\baselineskip}{14pt} \setlength{\parindent}{0pt} \setlength{\parskip}{8pt} \begin{center} \vspace{-15em} \noindent \textbf{\huge Resource Allocation for SWIPT in} \textbf{\huge Multi-Service Wireless Networks} by Jalal JALALI\\ A Thesis Presented in Partial Fulfillment of the Requirements for the Degree of \\Master of Science in Electrical Engineering\\ Academic year 2019-2020\\ Supervisor: Prof. Dr. Ir. Heidi STEENDAM\\ Faculty of Engineering and Architecture\\ Ghent University Department of Telecommunications and Information Processing\\ Chairman: Prof. Dr. Ir. Joris WALRAEVENS \\ \end{center} \vspace{2em} \textbf{\Large{Summary}} \vspace{0.7em} Novel resource allocation schemes for simultaneous wireless information and power transfer (SWIPT) are presented as a means of not only communicating and accessing information more efficiently in the next generation of mobile data networks, but also of minimizing a network's overall power consumption by providing a green energy source. First, a unique architecture is proposed that harvests energy from an access point (AP) without the receiver needing a splitter. In the proposed system model, a portion of the spectrum is used for information decoding (ID) while the remaining portion is exploited for energy harvesting (EH) in an orthogonal frequency division multiple access (OFDMA) network.
To investigate the performance gain, an optimization problem is formulated that maximizes the harvested energy of a multi-user single-cell OFDMA downlink (DL) network with SWIPT and also satisfies a minimum data-rate requirement for all users. A locally optimal solution for the underlying problem, which is essentially non-convex due to the coupled integer variables, is obtained by using optimization tools. Second, the proposed system model is improved in order to investigate the resource allocation problem of maximizing throughput based on the separated receiver architecture in an OFDMA multi-user multi-cell system that uses SWIPT. The resulting problem, which jointly optimizes the subcarrier assignment and power allocation, is a mixed-integer non-linear problem (MINLP) that is difficult to solve. Third, a state-of-the-art harvesting technique at the receiver that is based on receiver antenna selection with a co-located architecture is explored to optimize the energy efficiency (EE) of a SWIPT-enabled multi-cell multi-user OFDMA network. This is referred to as a “generalized antenna-switching technique”. Extensive simulations demonstrate the superiority of the proposed methodologies and reveal interesting results. \vspace{2em} \textbf{\Large{Keywords}} \vspace{0.7em} D.C. programming, energy efficiency, energy harvesting, green communication, majorization minimization, MINLP, multi-user communication, non-convex optimization, OFDMA, resource allocation, SWIPT, throughput. } \newpage
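As an aside on the methodology summarized above: the EE objectives treated here are ratios of data-rate to consumed power, a class of fractional programs commonly handled with Dinkelbach's algorithm. The following minimal sketch is only an illustration of that iteration on a single-link toy problem; the channel gain, circuit power, and power budget are made-up values, not the thesis's actual multi-cell formulation.

```python
import math

def dinkelbach_ee(h=2.0, p_c=0.5, p_max=4.0, tol=1e-9):
    """Maximize EE(p) = log2(1 + h*p) / (p + p_c) over 0 <= p <= p_max.

    Dinkelbach's algorithm: for a current ratio guess lam, solve the
    concave subproblem max_p log2(1 + h*p) - lam*(p + p_c); its
    stationary point has the closed form p = 1/(lam*ln 2) - 1/h,
    clipped to [0, p_max]. The ratio sequence increases monotonically
    to the optimal EE.
    """
    lam = 0.0
    p = p_max
    for _ in range(200):
        new_lam = math.log2(1.0 + h * p) / (p + p_c)  # EE achieved at p
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
        # argmax of the parametric subproblem for the current lam
        p = min(p_max, max(0.0, 1.0 / (lam * math.log(2.0)) - 1.0 / h))
    return p, new_lam

p_opt, ee_opt = dinkelbach_ee()
```

For these toy parameters the iteration settles on an interior power level (around one fifth of the budget), i.e.\ transmitting at full power is EE-suboptimal, which is the qualitative effect the thesis exploits.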
\def\textup{div}{\textup{div}} \def\mathcal{D}{\mathcal{D}} \def\mathcal{C}{\mathcal{C}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{I}{\mathbb{I}} \def\mathcal{L}{\mathcal{L}} \def\mathcal{M}{\mathcal{M}} \def\mathcal{N}{\mathcal{N}} \def\mathcal{F}{\mathcal{F}} \def\mathcal{K}{\mathcal{K}} \def\mathcal{A}{\mathcal{A}} \def\mathcal{H}{\mathcal{H}} \newcommand{\caps}[1]{\textup{\textsc{#1}}} \providecommand{\bysame}{\makebox[3em]{\hrulefill}\thinspace} \newcommand{\mathbb}{\mathbb} \newcommand{\ov}[1]{\mbox{$\overline{#1}$}} \newcommand{\upshape}{\upshape} \newcommand{\upvec}[2]{\mbox{$#1^{1},\ldots,#1^{#2}$}} \newcommand{\lovec}[2]{\mbox{$#1_{1},\ldots,#1_{#2}$}} \providecommand{\xrighto}[1]{\mbox{$\;\xrightarrow{#1}\;$}} \providecommand{\xleftto}{\mbox{$\xleftarrow$}} \newcommand{\mbox{$\;\leftarrow\!\mapstochar\;$}}{\mbox{$\;\leftarrow\!\mapstochar\;$}} \newcommand{\mbox{$\;\longleftarrow\!\mapstochar\;\,$}}{\mbox{$\;\longleftarrow\!\mapstochar\;\,$}} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\twoheadrightarrow}{\twoheadrightarrow} \def\vv<#1>{\langle#1\rangle} \def\ww<#1>{\langle\langle#1\rangle\rangle} \newcommand{\mbox{$\textup{Tr}$}}{\mbox{$\textup{Tr}$}} \newcommand{\diag}[1]{\mbox{$\textup{diag}(#1)$}} \newcommand{\mbox{$\text{\up{mult}}$}}{\mbox{$\text{\upshape{mult}}$}} \newcommand{\mbox{$\text{\up{inv}}$}}{\mbox{$\text{\upshape{inv}}$}} \newcommand{\mbox{$\text{\up{im}}\,$}}{\mbox{$\text{\upshape{im}}\,$}} \newcommand{\mbox{$\text{\up{id}}\,$}}{\mbox{$\text{\upshape{id}}\,$}} \newcommand{\mbox{$\text{\up{pr}}$}}{\mbox{$\text{\upshape{pr}}$}} \newcommand{\mbox{$\text{\up{coker}}\,$}}{\mbox{$\text{\upshape{coker}}\,$}} \newcommand{\mbox{$\text{\up{ev}}$}}{\mbox{$\text{\upshape{ev}}$}} \newcommand{\mbox{$\text{\up{rank}}\,$}}{\mbox{$\text{\upshape{rank}}\,$}} \newcommand{\mbox{$\dim_{\bb{R}}$}}{\mbox{$\dim_{\mathbb{R}}$}}
\newcommand{\mbox{$\dim_{\bb{C}}$}}{\mbox{$\dim_{\mathbb{C}}$}} \newcommand{\mbox{$\text{ddim}\,$}}{\mbox{$\text{ddim}\,$}} \providecommand{\det}{\mbox{$\text{\upshape{det}}\,$}} \providecommand{\codim}{\mbox{$\text{\upshape{codim}}\,$}} \providecommand{\sign}{\mbox{$\text{\upshape{sign}}\,$}} \providecommand{\vol}{\mbox{$\text{\upshape{vol}}$}} \newcommand{\mbox{$\text{\up{Span}}$}}{\mbox{$\text{\upshape{Span}}$}} \newcommand{\overline{\partial}}{\overline{\partial}} \newcommand{\dd}[2]{\mbox{$\frac{\partial #2}{\partial #1}$}} \newcommand{\mbox{$\dd{t}{}|_{0}$}}{\mbox{$\dd{t}{}|_{0}$}} \providecommand{\del}{\partial} \newcommand{\omega}{\omega} \newcommand{\Omega}{\Omega} \newcommand{\varphi}{\varphi} \newcommand{\Varphi}{\Varphi} \newcommand{\varepsilon}{\varepsilon} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\wt}[1]{\mbox{$\widetilde{#1}$}} \newcommand{\dwt}[1]{\mbox{$\widetilde{\widetilde{#1}}$}} \newcommand{\mbox{$\text{Alg}$}}{\mbox{$\text{Alg}$}} \newcommand{\mbox{$\text{\up{Fred}}$}}{\mbox{$\text{\upshape{Fred}}$}} \newcommand{\mbox{$\text{\up{Poly}}$}}{\mbox{$\text{\upshape{Poly}}$}} \providecommand{\Hom}{\mbox{$\text{\upshape{Hom}}$}} \newcommand{\mbox{$\bigsqcup$}}{\mbox{$\bigsqcup$}} \newcommand{\by}[2]{\mbox{$\frac{#1}{#2}$}} \newcommand{\mbox{$C^{\infty}$}}{\mbox{$C^{\infty}$}} \providecommand{\set}[1]{\mbox{$\{#1\}$}} \newcommand{\subseteq}{\subseteq} \newcommand{\supseteq}{\supseteq} \newcommand{\mbox{$\sum$}}{\mbox{$\sum$}} \newcommand{\hcm}{\mbox{$H_{\textup{CM}}$}} \newcommand{\hcml}{\mbox{$H_{\textup{CM}}^{(L)}$}} \newcommand{\mbox{$H_{\textup{free}}$}}{\mbox{$H_{\textup{free}}$}} \newcommand{\bra}[1]{\mbox{$\{#1\}$}} \newcommand{\ham}[1]{\mbox{$\smash{\nabla^{\omega}_{#1}}$}} \newcommand{\hamo}[1]{\mbox{$\smash{\nabla^{\omega_{0}}_{#1}}$}} \newcommand{Cartan subalgebra\xspace}{Cartan subalgebra\xspace} \newcommand{Cartan subgroup\xspace}{Cartan subgroup\xspace} 
\newcommand{\lie}[1]{\mbox{$\text{\upshape{Lie}}(#1)$}} \newcommand{\mathfrak{a}}{\mathfrak{a}} \newcommand{\mathfrak{b}}{\mathfrak{b}} \newcommand{\mathfrak{c}}{\mathfrak{c}} \newcommand{\doo}{\mathfrak{d}} \newcommand{\mathfrak{e}}{\mathfrak{e}} \newcommand{\mathfrak{f}}{\mathfrak{f}} \newcommand{\mathfrak{g}}{\mathfrak{g}} \newcommand{\mathfrak{h}}{\mathfrak{h}} \newcommand{\mathfrak{i}}{\mathfrak{i}} \newcommand{\mathfrak{j}}{\mathfrak{j}} \newcommand{\mathfrak{k}}{\mathfrak{k}} \newcommand{\mathfrak{l}}{\mathfrak{l}} \newcommand{\mathfrak{m}}{\mathfrak{m}} \newcommand{\mathfrak{n}}{\mathfrak{n}} \newcommand{\mathfrak{o}}{\mathfrak{o}} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{q}}{\mathfrak{q}} \newcommand{\mathfrak{r}}{\mathfrak{r}} \newcommand{\mathfrak{s}}{\mathfrak{s}} \newcommand{\mathfrak{t}}{\mathfrak{t}} \newcommand{\mathfrak{u}}{\mathfrak{u}} \newcommand{\mathfrak{w}}{\mathfrak{w}} \newcommand{\mbox{$\text{\up{conj}}$}}{\mbox{$\text{\upshape{conj}}$}} \newcommand{\mbox{$\text{\upshape{Ad}}$}}{\mbox{$\text{\upshape{Ad}}$}} \newcommand{\mbox{$\text{\upshape{ad}}$}}{\mbox{$\text{\upshape{ad}}$}} \newcommand{\mbox{$\text{\upshape{Ad}}^{*}$}}{\mbox{$\text{\upshape{Ad}}^{*}$}} \newcommand{\mbox{$\text{\upshape{ad}}^{*}$}}{\mbox{$\text{\upshape{ad}}^{*}$}} \newcommand{\mbox{$\mathcal{O}$}}{\mbox{$\mathcal{O}$}} \newcommand{\mbox{$\orb^{\bot}$}}{\mbox{$\mbox{$\mathcal{O}$}^{\bot}$}} \newcommand{\mbox{$C^{\circ}$}}{\mbox{$C^{\circ}$}} \newcommand{\mbox{$\Delta_{+}$}}{\mbox{$\Delta_{+}$}} \newcommand{\mbox{$\textup{SL}$}}{\mbox{$\textup{SL}$}} \newcommand{\mbox{$\mathfrak{sl}$}}{\mbox{$\mathfrak{sl}$}} \newcommand{\mbox{$\textup{GL}$}}{\mbox{$\textup{GL}$}} \newcommand{\mbox{$\mathfrak{gl}$}}{\mbox{$\mathfrak{gl}$}} \newcommand{\mbox{$\textup{SU}$}}{\mbox{$\textup{SU}$}} \newcommand{\mbox{$\mathfrak{su}$}}{\mbox{$\mathfrak{su}$}} \newcommand{\mbox{$\textup{SO}$}}{\mbox{$\textup{SO}$}} 
\newcommand{\mbox{$\mathfrak{so}$}}{\mbox{$\mathfrak{so}$}} \newcommand{\mbox{$\textup{U}$}}{\mbox{$\textup{U}$}} \newcommand{\mbox{$\mathcal{X}$}}{\mbox{$\mathcal{X}$}} \newcommand{\mbox{$\,\delta_t$}}{\mbox{$\,\delta_t$}} \newcommand{\todo}[1]{\vspace{5 mm}\par \noindent \framebox{\begin{minipage}[c]{0.95 \textwidth}\raggedright \tt #1 \end{minipage}}\vspace{5 mm}\par} \newcommand{\revise}[1]{#1} \newcommand{\retwo}[1]{#1} \title[HIPS for compressible flow]{A Hamiltonian interacting particle system for compressible flow} \usepackage[foot]{amsaddr} \author{Simon Hochgerner} \address{Finanzmarktaufsicht (FMA), Otto-Wagner Platz 5, A-1090 Vienna} \email{simon.hochgerner@fma.gv.at} \begin{document} \begin{abstract} The decomposition of the energy of a compressible fluid parcel into slow (deterministic) and fast (stochastic) components is interpreted as a stochastic Hamiltonian interacting particle system (HIPS). It is shown that the McKean-Vlasov equation associated to the mean field limit yields the barotropic Navier-Stokes equation with density dependent viscosity. Capillary forces can also be treated by this approach. Due to the Hamiltonian structure the mean field system satisfies a Kelvin circulation theorem along stochastic Lagrangian paths. \end{abstract} \maketitle \section{Introduction} \subsection{The barotropic Navier-Stokes equations} Consider a compressible barotropic fluid in an $n$-dimensional domain with periodic boundary conditions. The velocity, $u=u(t,x)$, and density, $\rho=\rho(t,x)$, are a time-dependent vector field and function, respectively, defined on the torus $M=\mathbb{R}^n/\mathbb{Z}^n$.
The compressible Navier-Stokes equations with density dependent viscosity and capillary forces are \begin{align} \label{intro:bNS1} \dot{u} &= - \nabla_{u}u - \rho^{-1}\nabla p + \rho^{-1} \Big(\textup{div}\,S + \textup{div}\,C\Big) \\ \label{intro:bNS2} \dot{\rho} &= -\textup{div}(\rho u) \end{align} where $\nabla_u u = \vv<u,\nabla>u = \sum u^j\del_j u$. The hydrostatic pressure, $p$, is assumed to be given in terms of the density, that is $p = \rho^2\mathcal{U}'(\rho)$ for a known function $\mathcal{U}$ which models the specific internal energy when the fluid is in equilibrium. Further, $S$ is the stress tensor, defined by \begin{equation} \label{intro:stress} S_{ij} = \nu{\rho}\Big(\del_i u^j + \del_j u^i\Big) \end{equation} where $\nu\ge0$ is the viscosity coefficient. The corresponding force is \begin{align} \textup{div}\,S = \sum \del_i S_{ij} e_j = \nu (\nabla^{\top}u)\nabla{\rho} + \nu {\rho}\nabla\textup{div}(u) + \nu \nabla_{ \nabla{\rho} }u + \nu{\rho}\Delta u. \end{align} Let $\kappa\ge0$ be a constant. The capillary tensor, $C$, is defined as \begin{equation} \label{intro:Cap} C = \kappa\Big( \Big( {\rho}\Delta{\rho} +\by{1}{2}\vv<\nabla{\rho},\nabla{\rho}> \Big)\mathbb{I} - \nabla{\rho}\otimes\nabla{\rho} \Big) \end{equation} and satisfies $\textup{div}\,C = \kappa{\rho}\nabla\Delta{\rho}$. For background regarding the barotropic Navier-Stokes equations with viscosities which depend linearly on the density we refer to \cite{MV07} and references therein. The capillary tensor~\eqref{intro:Cap} appears in this form also in \cite[Equ.~(4)]{AMW97} and goes back to Korteweg~\cite{K01}. Analytic aspects of the barotropic system with capillary forces \eqref{intro:bNS1}-\eqref{intro:bNS2} are treated in \cite{BDL03}. Navier-Stokes equations with more general third-order spatial derivative terms are discussed in \cite{J11}. 
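The identity $\textup{div}\,C=\kappa\rho\nabla\Delta\rho$ stated after \eqref{intro:Cap} can be confirmed by a short symbolic computation. The sketch below (not part of the paper; the polynomial density is an arbitrary illustrative choice) checks it in two dimensions:

```python
import sympy as sp

x, y, kappa = sp.symbols('x y kappa')
rho = x**3 + x*y**2 + 2*y + 5            # arbitrary smooth test density

grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

g = grad(rho)
# Capillary tensor: C = kappa*((rho*Lap(rho) + |grad rho|^2/2)*I - grad rho (x) grad rho)
C = kappa * ((rho * lap(rho) + (g.T * g)[0] / 2) * sp.eye(2) - g * g.T)

# Row-wise divergence: (div C)_j = sum_i d/dx_i C_ij
divC = sp.Matrix([sum(sp.diff(C[i, j], v) for i, v in zip((0, 1), (x, y)))
                  for j in range(2)])

# Claimed identity: div C = kappa * rho * grad(Lap(rho))
residual = (divC - kappa * rho * grad(lap(rho))).expand()
```

The residual expands to the zero vector, as the first-order terms produced by the isotropic part of $C$ cancel exactly against those of $\nabla\rho\otimes\nabla\rho$.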
\subsection{Description of results} This paper is concerned with a mean field representation of solutions to the compressible Navier-Stokes system \eqref{intro:bNS1}-\eqref{intro:bNS2}, and this mean field is derived from a stochastic Hamiltonian interacting particle system (\revise{Hamiltonian IPS or} HIPS). The HIPS picture follows from a combination of a many particle approach to fluid dynamics and a decomposition of the energy into slow (deterministic) and fast (stochastic) components. \revise{ The interpretation as a slow-fast decomposition is consistent with the multi-time formulation of \cite{CGH17}, where it is shown that advection along stochastic transport fields in the Eulerian representation of stochastic fluid dynamics can be obtained by homogenization. } To describe the many particle approach, consider a tiny Eulerian volume (fixed in space and time) $\Delta V$ which is divided into a very large number $N$ of equal subvolumes. It is assumed that the continuum hypothesis holds in each of the infinitesimal subvolumes $\Delta V^{\alpha}$. Thus there is a mass density $\rho^{\alpha}$ for each grid index $\alpha$. Since the subvolumes are equal, so are the initial conditions for $\rho^{\alpha}$, and $\rho^{\alpha}|_0 = \rho|_0$, which is the initial condition for the overall mass density in $\Delta V$. Now, the fluid parcels in all of the subvolumes interact because the deterministic component of the energy of the blob of fluid in $\Delta V$ depends on the total momentum and the overall mass density. (`Momentum' shall always refer to momentum per unit volume, i.e.\ its dimension is density times velocity.) On the other hand, molecular diffusion is modeled as a set of $N$ independent (multidimensional) Brownian motions such that all $N$ individual parcels undergo their own stochastic process. Since molecules are incompressible these processes are set up as stochastic perturbations along divergence free vector fields.
Hence the fluid parcel in $\Delta V$ consists of $N$ identical subparcels, and each subparcel follows the flow of the ensemble of subparcels (is dragged along or advected) but also undergoes its own diffusion. Heuristically, this means that the interaction in the IPS is due to the deterministic part of the motion, where each infinitesimal subparcel follows the mean flow of the ensemble. The corresponding total Hamiltonian \eqref{e:H^N} for the fluid in $M$ is then obtained by integrating the energies over the infinitesimal subvolumes $\Delta V^{\alpha}$ and summing over all indices $\alpha$. Since this Hamiltonian describes a system consisting of $N$ subparcels, it is a function \begin{equation*} H^N: \Big(T^*(\textup{Diff}(M)\circledS\mathcal{F}(M))\Big)^N \to\mathbb{R} \end{equation*} defined on the $N$-fold product of the phase space of compressible fluid mechanics. (The semi-direct product notation is explained in Section~\ref{sec:SD}.) The canonical symplectic structure on the phase space then yields a Hamiltonian system of Stratonovich SDEs by adapting the construction of \cite{LCO08} to the infinite dimensional setting. This system of $N$ interacting SDEs will be called the `HIPS equations of motion'. In Section~\ref{sec:3A} it is explained how the Hamiltonian $H^N$ is a sum of terms involving: translational kinetic energy of the particle ensemble, equilibrium internal energy associated to the hydrostatic pressure, equilibrium internal energy associated to capillary forces, non-equilibrium internal energy due to expansion/compression along the flow, and stochastic energy associated to molecular bombardment. The HIPS equations of motion are derived in Section~\ref{sec:3b}, see \eqref{e:hips3}-\eqref{e:hips5}. Under the assumption that the mean field limit exists, as $N\to\infty$, the stochastic mean field equations are obtained in Section~\ref{sec:3c}.
The mean field SDEs \eqref{e:Hmf1}-\eqref{e:Hmf2} represent the Eulerian description of the motion of a fluid parcel associated to a subdivision $\Delta V^{\alpha}$ for very large $N$. The expected fluid flow is then obtained by averaging over momenta and mass densities of all the smaller fluid parcels. But these are just the mean fields. It therefore remains to calculate the equations of motion for the latter. These equations are deterministic and given in \eqref{e:exp1}-\eqref{e:exp2}. In Section~\ref{sec:3d} it is shown that, if the stochastic perturbation corresponds to a Brownian motion for each Fourier mode (i.e., is a cylindrical Wiener process in the space of solenoidal vector fields), then the momentum and mass density mean fields solve the compressible Navier-Stokes system~\eqref{intro:bNS1}-\eqref{intro:bNS2}. This is the content of Theorem~\ref{thm:cNS}. Moreover, since the HIPS is invariant under the particle relabeling symmetry (invariance with respect to the group of diffeomorphisms), Noether's Theorem implies that a Kelvin Circulation Theorem holds along stochastic Lagrangian paths (Proposition~\ref{prop:KCT}). The density dependence of the viscosity, as manifested by the $\rho$-factor in \eqref{intro:stress}, is a consequence of the form of the Hamiltonian~\eqref{e:H^N}. The compressible Navier-Stokes equations are often expressed with respect to a density-independent viscosity, but it is not clear how to realize this independence in the HIPS framework. Mean field representations of solutions to the incompressible Navier-Stokes equation have been previously obtained by \cite{CI05,I06b} by considering a Weber functional along stochastic Lagrangian paths. In \cite{H17,H18} the above described Eulerian (H)IPS formulation has been used to also obtain a stochastic mean field representation for solutions to the incompressible Navier-Stokes equation.
To the best of my knowledge, Theorem~\ref{thm:cNS} is the first stochastic representation of solutions to the compressible Navier-Stokes equations. \revise{Related approaches to fluid dynamics from the perspective of stochastic variational principles include \cite{Y83,CC07,C}, which characterize solutions to the incompressible Navier-Stokes equations by variational principles for stochastic Lagrangian paths, and generalizations in \cite{ACC14,CCR15}. } \subsection{Applications: NatCat models and Solvency~II capital requirements} Uses of the Navier-Stokes equation range from semi-conductor engineering to astrophysics and it is not the goal of this section to attempt a review of these topics. Rather, I want to briefly describe an application where the interaction between academia and industry is perhaps not very well established. This concerns models of natural perils (NatCat models) that are used in the insurance industry to calculate risk capital requirements. These risk capital requirements are a determining factor for the solvency of a given company. In the EU the relevant regulatory framework is called Solvency~II (\cite{Level1}). There exist different NatCat models, which are in general proprietary, for natural disasters such as earthquakes, flooding, tropical cyclones, and extratropical cyclones (European winter storms). Storm models are often based on numerical weather prediction (NWP) systems, and therefore inherit all their advantages and flaws. \revise{However, it is not claimed that the stochastic HIPS formulation of this paper is appropriate to generate stochastic NatCat storm scenarios. Thus the material in this section is only intended as additional background information concerning possible applications of stochastic fluid mechanics, and it is logically independent from the main result, Theorem~\ref{thm:cNS}. 
} \subsubsection{Numerical weather prediction (NWP) and climate modeling} NWP systems and climate models are important for a number of obvious reasons such as daily weather forecasts or climate change quantification. Geophysical flows are modeled by the compressible Navier-Stokes equations (without capillary term, i.e.\ $C=0$ in \eqref{intro:bNS1}). These equations are deterministic. However, any implementation of these equations needs to introduce temporal and spatial discretizations. Physical processes which occur below these chosen grid scales (`subgrid phenomena') cannot be accounted for by any numerical model of the deterministic flow equations. Since the subgrid processes are inherently unknown and uncertain, it seems reasonable to model them from a stochastic dynamics point of view. Indeed, such a position has been adopted quite early by Kraichnan~\cite{K68}. See also \cite{MR06} for a modern version and fluid equations with stochastic force terms. Reviews regarding NWP systems and climate models are contained in \cite{BTB15,Betal17,P19}. These also address the need for stochastic parameterizations of unknown subgrid processes. Recent advances in the stochastic modeling of geophysical flows include the `location uncertainty' approach of M\'emin, Resseguier and collaborators \cite{Mem14,RMC17,RMC17a,RMC17b} as well as the `stochastic advection by Lie transport (SALT)' theory of Holm and collaborators \cite{Holm15,CGH17,CFH17,DHL20,ABHT20}. The SALT approach is based on the observation that subgrid phenomena represent unknown physical processes and should therefore be derived from a stochastic variational principle. As a consequence, these models preserve circulation along stochastic Lagrangian paths. \subsubsection{NatCat models and solvency capital requirement (SCR)} With the implementation of the Solvency~II regulatory regime (\cite{Level1}) on 1~January 2016, applied insurance mathematics has become a surprisingly diverse and multi-disciplinary subject.
The relevant tools extend beyond classical actuarial science to, e.g., modeling of local general accounting principles (\cite{HG19,GHL20}), no-arbitrage principles and stochastic interest rate models (\cite{TW16,DHOST20}), and stochastic fluid dynamics. The significance of the last point in this (certainly incomplete) list is briefly explained below. One of the basic quantitative principles of Solvency~II can be summarized as follows: Insurance and reinsurance undertakings are required to quantify all relevant risk factors over a one-year horizon and derive a corresponding loss distribution. Now, the own funds (i.e.\ excess of assets over liabilities) have to cover the $99.5$th percentile of this distribution (`survival of the $200$-year event'). This $99.5$th percentile corresponds to the so-called solvency capital requirement (SCR). The SCR can be calculated from a prescribed standard formula or a company-specific internal model. Medium- and large-sized companies generally use internal models. If a company has chosen the internal model approach and covers windstorm risks, then the corresponding loss distribution for a one-year period has to be derived. To do so, the following three-step procedure, or a variant of it, is used (\cite{GK05}): \begin{enumerate} \item Hazard module: physical model. A large set of windstorm scenarios is generated. These are the so-called stochastic scenarios of the NatCat model. \item Vulnerability module. This step quantifies the vulnerability (i.e., damage done) of the insured structures for each of the stochastic scenarios. \item Financial loss module. Finally, structural damage is converted to loss by taking into account contract specifics (e.g., sum insured) for each damaged structure and, possibly, reinsurance. \end{enumerate} Since the relevant time period for NatCat models is one year, they operate on a new temporal scale when compared to short-term weather forecast models and medium- or long-term climate models.
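To make the percentile mechanics of the SCR concrete, the following is a deliberately simplistic frequency-severity Monte Carlo sketch (Poisson storm counts, lognormal severities; all parameters are invented for illustration and have nothing to do with a calibrated NatCat model):

```python
import numpy as np

rng = np.random.default_rng(7)
years = 20_000                       # simulated one-year scenarios

# Frequency: Poisson number of storms per year;
# severity: lognormal loss per storm (illustrative parameters).
counts = rng.poisson(lam=0.8, size=years)
sev = rng.lognormal(mean=2.0, sigma=1.0, size=int(counts.sum()))

# Aggregate individual storm losses into annual losses.
annual_loss = np.zeros(years)
np.add.at(annual_loss, np.repeat(np.arange(years), counts), sev)

# Solvency II SCR: the 99.5th percentile ("1-in-200-year" loss)
# of the one-year aggregate loss distribution.
scr = float(np.quantile(annual_loss, 0.995))
```

The same read-off applies unchanged when the scenario set comes from an NWP-based hazard module instead of a toy severity distribution; only the provenance of `annual_loss` differs.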
The most advanced NatCat models are based on a coupling of a global circulation model (GCM) and an NWP system. Since these models are proprietary, it is not possible to cite a suitable model documentation. However, \cite{C02} contains a review of the basic method, which is still quite up to date. In particular, it is explained how (deterministic) NWP systems are used to generate a set of scenario events by statistically sampling the initial conditions. \begin{quote} ``Using NWP technology, a large set of potential future storms is generated by taking data sets comprising the initial pressure fields of historical storms, perturbing them both temporally and spatially, and moving them forward in time through the application of a set of partial differential equations governing fluid flow. The resulting event set is rigorously tested to ensure that it provides an appropriate representation of the entire spectrum of potential storm experience -- not just events of average probability, but also the extreme events that make up the tail of the loss distribution.'' (\cite{C02}) \end{quote} This point will be taken up again in Section~\ref{sec:concl}. A recent reanalysis of European winter storm events, which also highlights the potential financial risks for the insurance sector, has been carried out by \cite{Hetal15}. \section{Notation and preliminaries}\label{sec:nota} \subsection{Diffeomorphism groups}\label{sec:diffgps} Let $M=T^n=\mathbb{R}^n/\mathbb{Z}^n$. We fix $s>1+n/2$ and let $\textup{Diff}(M)^s$ denote the infinite dimensional $\mbox{$C^{\infty}$}$-manifold of $H^s$-diffeomorphisms on $M$. Further, $\textup{Diff}(M)^s_0$ denotes the submanifold of volume preserving diffeomorphisms of Sobolev class $H^s$. Both $\textup{Diff}(M)^s$ and $\textup{Diff}(M)^s_0$ are topological groups but not Lie groups since left composition is only continuous but not smooth. Right composition is smooth.
The tangent space of $\textup{Diff}(M)^s$ (resp.\ $\textup{Diff}(M)^s_0$) at the identity $e$ shall be denoted by $\mathfrak{g}^s$ (resp.\ $\mathfrak{g}^s_0$). Let $\mbox{$\mathcal{X}$}^s(M)$ denote the vector fields on $M$ of class $H^s$ and $\mbox{$\mathcal{X}$}_0^s(M)$ denote the subspace of divergence free vector fields of class $H^s$. We have $\mathfrak{g}^s_0 = \mbox{$\mathcal{X}$}^s_{0}(M)$ and $\mathfrak{g}^s=\mbox{$\mathcal{X}$}^s(M)$. The superscript $s$ will be dropped from now on. We use right multiplication $R^g: \textup{Diff}(M)\to \textup{Diff}(M)$, $k\mapsto k\circ g = kg$ to trivialize the tangent bundle $T\textup{Diff}(M)\cong \textup{Diff}(M)\times\mathfrak{g}$, $\xi_g\mapsto(g,(TR^g)^{-1}\xi_g)$, and similarly for $\textup{Diff}(M)_0$. The Riemannian metric on $\textup{Diff}(M)\times\mathfrak{g}\cong T\textup{Diff}(M)$ is defined by \[ \ww<\xi_g,\eta_g> = \int_M\vv<\xi(g(x)),\eta(g(x))>\, dx \] for $\xi,\eta\in\mathfrak{g}$, where $dx$ is the standard volume element in $M$, and $\vv<.,.>$ is the Euclidean inner product. See \cite{AK98,EM70,MEF,Michor06} for further background. \subsection{Derivatives} The adjoint with respect to $\ww<.,.>$ of the Lie derivative $L$, given by $L_X Y = \nabla_X Y - \nabla_Y X$, is \begin{equation} L^{\top}_X Y = -\nabla_X Y - \textup{div}(X)Y - (\nabla^{\top}X)Y \end{equation} with $ \nabla_X Y = \vv<X,\nabla>Y = \sum X^i\del_i Y^j e_j $ and $ (\nabla^{\top}X)Y = \sum (\del_i X^j)Y^j e_i $ with respect to the standard basis $e_i$, $i = 1,\dots,n$. The notation $\mbox{$\text{\upshape{ad}}$}(X)Y = [X,Y] = -L_X Y$ and $\mbox{$\text{\upshape{ad}}$}(X)^{\top} = -L^{\top}_X$ will be used. The variational derivative of a functional $F: \mathfrak{g}\to\mathbb{R}$ will be denoted by $\delta F/\delta X$, that is \begin{equation} \ww<\frac{\delta F}{\delta X},Y> = \frac{\del}{\del t}\Big|_0 F(X + tY) \end{equation} for $X,Y\in\mathfrak{g}$. 
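The sign conventions in $L^{\top}$ can be checked symbolically. The following sympy sketch verifies the adjoint relation $\int_M\vv<L_X Y,Z>\,dx = \int_M\vv<Y,L^{\top}_X Z>\,dx$ on the two-dimensional torus; the particular periodic test fields are arbitrary choices for illustration only:

```python
import sympy as sp

x, y = sp.symbols('x y')
w = 2 * sp.pi  # the fields below are 1-periodic, matching M = R^2/Z^2

# arbitrarily chosen periodic test vector fields (X has nonzero divergence)
X = sp.Matrix([sp.sin(w*x), sp.cos(w*y)])
Y = sp.Matrix([sp.cos(w*y), sp.sin(w*x)])
Z = sp.Matrix([sp.sin(w*(x + y)), sp.cos(w*x)])

def nabla(V, W):   # covariant derivative nabla_V W in the flat metric
    return sp.Matrix([V[0]*sp.diff(W[i], x) + V[1]*sp.diff(W[i], y)
                      for i in range(2)])

def nablaT(V, W):  # (nabla^T V) W, i-th entry: sum_j (d_i V^j) W^j
    return sp.Matrix([sum(sp.diff(V[j], c)*W[j] for j in range(2))
                      for c in (x, y)])

divX = sp.diff(X[0], x) + sp.diff(X[1], y)
LXY  = nabla(X, Y) - nabla(Y, X)                 # L_X Y
LTXZ = -nabla(X, Z) - divX*Z - nablaT(X, Z)      # L^T_X Z

lhs = sp.integrate(LXY.dot(Z), (x, 0, 1), (y, 0, 1))
rhs = sp.integrate(Y.dot(LTXZ), (x, 0, 1), (y, 0, 1))
assert sp.simplify(lhs - rhs) == 0
```

The check rests on integration by parts on the torus, where all boundary terms vanish by periodicity.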
\subsection{Semi-direct product structure}\label{sec:SD} The configuration space of compressible fluid mechanics on $M$ is the semi-direct product $\textup{Diff}(M)\circledS\mathcal{F}(M)$ where $\mathcal{F}(M)$ denotes functions (also of Sobolev class $s$) on $M$. The semi-direct product structure is defined by the right action \begin{equation} R^{(\phi,g)}(\psi,f) = (\psi\circ\phi, f\circ\phi + g). \end{equation} The corresponding phase space is trivialized with respect to this right multiplication as \[ T^*(\textup{Diff}(M)\circledS\mathcal{F}(M)) \cong \textup{Diff}(M)\circledS\mathcal{F}(M) \times \mathfrak{g}^*\times\mathcal{F}(M)^*. \] The variables $\mu\in\mathfrak{g}^*$ and $\rho\in\mathcal{F}(M)^*$ represent momentum and mass density, respectively. Making use of the Euclidean volume form $dx$, the duals can be identified as $\mathfrak{g}^*=\Omega^1(M)$ and $\mathcal{F}(M)^*=\mathcal{F}(M)$. Details on Hamiltonian mechanics on semi-direct products are given in \cite{MRW84}, where the case of compressible ideal fluids is also treated. \subsection{Stochastic dynamics} Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},P)$ be a filtered probability space satisfying the usual assumptions as specified in \cite{Pro}. In the following, all stochastic processes shall be understood to be adapted to this filtration. The symbol \begin{equation} \mbox{$\,\delta_t$} \end{equation} will be used to denote the Stratonovich differential to distinguish it from the variational derivative $\delta$. The Ito differential does not appear in this paper. The exterior differential is $d$. \subsection{Brownian motion in $\mathfrak{g}_0$}\label{sec:BMgu} Let \[ \mathbb{Z}_n^+ := \set{k\in\mathbb{Z}_n: k_1>0 \textup{ or, for } i=2,\ldots,n, k_1=\ldots=k_{i-1}=0, k_i>0 }. \] For $k\in\mathbb{Z}^+_n$ let $k_1^{\bot},\ldots,k_{n-1}^{\bot}$ denote a choice of pairwise orthogonal vectors in $\mathbb{R}^n$ such that $|k_i^{\bot}|=|k|$ and $\vv<k_i^{\bot},k>=0$ for all $i=1,\ldots,n-1$. 
Consider the following system of vectors in $\mathfrak{g}_0$ (\cite{CM08,CS09}): \begin{equation*} A_{(k,i)} = \frac{1}{|k|^{s+1}}\cos\vv<k,x>k^{\bot}_i,\; B_{(k,i)} = \frac{1}{|k|^{s+1}}\sin\vv<k,x>k^{\bot}_i,\; A_{(0,j)} = e_j \end{equation*} where $e_j\in\mathbb{R}^n$ is the standard basis and $s$ is the Sobolev index from Section~\ref{sec:diffgps}. By slight abuse of notation we identify these vectors with their corresponding right invariant vector fields on $\textup{Diff}(M)_0$. Further, in the context of the $\zeta_r$ vectors we shall make use of the multi-index notation $r = (k,i,a)$ where $k\in\mathbb{Z}_n^+$ and $a=0,1,2$ such that \begin{align*} \zeta_r &= A_{(0,i)} \textup{ with } i=1,\ldots,n \textup{ if } a=0\\ \zeta_r &= A_{(k,i)} \textup{ with } i=1,\ldots,n-1 \textup{ if } a=1\\ \zeta_r &= B_{(k,i)} \textup{ with } i=1,\ldots,n-1 \textup{ if } a=2 \end{align*} Thus by a sum over $\zeta_r$ we shall mean a sum over these multi-indices, and this notation will be used throughout the rest of the paper. It can be shown (see \cite[Appendix]{CS09} for details) that the $\zeta_r$ form an orthogonal system of basis vectors in $\mathfrak{g}_0$, such that \begin{equation}\label{e:nablaXX} \nabla_{\zeta_r}\zeta_r = 0 \end{equation} and, for $X\in\mbox{$\mathcal{X}$}(M)$, \begin{equation} \label{e:Delta} \sum \nabla_{\zeta_r}\nabla_{\zeta_r}X = c^s\Delta X \end{equation} where $c^s = 1+\frac{n-1}{n}\sum_{k\in\mathbb{Z}_n^+}\frac{1}{|k|^{2s}}$ is a constant and $\Delta$ is the vector Laplacian. \begin{proposition}[\cite{CC07,CM08,C,DZ}]\label{prop:bm} Let $W_t = \sum \zeta_r W_t^r$, where the $W_t^r$ are independent copies of Brownian motion in $\mathbb{R}$. Then $W$ defines (a version of) Brownian motion (i.e., cylindrical Wiener process) in $\mathfrak{g}_0$. \end{proposition} \section{HIPS for compressible flow}\label{sec:hips} This section is concerned with the HIPS equations of motion and the mean field limit. 
Background on mean field theory can be found in \cite{S91,AD95,DV95,M96,JW17}. \subsection{The Hamiltonian}\label{sec:3A} Consider a barotropic fluid in $M = \mathbb{R}^n/\mathbb{Z}^n$. At a (macroscopic) position $x\in M$ consider a tiny volume element $\Delta V_x$. Suppose $\Delta V_x$ is further divided into $N$ infinitesimal volume elements, $\Delta V_x^{\alpha}$, labeled by $\alpha = 1,\dots,N$, which are all assumed to have identical dimensions. In each volume element there is a mass density $\rho^{\alpha}=\rho^{\alpha}(x)$. Thus it is assumed that the continuum description of the fluid also holds at the level of the subdivision. Let the fluid element in volume $\Delta V_x^{\alpha}$ have velocity $v^{\alpha}(x)$. Thus the energy of the particle ensemble in the total infinitesimal volume is determined by the velocities, $v^{\alpha}$, and densities, $\rho^{\alpha}$, in the subdivisions. To arrive at the total energy \eqref{e:H^N} we consider five contributions: \begin{enumerate} \item Translational kinetic energy; \item Equilibrium internal energy $\mathcal{U}$ which gives rise to the hydrostatic pressure $p$; \item Equilibrium capillary energy; \item Non-equilibrium expansion/compression energy; \item Stochastic energy due to molecular bombardment. \end{enumerate} \subsubsection{Translational kinetic energy} The total velocity at $x$, $v(x)$, is the weighted average \begin{equation} \label{e:v} v(x) = \frac{\sum_{\alpha=1}^N \rho^{\alpha}(x)v^{\alpha}(x) }{\sum_{\alpha=1}^N \rho^{\alpha}(x)}. 
\end{equation} The total momentum, $\mu(x)$, is therefore \begin{equation} \mu(x) = \sum_{\alpha} \rho^{\alpha}(x)v^{\alpha}(x) = \sum_{\alpha} \mu^{\alpha} \end{equation} and the translational kinetic energy $K^N(x)$ of the particle system is \begin{equation} \label{e:E1} K^N(x) = \frac{1}{2}\vv< \mu(x), v(x)> = \frac{1}{2} \Big\langle \sum_{\alpha} \mu^{\alpha} (x) , \frac{ \sum_{\beta}\mu^{\beta}(x) } {\sum_{\nu}\rho^{\nu}(x)} \Big\rangle \end{equation} \subsubsection{Equilibrium internal energy} Notice that the overall mass density in $\Delta V_x$ is expressed, in this subdivision picture, as $\rho(x) = \sum\rho^{\alpha}(x)/N$. Consequently, the mass contained in $\Delta V_x$ is $\rho(x)\cdot\Delta V_x = \sum\rho^{\alpha}\,dx$ where $\Delta V_x^{\alpha}\approx\Delta V_x/N$ is identified with the infinitesimal volume element $dx$. The barotropicity assumption implies that the specific equilibrium internal energy in $\Delta V$ depends on the overall mass density $\sum\rho^{\alpha}/N$. Then the equilibrium internal energy in $\Delta V$ is \begin{equation} \label{e:E2} \sum\rho^{\alpha}\mathcal{U}\Big(\sum\rho^{\alpha}/N\Big)\, dx. \end{equation} \subsubsection{Capillary energy} Let $\kappa\ge0$ be a constant. The capillary energy is defined as \begin{equation} \label{e:E3} \kappa\by{1}{2}\Big\langle\nabla\sum\rho^{\alpha}, \nabla\sum\rho^{\alpha}/N\Big\rangle \,dx \end{equation} which depends, again, on the overall mass density. This form coincides with the capillary contribution to the Helmholtz free energy \cite[Equ.~(5)]{AMW97}. \subsubsection{Non-equilibrium expansion energy} Let $\nu\ge0$ be a constant. Assume temporarily that $\Delta V_x$ is a box which is aligned along Cartesian coordinates $e_1$, $e_2$, $e_3$ and that the flow has only velocity components pointing in the direction of the $e_1$-axis. 
Let $L$ denote the set of labels $\alpha$ such that the corresponding subvolumes $\Delta V^{\alpha}$ constitute the left wall of $\Delta V_x$ while $R$ denotes those which correspond to the right wall (viewed along $e_1$) of the volume. Now, if \begin{equation} \frac{\sum_{\alpha\in L}\rho^{\alpha} v^{\alpha}}{\sum_{\alpha\in L}\rho^{\alpha}} - \frac{\sum_{\alpha\in R}\rho^{\alpha} v^{\alpha}}{\sum_{\alpha\in R}\rho^{\alpha}} \end{equation} is greater than $0$, then particles moving into $\Delta V$ are faster than those moving out, and the corresponding energy difference should contribute to the (non-equilibrium) internal energy of the system. Since this expression is proportional to minus the divergence of the barycentric velocity, we propose a non-equilibrium internal energy expression \begin{equation} \label{e:E4} -\nu \sum\rho^{\alpha}\textup{div}\Big(\frac{\sum_{\alpha}\rho^{\alpha} v^{\alpha}}{\sum_{\alpha}\rho^{\alpha}} \Big) \revise{dx}. \end{equation} \subsubsection{Stochastic energy} The energy due to molecular bombardment along a vector field $\xi_r$ \revise{in $\Delta V_x^{\alpha}\approx dx$} is \begin{equation} \label{e:E5} \vv<\rho^{\alpha} v^{\alpha},\xi_r>\mbox{$\,\delta_t$} W^r \revise{dx}. \end{equation} See \cite{LCO08, HR12, Holm15, H17, H18}. This stochastic perturbation corresponds to individual molecules imparting their velocities, namely $\xi_r$, on the macroscopic fluid element in $\Delta V^{\alpha}$. Since individual molecules are incompressible the $\xi_r$ are assumed to be divergence free. \subsubsection{Total energy} \revise{ The above subdivision formulation implies that the total configuration space is of the form $\Pi_{\alpha=1}^N(\textup{Diff}(\Delta V^{\alpha})\circledS\mathcal{F}(\Delta V^{\alpha}))$. But now the $\Delta V^{\alpha}$, which are all identical by assumption, are identified with the infinitesimal element $dx$ in $M$. 
Thus each index $\alpha$ corresponds to a copy of $M$, and this can be done since for the state variables, velocity and density, it does not make a difference whether these are regarded at the $\Delta V$ or at the $\Delta V^{\alpha}$ level. Therefore, letting the position $x\in M$ range over the full domain, the total configuration space is $\Pi_{\alpha=1}^N(\textup{Diff}(M)\circledS\mathcal{F}(M)) = (\textup{Diff}(M)\circledS\mathcal{F}(M))^N$. } Let us switch from velocity and density to momentum \revise{(density)}, $\mu^{\alpha} = \rho^{\alpha}v^{\alpha}$, and density, $\rho^{\alpha}$, as state variables. \revise{The phase space is thus $ T^*(\textup{Diff}(M)\circledS\mathcal{F}(M))^N = ( T^*(\textup{Diff}(M)\circledS\mathcal{F}(M)) )^N $. } We use the Euclidean metric to identify \revise{each copy in} the phase space, which is the regular dual, as \revise{ \[ T^*(\textup{Diff}(M)\circledS\mathcal{F}(M)) = T(\textup{Diff}(M)\circledS\mathcal{F}(M)) = (\textup{Diff}(M)\circledS\mathcal{F}(M))\times(\mbox{$\mathcal{X}$}(M)\circledS\mathcal{F}(M)) \] where the last identification follows from right-multiplication in the semi-direct product group, see Section~\ref{sec:SD}. } The resulting Hamiltonian of the IPS will therefore be a function \begin{equation} \label{e:HSys} H^N: \Big( T(\textup{Diff}(M)\circledS\mathcal{F}(M)) \Big)^N \to \mathbb{R}. 
\end{equation} For \begin{equation} \label{e:phase} \Gamma = \Big( \Phi^{\alpha},f^{\alpha}; \mu^{\alpha},\rho^{\alpha}\Big)_{\alpha=1}^N \in \Big( T(\textup{Diff}(M)\circledS\mathcal{F}(M)) \Big)^N \end{equation} the total Hamiltonian is the sum of \eqref{e:E1}, \eqref{e:E2}, \eqref{e:E3}, \eqref{e:E4} and \eqref{e:E5}, and given in semi-martingale notation as \begin{align} \label{e:H^N} H^N(\Gamma) &= \frac{1}{2}\int_M \Big\langle \sum_{\alpha} \mu^{\alpha} , \frac{ \sum_{\beta}\mu^{\beta} } {\sum_{\gamma}\rho^{\gamma}} \Big\rangle \,dx \mbox{$\,\delta_t$} t \\ \notag &\phantom{==} + \int_M \sum_{\alpha}\rho^{\alpha} \mathcal{U}\Big(\frac{\sum_{\beta}\rho^{\beta}}{N}\Big)\,dx \mbox{$\,\delta_t$} t \\ \notag &\phantom{==} + \kappa\int_M \Big\langle \sum_{\alpha}\nabla\rho^{\alpha}, \sum_{\alpha}\nabla\rho^{\alpha}/N \Big\rangle \,dx \mbox{$\,\delta_t$} t \\ \notag &\phantom{==} - \nu \int_M \sum_{\alpha}\rho^{\alpha} \textup{div}\Big(\frac{\sum_{\beta}\mu^{\beta}}{\sum_{\gamma}\rho^{\gamma}}\Big) \,dx \mbox{$\,\delta_t$} t \\ \notag &\phantom{==} + \varepsilon\int_{M}\sum_{r,\alpha}\vv<\mu^{\alpha},\xi_r>\,dx \mbox{$\,\delta_t$} W^{r,\alpha} \end{align} where $W^{r,\alpha}$ are pairwise independent Brownian motions such that $[W^{r,\alpha},W^{s,\beta}]_t=\delta_{r,s}\delta_{\alpha,\beta}\,t$, the solenoidal vector fields $\xi_r\in\mbox{$\mathcal{X}$}_0(M)$ are fixed and $\varepsilon\ge 0$ is a constant. The Hamiltonian is right-invariant by construction as it does not depend on $(\Phi^{\alpha},f^{\alpha})\in \textup{Diff}(M)\circledS\mathcal{F}(M)$. \subsection{HIPS equations of motion}\label{sec:3b} \revise{ The phase space \eqref{e:HSys} is an $N$-fold direct product of a tangent bundle identified with its dual. It therefore carries the corresponding direct product canonical symplectic form. 
Since the Hamiltonian $H^N$ does not depend on $(\Phi^{\alpha},f^{\alpha})_{\alpha=1}^N\in (\textup{Diff}(M)\circledS\mathcal{F}(M))^N$ we can pass via Lie-Poisson reduction to the phase space $( \mbox{$\mathcal{X}$}(M)\circledS\mathcal{F}(M) )^N$. The Hamiltonian IPS equations of motion follow therefore from the variational derivatives (again, in the semi-martingale notation) } \begin{align} \label{e:hips1} \frac{\delta H^N}{\delta\mu^{\alpha}} &= \frac{ \sum_{\beta}\mu^{\beta} }{ \sum_{\gamma}\rho^{\gamma} }\mbox{$\,\delta_t$} t + \nu\nabla\log\sum_{\beta}\rho^{\beta}\mbox{$\,\delta_t$} t + \varepsilon\sum_r \xi_r\mbox{$\,\delta_t$} W^{r,\alpha} \\ \notag &= u^N\mbox{$\,\delta_t$} t + \nu\nabla\log\rho^N \mbox{$\,\delta_t$} t + \varepsilon \sum_r \xi_r\mbox{$\,\delta_t$} W^{r,\alpha} \\ \label{e:hips2} \frac{\delta H^N}{\delta\rho^{\alpha}} &= \Big(- \frac{1}{2}\Big\langle \frac{\sum_{\beta} \mu^{\beta}}{{\sum_{\gamma}\rho^{\gamma}}} , \frac{ \sum_{\beta}\mu^{\beta} }{\sum_{\gamma}\rho^{\gamma}} \Big\rangle + \mathcal{U}\Big(\frac{\sum_{\beta}\rho^{\beta}}{N}\Big) + \frac{\sum_{\beta}\rho^{\beta}}{N}\mathcal{U}'\Big(\frac{\sum_{\beta}\rho^{\beta}}{N}\Big) \\ \notag &\phantom{==} - \kappa \Delta \frac{\sum_{\beta}\rho^{\beta}}{N} - \nu \frac{1}{\sum_{\beta}\rho^{\beta}}\textup{div}\Big(\sum_{\beta}\mu^{\beta}\Big) \Big)\mbox{$\,\delta_t$} t\\ \notag &= \Big(- \frac{1}{2}\vv<u^N,u^N> + \mathcal{U}(\rho^N) + \rho^N\mathcal{U}'(\rho^N) - \kappa\Delta \rho^N - \nu \frac{1}{\rho^N}\textup{div}(\mu^N) \Big)\mbox{$\,\delta_t$} t \end{align} \revise{ by using the $N$-fold product of the semi-direct product structure (\cite{MRW84}). Here } the abbreviations $\mu^N := \frac{\sum_{\beta}\mu^{\beta}}{N}$, $\rho^N := \frac{\sum_{\beta}\rho^{\beta}}{N}$ and $u^N := \frac{\mu^N}{\rho^N}$ are used. 
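The leading terms of \eqref{e:hips1} and \eqref{e:hips2} come from differentiating the kinetic energy density $\frac{1}{2}\big|\sum_{\beta}\mu^{\beta}\big|^2/\sum_{\gamma}\rho^{\gamma}$ with respect to $\mu^{\alpha}$ and $\rho^{\alpha}$. This can be checked in a pointwise, finite-dimensional analogue (one space dimension and $N=3$, both illustrative choices):

```python
import sympy as sp

N = 3
mu = sp.symbols('mu1:4')                    # pointwise momenta mu^alpha (1D)
rho = sp.symbols('rho1:4', positive=True)   # pointwise densities rho^alpha

K = sp.Rational(1, 2) * sum(mu)**2 / sum(rho)   # kinetic energy density, cf. (e:E1)
uN = sum(mu) / sum(rho)                          # barycentric velocity u^N

for a in range(N):
    # dK/dmu^alpha = u^N: the leading term of (e:hips1)
    assert sp.simplify(sp.diff(K, mu[a]) - uN) == 0
    # dK/drho^alpha = -|u^N|^2/2: the leading term of (e:hips2)
    assert sp.simplify(sp.diff(K, rho[a]) + uN**2/2) == 0
```

In particular, both derivatives depend only on the sums $\sum_{\beta}\mu^{\beta}$ and $\sum_{\gamma}\rho^{\gamma}$, in line with the remark below that the right hand sides depend only on empirical averages.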
\revise{ \begin{remark} The variational derivatives, $\delta H^N/\delta\mu^{\alpha}$ and $\delta H^N/\delta\rho^{\alpha}$, also depend on $\mu^{\beta}$ and $\rho^{\beta}$ with $\beta\neq\alpha$. Hence the right hand sides in the equations~\eqref{e:hips3}-\eqref{e:hips5} below cannot be viewed as vector fields on $\mbox{$\mathcal{X}$}(M)\circledS\mathcal{F}(M)$, but only as Cartesian projections of vector fields on the full space $( \mbox{$\mathcal{X}$}(M)\circledS\mathcal{F}(M) )^N$. \end{remark} } \begin{remark} Equations~\eqref{e:hips1} and \eqref{e:hips2} depend only on the empirical averages $\mu^N$ and $\rho^N$. \end{remark} \begin{remark} The quantity $u^N$ is the specific momentum (viewed as a vector field) of the ensemble average. But it is (for non-constant density) not equal to the empirical average of velocities, that is $ u^N \neq \frac{\sum \mu^{\alpha}/\rho^{\alpha}}{N} $. \end{remark} The stochastic Hamilton equations associated to \eqref{e:H^N} for $(\Phi^{\alpha}, \mu^{\alpha},\rho^{\alpha})$ are: \begin{align} \label{e:hips3} \mbox{$\,\delta_t$} \Phi^{\alpha}_t &= \Big(\frac{\delta H^N}{\delta\mu^{\alpha}}\Big)\circ\Phi_t^{\alpha} \\ \notag &= \Big(u^N + \nu\nabla\log\rho^N\Big)\circ\Phi_t^{\alpha} \mbox{$\,\delta_t$} t + \varepsilon\sum_r \xi_r\circ\Phi_t^{\alpha} \mbox{$\,\delta_t$} W^{r,\alpha} \\ \label{e:hips4} \mbox{$\,\delta_t$}\mu^{\alpha}_t &= - \mbox{$\text{\upshape{ad}}$}\Big( (\delta_t\Phi_t^{\alpha})\circ(\Phi_t^{\alpha})^{-1} \Big)^{\top} \mu^{\alpha}_t - \frac{\delta H^N}{\delta\rho^{\alpha}} \diamond \rho^{\alpha}_t \\ \notag &= -\nabla_{\delta H^N/\delta\mu^{\alpha}}\mu^{\alpha} - \textup{div}(\delta H^N/\delta\mu^{\alpha})\mu^{\alpha} - (\nabla^{\top}\delta H^N/\delta\mu^{\alpha})\mu^{\alpha} -\rho^{\alpha}\nabla \frac{\delta H^N}{\delta\rho^{\alpha}} \\ \notag &= \Big(-\nabla_{u^N + \nu\nabla\log\rho^N}\mu^{\alpha} - \textup{div}(u^N+\nu\nabla\log\rho^N)\mu^{\alpha} - (\nabla^{\top}u^N + 
\nu\nabla^{\top}\nabla\log\rho^N)\mu^{\alpha} \\ \notag &\phantom{==} -\rho^{\alpha} \Big( -(\nabla^{\top}u^N)u^N + (\rho^N)^{-1} \nabla\Big( (\rho^N)^2\mathcal{U}'(\rho^N) \Big) - \kappa\nabla\Delta\rho^N \\ \notag &\phantom{====} +\nu(\rho^N)^{-2}\textup{div}(\mu^N)\nabla\rho^N -\nu(\rho^N)^{-1}\nabla\textup{div}(\mu^N) \Big) \Big)\mbox{$\,\delta_t$} t\\ \notag &\phantom{==} - \varepsilon\sum_r \mbox{$\text{\upshape{ad}}$}\Big(\xi_r\Big)^{\top}\mu^{\alpha}\mbox{$\,\delta_t$} W^{r,\alpha} \\ \label{e:hips5} \mbox{$\,\delta_t$}\rho^{\alpha}_t &= - L_{(\delta_t\Phi_t^{\alpha})\circ(\Phi_t^{\alpha})^{-1} }\rho^{\alpha}_t \\ \notag &= -\textup{div}\Big( \rho^{\alpha} u^N + \nu\rho^{\alpha}\nabla\log\rho^N \Big)\mbox{$\,\delta_t$} t - \varepsilon\sum_r \textup{div}\Big(\rho^{\alpha} \xi_r\Big) \mbox{$\,\delta_t$} W^{r,\alpha} \end{align} Here $\rho^{\alpha}$ is viewed as a density whence $L$ is the Lie derivative of a density, not of a function. The momentum variable is identified, via the Euclidean metric, as an element \begin{equation} \mu^{\alpha}\in\mbox{$\mathcal{X}$}(M) \end{equation} whence the transpose Lie derivative $L^{\top}$ is used instead of $L^*$. The diamond notation in \eqref{e:hips4} is defined by $f\diamond\rho = \rho\nabla f$ and this term arises because of the semi-direct product structure. \subsection{Mean field limit}\label{sec:3c} Assume the mean field limit of \eqref{e:hips3}-\eqref{e:hips5} exists for $N\to\infty$. Since all subvolumes $\Delta V^{\alpha}$ and their enclosed fluid elements are identical, it suffices to consider $\alpha=1$, \begin{equation} \label{e:mflimit} (\Phi_t^1,f_t^1,\mu_t^1,\rho_t^1)\longrightarrow (\Phi_t,f_t,\mu_t,\rho_t) \end{equation} as $N\to\infty$. 
Hence \begin{equation} \mu^N \longrightarrow E[\mu] =: \bar{\mu} \quad\textup{and}\quad \rho^N\longrightarrow E[\rho] =: \bar{\rho} \end{equation} and \begin{equation} \label{e:mf1} u^N = \frac{ \mu^N }{ \rho^N } \longrightarrow \bar{\mu}/\bar{\rho} =: u \end{equation} as $N\to\infty$. Note that $\mu$ and $\rho$ are stochastic processes while $u$ is deterministic. The mean field limit equations of motion for $\mu$ and $\rho$ are \begin{align} \label{e:Hmf1} \mbox{$\,\delta_t$}\mu &= \Big( -\nabla_{u+\nu\nabla\log\bar{\rho}}\mu - \textup{div}(u+\nu\nabla\log\bar{\rho})\mu - (\nabla^{\top}u)\mu - \nu(\nabla^{\top}\nabla\log\bar{\rho})\mu \\ \notag &\phantom{==} + \rho (\nabla^{\top}u) u - \rho \bar{\rho}^{-1} \nabla\Big( \bar{\rho}^2\mathcal{U}'(\bar{\rho}) \Big) + \kappa\rho\nabla\Delta\bar{\rho} \\ \notag &\phantom{==} - \nu\rho \bar{\rho}^{-2}\textup{div}(\bar{\mu})\nabla\bar{\rho} + \nu\rho \bar{\rho}^{-1}\nabla\textup{div}(\bar{\mu}) \Big) \mbox{$\,\delta_t$} t \\ \notag &\phantom{==} - \varepsilon\sum_r \mbox{$\text{\upshape{ad}}$}(\xi_r)^{\top}\mu\mbox{$\,\delta_t$} W^r \\ \label{e:Hmf2} \mbox{$\,\delta_t$}\rho_t &= -\textup{div}(\rho u + \nu\rho\nabla\log\bar{\rho})\mbox{$\,\delta_t$} t - \varepsilon\sum_r \textup{div}(\rho \xi_r) \mbox{$\,\delta_t$} W^r \end{align} These equations are linear in $\mu$ and $\rho$, and depend otherwise on the mean fields $E[\mu] = \bar{\mu}$ and $E[\rho] = \bar{\rho}$. Let $p := \bar{\rho}^2\mathcal{U}'(\bar{\rho})$. 
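For a concrete barotropic example, take the polytropic internal energy $\mathcal{U}(\rho) = \frac{A}{\gamma-1}\rho^{\gamma-1}$ (an assumed equation of state, chosen here purely for illustration). Then the definition $p = \bar{\rho}^2\,\mathcal{U}'(\bar{\rho})$ recovers the familiar pressure law $p = A\bar{\rho}^{\gamma}$, as a quick symbolic check confirms:

```python
import sympy as sp

rho, A, gamma = sp.symbols('rho A gamma', positive=True)

U = A/(gamma - 1) * rho**(gamma - 1)   # assumed polytropic internal energy
p = rho**2 * sp.diff(U, rho)           # the definition p := rho^2 * U'(rho)

assert sp.simplify(p - A*rho**gamma) == 0   # polytropic pressure law
```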
Using that $\bar{\mu} = \bar{\rho}u$, the equations for the expectations $\bar{\mu}$, $\bar{\rho}$ are therefore \begin{align} \label{e:exp1} \dot{\bar{\mu}} &= -\nabla_{u+\nu\nabla\log\bar{\rho}}\bar{\mu} - \textup{div}(u+\nu\nabla\log\bar{\rho})\bar{\mu} - \nu(\nabla^{\top}\nabla\log\bar{\rho})\bar{\mu} \\ \notag &\phantom{==} - \nabla p + \kappa\bar{\rho}\nabla\Delta\bar{\rho} - \nu\bar{\rho}^{-1}\textup{div}(\bar{\mu})\nabla\bar{\rho} + \nu\nabla\textup{div}(\bar{\mu}) + \frac{\varepsilon^2}{2}\sum_r L^{\top}_{\xi_r} L^{\top}_{\xi_r}\bar{\mu} \\ \label{e:exp2} \dot{\bar{\rho}} &= -\textup{div}\Big(\bar{\rho} u + \nu\nabla\bar{\rho}\Big) + \frac{\varepsilon^2}{2}\sum_r \textup{div}\Big( \textup{div}(\bar{\rho} \xi_r) \xi_r \Big) \end{align} \subsection{Barotropic Navier-Stokes equation}\label{sec:3d} Assume that the perturbation vector fields $\xi_r$ are given by $\zeta_r$, defined in Section~\ref{sec:BMgu} and that $\varepsilon^2 c^s/2 = \nu$. Then \eqref{e:Delta} implies \begin{equation} \frac{\varepsilon^2}{2}\sum_r L^{\top}_{\zeta_r} L^{\top}_{\zeta_r}\bar{\mu} = \nu\Delta\bar{\mu} \;\textup{ and }\; \frac{\varepsilon^2}{2}\sum_r \textup{div}\Big( \textup{div}(\bar{\rho} \zeta_r) \zeta_r \Big) = \nu \Delta\bar{\rho}. \end{equation} (The explicit calculation is carried out in \cite[Lemma~4.3]{H18}.) 
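For $n=2$ the single-mode mechanism behind \eqref{e:nablaXX} and \eqref{e:Delta} can be verified symbolically: each pair $A_{(k,i)}$, $B_{(k,i)}$ is divergence free, satisfies $\nabla_{\zeta}\zeta=0$, and together they contribute the directional second derivative $|k|^{-2(s+1)}(k^{\perp}\!\cdot\!\nabla)^2X$, whose sum over modes produces the Laplacian. A sympy sketch with an illustrative wavevector and Sobolev index:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
s = 2                                    # illustrative Sobolev index
k = sp.Matrix([1, 2])                    # sample wavevector in Z^2_+
kperp = sp.Matrix([-2, 1])               # <k, kperp> = 0 and |kperp| = |k|

c = (k.dot(k))**(sp.Rational(-(s + 1), 2))   # 1/|k|^(s+1)
phase = k[0]*x1 + k[1]*x2
A = c*sp.cos(phase)*kperp                # A_(k,i)
B = c*sp.sin(phase)*kperp                # B_(k,i)

def nabla(V, W):   # (V . grad) W componentwise
    return sp.Matrix([V[0]*sp.diff(W[i], x1) + V[1]*sp.diff(W[i], x2)
                      for i in range(2)])

X = sp.Matrix([sp.sin(x1)*sp.cos(x2), sp.cos(x1 + x2)])   # arbitrary test field

assert sp.simplify(sp.diff(A[0], x1) + sp.diff(A[1], x2)) == 0   # div A = 0
assert all(sp.simplify(e) == 0 for e in nabla(A, A))             # eq. (e:nablaXX)

lhs = nabla(A, nabla(A, X)) + nabla(B, nabla(B, X))
rhs = (k.dot(k))**(-(s + 1)) * nabla(kperp, nabla(kperp, X))
assert all(sp.simplify(e) == 0 for e in (lhs - rhs))             # one mode of (e:Delta)
```

The cancellation uses only $\vv<k,k^{\bot}>=0$, which kills the derivative of the trigonometric prefactor along $k^{\bot}$.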
Hence equations~\eqref{e:exp1} and \eqref{e:exp2} become \begin{align} \label{e:exp11} \dot{\bar{\mu}} &= -\nabla_{u+\nu\nabla\log\bar{\rho}}\bar{\mu} - \textup{div}(u+\nu\nabla\log\bar{\rho})\bar{\mu} - \nu(\nabla^{\top}\nabla\log\bar{\rho})\bar{\mu}\\ \notag &\phantom{==} - \nabla p + \kappa\bar{\rho}\nabla\Delta\bar{\rho} - \nu\bar{\rho}^{-1}\textup{div}(\bar{\mu})\nabla\bar{\rho} + \nu\nabla\textup{div}(\bar{\mu}) + \nu\Delta\bar{\mu} \\ \label{e:exp22} \dot{\bar{\rho}} &= -\textup{div}(\bar{\rho} u) \end{align} Therefore, \begin{align*} \dot{\bar{\mu}} &= \dd{t}{}(\bar{\rho}u) = -\textup{div}(\bar{\rho}u)u + \bar{\rho}\dot{u}\\ &= -\bar{\rho}\nabla_{u+\nu\nabla\log\bar{\rho}}u - \textup{div}(\bar{\rho}u)u - \nu\textup{div}(\bar{\rho}\nabla\log\bar{\rho})u \\ &\phantom{==} - \nu(\nabla^{\top}\nabla\log\bar{\rho})\bar{\rho}u - \nabla p + \kappa\bar{\rho}\nabla\Delta\bar{\rho} - \nu\bar{\rho}^{-1}\vv<\nabla\bar{\rho},u>\nabla\bar{\rho} - \nu\textup{div}(u)\nabla\bar{\rho} \\ &\phantom{==} + \nu\nabla\Big(\vv<\nabla\bar{\rho},u> + \bar{\rho}\,\textup{div}(u)\Big) + \nu\Delta(\bar{\rho}u) \\ &= -\bar{\rho}\nabla_{u+\nu\nabla\log\bar{\rho}}u - \textup{div}(\bar{\rho}u)u - \nu(\Delta\bar{\rho})u \\ &\phantom{==} - \nu\nabla_u\nabla\bar{\rho} + \nu\bar{\rho}^{-1}\vv<u,\nabla\bar{\rho}>\nabla\bar{\rho} - \nabla p + \kappa\bar{\rho}\nabla\Delta\bar{\rho}\\ &\phantom{==} - \nu\bar{\rho}^{-1}\vv<\nabla\bar{\rho},u>\nabla\bar{\rho} - \nu\textup{div}(u)\nabla\bar{\rho} + \nu(\nabla^{\top}\nabla\bar{\rho}) u + \nu (\nabla^{\top}u) \nabla\bar{\rho} \\ &\phantom{==} + \nu \textup{div}(u)\nabla\bar{\rho} + \nu \bar{\rho}\nabla\textup{div}(u) + \nu(\Delta\bar{\rho})u\\ &\phantom{==} + 2\nu\nabla_{\nabla\bar{\rho}}u + \nu\bar{\rho}\Delta u \\ &= - \bar{\rho}\nabla_{u}u - \textup{div}(\bar{\rho}u)u - \nabla p + \kappa\bar{\rho}\nabla\Delta\bar{\rho} + \nu (\nabla^{\top}u)\nabla\bar{\rho} + \nu \bar{\rho}\nabla\textup{div}(u) + \nu \nabla_{ \nabla\bar{\rho} }u + \nu\bar{\rho}\Delta u \end{align*} 
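The computation above repeatedly uses the flat-space product rule $\Delta(\bar{\rho}u) = (\Delta\bar{\rho})u + 2\nabla_{\nabla\bar{\rho}}u + \bar{\rho}\Delta u$. A two-dimensional symbolic check with generic functions:

```python
import sympy as sp

x, y = sp.symbols('x y')
rho = sp.Function('rho')(x, y)
u = sp.Matrix([sp.Function('u1')(x, y), sp.Function('u2')(x, y)])

lap  = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)   # scalar Laplacian
grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# left hand side: componentwise Laplacian of rho*u
lhs = sp.Matrix([lap(rho*u[i]) for i in range(2)])

# right hand side: (Delta rho) u + 2 nabla_{grad rho} u + rho Delta u
g = grad(rho)
nabla_g_u = sp.Matrix([g[0]*sp.diff(u[i], x) + g[1]*sp.diff(u[i], y)
                       for i in range(2)])
rhs = lap(rho)*u + 2*nabla_g_u + rho*sp.Matrix([lap(u[i]) for i in range(2)])

assert all(sp.expand(e) == 0 for e in (lhs - rhs))
```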
Define the stress tensor, $S$, by \begin{equation} \label{e:stress} S_{ij} = \nu\bar{\rho}\Big(\del_i u^j + \del_j u^i\Big) \end{equation} and the corresponding force \begin{align} \textup{div}\,S = \sum \del_i S_{ij} e_j = \nu (\nabla^{\top}u)\nabla\bar{\rho} + \nu \bar{\rho}\nabla\textup{div}(u) + \nu \nabla_{ \nabla\bar{\rho} }u + \nu\bar{\rho}\Delta u. \end{align} The capillary tensor, $C$, is defined (see \cite[Equ.~(4)]{AMW97}) by \begin{equation} \label{e:Cap} C = \kappa\Big( \Big( \bar{\rho}\Delta\bar{\rho} +\by{1}{2}\vv<\nabla\bar{\rho},\nabla\bar{\rho}> \Big)\mathbb{I} - \nabla\bar{\rho}\otimes\nabla\bar{\rho} \Big) \end{equation} and satisfies $\textup{div}\,C = \kappa\bar{\rho}\nabla\Delta\bar{\rho}$. Note that $\nu\ge0$ and $\kappa\ge0$ are constants, and that, by construction, $\bar{\mu}$ is a time dependent vector field and $\bar{\rho}$ a time dependent function. It follows that: \begin{theorem}\label{thm:cNS} The mean field equations \eqref{e:Hmf1} and \eqref{e:Hmf2} imply, if $\xi_r=\zeta_r$ and $c^s\varepsilon^2/2=\nu$, that the expectations $\bar{\mu}=E[\mu]$ and $\bar{\rho}=E[\rho]$ satisfy the compressible Navier-Stokes equations \begin{align} \label{e:bNS1} \dot{u} &= - \nabla_{u}u - \bar{\rho}^{-1}\nabla p + \bar{\rho}^{-1} \Big(\textup{div}\,S + \textup{div}\,C\Big) \\ \label{e:bNS2} \dot{\bar{\rho}} &= -\textup{div}(\bar{\rho}u) \end{align} where $u=\bar{\mu}/\bar{\rho}$. \end{theorem} \section{Stochastic Kelvin Circulation Theorem}\label{sec:KCT} \revise{ In \cite{DH19} it is shown that stochastic Euler-Poincar\'e fluid equations are characterized by preserving circulation along Lagrangian paths. 
} Since \eqref{e:Hmf1}-\eqref{e:Hmf2} are obtained as a mean field limit of a Hamiltonian IPS, \revise{ and can be viewed as a mean field generalization of the stochastic fluid system in \cite{DH19}, } there should be a Kelvin Circulation Theorem: \begin{proposition}\label{prop:KCT} Let $C$ be a smooth closed loop which is transported by the Lagrangian flow $\Phi_t$, defined through the mean field limit \eqref{e:mflimit} and characterized by \begin{equation} \mbox{$\,\delta_t$}\Phi_t\circ\Phi_t^{-1} = \Big(u_t+\nu\nabla\log\bar{\rho}\Big)\mbox{$\,\delta_t$} t + \varepsilon\sum \xi_r\mbox{$\,\delta_t$} W^r_t. \end{equation} Let $\mu_t$ and $\rho_t$ be solutions of \eqref{e:Hmf1} and \eqref{e:Hmf2}. Then \begin{equation}\label{e:KCT} \mbox{$\,\delta_t$}\int_{(\Phi_t)_*C}\rho_t^{-1}\mu_t^{\flat} = 0 \end{equation} where $\flat$ is the Euclidean isomorphism to one-forms (since $\mu$ is treated as a vector field). \end{proposition} \begin{proof} Equations~\eqref{e:hips3} and \eqref{e:mflimit} yield \begin{equation} \label{e:Phi} \mbox{$\,\delta_t$} \Phi = \Big(u+\nu\nabla\log\bar{\rho}\Big)\circ\Phi\mbox{$\,\delta_t$} t + \varepsilon\sum\xi_r\circ\Phi \mbox{$\,\delta_t$} W^r \end{equation} Now, \eqref{e:Hmf1} and \eqref{e:Hmf2} imply that $X_t := \rho_t^{-1}\mu_t$ satisfies \begin{align} \mbox{$\,\delta_t$} X &= -\rho^{-2}(\mbox{$\,\delta_t$}\rho)\mu + \rho^{-1}\mbox{$\,\delta_t$}\mu \label{e:Xvel} \\ &= \Big( - \nabla_{u+\nu\nabla\log\bar{\rho}}X - (\nabla^{\top} u)X - \nu(\nabla^{\top}\nabla\log\bar{\rho})X - \nabla\tilde{p} \Big)\mbox{$\,\delta_t$} t \notag \\ &\phantom{==} - \varepsilon\sum \Big( \nabla_{\xi_r} X + (\nabla^{\top} \xi_r)X \Big)\mbox{$\,\delta_t$} W^r \notag \end{align} with \begin{equation} \tilde{p} := -\frac{1}{2}\vv<u,u> +\mathcal{U}(\bar{\rho}) + \bar{\rho}\mathcal{U}'(\bar{\rho}) - \kappa\Delta\bar{\rho} - \nu\bar{\rho}^{-1}\textup{div}(\bar{\mu}). 
\end{equation} Hence, with parameterization $C = c([0,1])$: \begin{align*} \mbox{$\,\delta_t$} \int_{(\Phi_t)_*C} X_t^{\flat} &= \mbox{$\,\delta_t$}\int_C \Phi_t^* X_t^{\flat} = \mbox{$\,\delta_t$}\int_C \Big( X_t^{\flat} \circ \Phi_t\Big). T\Phi_t \\ &= \int_0^1\mbox{$\,\delta_t$}\vv< X_t\circ\Phi_t, T\Phi_t.c'(s)>\,ds \\ &= \int_0^1\Big( \vv<(\mbox{$\,\delta_t$} X_t)\circ\Phi_t + TX_t.\mbox{$\,\delta_t$}\Phi_t, T\Phi_t.c'(s)> \\ &\phantom{===} + \vv<X_t\circ\Phi_t, T(u_t\mbox{$\,\delta_t$} t + \nu\nabla\log\bar{\rho} \mbox{$\,\delta_t$} t + \varepsilon\sum\xi_r\mbox{$\,\delta_t$} W^r_t ).T\Phi_t.c'(s)> \Big)\,ds\\ &= \int_0^1\Big( \Big\langle(\mbox{$\,\delta_t$} X_t)\circ\Phi_t + (\nabla_{u_t\,\delta_t t + \nu\nabla\log\bar{\rho} \,\delta_t t + \varepsilon\sum\xi_r\,\delta_t W^r_t } X_t) \circ\Phi_t\\ &\phantom{===} + ((\nabla^{\top}u_t\mbox{$\,\delta_t$} t + \nu\nabla^{\top}\nabla\log\bar{\rho} \mbox{$\,\delta_t$} t + \varepsilon\sum\nabla^{\top}\xi_r\mbox{$\,\delta_t$} W^r_t )X_t)\circ\Phi_t , T\Phi_t.c'(s)\Big\rangle \Big)\,ds\\ &= \int_{(\Phi_t)_*C} \Big( \mbox{$\,\delta_t$} X_t + \nabla_{u_t\,\delta_t t + \nu\nabla\log\bar{\rho} \,\delta_t t + \varepsilon\sum\xi_r\,\delta_t W^r_t } X_t \\ &\phantom{===} + (\nabla^{\top}u_t\,\delta_t t + \nu\nabla^{\top}\nabla\log\bar{\rho} \mbox{$\,\delta_t$} t + \varepsilon\sum\nabla^{\top}\xi_r\,\delta_t W^r_t ) X_t \Big)^{\flat} \\ &= 0 \end{align*} since, by \eqref{e:Xvel}, the integrand equals $-(\nabla\tilde{p})^{\flat}\mbox{$\,\delta_t$} t = -d\tilde{p}\mbox{$\,\delta_t$} t$. \end{proof} \section{Conclusions}\label{sec:concl} \subsection{HIPS approach to the compressible Navier-Stokes equation} The mean field system \eqref{e:Hmf1}-\eqref{e:Hmf2} is derived from the interacting particle point of view under the basic assumption that the equations of motion follow from stochastic Hamiltonian mechanics. Therefore, circulation is preserved along stochastic Lagrangian paths. 
If the perturbation fields $\xi_r$ run over the orthogonal system $\zeta_r$, defined in Section~\ref{sec:BMgu}, such that the stochastic perturbation is given by a cylindrical Wiener process, then the mean fields $E[\mu]$ and $E[\rho]$ solve the compressible Navier-Stokes equations. (Theorem~\ref{thm:cNS} and Proposition~\ref{prop:KCT}.) While the HIPS formulation relies on a system of $N$ interacting SDEs, the mean field equations \eqref{e:Hmf1}-\eqref{e:Hmf2} constitute a single SDE system for momentum and mass density. In contrast to statistical mechanics, this mean field formulation is obtained without any closure assumptions. (However, in this paper the existence of the mean field limit is not proved but assumed.) In the mean field limit \eqref{e:mflimit}, the HIPS evolution equation \eqref{e:hips3} becomes \begin{equation} \label{e:mf_transp} (\mbox{$\,\delta_t$} \Phi_t)\circ\Phi_t^{-1} = \Big( E[\mu_t]/E[\rho_t]+\nu\nabla\log E[\rho_t] \Big)\mbox{$\,\delta_t$} t + \varepsilon \sum \xi_r\mbox{$\,\delta_t$} W_t^r \end{equation} which is similar in form to the LA SALT advection field \begin{equation} \label{e:la_salt} E[u_t^L]\mbox{$\,\delta_t$} t + \sum \xi_r\mbox{$\,\delta_t$} W_t^r \end{equation} of \cite{DHL20,ABHT20}, where $u_t^L$ is a stochastic velocity field. However, there are a few crucial differences: while \eqref{e:la_salt} is the starting point for LA SALT theory, the HIPS formulation is based on the ensemble Hamiltonian \eqref{e:H^N}, and the Lie transport along \eqref{e:mf_transp} in the mean field equations \eqref{e:Hmf1}-\eqref{e:Hmf2} is a consequence of the Hamiltonian structure of the IPS (and the ensuing passage to the mean field limit). Moreover, unless the density is constant, it is not clear how to identify the drift in \eqref{e:mf_transp} with the expectation of a velocity. Thus both the starting points and the advection fields are different. However, the perturbation fields $\xi_r$ can be interpreted in the same manner. 
\subsection{NatCat modeling of windstorm events} \revise{NatCat models used to calculate the solvency capital requirement (SCR as defined in \cite{Level1}) for storm risks rely on NWP systems. These NWP systems are deterministic and, to arrive at a set of `stochastic' NatCat scenarios, the initial conditions are statistically sampled. As discussed in the Introduction, all such numerical schemes suffer from subgrid phenomena, and for geophysical flow models a well-established means for treating these deficiencies is stochastic fluid mechanics (\cite{ABHT20,Betal17,Holm15,Mem14,RMC17}). Since the SCR calculation is concerned with predicting extreme events in the $99.5$ percentile, and not only average storm patterns, it seems reasonable to expect that NatCat models would also benefit from a stochastic dynamics approach. However, it is not claimed that the stochastic HIPS formulation of this paper is appropriate to generate stochastic NatCat storm scenarios. }
\section{Introduction}\label{sec:i} Challenging problems are what drive AI research and push the field and its applications forward. A prime example of that is the ImageNet dataset, together with the corresponding ILSVRC challenge \citep{russakovsky2015imagenet}. The popularization of this competition revitalized the Neural Networks field, particularly in the context of image processing. The outstanding performance of deep neural network models in the demanding ILSVRC challenge caught the attention of AI researchers and practitioners around the world, who quickly acknowledged the potential behind the combination of deep nets and large sets of data. As a result, the popularity of the field exploded. The ImageNet dataset provided an appealing challenge to lure AI researchers, who in turn were able to develop and test new ideas on it. Some of these ideas became powerful principles for the current deep learning (DL) field, such as Inception blocks \citep{szegedy2015going}, residual connections \citep{he2016deep}, dropout regularization \citep{srivastava2014dropout}, ReLU activations \citep{nair2010rectified} and weight initializations \citep{glorot2010understanding, he2015delving}, among others. This amounts to a remarkable set of achievements in a very short time span, and speaks to the contribution of ImageNet to the AI field. That being said, the relevance of the ImageNet image classification challenge today has mostly vanished. The last edition of ILSVRC took place in 2017 \citep{Ilsvrc17last}, and the AI community considers it a solved problem with little margin for improvement (by 2019, 98.2\% top-5 accuracy \citep{xie2019selftraining} had been achieved, while human top-5 classification accuracy is thought to be between 88\% and 95\%~\citep{russakovsky2015imagenet}). The ImageNet challenge is defined around two main types of instances: man-made objects and living things.
These classes are characterized by large distinctive features which require little attention to detail for their recognition. State-of-the-art performance can be achieved on this kind of task even after applying heavy deformation to the image (\ie uniform reshape) and losing most visual details (\eg downsampling to 300x300) \citep{xie2019selftraining}. At the same time, samples of the same class have little intra-class variance, while being affected by large contextual changes (background, scale, perspective, illumination, \etc). To contribute in a direction which has not yet been properly addressed by the AI community, in this paper we present a visual challenge which is different in all these aspects. It is based on museum art mediums (MAMe), where attention to detail is essential, where there is huge intra-class variance, and where contextual information is not a factor. The properties of ImageNet and ImageNet-like datasets have popularized the practice of interpolating images. This approach makes it possible to reduce the memory requirements of models, avoiding high resolution (HR) images, and removing the hindrances of variable-shaped (VS) inputs. The first CNN models tackling the ImageNet challenge interpolated images to a fixed size of 224x224 pixels~\citep{simonyan2014very, szegedy2015going}. More recent solutions increased that size to 229x229~\citep{szegedy2016rethinking}, 331x331~\citep{zoph2018learning}, 480x480~\citep{huang2019gpipe} or even 600x600~\citep{he2017mask, lin2017feature} pixels, as scaling the image resolution is known to result in better performance in some cases~\citep{tan2019efficientnet,ghosh2019reshaping}. Even so, the nature of ImageNet-like problems minimized these inconveniences, resulting in competitive performance even when using relatively small input sizes~\citep{xie2019selftraining}. Given the prominence of ImageNet, this particularity has biased research.
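To make the cost of uniform reshaping concrete, the following sketch (an illustration of ours, not part of any of the cited pipelines) contrasts a fixed 224x224 reshape with an aspect-ratio-preserving resize to the same pixel budget, quantifying the induced deformation:

```python
import math

def square_resize(w, h, side):
    """Uniform reshape to side x side, as in typical ImageNet pipelines."""
    return side, side

def budget_resize(w, h, budget):
    """Resize to roughly `budget` pixels while preserving the aspect ratio."""
    scale = math.sqrt(budget / (w * h))
    return max(1, round(w * scale)), max(1, round(h * scale))

def deformation(w, h, new_w, new_h):
    """Ratio of output to input aspect ratios (1.0 means no deformation)."""
    return (new_w / new_h) / (w / h)

# A 3000x1500 landscape image squeezed into a 224x224 square is deformed by
# a factor of 0.5, while the budget resize keeps its 2:1 aspect ratio.
print(square_resize(3000, 1500, 224))        # (224, 224)
print(deformation(3000, 1500, 224, 224))     # 0.5
print(budget_resize(3000, 1500, 224 * 224))  # (317, 158)
```

The budget-preserving variant avoids pattern deformation entirely, but at the cost of producing variable output shapes, which is precisely the trade-off explored later in the paper.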
Indeed, beyond these ImageNet-like tasks, there are many current and future visual challenges where exploitation of the HR and VS properties is likely to be relevant for improving performance. Visual challenges in the medical domain are often based on the identification of small-scale visual patterns, requiring both attention to detail and an understanding of large structures. In domains like breast cancer detection, the benefit of exploiting the highest possible image resolution has already been highlighted \citep{geras2017high, lotter2017multi}, motivating the use of HR images. Similarly, image recognition systems used for autonomous driving also benefit from using HR images, as this enables detection at greater distances, which has enormous safety implications. Current solutions already use images that are larger than 0.25 MP \citep{chen2017multi, treml2016speeding}. The motivation for research on VS images derives from the increasing popularity of crowd-sourced datasets, such as Open Images~\citep{OpenImages}. These datasets combine data produced from multiple sources, which saves time and effort, at the expense of obtaining data in different resolutions and shapes (\eg landscape or portrait). In this context, standard training procedures using squared images are forced to interpolate them, hence deforming their image patterns. These image deformations introduce noise into the data, potentially decreasing performance. The main contribution of this paper is the MAMe dataset itself, which is made available to the research community (\S\ref{sec:mame_dataset}). Beyond extensive statistics and expert insights, this work also provides several baselines based on popular architectures: VGG~\citep{simonyan2014very}, ResNet~\citep{he2016deep}, DenseNet~\citep{huang2017densely} and EfficientNet~\citep{tan2019efficientnet} (\S\ref{sec:baselines}).
Further experiments (\S\ref{sec:exp_res}) are performed to assess the impact on accuracy of using high resolution, variable shape or both properties in conjunction. One final experiment (\S\ref{sec:exp_res}) highlights whether the performance gain comes from increasing the amount of image information or from increasing the model's internal representation (as a consequence of increasing the input size). This last experiment provides markedly different results when using the MAMe dataset in contrast to ImageNet \citep{sandler2019nondiscriminative}, highlighting the particularity of the MAMe dataset. Finally, we provide a qualitative analysis of the MAMe dataset through a set of expert analyses and explainability experiments (\S\ref{sec:exp}). \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{datasets_ps_ar.png} \caption{Product size and aspect ratio distribution over several datasets, both on log scale. The dashed horizontal blue line separates a sample of current image classification datasets from the proposed MAMe dataset. The vertical red line at aspect ratio 1.0 shows the border between portrait (left side) and landscape (right side) images.} \label{fig:ps_ar_distr} \end{figure*} \section{Related work}\label{sec:rw} There are many visual challenge datasets in the current literature. There are, however, very few containing images larger than 500x500 pixels and with a significant variance in their aspect ratio. To illustrate that point, we analyze a sample of popular datasets which satisfy three conditions we consider essential for attracting and generating high quality research: \begin{itemize} \item The dataset is publicly available. \item The dataset labels are reliable. \item The dataset has at least 100 instances per class. \end{itemize} The first requires data to be as public as possible, to reach the largest possible number of researchers.
The second one excludes all datasets that contain labels not validated by humans or that have been crowd-labeled, as these may contain a significant amount of noise (and noise reduces the reliability of experimental results). The third enforces a minimum number of instances, which we consider a necessity for thorough research experimentation. We were nonetheless flexible in this regard, as some of the datasets analyzed contain some classes with fewer than 100 instances. The sample analyzed contains the following 12 datasets: ImageNet 2012 \citep{russakovsky2015imagenet}, Food101 \citep{bossard2014food}, IP102 \citep{wu2019ip102}, Places365 \citep{zhou2017places}, Mit67 \citep{quattoni2009recognizing}, Flower102 \citep{nilsback2008automated}, CatsDogs \citep{parkhi2012cats}, StanfordDogs \citep{khosla2011novel}, Textures \citep{cimpoi2014describing}, Caltech256 \citep{griffin2007caltech}, Microsoft COCO \citep{lin2014microsoft} and Pascal VOC 2012 \citep{pascal-voc-2012}. For each one we compute the product size (\ie width multiplied by height) and aspect ratio (\ie width divided by height) distributions. For the three datasets with more than 100,000 total samples (ImageNet 2012, Places365 and Microsoft COCO) we use a random sample of 100,000 images. Distributions for all 12 datasets can be seen in Figure \ref{fig:ps_ar_distr}. In terms of number of pixels (left plot), current image classification datasets rarely contain images with more than 1 megapixel (MP). For reference purposes, none of the 12 datasets contains images bigger than 1,000 x 1,000 pixels, assuming unitary aspect ratio. This already indicates a significant bias in current research, and a mismatch with current technology, as popular image-taking resolutions are well above that size. Obviously, there are datasets with images larger than 1 MP, but these are typically either private, unreliably labeled \citep{OpenImages}, or have very few instances per class \citep{GoogleLandmarks}.
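The two statistics used in this comparison are straightforward to compute from image dimensions; the sketch below (with made-up dimensions, not actual dataset values) shows how product size, aspect ratio, their box-plot summaries and the portrait share can be derived:

```python
import statistics

def product_size_mp(width, height):
    """Product size (width times height) expressed in megapixels."""
    return width * height / 1e6

def aspect_ratio(width, height):
    """Width divided by height: < 1.0 is portrait, > 1.0 is landscape."""
    return width / height

# Hypothetical (width, height) sample; the real distributions are computed
# over up to 100,000 images per dataset.
dims = [(500, 375), (640, 480), (1024, 768), (375, 500), (800, 800)]

sizes = sorted(product_size_mp(w, h) for w, h in dims)
ratios = sorted(aspect_ratio(w, h) for w, h in dims)
q1, median, q3 = statistics.quantiles(ratios, n=4)  # box-plot summary
portrait_share = sum(r < 1.0 for r in ratios) / len(ratios)
```

With these per-image values, the log-scale box-plots of Figure \ref{fig:ps_ar_distr} reduce to quartile summaries of `sizes` and `ratios` per dataset.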
In this context, as shown at the bottom of Figure \ref{fig:ps_ar_distr}, the MAMe dataset stands out, containing a large volume of reliably labeled HR images. In fact, all images in the Q1-Q3 interval of the MAMe dataset are bigger than the largest image found in all analyzed datasets. The mean image size of the MAMe dataset is 6.6MP (\eg 2350x2350 in a squared image), one order of magnitude larger than all images contained in the analyzed datasets. Regarding aspect ratio, the right plot of Figure \ref{fig:ps_ar_distr} shows how the majority of images found in current datasets are landscape. All datasets have their median in the landscape side, only half of the datasets contain Q1 within the portrait side, and only 3 contain a significant amount of portrait images (Food101, CatsDogs and Caltech256). However, even these have their aspect ratio distribution clearly skewed towards landscape images (notice that the median is quite close to the third quartile in all three cases). In contrast, the proposed MAMe dataset has a balanced distribution, containing approximately the same number of portrait and landscape images. Its aspect ratio distribution is also much wider than that of the other datasets, showing how the MAMe dataset contains unusually wide and tall images. \section{The MAMe dataset}\label{sec:mame_dataset} In this work we propose the Museum Artworks Medium dataset, abbreviated as the MAMe dataset. MAMe is an image classification dataset focused on the recognition of mediums in artworks and heritage held by museums (\eg \textit{Oil on canvas}, \textit{Bronze} or \textit{Woodcut}). Medium is a broad technical term used to describe several aspects of artworks~\citep{maynor1989paper}. On the one hand, it can be used to describe the main physical components used for the creation of an artwork, such as \textit{Oil on canvas}. On the other hand, medium can also refer to the technique used to produce the artwork.
\textit{Engraving}, for example, is the printed result of engraving a metal plate. Both of these interpretations of medium are freely used by museums to organize their collections. As detailed in Table~\ref{tab:medium_museums}, the classes considered in the MAMe dataset comprise a wide variety of mediums according to both interpretations of the term. These can range from simple material aspects (\eg \textit{Bronze}, \textit{Silver} or \textit{Gold}) to complex, high-level techniques (\eg \textit{Faience}, \textit{Woodblock} or \textit{Woven fabric}). The variety of relevant features in MAMe requires attention both to detail and to the overall image structure. Meanwhile, the essence of art causes widely different artworks to share the same label. The degree of intra-class variance of MAMe is exemplified in Figure \ref{fig:intra-class_variance}. \begin{figure}[tb] \centering \begin{tabular}{ccc} \includegraphics[width=0.2\textwidth]{188762-131765.jpg} & \includegraphics[width=0.2\textwidth]{1926_552.jpeg} & \includegraphics[width=0.2\textwidth]{684577.jpg} \\ \includegraphics[height=0.19\textwidth]{226001-108131.jpg} & \includegraphics[width=0.2\textwidth]{1983_66.jpeg} & \includegraphics[height=0.19\textwidth]{244676-50245.jpg}\\ \end{tabular} \caption{Example of intra-class variance. Images in the same column belong to the same medium class, but share few visual features. The first column belongs to \textit{Ceramic}, the second column to \textit{Bronze} and the third column to \textit{Faience}.} \label{fig:intra-class_variance} \end{figure} \subsection{Data acquisition} \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{mediums_ps_ar.png} \caption{Product size and aspect ratio distribution over all classes of the MAMe dataset. Distributions are represented as box-plots, both on log scale.
The vertical red line at aspect ratio 1.0 shows the border between portrait (left side) and landscape (right side) images.} \label{fig:class_ps_ar_distr} \end{figure} In the past few years, museums around the world have been endorsing the policy of publicly releasing images of their heritage. Some of these museums release HR images under a CC0 license, allowing a free and unrestricted use of the data. We base our work on the data released by three museums. These were chosen because all three endorse the CC0 license, include a large number of images, provide accessible labels for them, and make it feasible to access their data in an automated manner: \begin{itemize} \item The Metropolitan Museum of Art of New York (from now on the Met museum)~\citep{Metrelease} \item The Los Angeles County Museum of Art (from now on the Lacma museum)~\citep{Lacmarelease} \item The Cleveland Museum of Art (from now on the Cleveland museum)~\citep{Clevelandrelease} \end{itemize} All three museums hold large artistic collections with a general scope, including artworks from all over the world, from very early cultures to recent ones. For accessing the data, the Cleveland museum publishes an API to automatically download images. Lacma and Met, on the other hand, provide access to their images only through their webpages. This implies an image-by-image download process, for which we built museum-specific crawlers. By these means we downloaded approximately 232,000 images from the Met museum, 26,000 from the Lacma museum and 32,000 from the Cleveland museum. From this data we define the MAMe dataset, composed of an expertly curated subset of the data. The final selection includes 37,407 images belonging to 29 classes. The class selection process followed several technical criteria, including balance between museums (to avoid potential bias), balance and volume of class instances (to facilitate research), and image resolution (to enable HR exploration).
Greyscale images were discarded. Significantly, museum images have a natural tendency towards VS (\eg human sculptures tend to be tall, while paintings tend to be wide). Although we did not encourage its presence, this natural feature is shown in the dataset statistics (see right plot of Figure~\ref{fig:class_ps_ar_distr}). \begin{table}[t] \centering \caption{For each medium class within MAMe, distribution of instances among museums. The Met, Lacma and Cleveland museums are labeled as "Met", "Lac" and "Cle" respectively. Museum distributions are divided by data splits, into training, validation and test ("Train", "Val" and "Test" respectively). The last four columns show values aggregated over all data splits ("All"). The "Test" and "All" sections contain a 4th column indicating the total ("Total"). These values are not provided for "Train" and "Val" since these are constant (700 and 50 respectively).} \label{tab:medium_museums} \input{medium_museums_table} \end{table} \subsection{Label mapping} All three museums (Met, Lacma and Cleveland) reported as metadata the medium used in their artworks. Unfortunately, there is not a unique ontology behind these labels, as each museum uses a different level of detail and interpretation of medium. Some mediums are subtypes of other mediums, some mediums are reported under different names, and some mediums are combinations of other mediums. Experts from the art domain grouped the medium metadata into coherent classes, following their professional understanding of artistic coherency and visual discriminability. Classes which could not be discriminated visually by a human without technical aid (\eg a microscope) were discarded.
The main expert criteria used to determine the classes are the following: \begin{itemize} \item Written coherency: Medium categories written in different forms but referring to the same term are aggregated (\eg \textit{Bronze} and \textit{bronze}). \item Terminology coherency: Medium categories which are considered to be analogous are aggregated (\eg \textit{Ceramic} and \textit{Pottery}). \item Taxonomic coherency: Objects belonging to the same parent medium are sometimes aggregated (\eg \textit{Terracotta} and \textit{Ceramic}). Where technical criteria allow, medium subtypes are left as a separate class (\eg \textit{Porcelain}). \item Visual coherency: Medium categories which cannot be visually differentiated in plain sight are aggregated (\eg \textit{Hard-paste porcelain} and \textit{Soft-paste porcelain} into \textit{Porcelain}, \textit{Cotton} and \textit{Linen} into \textit{Woven fabric}). \end{itemize} After enforcing a minimum amount of 850 samples per medium (adding up train, val and test), the MAMe dataset contains 29 different classes. These are shown in the left column of Table \ref{tab:medium_museums}. Notice we made an exception for the \textit{Silk and metal thread} medium, which only contains 845 samples. A detailed description of the nature of each class is provided in Table \ref{tab:medium_descriptions}. Visual details on how to discriminate some of these classes are discussed in \S\ref{sec:exp}. \begin{table}[t] \centering \caption{Descriptions of the medium classes. Some descriptions are obtained from the museum sources~\citep{Metdescriptions}.} \label{tab:medium_descriptions} \begin{adjustbox}{width=\textwidth} \input{medium_descriptions_table} \end{adjustbox} \end{table} \subsection{Dataset details} The MAMe dataset is publicly available~\footnote{\label{note1}\url{https://hpai.bsc.es/MAMe-dataset}}. The site provides access to all the original images, and a CSV file with metadata for each of them.
This metadata includes the following information: \begin{itemize} \item the \textbf{image filename} \item the \textbf{medium} of the artwork (\ie the classification label) \item the \textbf{museum} from which the image was obtained \item the artwork \textbf{ID} given by the museum \item the \textbf{data split} of the instance (\ie train, validation or test set) \item the \textbf{width} of the image \item the \textbf{height} of the image \item the \textbf{product size} of the image (\ie width multiplied by height) \item the \textbf{aspect ratio} of the image (\ie width divided by height) \end{itemize} The dataset contains 29 medium classes. Each class is composed of at least 850 images and at most 1,450. Each class contains 700 images for training, 50 images for validation and a variable amount of images for the test set (\ie the test set is unbalanced). The minimum number of instances in the test set is 100 (except for \textit{Silk and metal thread} with 95) and the maximum is 700. In total, the MAMe dataset is composed of 37,407 HR images. All images in the MAMe dataset have, at least, a resolution of 0.25MP, equivalent to a squared image of 500x500 pixels. The mean resolution is around 10.3MP, corresponding to an image of more than 3,200x3,200 pixels, and the largest image has more than 370MP, corresponding to an image of 32,683x11,412 pixels (see Figure \ref{fig:ps_ar_distr}). The 37,407 images are divided into subsets as follows: 20,300 images for training, 1,450 images for validation and 15,657 for test. Of those, 24,911 images originate from the Met museum, 5,531 images from the Lacma museum and 6,965 images from the Cleveland museum. An effort was made to keep the data coming from the different museums as balanced as possible, to minimize the possibility of potential biases generated by the nature of the artworks and the image-taking particularities of each museum.
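Per-split and per-museum counts like the ones above can be recomputed directly from the released metadata CSV. The sketch below uses a toy in-memory CSV whose column names paraphrase the list above; the exact header spelling of the released file may differ:

```python
import csv
import io
from collections import Counter

# Toy stand-in for the released metadata CSV. Column names follow the list
# above, but the exact header spelling of the real file may differ.
metadata_csv = io.StringIO(
    "filename,medium,museum,id,split,width,height,product_size,aspect_ratio\n"
    "a.jpg,Bronze,Met,1,train,1200,1600,1920000,0.75\n"
    "b.jpg,Ceramic,Lacma,2,val,2000,1500,3000000,1.3333\n"
    "c.jpg,Bronze,Cleveland,3,test,900,900,810000,1.0\n"
)

split_counts = Counter()
medium_counts = Counter()
for row in csv.DictReader(metadata_csv):
    split_counts[row["split"]] += 1
    medium_counts[row["medium"]] += 1

print(dict(split_counts))       # {'train': 1, 'val': 1, 'test': 1}
print(medium_counts["Bronze"])  # 2
```

Applied to the full CSV, the same two counters reproduce the split totals (20,300/1,450/15,657) and the per-class figures of Table \ref{tab:medium_museums}.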
The exact distributions of images per museum, class and data split are shown in Table \ref{tab:medium_museums}. To assess the internal balance of MAMe with regards to the HR and VS features, Figure \ref{fig:class_ps_ar_distr} shows the product size and aspect ratio distributions for each medium class. Besides a few classes with particularly narrow or skewed distributions, most of the categories include a wide variety of product sizes and aspect ratios. \section{Baselines and Experiments}\label{sec:baselines} This section introduces and evaluates both a set of baseline models and a set of hypotheses. The purpose of the baselines is to illustrate that the proposed task is coherently constructed (\ie solvable) and worth receiving the attention of researchers. To this end, we employ prototypical solutions from the literature that provide good results on other challenges, and report their performance on the MAMe dataset. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{images_data_type.PNG} \caption{Visualization of the different MAMe data types, exemplified for one particular instance. FS stands for fixed shape and VS for variable shape. R stands for resolution and A for axis.} \label{fig:data_type} \end{figure} Additionally, to highlight the differences of the high resolution (HR) and variable shape (VS) properties \wrt low resolution (LR) and fixed shape (FS) in the context of MAMe, we perform a set of experiments. These are designed to evaluate the following hypotheses: \begin{hyp}\label{hyp:first} MAMe benefits from HR data \wrt LR data. \end{hyp} \begin{hyp} \label{hyp:second} MAMe benefits from VS data \wrt FS data. \end{hyp} \begin{hyp} \label{hyp:third} MAMe benefits from information gain \wrt only resolution gain. \end{hyp} All baseline models, hypothesis experiments and the code needed to replicate results are publicly available \footnote{\url{https://github.com/HPAI-BSC/MAMe-baselines}}.
\subsection{MAMe data types}\label{sec:data_type} Most current solutions in the literature feed their models squared images, that is, images with a fixed shape. Additionally, these squared images are typically of low resolution. The resolutions used are diverse, but the most common is 256x256 pixels, corresponding to a total amount of 65,536 pixels. For referencing purposes, we use this data type as a starting point and we call it R65k-FS. For comparison purposes, we use a second data type with the same resolution (\ie same amount of total pixels) but keeping the original aspect ratio of the image, that is, with the VS property. This second data type is called R65k-VS. We also produce the HR versions of these two data types. These HR versions contain a total of 360,000 pixels. They are the R360k-FS and the R360k-VS versions. Notice that R360k-FS corresponds exactly to a squared image of 600x600 pixels, while R360k-VS contains images of variable shape but a fixed number of pixels. See Figure \ref{fig:data_type} for an illustration of all data types used. The final list of data types used in this section is as follows: \begin{itemize} \item R65k-FS: images are downsampled to 256 x 256 pixels, corresponding to an image size of 65,536 pixels. \item R65k-VS: the original aspect ratio is maintained while forcing the total number of pixels to 65,536. \item R360k-FS: images are resized to 600 x 600 pixels (360,000 pixels). \item R360k-VS: images are rescaled to a total number of 360,000 pixels, maintaining the original aspect ratio. \end{itemize} \subsection{Training configurations}\label{sec:base_arc} In this work we use well-known architectures: VGG~\citep{simonyan2014very}, ResNet~\citep{he2016deep}, DenseNet~\citep{huang2017densely} and EfficientNet~\citep{tan2019efficientnet}.
The specific architecture versions that we use are the following: \begin{itemize} \item VGG11 (configuration A) \item VGG16 (configuration D) \item ResNet18 \item ResNet50 \item EfficientNet-B0 \item EfficientNet-B3 \item DenseNet121 \end{itemize} Our baselines and experiments use several types of input processing, divided into two main components: data processing and data augmentation. The first refers to all image transformations required to obtain each data type (according to subsection \ref{sec:data_type}), while the second provides regularization during the training process. The data augmentation is independent of the data type, and is defined as follows: \begin{enumerate} \item Random rotation of the image by [-30, 30] degrees. \item Random crop of (0.875 x width, 0.875 x height) pixels. Width and height refer to the current dimensions at this point of the processing. \item Random horizontal flip with 50\% chance. \end{enumerate} As a final step, images are normalized to have values in the range [0, 1] and standardized with $\mu = 0.5$ and $\sigma = 0.5$ (same value for all three channels). Notice that during validation the data augmentation is adapted: steps 1 and 3 are skipped, and step 2 does a center crop instead of a random one. We use the AMSGrad optimizer~\citep{reddi2019convergence}, a variant of the original Adam optimizer~\citep{kingma2014adam}, for all the baselines and experiments. Batch sizes and learning rates are optimized for each training run, considering memory limitations, training speed and learning convergence. Executions are conducted on a single computing node of the CTE-Power9 cluster at the Barcelona Supercomputing Center, with the following characteristics: \begin{itemize} \item 2 Sockets x IBM Power9 8335-GTH @ 2.4GHz (20 cores and 4 threads/core, total 160 threads). \item 4 x GPU NVIDIA V100 (Volta) with 16GB HBM2.
\end{itemize} \subsection{Baselines performance} \label{sec:baselines_perf} To show the feasibility and evaluate the difficulty of the MAMe task, we introduce a set of baseline models. Their purpose is to reach the best possible performance using current prototypical solutions. To that end, we employ the following CNN architectures: VGG11, VGG16, ResNet18, ResNet50, EfficientNet-B0, EfficientNet-B3 and DenseNet121. We train these using the four MAMe data types explained in subsection \ref{sec:data_type}. Due to memory limitations, we only use a subset of architectures on the R360k data types: VGG11, ResNet18, EfficientNet-B0 and EfficientNet-B3. All baselines are trained on top of the corresponding ImageNet pre-trained models \footnote{\url{https://github.com/pytorch/vision}}\footnote{\url{https://github.com/lukemelas/EfficientNet-PyTorch}}. The top 10 baseline results are shown in Table \ref{tab:baseline_results}. These reported results correspond to the mean per-class test accuracy of the models achieving minimum validation loss. \begin{table}[t] \begin{minipage}{.49\linewidth} \centering \caption{Top 10 baseline results for the MAMe dataset. Notice the prevalence of high resolution to a great extent.} \label{tab:baseline_results} \input{top_10_baselines} \end{minipage} \begin{minipage}{.49\linewidth} \centering \caption{Experiment results for the \hypref{hyp:first} (more resolution is better) and \hypref{hyp:second} (less deformation is better) hypotheses. \hypref{hyp:first} is assessed vertically (same shape policy, variable resolution), while \hypref{hyp:second} is assessed horizontally (same resolution, variable shape policy).} \label{tab:hyp1_2_results} \input{hypothesis_1_2_table} \end{minipage} \end{table} After training multiple models with combinations of seven architectures and four MAMe data types, fine-tuning each of the trained models to optimize performance and using pre-trained models from ImageNet, most models achieve accuracies above 80\%.
The maximum accuracy, 88.95\%, is obtained by the EfficientNet-B3 architecture on the R360k-FS data type. These results show that the MAMe task is indeed solvable. They also clearly illustrate the benefits of using higher resolutions, as the top 4 models are based on R360k data types. On the other hand, it seems that models are not properly exploiting the VS property, as only VGG11 manages to be in the top 4 with that shape policy. Next, let us assess the relevance of these properties in further detail. \subsection{Hypothesis evaluation} \label{sec:exp_res} In this section we aim to validate the three hypotheses introduced in section \ref{sec:baselines}. Our first hypothesis is \textit{\hypref{hyp:first}: MAMe benefits from HR data \wrt LR data}. Since HR data contains extra information that is not present in LR data, this hypothesis aims to measure to what degree this additional information is relevant for improving performance on MAMe. To test \hypref{hyp:first} we train a set of models using the R65k-FS and R360k-FS MAMe data types, where the only difference is the resolution of the images. Notice that both data types share the same proportional distortion \wrt the original shape of the images. We do the same experiment using variable shape (\ie the R65k-VS and R360k-VS data types). In this case the only difference is the resolution of the images, because there is no distortion added to the aspect ratio. The architectures used for validating this hypothesis are VGG11, ResNet18, EfficientNet-B0 and EfficientNet-B3. We use this subset of shallow architectures due to the high memory requirements when using R360k data. The models are trained starting from their corresponding ImageNet pre-trained models. Results are shown in Table \ref{tab:hyp1_2_results} and Table \ref{tab:hyp1_diff}. \begin{table}[t] \begin{minipage}{.49\linewidth} \centering \caption{Difference in performance between models trained using R65k and R360k data.
These results are used to validate hypothesis \hypref{hyp:first}.} \label{tab:hyp1_diff} \input{hypothesis_1_diff} \end{minipage} \begin{minipage}{.49\linewidth} \centering \caption{Difference in performance between models trained using FS and VS data. These results are used to validate hypothesis \hypref{hyp:second}.} \label{tab:hyp2_diff} \input{hypothesis_2_diff} \end{minipage} \end{table} \hypref{hyp:first} is validated for 6 out of 8 comparison pairs. In these, the models trained on HR data (\ie R360k) achieve a boost in performance of around 4\%. This happens for all four architectures tested (VGG11, ResNet18 and the EfficientNet variants). Noticeably, the two cases where using HR data does not yield benefits (and instead degrades performance by around 3\%) are VS settings using the EfficientNet architectures. This phenomenon will be further studied and discussed next, when we assess the validity of the \hypref{hyp:second} hypothesis. Let us now consider the second hypothesis, \textit{\hypref{hyp:second}: MAMe benefits from VS data \wrt FS data}. Since the FS property deforms the original image, adding some distortion, this hypothesis aims to measure to what degree this deformation affects performance on MAMe. For this purpose we compare models which have the same resolution and only differ in shape; we compare R65k-FS with R65k-VS, and R360k-FS with R360k-VS. The architectures used for the R65k comparison are all architectures listed in section \ref{sec:base_arc}, but the architectures used in the R360k comparison are a subset of them due to high memory requirements: VGG11, ResNet18, EfficientNet-B0 and EfficientNet-B3. All models are trained starting from ImageNet pre-trained models. Results are shown in Table \ref{tab:hyp1_2_results} and Table \ref{tab:hyp2_diff}. With regards to \textit{\hypref{hyp:second}}, results neither fully validate nor reject it.
Prototypical architectures do not adequately take advantage of the VS property, either obtaining insignificant accuracy variations (VGG11, VGG16, ResNet18-R65k and ResNet50) or even suffering a negative effect on performance due to VS (ResNet18-R360k, EfficientNet-B0 and EfficientNet-B3). Only in the case of DenseNet121 on R65k data does performance increase when moving from FS to VS. Overall, results are inconclusive regarding this second hypothesis \textit{\hypref{hyp:second}}. The experiments regarding \textit{\hypref{hyp:second}} are affected by padding. To process images of VS in a single batch, it is required that they all have exactly the same shape for computational purposes. Doing so without changing the shape implies the addition of padding pixels, that is, non-informative values (typically zeros) used to make the batch shape uniform. However, existing architectures do not differentiate padding pixels from image pixels, so padding acts as noise during the training process. Remarkably, this noise increases considerably under an HR setting, where the absolute amount of padding pixels grows. In this regard, some preliminary experiments conducted on a previous version of the MAMe dataset \citep{sotiropoulos2020handling} indicate that reducing padding in a VS setting can yield accuracy improvements of between 3\% and 5\%. As illustrated in the results of \hypref{hyp:first}, on the MAMe dataset a gain in resolution implies a gain in performance. However, as suggested by Sandler \textit{et al.}~\citep{sandler2019nondiscriminative}, such an increase in performance may be due to an increase of the input size or to an increase of the internal representation of the model. In their publication \citep{sandler2019nondiscriminative}, they evaluate the impact of these two factors with an experiment on the ImageNet dataset.
This experiment consists in comparing information gain against resolution gain, by assessing pairs of models trained with images of the same resolution but different amounts of information: \begin{itemize} \item Full-information images: Images that are downsampled to a target resolution and contain their corresponding amount of information. \item Capped-information images: Images that are, first, downsampled to a 224x224 resolution and, second, upsampled to a larger target resolution (bilinear interpolation). Notice that such an image has the same resolution as its corresponding full-information image, but contains less of the original information. \end{itemize} Full-information images increase both the model internal representation and the input information, while capped-information images increase the model internal representation as well, but do not increase the input information. By comparing performances using both, we can isolate the impact of the input information when increasing the image resolution. To evaluate the impact of image information gain on the MAMe dataset, we formulate our third hypothesis \textit{\hypref{hyp:third}: MAMe benefits from information gain \wrt only resolution gain}. While this hypothesis is \textit{rejected} on ImageNet according to \citep{sandler2019nondiscriminative}, next we test whether this is also the case for MAMe. To conduct this experiment, we train FS models on the VGG11 and ResNet18 architectures using the following target resolutions: 50k, 90k, 160k, 250k and 360k pixels. These correspond to squared image widths of 224, 300, 400, 500 and 600 pixels. For each target resolution, we use full-information and capped-information images. Notice that full-information and capped-information images are equivalent at the image width of 224 pixels. Results are shown in Figure \ref{fig:hyp3}.
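The two image variants can be sketched as follows. This is a minimal sketch on synthetic data: the resizing uses nearest-neighbour sampling in plain NumPy to keep it dependency-free, whereas the actual experiments use bilinear interpolation, and all array sizes here are illustrative.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize (the paper uses bilinear interpolation;
    NN keeps this sketch free of external dependencies)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def full_information(img, target):
    """Downsample the original image directly to the target width."""
    return resize_nn(img, target, target)

def capped_information(img, target, cap=224):
    """Downsample to the 224x224 cap first, then upsample to the target
    width: resolution grows, but the information content stays capped."""
    return resize_nn(resize_nn(img, cap, cap), target, target)

# Synthetic stand-in for an original high-resolution artwork image.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, (800, 800, 3), dtype=np.uint8)

full = full_information(original, 400)      # 400x400, full information
capped = capped_information(original, 400)  # 400x400, information capped at 224x224
```

Both outputs have the same resolution, but the capped image is built from at most 224 distinct source rows and columns, which is exactly the property the experiment exploits.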
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{experiment.png} \caption{Results when training the VGG11 and ResNet18 architectures using full-information (downsampled from original) and capped-information (upsampled from 224) images. Target resolutions used are the ones corresponding to image widths of 224, 300, 400, 500 and 600 pixels. Both settings improve with size, indicating that both resolution and information gain contribute to better performances on MAMe. Notice that the experiment at 224 width is not capped information.} \label{fig:hyp3} \end{figure} Results obtained by Sandler \textit{et al.}~\citep{sandler2019nondiscriminative} on the ImageNet dataset \citep{russakovsky2015imagenet} indicate that there is no gain in performance due to information gain (see their Figure 2b). Indeed, improvement in their experiments is only caused by the use of larger model internal representations. On the MAMe dataset, this very same experiment shows that improvement occurs for both reasons. Increasing the model internal representation (capped-information) entails some consistent improvement in performance. However, performance is further boosted when also increasing the information (full-information). These results suggest the third hypothesis holds for the MAMe dataset, and further highlight the inherent differences between MAMe and ImageNet. \section{Expert and explainability analysis of MAMe}\label{sec:exp} The domain of artworks and heritage is defined by human technology, skill and creativity. Art experts can identify a set of visual cues useful for the characterization of art, but it remains to be seen whether AI models learn these same features. To analyze these features in the context of the MAMe dataset, we analyze several medium classes from an expert point of view and perform explainability experiments. We perform explainability on two models trained from scratch with two data types.
On the one hand, R65k-FS is used to characterize a low resolution and fixed shape setting (LR\&FS). On the other hand, to characterize a high resolution and variable shape setting (HR\&VS) we introduce a new data type version: A500-VS. This new HR\&VS version ensures a minimum size of 500 pixels per axis, preserving the original aspect ratio, as illustrated in Figure \ref{fig:data_type}. The main reason for using A500-VS instead of R360k-VS is to facilitate visualization. In this case we use the architecture most widely used for this kind of experiment, VGG~\citep{simonyan2014very}. In particular, we use the shallow version VGG11 to handle the high-memory requirements of A500-VS. By understanding the focus of the LR\&FS and HR\&VS models, we can detect the most relevant class features according to them. These explanations allow experts to assess the consistency of the decisions made, and to detect the potential existence of bias. Finally, the comparison between LR\&FS and HR\&VS explanations offers an additional exploration of the impact of the HR and VS properties on the MAMe dataset. \subsection{Layer-wise relevance propagation} In our analysis we use post-hoc interpretability \citep{mythos}: methods used to interpret the model predictions once the model has been trained. For image classification, a widely used family of visual explanations are saliency methods. These methods use saliency maps to show the features of the image that contribute to a prediction; in other words, which pixels in the input image are important for the classification task. Among this family of methods \citep{Selvaraju_2019, simonyan2013deep, springenberg2014striving, sundararajan2017axiomatic, zeiler2013visualizing}, we use Layer-wise Relevance Propagation (LRP) \citep{lrp}, which has been used in different fields to produce meaningful explanations \citep{10.1007/978-3-319-45886-1_28, DBLP:journals/corr/SturmBSM16, thomas2018interpretable, alex2018computational}.
The LRP technique backpropagates the output prediction to the input image, by computing the contribution of each neuron \wrt the output prediction. That is, it effectively maps the relevance of a specific class onto the pixels of the input image. Although different LRP rules have been proposed, we implement the recent Composite LRP \citep{overview}. This technique combines different propagation rules depending on the depth of the layer. Our Composite LRP makes use of LRP$-0$ for the last layers, LRP$-\epsilon$ ($\epsilon = 0.25$) and LRP$-\gamma$ ($\gamma = 0.25$) for intermediate layers, and LRP$-z^B$ for the first layer of the network, as illustrated in Figure \ref{fig:lrp}. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{lrp.PNG} \caption{LRP rules applied to each layer of the VGG11 network.} \label{fig:lrp} \end{figure} Given an image $I$ and a specific class $c$, Composite LRP produces an explanation heatmap $E_{I,c}$. The color convention for this heatmap is as follows: red is used for positive contributions, while blue indicates negative contributions. That is, red areas are considered descriptive patterns of the given class by the model, while blue areas are considered typical patterns of other classes. We perform two types of LRP analysis, one for correctly predicted images, and another one for incorrectly predicted images. When the medium $m$ is correctly predicted, we produce its corresponding explanation heatmap $E_{I,m}$. In this case, the red areas of the heatmap correspond to descriptive patterns of the predicted medium $m$ and the blue areas to descriptive patterns of the rest of the mediums. In the case of incorrectly predicted images, we compute the explanation as the difference between two heatmaps.
The one associated with the real medium $r$, minus the one associated with the predicted medium $p$: \begin{equation} E_{I,r,p} = E_{I,r} - E_{I,p} \end{equation} This difference allows us to remove the contributions to the predicted class, focusing on the features that contribute to the real class. In this visualization, red areas are considered typical patterns of the real class but not of the predicted class, while blue areas are considered typical patterns of other classes (most of them probably from the predicted class). \subsection{Best and worst performances} \label{sec:exp_crc} First, let us focus on the classes that are best and worst recognized by the two models trained on the two versions of the MAMe dataset introduced before: \begin{itemize} \item The model trained from scratch on images of LR\&FS (R65k-FS), using the VGG11 architecture. \item The model trained from scratch on images of HR\&VS (A500-VS), using the VGG11 architecture. \end{itemize} Among the best recognized classes we can count \textit{Albumen photograph}, \textit{Gold} and \textit{Graphite}. In the case of \textit{Albumen photograph}, we only have one type of photographic technique in the MAMe dataset, making these images easily distinguishable from other cultural assets. The class \textit{Gold} is a similar case, since the golden color differentiates it from other metals, despite the dataset containing other objects of similar shape. Lastly, \textit{Graphite} is a drawing technique that uses similar grey tones with metallic brightness and smooth strokes that usually end at the edge of the paper. These characteristics help avoid confusion between \textit{Graphite} and \textit{Lithograph}, which in some cases may look similar. For these reasons these mediums are easily recognizable, not only for the LR\&FS and HR\&VS models, but also for human experts. On the other side of the spectrum we have the classes that are most poorly recognized by these two models.
These are \textit{Woven fabric}, \textit{Polychromed wood}, \textit{Etching} and \textit{Silk and metal thread}. These classes are hard to predict because they belong to fine-grained groups of classes with many common features. Following expert guidelines we identify the following fine-grained groups, which are discussed in further detail next. \begin{itemize} \item \texttt{Prints}: \textit{Etching}, \textit{Engraving}, \textit{Wood engraving}, \textit{Woodcut}, \textit{Woodblock}, \textit{Lithograph} \item \texttt{Fabrics}: \textit{Woven fabric}, \textit{Silk and metal thread} \item \texttt{Paintings}: \textit{Polychromed Wood}, \textit{Oil on canvas} \end{itemize} \subsubsection{\texttt{Prints} group} \label{sec:prints-group} From an expert perspective, the most complex fine-grained group is \texttt{Prints}. These are hard to differentiate because they may look very similar, despite having been printed through different procedures. Common clues used by experts for their discrimination include the definition of lines, the appearance of strokes, the homogeneity of shadows or color areas, as well as the intensity of blacks. A common feature used to identify different kinds of prints is the platemark: the rectangular ridge created in the paper of a print by the edge of an intaglio plate. These marks can be essential for the discrimination of certain print classes: while both \textit{Engraving} and \textit{Wood engraving} have very defined lines and grid patterns, they can be told apart through platemarks, since these only appear on the edges of an \textit{Engraving}. Within the same \texttt{Prints} group, \textit{Woodblocks} are distinguishable from the rest because of their oriental aesthetics. They are usually colored prints that use one block for each ink. As a result, colors sometimes overlap, and/or leave gaps in the outlines. However, this last characteristic is also found on other colored prints like \textit{Lithographs} or \textit{Woodcuts}.
One last example to illustrate the complexity within \texttt{Prints} is the pair \textit{Etching} and \textit{Engraving}. These two techniques are very similar, having the same aforementioned platemarks and often the same grid patterns in their printed areas. In this case, experts need to appreciate the contours of the lines for differentiation: they are more vibrant and less defined in \textit{Etchings}, and have convex edges in \textit{Engravings}. In light of this expert knowledge, image resolution seems key to properly detect the main discriminating patterns. In some cases, even our HR\&VS images seem to fall short in resolution (\eg grid patterns are lost). As an example, Figure \ref{fig:engraving-HR-resized} shows a rectangular region of an \textit{Engraving} in original resolution (left side) and in HR\&VS (right side). The zoomed area shows the central figure of the print, a fisherman. If we focus on the clothes, we can clearly perceive the characteristic grid pattern of an \textit{Engraving} in the original resolution image, but this is lost in the HR\&VS image, where the grid becomes a gray blur due to the interpolation applied when resizing the image. \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{811930.jpg} & \includegraphics[width=0.22\textwidth]{811930_resized.jpg}\\ \includegraphics[width=0.22\textwidth]{original_fisherman_zoom.jpg} & \includegraphics[width=0.22\textwidth]{resized_fisherman_zoom.jpg}\\ \end{tabular} \caption{Example of an \textit{Engraving} artwork at its original size (left side) and in HR\&VS (right side). The second row shows the same zoomed area for both images, where the grid pattern can only be perceived at the original resolution (left).\\ \small{\textsc{Met museum: 53.600.1616}} } \label{fig:engraving-HR-resized} \end{figure} \subsubsection{\texttt{Fabrics} group} The second group of fine-grained classes is \texttt{Fabrics}.
To discriminate these with total confidence it is necessary to identify the fibers using microscopy techniques. This condition motivated the aggregation of several classes within \textit{Woven fabric} (\eg linen, cotton, silk and others). Nonetheless, one particular type of woven fabric can be visually recognized without the aid of external machinery. That is \textit{Silk and metal thread}, which is clearly distinguishable from other textile fibers due to the glitter of the metallic threads. \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[height=0.18\textwidth]{230300_resized.jpg} & \includegraphics[height=0.18\textwidth]{230300_224x224.jpg}\\ \end{tabular} \caption{Example of \textit{Silk and metal thread} in HR\&VS (left) and LR\&FS (right). The brightness of the metal threads is visible in both cases.\\ \small{\textsc{Met museum: 2002.494.278}}} \label{fig:silk-and-metal} \end{figure} In Figure \ref{fig:silk-and-metal}, we can see the metallic glitter in both the LR\&FS and HR\&VS images (more clearly in the latter). However, both models have been unable to properly discriminate these two classes. If the model does not detect this feature, it will learn other patterns for differentiating these two classes, such as ornamental motifs. However, this is not a reliable discriminatory feature and, therefore, it could be a source of error. We performed explainability experiments on several images and found cases where the model focuses on the ornamental motifs, as shown in Figure \ref{fig:silk_and_metal_motifs}. \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{230388.jpg} & \includegraphics[width=0.22\textwidth]{230388_lrp.jpg}\\ \end{tabular} \caption{Example of \textit{Silk and metal thread} in HR\&VS (left) and its LRP explanation (right).
The ornamental motifs (red zones) have positively contributed to the classification into the \textit{Silk and metal thread} class.\\ \small{\textsc{Met museum: 2002.494.366}}} \label{fig:silk_and_metal_motifs} \end{figure} \subsubsection{\texttt{Paintings} group} The third group of fine-grained classes is \texttt{Paintings}. This group contains two classes: \textit{Polychromed wood} and \textit{Oil on canvas}. The main reason why these classes are hard to differentiate is that \textit{Polychromed wood} contains the subclass of panel paintings (\ie paintings on a flat panel made of wood), which are similar to \textit{Oil on canvas}. Both \textit{Polychromed wood} panel paintings and \textit{Oil on canvas} hide the support behind the paint layer, complicating the identification of the support material (fabric or wood). In this context, experts pay attention to cracks, leaks or textures that may be characteristic of the support below the paint. Nonetheless, these features may not be properly visible in a single LR\&FS or HR\&VS image. There are several \textit{Oil on canvas} images that are incorrectly predicted as \textit{Polychromed wood}, both in LR\&FS and in HR\&VS. This makes sense from an expert point of view since, in several HR\&VS images, it is impossible to appreciate any detail that may suggest whether the support is wood or fabric, forcing the model to guess the class based on alternative patterns that may be misleading. For example, one of the key properties that identify an \textit{Oil on canvas} is the canvas weave pattern. Unfortunately, this seems to be visible only in a few HR\&VS images. Within this work, art experts reviewed around 150 images where the two models failed to discriminate between \textit{Oil on canvas} and \textit{Polychromed wood}, and considered that they could only see the canvas weave pattern in approximately 5\% of the HR\&VS images.
In Figure \ref{fig:polychromed-oil}, we show an example of an \textit{Oil on canvas} image where it is possible to perceive the canvas weave pattern. Although this pattern is present in the HR\&VS image but not in the LR\&FS image, both models misclassified this example, indicating that the HR\&VS model does not pay attention to this property. \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{1943_324_resized_original_oil.jpg} & \includegraphics[width=0.22\textwidth]{1943_324_256x256_original_oil.jpg}\\ \includegraphics[width=0.22\textwidth]{crop_original_zoom.jpg} & \includegraphics[width=0.22\textwidth]{crop_resized_zoom.jpg}\\ \end{tabular} \caption{Example of \textit{Oil on canvas} in HR\&VS (left side) and LR\&FS (right side). The second row shows the zoomed area where it is possible to perceive the canvas weave pattern in the HR\&VS but not in the LR\&FS image.\\ \small{\textsc{Cleveland museum: 1943.324}}} \label{fig:polychromed-oil} \end{figure} \subsection{LR\&FS and HR\&VS comparison} \label{sec:exp_cri} In this section we explore the classes with the greatest difference in accuracy between the two models we are exploring: the one trained on LR\&FS images and the one trained on HR\&VS images. In order, these classes are \textit{Lithograph} (+16.28\% gain by HR\&VS), \textit{Bronze} (+15.71\% gain by HR\&VS) and \textit{Engraving} (+14.85\% gain by HR\&VS). \textit{Lithograph} and \textit{Engraving} are within the \texttt{Prints} group which, as reviewed in \S\ref{sec:prints-group}, can benefit from more detailed inputs for their discrimination. The third, \textit{Bronze}, is a material which can be easily differentiated by a human expert.
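The per-class accuracy gaps reported above can be computed with a small helper. This is a sketch only: the labels and predictions below are hypothetical stand-ins, not actual MAMe outputs.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, n_classes):
    """Fraction of correctly classified samples for each class (NaN if absent)."""
    acc = np.full(n_classes, np.nan)
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            acc[c] = float((y_pred[mask] == c).mean())
    return acc

# Hypothetical ground truth and predictions of the two models on a toy test set.
y_true = np.array([0, 0, 1, 1, 2, 2])
pred_lrfs = np.array([0, 1, 1, 0, 2, 2])  # LR&FS model
pred_hrvs = np.array([0, 0, 1, 1, 2, 2])  # HR&VS model

# Per-class advantage of the HR&VS model over the LR&FS one.
gain = per_class_accuracy(y_true, pred_hrvs, 3) - per_class_accuracy(y_true, pred_lrfs, 3)
ranking = np.argsort(-gain)  # classes sorted by the HR&VS advantage
```

Ranking classes by this gain is what singles out \textit{Lithograph}, \textit{Bronze} and \textit{Engraving} in the comparison above.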
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{812558_resized.jpg} & \includegraphics[width=0.22\textwidth]{812558.jpg} \\ \includegraphics[width=0.22\textwidth]{812558_256x256.jpg} & \includegraphics[width=0.22\textwidth]{812558_pred_Wood_engraving.jpg} \\ \hline \\ \includegraphics[width=0.22\textwidth]{crop_original.jpg} & \includegraphics[width=0.22\textwidth]{crop_resized.jpg} \\ \end{tabular} \caption{\textit{Lithograph} example in HR\&VS and LR\&FS. There is a top side and a bottom side divided by a horizontal black line. The top shows the image in HR\&VS and its corresponding LRP explanation. Both models focus on the general texture for their predictions, although LR\&FS mispredicts \textit{Wood engraving}. The bottom side shows a zoomed area of the print in HR\&VS (left) and LR\&FS (right). Here we can see the granular surface texture typical of this class in HR\&VS, but not in LR\&FS.\\ \small{\textsc{Met museum: 49.21.53}}} \label{fig:812558-lithograph} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{cc} \includegraphics[width=0.22\textwidth]{1958_105_resized.jpg} & \includegraphics[width=0.22\textwidth]{1958_105.jpeg} \\ \includegraphics[width=0.22\textwidth]{1958_105_256x256.jpg} & \includegraphics[width=0.22\textwidth]{1958_105_pred_Wood_engraving.jpg} \\ \end{tabular} \caption{\textit{Engraving} example in HR\&VS and LR\&FS and its corresponding LRP explanations. Note how the contours of the figures positively contribute to the prediction of the class in the HR\&VS format. LR\&FS loses most of these details, and mispredicts the image as \textit{Wood engraving}.\\ \small{\textsc{Cleveland museum: 1958.105}}} \label{fig:1958.105-engraving} \end{figure} Let us start with the case of \textit{Lithograph}. Figure \ref{fig:812558-lithograph} shows a representative example of this class, illustrating both the input and the LRP for the HR\&VS and LR\&FS models.
Both models focus on the overall texture of the image (the LRP relevance is spread throughout the image), but with different impacts on the prediction: it represents negative evidence for LR\&FS (which ends up mispredicting the class as \textit{Wood engraving}) but positive evidence for HR\&VS. Experts highlight the relevance of the texture of \textit{Lithographs} for their discrimination from other similar classes like \textit{Woodblock}, \textit{Hand-colored etching}, \textit{Wood engraving} or \textit{Hand-colored engraving}. \textit{Lithographs} contain a granular texture that is not present in the other classes, but this texture is only visible at a certain resolution, as shown in the zoomed tombstone at the bottom of Figure \ref{fig:812558-lithograph}. These LRP results indicate that the HR\&VS model follows a similar strategy to distinguish \textit{Lithograph} from other classes, successfully recognizing the textures of prints and properly interpreting them for the final prediction. The LR\&FS model, unable to recognize the granular texture, fails at finding relevant features towards \textit{Lithograph}. Figure \ref{fig:1958.105-engraving} shows an example of the \textit{Engraving} class, which has been correctly predicted by the HR\&VS model but not by the LR\&FS one (mispredicted as \textit{Wood engraving}). The Figure contains the entire image and its corresponding LRP explanation for both HR\&VS and LR\&FS, which target very different aspects of the print: while HR\&VS focuses on the contours of the print figures, LR\&FS does not. According to experts, these figure contours are dark areas that encode essential information for discriminating the mediums within the \texttt{Prints} group. Contours can only be properly inspected at high resolutions. Some of this information is retained in HR\&VS images, as reviewed by experts, while LR\&FS images lose all relevant details.
\begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[height=0.13\textwidth]{396174_resized.jpg} & \includegraphics[height=0.13\textwidth]{396174_256x256.jpg}\\ \includegraphics[height=0.16\textwidth]{crop_original_2.jpg} & \includegraphics[height=0.16\textwidth]{crop_resized_2.jpg}\\ \end{tabular} \caption{First row shows an \textit{Engraving} in HR\&VS and LR\&FS. Notice the deformation of the latter. Second row shows a zoomed area, to illustrate how the grid lines become blurred in the LR\&FS version.\\ \small{\textsc{Met museum: 17.3.3169}}} \label{fig:396174-engraving} \end{figure} As mentioned in subsection \ref{sec:prints-group}, another property used to distinguish printing techniques is the grid pattern. Although in some cases it can only be perceived in the original resolution image, some HR\&VS images retain this information. However, it is always lost in the LR\&FS images. On top of that, the image distortion produced by forcing LR\&FS images to a fixed shape squeezes the grid lines closer together along one axis (unpredictably, as it depends on the original image aspect ratio), complicating their identification. As an example, Figure \ref{fig:396174-engraving} shows an \textit{Engraving} image in HR\&VS and LR\&FS format, where the latter shows a great image distortion. It also shows a zoomed area, highlighting the differences in the grid pattern. The last case we consider in this section is the third class with the biggest difference in performance. This is the \textit{Bronze} class, which includes a great variety of objects (\eg sculptures, ornaments), but especially coins. One of the main reasons why there are so many coins inside the \textit{Bronze} class is that, historically, \textit{Bronze} has been a common alloy used to mint coins. One of the main characteristics of a coin is its circular shape. However, this property is lost when deforming the image due to the uniformization of aspect ratio inherent to LR\&FS inputs.
The lack of a uniform shape of coins has a negative impact on their recognition, which is not found in the HR\&VS model. A clear example of this can be observed in Figure \ref{fig:1916.1877-bronze}. The corresponding LRP explanations show, on one side, the positive impact of the rounded coin contour for the HR\&VS image and, on the other side, the negative impact of the deformed coin on the prediction for the LR\&FS image. This particular LR\&FS example is mispredicted as \textit{Steel}, which makes sense from an expert point of view because the model must focus on the detection of the material, as it cannot rely on the shape of the coin for the prediction. Indeed, classes like \textit{Steel} and \textit{Iron} are among the most frequent confusions for \textit{Bronze}. As a result, \textit{Bronze} is significantly better predicted by the HR\&VS model, with a 15.7\% increase in accuracy with respect to LR\&FS. Another example is shown in Figure \ref{fig:1926.248-bronze}, where we can see the characteristic corrosion and patinas of \textit{Bronze}. This corrosion, or the green patina on the surface, comes from the oxidation of copper, which is one of the main components of the \textit{Bronze} alloy. Experts underline that these properties make the class quite easy to recognize. While they are perfectly visible in HR\&VS images, they become hard to perceive in the LR\&FS images. \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[height=0.14\textwidth]{1916_1877_resized.jpg} & \includegraphics[height=0.14\textwidth]{1916_1877_256x256.jpg} \\ \includegraphics[height=0.14\textwidth]{1916_1877.jpeg} & \includegraphics[height=0.14\textwidth]{1916_1877_pred_Steel.jpg} \\ \end{tabular} \caption{Example of coins within the \textit{Bronze} class in HR\&VS (left side) and LR\&FS (right side), and their corresponding LRP explanations (bottom).
The shape of the coins is lost in LR\&FS, which affects the prediction.\\ \small{\textsc{Cleveland museum: 1916.1877}}} \label{fig:1916.1877-bronze} \end{figure} \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[height=0.22\textwidth]{crop_original_bronze.jpg} & \includegraphics[height=0.22\textwidth]{crop_resized_bronze.jpg} \\ \end{tabular} \caption{Zoom in of a \textit{Bronze} artwork in HR\&VS (left) and in LR\&FS (right), respectively. Notice how the corrosion and patinas are easier to appreciate in HR\&VS.\\ \small{\textsc{Cleveland museum: 1926.248}}} \label{fig:1926.248-bronze} \end{figure} \section{Conclusions}\label{sec:c} In this paper, we introduce the MAMe dataset, a novel challenge for the prediction of artwork mediums based on their visual appearance. The images of the dataset come from three different museums, for a total of 37,407 images. Museums do not share a common scheme for labeling mediums, which required intensive work by art experts to homogenize them. For producing the dataset, we balance technical requirements (sample size, balance, image resolution, \etc) and domain requirements (visual coherency, taxonomical properties, \etc). In the end, MAMe is composed of 29 classes of mediums, each containing at least 850 images (always 700 for training) of high resolution (at least 500 pixels in the smaller axis) and variable shape. In comparison with commonly available datasets, MAMe provides a significantly larger distribution of high resolution and variable-shaped images. These properties are of relevance for future applications in domains such as medicine or autonomous driving; domains where attention to detail, understanding the overall structure and avoiding image pattern deformation/loss are crucial. Recognizing a lack of focus on these topics by the AI community, MAMe provides a good testing environment for new research ideas in the field. Baseline and hypothesis results provide several conclusions.
Regarding baselines, results presented in Table \ref{tab:baseline_results} show the capability of such models to solve the task proposed by the MAMe dataset up to a certain degree, with a top performance of 88.95\% accuracy achieved by the EfficientNet-B3 architecture using the R360k-FS data format. Regarding hypothesis evaluation, results shown in Table \ref{tab:hyp1_2_results} and Figure \ref{fig:hyp3} support the \hypref{hyp:first} and \hypref{hyp:third} hypotheses but not \hypref{hyp:second}. We conclude from our first hypothesis \hypref{hyp:first} that performance on the MAMe task increases when using images of high resolution over standard low resolution ones. Furthermore, based on the validated third hypothesis \hypref{hyp:third}, we see that this performance gain comes not only from a larger image resolution but also from an increase of the image information (unlike in ImageNet \citep{russakovsky2015imagenet}). In contrast, results on the \hypref{hyp:second} hypothesis do not validate it. We consider prototypical architectures to be specifically designed for FS data, hence not taking proper advantage of the VS property. Moreover, the current way of handling VS introduces padding when batching, increasing the amount of noise during training, especially in high resolution settings (\ie R360k in our case). We consider that there is room for improvement in this area, based on studies on a previous version of the MAMe dataset where padding reduction in VS models provided performance improvements of 3\% to 5\% \citep{sotiropoulos2020handling}. Lastly, we perform an explainability and expert analysis to further understand the differences when training models with either low resolution and fixed-shape (R65k-FS) or high resolution and variable-shape (A500-VS) images. The results of these analyses allow us to assess how the models we explored fail to discriminate between certain classes due to a lack of resolution.
In several cases we found that even the A500-VS resolution is insufficient to perceive the patterns that experts would pay attention to. This forces the models to learn alternative patterns that may not generalize well. Overall, the MAMe dataset constitutes a large-scale challenge which benefits from the use of HR. The benefit of using HR images comes not only from the larger internal representations of the models, but also from an increase in image information. This is particularly characteristic of MAMe, since it differs from the prototypical and well-known ImageNet dataset. Further research is needed to efficiently handle VS data, and the MAMe dataset serves as a good candidate for such a use case. \section*{Acknowledgments} This work is partially supported by the Intel-BSC Exascale Lab agreement, by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316-P project, by the Generalitat de Catalunya (contract 2017-SGR-1414) and by the Secretaria d’Universitats i Recerca of the Generalitat de Catalunya under the Industrial Doctorate Grant DI 2018-100. The authors would like to thank the support and assessment of the Conservació-Restauració del Patrimoni group (2017-SGR-1151). \bibliographystyle{unsrtnat}
\section{1. Introduction} Since the origin of quantum field theory there have been proposals to add a new scale of length to the theory, in order to solve the problems connected to ultraviolet divergences. Later, the necessity of introducing a fundamental length scale has also arisen in several attempts to build a theory of quantum gravity. In these cases, the scale could be identified in a natural way with the Planck length $L_p=\sqrt{\hbar G\over c^3}\sim1.6\cdot 10^{-35}\;{\rm m}$ [1]. A naive application of the idea of a minimal length, as for example a lattice field theory, would however break Lorentz invariance. A way to reconcile the discreteness of spacetime with Lorentz invariance was originally proposed by Snyder [2] a long time ago. This was the first example of a noncommutative geometry: the length scale enters the theory through the commutators of the spacetime coordinates, see [3,4]. In particular, the position operators obey the commutation relations $$[x_\mu,x_\nu]=i\beta J_{\mu\nu},\eqno(1)$$ where $J_{\mu\nu}$ are the generators of the Lorentz transformations and $\beta$ is a parameter with the dimension of a length squared that sets the scale of noncommutativity.\footnote{$^1$}{Throughout this paper we adopt natural units $\hbar=c=1$.} In more recent times, using ideas coming from the development of noncommutative geometry [5], the coproduct and star-product structures induced by the position operators of the Snyder model have been calculated [6,7]. However, in the Snyder model the algebra of the position operators does not close, as is evident from (1), and hence the bialgebra resulting from the implementation of the coproduct is not, strictly speaking, a Hopf algebra, as in other noncommutative geometries. In particular, the coproduct is not coassociative and the star product is not associative [6].
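The relation (1) can be checked symbolically in one well-known realization of the Snyder coordinates on functions of commuting variables, $\hat x_\mu=x_\mu+\beta\,(x\cdot p)\,p_\mu$ with $p_\mu=-i\partial_\mu$ and $J_{\mu\nu}=x_\mu p_\nu-x_\nu p_\mu$. The sketch below (not a realization used later in this paper) uses Euclidean signature for simplicity:

```python
import sympy as sp

x1, x2, x3, beta = sp.symbols('x1 x2 x3 beta')
X = (x1, x2, x3)

def p(f, mu):                      # p_mu = -i d/dx_mu
    return -sp.I * sp.diff(f, X[mu])

def xhat(f, mu):                   # hat x_mu = x_mu + beta (x.p) p_mu
    g = p(f, mu)
    return X[mu] * f + beta * sum(X[a] * p(g, a) for a in range(3))

f = sp.exp(x1 * x2) * x3**2        # arbitrary test function
# [hat x_0, hat x_1] acting on f
lhs = sp.expand(xhat(xhat(f, 1), 0) - xhat(xhat(f, 0), 1))
# i beta J_01 acting on f, with J_01 = x_0 p_1 - x_1 p_0
J01 = X[0] * p(f, 1) - X[1] * p(f, 0)
assert sp.simplify(lhs - sp.I * beta * J01) == 0
```

In this particular realization the commutator closes on $i\beta J_{\mu\nu}$ exactly, with no higher-order corrections in $\beta$.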
A closed Lie algebra can however be obtained if one adds to the position generators the generators of the Lorentz algebra [7]. In this way one can define a proper Hopf algebra, with a coassociative coproduct.\footnote{$^2$}{Generally, Lie-deformed quantum Minkowski spaces admit both a Hopf algebra and a Hopf algebroid structure [8].} The price to pay is the addition to the formalism of tensorial degrees of freedom and their conjugate momenta. To distinguish it from the standard noncommutative realization of the Snyder model [6], we call the algebra in which the Lorentz generators are added as extended coordinates the extended Snyder algebra, and the theory based on it the extended Snyder model. The physical interpretation of the new degrees of freedom is however not evident; they may be viewed, for example, as coordinates parametrizing extra dimensions [7]. In this paper, we construct new realizations of this extended algebra, perturbatively in the parameter $\beta$. In order to construct them, we define an extended Heisenberg algebra, which includes the Lorentz generators and their conjugate momenta. We then consider a Weyl realization of the algebra in terms of the extended Heisenberg algebra, generalize it to the most general one compatible with Lorentz invariance at order $\beta$, including the one obtained in [7], and compute the coproduct and the star product in the general case. We also calculate the twist in the Weyl realization. We recall here some of the most relevant recent advances in Snyder theory: in [9] the Snyder algebra was generalized in such a way as to maintain Lorentz invariance; in [6] the coproduct was calculated; in [7] the same problem was investigated from a geometrical point of view, using the fact that the momentum space of the Snyder model can be identified with a coset space; the twist was investigated in [10,11]. The construction of a field theory was first addressed in [6,7] and then examined in more detail in [12].
Different applications to phenomenology have been considered in [13]. Finally, the extension to a curved background was proposed in [14] and further investigated in [15]. The nonrelativistic limit of the theory has also been studied in a large number of papers, but we shall omit a discussion of this topic. The paper is organized as follows: in sect.~2 we introduce the extended Snyder model and discuss its Weyl realization in terms of an extended Heisenberg algebra; in sect.~3 we compute the coproduct and the star product in this realization; in sect.~4 the twist is calculated. In sect.~5, generic realizations up to order $\beta$ are introduced and the corresponding coproducts and star products are obtained. Finally, in sect.~6 the relations of these realizations with that of ref.~[7] and with well-known nonassociative ones are discussed. In sect.~7 some conclusions are drawn. \bigbreak \section{2. Extended Snyder model and Weyl realization} As mentioned in the introduction, the lack of associativity of the standard realization of the Snyder star product is due to the fact that it is built in terms of the position coordinates only, whose commutators do not close (cf.~(1)). An associative realization of the Snyder model can however be obtained by adding to the algebra generated by the position coordinates $\hat x_i$ the tensorial coordinates $\hat x_{ij}$, identified with the Lorentz generators, so that they span the closed algebra (2). In fact, all noncommutative spaces of Lie-algebra type induce associative star products, and the coproducts of the momenta are coassociative. This implies that the star product we shall obtain in the present framework in eqs.~(26), (27) is associative. If instead only the ${\cal D}_i$ were present in the star product, without the ${\cal D}_{ij}$, associativity would be lost.
We consider the extended Snyder algebra generated by the $N$ position operators $\hat x_i$ and the $N(N-1)/2$ antisymmetric Lorentz generators $\hat x_{ij}$, with $i=0,\dots,N-1$, $$\eqalignno{&[\hat x_i,\hat x_j]=i\lambda\beta\hat x_{ij},\qquad[\hat x_{ij},\hat x_k]=i\lambda(\eta_{ik}\hat x_j-\eta_{jk}\hat x_i),&\cr &\ \ [\hat x_{ij},\hat x_{kl}]=i\lambda(\eta_{ik}\hat x_{jl}-\eta_{il}\hat x_{jk}-\eta_{jk}\hat x_{il}+\eta_{jl}\hat x_{ik}),&(2)}$$ where $\lambda$ and $\beta$ are real parameters. In particular, $\beta$ can be identified with the Snyder parameter, which is usually assumed to be of size $L_p^2$, while $\lambda$ is a dimensionless parameter. The parameter $\beta$ can take both positive and negative values, leading to quite different physical models. However, from an algebraic point of view both cases can be treated in an essentially unified way. For $\beta=0$, the commutation relations (2) reduce to those of the standard Lorentz algebra acting on commutative coordinates. The algebra (2) can be realized in terms of an extended Heisenberg algebra, which includes also the Lorentz generators, $$\eqalignno{&[x_i,x_j]=[p_i,p_j]=[x_{ij},x_{kl}]=[p_{ij},p_{kl}]=0,&\cr &[x_i,p_j]=i\eta_{ij},\qquad [x_{ij},p_{kl}]=i(\eta_{ik}\eta_{jl}-\eta_{il}\eta_{jk}),&\cr &[x_i,x_{jk}]=[x_i,p_{jk}]=[x_{ij},x_k]=[x_{ij},p_k]=0,&(3)}$$ where $p_i$ and $p_{ij}$ are momenta canonically conjugate to $x_i$ and $x_{ij}$ respectively, and $p_{ij}=-p_{ji}$.
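Since (2) is by construction isomorphic to $so(1,N)$, it can be verified numerically in the defining matrix representation, identifying $\hat x_{ij}=\lambda M_{ij}$ and $\hat x_i=\lambda\sqrt\beta\,M_{iN}$. A sketch with $N=3$, sample values of $\lambda,\beta$, and one sign convention for the generators:

```python
import numpy as np

N = 3
lam, beta = 0.7, 1.3
eta = np.diag([-1.0] + [1.0] * N)            # indices 0..N, eta_{NN} = +1

def M(A, B):
    """so(1,N) generator in the defining rep (one common sign convention)."""
    m = np.zeros((N + 1, N + 1), dtype=complex)
    m[B, :] += 1j * eta[A, :]
    m[A, :] -= 1j * eta[B, :]
    return m

def comm(a, b):
    return a @ b - b @ a

xv = lambda i: lam * np.sqrt(beta) * M(i, N)   # hat x_i
xt = lambda i, j: lam * M(i, j)                # hat x_ij

# [x_i, x_j] = i lam beta x_ij
assert np.allclose(comm(xv(0), xv(1)), 1j * lam * beta * xt(0, 1))
# [x_ij, x_k] = i lam (eta_ik x_j - eta_jk x_i)
assert np.allclose(comm(xt(0, 1), xv(0)),
                   1j * lam * (eta[0, 0] * xv(1) - eta[1, 0] * xv(0)))
# [x_ij, x_kl] = i lam (eta_ik x_jl - eta_il x_jk - eta_jk x_il + eta_jl x_ik)
assert np.allclose(comm(xt(0, 1), xt(1, 2)),
                   1j * lam * (eta[0, 1] * xt(1, 2) - eta[0, 2] * xt(1, 1)
                               - eta[1, 1] * xt(0, 2) + eta[1, 2] * xt(0, 1)))
```

All three commutators of (2) close as stated, independently of the sample values chosen for $\lambda$ and $\beta>0$.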
The momenta can be realized in a standard way as $$p_i=-i{\partial\over\partial x_i},\qquad p_{ij}=-i{\partial\over\partial x_{ij}}.\eqno(4)$$ Note that, including the momenta $p_i$ in the algebra (2), with commutation relations $$[p_i,p_j]=0,\qquad[\hat x_{ij},p_k]=i\lambda(\eta_{ik}p_j-\eta_{jk}p_i),\qquad[\hat x_i,p_j]=i(\eta_{ij}+\lambda^2\beta\, p_ip_j),\eqno(5)$$ one recovers the full original Snyder algebra [2]. To proceed with the computations, it is convenient to exploit the isomorphism between the extended Snyder algebra and $so(1,N)$, and write the previous formulas more compactly by defining, for positive $\beta$, $\hat x_i\equiv\sqrt\beta\,\hat x_{iN}$, $x_i\equiv\sqrt\beta\,x_{iN}$, $p_i\equiv p_{iN}/\sqrt\beta$, with $\eta_{NN}=1$ and $\mu=0,\dots,N$.\footnote{$^3$}{When $\beta<0$ the algebra is isomorphic to $so(2,N-1)$. The coordinates are defined in the same way, except that the absolute value of $\beta$ must be taken under the square root and $\eta_{NN}=-1$. All results are identical, with the appropriate choice of the sign of $\beta$.} The extended Heisenberg algebra (3) then becomes $$[x_{\mu\nu},x_{\rho\sigma}]=[p_{\mu\nu},p_{\rho\sigma}]=0,\qquad[x_{\mu\nu},p_{\rho\sigma}]=i(\eta_{\mu\rho}\eta_{\nu\sigma}-\eta_{\mu\sigma}\eta_{\nu\rho}),\eqno(6)$$ while the extended Snyder algebra (2) takes the form $$[\hat x_{\mu\nu},\hat x_{\rho\sigma}]=i\lambda\, C_{\mu\nu,\rho\sigma,\alpha\beta}\,\hat x_{\alpha\beta},\eqno(7)$$ where $C_{\mu\nu,\rho\sigma,\alpha\beta}$ are the structure constants of the $so(1,N)$ algebra, $$\eqalignno{C_{\mu\nu,\rho\sigma,\alpha\beta}={1\over2}\Big[&-\eta_{\nu\rho}(\eta_{\mu\alpha}\eta_{\sigma\beta}-\eta_{\sigma\alpha}\eta_{\mu\beta})+\eta_{\mu\sigma}(\eta_{\rho\alpha}\eta_{\nu\beta}-\eta_{\nu\alpha}\eta_{\rho\beta})&\cr &\ +\eta_{\mu\rho}(\eta_{\nu\alpha}\eta_{\sigma\beta}-\eta_{\sigma\alpha}\eta_{\nu\beta})-\eta_{\nu\sigma}(\eta_{\rho\alpha}\eta_{\mu\beta}-\eta_{\mu\alpha}\eta_{\rho\beta})\Big],&(8)}$$ which satisfy the symmetry properties $C_{\mu\nu,\rho\sigma,\alpha\beta}=-C_{\nu\mu,\rho\sigma,\alpha\beta}=-C_{\mu\nu,\sigma\rho,\alpha\beta}=-C_{\mu\nu,\rho\sigma,\beta\alpha}=-C_{\rho\sigma,\mu\nu,\alpha\beta}=-C_{\mu\nu,\alpha\beta,\rho\sigma}$. In general, if the coordinates $\hat x_\mu$ generate a Lie algebra $[\hat x_\mu,\hat x_\nu]=iC_{\mu\nu\lambda}\hat x_\lambda$ with structure constants $C_{\mu\nu\lambda}$, then the universal realization of $\hat x_\mu$ corresponding to Weyl-symmetric ordering is given by [16] $$\hat x_\mu=x_\alpha\phi_{\alpha\mu}(p)=x_\alpha\left({{\cal C}\over 1-e^{-{\cal C}}}\right)_{\mu\alpha},\eqno(9)$$ where ${\cal C}_{\mu\nu}=C_{\alpha\mu\nu}p_\alpha$.
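The quoted symmetry properties of the structure constants (8) can be verified numerically by building the full tensor $C_{\mu\nu,\rho\sigma,\alpha\beta}$ from its definition; a sketch with $N=3$, i.e. a $4\times4$ metric of signature $(-,+,+,+)$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# C[m,n,r,s,a,b] built term by term from eq. (8)
C = 0.5 * (
    - np.einsum('nr,ma,sb->mnrsab', eta, eta, eta)
    + np.einsum('nr,sa,mb->mnrsab', eta, eta, eta)
    + np.einsum('ms,ra,nb->mnrsab', eta, eta, eta)
    - np.einsum('ms,na,rb->mnrsab', eta, eta, eta)
    + np.einsum('mr,na,sb->mnrsab', eta, eta, eta)
    - np.einsum('mr,sa,nb->mnrsab', eta, eta, eta)
    - np.einsum('ns,ra,mb->mnrsab', eta, eta, eta)
    + np.einsum('ns,ma,rb->mnrsab', eta, eta, eta)
)

# antisymmetry within each index pair ...
assert np.allclose(C, -C.transpose(1, 0, 2, 3, 4, 5))   # (mu nu) -> (nu mu)
assert np.allclose(C, -C.transpose(0, 1, 3, 2, 4, 5))   # (rho sigma) -> (sigma rho)
assert np.allclose(C, -C.transpose(0, 1, 2, 3, 5, 4))   # (alpha beta) -> (beta alpha)
# ... and under exchange of the pairs themselves
assert np.allclose(C, -C.transpose(2, 3, 0, 1, 4, 5))   # (mu nu) <-> (rho sigma)
assert np.allclose(C, -C.transpose(0, 1, 4, 5, 2, 3))   # (rho sigma) <-> (alpha beta)
```

The tensor is thus antisymmetric under every exchange listed in the text.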
This realization enjoys the property $$e^{ik_\mu\hat x_\mu}\triangleright1=e^{ik_\mu x_\mu},\qquad k_\mu\in{\bf R},\eqno(10)$$ where the action $\triangleright$ is given by $$x_\mu\triangleright f(x_\alpha)=x_\mu f(x_\alpha),\qquad p_\mu\triangleright f(x_\alpha)=-i{\partial f(x_\alpha)\over\partial x_\mu},\eqno(11)$$ or, in our case, $$x_{\mu\nu}\triangleright f(x_{\alpha\beta})=x_{\mu\nu}\,f(x_{\alpha\beta}),\qquad p_{\mu\nu}\triangleright f(x_{\alpha\beta})=-i{\partial f(x_{\alpha\beta})\over\partial x_{\mu\nu}}=[p_{\mu\nu},f(x_{\alpha\beta})].\eqno(12)$$ Hence, the corresponding Weyl realization of $\hat x_{\mu\nu}$ in terms of the extended Heisenberg algebra (6) reads [16] $$\hat x_{\mu\nu}=x_{\alpha\beta}\left({\lambda\,{\cal C}\over 1-e^{-\lambda{\cal C}}}\right)_{\mu\nu,\alpha\beta}=x_{\mu\nu}+{\lambda\over2}\,x_{\alpha\beta}\,{\cal C}_{\mu\nu,\alpha\beta}+{\lambda^2\over12}\,x_{\alpha\beta}\left({\cal C}^2\right)_{\mu\nu,\alpha\beta}+{\cal O}(\lambda^4),\eqno(13)$$ where $$\eqalignno{&{\cal C}_{\mu\nu,\alpha\beta}={1\over2}\,C_{\rho\sigma,\mu\nu,\alpha\beta}\,p_{\rho\sigma}={1\over2}(\eta_{\mu\alpha}p_{\nu\beta}-\eta_{\mu\beta}p_{\nu\alpha}+\eta_{\nu\beta}p_{\mu\alpha}-\eta_{\nu\alpha}p_{\mu\beta}),&\cr &\left({\cal C}^2\right)_{\mu\nu,\alpha\beta}={1\over2}\sum_{k=0}^2\pmatrix{2\cr k\cr}\Big((p^k)_{\mu\alpha}(p^{2-k})_{\nu\beta}-(p^{2-k})_{\mu\beta}(p^k)_{\nu\alpha}\Big),&(14)}$$ and $p_{\mu\nu}$ is written in matrix notation. Inserting $\cal C$ in (13), we find, up to order $\lambda^2$, $$\hat x_{\mu\nu}=x_{\mu\nu}+{\lambda\over 2}(x_{\mu\alpha}p_{\nu\alpha}-x_{\nu\alpha}p_{\mu\alpha})-{\lambda^2\over 12}(x_{\mu\alpha}p_{\nu\beta}p_{\alpha\beta}-x_{\nu\alpha}p_{\mu\beta}p_{\alpha\beta}-2x_{\alpha\beta}p_{\mu\alpha}p_{\nu\beta}).\eqno(15)$$ One has then $$\eqalignno{[\hat x_{\mu\nu},p_{\rho\sigma}]&=i(\eta_{\mu\rho}\eta_{\nu\sigma}-\eta_{\mu\sigma}\eta_{\nu\rho})+{i\lambda\over2}(\eta_{\mu\rho}p_{\nu\sigma}-\eta_{\nu\rho}p_{\mu\sigma}+\eta_{\nu\sigma}p_{\mu\rho}-\eta_{\mu\sigma}p_{\nu\rho})&\cr &\quad-{i\lambda^2\over 12}(\eta_{\mu\rho}p_{\nu\alpha}p_{\sigma\alpha}-\eta_{\mu\sigma}p_{\nu\alpha}p_{\rho\alpha}-\eta_{\nu\rho}p_{\mu\alpha}p_{\sigma\alpha}+\eta_{\nu\sigma}p_{\mu\alpha}p_{\rho\alpha}+2p_{\mu\rho}p_{\nu\sigma}-2p_{\nu\rho}p_{\mu\sigma}).&(16)}$$ One can rewrite eq.~(15) in terms of its components as $$\eqalignno{\hat x_i&=x_i+{\lambda\over2}\big(x_kp_{ik}-\beta x_{ik}p_k\big)-{\lambda^2\over12}\big(x_k p_{kl}p_{il}+\beta(-x_k p_kp_i+x_ip_k^2-x_{ik}p_lp_{kl}-2x_{kl}p_kp_{il})\big),&\cr \hat x_{ij}&=x_{ij}+{\lambda\over2}\big(x_ip_j+x_{ik}p_{jk}-(i\leftrightarrow j)\big)-{\lambda^2\over12}\big(x_{ik}p_{jl}p_{kl}-x_{kl}p_{ik}p_{jl}-x_ip_kp_{jk}+2x_kp_ip_{jk}&\cr &\quad+\beta x_{ik}p_kp_j-(i\leftrightarrow j)\big).&(17)}$$ In the limit $\lambda\to0$ with $\lambda\beta=L_p^2$ fixed, the algebra (2) becomes the DFR (Moyal) algebra [3] and the realization (15) takes the form $$\hat x_i=x_i-{L_p^2\over2}\,x_{ik}p_k,\qquad\hat x_{ij}=x_{ij}.\eqno(18)$$ The corresponding Lorentz generators are $$M_{ij}=x_ip_j-x_jp_i+x_{ik}p_{jk}-x_{jk}p_{ik}.\eqno(19)$$ \vfil\eject \section{3. Coproduct and star product in the Weyl realization} In order to compute the coproduct of the Hopf algebra, we use the formalism introduced in [17].
We define a function ${\cal P}_{\mu\nu}(tk_{\alpha\beta})$ that satisfies the differential equation $${d{\cal P}_{\mu\nu}\over dt}={i\over2}[p_{\mu\nu},k_{\rho\sigma}\hat x_{\rho\sigma}]\big|_{p\to{\cal P}(tk)}=k_{\rho\sigma}\Phi_{\mu\nu,\rho\sigma}({\cal P}_{\alpha\beta}),\eqno(20)$$ with initial condition ${\cal P}_{\mu\nu}(0)=q_{\mu\nu}$. The function $\Phi_{\mu\nu,\rho\sigma}(p_{\alpha\beta})$ is defined from (15) by $\hat x_{\mu\nu}=x_{\rho\sigma}\Phi_{\rho\sigma,\mu\nu}$. In our case, equation (20) takes the form $${d{\cal P}_{\mu\nu}\over dt}=k_{\mu\nu}-{\lambda\over2}(k_{\mu\alpha}{\cal P}_{\nu\alpha}-k_{\nu\alpha}{\cal P}_{\mu\alpha})-{\lambda^2\over12}(k_{\mu\alpha}{\cal P}_{\alpha\beta}{\cal P}_{\nu\beta}-k_{\nu\alpha}{\cal P}_{\alpha\beta}{\cal P}_{\mu\beta}-2k_{\alpha\beta}{\cal P}_{\mu\alpha}{\cal P}_{\nu\beta}),\eqno(21)$$ and with the given initial condition has solution $$\eqalignno{{\cal P}_{\mu\nu}&=q_{\mu\nu}+tk_{\mu\nu}-{\lambda t\over2}\Big(k_{\mu\alpha}q_{\nu\alpha}-k_{\nu\alpha}q_{\mu\alpha}\Big)-{\lambda^2\over12}\Big(\big(k_{\mu\alpha}q_{\alpha\beta}q_{\nu\beta}-k_{\nu\alpha}q_{\alpha\beta}q_{\mu\beta}-2k_{\alpha\beta}q_{\mu\alpha}q_{\nu\beta}\big)t&\cr &\quad+\big(k_{\mu\alpha}k_{\alpha\beta}q_{\nu\beta}-k_{\nu\alpha}k_{\alpha\beta}q_{\mu\beta}-2k_{\mu\alpha}k_{\nu\beta}q_{\alpha\beta}\big)t^2\Big).&(22)}$$ We can now define $P_{\mu\nu}(k_{\mu\nu},q_{\mu\nu})\equiv{\cal P}_{\mu\nu}(t=1)$, so that $$\eqalignno{P_{\mu\nu}(k_{\mu\nu},q_{\mu\nu})&=\ k_{\mu\nu}+q_{\mu\nu}-{\lambda\over2}\Big(k_{\mu\alpha}q_{\nu\alpha}-k_{\nu\alpha}q_{\mu\alpha}\Big)-{\lambda^2\over12}\Big(k_{\mu\alpha}q_{\alpha\beta}q_{\nu\beta}-k_{\nu\alpha}q_{\alpha\beta}q_{\mu\beta}-2k_{\alpha\beta}q_{\mu\alpha}q_{\nu\beta}&\cr &\quad+k_{\mu\alpha}k_{\alpha\beta}q_{\nu\beta}-k_{\nu\alpha}k_{\alpha\beta}q_{\mu\beta}-2k_{\mu\alpha}k_{\nu\beta}q_{\alpha\beta}\Big).&(23)}$$ Defining then ${\cal K}_{\mu\nu}(k_{\mu\nu})\equiv P_{\mu\nu}(k_{\mu\nu},q_{\mu\nu}=0)$, one has ${\cal K}_{\mu\nu}=k_{\mu\nu}$, and therefore also its inverse function is ${\cal K}^{-1}_{\mu\nu}(k_{\mu\nu})=k_{\mu\nu}$.
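The addition law (23) can be probed numerically. A sketch (Euclidean contractions for simplicity, with random antisymmetric $k_{\mu\nu}$, $q_{\mu\nu}$) checks that $P$ reduces to the ordinary sum at $\lambda=0$, that $P(k,0)=k$ and $P(0,q)=q$, and that the result stays antisymmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 4, 0.3

def rand_antisym():
    a = rng.normal(size=(n, n))
    return a - a.T

def P(K, Q, lam):
    """Eq. (23) in matrix form for antisymmetric K, Q (K^T = -K, Q^T = -Q)."""
    A = K @ Q.T                               # k_{ma} q_{na}
    out = K + Q - lam / 2 * (A - A.T)
    B = K @ Q @ Q                             # from k_{ma} q_{ab} q_{nb} = -(KQQ)_{mn}
    C = K @ K @ Q                             # from k_{ma} k_{ab} q_{nb} = -(KKQ)_{mn}
    bracket = -B + B.T + 2 * (Q @ K @ Q) - C + C.T + 2 * (K @ Q @ K)
    return out - lam**2 / 12 * bracket

K, Q, Z = rand_antisym(), rand_antisym(), np.zeros((n, n))
assert np.allclose(P(K, Q, 0.0), K + Q)           # lambda -> 0: ordinary sum
assert np.allclose(P(K, Z, lam), K)               # P(k, 0) = k
assert np.allclose(P(Z, Q, lam), Q)               # P(0, q) = q
assert np.allclose(P(K, Q, lam), -P(K, Q, lam).T)  # antisymmetry is preserved
```

In particular, $P(k,0)=k$ is the statement ${\cal K}_{\mu\nu}=k_{\mu\nu}$ used above.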
It can be shown that the generalized momentum addition law is given by [17] $$k_{\mu\nu}\oplus q_{\mu\nu}\equiv{\cal D}_{\mu\nu}(k_{\alpha\beta},q_{\alpha\beta})=P_{\mu\nu}({\cal K}^{-1}_{\alpha\beta},q_{\alpha\beta}),\eqno(24)$$ and hence in our case ${\cal D}_{\mu\nu}(k_{\alpha\beta},q_{\alpha\beta})=P_{\mu\nu}(k_{\alpha\beta},q_{\alpha\beta})$. This yields the coproduct $$\eqalignno{\Delta p_{\mu\nu}&={\cal D}_{\mu\nu}(p_{\mu\nu}\otimes1,1\otimes p_{\mu\nu})=\Delta_0p_{\mu\nu}-{\lambda\over2}\Big(p_{\mu\alpha}\otimes p_{\nu\alpha}-p_{\nu\alpha}\otimes p_{\mu\alpha}\Big)-{\lambda^2\over12}\Big(p_{\mu\alpha}\otimes p_{\alpha\beta}p_{\nu\beta}&\cr &\quad-p_{\nu\alpha}\otimes p_{\alpha\beta}p_{\mu\beta}-2p_{\alpha\beta}\otimes p_{\mu\alpha}p_{\nu\beta}+p_{\mu\alpha}p_{\alpha\beta}\otimes p_{\nu\beta}-p_{\nu\alpha}p_{\alpha\beta}\otimes p_{\mu\beta}-2p_{\mu\alpha}p_{\nu\beta}\otimes p_{\alpha\beta}\Big),&(25)}$$ with $\Delta_0p_{\mu\nu}=p_{\mu\nu}\otimes1+1\otimes p_{\mu\nu}$. It is straightforward to check explicitly the coassociativity of this coproduct. It is also easy to see that the antipode is trivial, $S(p_{\mu\nu})=-p_{\mu\nu}$. Recalling our definitions $\hat x_i=\sqrt\beta\,\hat x_{iN}$ and $p_i=p_{iN}/\sqrt\beta$, we can write the functions ${\cal D}_{\alpha\beta}$ in terms of their components, namely $$\eqalignno{{\cal D}_i(k,q)=&\ k_i+q_i-{\lambda\over2}\Big[k_jq_{ij}-k_{ij}q_j\Big]+{\lambda^2\over12}\Big[\beta(k_ik_jq_j-k_j^2q_i)-k_jk_{jk}q_{ik}+2k_{ik}k_jq_{jk}&\cr &+k_{ij}k_{jk}q_k+\beta(k_jq_jq_i-k_iq_j^2)+k_{ij}q_{jk}q_k-2k_{jk}q_kq_{ij}-k_jq_{jk}q_{ik}\Big],&(26)}$$ $$\eqalignno{{\cal D}_{ij}(k,q)=&\ k_{ij}+q_{ij}-{\lambda\over2}\Big[k_{ik}q_{jk}+\beta k_iq_j-(i\leftrightarrow j)\Big]+{\lambda^2\over12}\Big[k_{ik}k_{jl}q_{kl}-k_{ik}k_{kl}q_{jl}&\cr &+\beta(k_ik_kq_{jk}-k_{ik}k_kq_j-2k_ik_{jk}q_k)+k_{kl}q_{ik}q_{jl}-k_{ik}q_{kl}q_{jl}&\cr &-\beta(k_{ik}q_kq_j-k_iq_kq_{jk}-2k_kq_iq_{jk})-(i\leftrightarrow j)\Big].&(27)}$$ The functions ${\cal D}(k,q)$ satisfy the symmetry properties $${\cal D}_i(q,k)\big|_\lambda={\cal D}_i(k,q)\big|_{-\lambda},\qquad{\cal D}_{ij}(q,k)\big|_\lambda={\cal D}_{ij}(k,q)\big|_{-\lambda}.\eqno(28)$$ It also holds that $$e^{{i\over2}k_{\mu\nu}\hat x_{\mu\nu}}e^{{i\over2}q_{\rho\sigma}\hat x_{\rho\sigma}}=e^{{i\over2}{\cal D}_{\mu\nu}(k,q)\hat x_{\mu\nu}},\eqno(29)$$ and $$e^{{i\over2}k_{\mu\nu}x_{\mu\nu}}\star e^{{i\over2}q_{\rho\sigma}x_{\rho\sigma}}=e^{{i\over2}k_{\mu\nu}\hat x_{\mu\nu}}e^{{i\over2}q_{\rho\sigma}\hat x_{\rho\sigma}}\triangleright1=e^{{i\over2}{\cal D}_{\mu\nu}(k,q)\hat x_{\mu\nu}}\triangleright1=e^{{i\over2}{\cal D}_{\mu\nu}(k,q)x_{\mu\nu}}.\eqno(30)$$ Moreover, we can write $$\eqalignno{&e^{{i\over2}k_{\mu\nu}\hat x_{\mu\nu}}=e^{ik_i\hat x_i+{i\over2}k_{ij}\hat x_{ij}},&\cr &e^{ik_ix_i+{i\over2}k_{ij}x_{ij}}\star e^{iq_kx_k+{i\over2}q_{kl}x_{kl}}=e^{i{\cal D}_ix_i+{i\over2}{\cal D}_{ij}x_{ij}}.&(31)}$$ In particular, from (26) and (27) one can obtain the star product for plane waves.
Notice that the star product of two translations will in general also have a component in the direction of the rotations, $$\eqalignno{e^{ik_ix_i}\star e^{iq_jx_j}=&\ e^{i\left[k_i+q_i-{1\over12}\lambda^2\beta(q_j^2k_i-k_jq_jq_i+k_j^2q_i-k_jq_jk_i)\right]x_i-{i\over2}\lambda\beta k_iq_jx_{ij}},&\cr e^{{i\over2}k_{ij}x_{ij}}\star e^{{i\over2}q_{kl}x_{kl}}=&\ e^{{i\over2}\left[k_{ij}+q_{ij}-\lambda k_{ik}q_{jk}-{1\over6}\lambda^2(k_{ik}q_{kl}q_{jl}-k_{kl}q_{ik}q_{jl}+k_{ik}k_{kl}q_{jl}-k_{ik}k_{jl}q_{kl})\right]x_{ij}},&\cr e^{ik_kx_k}\star e^{{i\over2}q_{ij}x_{ij}}=&\ e^{i\left[k_i-{\lambda\over2}k_jq_{ij}-{1\over12}\lambda^2k_jq_{jk}q_{ik}\right]x_i+{i\over2}\left[q_{ij}+{1\over6}\lambda^2\beta k_ik_kq_{jk}\right]x_{ij}},&\cr e^{{i\over2}k_{ij}x_{ij}}\star e^{iq_kx_k}=&\ e^{i\left[q_i+{\lambda\over2}k_{ij}q_j+{1\over12}\lambda^2k_{ij}k_{jk}q_k\right]x_i+{i\over2}\left[k_{ij}-{1\over6}\lambda^2\beta k_{ik}q_kq_j\right]x_{ij}}.&(32)}$$ This star product is associative. One can also check that the star products of the coordinates $x_i$ and $x_{ij}$ satisfy the extended Snyder algebra. In fact, following [7], denoting by $k$ the vector $k_i$, by $\bl$ the tensor $l_{ij}$, and so on, and defining $e_{k,\bl}=e^{k_ix_i+\bl_{jk}x_{jk}}$, the star product of the coordinates can be evaluated as follows: $$\eqalignno{x_i\star x_j=&\int dk\,dq\,d\bl\,d\br\,\delta(k)\delta(q)\delta(\bl)\delta(\br)\,\partial_{k_i}\partial_{q_j}(e_{k,\bl}\star e_{q,\br})=\hat x_i\triangleright x_j=x_ix_j+i{\lambda\beta\over2}x_{ij},&\cr x_{ij}\star x_{kl}=&\int dk\,dq\,d\bl\,d\br\,\delta(k)\delta(q)\delta(\bl)\delta(\br)\,\partial_{\bl_{ij}}\partial_{\br_{kl}}(e_{k,\bl}\star e_{q,\br})=\hat x_{ij}\triangleright x_{kl}=x_{ij}x_{kl}+i{\lambda\over2}\Big(\eta_{ik}x_{jl}-\eta_{jk}x_{il}&\cr&\quad-\eta_{il}x_{jk}+\eta_{jl}x_{ik}\Big),&\cr x_k\star x_{ij}=&\int dk\,dq\,d\bl\,d\br\,\delta(k)\delta(q)\delta(\bl)\delta(\br)\,\partial_{k_k}\partial_{\br_{ij}}(e_{k,\bl}\star e_{q,\br})=\hat x_k\triangleright x_{ij}=x_kx_{ij}-i{\lambda\over2}\Big(\eta_{ik}x_j-\eta_{jk}x_i\Big),&\cr x_{ij}\star x_k=&\int dk\,dq\,d\bl\,d\br\,\delta(k)\delta(q)\delta(\bl)\delta(\br)\,\partial_{\bl_{ij}}\partial_{q_k}(e_{k,\bl}\star e_{q,\br})=\hat x_{ij}\triangleright x_k=x_{ij}x_k+i{\lambda\over2}\Big(\eta_{ik}x_j-\eta_{jk}x_i\Big).&(33)}$$ Therefore, $$\eqalignno{&[x_i,x_j]_\star=i\lambda\beta x_{ij},\qquad[x_{ij},x_k]_\star=i\lambda(\eta_{ik}x_j-\eta_{jk}x_i),&\cr &\ \ [x_{ij},x_{kl}]_\star=i\lambda(\eta_{ik}x_{jl}-\eta_{jk}x_{il}-\eta_{il}x_{jk}+\eta_{jl}x_{ik}),&(34)}$$ which is isomorphic to the algebra (2). \bigbreak \section{4. The twist for the Weyl realization} In this section, we construct the twist operator at second order in $\lambda$, using a perturbative approach. The twist is defined as a bilinear operator such that $\Delta h={\cal F}\Delta_0h{\cal F}^{-1}$ for each $h\in so(1,N)$. The twist in the Hopf algebroid sense can be computed by means of the formula [10,18] $${\cal F}^{-1}\equiv e^F=e^{-{i\over2}p_{\mu\nu}\otimes x_{\mu\nu}}e^{{i\over2}p_{\rho\sigma}\otimes\hat x_{\rho\sigma}}.\eqno(35)$$ By the Baker--Campbell--Hausdorff formula $e^Ae^B=e^{A+B+{1\over2}[A,B]+\dots}$, one gets $$F={i\over2}\,p_{\mu\nu}\otimes(\hat x_{\mu\nu}-x_{\mu\nu})-{1\over8}\,p_{\mu\nu}p_{\rho\sigma}\otimes[x_{\mu\nu},\hat x_{\rho\sigma}]+\dots,\eqno(36)$$ where we can safely ignore further terms, since it can be checked explicitly that they contribute only at order $\lambda^3$.
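The truncation of the Baker--Campbell--Hausdorff series used in (36) can be sanity-checked numerically: for $A=tX$, $B=tY$, the defect $e^Ae^B-e^{A+B+{1\over2}[A,B]}$ must scale as $t^3$. A sketch with random matrices and a Taylor-series matrix exponential:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def defect(t):
    A, B = t * X, t * Y
    bch2 = A + B + 0.5 * (A @ B - B @ A)      # BCH truncated at second order
    return np.linalg.norm(expm(A) @ expm(B) - expm(bch2))

# halving t should reduce the defect by roughly 2^3 = 8,
# confirming that the neglected terms are O(t^3)
r = defect(2e-2) / defect(1e-2)
assert 6.5 < r < 9.5
```

The observed ratio close to $8$ confirms that the dropped terms are cubic in the expansion parameter, as claimed for the $\lambda^3$ remainder of (36).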
Substituting (15) in (36), one obtains $$F={i\lambda\over2}\,p_{\alpha\gamma}\otimes x_{\alpha\beta}p_{\gamma\beta}-{i\lambda^2\over24}\big(2p_{\alpha\gamma}\otimes x_{\alpha\beta}p_{\beta\delta}p_{\gamma\delta}-2p_{\gamma\delta}\otimes x_{\alpha\beta}p_{\alpha\gamma}p_{\beta\delta}-p_{\alpha\gamma}p_{\beta\delta}\otimes x_{\alpha\beta}p_{\gamma\delta}+p_{\alpha\gamma}p_{\delta\gamma}\otimes x_{\alpha\beta}p_{\delta\beta}\big).\eqno(37)$$ Using the Hadamard formula $e^ABe^{-A}=B+[A,B]+{1\over2}[A,[A,B]]+\dots$, it is easy to check that $${\cal F}\Delta_0p_{\mu\nu}{\cal F}^{-1}=\Delta p_{\mu\nu},\eqno(38)$$ with $\Delta p_{\mu\nu}$ given in (25), as expected. \section{5. Generic realizations} We now consider the most general realization of the commutation relations (2) in terms of the elements of the extended Heisenberg algebra (3), up to second order in $\lambda$. Of course, this will deform the commutation relations between coordinates and momenta in (5).
The generic form of the Lorentz-covariant combinations of the generators of the algebra (3), linear in $x_i$, $x_\ij$, up to order $\l^2$ is given by\footnote{$^4$}{In principle, one may add further terms to (39), namely the terms $x_i p_{kl}p_{kl}$ and $x_{kl} p_{kl} p_i$ to $\hat x_i$, and $x_{ij} p_k p_k$, $x_{ij} p_{kl} p_{kl}$, $x_{kl} p_{kl} p_{ij}$, $x_k p_k p_{ij}$ to $\hat x_{ij}$. However these terms must vanish if one requires that the Snyder algebra is satisfied.} $$\eqalignno{\hat x_i&=x_i+\l\big(\b c_0x_{ik}p_k+c_1x_kp_{ik}\big)+\l^2\big(\b(c_2x_ip_k^2+c_3x_kp_kp_i+c_4x_{ik}p_{kl}p_l+c_5x_{kl}p_kp_{il})+c_6x_kp_{kl}p_{il}\big),&\cr&\cr \hat x_\ij&=x_\ij+\l\big(d_0x_{ik}p_{jk}+d_1x_ip_j-(i\leftrightarrow j)\big)+\l^2\big(\b d_2x_{ik}p_kp_j+d_3x_{ik}p_{kl}p_{jl}+d_4x_{kl}p_{ik}p_{jl}+d_5x_ip_kp_{jk}&\cr &\quad+d_6x_kp_{ik}p_j-(i\leftrightarrow j)\big).&(39)}$$ In order to satisfy (2) to first order in $\l$ one must have $$c_0=-\ha,\qquad d_0=\ha,\qquad c_1+d_1=1.\eqno(40)$$ Hence, at this order one has one free parameter. In particular, in the Weyl realization (17), $d_1=c_1=\ha$. To second order in $\l$, one has ten new parameters $c_2,\dots,c_6$, $d_2,\dots,d_6$ that must satisfy the six independent relations $$\eqalignno{&\ {c_1\over2}-2c_2+c_3=d_1,\qquad{c_1\over2}+c_4+c_5=\ha,\qquad d_3-2d_4=-{1\over4},&\cr &c_5-d_2={1\over4},\qquad{c_1\over2}+c_6-d_6=0,\qquad{c_1\over2}-c_1d_1+c_6+d_5=0.&(41)}$$ Hence up to second order one has five free parameters. 
For example, one may choose as free parameters $c_1$, $c_2$, $c_4$, $d_4$ and $d_5$, so that $d_1=1-c_1$ and $$\eqalignno{&c_3=1-{3c_1\over2}+2c_2,\quad c_5=\ha-{c_1\over2}-c_4,\quad c_6={c_1\over2}-c_1^2-d_5,&\cr &\ \ d_2={1\over4}-{c_1\over2}-c_4,\quad d_3=-{1\over4}+2d_4,\quad d_6=c_1-c_1^2-d_5.&(42)}$$ It is easy to verify that the coefficients of the Weyl realization (17) satisfy the above relations with $c_1=\ha$, $c_2=-c_4=-d_4=-d_5=-{1\over12}$. Note that setting $\b=0$ in (39) one obtains realizations of the Poincar\'e algebra. For example, the Weyl realization for the operators $\hat x_i$ and $\hat x_\ij$ of the Poincar\'e algebra becomes $$\eqalignno{\hat x_i&=x_i+{\l\over2}\,x_kp_{ik}-{\l^2\over12}\,x_kp_{kl}p_{il},&\cr \hat x_\ij&=x_\ij+\[{\l\over2}\big(x_ip_j+x_{ik}p_{jk}\big)-{\l^2\over12}\big(x_{ik}p_{jl}p_{kl}-x_{kl}p_{ik}p_{jl}-x_ip_kp_{jk}+2x_kp_ip_{jk}\big) -(i\leftrightarrow j)\].&(43)}$$ Through the same procedure as in the previous section, one can determine the coproduct for the generic realization (39). The differential equations for ${\cal P}_i(tk)$ and ${\cal P}_\ij(tk)$ are $$\eqalignno{{d{\cal P}_i\over dt}&=i\Big[p_i,k_k\hat x_k+\ha k_{kl}\hat x_{kl}\Big]\bigg|_{p\to{\cal P}(tk)},&\cr {d{\cal P}_\ij\over dt}&=i\Big[p_\ij,k_k\hat x_k+\ha k_{kl}\hat x_{kl}\Big]\bigg|_{p\to{\cal P}(tk)},&(44)}$$ with initial conditions ${\cal P}_i(0)=q_i$ and ${\cal P}_\ij(0)=q_\ij$. 
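Since the relations (41) and the parametrization (42) are easy to mis-transcribe, they can be cross-checked symbolically. The following is a minimal sketch using sympy (an assumption: sympy is available); it substitutes (42), with $d_1=1-c_1$ from (40), into (41) and also verifies the quoted Weyl values:

```python
import sympy as sp

# free parameters of the generic realization (39)
c1, c2, c4, d4, d5 = sp.symbols('c1 c2 c4 d4 d5')

d1 = 1 - c1  # from (40)
# dependent coefficients as given in (42)
c3 = 1 - sp.Rational(3, 2) * c1 + 2 * c2
c5 = sp.Rational(1, 2) - c1 / 2 - c4
c6 = c1 / 2 - c1**2 - d5
d2 = sp.Rational(1, 4) - c1 / 2 - c4
d3 = sp.Rational(-1, 4) + 2 * d4
d6 = c1 - c1**2 - d5

# the six independent relations (41), written as expressions that must vanish
relations = [
    c1 / 2 - 2 * c2 + c3 - d1,
    c1 / 2 + c4 + c5 - sp.Rational(1, 2),
    d3 - 2 * d4 + sp.Rational(1, 4),
    c5 - d2 - sp.Rational(1, 4),
    c1 / 2 + c6 - d6,
    c1 / 2 - c1 * d1 + c6 + d5,
]
assert all(sp.simplify(r) == 0 for r in relations)

# Weyl realization (17): c1 = 1/2, c2 = -c4 = -d4 = -d5 = -1/12
weyl = {c1: sp.Rational(1, 2), c2: sp.Rational(-1, 12),
        c4: sp.Rational(1, 12), d4: sp.Rational(1, 12),
        d5: sp.Rational(1, 12)}
assert all(r.subs(weyl) == 0 for r in relations)
```

All six relations reduce identically to zero, confirming that (42) exhausts the solutions of (41) and leaves five free parameters.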
After some calculations, one can write down the functions $\cD_i(k,q)$ and $\cD_\ij(k,q)$ that appear in the star product of plane waves, $$\eqalignno{\cD_i(k,q)&=k_i+q_i+\l\(-c_1k_jq_\ij+d_1k_{ij}q_j\)+{\l^2\over2}\Big[\b\(c_0c_1+c_3\)k_j^2q_i+\b(-c_0c_1+2c_2+c_3)k_ik_jq_j&\cr &+\(c_1^2-c_1d_0-c_1d_1+c_6+d_6\)k_jk_{jk}q_{ik}+(c_1d_0+c_1d_1+c_6-d_5)k_{ik}k_jq_{jk}&\cr &+\(d_1^2+d_5-d_6\)k_\ij k_{jk}q_k+2\b c_2k_iq_j^2+2\b c_3k_jq_jq_i+2d_5k_{ij}q_{jk}q_k+2d_6k_{jk}q_jq_{ik}&\cr &+2c_6k_jq_{jk}q_{ik}\Big],&(45)}$$ and $$\eqalignno{\cD_\ij(k,q)&=k_\ij+q_\ij+\l\(-d_0k_{ik}q_{jk}+\b c_0k_iq_j-(i\leftrightarrow j)\)+{\l^2\over2}\Big[\b\(-c_0c_1+c_4-c_5\)k_ik_kq_{jk}&\cr &+(-d_0^2+d_3+2d_4)k_{ik}k_{kl}q_{jl}+\(d_0^2+d_3\)k_{ik}k_{jl}q_{kl}+\b\(c_0d_0+c_5+d_2\)k_{ik}k_kq_j&\cr &+\b(c_0d_0+c_0d_1+c_4-d_2)k_ik_{jk}q_k+2\b d_2k_{ik}q_kq_j+2d_3k_{ik}q_{kl}q_{jl}+2d_4k_{kl}q_{ik}q_{jl}&\cr &+2\b c_4k_iq_kq_{jk}+2\b c_5k_kq_{ik}q_j-(i\leftrightarrow j)\Big].&(46)}$$ From these functions one can easily obtain the star product and the coproduct in the general case, see (25) and (31). In particular, for $c_0=-\ha$ and $k_\ij=q_\ij=0$, one has $$e^{ik_ix_i}\star e^{iq_jx_j}=e^{i\[k_i+q_i+{1\over2}\l^2\b\(2c_2q_j^2k_i+2c_3k_jq_jq_i+(c_3-{c_1\over2})k_j^2q_i+(2c_2+c_3+{c_1\over2})k_jq_jk_i\)\]x_i-{i\over2}\l\b k_iq_jx_{ij}}, \eqno(47)$$ which for $c_1=\ha$, $c_2=-c_3=-{1\over12}$ reduces to the first relation in (32). \section{6. Comparison with the Girelli-Livine approach} The authors of [7] studied our model in 3D Euclidean space using geometric methods, with a very different parametrization, adapted to the coset space nature of the Snyder momentum space. 
In our notations, their star product for plane waves takes the form, at second order in $\l$, $$\eqalign{e^{ik_ix_i}\star e^{iq_jx_j} =\exp\[i\(k_i+q_i+{\l^2\b\over2}(k_jq_jk_i+k_j^2q_i+2k_jq_jq_i)\)x_i-i{\l\b\over2}k_iq_jx_{ij}\].}\eqno(48)$$ This expression corresponds to a realization (39) with $c_0=-\ha$, $d_0=\ha$ and $c_1=c_2=0$. It follows from (42) that $c_3=1$, but the other coefficients are not determined and depend on three free parameters. If one also requires $d_5=0$, this may be called a generalized Snyder realization, since it obeys all the commutation relations of the original Snyder model [2], given by (2) and (5). Note that the momenta $p_\ij$ do not appear in these relations. Of course, additional commutation relations are obeyed by the momenta $p_\ij$, but they are not of interest for our considerations. One may consider more general realizations belonging to the previous class, with $c_0=-\ha$, $d_0=\ha$, $c_1=0$ and three free parameters. For example, $c_2=-\ha$ implies $c_3=0$ and gives rise to a realization that, for $d_5=0$, reproduces at order $\b$ the commutation relations of the Maggiore realization introduced in [9]. More generally, these representations generalize those introduced in [10], with arbitrary $c_2$ and $c_3=1+2c_2$. In particular, one can choose the free parameters such that $$\eqalignno{\hat x_i&=x_i+{\l^2\b\over2}\Big[(c_3-1)x_ip_k^2+2c_3x_kp_kp_i\Big]-{\l\b\over2}\hm_{ik}p_k,&\cr \hat x_\ij&=\hm_\ij+\l(x_ip_j-x_jp_i),&(49)}$$ where the $\hm_\ij$ generate the Lorentz algebra $so(1,N-1)$ and $$[\hm_\ij,x_k]=[\hm_\ij,p_k]=0.\eqno(50)$$ For example, in the Weyl realization of $\hm_\ij$, $d_3=-d_4=-{1\over12}$, leaving $c_3$ as a free parameter. In the limit $\b=0$, $\hat x_i$ reduces to $x_i$. \section{7. 
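The identification of (48) with the parametrization of (39) can be verified by comparing the coefficients of the four independent second-order structures in the exponent of (47). A small sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

def star_coeffs(c1, c2, c3):
    """Coefficients multiplying l^2*beta/2 in the exponent of (47),
    ordered as [k_j q_j k_i, k_j^2 q_i, k_j q_j q_i, q_j^2 k_i]."""
    return [2 * c2 + c3 + c1 / 2, c3 - c1 / 2, 2 * c3, 2 * c2]

# Girelli-Livine parametrization: c1 = c2 = 0, hence c3 = 1 by (42)
gl = star_coeffs(F(0), F(0), F(1))
# expected from (48): k_j q_j k_i + k_j^2 q_i + 2 k_j q_j q_i,
# with no q_j^2 k_i term
assert gl == [F(1), F(1), F(2), F(0)]
```

The match of all four coefficients confirms that (48) is the $c_1=c_2=0$, $c_3=1$ member of the family (47).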
Conclusions} The coalgebra usually associated to the Snyder model is noncoassociative, and this fact prevents the definition of a proper Hopf algebra, whose coproduct is by definition coassociative. The reason is that the algebra of the position operators of the Snyder model does not close. However this can be remedied by including the Lorentz generators in the defining algebra [7]. In this way a standard coassociative Hopf algebra can be defined. In this paper we have studied the realizations of this extended algebra in terms of the deformations of an extended Heisenberg algebra, which contains tensorial elements that in the deformation assume the role of Lorentz generators. We have obtained the coproduct, the star product and the twist in the case of a Weyl realization. We have also considered the most general realization of the algebra up to second order in the expansion parameter $\l$ (or equivalently at first order in the Snyder parameter $\b$) and calculated the corresponding coproduct and star product. Although this approach may be considered more rigorous than the standard one from a mathematical point of view, the physical interpretation of the new degrees of freedom, related to the Lorentz generators and their momenta, is still an issue. In ref.~[7] the tensorial coordinates $x_\ij$ were interpreted in a Kaluza-Klein perspective, as coordinates of extra dimensions, hence not identified with Lorentz generators. It is also important to note that the action of noncommutative tensorial coordinates on 1 is defined to give commutative tensorial coordinates (see eqs.~(30) and (33)). The noncommutative tensorial coordinates are related to the parametrization of the dual Lorentz group. This topic is presently being investigated. 
In applications, one may for example build a field theory assuming that the fields $\q(\hat x_{\mu\nu})$ depend only on the spacetime coordinates [7], i.e.\ $\q(x_{\mu\nu})=\phi(\hat x_i)\d(\hat x_\ij)$. In this way one would however recover the usual nonassociative star product. Another possibility is that the extra coordinates parametrize a compactified internal space. In this case associativity would be preserved, but nontrivial physical consequences would presumably arise. We leave the investigation of this possibility for future work. In any case, a field theory based on this formalism could avoid the shortcomings due to the nonassociativity of the star product [12], but different problems can arise because of the intertwining between the position and the extra degrees of freedom [7]. To conclude, we observe that also the standard commutative theory, as well as DFR spacetime [3], can be formulated in this extended framework, as we have observed several times in the text. The investigation of these elementary cases could be a good starting point to better understand the physical implications of the present formalism, in particular in relation with quantum field theory. \vfill\eject \beginref \ref[1] L.J. Garay, \IJMP{A10}, 145 (1995). \ref[2] H.S. Snyder, \PR{71}, 38 (1947). \ref[3] S. Doplicher, K. Fredenhagen and J. E. Roberts, \PL{B331}, 39 (1994); S. Doplicher, K. Fredenhagen and J. E. Roberts, \CMP{172}, 187 (1995). \ref[4] G. Amelino-Camelia, J. Lukierski and A. Nowicki, Phys. At. Nucl. {\bf 61}, 1811 (1998); G. Amelino-Camelia and M. Arzano, \PR{D65}, 084044 (2002). \ref[5] S. Majid, {\it Foundations of quantum group theory}, Cambridge Un. Press 1995. \ref[6] M.V. Battisti and S. Meljanac, \PR{D82}, 024028 (2010). \ref[7] F. Girelli and E. Livine, \JHEP{1103}, 132 (2011). \ref[8] J. Lukierski, D. Meljanac, S. Meljanac, D. 
Pikuti\'c and M. Woronowicz, \PL{B777}, 1 (2018); J. Lukierski, S. Meljanac and M. Woronowicz, \PL{B789}, 82 (2019). \ref[9] M.V. Battisti and S. Meljanac, \PR{D79}, 067505 (2009). \ref[10] S. Meljanac, D. Meljanac, S. Mignemi and R. \v Strajn, \IJMP{A32}, 1750172 (2017). \ref[11] S. Meljanac, D. Meljanac, S. Mignemi, D. Pikuti\'c and R. \v Strajn, \EPJ{C78}, 194 (2018). \ref[12] S. Meljanac, D. Meljanac, S. Mignemi and R. \v Strajn, \PL{B768}, 321 (2017); S. Meljanac, S. Mignemi, J. Trampeti\'c and J. You, \PR{D96}, 045021 (2017); S. Meljanac, S. Mignemi, J. Trampeti\'c and J. You, \PR{D97}, 055041 (2018); A. Franchino-Vi\~nas and S. Mignemi, \PR{D98}, 065010 (2018). \ref[13] S. Mignemi and R. \v Strajn, \PR{D90}, 044019 (2014); S. Mignemi and A. Samsarov, \PL{A381}, 1655 (2017); S. Mignemi and G. Rosati, \CQG{35}, 145006 (2018). \ref[14] J. Kowalski-Glikman and L. Smolin, \PR{D70}, 065020 (2004). \ref[15] H.G. Guo, C.G. Huang and H.T. Wu, \PL{B663}, 270 (2008); R. Banerjee, K. Kumar and D. Roychowdhury, \JHEP{1103}, 060 (2011); S. Mignemi, \CQG{26}, 245020 (2009). \ref[16] N. Durov, S. Meljanac, A. Samsarov and Z. \v Skoda, J. Algebra {\bf 309}, 318 (2007); S. Meljanac, T. Martini\' c-Bila\' c and S.~Kre\v si\' c-Juri\' c, \JMP{61}, 051705 (2020). \ref[17] S. Meljanac, D. Meljanac, A. Samsarov and M. Stoji\'{c}, \MPL{A25}, 579 (2010); S. Meljanac, D. Meljanac, A. Samsarov and M. Stoji\'{c}, \PR{D83}, 065009 (2011). \ref[18] S. Meljanac, D. Meljanac, F. Mercati, D. Pikutic, \PL{B766}, 181 (2017); S. Meljanac, D. Meljanac, A. Pachol, D. Pikuti\'c, \JoP{A50}, 265201 (2017); D. Meljanac, S. Meljanac, S. Mignemi, R. \v Strajn, \PR{D99}, 126012 (2019). \par\endgroup \end
\section{Introduction} The next generation Internet of Things (IoT) is envisioned to support various important applications, including smart home, intelligent transportation, wireless health-care, environment monitoring, etc.\ \cite{ngiot}. The key step to implement the IoT is to ensure that a massive number of IoT devices with heterogeneous energy profiles and quality of service (QoS) requirements can be connected in a spectrally efficient manner, which gives rise to the following two challenges. From the spectral efficiency perspective, it is challenging to support massive connectivity, given the scarce bandwidth resources available for wireless communications. Non-orthogonal multiple access (NOMA) has been recognized as a spectrally efficient solution to support massive connectivity by encouraging spectrum sharing among wireless devices with different QoS requirements \cite{jsacnomaxmine,6666156,NOMAPIMRC}. For example, in conventional orthogonal multiple access (OMA), a delay-sensitive IoT device is allowed to solely occupy a bandwidth resource block, which is not helpful to support massive connectivity and can also result in low spectral efficiency, particularly if this device has a small amount of data to send. By using NOMA, additional users, such as delay-tolerant devices, can be admitted to the channel. As a result, the overall spectral efficiency is improved, and the use of advanced forms of NOMA can ensure that massive connectivity is supported while strictly guaranteeing all devices' QoS requirements \cite{8662677,8986647,8674774}. From the energy perspective, the challenge is due to the fact that some IoT devices might be equipped with continuous power supplies, but there are many other devices which are battery powered and hence severely energy constrained. This challenge motivates the use of two techniques, wireless power transfer (WPT) and backscatter communication (BackCom). The key idea of WPT is to use radio frequency (RF) signals for energy transfer. 
In particular, an energy-constrained IoT device can first carry out energy harvesting by using the RF signals sent by a power station or another non-energy-constrained node in the wireless network, where the harvested energy can be used to power the transmission of the energy-constrained device \cite{Ruizhangbroadcast13,6951347,7081080,8758981}. Similar to WPT, BackCom is another low-power and low-complexity technique to connect energy-constrained devices \cite{7876867,7551180,xiangback}. The key idea of BackCom is to ask an energy-constrained IoT device, termed a tag, to carry out passive reflection and modulation of a single-tone sinusoidal continuous wave sent by a BackCom reader. Instead of relying on the continuous wave sent by a reader, a variation of BackCom, termed symbiotic radio, was recently proposed to use the information-bearing signal sent by a non-energy-constrained device to power a batteryless device \cite{8907447,8399824}. In order to simultaneously address the aforementioned spectral and energy challenges, it is natural to consider the combination of NOMA with the two energy-cooperation transmission techniques in the next generation IoT, which will be the focus of this paper. Early examples of WPT assisted NOMA (WPT-NOMA) have considered the cooperative communication scenario, where relay transmission is powered by the energy harvested from the signals sent by a source \cite{yuanweijsac, 9007658,7917312}. In downlink scenarios, the use of WPT-NOMA can yield a significant improvement in the spectral and energy efficiency as demonstrated by \cite{9104736,9091839,8891923}. The application of WPT to uplink NOMA has been previously studied in \cite{7582543}, where users use the energy harvested from the signal sent by the base station to power their uplink NOMA transmission. Compared to WPT-NOMA, the application of BackCom to NOMA received less attention. 
In \cite{8439079} and \cite{wongwcnc20}, NOMA was used to ensure that multiple backscatter devices can communicate with the same access point (a reader) simultaneously by modulating the continuous wave sent by the access point. More recently, the application of NOMA to a special case of BackCom, symbiotic radio, has been considered in \cite{8962090,8636518}. The aim of this paper is to consider a NOMA uplink scenario, where a delay-sensitive non-energy-constrained IoT device and multiple delay-tolerant energy-constrained devices communicate with the same access point. In particular, following the semi-grant-free protocol proposed in \cite{8662677} and \cite{SGFx}, one of the delay-tolerant devices is granted access to the channel which would be solely occupied by the delay-sensitive device in OMA. Because some IoT devices are energy constrained, the use of the two energy-cooperative transmission strategies, WPT-NOMA and BackCom assisted NOMA (BAC-NOMA), is considered, and their performance is compared. The contributions of the paper are listed as follows: \begin{itemize} \item A new WPT-NOMA scheme is proposed by applying hybrid successive interference cancellation (SIC), where the transmission of an energy-constrained device is powered by the energy harvested from the signals sent by the non-energy-constrained device. Recall that hybrid SIC dynamically decides the SIC decoding order by simultaneously using the devices' channel state information (CSI) and their QoS requirements \cite{SGFx}. An immediate benefit of using hybrid SIC for the NOMA uplink is to avoid an outage probability error floor, something that is not possible if a fixed SIC decoding order is used. In this paper, the outage performance of WPT-NOMA with hybrid SIC is analyzed, and the obtained analytical results demonstrate that outage probability error floors can be avoided and the full diversity gain is still achievable, even though the transmissions of the energy-constrained devices are not powered by their own batteries. 
\item A general multi-user BAC-NOMA scheme is proposed, where an energy-constrained device reflects and modulates the signals sent by the non-energy-constrained device. Note that the BAC-NOMA scheme considered in \cite{8636518} can be viewed as a special case of this general framework. In addition, the two key features of BAC-NOMA are analyzed in detail. Firstly, we focus on the outage probability error floor suffered by BAC-NOMA. The key event which causes the error floor is analyzed, and the asymptotic behaviour of the probability of this event with respect to the number of participating devices is studied by applying the extreme value theory (EVT) \cite{Arnoldbook,1710338}. Secondly, we focus on another feature of BAC-NOMA, i.e., modulating the energy-constrained device's signal on the non-energy-constrained device's signal. This feature means that the relationship between the two devices' signals is multiplicative, instead of additive. In other words, the non-energy-constrained device's signal can be viewed as a type of fast fading for the energy-constrained device. The analytical results developed in the paper show that this virtual fading is damaging to the reception reliability, and the diversity gain achieved by BAC-NOMA is capped by one, even if the event which causes the outage probability error floor can be ignored. \item The performance achieved by the two energy and spectrally efficient transmission strategies is compared by using the provided analytical and simulation results. Our finding is that WPT-NOMA can offer a significant outage performance gain over BAC-NOMA, particularly at high signal-to-noise ratio (SNR) and with small target data rates, which is due to the fact that hybrid SIC can be implemented in WPT-NOMA systems. However, WPT-NOMA suffers from the following two drawbacks. One is that WPT-NOMA cannot support continuous transmission, which has a harmful impact on its ergodic data rate. 
The other is that WPT-NOMA is sensitive to how much time is allocated for energy harvesting and data transmission, respectively, where an inappropriate choice can lead to a significant performance loss, compared to BAC-NOMA. \end{itemize} \section{System Model}\label{section system model} Consider a NOMA uplink scenario with one access point and $(M+1)$ IoT devices, denoted by ${\rm U_m}$, $0\leq m\leq M$. For illustration purposes, assume that ${\rm U_0}$ is a non-energy-constrained delay-sensitive device, whereas ${\rm U_m}$, $1\leq m\leq M$, are energy constrained and delay tolerant. The channel from the access point to ${\rm U_i}$ is denoted by $h_i$, $0\leq i \leq M$. The channel from ${\rm U_0}$ to ${\rm U_m}$ is denoted by $g_m$, $1\leq m\leq M$. Because ${\rm U}_0$ is delay sensitive, it is allowed to solely occupy a bandwidth resource block in OMA, which is spectrally inefficient for supporting massive connectivity. Following the designs shown in \cite{8662677} and \cite{SGFx}, we consider that one of the delay-tolerant IoT devices is to be granted access to the resource block which would be solely occupied by ${\rm U}_0$ in OMA. {\it Assumption:} To facilitate performance analysis, we assume that the energy-constrained devices are located in a small-size cluster, such that the distances between ${\rm U}_0$ and ${\rm U}_m$, $m\geq 1$, are the same. A similar assumption is also made for the distances between the access point and the devices. For example, the devices can be sensors in a self-driving vehicle or on an autonomous robot. For smart home applications, the devices can be sensors for different functionalities fixed in the same room. Therefore, we assume that $g_m$, $1\leq m \leq M$, are modelled as independent and identically distributed (i.i.d.) 
Rayleigh fading, i.e., complex Gaussian distributed with zero mean and variance $\lambda_g$, $g_m\sim CN(0,\lambda_g)$, where $\lambda_g\triangleq d_g^\phi$, $d_g$ denotes the distance between ${\rm U}_0$ and ${\rm U}_m$, $m\geq 1$, and $\phi$ denotes the path loss exponent. Similarly, we also assume that $h_m\sim CN(0,\lambda_h)$ and $h_0\sim CN(0,\lambda_0)$, where $\lambda_h\triangleq d_h^\phi$, $\lambda_0\triangleq d_0^\phi$, $d_h$ denotes the distance between the access point and ${\rm U}_m$, $m\geq 1$, and $d_0$ denotes the distance between ${\rm U}_0$ and the access point. \subsection{WPT Assisted NOMA} Without loss of generality, assume that ${\rm U}_m$ is granted access, where the details for the scheduling strategy will be provided at the end of this subsection. Suppose that the energy-constrained devices can support WPT, and that time-switching WPT, which consists of two phases, is used for its simplicity \cite{Zhouzhang13}. During the first $\alpha T$ seconds, ${\rm U}_m$ performs energy harvesting by using ${\rm U}_0$'s signal, denoted by $s_0$, and then uses the harvested energy as its transmit power to send its signal $s_m$ to the access point, where $\alpha$ denotes the time-switching parameter, $0\leq \alpha\leq 1$, and $T$ denotes the block period. Therefore, the amount of energy harvested at ${\rm U}_m$ is $\eta P|g_m|^2\alpha T$, where $P$ denotes ${\rm U}_0$'s transmit power and $\eta$ denotes the energy harvesting efficiency coefficient. This means that the observation at the access point is given by \begin{align} y_{{\rm AP}} =\sqrt{P}h_0s_0+ \sqrt{ \frac{\eta P|g_m|^2\alpha }{1-\alpha}} h_m s_m +n_{{\rm AP}}, \end{align} where $n_{{\rm AP}}$ denotes the noise. For the proposed WPT-NOMA scheme, hybrid SIC is applied \cite{hsic01,hsic02}. 
In particular, if $s_m$ is decoded first, ${\rm U}_m$'s maximal data rate without causing the failure of SIC (or degrading ${\rm U}_0$'s performance) is given by \begin{align} \label{eqwp1} {\rm R}^{WP,1}_m =(1-\alpha)\log\left( 1+ \frac{\eta P\bar{\alpha}|g_m|^2 |h_m|^2}{P|h_0|^2+1} \right) , \end{align} where $\bar{\alpha}=\frac{\alpha}{1-\alpha}$ and the noise power is assumed to be normalized. If ${\rm U}_0$'s signal is decoded first, ${\rm U}_0$'s achievable data rate is given by \begin{align}\label{eqwp2} {\rm R}^{WP,2}_{0,m} =(1-\alpha)\log\left(1+ \frac{P|h_0|^2}{\eta P\bar{\alpha}|g_m|^2 |h_m|^2+1}\right). \end{align} Denote ${\rm U }_0$'s target data rate by $R_0$. If ${\rm R}^{WP,2}_{0,m}\geq R_0$, $s_0$ can be successfully decoded and removed, which means that $s_m$ can be decoded correctly with the following data rate: \begin{align} \label{eqwp3} {\rm R}^{WP,2}_m = (1-\alpha)\log\left(1+ \eta P\bar{\alpha} |g_m|^2|h_m|^2 \right). \end{align} \subsubsection*{ Device Scheduling for WPT-NOMA} The aim of device scheduling is to ensure that the delay-tolerant device which yields the largest data rate can be selected, under the condition that ${\rm U}_0$'s QoS requirements are strictly guaranteed. Note that ${\rm R}^{WP,2}_{0,m} \geq R_0$ is equivalent to the following inequality: \begin{align} \gamma_m\leq \frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} , \end{align} where $\bar{\epsilon}_0 = 2^{\frac{R_0}{1-\alpha}}-1$. Furthermore, define $\bar{\epsilon}_s=2^{\frac{R_s}{1-\alpha}}-1$ and $\tau(h_0) = \max\left\{0, \frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \right\}$, where it is assumed that ${\rm U}_m$, $1\leq m\leq M$, have the same target data rate, denoted by $R_s$. 
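The three rate expressions above can be sketched numerically. This is a hedged illustration only: base-2 logarithms, unit noise power and arbitrary sample channel gains are assumed; it also checks the ordering ${\rm R}^{WP,1}_m \leq {\rm R}^{WP,2}_m$ used later by the scheduler.

```python
import numpy as np

def wpt_rates(P, eta, alpha, g2, h2, h02):
    """Rates R^{WP,1}_m, R^{WP,2}_{0,m}, R^{WP,2}_m for WPT-NOMA.
    g2, h2, h02 are the channel gains |g_m|^2, |h_m|^2, |h_0|^2;
    the noise power is normalized to one."""
    abar = alpha / (1 - alpha)
    eff = eta * P * abar * g2 * h2          # U_m's effective received power
    r1 = (1 - alpha) * np.log2(1 + eff / (P * h02 + 1))  # s_m decoded first
    r0 = (1 - alpha) * np.log2(1 + P * h02 / (eff + 1))  # s_0 decoded first
    r2 = (1 - alpha) * np.log2(1 + eff)                  # s_m after SIC of s_0
    return r1, r0, r2

# R^{WP,1}_m <= R^{WP,2}_m always holds, since removing s_0 can only
# increase U_m's rate; hence U_m prefers the second SIC stage.
rng = np.random.default_rng(0)
for _ in range(1000):
    g2, h2, h02 = rng.exponential(size=3)
    r1, _, r2 = wpt_rates(P=10.0, eta=0.7, alpha=0.3, g2=g2, h2=h2, h02=h02)
    assert r1 <= r2 + 1e-12
```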
The delay-tolerant IoT devices can be divided into the two groups, denoted by $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively, as defined in the following: \begin{itemize} \item $\mathcal{S}_1$ contains the devices whose channel gains satisfy $\gamma_m>\tau(h_0)$. If one device from $\mathcal{S}_1$ is scheduled, its signal has to be decoded at the first stage of SIC, which yields the data rate ${\rm R}^{WP,1}_m$. \item $\mathcal{S}_2$ contains the devices whose channel gains satisfy $\gamma_m\leq \tau(h_0)$. If one device from $\mathcal{S}_2$ is scheduled, its signal can be decoded either at the first stage of SIC (which yields the data rate ${\rm R}^{WP,1}_m$) or at the second stage of SIC (which yields the data rate ${\rm R}^{WP,2}_m$). Since $ {\rm R}^{WP,1}_m\leq {\rm R}^{WP,2}_m$ always holds, ${\rm U}_m$ always prefers its signal to be decoded at the second stage of SIC. \end{itemize} The access point selects the delay-tolerant device which yields the largest data rate, i.e., \begin{align} \label{swipt select} m^* = \arg \max \left\{\max\left\{{\rm R}^{WP,1}_m, m\in\mathcal{S}_1\right\} ,\max\left\{{\rm R}^{WP,2}_m, m\in\mathcal{S}_2\right\} \right\}. \end{align} {\it Remark 1:} As can be observed from \eqref{eqwp1}, \eqref{eqwp2}, and \eqref{eqwp3}, the use of time-switching reduces the time duration for data transmission, since the first $\alpha T$ seconds are used for energy harvesting. This feature of WPT-NOMA can lead to a potential performance loss compared to BAC-NOMA, which can support continuous data transmission. \subsection{BackCom-Assisted NOMA} Again assume that ${\rm U}_m$ is granted access, where the details for the BAC-NOMA scheduling strategy will be provided later. Suppose that the energy-constrained devices are capable of carrying out backscatter communications. 
Therefore, the access point receives the following signal:\footnote{We assume that the symbol periods of different devices are the same, where the design of BAC-NOMA for the case with devices using different symbol periods is beyond the scope of this paper. } \begin{align}\label{eq1} y_{{\rm AP}} = \sqrt{P}h_0s_0 + \sqrt{P}\beta g_m h_ms_0s_m +n_{{\rm AP}}, \end{align} where $\beta$ denotes the BackCom power reflection coefficient. Unlike WPT-NOMA, in BAC-NOMA, there is only one choice for the SIC decoding order, which is to decode ${\rm U}_0$'s signal first. The reason for this is that ${\rm U}_0$'s signal can be viewed as a fading channel for ${\rm U}_m$'s signal. In order to implement coherent detection, ${\rm U}_0$'s signal, i.e., the virtual fading channel, needs to be decoded first. Therefore, in BAC-NOMA, ${\rm U}_0$'s achievable data rate is given by \begin{align} {\rm R}_{0,m}^{BAC} = \log\left(1+\frac{P|h_0|^2}{P\beta^2 |g_m|^2 |h_m|^2+1}\right). \end{align} Assuming that ${\rm U}_0$'s signal can be correctly decoded, i.e., $ {\rm R}^{BAC}_{0,m} \geq R_0$, ${\rm U}_0$'s signal can be removed, which leads to the following system model: \begin{align}\label{eq2} y_{{\rm AP}} - \sqrt{P}h_0s_0 = \sqrt{P}\beta g_m h_ms_0s_m +n_{{\rm AP}}. \end{align} Therefore, an achievable data rate for decoding $s_m$ is given by \begin{align}\label{eq3} {\rm R}^{BAC}_m = \log\left(1+ {P}\beta^2 |g_m |^2 |h_m|^2|s_0|^2 \right), \end{align} where ${\rm U}_0$'s signal, $s_0$, is viewed as a fast fading channel gain for $s_m$. Similar to \cite{8907447,8636518}, it is assumed that $s_0 \sim CN(0,1)$, i.e., the probability density function (pdf) of this virtual fading channel, $|s_0|^2$, is $f_{|s_0|^2}(x)=e^{-x}$. \subsubsection*{ Device Scheduling for BAC-NOMA} In OMA, ${\rm U}_0$ is allowed to solely occupy the channel, whereas the use of NOMA ensures that the backscatter devices can also be granted access. 
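Before turning to the scheduling rule, the admission test ${\rm R}^{BAC}_{0,m}\geq R_0$ and the two BAC-NOMA rates can be sketched numerically. A hedged illustration under assumed conventions (base-2 logarithms, unit noise power, sample channel draws); it also checks that the admission test is equivalent to a threshold on the effective gain $|g_m|^2|h_m|^2$:

```python
import numpy as np

def bac_noma(P, beta, g2, h2, h02, s02, R0):
    """BAC-NOMA: U_0's rate under backscatter interference, the admission
    test R_{0,m} >= R_0, and U_m's rate after SIC of s_0.
    s02 = |s_0|^2 is a realization of the virtual fading channel."""
    back = P * beta**2 * g2 * h2            # backscattered signal power
    r0m = np.log2(1 + P * h02 / (back + 1))
    admitted = r0m >= R0
    rm = np.log2(1 + back * s02) if admitted else 0.0
    return r0m, admitted, rm

# The test R_{0,m} >= R_0 is equivalent to
# |g_m|^2 |h_m|^2 <= |h_0|^2 / (beta^2 eps0) - 1 / (beta^2 P), eps0 = 2^R0 - 1.
rng = np.random.default_rng(1)
P, beta, R0 = 10.0, 0.5, 1.0
eps0 = 2**R0 - 1
for _ in range(1000):
    g2, h2, h02, s02 = rng.exponential(size=4)
    _, admitted, _ = bac_noma(P, beta, g2, h2, h02, s02, R0)
    thresh = h02 / (beta**2 * eps0) - 1 / (beta**2 * P)
    assert admitted == (g2 * h2 <= thresh)
```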
In order to guarantee ${\rm U}_0$'s QoS requirements, device ${\rm U}_m$ can be granted access only if ${\rm R}_{0,m}^{BAC}\geq R_0$, which can be rewritten as follows: \begin{align} |g_m|^2 |h_m|^2 \leq \beta^{-2} \epsilon_0^{-1}|h_0|^2- \beta^{-2}P^{-1}, \end{align} where $\epsilon_0=2^{R_0}-1$. On the other hand, it is ideal to admit the device which can maximize the data rate $ {\rm R}^{BAC}_m $. Therefore, the device scheduling criterion is given by \begin{align}\label{bd selection} m^* = \arg \max\left\{ {\rm R}^{BAC}_m , m\in \mathcal{S}_0\right\}, \end{align} where $\mathcal{S}_0=\left\{ m: {\rm R}^{BAC}_{0,m} \geq R_0,1\leq m \leq M \right\}$. {\it Remark 2:} Unlike WPT-NOMA, BAC-NOMA can support one SIC decoding order only, which is the reason why it suffers an outage probability error floor, as shown in the next section. Another feature of BAC-NOMA is that $s_0$ is treated as a virtual fading channel, which means $s_m$ suffers additional fading attenuation. The impact of this virtual fading channel on the reception reliability of $s_m$ will be investigated in the following section. {\it Remark 3:} We note that the two proposed device scheduling strategies can be carried out in a distributed manner. Take BAC-NOMA as an example. Each backscatter device decides to participate in contention if $ {\rm R}_{0,m}^{BAC} >R_0, m\in \mathcal{S}$, otherwise it switches to the match state. Each device sets its backoff time to be inversely proportional to its achievable data rate $ {\rm R}^{BAC}_m$, which ensures that $ {\rm U}_{m^*}$ can be granted access in a distributed manner. \section{Performance Analysis for WPT-NOMA} Since the implementation of WPT-NOMA is transparent to ${\rm U}_0$, we only focus on the performance of the admitted delay-tolerant energy-constrained device. Denote the effective channel gains of the devices by $ \gamma_m=|g_m |^2 |h_m|^2$. 
In order to simplify notations, without loss of generality, assume that the delay-tolerant devices are ordered according to their effective channel gains as follows: \begin{align} \label{channel order} \gamma_1\leq \cdots\leq \gamma_M. \end{align} With this channel ordering, the impact of device scheduling on the NOMA transmission can be shown explicitly. In particular, denote by $\bar{E}_m$ the event that the size of $\mathcal{S}_2$ is $m$, i.e., $\bar{E}_m$ can be expressed as follows: \begin{align} \bar{E}_m = \left\{ \gamma_{m}<\tau(h_0), \gamma_{m+1}>\tau(h_0) \right\}, \end{align} for $1\leq m\leq M-1$, where $\bar{E}_0 = \left\{ \gamma_{1}>\tau(h_0) \right\}$ and $\bar{E}_M= \left\{ \gamma_{M}<\tau(h_0) \right\}$. The outage probability achieved by WPT-NOMA can be expressed as follows: \begin{align}\nonumber {\rm P}^{WP} =&\sum^{M}_{m=1} \underset{T_m}{\underbrace{{\rm P}\left( \max\left\{ {\rm R}_{m} ^{WP,2}, {\rm R}_{M} ^{WP,1}\right\}<R_s , |\mathcal{S}_2|=m\right) }}\\ &+\underset{T_0}{\underbrace{{\rm P}\left( {\rm R}_{M} ^{WP,1} <R_s , |\mathcal{S}_2|=0\right) }} . \label{wp outage} \end{align} We note that the performance analysis requires the pdf and cumulative distribution function (CDF) of the ordered channel gain $\gamma_m$, which can be found by using the density functions of the unordered channel gain. In particular, the pdf of the unordered effective channel gain is given by \cite{8636518} \begin{align}\label{pdf gamma} f_{\gamma}(x) = 2 \lambda_h\lambda_g K_0\left(2\sqrt{ \lambda_h\lambda_g x}\right), \end{align} where $K_i(\cdot)$ denotes the $i^{\rm th}$-order modified Bessel function of the second kind. 
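The outage probability ${\rm P}^{WP}$ together with the scheduling rule can also be estimated by Monte Carlo simulation. The sketch below is only illustrative: it assumes i.i.d. unit-mean exponential channel gains, base-2 logarithms, and unit noise power.

```python
import numpy as np

def wpt_outage(P, M, alpha=0.3, eta=0.7, R0=0.5, Rs=0.5, trials=20000, seed=0):
    """Monte Carlo sketch of the WPT-NOMA outage probability with the
    hybrid-SIC scheduling rule (illustrative channel statistics)."""
    rng = np.random.default_rng(seed)
    abar = alpha / (1 - alpha)
    eps0 = 2 ** (R0 / (1 - alpha)) - 1
    outages = 0
    for _ in range(trials):
        h02 = rng.exponential()
        gamma = rng.exponential(size=M) * rng.exponential(size=M)  # |g|^2|h|^2
        tau = max(0.0, h02 / (eps0 * eta * abar) - 1 / (eta * P * abar))
        # best rate if the scheduled signal is decoded at the first SIC stage;
        # using the overall largest gain is equivalent here, since for the
        # same gain the second-stage rate dominates the first-stage rate
        r1 = (1 - alpha) * np.log2(1 + eta * P * abar * gamma.max() / (P * h02 + 1))
        # best second-stage rate, available only to devices in S_2 (gamma <= tau)
        s2 = gamma[gamma <= tau]
        r2 = (1 - alpha) * np.log2(1 + eta * P * abar * s2.max()) if s2.size else 0.0
        outages += max(r1, r2) < Rs
    return outages / trials

p_low, p_high = wpt_outage(P=1.0, M=3), wpt_outage(P=100.0, M=3)
assert 0.0 <= p_high <= p_low <= 1.0   # outage improves with transmit power
```

Consistent with Theorem 1, the estimated outage probability keeps decreasing as $P$ grows, showing no error floor.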
The CDF of the unordered channel gain, denoted by $F_{\gamma}(x) $, can be obtained as follows: \begin{align}\nonumber F_{\gamma}(x) =& \int^{x}_{0} 2\lambda_h\lambda_gK_0\left(2\sqrt{ \lambda_h\lambda_gy}\right) dy= 4\lambda_h\lambda_g x \int^{1}_{0} K_0\left(2t\sqrt{ \lambda_h\lambda_g x}\right) tdt \\\label{cdf gamma} =&1 - 2 \sqrt{ \lambda_h\lambda_g x}K_1\left(2 \sqrt{ \lambda_h\lambda_g x}\right) , \end{align} where the substitution $y=xt^2$ and \cite[(6.561.8)]{GRADSHTEYN} are used. As can be observed from \eqref{pdf gamma} and \eqref{cdf gamma}, the density functions of the unordered channel gains contain Bessel functions, which makes it difficult to obtain an exact expression for the outage probability achieved by WPT-NOMA. However, the diversity gain achieved by WPT-NOMA can be obtained, as shown in the following theorem. \begin{theorem}\label{theorem1} For the considered NOMA uplink scenario, WPT-NOMA can realize a diversity gain of $M$, if $\bar{\epsilon}_0 \bar{\epsilon}_s<1$. \end{theorem} \begin{proof} See Appendix \ref{proof1}. \end{proof} {\it Remark 4:} Theorem \ref{theorem1} shows that the diversity gain achieved by WPT-NOMA is not zero, which implies that WPT-NOMA does not suffer from any outage probability error floors, a feature not achievable by BAC-NOMA, as shown in the next section. Therefore, WPT-NOMA is a more robust transmission solution than BAC-NOMA, particularly at high SNR. {\it Remark 5:} Note that $M$ is the maximal multi-user diversity gain achievable in the considered NOMA uplink scenario, since there are $M$ delay-tolerant devices competing for access. Theorem \ref{theorem1} shows that the maximal diversity gain can be realized by WPT-NOMA, even though battery-less transmission is used. Therefore, WPT-NOMA is particularly attractive for energy-constrained IoT devices which have strict reception reliability requirements. 
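As a sanity check, the closed-form CDF in \eqref{cdf gamma} can be compared against a Monte Carlo simulation of the product of two independent exponentially distributed channel gains. The sketch below is illustrative only: the rate parameters, evaluation point, and sample size are arbitrary choices, and $K_1(\cdot)$ is evaluated numerically from its standard integral representation rather than a special-function library.

```python
import math
import random

def K1(z):
    """First-order modified Bessel function of the second kind, computed
    from the integral representation K1(z) = int_0^inf e^{-z cosh t} cosh t dt."""
    dt, total, t = 1e-3, 0.0, 0.0
    while t < 20.0:  # integrand is negligible beyond t = 20 for z >= 0.5
        total += math.exp(-z * math.cosh(t)) * math.cosh(t) * dt
        t += dt
    return total

def cdf_gamma(x, lam_h, lam_g):
    """Closed-form CDF of gamma = |g|^2 |h|^2:
    F(x) = 1 - 2 sqrt(l_h l_g x) K1(2 sqrt(l_h l_g x))."""
    z = 2.0 * math.sqrt(lam_h * lam_g * x)
    return 1.0 - z * K1(z)

random.seed(0)
lam_h, lam_g, x0, N = 1.0, 2.0, 0.5, 100_000  # illustrative parameters
# gamma is the product of two independent exponential channel gains
hits = sum(random.expovariate(lam_h) * random.expovariate(lam_g) <= x0
           for _ in range(N))
print(hits / N, cdf_gamma(x0, lam_h, lam_g))  # the two values agree closely
```

The empirical frequency matches the closed form to within Monte Carlo error, confirming the integration step leading to \eqref{cdf gamma}.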
{\it Remark 6:} We note that the conclusion that there is no outage probability error floor also holds for the special case $M=1$, i.e., there is a single delay-tolerant device and device scheduling is not carried out. This implies that the outage probability error floor is avoided due to the use of hybrid SIC, instead of device scheduling. \section{Performance Analysis for BAC-NOMA} Since the implementation of BAC-NOMA is also transparent to ${\rm U}_0$, we focus only on the performance of the admitted delay-tolerant device. The outage probability of interest is expressed as follows: \begin{align} {\rm P}^{BAC} = {\rm P}\left( {\rm R}_{m^*} ^{BAC}<R_s , |\mathcal{S}_0|\neq 0\right) + {\rm P}\left( |\mathcal{S}_0|= 0\right) , \end{align} where $|\mathcal{S}_0|$ denotes the size of the set $\mathcal{S}_0$. Assume that the devices' channel gains are ordered as in \eqref{channel order}. Denote by $E_m$ the event that the size of $\mathcal{S}_0$ is $m$, i.e., $E_m$ can be expressed as follows: \begin{align} E_m = \left\{ \gamma_{m}<\theta(h_0), \gamma_{m+1}>\theta(h_0) \right\}, \end{align} for $1\leq m\leq M-1$, where $\theta(h_0)=\beta^{-2} \epsilon_0^{-1}|h_0|^2- \beta^{-2}P^{-1} $. We note that $E_0 = \left\{ \gamma_{1}>\theta(h_0) \right\}$ and $E_M= \left\{ \gamma_{M}<\theta(h_0) \right\}$. The use of \eqref{bd selection} and \eqref{channel order} means that $ {\rm U}_{m}$ will be granted access when the event $E_m$ happens. Therefore, the outage probability can be further written as follows: \begin{align}\label{outage bdd} {\rm P}^{BAC} =& \sum^{M}_{m=1}\underset{Q_m}{\underbrace{{\rm P}\left( {\rm R}^{BAC}_{m} <R_s , E_m\right) }} + {\rm P}\left( E_0\right) . \end{align} We note that ${\rm P}^{BAC}$ is more challenging to analyze than ${\rm P}^{WP}$, because more random variables are involved. In the following, we focus on two key features of BAC-NOMA. 
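The decomposition in \eqref{outage bdd} relies on the observation that, under the event $E_m$, the scheduled device is ${\rm U}_m$: the admissible devices are those with $\gamma_m\leq \theta(h_0)$, and the rate-maximizing rule \eqref{bd selection} then picks the admissible device with the largest effective gain. The following minimal sketch illustrates this selection logic; it assumes, as the analysis implies, that ${\rm R}^{BAC}_m$ is increasing in $\gamma_m$, and the threshold and channel samples are purely illustrative.

```python
import random

def schedule(gains, theta):
    """BAC-NOMA scheduling sketch: a device is admissible if its effective
    gain satisfies gamma_m <= theta (U_0's QoS constraint holds); among the
    admissible devices, the one with the largest gain -- hence the largest
    rate, assuming R_m^{BAC} is increasing in gamma_m -- is granted access.
    Returns (size of the admissible set, winner's gain or None)."""
    admissible = [g for g in gains if g <= theta]
    return len(admissible), (max(admissible) if admissible else None)

random.seed(1)
gains = sorted(random.expovariate(1.0) for _ in range(5))  # ordered as in (channel order)
theta = 1.0  # illustrative stand-in for theta(h_0)
m, winner = schedule(gains, theta)
# Under the event E_m (exactly m ordered gains below theta), the scheduled
# device is U_m, i.e., the winner holds the m-th smallest gain:
print(m, winner)
```

Whatever the realization, the winner's gain coincides with the $m$-th smallest ordered gain, which is exactly the statement used to write ${\rm P}^{BAC}$ as the sum over the $Q_m$ terms.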
\subsection{Outage Probability Error Floor} In this subsection, we will show that BAC-NOMA suffers from an outage probability error floor. The existence of the error floor can be proved by focusing on the following lower bound on the outage probability: \begin{align}\label{floor1} {\rm P}^{BAC} \geq {\rm P}\left( E_0\right) . \end{align} The simulation results provided in Section \ref{section simulation} show that $E_0$ is indeed the most damaging event at high SNR, compared to the terms $Q_m$, $1\leq m \leq M$. $ {\rm P}\left( E_0\right) $ can be expressed as follows: \begin{align} {\rm P}\left( E_0\right) =& {\rm P}\left(\gamma_1>\beta^{-2} \epsilon_0^{-1}|h_0|^2- \beta^{-2}P^{-1}\right) \\\nonumber =& {\rm P}\left(\beta^2 \epsilon_0\gamma_1+ \epsilon_0 P^{-1}> |h_0|^2>\epsilon_0P^{-1} \right) \\\nonumber &+{\rm P}\left( |h_0|^2<\epsilon_0P^{-1} \right) . \end{align} Denote by $f_{\gamma_1}(x) \triangleq M f_{\gamma}(x) \left(1-F_{\gamma}(x)\right)^{M-1}$ the pdf of the smallest order statistic, and hence $ {\rm P}\left( E_0\right) $ can be expressed as follows: \begin{align}\nonumber {\rm P}\left( E_0\right) =& \int^{\infty}_{0}\left(e^{-\lambda_0\epsilon_0P^{-1} }- e^{ -\lambda_0( \beta^2 \epsilon_0x+ \epsilon_0 P^{-1})} \right)f_{\gamma_1}(x)dx \\\nonumber& +1 - e^{-\lambda_0\epsilon_0P^{-1} } \\\nonumber =& 1 -Me^{ -\lambda_0 \epsilon_0 P^{-1}} \int^{\infty}_{0} e^{ -\lambda_0 \beta^2 \epsilon_0x } f_{\gamma}(x)\\ &\times \left(1-F_{\gamma}(x)\right)^{M-1}dx . 
\end{align} At high SNR, i.e., $P\rightarrow \infty$, $ {\rm P}\left( E_0\right) $ can be approximated as follows: \begin{align}\nonumber {\rm P}\left( E_0\right) \approx & 1 -M \int^{\infty}_{0} e^{ -\lambda_0 \beta^2 \epsilon_0x } f_{\gamma}(x)\left(1-F_{\gamma}(x)\right)^{M-1}dx \\ \label{floor2} \approx & \lambda_0 \beta^2 \epsilon_0\int^{\infty}_{0} e^{ -\lambda_0 \beta^2 \epsilon_0x } \left(1-F_{\gamma}(x)\right)^{M} dx, \end{align} which is constant and not a function of $P$. Combining \eqref{floor1} with \eqref{floor2}, it is sufficient to conclude that BAC-NOMA transmission suffers from an outage probability error floor. {\it Remark 7:} This finding is consistent with the conclusions made in \cite{8636518}. This error floor exists because only one SIC decoding order can be used by BAC-NOMA. Compared to BAC-NOMA, WPT-NOMA can avoid this error floor and hence outperforms BAC-NOMA at high SNR. {\it Remark 8:} Theorem \ref{theorem1} indicates that WPT-NOMA can utilize the multi-user diversity, and hence a natural question is whether BAC-NOMA can also use the multi-user diversity, i.e., whether it is beneficial to invite more delay-tolerant devices to participate in transmission in BAC-NOMA. By applying the EVT, the following lemma can be obtained for this purpose. \begin{lemma}\label{lemma1} The error floor caused by $ {\rm P}\left( E_0\right) $ can be reduced to zero by increasing the number of participating delay-tolerant devices $M$ and the transmit power $P$. \end{lemma} \begin{proof} See Appendix \ref{lemma1proof}. \end{proof} \subsection{Impact of $s_0$ on Reception Reliability} Recall that $s_0$ is treated as a type of fast fading when the signal from the delay-tolerant device is decoded. In this subsection, we will show that this fast fading has a harmful impact on the outage probability. To obtain an insightful conclusion, we consider an ideal situation, in which $E_0$ does not happen. 
We will show that even in such an ideal situation, the full multi-user diversity gain cannot be realized. Recall that the term $Q_m$, $1\leq m \leq M-1$, in \eqref{outage bdd} can be evaluated as follows: \begin{align}\label{eq89} Q_m =& {\rm P}\left( {\rm R}^{BAC}_{m} <R_s , \gamma_{m}<\theta(h_0), \gamma_{m+1}>\theta(h_0)\right) \\\nonumber =& {\rm P}\left( \gamma_m <\min\{ \epsilon_s {P}^{-1}\beta^{-2} |s_0|^{-2}, \theta(h_0)\}, \gamma_{m+1}>\theta(h_0)\right) . \end{align} Define $a_{s_0,h_0} = \min\{ \epsilon_s {P}^{-1}\beta^{-2} |s_0|^{-2}, \theta(h_0)\}$. By applying order statistics, the joint pdf of $\gamma_m $ and $\gamma_{m+1}$ is given by \cite{Arnoldbook} \begin{align} f_{\gamma_m,\gamma_{m+1}}(x,y) =&\mu_0 f_{\gamma}(x) f_{\gamma}(y)\left(F_{\gamma}(x) \right)^{m-1}\\\nonumber &\times \left(1- F_{\gamma}(y) \right)^{M-m-1}, \end{align} for $x<y$, where $\mu_0 = \frac{M!}{(m-1)! (M-m-1)!} $. Therefore, $Q_m$ can be expressed as follows: \begin{align} Q_m =& \mu_0\mathcal{E}_{h_0, s_0}\left\{ \int^{\infty}_{\theta(h_0)}f_{\gamma}(y)\left(1- F_{\gamma}(y) \right)^{M-m-1}dy\right.\\ &\times\left. \int^{a_{s_0,h_0}}_{0} f_{\gamma}(x) \left(F_{\gamma}(x) \right)^{m-1} dx \right\} \\\nonumber =&\bar{\mu}_0\mathcal{E}_{h_0, s_0}\left\{ \left(1- F_{\gamma}(\theta(h_0)) \right)^{M-m} \left(F_{\gamma}(a_{s_0,h_0}) \right)^{m} \right\} , \end{align} where $\bar{\mu}_0=\frac{\mu_0}{m(M-m)}=\frac{M!}{m!(M-m)!}$. Because the density functions of $\gamma_m$ contain Bessel functions, a closed-form expression for $Q_m$ is difficult to obtain, and hence we consider an ideal scenario, in which the connection from ${\rm U}_0$ to ${\rm U}_m$, $1\leq m\leq M$, is lossless. This assumption yields a lower bound on $Q_m$ as follows: \begin{align} Q_m \geq& \bar{\mu}_0\mathcal{E}_{h_0, s_0}\left\{ \left(1- \bar{F}_{\gamma}(\theta(h_0)) \right)^{M-m} \left(\bar{F}_{\gamma}(a_{s_0,h_0}) \right)^{m} \right\} , \end{align} where $\bar{F}_{\gamma} (x)=1-e^{-\lambda_h x}$. 
For the event $E_m$, $m\geq 1$, we have $\theta(h_0)\geq 0$, which means that $|h_0|^2\geq \epsilon_0P^{-1} $. In addition, $a_{s_0,h_0} = \min\{ \epsilon_s {P}^{-1}\beta^{-2} |s_0|^{-2}, \theta(h_0)\}= \theta(h_0) $ implies the following: \begin{align} |h_0|^2\leq \epsilon_0\epsilon_s {P}^{-1} |s_0|^{-2}+\epsilon_0P^{-1} . \end{align} By applying the simplified CDF, $\bar{F}_{\gamma}(x)$, the lower bound on $Q_m$ can be expressed as follows: \begin{align} Q_m \geq& \bar{\mu}_0 \int^{\infty}_{0}e^{-y} \int^{\frac{\epsilon_0\epsilon_s}{ {P} y}+\frac{\epsilon_0}{P} }_{\frac{\epsilon_0}{P} } \left(1-e^{-\lambda_h \theta(x)} \right)^{m} \\\nonumber & \times e^{-(M-m)\lambda_h \theta(x)} \lambda_0e^{-\lambda_0x}dxdy \\\nonumber & +\bar{\mu}_0 \int^{\infty}_{0} e^{-y} \left(1-e^{-\lambda_h \epsilon_s {P}^{-1}\beta^{-2} y^{-1}} \right)^{m}\\\nonumber &\times \int_{\frac{\epsilon_0\epsilon_s}{ {P} y}+\frac{\epsilon_0}{P} }^{\infty} e^{-(M-m)\lambda_h \theta(x)} \lambda_0e^{-\lambda_0x}dx dy. \end{align} With some algebraic manipulations, the lower bound on $Q_m$ can be approximated at high SNR as follows: \begin{align}\label{es32} Q_m \geq& \bar{\mu}_0 \lambda_0 \sum^{m}_{p=0}{m \choose p} (-1)^p \breve{\mu}_p^{-1} \left[- \frac{\breve{\mu}_p\epsilon_0\epsilon_s}{ {P} } \ln \frac{\breve{\mu}_p\epsilon_0\epsilon_s}{ {P} } \right] \\\nonumber & +\bar{\mu}_0 \lambda_0 \breve{\mu}_0^{-1}\sum^{m}_{p=0}{m \choose p} (-1)^p \left( \frac{4\tilde{\mu}_p}{ P } \ln \frac{4\tilde{\mu}_p}{ P } \right) \\\label{bd final1} \rightarrow & \frac{1}{P\ln^{-1}P}, \end{align} where the last approximation follows from the fact that each term in \eqref{es32} can be approximated as $\frac{1}{P\ln^{-1}P}$. {\it Remark 9:} Following the steps in the proof for Theorem \ref{theorem1} and also using \eqref{bd final1}, it is straightforward to show that the achievable diversity gain is one. 
In other words, the approximation obtained in \eqref{bd final1} shows that the existence of the virtual fast fading $|s_0|^2$ caps the diversity gain achieved by BAC-NOMA at one, even if the outage probability error floor is removed. \begin{figure}[t] \vspace{-2em} \begin{center}\subfigure[$R_0=0.1$ BPCU ]{\label{fig1a}\includegraphics[width=0.5\textwidth]{op1.eps}} \subfigure[$R_0=2$ BPCU]{\label{fig1b}\includegraphics[width=0.5\textwidth]{op2.eps}} \vspace{-1em} \end{center} \caption{ Outage performance of BAC-NOMA and WPT-NOMA. $R_s=1.2$ bit per channel use (BPCU). $d_h=d_0 = 50$ m and $d_g=5$ m. $\alpha=0.5$, $\beta=0.1$, and $\eta=0.1$. \vspace{-1em} }\label{fig1}\vspace{-0.5em} \end{figure} \section{Simulation Results}\label{section simulation} In this section, the performance of the two considered transmission schemes, BAC-NOMA and WPT-NOMA, is investigated by using computer simulations. For all simulations, we choose $\phi=3.5$, and the noise power is $-94$ dBm. In Fig. \ref{fig1}, the outage performance achieved by WPT-NOMA and BAC-NOMA is studied with different choices of $R_0$. In Fig. \ref{fig1a}, the choice $R_0=0.1$ bits per channel use (BPCU) is used. With $R_0=0.1$ BPCU and $R_s=1.2$ BPCU, it is straightforward to verify that the condition $\bar{\epsilon}_0 \bar{\epsilon}_s<1$ holds. As indicated in Theorem \ref{theorem1}, if $\bar{\epsilon}_0 \bar{\epsilon}_s<1$ holds, WPT-NOMA can avoid outage probability error floors, which is consistent with the observations made from Fig. \ref{fig1a}. In addition, Fig. \ref{fig1a} shows that the slope of the outage probability curve for WPT-NOMA is increased when $M$ is increased, which indicates that the diversity gain achieved by WPT-NOMA grows with $M$, an observation also consistent with the conclusion made in Theorem \ref{theorem1}. In Fig. \ref{fig1b}, the choice $R_0=2$ BPCU is used, which leads to the violation of the condition $\bar{\epsilon}_0 \bar{\epsilon}_s<1$. 
As a result, there are error floors for the outage probabilities achieved by WPT-NOMA, as shown in Fig. \ref{fig1b}. \begin{figure}[t]\centering \vspace{-1em} \epsfig{file=floor.eps, width=0.5\textwidth, clip=}\vspace{-0.5em} \caption{ Illustration of the outage probability error floor of BAC-NOMA. $R_0=0.1$ BPCU and $R_s=1.2$ BPCU. $d_h=d_0 = 100$ m and $d_g=1$ m. $\alpha=0.5$, $\beta=0.1$, and $\eta=0.1$. \vspace{-1em} }\label{fig3}\vspace{-0.5em} \end{figure} On the other hand, the two figures in Fig. \ref{fig1} show that BAC-NOMA always suffers from outage probability error floors, which is due to the fact that hybrid SIC cannot be implemented in BAC-NOMA systems. In addition, the figures also demonstrate that the performance of BAC-NOMA can be improved by increasing $M$, i.e., inviting more delay-tolerant devices to participate in NOMA transmission is beneficial for improving reception reliability. However, unlike WPT-NOMA, increasing $M$ does not change the slope of the outage probability curve for BAC-NOMA. It is worth pointing out that for the two considered choices of $R_0$, WPT-NOMA can always realize a smaller outage probability than BAC-NOMA, as shown in Fig. \ref{fig1}. In Fig. \ref{fig3}, the outage probability error floor experienced by BAC-NOMA is studied, where the term in the legend, `Error Floor of BAC-NOMA', refers to ${\rm P}(E_0)$. In order to clearly show the asymptotic behaviour of the outage probability, a larger transmit power range than that in Fig. \ref{fig1} is used. As can be observed from the figure, ${\rm P}(E_0)$ is a tight lower bound on the outage probability, and it is constant at high SNR, which implies that $E_0$ is the most damaging event and is the cause of the error floor of the outage probability. Another important observation is that increasing $M$ helps reduce the error floor, which confirms Lemma \ref{lemma1}. 
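The plateau of ${\rm P}(E_0)$ observed in Fig. \ref{fig3} can be reproduced qualitatively with a short Monte Carlo sketch. The parameters below are illustrative and normalized (unit-rate fading and no path loss), not those used to generate the figure:

```python
import random

def prob_E0(P, M, beta, eps0, lam_h=1.0, lam_g=1.0, lam_0=1.0, N=50_000):
    """Monte Carlo estimate of P(E_0) = P(gamma_1 > theta(h_0)), the lower
    bound on the BAC-NOMA outage probability in (floor1); gamma_1 is the
    smallest of M products of independent exponential channel gains."""
    rng = random.Random(0)  # paired samples across calls, for a clean comparison
    hits = 0
    for _ in range(N):
        gamma_1 = min(rng.expovariate(lam_h) * rng.expovariate(lam_g)
                      for _ in range(M))
        h0_sq = rng.expovariate(lam_0)
        theta = h0_sq / (beta**2 * eps0) - 1.0 / (beta**2 * P)
        hits += gamma_1 > theta
    return hits / N

beta, eps0, M = 0.5, 2.0**2 - 1.0, 3  # eps0 corresponds to R_0 = 2 BPCU
p_lo, p_hi = prob_E0(1e3, M, beta, eps0), prob_E0(1e6, M, beta, eps0)
print(p_lo, p_hi)  # nearly identical: P(E_0) flattens to a nonzero floor
```

Raising $P$ by three orders of magnitude barely changes the estimate, which is the error-floor behaviour predicted by \eqref{floor2}; increasing $M$ in the sketch shrinks the floor, as stated in Lemma \ref{lemma1}.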
On the other hand, WPT-NOMA does not suffer any outage probability error floor because the used target rate choices satisfy $\bar{\epsilon}_0 \bar{\epsilon}_s<1$. \begin{figure}[t] \vspace{-2em} \begin{center}\subfigure[Outage Probability]{\label{fig2a}\includegraphics[width=0.5\textwidth]{dis_op.eps}} \subfigure[Ergodic Data Rate]{\label{fig2b}\includegraphics[width=0.5\textwidth]{dis_rate.eps}} \vspace{-1em} \end{center} \caption{ Impact of path loss on the performance of BAC-NOMA and WPT-NOMA. $M=5$, $R_0=2$ BPCU, $R_s=3$ BPCU. $d_g=5$ m. $\alpha=0.5$, $\beta=0.1$, and $\eta=0.1$. \vspace{-1em} }\label{fig2}\vspace{-0.5em} \end{figure} In Fig. \ref{fig2}, the impact of path loss on the performance of WPT-NOMA and BAC-NOMA is studied. In Fig. \ref{fig2a}, the outage probability is used as the metric for the performance evaluation, whereas the ergodic data rate is used as the metric in Fig. \ref{fig2b}. The two figures in Fig. \ref{fig2} show that the performance of the two NOMA schemes is degraded when path loss becomes more severe. This deteriorating effect of path loss can be explained by using WPT-NOMA as an example. Increasing path loss not only increases the attenuation of the signal strength, but also reduces the energy harvested at the delay-tolerant devices. For a similar reason, the performance of BAC-NOMA is also significantly affected by path loss. Therefore, the ideal applications of BAC-NOMA and WPT-NOMA are indoor communication scenarios, where the distances between the nodes are not large. We note that WPT-NOMA also exhibits outage probability error floors in Fig. \ref{fig2a}, since the condition $\bar{\epsilon}_0 \bar{\epsilon}_s<1$ does not hold, an observation consistent with the previous figures. In addition, Fig. \ref{fig2a} shows that WPT-NOMA outperforms BAC-NOMA, if the outage probability is used as the metric for performance evaluation, which is also consistent with the previous numerical studies. However, Fig. 
\ref{fig2b} shows the interesting result that BAC-NOMA can outperform WPT-NOMA if the ergodic rate is used as the performance metric, particularly at high SNR and with small path loss. One possible reason is that WPT-NOMA relies on the time-switching WPT strategy, i.e., the first $\alpha T$ seconds are used for energy harvesting, and the remaining $(1-\alpha)T$ seconds are used for data transmission. In other words, there is less time available for WPT-NOMA to transmit, whereas BAC-NOMA can carry out transmission continuously. In order to clearly demonstrate the impact of $\alpha$ on the performance of WPT-NOMA, in Fig. \ref{fig4}, different choices of $\alpha$ are used. In particular, $\alpha=0.1$ and $\alpha=0.9$ are a pair of choices of interest, as explained in the following. The use of $\alpha=0.1$ means that the delay-tolerant devices use a small amount of time for energy harvesting and the majority of the time for data transmission, whereas $\alpha=0.9$ means that the majority of the time is used for energy harvesting. Fig. \ref{fig4} demonstrates that the choice of $\alpha=0.9$ results in the poorest performance among all the choices shown in the figure. This is due to the fact that there is not sufficient time for data transmission, even though a good amount of energy has been harvested and the delay-tolerant devices can use larger transmit powers than in the case with $\alpha=0.1$. It is worth pointing out that the choice of $\alpha=0.5$ yields the best performance among the choices shown in the figure. \begin{figure}[t]\centering \vspace{-1em} \epsfig{file=alpha.eps, width=0.5\textwidth, clip=}\vspace{-0.5em} \caption{ Impact of the choices of $\alpha$ on the performance of WPT-NOMA. $R_0=0.1$ BPCU and $R_s=2$ BPCU. $d_h=d_0 = 50$ m and $d_g=5$ m. $M=5$, $\beta=0.1$, and $\eta=0.1$. 
\vspace{-1em} }\label{fig4}\vspace{-0.5em} \end{figure} \section{Conclusions} In this paper, two energy-efficient and spectrally-efficient transmission strategies, namely WPT-NOMA and BAC-NOMA, were proposed by employing the energy and spectrum cooperation among the IoT devices. For the proposed WPT-NOMA scheme, hybrid SIC was used to improve reception reliability, and the developed analytical results demonstrate that WPT-NOMA can avoid outage probability error floors and realize the full diversity gain. Unlike WPT-NOMA, BAC-NOMA suffers from an outage probability error floor, and the asymptotic behaviour of this error floor was analyzed in the paper by applying EVT. In addition, the effect of using one device's signal as the carrier signal was studied, and its harmful impact on the diversity gain was revealed. We note that the provided simulation results show that the choice of $\alpha$ has a significant impact on the performance of WPT-NOMA, and therefore an important direction for future research is to develop low-complexity algorithms for optimizing $\alpha$. In addition, we note that BAC-NOMA suffers from the outage probability error floor because hybrid SIC cannot be implemented. However, provided that ${\rm U}_n$, $1\leq n \leq M$, can carry out non-coherent detection, it is possible to apply hybrid SIC to BAC-NOMA, which is another important direction for future research. \appendices \section{Proof for Theorem \ref{theorem1}} \label{proof1} The proof of the theorem can be divided into four steps: the first three steps analyze the asymptotic behaviour of $T_0$, $T_m$ for $1\leq m\leq M-1$, and $T_M$, respectively, and the last step studies the overall diversity gain. 
\subsection{Asymptotic Study of $T_0$} This section focuses on the high-SNR approximation of $T_0$, which can be rewritten as follows: \begin{align} \nonumber T_0 = &{\rm P}\left( {\rm R}_{M} ^{WP,1}<R_s , |\mathcal{S}_2|=0\right) \\ \label{T 11} = &{\rm P}\left( \gamma_M<\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} , \gamma_{1}>\tau(h_0) \right) . \end{align} As can be observed from \eqref{T 11}, $T_0$ is a function of two order statistics, $\gamma_1$ and $\gamma_M$, whose joint pdf is given by \cite{Arnoldbook} \begin{align} f_{\gamma_1,\gamma_M}(x,y) = \frac{M!}{(M-2)!}f_{\gamma}(x)f_{\gamma}(y)\left[F_{\gamma}(y)-F_{\gamma}(x)\right]^{M-2}. \end{align} Denote by $T_{0|h_0}$ the value of $T_0$ when $h_0$ is treated as a constant. Therefore, $T_{0|h_0}$ can be expressed as follows: \begin{align} \nonumber T_{0|h_0} =& \frac{M!}{(M-2)!}\int^{\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}}}_{\tau(h_0)}f_{\gamma}(x)\int^{\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}}}_{x}f_{\gamma}(y) \left[F_{\gamma}(y)-F_{\gamma}(x)\right]^{M-2}dydx \\\nonumber =& \frac{M!}{(M-1)!}\int^{\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}}}_{\tau(h_0)}f_{\gamma}(x) \left[F_{\gamma}\left( \frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}}\right)-F_{\gamma}(x)\right]^{M-1}dx. \end{align} $T_{0|h_0} $ can be further simplified as follows: \begin{align} T_{0|h_0} =& \left[F_{\gamma}\left( \frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}}\right)-F_{\gamma}(\tau(h_0))\right]^{M} . \end{align} Therefore, $T_0$ can be obtained by finding the expectation of $T_{0|h_0} $ with respect to $h_0$: \begin{align}\nonumber T_0 = & \mathcal{E}_{h_0}\left\{ T_{0|h_0} \right\}. \end{align} We note that $\tau(h_0)$ can have different forms depending on the value of $|h_0|^2$. 
In particular, $\tau(h_0)= 0$ means \begin{align} \frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \leq 0 , \end{align} which requires \begin{align}\label{low bound h} |h_0|^2 \leq \frac{\bar{\epsilon}_0 }{ P } . \end{align} For the case $\tau(h_0)\neq 0$, the probability shown in \eqref{T 11} requires $\tau(h_0)<\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} $. This hidden constraint imposes another constraint on $|h_0|^2$ as follows: \begin{align} \frac{|h_0|^2}{\bar{\epsilon}_0 } -\frac{1}{ P } <\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{ P }, \end{align} which can be explicitly expressed as follows: \begin{align} |h_0|^2 < \frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) } .\label{upper bound h} \end{align} By using the constraints shown in \eqref{low bound h} and \eqref{upper bound h}, $T_0$ can be expressed as follows: \begin{align} \label{tx001} T_0 = & \lambda_0 \int^{\frac{\bar{\epsilon}_0}{P}}_{0}\left[F_{\gamma}\left( \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}}\right)-F_{\gamma}(0)\right]^{M} e^{-\lambda_0 x} dx \\\nonumber & +\lambda_0 \int_{\frac{\bar{\epsilon}_0}{P}}^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }e^{-\lambda_0 x} \left[F_{\gamma}\left( \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}}\right)-F_{\gamma}\left( \frac{x}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \right)\right]^{M} dx. \end{align} We note that the upper bound on $|h_0|^2$, ${\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} $, is crucial to remove outage probability error floors and realize the full diversity gain, as shown in the following. 
In particular, one can observe that both $ \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}}$ and $ \frac{x}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} $ go to zero for $P\rightarrow \infty$ in the two integrals considered in \eqref{tx001}. Therefore, the arguments of the Bessel functions in $T_0$ go to zero for $P\rightarrow \infty$. Recall that $xK_1(x) \approx 1+\frac{x^2}{2}\ln \frac{x}{2}$, for $x\rightarrow 0$ \cite{Dingkri04}. Therefore, the CDF of the unordered channel gain can be approximated as follows: \begin{align}\label{bessel approximation} F_{\gamma}(x) =&1 - 2 \sqrt{ \lambda_h\lambda_g x}K_1\left(2 \sqrt{ \lambda_h\lambda_g x}\right) \\\nonumber \approx & 1 - \left( 1+ \lambda_h\lambda_g x \ln (\lambda_h\lambda_g x) \right)= - \lambda_h\lambda_g x \ln (\lambda_h\lambda_g x) , \end{align} for $x\rightarrow 0$. We note that for $x\rightarrow 0$, $\ln (\lambda_h\lambda_g x) <0$ and hence the approximation for $F_{\gamma}(x) $ in \eqref{bessel approximation} is still positive. Therefore, $T_0$ can be approximated at high SNR as follows: \begin{align} \label{343} T_0 \approx & \lambda_0\int^{\frac{\bar{\epsilon}_0}{P}}_{0}\left[- \lambda_h\lambda_g \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}} \ln \left(\lambda_h\lambda_g \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}}\right) \right]^{M} dx \\\nonumber &+\lambda_0 \int_{\frac{\bar{\epsilon}_0}{P}}^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }\left[ \lambda_h\lambda_g \left(\frac{x}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \right) \ln \left(\lambda_h\lambda_g \left(\frac{x}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \right) \right) \right. \\\nonumber &\left. - \lambda_h\lambda_g \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}} \ln \left(\lambda_h\lambda_g \frac{\bar{\epsilon}_s(Px+1)}{\eta P \bar{\alpha}}\right) \right]^{M} dx. 
\end{align} In order to obtain a more insightful asymptotic expression of $T_0$, the expression in \eqref{343} can be rewritten as follows: \begin{align}\nonumber T_0 \approx & \frac{\lambda_0}{P} \int^{ \bar{\epsilon}_0 }_{0}\left[- \lambda_h\lambda_g \frac{\bar{\epsilon}_s(y+1)}{\eta P \bar{\alpha}} \ln \left(\lambda_h\lambda_g \frac{\bar{\epsilon}_s(y+1)}{\eta P \bar{\alpha}}\right) \right]^{M} dy\\\nonumber &+ \frac{\lambda_0}{P}\int_{ \bar{\epsilon}_0 }^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }\left[ \frac{\lambda_h\lambda_g }{P} \left(\frac{y}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta \bar{\alpha}} \right) \right. \\\nonumber &\left. \times\ln \left(\frac{\lambda_h\lambda_g}{P} \left(\frac{y}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta \bar{\alpha}} \right) \right) - \lambda_h\lambda_g \frac{\bar{\epsilon}_s(y+1)}{\eta P \bar{\alpha}} \ln \left(\lambda_h\lambda_g \frac{\bar{\epsilon}_s(y+1)}{\eta P \bar{\alpha}}\right) \right]^{M} dy \\\nonumber =& \frac{\lambda_0}{P} \int^{ \bar{\epsilon}_0 }_{0}\left[- \frac{b_1(y)}{P} \ln \left(\frac{b_1(y)}{P}\right) \right]^{M} dy+ \frac{\lambda_0}{P}\int_{ \bar{\epsilon}_0 }^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }\\ \label{two int} &\times \left[ \frac{b_2(y)}{P}\ln \left(\frac{b_2(y)}{P} \right) - \frac{b_1(y)}{P} \ln \left(\frac{b_1(y)}{P}\right) \right]^{M} dy , \end{align} where $y=Px$, $b_1(y)=\lambda_h\lambda_g \frac{\bar{\epsilon}_s(y+1)}{\eta \bar{\alpha}}$ and $b_2(y)= \lambda_h\lambda_g \left(\frac{y}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta \bar{\alpha}} \right)$. It is important to point out that both $b_1(y)$ and $b_2(y)$ do not depend on $P$. Denote the two integrals in \eqref{two int} by $\tilde{Q}_1$ and $\tilde{Q}_2$, respectively. 
For $\tilde{Q}_1$, the following approximation can be used: \begin{align}\label{eqx40} \frac{b_1(y)}{P} \ln \left( \frac{b_1(y)}{P} \right) =& \frac{b_1(y)}{P} \left[ \ln b_1(y) -\ln P \right] \\ \underset{P\rightarrow \infty}{\approx }& - \frac{ b_1(y)}{P} \ln P =- \frac{b_1(y)}{P\ln^{-1}P} , \label{eqx41} \end{align} since $b_1(y)$ is finite and strictly larger than zero for the integral considered in $ \tilde{Q}_1$. Therefore, $\tilde{Q}_1$ can be approximated as follows: \begin{align} \tilde{Q}_1&\approx \int^{ \bar{\epsilon}_0 }_{0}\left[\frac{b_1(y)}{P\ln^{-1}P} \right]^{M} dy = \frac{e_1}{P^{M}\ln^{-M}P} = \mathcal{O}\left( \frac{1}{P^{M}\ln^{-M}P}\right), \end{align} where $\mathcal{O}(\cdot)$ indicates that constant multiplicative coefficients are omitted, and the last approximation follows from the fact that $e_1= \int^{ \bar{\epsilon}_0 }_{0}\left[ {b_1(y)} \right]^{M} dy$ is constant and not a function of $P$. The approximation for $\tilde{Q}_2$ is more complicated since $b_2(y)$ can be zero for the considered integral and hence $\ln b_2(y)$ can be unbounded. Unlike $\tilde{Q}_1$, $\tilde{Q}_2$ can be approximated as follows: \begin{align} \nonumber \tilde{Q}_2=&\sum^{M}_{p=0}\frac{(-1)^p}{P^M}{M\choose p} \int_{ \bar{\epsilon}_0 }^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }b_2(y) ^{M-p}b_1(y)^p \left[ \ln b_2(y) -\ln P \right]^{M-p} \left[ \ln b_1(y) - \ln P \right]^{p} dy \\\nonumber =&\sum^{M}_{p=0}\frac{(-1)^p}{P^M} \int_{ \bar{\epsilon}_0 }^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }b_2(y) ^{M-p}b_1(y)^p \left( \sum^{M-p}_{i=0}(-1)^i{M-p\choose i}( \ln b_2(y))^{M-p-i} (\ln P)^i\right) \\ \label{dominant1} &\times \left(\sum^{p}_{j=0}{p\choose j}(-1)^j ( \ln b_1(y))^{p-j}(\ln P)^j \right) dy . 
\end{align} At high SNR, the term with $(\ln P)^M$ is dominant, compared to the terms with $(\ln P)^m$, $m<M$, which means that \eqref{dominant1} can be further approximated as follows: \begin{align} \nonumber \tilde{Q}_2\approx &\frac{(\ln P)^M }{P^M}\sum^{M}_{p=0} \underset{i+j=M}{\sum}(-1)^{p+i+j}{M-p\choose i} {p\choose j} \\ &\times \int_{ \bar{\epsilon}_0 }^{{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} }b_2(y) ^{M-p}b_1(y)^p( \ln b_1(y))^{p-j} \\\nonumber &\times ( \ln b_2(y))^{M-p-i} dy =\mathcal{O}\left( \frac{1}{P^{M}\ln^{-M}P}\right). \end{align} Therefore, with $P\rightarrow \infty$, $T_0$ can be approximated as follows: \begin{align} \label{t00} T_0 =\frac{\lambda_0}{P}\tilde{Q}_1+\frac{\lambda_0}{P}\tilde{Q}_2 =\mathcal{O}\left( \frac{1}{P^{M+1}\ln^{-M}P}\right). \end{align} \subsection{Asymptotic Study of $T_m$, $1\leq m \leq M$} This subsection focuses on $T_m$, $1\leq m \leq M-1$, which can be expressed as follows: \begin{align}\nonumber T_m = &{\rm P}\left( {\rm R}_{m} ^{WP,2}<R_s, {\rm R}_{M} ^{WP,1}<R_s , |\mathcal{S}_2|=m\right) \\\nonumber = &{\rm P}\left( \gamma_m<\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}},\gamma_{m}<\tau(h_0), \gamma_{m+1}>\tau(h_0), \gamma_M<\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} \right) . \end{align} For the case $1\leq m\leq M-1$, $\tau(h_0)\neq 0$ is required, which means \begin{align}\label{low bound h2} \frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} >0, \end{align} or equivalently $|h_0|^2 > \frac{\bar{\epsilon}_0 }{ P } $. Furthermore, the requirement $\tau(h_0)<\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} $ leads to the constraint $ |h_0|^2 < \frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }$, as discussed in \eqref{upper bound h}. 
Therefore, $T_m$ can be rewritten as follows: \begin{align}\nonumber T_m =& {\rm P}\left( \gamma_m<\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} , \gamma_M<\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} , \right. \\\nonumber &\left.\gamma_{m}<\frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} , \gamma_{m+1}>\frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} \right) \\\label{tm1} =&{\rm P}\left( \gamma_m< b_{h_0} , \gamma_{m+1}> \tau(h_0) , \gamma_M<a(h_0) \right) , \end{align} where $a(h_0)=\frac{\bar{\epsilon}_s(P|h_0|^2+1)}{\eta P \bar{\alpha}} $ and $ b_{h_0} = \min\left\{ \frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} , \tau(h_0)\right\}$. As can be observed from \eqref{tm1}, $T_m$, $1\leq m \leq M-1$, is a function of three order statistics, $\gamma_m$, $\gamma_{m+1}$, and $\gamma_{M}$. Recall that the joint pdf of three order statistics is given by \cite{Arnoldbook} \begin{align}\label{joint pdf three} &f_{\gamma_m,\gamma_{m+1},\gamma_{M}}(x,y,z) = c_m F_{\gamma}(x)^{m-1}\\\nonumber &\times \left(F_{\gamma}(z) - F_{\gamma}(y)\right)^{M-m-2} f_{\gamma}(x)f_{\gamma}(y)f_{\gamma}(z), \end{align} where $c_m=\frac{M!}{(m-1)!(M-m-2)!}$. Denote by $T_{m|h_0}$ the value of $T_m$ when $h_0$ is fixed. By using the joint pdf in \eqref{joint pdf three}, $T_{m|h_0} $ can be expressed as follows: \begin{align} T_{m|h_0} =& {\rm P}\left( \gamma_m< b_{h_0} , \gamma_{m+1}> \tau(h_0) , \gamma_M<a(h_0) \right) \\\nonumber =& c_m \int^{b_{h_0} }_{0}F_{\gamma}(x)^{m-1} f_{\gamma}(x)dx \int^{a(h_0)}_{ \tau(h_0) }f_{\gamma}(y) \int^{a(h_0)}_{y}\left(F_{\gamma}(z) - F_{\gamma}(y)\right)^{M-m-2} f_{\gamma}(z)dzdy. 
\end{align} By using the property of CDFs, $T_{m|h_0} $ can be more explicitly expressed as follows: \begin{align} \nonumber T_{m|h_0} =& \bar{ c}_m F_{\gamma}(b_{h_0} )^{m} \int^{a(h_0)}_{ \tau(h_0) }\left[\left(F_{\gamma}(a(h_0)) - F_{\gamma}(y)\right)^{M-m-1} -\left(F_{\gamma}(y) - F_{\gamma}(y)\right)^{M-m-1} \right]f_{\gamma}(y)dy \\ =& \bar{ c}_m F_{\gamma}(b_{h_0} )^{m} \int^{a(h_0)}_{ \tau(h_0) } \left[F_{\gamma}(a(h_0)) - F_{\gamma}(y)\right]^{M-m-1} f_{\gamma}(y) dy, \end{align} where $\bar{c}_m=\frac{M!}{m!(M-m-1)!}$. The expression of $T_{m|h_0} $ can be further simplified as follows: \begin{align} \nonumber T_{m|h_0} =& \tilde{ c}_m F_{\gamma}(b_{h_0} )^{m} \left( \left[F_{\gamma}(a(h_0)) - F_{\gamma}(\tau(h_0))\right]^{M-m} \right.\\\nonumber &\left. - \left[F_{\gamma}(a(h_0)) - F_{\gamma}(a(h_0))\right]^{M-m} \right) \\ \label{m general} =& \tilde{ c}_m F_{\gamma}(b_{h_0} )^{m} \left[F_{\gamma}(a(h_0)) - F_{\gamma}(\tau(h_0))\right]^{M-m} , \end{align} where $\tilde{c}_m=\frac{M!}{m!(M-m)!}$. $T_m $ can be obtained by calculating the expectation of $T_{m|h_0} $ with respect to $|h_0|^2$ as follows: \begin{align} T_m = & \mathcal{E}_{h_0}\left\{ T_{m|h_0} \right\} \\\nonumber =& \tilde{ c}_m \mathcal{E}_{h_0}\left\{ F_{\gamma}(b_{h_0} )^{m} \left[F_{\gamma}(a(h_0)) - F_{\gamma}(\tau(h_0))\right]^{M-m} \right\}. \end{align} Recall that $b_{h_0}=\tau(h_0)$ if the constraint $\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} >\frac{|h_0|^2}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}}$ is satisfied, which imposes the following constraint on $ |h_0|^2$: \begin{align}\label{cons3} |h_0|^2 <\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } .
\end{align} Therefore, $T_m $ can be more explicitly expressed as follows: \begin{align} T_m \label{integral tm} = & \tilde{ c}_m \lambda_0 \int^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }_{\frac{\bar{\epsilon}_0 }{ P } } \left( \left[F_{\gamma}(a(x)) - F_{\gamma}(\tau(x))\right]^{M-m} \right) \\\nonumber &\times F_{\gamma}(\tau(x) )^{m}e^{-\lambda_0 x}dx +\tilde{ c}_m\lambda_0F_{\gamma}\left(\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} \right)^{m} \\\nonumber &\times \int_{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} \left( \left[F_{\gamma}(a(x)) - F_{\gamma}(\tau(x))\right]^{M-m} \right) e^{-\lambda_0 x}dx, \end{align} where the constraints on $|h_0|^2$ shown in \eqref{upper bound h}, \eqref{low bound h2} and \eqref{cons3} have been used. We note that for the integrals considered in \eqref{integral tm}, $ \tau(x) \rightarrow 0$ for $P\rightarrow \infty$, which can be explained in the following. Recall that \begin{align} \tau(x) = \frac{x}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta P\bar{\alpha}} . \end{align} For the integrals considered in \eqref{integral tm}, ${\frac{\bar{\epsilon}_0 }{ P } } \leq x\leq {\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } } $ and ${\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }\leq x\leq {\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} $. Therefore, indeed $x\rightarrow 0$ for $P\rightarrow \infty$, which means that $ \tau(x) \rightarrow 0$. Similarly, for the integrals considered in \eqref{integral tm}, the following approximation also holds \begin{align} a(x)= \frac{\bar{\epsilon}_sx}{\eta \bar{\alpha}}+\frac{\bar{\epsilon}_s }{\eta P \bar{\alpha}}\underset{P\rightarrow\infty}{\longrightarrow} 0. 
\end{align} By using these asymptotic behaviours of $\tau(x)$ and $a(x)$, the probability $T_m $ can be approximated as follows: \begin{align}\nonumber T_m \approx & \tilde{ c}_m \lambda_0 \int^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }_{\frac{\bar{\epsilon}_0 }{ P } } \left[- \lambda_h\lambda_g \tau(x) \ln (\lambda_h\lambda_g \tau(x) ) \right]^{m}\left[ \lambda_h\lambda_g \tau(x)\right. \\\nonumber &\times \left. \ln (\lambda_h\lambda_g \tau(x)) - \lambda_h\lambda_g a(x) \ln (\lambda_h\lambda_g a(x)) \right]^{M-m} dx \\\nonumber &+\tilde{ c}_m\lambda_0\left[ - \lambda_h\lambda_g \frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} \ln \left(\lambda_h\lambda_g \frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}}\right)\right] ^{m} \\\nonumber &\times \int_{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} \left[ \lambda_h\lambda_g \tau(x) \ln (\lambda_h\lambda_g \tau(x)) - \lambda_h\lambda_g a(x) \ln (\lambda_h\lambda_g a(x)) \right]^{M-m} dx, \end{align} for $P\rightarrow \infty$. Define $ \bar{\tau}(h_0) = P\lambda_h\lambda_g\tau(h_0) $ and $ \bar{a}(h_0)= P\lambda_h\lambda_g a(h_0)$. 
Therefore, $T_m $ can be expressed as follows: \begin{align} T_m \approx & \tilde{ c}_m\lambda_0 \int^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }_{\frac{\bar{\epsilon}_0 }{ P } } \left[- \frac{ \bar{\tau}(x)}{P} \ln \left( \frac{\bar{ \tau}(x)}{P} \right) \right]^{m} \\\nonumber &\times \left[ \frac{\bar{ \tau}(x)}{P} \ln \left(\frac{\bar{ \tau}(x)}{P}\right) - \frac{\bar{ a}(x)}{P} \ln \left(\frac{\bar{ a}(x)}{P}\right) \right]^{M-m} dx \\\nonumber &+\tilde{ c}_m\lambda_0\left[ - \frac{\lambda_h\lambda_g\bar{\epsilon}_s}{\eta P\bar{\alpha}} \ln \left(\frac{\lambda_h\lambda_g \bar{\epsilon}_s}{\eta P\bar{\alpha}}\right)\right] ^{m} \int_{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ P(1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} \\\nonumber &\times \left[ \frac{\bar{ \tau}(x)}{P} \ln \left( \frac{ \bar{\tau}(x)}{P}\right) - \frac{\bar{ a}(x)}{P} \ln \left( \frac{\bar{ a}(x)}{P}\right) \right]^{M-m} dx. \end{align} In order to obtain a more insightful asymptotic expression, we substitute the following three parameters, $y=Px$, \begin{align} \tilde{\tau}(y) = \lambda_h\lambda_g\left(\frac{y}{\bar{\epsilon}_0\eta \bar{\alpha}} -\frac{1}{\eta \bar{\alpha}} \right) , \end{align} and \begin{align} \tilde{a}(y)= \lambda_h\lambda_g\left(\frac{\bar{\epsilon}_sy}{\eta \bar{\alpha}}+\frac{\bar{\epsilon}_s }{\eta \bar{\alpha}}\right), \end{align} into the expression of $T_m$, which yields the following expression: \begin{align} T_m \approx & \frac{\tilde{ c}_m \lambda_0}{P} \int^{ \bar{\epsilon}_0 (1+\bar{\epsilon}_s) }_{ \bar{\epsilon}_0 } \left[- \frac{ \tilde{\tau}(y)}{P} \ln \left( \frac{\tilde{ \tau}(y)}{P} \right) \right]^{m} \\\nonumber &\times \left[ \frac{\tilde{ \tau}(y)}{P} \ln \left(\frac{\tilde{ \tau}(y)}{P}\right) - \frac{\tilde{ a}(y)}{P} \ln \left(\frac{\tilde{ a}(y)}{P}\right) \right]^{M-m} dy \\\nonumber &+\frac{\tilde{ c}_m\lambda_0}{P}\left[ - \frac{\lambda_h\lambda_g\bar{\epsilon}_s}{\eta 
P\bar{\alpha}} \ln \left(\frac{\lambda_h\lambda_g \bar{\epsilon}_s}{\eta P\bar{\alpha}}\right)\right] ^{m} \int_{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s) }{ (1-\bar{\epsilon}_0 \bar{\epsilon}_s ) }} \left[ \frac{\tilde{ \tau}(y)}{P} \ln \left( \frac{ \tilde{\tau}(y)}{P}\right) - \frac{\tilde{ a}(y)}{P} \ln \left( \frac{\tilde{ a}(y)}{P}\right) \right]^{M-m} dy. \end{align} It is important to point out that both $ \tilde{\tau}(y) $ and $\tilde{a}(y)$ are constants and not functions of $P$. By using steps similar to those used to obtain the approximation of $T_0$, $T_m $ can be approximated as follows: \begin{align} \label{tm xx} T_m = & \mathcal{O}\left( \frac{1}{P^{M+1}\ln^{-M}P}\right) . \end{align} \subsection{Asymptotic Study of $T_M$} For the special case $T_M$, we first recall that $T_M$ can be expressed as follows: \begin{align}\nonumber T_M = &{\rm P}\left( {\rm R}_{M} ^{WP,2}<R_s, {\rm R}_{M} ^{WP,1}<R_s , |\mathcal{S}_2|=M\right) \\ \label{TM x} = &{\rm P}\left( \gamma_M<\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}}, \gamma_{M}<\tau(h_0) \right) . \end{align} By using the marginal pdf of the largest order statistic, $T_M$ can be straightforwardly expressed as follows: \begin{align} \label{TM re} T_M = & \mathcal{E}_{h_0}\left\{ F_{\gamma}\left(\min\left\{\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}},\tau(h_0)\right\}\right)^{M} \right\}. \end{align} As can be observed from \eqref{TM x}, $T_M$ is a function of $\gamma_M$ only, which is different from $T_m$, $1\leq m\leq M-1$. It is important to point out that the constraint on $|h_0|^2$ shown in \eqref{upper bound h} does not exist for $T_M$. This causes the reduction of the diversity gain from $M+1$ to $M$, as shown in the following.
$T_M $ can be more explicitly expressed as follows: \begin{align} T_M = & \underset{T_{M,1}}{\underbrace{ \lambda_0 \int^{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }_{\frac{\bar{\epsilon}_0 }{ P } } F_{\gamma}(\tau(x) )^{M}e^{-\lambda_0 x}dx}} \\ \label{eq52} &+\lambda_0 \underset{T_{M,2}}{\underbrace{ F_{\gamma}\left(\frac{\bar{\epsilon}_s}{\eta P\bar{\alpha}} \right)^{M} }} \underset{T_{M,3}}{\underbrace{ \int_{\frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }^{\infty} e^{-\lambda_0 x}dx}}. \end{align} By following steps similar to those used to analyze $T_m$, $1\leq m\leq M-1$, it is straightforward to show that $T_{M,1} =\mathcal{O}\left( \frac{1 }{P^{M+1} \ln^{-M}P} \right)$ and $T_{M,2} =\mathcal{O}\left( \frac{1 }{P^{M} \ln^{-(M-1)}P} \right)$. What makes the high SNR behaviour of $T_M$ different from that of $T_m$, $0\leq m\leq M-1$, is $T_{M,3}$. It is important to point out that the upper end of the integration range of $T_{M,3}$ is $\infty$, instead of a value which goes to zero for $P\rightarrow \infty$. As a result, $\lambda_0T_{M,3}= e^{-\lambda_0 \frac{\bar{\epsilon}_0 (1+\bar{\epsilon}_s)}{ P } }\underset{P\rightarrow \infty}{\longrightarrow} 1$, instead of decaying as $\frac{1}{P}$. Therefore, $T_M$ can be approximated at high SNR as follows: \begin{align} \label{tm xx2} T_M = & \mathcal{O}\left( \frac{1 }{P^{M} \ln^{-(M-1)}P}\right) . \end{align} \subsection{Overall High-SNR Approximation} By substituting \eqref{t00}, \eqref{tm xx} and \eqref{tm xx2} in \eqref{wp outage}, we can conclude that the overall outage probability can be approximated as follows: \begin{align} \label{ovx} {\rm P}^{WP} = & \mathcal{O}\left( \frac{1 }{P^{M} \ln^{-(M-1)}P} \right), \end{align} for $P\rightarrow \infty$. \eqref{ovx} indicates that $T_M$ is the dominant term in \eqref{wp outage} at high SNR.
The diversity gain achieved by WPT-NOMA can be obtained as follows: \begin{align} d=& \underset{P\rightarrow \infty}{\lim}-\frac{\log {\rm P}^{WP} }{\log P} =\underset{P\rightarrow \infty}{\lim}\frac{\log \left(P^{M} \ln^{-(M-1)}P\right)}{\log P} \\\nonumber =&\underset{P\rightarrow \infty}{\lim}\left[ \frac{\log P^{M} }{\log P}- \frac{\log \ln^{M-1}P }{\log P}\right]. \end{align} The following limit holds at high SNR: \begin{align} \underset{P\rightarrow \infty}{\lim} \frac{\log \ln^{M-1}P }{\log P}=& \underset{P\rightarrow \infty}{\lim} \frac{\log e \ln\left( \ln^{M-1}P \right)}{\log e \ln P}= \underset{P\rightarrow \infty}{\lim} \frac{M-1}{\ln P} =0, \end{align} where L'Hospital's rule is used. Therefore, the diversity gain achieved by WPT-NOMA can be obtained as follows: \begin{align} d=&\underset{P\rightarrow \infty}{\lim} \frac{\log P^{M} }{\log P} =M, \end{align} and the theorem is proved. \section{Proof for Lemma \ref{lemma1}} \label{lemma1proof} In order to study the asymptotic behaviour of $ {\rm P}\left( E_0\right) $, EVT is applied in the following. Recall that the limiting CDF of the smallest order statistic should follow one of three distributions, namely the Fr\'echet type, the modified Weibull type, and the Gumbel type \cite[Theorem 8.3.5]{Arnoldbook}. For the considered order statistic, $\gamma_1$, the modified Weibull type is applicable, as explained in the following. Let $F^{-1}_{\gamma}(a)$ denote the inverse of the CDF of the unordered channel gain, i.e., $F_{\gamma}\left(F^{-1}_{\gamma}(a)\right)=a$. The first condition for the considered CDF to be of the modified Weibull type of EVT is that $F^{-1}_{\gamma}(0)$ should be finite \cite[Theorem 8.3.6]{Arnoldbook}. For the considered CDF, we have $F^{-1}_{\gamma}(0)=0$, which is indeed finite.
The second condition is to show that the following limit exists: \begin{align} \underset{\epsilon\rightarrow 0^+}{\lim}\frac{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon x\right)}{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon \right)}=x^{\breve{\alpha}}, \end{align} for all $x>0$, where $\breve{\alpha}$ denotes a constant parameter. For the considered CDF, the limit can be expressed as follows: \begin{align}\label{evt1} &\underset{\epsilon\rightarrow 0^+}{\lim}\frac{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon x\right)}{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon \right)} \\\nonumber =&\underset{\epsilon\rightarrow 0^+}{\lim}\frac{F_{\gamma}\left( \epsilon x\right)}{F_{\gamma}\left( \epsilon \right)} = \underset{\epsilon\rightarrow 0^+}{\lim}\frac{1 - 2 \sqrt{ \lambda_h\lambda_g \epsilon x}K_1\left(2 \sqrt{ \lambda_h\lambda_g \epsilon x}\right) }{1 - 2 \sqrt{ \lambda_h\lambda_g \epsilon}K_1\left(2 \sqrt{ \lambda_h\lambda_g \epsilon}\right) } . \end{align} Note that in \eqref{evt1}, $x$ is constant, and the limit is taken with respect to $\epsilon$. When $\epsilon\rightarrow 0$, the approximation in \eqref{bessel approximation} can be applied and the limit can be evaluated as follows: \begin{align}\label{evt2} &\underset{\epsilon\rightarrow 0^+}{\lim}\frac{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon x\right)}{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon \right)} = \underset{\epsilon\rightarrow 0^+}{\lim}\frac{- \lambda_h\lambda_g \epsilon x \ln (\lambda_h\lambda_g \epsilon x) }{- \lambda_h\lambda_g \epsilon \ln (\lambda_h\lambda_g \epsilon) } =\underset{\epsilon\rightarrow 0^+}{\lim}\frac{ x \ln (\lambda_h\lambda_g \epsilon x) }{ \ln (\lambda_h\lambda_g \epsilon) } .
\end{align} By applying L'Hospital's rule, the limit can be evaluated as follows: \begin{align}\label{evt3} &\underset{\epsilon\rightarrow 0^+}{\lim}\frac{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon x\right)}{F_{\gamma}\left(F^{-1}_{\gamma}(0)+\epsilon \right)} =\underset{\epsilon\rightarrow 0^+}{\lim}\frac{ x \frac{\lambda_h\lambda_g x}{\lambda_h\lambda_g \epsilon x} }{ \frac{\lambda_h\lambda_g }{\lambda_h\lambda_g \epsilon} } =x, \end{align} which means that $\breve{\alpha}=1$ for the considered order statistics. As a result, the smallest channel gain will follow the modified Weibull type with $\breve{\alpha}=1$, i.e., \begin{align}\label{ab} \frac{\gamma_1-a_m}{b_m} \sim G_2^*(x;\breve{\alpha}) , \end{align} where $G_2^*(x;\breve{\alpha}) $ denotes the modified Weibull distribution: \begin{align} G_2^*(x;\breve{\alpha}) \triangleq 1-G_2(-x;\breve{\alpha}) = 1- e^{-x}, \end{align} and $G_2(x;\breve{\alpha})$ denotes the Weibull distribution defined as follows: \begin{align} G_2(x;\breve{\alpha}) \triangleq \left\{\begin{array}{ll}e^{-(-x)^{\breve{\alpha}}}, &x<0\\ 1,&x\geq 0 \end{array}\right.. \end{align} The two parameters in \eqref{ab}, $a_m$ and $b_m$, are given by \begin{align}\label{am} a_m \triangleq F^{-1}_{\gamma}(0)=0, \end{align} and \begin{align}\label{bmxx} b_m \triangleq F^{-1}_{\gamma}\left(\frac{1}{M}\right) - F^{-1}_{\gamma}(0)=F^{-1}_{\gamma}\left(\frac{1}{M}\right). \end{align} The challenging step is to find an explicit expression for $b_m$, which can be obtained by solving the following equation: \begin{align} \label{eqx3} 1 - 2 \sqrt{ \lambda_h\lambda_g b_m}K_1\left(2 \sqrt{ \lambda_h\lambda_g b_m}\right) = \frac{1}{M} . \end{align} For $M\rightarrow \infty$, we have $\frac{1}{M}\rightarrow 0$ and hence $b_m\rightarrow 0$.
Because $b_m\rightarrow 0$, the approximation in \eqref{bessel approximation} can be used to simplify \eqref{eqx3} as follows: \begin{align} \label{lambert1} - \lambda_h\lambda_g b_m \ln (\lambda_h\lambda_g b_m) =\frac{1}{M}. \end{align} In order to apply the Lambert W function, \eqref{lambert1} needs to be written as follows: \begin{align} \label{lambert2} - \frac{1}{M} &=-\frac{1}{M \lambda_h\lambda_g b_m}e^{-\frac{1}{M\lambda_h\lambda_g b_m}}, \end{align} which means that the solution of \eqref{lambert2} can be expressed as follows: \begin{align} \label{lambert3} -\frac{1}{M \lambda_h\lambda_g b_m} &=W\left( - \frac{1}{M} \right), \end{align} or equivalently \begin{align} \label{lambert4} b_m &=-\frac{1}{ M \lambda_h\lambda_gW\left( - \frac{1}{M} \right)}, \end{align} where $W(\cdot)$ denotes the Lambert W function. Because $- \frac{1}{M} $ is negative, there are two solutions for $W\left( - \frac{1}{M} \right)$, namely $W_0\left( - \frac{1}{M} \right)$ and $W_{-1}\left( - \frac{1}{M} \right)$ \cite{6559999}. Recall that $W_0(x)\approx x$ for $x\rightarrow 0$, which means that $ b_m =-\frac{1}{ M \lambda_h\lambda_gW_0\left( - \frac{1}{M} \right)}\rightarrow \frac{1}{ \lambda_h\lambda_g}$ for $M\rightarrow \infty$. This contradicts \eqref{bmxx}, which indicates that $b_m\rightarrow 0$ for $M\rightarrow \infty$. Therefore, $W_0\left( - \frac{1}{M} \right)$ is not the solution for the considered case, and we are interested in the other branch, $W_{-1}\left( - \frac{1}{M} \right)$. Recall that $W_{-1}\left( x\right)$ can be bounded as follows \cite{6559999}: \begin{align} -1-\sqrt{2u}-u< W_{-1}\left( -e^{-u-1}\right)<-1-\sqrt{2u} -\frac{2}{3}u,\text{ for } u>0.
\end{align} By applying these bounds with $u=\ln M-1$ (so that $-e^{-u-1}=-\frac{1}{M}$), $W_{-1}\left( - \frac{1}{M} \right)$ can be bounded as follows: \begin{align} -\ln M-\sqrt{2(\ln M-1)}< W_{-1}\left( - \frac{1}{M} \right)< -\frac{2}{3}\ln M-\frac{1}{3}-\sqrt{2(\ln M-1)}, \end{align} for $M>e$, which yields the following approximation: \begin{align} W_{-1}\left( - \frac{1}{M} \right)=- \mathcal{O}( \ln M). \end{align} Therefore, $b_m$ can be approximated as follows: \begin{align} \label{lambert5} b_m &=\frac{1}{ \lambda_h\lambda_g M \mathcal{O}( \ln M)}. \end{align} By applying \eqref{am} and \eqref{lambert5} to \eqref{ab}, we have $ \frac{\gamma_1 }{b_m}\sim G_2^*(x;1) $ and the limiting CDF of the smallest channel gain is given by \begin{align} F_{\gamma_1}(y) = 1-e^{ {y}{M \lambda_h\lambda_gW\left( - \frac{1}{M} \right)}}, \end{align} and the corresponding pdf is given by $ f_{\gamma_1}(y) = -{M \lambda_h\lambda_gW\left( - \frac{1}{M} \right)}e^{ {y}{M \lambda_h\lambda_gW\left( - \frac{1}{M} \right)}} $. By using this pdf, $ {\rm P}\left( E_0\right) $ can be expressed as follows: \begin{align}\nonumber {\rm P}\left( E_0\right) =& \int^{\infty}_{0}\left(e^{-\lambda_0\epsilon_0P^{-1} }- e^{ -\lambda_0( \beta^2 \epsilon_0x+ \epsilon_0 P^{-1})} \right)f_{\gamma_1}(x)dx +1 - e^{-\lambda_0\epsilon_0P^{-1} } \\\nonumber \approx &1 + \frac{M \lambda_h\lambda_gW\left( - \frac{1}{M} \right) }{\lambda_0 \beta^2 \epsilon_0- M \lambda_h\lambda_gW\left( - \frac{1}{M} \right)} , \end{align} which can be approximated as follows: \begin{align}\nonumber {\rm P}\left( E_0\right) \approx &1 - \frac{M \lambda_h\lambda_g \mathcal{O}( \ln M) }{\lambda_0 \beta^2 \epsilon_0+ M \lambda_h\lambda_g \mathcal{O}( \ln M)} \\\label{final ap} \approx &1 - \frac{1 }{1+\frac{\lambda_0 \beta^2 \epsilon_0}{ M \lambda_h\lambda_g \mathcal{O}( \ln M)}} \rightarrow \frac{\lambda_0 \beta^2 \epsilon_0}{ \lambda_h\lambda_g M \mathcal{O}( \ln M)}, \end{align} where the last approximation follows from the fact that $\frac{1}{1+x}\approx 1-x$ for $x\rightarrow 0$.
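As a quick numerical sanity check on the closed form in \eqref{lambert4} (not part of the proof), the sketch below computes $W_{-1}$ by bisection and verifies that the resulting $b_m$ satisfies \eqref{lambert1}; the value chosen for $\lambda_h\lambda_g$ is purely illustrative:

```python
import math

def lambert_w_minus1(x, lo=-50.0, hi=-1.0, iters=200):
    """Solve w * exp(w) = x on the W_{-1} branch, for x in (-1/e, 0).
    f(w) = w * exp(w) is monotonically decreasing on (-inf, -1],
    so plain bisection converges."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > x:
            lo = mid  # f(mid) > x and f is decreasing: root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

M = 100     # number of users (illustrative)
lam = 2.0   # lambda_h * lambda_g (illustrative value, an assumption)

w = lambert_w_minus1(-1.0 / M)
b_m = -1.0 / (M * lam * w)  # closed form of b_m from the Lambert W solution

# b_m should satisfy -lam * b_m * ln(lam * b_m) = 1/M, the simplified equation
residual = -lam * b_m * math.log(lam * b_m) - 1.0 / M
```

For $M=100$ the residual is at the level of floating-point noise, and the computed $W_{-1}(-1/M)$ is negative and of order $\ln M$, consistent with the $-\mathcal{O}(\ln M)$ behaviour used above.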
\eqref{final ap} clearly shows that $ {\rm P}\left( E_0\right) $ approaches zero as $M$ increases, and the proof of the lemma is complete. \bibliographystyle{IEEEtran}
\section{Introduction} In recent years, neural networks have increased dramatically in size: \cite{amodei2018ai} estimate a 300,000x growth in compute since 2012, with a doubling period of 3.4 months. Since the ``bitter lesson'' \citep{sutton2019bitter} of machine learning seems for now to be true, and performance on machine learning tasks appears to scale linearly with model size and amount of training data \citep{hestness2017deep}, this trend is unlikely to decelerate anytime soon. Indeed, at the time of writing, OpenAI has just published GPT-3, a language modelling neural network with 175 billion parameters \citep{brown2020language}. Since neural networks are increasingly used in industry to power various large-scale applications--from voice recognition (Google's Assistant and Apple's Siri are powered by neural networks \citep{team2017hey, He_2019}) to image processing and natural language understanding--lowering the computational cost of these models at inference time is an increasingly pressing problem. There are several ways of reducing the computational footprint of neural networks. Neural network pruning removes less important connections between neurons \citep{castellano1997iterative}. Quantisation reduces the number of bits taken up by each of the network's parameters \citep{han2015deep}. Knowledge distillation uses a larger network to train a smaller network on the large one's outputs \citep{hinton2015distilling}. These methods, although powerful, still lead to networks spending the same amount of compute on each input, regardless of the complexity of the input. Yet humans take different amounts of time to solve different tasks based on the complexity of the tasks (it is easier to quickly distinguish between a bear and a boat than it is to quickly distinguish between an Alaskan Malamute and a Siberian Husky). This observation gave rise to the field of conditional computation.
Conditional computation techniques involve incorporating mechanisms within neural networks that allow the networks to reduce inference-time compute costs by not passing the input through the entire graph, but only a small sub-part of it. This lets the network spend less compute time on easier inputs/tasks, and has the added benefit of allowing the network to be sensitive to computational budgets (if the budget is high, the network can afford to use more compute). Existing implementations of conditional computation are generally complicated to engineer, and consequently are not used much in industry \citep{bapna2020controlling}. To solve this, we propose the simplest model of conditional computation: attaching a single hidden layer perceptron, which we call SideNet, to an intermediate representation of a pretrained network, which we call MainNet. Unlike most existing conditional computation methods, the SideNet is straightforward to train, and attaching a SideNet to a MainNet is easy to engineer. We also make three noteworthy observations: (i) When attached to the early intermediate representations of ResNets, the classification confidences of SideNets are calibrated, whereas the classification confidences of their ResNets are not. (ii) SideNet-based compute reduction can be complementary to knowledge-distillation and pruning: applying SideNets to DistilBERT \citep{sanh2019distilbert}, a heavily compressed transformer model, still yielded noticeable compute savings ($\approx 30\%$) for a small drop in test accuracy ($\approx 0.5\%$). (iii) SideNets make it easy to explore compute-accuracy space, by making it continuous rather than discrete. \section{Related Work} \label{relwork} \subsection{Similar architectures} \cite{tinytobig} first run an image through a small convolutional neural network to ascertain whether or not it can be classified with high confidence. If it cannot, they send the image to a larger network, and use that classification as the final one.
\cite{bolukbasi2017adaptive} build on this. They first run an image through a small AlexNet classifier \citep{alexnet}, and a regression model determines the confidence level of the classification. If it is high, the classification is returned; otherwise, the image is sent through a GoogLeNet classifier \citep{googlenet}, where the same regression is applied. If the confidence is still too low, it is sent through a ResNet \citep{he2016deep}, where a final classification is returned. Our method differs from these because it wastes less computation: if the SideNet's confidence in its prediction is not high enough to return a classification, then the intermediary representation it used will continue flowing along the MainNet, and will not have to be recomputed from scratch. \cite{leroux2017cascading} is the paper that most closely resembles ours: they run an image through a main backbone network, together with multiple small classification networks along the backbone's side that interrupt the flow of the image through the main model if their confidence is high enough. They demonstrate that their method provides significant energy savings on a Raspberry Pi computer. \cite{zhang2019scan} build on this, by using attention mechanisms with their side classification networks, and training them with knowledge-distillation and a genetic algorithm. Our method differs from these because it only uses one SideNet, which makes training substantially easier (training a network with multiple heads requires properly weighting the losses of each head, which is challenging). There are a variety of other architectures involving conditional computation: \cite{bengio2015conditional} use reinforcement learning to learn a policy that directs an input only through discrete parts of a network, rather than the whole network. However, backpropagating through discrete random variables is inefficient and slow.
\cite{bapna2020controlling} introduce a method to turn these discrete random variables continuous, to increase the rate of learning, and use it to train control networks, networks that control the amount of compute used at inference. \subsection{Intermediate representations} There is a rich literature studying neural networks' intermediate representations. \cite{alexnet} find that early layers of convolutional neural networks mostly pick out simple textures and lines. This suggests that if an image is texturally simple or distinctive, it should be classifiable by the early parts of the network, rather than at the very end. \cite{leroux2017cascading} argue that this holds: their model was confident in its predictions when the input was fairly straightforward, and passed it off to the deeper model when it was more visually complex (e.g. the digit 1 is less complex than the italicised digit \textit{1}, and was classified earlier in the network). Similarly, in natural language processing, \cite{clark2019does} find that early layers of BERT (a large transformer architecture by \cite{devlin2018bert}) attend to broad features of an input, as opposed to later layers that tend to focus on a particular aspect of an input, and \cite{alex2019emergent} find that \texttt{[CLS]} tokens are heavily overparameterised, and can be shrunk substantially without affecting performance. \section{Method} \subsection{Framework} A neural network $M$, at a high level, is a function approximator. It maps inputs $x$ to outputs $y$: $M(x) = y$. Supervised learning involves training the parameters of $M$ to best fit the training data $(x, \tilde{y})$. We can decompose this mapping $M$ into sub-components, and view it as a composition of transformations $M_1$, $M_2$, ..., $M_n$ of the input $x$ into intermediate representations $x_1$, $x_2$, ..., $x_n$.
For simple architectures, like VGG \citep{simonyan2014very}, the compositions can be written simply, as below: $x_1=M_1(x)$, $x_2=M_2(x_1)=M_2\circ M_1(x)$, ... $y=x_n=M_n\circ M_{n-1}\dots\circ M_2\circ M_1(x)$, where the $M_i$ are convolutional layers, max-pooling layers, fully connected layers, and non-linear layers. More complicated architectures are more involved to formalise, but can nonetheless still be made to fit this framework of intermediate representations. For example, if a layer of the network involves a skip connection from layer $i$ to layer $k$, then we can write $x_k$ and $M_k$ as: $x_k=M_k(x_{k-1}, x_i)=x_{k-1}+x_i$. \subsection{Architecture} \begin{figure} \centering \includegraphics[width=0.90\linewidth]{group.pdf} \caption{SideNets can be attached to a wide variety of networks. Here we visualise two networks equipped with SideNets. \textit{Left:} A DistilBERT transformer. Its MainNet is untouched, but a SideNet is added between encoders 1 and 2. \textit{Right:} A ResNet. Its MainNet is untouched, but a SideNet identical to the one for DistilBERT is added after its first block.} \label{twonets} \end{figure} We call the net $M$ the MainNet. On top of this MainNet backbone, we propose to add a SideNet, a simple task-specific network $S$ which takes as input one of the MainNet's intermediary representations $x_c$, and returns a probability distribution over the classes $y_c=S(x_c)$. In our experiments, we purposefully choose $S$ to be extremely simple: a fully connected layer, a non-linear ReLU layer \citep{nair2010rectified}, a batch normalisation layer \citep{ioffe2015batch}, a final fully connected layer, and a softmax layer (or a sigmoid layer in the case of binary classification). Although the softmax operation is not a true reflection of the model's confidence\footnote{See Appendix A for a fuller discussion.} \citep{gal2016uncertainty}, we find that using it as a proxy for model confidence works well empirically. 
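To make the architecture concrete, the following is a minimal NumPy sketch of the SideNet $S$ just described (fully connected layer, ReLU, normalisation, fully connected layer, softmax). The weight initialisation, the layer sizes, and the per-sample stand-in for batch normalisation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: d_in would be the number of elements in x_c.
d_in, d_hidden, n_classes = 512, 32, 10
W1, b1 = 0.05 * rng.standard_normal((d_hidden, d_in)), np.zeros(d_hidden)
W2, b2 = 0.05 * rng.standard_normal((n_classes, d_hidden)), np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def side_net(x_c, gamma=1.0, beta=0.0, eps=1e-5):
    """FC -> ReLU -> normalisation -> FC -> softmax, acting on a single
    intermediate representation x_c. The per-sample normalisation here is a
    stand-in for inference-mode batch normalisation."""
    h = np.maximum(W1 @ x_c + b1, 0.0)
    h = gamma * (h - h.mean()) / np.sqrt(h.var() + eps) + beta
    return softmax(W2 @ h + b2)

probs = side_net(rng.standard_normal(d_in))
y_hat, p_hat = int(np.argmax(probs)), float(np.max(probs))  # class, confidence
```

The scalar $\hat{p}$ plays the role of the softmax-based confidence proxy discussed above.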
SideNets can be attached to any intermediate representation $x_i$; in \autoref{twonets} we illustrate two possible locations for SideNets on two different architectures: the DistilBERT transformer for natural language processing and the ResNet for computer vision. \subsection{Training SideNets} To train the SideNet quickly, we can freeze the weights of the MainNet, and update the SideNet's weights on the normal training data. The SideNet, by construction, has very few parameters, and the input data only needs to flow through a small fraction of the MainNet to get to the SideNet, so the optimisation is fast and converges quickly. Multiple SideNets $S_1$, ..., $S_p$ with parameters $W_1$, ..., $W_p$ can be trained in parallel at different points along the MainNet, as long as they return separate losses $L_{S1}$, ..., $L_{Sp}$, since by construction $\frac{\partial L_{Sj}}{\partial W_i}=0, \forall i \neq j$, provided that the SideNets remain independent of each other. While training this way is significantly faster, it does come with a significant performance cost (on the order of 3\% in our experiments), so in performance-critical models, fine-tuning the whole model with the SideNet is preferable. To fine-tune the weights of the MainNet alongside those of the SideNet, we can backpropagate over the weighted sum of their losses. If the MainNet's loss is $L_M$ and the SideNet's loss is $L_S$, then we can backpropagate over a loss $L = L_M + \alpha L_S$. In our experiments we always pick $\alpha=1$. To show that SideNets are easy to train, we share our code in Appendix B. To fine-tune the weights of the MainNet alongside those of multiple SideNets, each with losses $L_{S1}$, ..., $L_{Sp}$, the same principle applies. \subsection{SideNets at inference time} To classify an input image $x$, we run $x$ through the MainNet until we obtain the intermediary representation $x_c$, and then pass $x_c$ through $S$ to obtain a classification $\hat{y}$ and confidence level $\hat{p}$.
If the confidence level exceeds a user-specified threshold $\theta$, then the classification is returned immediately, without having $x_c$ pass through the rest of the MainNet. If the confidence level is below $\theta$, then $x_c$ is passed back on to the MainNet, where it returns a final classification $y$. \section{Classification Experiments} We perform all experiments on a single NVIDIA RTX 2070 GPU. All experiments use the Adam \citep{kingma2014adam} optimiser, with default parameters. We use an initial learning rate of 0.0003 and train for 50 epochs in the ResNet experiments; we use an initial learning rate of 0.000003 and train for 20 epochs in the BERT and DistilBERT experiments. In both cases, we decay the learning rate by a factor of 3 after 5 epochs in which the validation loss does not decrease. For all experiments, our SideNet is a single hidden layer perceptron, with an input size equal to the number of elements in $x_c$ (a flattened version of $x_c$ for images), a hidden layer with 32 units,\footnote{We found that increasing this number did not have much of an effect. More details in Appendix C.} a batchnorm layer, a ReLU layer, and a classification layer (softmax for multi-class classification, sigmoid for binary). All of our experimental details can be found in Appendix D. \subsection{CIFAR10} We assess our method's performance on the CIFAR10 dataset \citep{krizhevsky2009learning}, a dataset of 60,000 colour images, $32\times32$ pixels, with 10 classes of 6,000 elements each. Our train/validate/test split is 50,000/5,000/5,000. We apply standard CIFAR10 data augmentation techniques: normalisation, random cropping with padding 4, and horizontal flips. We use ResNet18, 34, 50, and 152 (with weights pretrained on ImageNet) as the core architecture of the MainNet. Since they were pretrained on ImageNet, which has 1,000 classes, we replace their final fully connected layer with a classification head that has the same architecture as a SideNet, described above.
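The inference-time early-exit procedure can be sketched as follows; `head`, `side_net`, and `tail` are hypothetical stand-ins for the first part of the MainNet, the SideNet $S$, and the remainder of the MainNet:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, head, side_net, tail, theta=0.85):
    """Run the MainNet up to x_c, query the SideNet, and exit early if its
    confidence exceeds theta; otherwise x_c is reused by the rest of the
    MainNet rather than recomputed."""
    x_c = head(x)
    side_probs = side_net(x_c)
    if float(side_probs.max()) >= theta:
        return int(side_probs.argmax()), "side"  # early exit
    return int(tail(x_c).argmax()), "main"       # fall through to the MainNet

# Toy stand-ins (assumptions for illustration only):
head = lambda x: 2.0 * x
confident_side = lambda x_c: softmax(np.array([8.0, 0.0, 0.0]))
hesitant_side = lambda x_c: softmax(np.array([0.1, 0.0, 0.0]))
tail = lambda x_c: softmax(np.array([0.0, 5.0, 0.0]))

x = np.ones(4)
print(predict(x, head, confident_side, tail))  # -> (0, 'side')
print(predict(x, head, hesitant_side, tail))   # -> (1, 'main')
```

Sweeping $\theta$ between 0 and 1 traces out the compute-accuracy trade-off explored in the experiments below.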
We attach the SideNet to the output of the ResNet 1 block illustrated in \autoref{twonets}. The SideNet is fine-tuned alongside the last layer of the MainNet. \begin{figure} \centering \includegraphics[width=0.85\linewidth]{cifargood.pdf} \caption{Plot of test accuracy with respect to average number of parameters per run, for different thresholds $\theta$, for different depths of ResNets, with and without SideNets. The results are averaged over 5 runs, with error bars indicating standard deviations.} \label{cifartrail} \end{figure} We evaluate our method on the test set with different thresholds $\theta$ by plotting the model's accuracy with respect to the amount of compute used. We use the average number of parameters used for a single input as a proxy for the amount of compute used (since this number stays fixed, whereas the average number of floating point operations would vary based on the size of the input). The results are plotted in \autoref{cifartrail}. We find that architectures using SideNets can use significantly less compute than architectures without SideNets, and still maintain the same accuracy. We also note that adding a SideNet makes it easy and cheap to explore the space of models with different compute and accuracy levels: simply adjust the threshold $\theta$. To explore this same compute-accuracy space with knowledge distillation or pruning, we would have to repeatedly retrain models from scratch. We also test to see if our SideNets are calibrated. A classification model is calibrated when the probability $p$ it assigns to an input $x$ belonging to a certain class is equal to the actual probability of the model classifying it correctly. For example, if a weather model predicts every day for 100 days that it will be sunny with 75\% certainty, and at the end of the 100 days there were indeed 75 sunny days, then that model is calibrated.
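Calibration can be measured with the binned expected calibration error used below (and formalised in the next paragraph). The snippet is a minimal sketch with toy bin edges and toy predictions, not our experimental data or bin layout.

```python
import numpy as np

# Expected calibration error: bin predictions by confidence, then take the
# weighted average of |accuracy - mean confidence| over the bins.
def expected_calibration_error(confidences, correct, bin_edges):
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    n, ece = len(confidences), 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences >= lo) & (confidences < hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # accuracy within the bin
            avg_conf = confidences[in_bin].mean()
            ece += (in_bin.sum() / n) * abs(acc - avg_conf)
    return ece

# Toy example: four predictions, two bins (the upper edge sits just above 1
# so that a confidence of exactly 1.0 still falls into the last bin).
conf = [0.95, 0.90, 0.60, 0.65]
hit = [1, 1, 1, 0]
edges = [0.5, 0.75, 1.01]
ece = expected_calibration_error(conf, hit, edges)
```

A perfectly calibrated set of predictions gives an ECE of zero; the toy example above gives 0.1.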
More formally, given an input $x$ whose true classification label is $y$, if a model $M$ assigns to $x$ a classification of $\hat{y}$ with confidence $\hat{p}$, then $M$ is calibrated iff $\mathbb{P}(\hat{y}=y | \hat{p}=p)=p, \forall p \in [0, 1]$. Calibration is a useful property for a model to have, since it ``knows what it doesn't know''. We quantify calibration using the expected calibration error (ECE). We first bin our predictions into 8 equally spaced classification confidence bins, consisting of $n_i$ predictions each: confidences between 0.2 and 0.3 go into bin 1, ..., confidences between 0.9 and 1 go into bin 8 (there are no bins between 0.1 and 0.2 because in our experiments both SideNets and MainNets always have confidence above 0.2). The ECE is computed by calculating the average distance between confidence and accuracy for each bin: ECE=$\sum_i \frac{n_i}{n} | acc(i) - conf(i) |$, $i=1, \dots, 8$. \cite{guo2017calibration} find that deep convolutional neural networks are not calibrated. We reproduce their results, and find that our MainNet classifications are not calibrated, with high ECE scores. However, the classifications of our SideNets are well calibrated. \autoref{cifar-table} details the ECE scores for SideNets and MainNets, and \autoref{calibrn50} gives a specific example of how the MainNet is uncalibrated relative to the SideNet. This is helpful to the person setting the confidence threshold $\theta$: it means that if they set $\theta=0.85$, then the SideNet will have a minimum accuracy of 85\%. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{calib.pdf} \caption{Calibration plot of a ResNet50's SideNet and MainNet, averaged over 5 runs, with standard deviations. The SideNet's classifications are significantly closer to perfect calibration than those of the MainNet. 
The results were obtained on the test set.} \label{calibrn50} \end{figure} \begin{table} \caption{ECE scores for different SideNets and MainNets, evaluated on the test set. The lower the ECE, the more calibrated the model. The values are averaged over 5 runs, and include standard deviations.} \label{cifar-table} \centering \begin{tabular}{ l | c c } \toprule Model & SideNet ECE & MainNet ECE \\ \midrule ResNet152 & $\textbf{.30} \pm .06$ & $1.1 \pm .18$ \\ ResNet50 & $\textbf{.41} \pm .10$ & $.91 \pm .17$ \\ ResNet34 & $\textbf{.38} \pm .08$ & $1.0 \pm .14$\\ ResNet18 & $\textbf{.31} \pm .08$ & $1.0 \pm .05$ \\ \bottomrule \end{tabular} \end{table} \subsection{SST-2} \begin{figure} \centering \includegraphics[width=0.85\linewidth]{moviereviews.pdf} \caption{Plot of test accuracy with respect to average number of parameters per run, for different thresholds $\theta$, for different transformer models, with and without SideNets. The results are averaged over 5 runs, with error bars indicating standard deviations.} \label{moviereviews} \end{figure} To assess our method's performance on natural language processing tasks, we apply it to the SST-2 dataset \citep{socher2013recursive}, a dataset of 9613 movie reviews, labelled as positive or negative. Our train/validation/test split is 5000/1613/3000. We use pretrained DistilBERT \citep{sanh2019distilbert}, BERT-base, and BERT-large \citep{devlin2018bert} models as the core architectures of the MainNet. DistilBERT has 6 encoders with 12 attention heads each, BERT-base has 12 encoders with 12 attention heads each, and BERT-large has 24 encoders with 16 attention heads each. For the MainNet's final classification layer, we add a fully connected layer with the same architecture as a SideNet. 
We attach DistilBERT's SideNet after its first encoder (out of 6), as in \autoref{twonets}; we attach BERT-base's SideNet after its fourth encoder (out of 12); we attach BERT-large's SideNet after its eighth encoder (out of 24).\footnote{For full diagrams of the architectures, see Appendix E.} The SideNet is fine-tuned along with the last layer of the MainNet. BERT-base and DistilBERT use 768 dimensional tensors to represent each token, and so the total parameter count overhead of the SideNet is $768 \times 32 + 32\times 1 \approx 25000$, which is $\approx 0.03\%$ of BERT-base's $\approx$100M parameter count, and $\approx 0.04\%$ of DistilBERT's $\approx$60M parameter count. BERT-large uses 1024 dimensional tensors to represent each token, so its overhead is $1024 \times 32+32\times 1 \approx 35000$, which is $\approx 0.01\%$ of BERT-large's $\approx$300M parameter count. We evaluate on the test set with different thresholds $\theta$, and plot the results in \autoref{moviereviews}, with the same methodology as for \autoref{cifartrail}. We find that adding SideNets allows for substantial decreases in compute, albeit with a greater loss in accuracy than the CIFAR10 example above. However, as above, we note that the addition of SideNets allows for a much easier exploration of compute-accuracy space. If we wanted a model with 200M parameters, rather than the 300M of BERT-large or the 100M of BERT-base, then rather than train that 200M parameter model from scratch, we could easily attach a SideNet to a pretrained BERT-large, and get a model that on average uses 200M parameters per run, with an accuracy above BERT-base, but below BERT-large. Furthermore, it is worth highlighting that adding a SideNet to DistilBERT manages to reduce its average parameter use by 30\%, at a cost of about 0.5\% test accuracy, despite it already being a version of BERT-base that was compressed using extensive model pruning and knowledge distillation.
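The "average parameters per run" proxy behind this exploration can be made explicit. The accounting below (every input pays for the SideNet head; non-exiting inputs also pay for the full MainNet) is our assumed convention, and the numbers are hypothetical rather than measured.

```python
def avg_params_per_run(p_prefix, p_full, p_side_head, exit_fraction):
    """Expected parameter count touched per input under early exit.

    p_prefix:      MainNet parameters up to the SideNet attachment point
    p_full:        total MainNet parameters
    p_side_head:   SideNet head parameters (every input pays these)
    exit_fraction: fraction of inputs whose SideNet confidence exceeds theta
    """
    early = p_prefix + p_side_head
    late = p_full + p_side_head
    return exit_fraction * early + (1 - exit_fraction) * late

# Hypothetical BERT-large-like example: a 300M-parameter MainNet with the
# SideNet attached after a prefix holding 100M parameters and a ~35k-parameter
# head. If 60% of inputs exit early, the average cost is ~180M parameters.
avg = avg_params_per_run(100e6, 300e6, 35e3, 0.6)
```

Raising the threshold $\theta$ lowers `exit_fraction` and moves the average smoothly between the two endpoints, which is exactly the continuous compute-accuracy dial described above.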
In comparison, DistilBERT lost 1.4\% test accuracy on the SST-2 task after losing 40\% of its parameters. This suggests that adding SideNets is a compute reduction method that can effectively complement knowledge distillation and model pruning. As in the CIFAR10 case, we found that the SideNets were calibrated. However, we found that the pretrained transformers were also calibrated, replicating the findings of \cite{desai2020calibration}. \subsection{Does the SideNet lower the MainNet's accuracy?} It could be argued that adding the SideNet to the training task would lead to a decrease in the final accuracy of the MainNet, since the training procedure splits its attention between minimising the SideNet's and the MainNet's losses. We find that this is not the case, and that adding the SideNet does not seem to have a negative effect on the MainNet's accuracy. Our findings are summarised in \autoref{summaryofmethods}. Anecdotally, we find that ensembling the SideNet and MainNet predictions provided a slight boost to final accuracy over just the MainNet's predictions. For a fuller discussion, see Appendix F. \begin{table} \caption{Test accuracies for final MainNet classifications when trained with and without a SideNet, on computer vision (CIFAR10) and natural language processing (SST-2) classification tasks.
The values are averaged over 5 runs, and include standard deviations.} \label{summaryofmethods} \centering \begin{tabular}{ l c c c c } \toprule & ResNet18 & ResNet34 & ResNet50 & ResNet152 \\ \midrule SideNet & $92.3 \pm .1$ & $93.2 \pm .1$ & $94.0 \pm .2$ & $94.3 \pm .1$\\ No SideNet & $92.1 \pm .1$ & $92.9 \pm .2$ & $93.6 \pm .3$ & $93.9 \pm .2$ \\ \bottomrule \smallskip \end{tabular} \hspace{20em} \begin{tabular}{l c c c} \toprule & DistilBERT & BERT-base & BERT-large \\ \midrule SideNet & $88.8 \pm .1$ & $90.6 \pm .2$ & $92.2 \pm 1.2$ \\ No SideNet & $89.0 \pm .1$ & $90.6 \pm .3$ & $92.8 \pm 0.5$ \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and Future Work} \label{concl} In this work we propose attaching a SideNet, a small single hidden layer perceptron, onto the intermediate representations of a MainNet, a large pretrained network, and using the SideNet's confidence level to determine whether an input should be classified by the SideNet or passed back to the MainNet. SideNets are easy to implement, fine-tune, and deploy, and provide substantial compute savings at little cost to model accuracy, for both natural language processing and computer vision tasks. We also find that SideNets in the early layers of ResNets are calibrated, while the ResNets themselves are not, and that SideNets can significantly reduce the amount of compute used by DistilBERT at minimal cost to accuracy, despite DistilBERT already being a highly compressed model. Finally, increasing or decreasing the threshold $\theta$ for the model's confidence allows us to painlessly explore compute-accuracy space, by making continuous what was once discrete. SideNets open several avenues for further study: \begin{enumerate} \item SideNets perform well on classification tasks. Do they perform equally well on more complicated, higher dimensional tasks, such as image segmentation or machine translation? 
\item SideNets help reduce DistilBERT's total compute, with minimal loss in accuracy, even though DistilBERT is already a highly compressed model. What is the interplay between different forms of model compression, and to what extent can they be combined? \item SideNets are small and shallow networks. Does this make them more susceptible to being fooled by adversarial attacks \citep{goodfellow2014explaining}? \end{enumerate} We hope to investigate these questions further in future work. \section*{Broader Impact} \subsection{Potential positive outcome} We hope this paper demonstrates that conditional computation is a simple and effective tool for reducing the computational cost of neural networks, with minimal effect on model performance. We hope that our paper convinces machine learning researchers and engineers to adopt conditional computation methods in their own work--be it our simple SideNet method, or more powerful but involved methods like those described in our Related Work section (\autoref{relwork}). We think widespread adoption of conditional computation would be an effective tool in helping decrease deep learning's carbon footprint. \subsection{Potential negative outcome} We can imagine two ways in which SideNets could have a negative effect on society. \begin{enumerate} \item SideNets are designed to draw quick conclusions from earlier-stage, less processed data. It is not clear whether or not the classifications they make would disproportionately leverage biases in the data and lead to unfair decisions. \item As discussed in \autoref{concl}, it might be easier to use adversarial examples to fool SideNets than to fool MainNets. If so, real-world systems that rely on conditional computation might be made more vulnerable. \end{enumerate} We hope to investigate both of these questions in future work, along with the open question posed in the conclusion.
\begin{ack} We would like to thank Julien Raffaud, Patrick Germain, Tiffany Vlaar, Alexandre Matton, and Antreas Antoniou for helpful comments and conversations. \end{ack} \bibliographystyle{plainnat}
\section{Introduction} Special functions, with their diverse sub-branches, form a very wide field of study and are used not only in various areas of mathematics but also in the solution of important problems in many scientific disciplines such as physics, chemistry and biology. The subject provides powerful tools for making sense of otherwise intractable questions, especially in physical problems, and it has therefore motivated many notable developments. As in other sciences, remarkable problems are still being discussed in many disciplines and ever more general results are being sought. Generalized hypergeometric functions, one such line of study in special functions \cite{Slater, Srivastava}, are defined by \begin{equation} _{p}F_{q}\left[ \begin{array}{c} \alpha _{1},\ \alpha _{2},...,\alpha _{p} \\ \beta _{1},\ \beta _{2},...,\beta _{q}% \end{array}% ;x\right] =\sum_{n=0}^{\infty }\frac{\left( \alpha _{1}\right) _{n}\left( \alpha _{2}\right) _{n}...\left( \alpha _{p}\right) _{n}}{\left( \beta _{1}\right) _{n}\left( \beta _{2}\right) _{n}...\left( \beta _{q}\right) _{n}}\frac{x^{n}}{n!} \label{ghf} \end{equation}% where $\alpha _{1},\ \alpha _{2},...,\alpha _{p},\ \beta _{1},\ \beta _{2},...,\beta _{q},\ x\in \mathbb{C} \ $and none of $\beta _{1},\ \beta _{2},...,\beta _{q}$ is zero or a negative integer. Here $\left( \lambda \right) _{n}$ is the Pochhammer symbol defined by \begin{equation} \left( \lambda \right) _{n}=\left\{ \begin{array}{c} \lambda \left( \lambda +1\right) ...\left( \lambda +n-1\right) \ ;\ \ \ n\geq 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ ;\ \ n=0,\ \lambda \neq 0% \end{array}% \right. 
\label{poc} \end{equation}% In the special case $p=2$ and $q=1$ of \eqref{ghf}, we obtain the Gauss hypergeometric function $_{2}F_{1}$ \cite{Slater, Srivastava}, \begin{equation} _{2}F_{1}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma% \end{array}% ;x\right] =\sum_{n=0}^{\infty }\frac{\left( \alpha \right) _{n}\left( \beta \right) _{n}}{\left( \gamma \right) _{n}}\frac{x^{n}}{n!},\ \ \ \left\vert x\right\vert <1 \label{hf} \end{equation}% where $\alpha ,\ \beta ,\ \gamma ,\ x\in \mathbb{C} \ $and $\gamma \ $is neither zero nor a negative integer. Many elementary functions can be expressed in terms of hypergeometric functions. Moreover, non-elementary functions that occur in physics and mathematics also have representations as hypergeometric series. Generalizing hypergeometric functions therefore yields corresponding generalizations in other disciplines. These generalizations can be made by increasing the number of parameters in the hypergeometric function or by increasing the number of variables. Based on the idea that the number of variables can be increased, Appell defined the Appell hypergeometric functions, which arise from products of two hypergeometric series.
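As standard illustrations of the remark above, the binomial and logarithm functions admit the hypergeometric representations
\begin{equation*}
\left( 1-x\right) ^{-\alpha }=\ _{2}F_{1}\left[
\begin{array}{c}
\alpha \ ,\ \beta \\
\beta
\end{array}
;x\right] ,\qquad \ln \left( 1+x\right) =x\ _{2}F_{1}\left[
\begin{array}{c}
1\ ,\ 1 \\
2
\end{array}
;-x\right] ,\qquad \left\vert x\right\vert <1,
\end{equation*}
which follow directly from \eqref{hf}, since $\left( \alpha \right) _{n}\left( \beta \right) _{n}/\left( \beta \right) _{n}=\left( \alpha \right) _{n}$ in the first case and $\left( 1\right) _{n}\left( 1\right) _{n}/\left( \left( 2\right) _{n}n!\right) =1/\left( n+1\right) $ in the second.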
The four Appell functions are defined in \cite{Slater, Srivastava} by \begin{eqnarray} F_{1}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) &=&\sum_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n}\left( \beta \right) _{m}\left( \beta ^{\prime }\right) _{n}}{\left( \gamma \right) _{m+n}}\frac{x^{m}}{m!}\frac{y^{n}}{n!}, \label{app1} \\ F_{2}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) &=&\sum_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n}\left( \beta \right) _{m}\left( \beta ^{\prime }\right) _{n}}{\left( \gamma \right) _{m}\left( \gamma ^{\prime }\right) _{n}}\frac{x^{m}}{m!}% \frac{y^{n}}{n!} \label{app2} \\ F_{3}\left( \alpha ,\alpha ^{\prime },\beta ,\beta ^{\prime };\gamma ;x,y\right) &=&\sum_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m}\left( \alpha ^{\prime }\right) _{n}\left( \beta \right) _{m}\left( \beta ^{\prime }\right) _{n}}{\left( \gamma \right) _{m+n}}\frac{x^{m}}{m!}% \frac{y^{n}}{n!} \label{app3} \\ F_{4}\left( \alpha ,\beta ;\gamma ,\gamma ^{\prime };x,y\right) &=&\sum_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n}\left( \beta \right) _{m+n}}{\left( \gamma \right) _{m}\left( \gamma ^{\prime }\right) _{n}}\frac{x^{m}}{m!}\frac{y^{n}}{n!} \label{app4} \end{eqnarray}% where the respective regions of convergence are $\left\vert x\right\vert <1,\ \left\vert y\right\vert <1$;\ $\left\vert x\right\vert +\left\vert y\right\vert <1$;\ $\left\vert x\right\vert <1,\left\vert y\right\vert <1$;\ and $\sqrt{\left\vert x\right\vert }+\sqrt{% \left\vert y\right\vert }<1$. Another generalization of hypergeometric functions is the hypergeometric $k$-function, defined via the Pochhammer $k$-symbol and studied by Diaz et al. \cite{Diaz}. Their paper includes the $k$-analogue of the Pochhammer symbol and of the hypergeometric function, as well as the $k$-generalizations of the gamma, beta and zeta functions, with their integral representations and $k$-analogues of some identities satisfied by the classical ones.
It should be noted that, taking $k=1$ in these generalizations, the $k$-extensions of the functions reduce to the classical ones. Let $k\in \mathbb{R} ^{+}$ and $n\in \mathbb{N} ^{+}.\ $The hypergeometric $k$-function is defined in \cite{Diaz} as \begin{equation} _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma \end{array}% ;x\right] :=\ _{2}F_{1,k}\left[ \begin{array}{c} \left( \alpha ,k\right) \ ,\ \left( \beta ,k\right) \\ \left( \gamma ,k\right) \end{array}% ;x\right] \ =\sum\limits_{n=0}^{\infty }\frac{\left( \alpha \right) _{n,k}\left( \beta \right) _{n,k}}{\left( \gamma \right) _{n,k}}\frac{x^{n}}{% n!} \end{equation} where $\alpha ,\ \beta ,\ \gamma ,\ x\in \mathbb{C} \ $and $\gamma \ $is neither zero nor a negative integer, and $\left( \lambda\right) _{n,k}\ $is the Pochhammer $k$-symbol defined in \cite{Diaz} as \begin{equation} \left( \lambda \right) _{n,k}=\left\{ \begin{array}{c} \lambda \left( \lambda +k\right) \left( \lambda +2k\right) ...\left( \lambda +\left( n-1\right) k\right) \ ;\ \ \ n\geq 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ ;\ \ n=0,\ \lambda \neq 0% \end{array}% \right. . \end{equation} Based on this generalization, Kokologiannaki \cite{Kokolo} obtained various inequalities and properties for the generalizations of the Gamma, Beta and Zeta functions. Some limits obtained with the help of asymptotic properties of the $k$-gamma and $k$-beta functions were discussed by Krasniqi \cite{Krasniqi}. Mubeen et al. \cite{Mubeen} established integral representations of the $k$-confluent hypergeometric function and the $k$-hypergeometric function and, in another paper \cite{Mubeen1}, proved the $k$-analogue of Kummer's first formula using these integral representations. In \cite{Korkmaz}, some families of multilinear and multilateral generating functions for the $k$-analogue of the hypergeometric functions were obtained.
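The Pochhammer $k$-symbol is related to the classical symbol by $\left( \lambda \right) _{n,k}=k^{n}\left( \lambda /k\right) _{n}$, with $k=1$ recovering $\left( \lambda \right) _{n}$ itself. The following short numerical check is purely illustrative (the sample values are arbitrary) and is not part of the original development.

```python
from math import prod

def poch(x, n):
    """Classical Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    return prod(x + i for i in range(n))

def poch_k(x, n, k):
    """Pochhammer k-symbol (x)_{n,k} = x (x+k) ... (x+(n-1)k)."""
    return prod(x + i * k for i in range(n))

# Arbitrary sample values: check (x)_{n,k} = k^n (x/k)_n numerically,
# and that k = 1 recovers the classical symbol.
x, n, k = 2.5, 4, 3.0
lhs = poch_k(x, n, k)
rhs = k**n * poch(x / k, n)
```

Both conventions also agree that the empty product gives $\left( \lambda \right) _{0,k}=1$.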
Studies on this subject are not limited to these papers; for details, see \cite{Li,Mubeen2,Nisar,Sivamani}. In \cite{Mubeen5}, Mubeen adapted the $k$-generalization to the Riemann Liouville fractional integral by using the $k$-gamma function. In \cite{Romero}, the $k$-Riemann Liouville fractional derivative was studied and new properties were obtained with the help of Fourier and Laplace transforms. In \cite{Rahman}, Rahman et al. applied the newly defined $k$-fractional derivative operator to the $k$-analogues of the hypergeometric and Appell functions and obtained new relations between them. Furthermore, the $k$-fractional derivative operator was applied to the $k$-Mittag-Leffler function and the Wright function. Our present investigation is motivated by the fact that generalizations of hypergeometric functions have considerable importance due to their applications in many disciplines. Therefore, our study is mainly based on the $k$-extension of hypergeometric functions. The structure of the paper is organized as follows: In Section 2, we briefly give some definitions and preliminary results, which are essential in the following sections, as noted in \cite{Diaz,Mubeen5,Mubeen6}. In Section 3, following \cite{Diaz,Mubeen6} and using the same notation, we are concerned with the $k$-generalizations of the $F_{2}$ and $F_{3}$ Appell hypergeometric functions. Moreover, we prove some main properties such as integral representations, transformation formulas and some reduction formulas, which enable us to obtain relations for $k$-hypergeometric functions and $k$-Appell functions. In the last part of the paper, applying the theory of the Riemann Liouville $k$-fractional derivative \cite{Rahman} and using the relations considered in the previous sections, we obtain linear and bilinear generating relations for $k$-analogues of hypergeometric functions and $k$-Appell functions.
\section{Some\ Definitions\ and\ Preliminary Results\ } For the sake of completeness, we organize this preliminary section into three subsections, owing to the number of theorems and definitions involved. In these subsections, we present some definitions, properties and results which we will need in our investigation in the following sections. We begin by introducing the $k$-gamma and $k$-beta functions and the $k$-analogue of the hypergeometric function, and we continue with the definition of the $k$-generalization of the first Appell function $F_{1}$. We conclude this section by recalling the Riemann Liouville fractional derivative, the $k$-generalization of this fractional derivative and some important theorems which will be required in our study. Throughout this paper, we denote by $\mathbb{C} ,\ \mathbb{R} ,\ \mathbb{R} ^{+}$,\ $\mathbb{N} \ $and $\mathbb{N} ^{+}$ the sets of complex numbers, real numbers, positive real numbers, nonnegative integers and positive integers, respectively. \subsection{$k$-Generalizations of Gamma, Beta and Hypergeometric Functions} In this subsection, we present the definitions of the $k$-gamma and $k$-beta functions and some elementary relations satisfied by them, as introduced by Diaz et al.\ \cite{Diaz} and Mubeen et al.\ \cite{Mubeen2}. Furthermore, we recall the definition of the $k$-hypergeometric function and present an integral representation and some formulas satisfied by this generalization \cite{Mubeen, Mubeen1}. \begin{definition} For $x\in \mathbb{C} $ and $k\in \mathbb{R} ^{+},\ $the integral representation of the $k$-gamma function $\Gamma _{k}\ $% is defined by% \begin{equation} \Gamma _{k}\left( x\right) =\int\limits_{0}^{\infty }t^{x-1}e^{-\frac{t^{k}% }{k}}dt \label{kg} \end{equation}% where $\Re\left( x\right) >0$ \cite{Diaz, Mubeen2}. 
\end{definition} \begin{definition} For $x,\ y\in \mathbb{C} $ and $k\in \mathbb{R} ^{+},\ $the $k$-beta function $B_{k}\ $ is defined by% \begin{equation} B_{k}\left( x,y\right) =\frac{1}{k}\int\limits_{0}^{1}t^{\frac{x}{k}% -1}\left( 1-t\right) ^{\frac{y}{k}-1}dt \label{kb} \end{equation}% where $\Re\left( x\right) >0$ and $\Re\left( y\right) >0$ \cite{Diaz}. \end{definition} \begin{proposition} Let$\ k\in \mathbb{R} ^{+},\ a\in \mathbb{R} ,\ n\in \mathbb{N} ^{+}.\ $The $k$-gamma function $\Gamma _{k}$ and the $k$-beta function $B_{k}$ satisfy the following properties \cite{Diaz, Mubeen2}, \begin{eqnarray} \Gamma _{k}\left( x+k\right) &=&x\Gamma _{k}\left( x\right) , \label{kg1} \\ \Gamma _{k}\left( x\right) &=&k^{\frac{x}{k}-1}\Gamma \left( \frac{x}{k} \right) , \label{kg2} \\ B_{k}\left( x,y\right) &=&\frac{\Gamma _{k}\left( x\right) \Gamma _{k}\left( y\right) }{\Gamma _{k}\left( x+y\right) }, \label{kb3} \\ B_{k}\left( x,y\right) &=&\frac{1}{k}B\left( \frac{x}{k},\frac{y}{k}\right) . \label{kb4} \end{eqnarray} \end{proposition} \begin{definition} Let $x$ $\in \mathbb{C} ,\ k\in \mathbb{R} ^{+}\ $and $n\in \mathbb{N} ^{+}.$ Then the Pochhammer $k$-symbol is defined in \cite{Diaz, Mubeen2} by \begin{equation} \left( x\right) _{n,k}=x\left( x+k\right) \left( x+2k\right) ...\left( x+\left( n-1\right) k\right) \label{kpoc} \end{equation} In particular we denote $\left( x\right) _{0,k}:=1$. 
\end{definition} \begin{proposition} If\ $\alpha \in \mathbb{C} $ and $m,n\in \mathbb{N} ^{+}\ $then for $k\in \mathbb{R} ^{+},$ we have\ \begin{eqnarray} \left( \alpha \right) _{n,k} &=&\frac{\Gamma _{k}\left( \alpha +nk\right) }{% \Gamma _{k}\left( \alpha \right) }, \label{kpoc1} \\ \left( \alpha \right) _{n,k} &=&k^{n}\left( \frac{\alpha }{k}\right) _{n}, \label{kpoc2} \\ \left( \alpha \right) _{m+n,k} &=&\left( \alpha \right) _{m,k}\left( \alpha +mk\right) _{n,k}, \label{kpoc3} \end{eqnarray}% where $\left( \alpha \right) _{n}$ and $\left( \alpha \right) _{n,k}$ denote the Pochhammer symbol and Pochhammer $k$-symbol\ respectively \cite{Diaz, Mubeen2}. \end{proposition} \begin{proposition} For any $\alpha \in \mathbb{C} $ and $k\in \mathbb{R} ^{+}$, the following identity holds% \begin{equation} \sum\limits_{n=0}^{\infty }\left( \alpha \right) _{n,k}\frac{x^{n}}{n!}% =\left( 1-kx\right) ^{-\frac{\alpha }{k}} \label{kpoc5} \end{equation}% where $\left\vert x\right\vert <\frac{1}{k}$ \cite{Diaz, Mubeen2}. \end{proposition} \begin{theorem} Assume that $x\in \mathbb{C} ,\ k\in \mathbb{R} ^{+}$ and $\Re\left( \gamma \right) >\Re\left( \beta \right) >0,\ $then the integral representation of the $k$-hypergeometric function is defined in \cite{Mubeen} as \begin{equation} _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma% \end{array}% ;x\right] =\frac{\Gamma _{k}\left( \gamma \right) }{k\Gamma _{k}\left( \beta \right) \Gamma _{k}\left( \gamma -\beta \right) }\int\limits_{0}^{1}t^{% \frac{\beta }{k}-1}\left( 1-t\right) ^{\frac{\gamma -\beta }{k}-1}\left( 1-kxt\right) ^{-\frac{\alpha }{k}}dt. 
\label{ikhf} \end{equation} \end{theorem} For the following theorem, $_{2}F_{1,k}\left[ \begin{array}{c} \left( \alpha ,1\right) \ ,\ \left( \beta ,k\right) \\ \left( \gamma ,k\right)% \end{array}% ;x\right] \ $is the expression of the following form \cite{Mubeen1},% \begin{equation} _{2}F_{1,k}^{\ast }\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma \end{array}% ;x\right] :=\ _{2}F_{1,k}\left[ \begin{array}{c} \left( \alpha ,1\right) \ ,\ \left( \beta ,k\right) \\ \left( \gamma ,k\right) \end{array}% ;x\right] =\sum\limits_{n=0}^{\infty }\dfrac{\left( \alpha \right) _{n}\left( \beta \right) _{n,k}}{\left( \gamma \right) _{n,k}}\dfrac{x^{n}}{% n!}. \end{equation} \begin{theorem} \cite{Mubeen1} Assume that $x\in \mathbb{C} ,\ k\in \mathbb{R} ^{+}$ and $Re\left( \gamma -\beta \right) >0,$ then \begin{equation} _{2}F_{1,k}\left[ \begin{array}{c} \left( \alpha ,1\right) \ ,\ \left( \beta ,k\right) \\ \left( \gamma ,k\right)% \end{array}% ;x\right] :\ =\frac{\Gamma _{k}\left( \gamma \right) \Gamma _{k}\left( \gamma -\beta -k\alpha \right) }{\Gamma _{k}\left( \gamma -\beta \right) \Gamma _{k}\left( \gamma -k\alpha \right) }. \label{kummer1} \end{equation} \end{theorem} For the special case $\alpha =-n,$% \begin{equation} _{2}F_{1,k}\left[ \begin{array}{c} \left( -n,1\right) \ ,\ \left( \beta ,k\right) \\ \left( \gamma ,k\right)% \end{array}% ;x\right] \ =\frac{\left( \gamma -\beta \right) _{n,k}}{\left( \gamma \right) _{n,k}}. \label{kummer2} \end{equation}% \subsection{ $k$-Generalization of the Appell Function $F_{1}\left( \protect\alpha ,\protect\beta ,\protect\beta ^{\prime };\protect\gamma ;x,y\right) $} Here, we remind the definition of $k$-analogue of $F_{1}$ which is the first Appell function and some identities which are satisfied by it \cite{Mubeen6}. 
\begin{definition} \cite{Mubeen6} Let$\ k\in \mathbb{R} ^{+},\ x,y\in \mathbb{C} ,\ \alpha ,\ \beta ,\ \beta ^{\prime },\ \gamma \in \mathbb{C} $ and $n\in \mathbb{N} ^{+}.\ $Then the $F_{1,k}$ function with the parameters $\alpha ,\ \beta ,\ \beta ^{\prime },\ \gamma$ is given by \begin{equation} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\sum\limits_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n,k}\left( \beta \right) _{m,k}\left( \beta ^{\prime }\right) _{n,k}}{\left( \gamma \right) _{m+n,k}}\frac{x^{m}}{m!}\frac{y^{n}}{n!} \label{kapp1} \end{equation}% where $\gamma \neq 0,-1,-2,...$ and $\left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k}.$\bigskip \end{definition} \begin{theorem} \cite{Mubeen6} Assume that $k\in \mathbb{R} ^{+},\ x,y\in \mathbb{C} ,\ \Re\left( \gamma \right) >\Re\left( \alpha \right) >0,\ $then the integral representation of the Appell $k$-function $F_{1,k}$ is as follows \begin{eqnarray} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) \notag \\ &=&\tfrac{\Gamma _{k}\left( \gamma \right) }{k\Gamma _{k}\left( \alpha \right) \Gamma _{k}\left( \gamma -\alpha \right) }\int\limits_{0}^{1}t^{% \frac{\alpha }{k}-1}\left( 1-t\right) ^{\frac{\gamma -\alpha }{k}-1}\left( 1-kxt\right) ^{-\frac{\beta }{k}}\left( 1-kyt\right) ^{-\frac{\beta ^{\prime }}{k}}dt. \label{ikapp} \end{eqnarray} \end{theorem} \subsection{The Riemann Liouville $k$-Fractional Derivative Operator} Fractional calculus and its applications have been intensively investigated for a long time by many researchers in numerous disciplines, and attention to it has grown tremendously. By making use of the concept of fractional derivatives and integrals, various extensions of them have been introduced, and authors have gained different perspectives in many areas such as engineering, physics, economics, biology and statistics \cite{ Fernandez, Ozarslan}. 
One of the generalizations of fractional derivatives is the Riemann Liouville $k$-fractional derivative operator studied in \cite{ Rahman, Romero, Azam}. Here, we recall the definition of the Riemann Liouville fractional derivative and its $k$-generalization, together with some theorems which will be used in the following section. \begin{definition} \cite{Srivastava} The well known Riemann Liouville fractional derivative of order $\mu $ is defined,\ for a function $f,\ $as follows \begin{equation} \mathcal{D}_{z}^{\mu }\left\{ f\left( z\right) \right\} =\frac{1}{\Gamma \left( -\mu \right) }\int\limits_{0}^{z}f\left( t\right) \left( z-t\right) ^{-\mu -1}dt \label{rl1} \end{equation}% where $\Re\left( \mu \right) <0.$ In particular, for the case $m-1<\Re\left( \mu \right) <m$ where $m=1,2,...$, \eqref{rl1} is written as \begin{eqnarray} \mathcal{D}_{z}^{\mu }\left\{ f\left( z\right) \right\} &=&\frac{d^{m}}{% dz^{m}}\mathcal{D}_{z}^{\mu -m}\left\{ f\left( z\right) \right\} \label{rl2} \\ &=&\frac{d^{m}}{dz^{m}}\left\{ \frac{1}{\Gamma \left( -\mu +m\right) }% \int\limits_{0}^{z}f\left( t\right) \left( z-t\right) ^{-\mu +m-1}dt\right\} \notag \end{eqnarray} \end{definition} \begin{definition} \cite{Rahman} The $k$-analogue of the Riemann Liouville fractional derivative of order $\mu $ is defined by% \begin{equation} _{k}\mathcal{D}_{z}^{\mu }\left\{ f\left( z\right) \right\} =\frac{1}{% k\Gamma _{k}\left( -\mu \right) }\int\limits_{0}^{z}f\left( t\right) \left( z-t\right) ^{-\frac{\mu }{k}-1}dt \label{krl1} \end{equation}% where $\Re\left( \mu \right) <0\ $and$\ k\in \mathbb{R} ^{+}.$ In particular, for the case $m-1<\Re\left( \mu \right) <m$ where $% m=1,2,...,$ \eqref{krl1} is written as% \begin{eqnarray} _{k}\mathcal{D}_{z}^{\mu }\left\{ f\left( z\right) \right\} &=&\frac{d^{m}}{% dz^{m}}\ _{k}\mathcal{D}_{z}^{\mu -mk}\left\{ f\left( z\right) \right\} \label{krl2} \\ &=&\frac{d^{m}}{dz^{m}}\left\{ \frac{1}{k\Gamma _{k}\left( -\mu +mk\right) }% \int\limits_{0}^{z}f\left( t\right) 
\left( z-t\right) ^{-\frac{\mu }{k}% +m-1}dt\right\} \notag \end{eqnarray} \end{definition} \begin{theorem} \cite{Rahman} Let $k\in \mathbb{R} ^{+},\ \Re\left( \mu \right) <0.$ Then we have \begin{equation} _{k}\mathcal{D}_{z}^{\mu }\left\{ z^{\frac{\eta }{k}}\right\} =\frac{z^{% \frac{\eta -\mu }{k}}}{\Gamma _{k}\left( -\mu \right) }B_{k}\left( \eta +k,-\mu \right) . \label{krl3} \end{equation} \end{theorem} \begin{theorem} \cite{Rahman} Let $\Re\left( \mu \right) >0$ and suppose that the function $f\left( z\right) $ is analytic at the origin, with the Maclaurin power series expansion \begin{equation} f\left( z\right) =\sum\limits_{n=0}^{\infty }a_{n}z^{n} \end{equation} where$\ \left\vert z\right\vert <\rho ,\ \rho \in \mathbb{R} ^{+}.$ Then% \begin{equation} _{k}\mathcal{D}_{z}^{\mu }\left\{ f\left( z\right) \right\} =\sum\limits_{n=0}^{\infty }a_{n}\ _{k}\mathcal{D}_{z}^{\mu }\left\{ z^{n}\right\} . \label{krl3a} \end{equation} \end{theorem} \begin{theorem} \cite{Rahman} Let $k\in \mathbb{R} ^{+},\ \Re\left( \mu \right) >\Re\left( \eta \right) >0.$ Then the following result holds true \begin{equation} _{k}\mathcal{D}_{z}^{\eta -\mu }\left\{ z^{\frac{\eta }{k}-1}\left( 1-kz\right) ^{-\frac{\beta }{k}}\right\} =\frac{\Gamma _{k}\left( \eta \right) }{\Gamma _{k}\left( \mu \right) }z^{\frac{\mu }{k}-1}\ _{2}F_{1,k}% \left[ \begin{array}{c} \beta \ ,\ \eta \\ \mu% \end{array}% ;z\right] \ \label{krl4} \end{equation}% where $\left\vert z\right\vert <\frac{1}{k}.$ \end{theorem} \begin{theorem} \cite{Rahman} Let $k\in \mathbb{R} ^{+}.\ $We have the following result \begin{equation} _{k}\mathcal{D}_{z}^{\eta -\mu }\left\{ z^{\frac{\eta }{k}-1}\left( 1-kaz\right) ^{-\frac{\alpha }{k}}\left( 1-kbz\right) ^{-\frac{\beta }{k}% }\right\} =\frac{\Gamma _{k}\left( \eta \right) }{\Gamma _{k}\left( \mu \right) }z^{\frac{\mu }{k}-1}F_{1,k}\left( \eta ,\alpha ,\beta ;\mu ;az,bz\right) \label{krl5} \end{equation} where $\Re\left( \mu \right) >\Re\left( \eta
\right) >0,\ \Re\left( \alpha \right) >0,\ \Re\left( \beta \right) >0$ and $\max \left\{ \left\vert az\right\vert ,\left\vert bz\right\vert \right\} <\frac{1% }{k}.$ \end{theorem} \section{$k$-Generalizations of the Appell Functions and Some Transformation Formulas} In 2015, the $k$-generalization of the Appell function $F_{1}$ was introduced, and contiguous function relations and an integral representation of this function were obtained by using the fundamental relations of the Pochhammer $k$-symbol \cite{Mubeen6}. While the $k$-analogue of $F_{1}$ was thus defined, the other Appell $k$-functions $F_{2},\ F_{3}$ and $F_{4}$ have not yet been explored. We now turn our attention to the definitions of $F_{2}$, $F_{3}$ and $F_{4}$ and provide integral representations of the first two. We also derive some linear transformations of the Appell $k$-functions and give some reduction formulas involving the $_{2}F_{1,k}$ hypergeometric function. \begin{definition} Let$\ k\in \mathbb{R} ^{+},\ x,y\in \mathbb{C} $ and $\alpha ,\beta ,\beta ^{\prime },\gamma $,$\ \gamma ^{\prime }\in \mathbb{C} .\ $Then the Appell $k$-functions are defined by\ \begin{eqnarray} F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) &=&\sum\limits_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n,k}\left( \beta \right) _{m,k}\left( \beta ^{\prime }\right) _{n,k}}{% \left( \gamma \right) _{m,k}\left( \gamma ^{\prime }\right) _{n,k}}\frac{% x^{m}}{m!}\frac{y^{n}}{n!} \label{appk2} \\ &=&\sum\limits_{m=0}^{\infty }\frac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha +mk\ ,\ \beta ^{\prime } \\ \gamma ^{\prime }% \end{array}% ;y\right] \frac{x^{m}}{m!} \notag \\ F_{3,k}\left( \alpha ,\alpha ^{\prime },\beta ,\beta ^{\prime };\gamma ;x,y\right) &=&\sum\limits_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m,k}\left( \alpha ^{\prime }\right) _{n,k}\left( \beta \right) _{m,k}\left( \beta ^{\prime }\right)
_{n,k}}{\left( \gamma \right) _{m+n,k}}% \frac{x^{m}}{m!}\frac{y^{n}}{n!} \label{appk3} \\ &=&\sum\limits_{m=0}^{\infty }\frac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha ^{\prime }\ ,\ \beta ^{\prime } \\ \gamma +mk% \end{array}% ;y\right] \frac{x^{m}}{m!} \notag \\ F_{4,k}\left( \alpha ,\beta ;\gamma ,\gamma ^{\prime };x,y\right) &=&\sum\limits_{m,n=0}^{\infty }\frac{\left( \alpha \right) _{m+n,k}\left( \beta \right) _{m+n,k}}{\left( \gamma \right) _{m,k}\left( \gamma ^{\prime }\right) _{n,k}}\frac{x^{m}}{m!}\frac{y^{n}}{n!} \label{appk4} \\ &=&\sum\limits_{m=0}^{\infty }\dfrac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha +mk\ ,\ \beta +mk \\ \gamma ^{\prime }% \end{array}% ;y\right] \frac{x^{m}}{m!} \notag \end{eqnarray} where $\left\vert x\right\vert +\left\vert y\right\vert <\frac{1}{k}$; $\left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k}$; and $\sqrt{\left\vert x\right\vert }+\sqrt{\left\vert y\right\vert }<\frac{1}{\sqrt{k}}$, respectively, and the denominator parameters are neither zero nor negative integer multiples of $k$. \\ Also, the first Appell $k$-function $F_{1,k}\ $defined by \eqref{kapp1} is expressed in terms of $_{2}F_{1,k}$ as follows \begin{equation} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\sum\limits_{m=0}^{\infty }\dfrac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha +mk\ ,\ \beta ^{\prime } \\ \gamma +mk% \end{array}% ;y\right] \frac{x^{m}}{m!} \label{appk1} \end{equation} \end{definition} As a first theorem, we consider the integral representations of $F_{2,k}$ and $% F_{3,k}.\ $We note that the integral representation of $F_{1,k}$ can be found in \cite{Mubeen6}.
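As a quick numerical sanity check of the equivalence between the double-series form \eqref{appk2} and the iterated single-series form with an inner $_{2}F_{1,k}$, the following Python sketch (not part of the paper; the function names, truncation depth, and sample parameter values are our own illustrative choices) evaluates $F_{2,k}$ both ways, using the Pochhammer $k$-symbol $\left( \alpha \right) _{n,k}=\alpha \left( \alpha +k\right) \cdots \left( \alpha +\left( n-1\right) k\right) $ recalled earlier in the paper:

```python
from math import factorial

def poch_k(a, n, k):
    # Pochhammer k-symbol: (a)_{n,k} = a (a + k) ... (a + (n - 1) k)
    p = 1.0
    for i in range(n):
        p *= a + i * k
    return p

def f21k(a, b, c, x, k, terms=40):
    # Truncated series for the k-hypergeometric function 2F1,k (|x| < 1/k)
    return sum(poch_k(a, n, k) * poch_k(b, n, k) / poch_k(c, n, k)
               * x ** n / factorial(n) for n in range(terms))

def f2k_double(a, b, bp, c, cp, x, y, k, terms=40):
    # F_{2,k} as the double series; valid for |x| + |y| < 1/k
    return sum(poch_k(a, m + n, k) * poch_k(b, m, k) * poch_k(bp, n, k)
               / (poch_k(c, m, k) * poch_k(cp, n, k))
               * x ** m * y ** n / (factorial(m) * factorial(n))
               for m in range(terms) for n in range(terms))

def f2k_iterated(a, b, bp, c, cp, x, y, k, terms=40):
    # F_{2,k} as a single sum over m with an inner 2F1,k in y,
    # using (a)_{m+n,k} = (a)_{m,k} (a + m k)_{n,k}
    return sum(poch_k(a, m, k) * poch_k(b, m, k) / poch_k(c, m, k)
               * f21k(a + m * k, bp, cp, y, k, terms)
               * x ** m / factorial(m) for m in range(terms))

# Illustrative sample parameters inside the region |x| + |y| < 1/k for k = 2
k, a, b, bp, c, cp, x, y = 2.0, 0.7, 0.5, 0.9, 1.3, 1.1, 0.1, 0.15
print(abs(f2k_double(a, b, bp, c, cp, x, y, k)
          - f2k_iterated(a, b, bp, c, cp, x, y, k)))
```

The printed difference is at floating-point rounding level, since the two truncated sums contain term-by-term identical contributions over the same index rectangle.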
\begin{theorem} Let$\ k\in \mathbb{R} ^{+}$.\ The integral representations of $F_{2,k}$ and $F_{3,k}$ have the forms \begin{eqnarray} &&F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) =\frac{1}{k^{2}B_{k}\left( \beta ,\gamma -\beta \right) B_{k}\left( \beta ^{\prime },\gamma ^{\prime }-\beta ^{\prime }\right) } \notag \\ &&\times \int\limits_{0}^{1}\int\limits_{0}^{1}\frac{t^{\frac{\beta }{k}% -1}s^{\frac{\beta ^{\prime }}{k}-1}\left( 1-t\right) ^{\frac{\gamma -\beta }{% k}-1}\left( 1-s\right) ^{\frac{\gamma ^{\prime }-\beta ^{\prime }}{k}-1}}{% \left( 1-kxt-kys\right) ^{\frac{\alpha }{k}}}dtds \label{appk5} \end{eqnarray}% \begin{eqnarray} &&F_{3,k}\left( \alpha ,\alpha ^{\prime },\beta ,\beta ^{\prime };\gamma ;x,y\right) =\frac{\Gamma _{k}\left( \gamma \right) }{k^{2}\Gamma _{k}\left( \beta \right) \Gamma _{k}\left( \beta ^{\prime }\right) \Gamma _{k}\left( \gamma -\beta -\beta ^{\prime }\right) } \notag \\ &&\times \iint\limits_{D}\frac{t^{\frac{\beta }{k}-1}s^{\frac{\beta ^{\prime }}{k}-1}\left( 1-kxt\right) ^{-\frac{\alpha }{k}}\left( 1-kys\right) ^{-\frac{\alpha ^{\prime }}{k}}}{\left( 1-t-s\right) ^{1-\frac{% \gamma -\beta -\beta ^{\prime }}{k}}}dtds \label{appk5ab} \end{eqnarray} where $\Re\left( \gamma \right) >\Re\left( \beta \right) >0$ and $\Re\left( \gamma ^{\prime }\right) >\Re\left( \beta ^{\prime }\right) >0$ for \eqref{appk5}; $\Re\left( \beta \right) >0,\ \Re\left( \beta ^{\prime }\right) >0$ and $\Re\left( \gamma -\beta -\beta ^{\prime }\right) >0$ for \eqref{appk5ab}; and $D=\left\{ \left( t,s\right) :t\geq 0,\ s\geq 0,\ t+s\leq 1\right\} .$ \end{theorem} \begin{proof} From the definition of the Pochhammer $k$-symbol, we can write \begin{equation} \frac{\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}=\frac{% B_{k}\left( \beta +mk,\gamma -\beta \right) }{B_{k}\left( \beta ,\gamma -\beta \right) }\ \ \ \ \ \ \ \ \ \frac{\left( \beta ^{\prime }\right) _{n,k}}{\left( \gamma ^{\prime }\right) _{n,k}}=\frac{B_{k}\left( \beta ^{\prime }+nk,\gamma ^{\prime }-\beta ^{\prime }\right) }{B_{k}\left( \beta ^{\prime },\gamma ^{\prime }-\beta ^{\prime }\right) }.
\end{equation} Inserting these formulas, together with the integral representation of $B_{k}$, into the definition of $F_{2,k}$ given by \eqref{appk2}, we find that \begin{eqnarray*} &&F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) =\frac{1}{k^{2}B_{k}\left( \beta ,\gamma -\beta \right) B_{k}\left( \beta ^{\prime },\gamma ^{\prime }-\beta ^{\prime }\right) } \\ &&\times \sum\limits_{m,n=0}^{\infty }\left( \alpha \right) _{m+n,k}\left( \int\limits_{0}^{1}t^{\frac{\beta }{k}+m-1}\left( 1-t\right) ^{\frac{\gamma -\beta }{k}-1}dt\right) \left( \int\limits_{0}^{1}s^{\frac{\beta ^{\prime }% }{k}+n-1}\left( 1-s\right) ^{\frac{\gamma ^{\prime }-\beta ^{\prime }}{k}% -1}ds\right) \frac{x^{m}}{m!}\frac{y^{n}}{n!} \\ &=&\frac{1}{k^{2}B_{k}\left( \beta ,\gamma -\beta \right) B_{k}\left( \beta ^{\prime },\gamma ^{\prime }-\beta ^{\prime }\right) } \\ &&\times \int\limits_{0}^{1}\int\limits_{0}^{1}t^{\frac{\beta }{k}-1}s^{% \frac{\beta ^{\prime }}{k}-1}\left( 1-t\right) ^{\frac{\gamma -\beta }{k}% -1}\left( 1-s\right) ^{\frac{\gamma ^{\prime }-\beta ^{\prime }}{k}% -1}\sum\limits_{m,n=0}^{\infty }\left( \alpha \right) _{m+n,k}\frac{\left( xt\right) ^{m}}{m!}\frac{\left( ys\right) ^{n}}{n!}dtds \\ &=&\frac{1}{k^{2}B_{k}\left( \beta ,\gamma -\beta \right) B_{k}\left( \beta ^{\prime },\gamma ^{\prime }-\beta ^{\prime }\right) } \\ &&\times \int\limits_{0}^{1}\int\limits_{0}^{1}t^{\frac{\beta }{k}-1}s^{% \frac{\beta ^{\prime }}{k}-1}\left( 1-t\right) ^{\frac{\gamma -\beta }{k}% -1}\left( 1-s\right) ^{\frac{\gamma ^{\prime }-\beta ^{\prime }}{k}-1}\left( 1-kxt-kys\right) ^{-\frac{\alpha }{k}}dtds, \end{eqnarray*} which completes the proof. Formula \eqref{appk5ab} can be proved in a similar way, hence the details are omitted.
\end{proof} \begin{theorem} For $k\in \mathbb{R} ^{+},\ F_{1,k}\ $has the following relation \begin{eqnarray} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) \notag \\ &=&\left( 1-kx\right) ^{-\frac{\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{1,k}\left( \gamma -\alpha ,\beta ,\beta ^{\prime };\gamma ;-\frac{x}{1-kx},-\frac{y}{1-ky}\right) \label{appk7} \end{eqnarray}% where $\Re\left( \gamma \right) >\Re\left( \alpha \right) >0$ and $\left\vert \frac{x}{1-kx}\right\vert <\frac{1}{k},\ \left\vert \frac{y}{% 1-ky}\right\vert <\frac{1}{k},\ \left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k}.$ \end{theorem} \begin{proof} In \cite{Mubeen6}, the integral representation of\ $F_{1,k}\ $is given by \begin{equation*} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\frac{1}{% kB_{k}\left( \alpha ,\gamma -\alpha \right) }\int\limits_{0}^{1}t^{\frac{\alpha }{% k}-1}\left( 1-t\right) ^{\frac{\gamma -\alpha }{k}-1}\left( 1-kxt\right) ^{-% \frac{\beta }{k}}\left( 1-kyt\right) ^{-\frac{\beta ^{\prime }}{k}}dt.
\end{equation*} Performing the change of variables $t=1-t_{1}$ in the above integral, we can write \begin{eqnarray*} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\frac{1}{% kB_{k}\left( \alpha ,\gamma -\alpha \right) } \\ &&\times \int\limits_{0}^{1}t_{1}^{\frac{\gamma -\alpha }{k}-1}\left( 1-t_{1}\right) ^{\frac{\alpha }{k}-1}\left( 1-kx\left( 1-t_{1}\right) \right) ^{-\frac{\beta }{k}}\left( 1-ky\left( 1-t_{1}\right) \right) ^{-% \frac{\beta ^{\prime }}{k}}dt_{1} \\ &=&\frac{1}{kB_{k}\left( \alpha ,\gamma -\alpha \right) }\left( 1-kx\right) ^{-% \frac{\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}} \\ &&\times \int\limits_{0}^{1}t_{1}^{\frac{\gamma -\alpha }{k}-1}\left( 1-t_{1}\right) ^{\frac{\alpha }{k}-1}\left( 1+\frac{kxt_{1}}{1-kx}\right) ^{-% \frac{\beta }{k}}\left( 1+\frac{kyt_{1}}{1-ky}\right) ^{-\frac{\beta ^{\prime }}{k}}dt_{1} \\ &=&\left( 1-kx\right) ^{-\frac{\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{1,k}\left( \gamma -\alpha ,\beta ,\beta ^{\prime };\gamma ;-\frac{x}{1-kx},-\frac{y}{1-ky}\right) . \end{eqnarray*} Thus we get the desired result. \end{proof} \begin{theorem} For $k\in \mathbb{R} ^{+},\ $we have% \begin{eqnarray} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right)=\left( 1-kx\right) ^{-\frac{\alpha }{k}}F_{1,k}\left( \alpha ,\gamma -\beta -\beta ^{\prime },\beta ^{\prime };\gamma ;-\tfrac{x}{1-kx},-\tfrac{x-y}{1-kx}% \right) , \label{appk8} \\ F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right)=\left( 1-ky\right) ^{-\frac{\alpha }{k}}F_{1,k}\left( \alpha ,\beta ,\gamma -\beta -\beta ^{\prime };\gamma ;-\tfrac{y-x}{1-ky},-\tfrac{y}{1-ky}\right) .
\label{appk9} \end{eqnarray} \end{theorem} \begin{proof} By the change of variables $t=\frac{t_{1}}{1-kx+kxt_{1}}$ in the integral representation of $F_{1,k}$, we have \begin{eqnarray*} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\frac{1}{% kB_{k}\left( \alpha ,\gamma -\alpha \right) } \\ &&\times \int\limits_{0}^{1}t^{\frac{\alpha }{k}-1}\left( 1-t\right) ^{% \frac{\gamma -\alpha }{k}-1}\left( 1-kxt\right) ^{-\frac{\beta }{k}}\left( 1-kyt\right) ^{-\frac{\beta ^{\prime }}{k}}dt \\ &=&\frac{1}{kB_{k}\left( \alpha ,\gamma -\alpha \right) }\left( 1-kx\right) ^{\frac{\gamma -\alpha -\beta }{k}} \\ &&\times \int\limits_{0}^{1}t_{1}^{\frac{\alpha }{k}-1}\left( 1-t_{1}\right) ^{\frac{\gamma -\alpha }{k}-1}\left( 1-kx+kxt_{1}\right) ^{% \frac{\beta +\beta ^{\prime }-\gamma }{k}}\left( 1-kx+kxt_{1}-kyt_{1}\right) ^{-\frac{\beta ^{\prime }}{k}}dt_{1} \\ &=&\frac{1}{kB_{k}\left( \alpha ,\gamma -\alpha \right) }\left( 1-kx\right) ^{-\frac{\alpha }{k}} \\ &&\times \int\limits_{0}^{1}t_{1}^{\frac{\alpha }{k}-1}\left( 1-t_{1}\right) ^{\frac{\gamma -\alpha }{k}-1}\left( 1+\frac{kxt_{1}}{1-kx}% \right) ^{\frac{\beta +\beta ^{\prime }-\gamma }{k}}\left( 1+\frac{% kxt_{1}-kyt_{1}}{1-kx}\right) ^{-\frac{\beta ^{\prime }}{k}}dt_{1} \\ &=&\left( 1-kx\right) ^{-\frac{\alpha }{k}}\ F_{1,k}\left( \alpha ,\gamma -\beta -\beta ^{\prime },\beta ^{\prime };\gamma ;-\frac{x}{1-kx},-\frac{x-y% }{1-kx}\right) \end{eqnarray*}% which proves \eqref{appk8}. Using a similar argument with $t=\frac{t_{1}}{1-ky+kyt_{1}}$, one can easily obtain% \begin{equation*} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\left( 1-ky\right) ^{-\frac{\alpha }{k}}F_{1,k}\left( \alpha ,\beta ,\gamma -\beta -\beta ^{\prime };\gamma ;-\frac{y-x}{1-ky},-\frac{y}{1-ky}\right) .
\end{equation*} \end{proof} \begin{theorem} \label{TheoremA}Let $k\in \mathbb{R} ^{+}$. Then $F_{1,k}\ $has the following relations:% \begin{eqnarray} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) \notag \\ &=&\left( 1-kx\right) ^{\frac{\gamma -\alpha -\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{1,k}\left( \gamma -\alpha ,\gamma -\beta -\beta ^{\prime },\beta ^{\prime };\gamma ;x,-\tfrac{y-x}{1-ky}\right) \label{appk10} \\ &&\text{and} \notag \\ &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) \notag \\ &=&\left( 1-kx\right) ^{-\frac{\beta }{k}}\left( 1-ky\right) ^{\frac{\gamma -\alpha -\beta ^{\prime }}{k}}F_{1,k}\left( \gamma -\alpha ,\beta ,\gamma -\beta -\beta ^{\prime };\gamma ;-\tfrac{y-x}{1-kx},y\right). \label{appk11} \end{eqnarray} \end{theorem} \begin{proof} Using $t=\frac{t_{1}}{1-kx+kxt_{1}}$ and $t_{1}=1-t_{2}\ $in the integral representation of $F_{1,k}$, we obtain \begin{eqnarray*} &&F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\frac{1}{% kB_{k}\left( \alpha ,\gamma -\alpha \right) } \\ &&\times \int\limits_{0}^{1}t^{\frac{\alpha }{k}-1}\left( 1-t\right) ^{% \frac{\gamma -\alpha }{k}-1}\left( 1-kxt\right) ^{-\frac{\beta }{k}}\left( 1-kyt\right) ^{-\frac{\beta ^{\prime }}{k}}dt \\ &=&\frac{1}{kB_{k}\left( \alpha ,\gamma -\alpha \right) } \\ &&\times \int\limits_{0}^{1}t_{2}^{\frac{\gamma -\alpha }{k}-1}\left( 1-t_{2}\right) ^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{\frac{\gamma -\alpha -\beta }{k}}\left( 1-kxt_{2}\right) ^{\frac{\beta +\beta ^{\prime }-\gamma }{k}}\left( 1-ky+kyt_{2}-kxt_{2}\right) ^{-\frac{\beta ^{\prime }}{k% }}dt_{2} \\ &=&\left( 1-kx\right) ^{\frac{\gamma -\alpha -\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}\frac{1}{kB_{k}\left( \alpha ,\gamma -\alpha \right) } \\ &&\times \int\limits_{0}^{1}t_{2}^{\frac{\gamma -\alpha }{k}-1}\left( 1-t_{2}\right) ^{\frac{\alpha }{k}-1}\left( 1-kxt_{2}\right) ^{\frac{\beta +\beta ^{\prime }-\gamma }{k}}\left(
1+\frac{kyt_{2}-kxt_{2}}{1-ky}\right) ^{-\frac{\beta ^{\prime }}{k}}dt_{2} \\ &=&\left( 1-kx\right) ^{\frac{\gamma -\alpha -\beta }{k}}\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{1,k}\left( \gamma -\alpha ,\gamma -\beta -\beta ^{\prime },\beta ^{\prime };\gamma ;x,-\tfrac{y-x}{1-ky}\right) . \end{eqnarray*}% Using the same method as above, we can reach \eqref{appk11} easily. \end{proof} \begin{theorem} Let $k\in \mathbb{R} ^{+}$, then the following relations hold% \begin{eqnarray} &&F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) \notag \\ &=&\left( 1-kx\right) ^{-\frac{\alpha }{k}}F_{2,k}\left( \alpha ,\gamma -\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };-\frac{x}{1-kx},\frac{y}{% 1-kx}\right) , \label{appk12} \\ && \notag \\ &&F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) \notag \\ &=&\left( 1-ky\right) ^{-\frac{\alpha }{k}}F_{2,k}\left( \alpha ,\beta ,\gamma ^{\prime }-\beta ^{\prime };\gamma ,\gamma ^{\prime };\frac{x}{1-ky}% ,-\frac{y}{1-ky}\right) \label{appk13} \\ &&\text{and} \notag \\ &&F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\gamma ^{\prime };x,y\right) \notag \\ &=&\left( 1-kx-ky\right) ^{-\frac{\alpha }{k}}F_{2,k}\left( \alpha ,\gamma -\beta ,\gamma ^{\prime }-\beta ^{\prime };\gamma ,\gamma ^{\prime };-\tfrac{% x}{1-kx-ky},-\tfrac{y}{1-kx-ky}\right) . \label{appk14} \end{eqnarray} \end{theorem} \begin{proof} Taking $t=1-t_{1}$ for the first relation, $s=1-s_{1}$ for the second, and both $t=1-t_{1}$ and $s=1-s_{1}$ for the third in the double integral \eqref{appk5}, we find \eqref{appk12}, \eqref{appk13} and \eqref{appk14}, respectively. This completes the proof. \end{proof} We continue with some reduction formulas for the Appell $k$-functions $F_{1,k}$ and $% F_{2,k}\ $in terms of the $_{2}F_{1,k}$ generalized hypergeometric function. \begin{theorem} Let $k\in \mathbb{R} ^{+}$.
Then the special cases of $F_{1,k}$ and $F_{2,k}$ are as follows \begin{eqnarray} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\beta +\beta ^{\prime };x,y\right) &=&\left( 1-kx\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta ^{\prime } \\ \beta +\beta ^{\prime }% \end{array}% ;-\frac{x-y}{1-kx}\right], \label{appk15} \\ F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\beta +\beta ^{\prime };x,y\right) &=&\left( 1-ky\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \beta +\beta ^{\prime }% \end{array}% ;-\frac{y-x}{1-ky}\right], \label{appk16} \\ F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\beta ,\gamma ^{\prime };x,y\right) &=&\left( 1-kx\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta ^{\prime } \\ \gamma ^{\prime }% \end{array}% ;\frac{y}{1-kx}\right], \label{appk17} \\ F_{2,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ,\beta ^{\prime };x,y\right) &=&\left( 1-ky\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma \end{array}% ;\frac{x}{1-ky}\right]. \label{appk18} \end{eqnarray} \end{theorem} \begin{proof} Specializing \eqref{appk8} and \eqref{appk9} for $\gamma =\beta +\beta ^{\prime }$, and setting $\gamma =\beta $ in \eqref{appk12} and $\gamma ^{\prime }=\beta ^{\prime }$ in \eqref{appk13}, we obtain the desired results, respectively. \end{proof} In the next lemma, we will prove the Euler transformation for the $_{2}F_{1,k}$ hypergeometric function, which will be used in the next theorem.
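The reduction formulas above can be checked numerically. The following Python sketch (not part of the paper; the function names, truncation depth, and sample parameter values are our own illustrative choices) compares a truncated double series for $F_{1,k}$ against the right-hand sides of \eqref{appk15} and \eqref{appk16}, in the specialization $\gamma =\beta +\beta ^{\prime }$ under which the proof operates:

```python
from math import factorial

def poch_k(a, n, k):
    # Pochhammer k-symbol: (a)_{n,k} = a (a + k) ... (a + (n - 1) k)
    p = 1.0
    for i in range(n):
        p *= a + i * k
    return p

def f21k(a, b, c, x, k, terms=40):
    # Truncated k-hypergeometric series 2F1,k (|x| < 1/k)
    return sum(poch_k(a, n, k) * poch_k(b, n, k) / poch_k(c, n, k)
               * x ** n / factorial(n) for n in range(terms))

def f1k(a, b, bp, c, x, y, k, terms=40):
    # First Appell k-function F_{1,k} as a truncated double series
    return sum(poch_k(a, m + n, k) * poch_k(b, m, k) * poch_k(bp, n, k)
               / poch_k(c, m + n, k)
               * x ** m * y ** n / (factorial(m) * factorial(n))
               for m in range(terms) for n in range(terms))

# Illustrative parameters with gamma = beta + beta' and k = 2; the arguments
# stay inside the convergence regions of all series involved.
k, a, b, bp, x, y = 2.0, 0.7, 0.5, 0.9, 0.1, 0.15
c = b + bp
lhs = f1k(a, b, bp, c, x, y, k)
rhs15 = (1 - k * x) ** (-a / k) * f21k(a, bp, c, -(x - y) / (1 - k * x), k)
rhs16 = (1 - k * y) ** (-a / k) * f21k(a, b, c, -(y - x) / (1 - k * y), k)
print(abs(lhs - rhs15), abs(lhs - rhs16))
```

Both printed differences are at series-truncation and rounding level, consistent with the two reductions of $F_{1,k}$ to a single $_{2}F_{1,k}$.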
\begin{lemma} Let$\ x\in \mathbb{C} $ with $\left\vert x\right\vert <\frac{1}{k}$ and $k\in \mathbb{R} ^{+}.\ $Then we have \begin{equation} _{2}F_{1,k}\left[ \begin{array}{c} \alpha \ ,\ \beta \\ \gamma% \end{array}% ;x\right] =\left( 1-kx\right) ^{-\frac{\beta }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \gamma -\alpha \ ,\ \beta \\ \gamma% \end{array}% ;-\frac{x}{1-kx}\right] \label{Euler} \end{equation} \end{lemma} \begin{proof} From the definition of $_{2}F_{1,k},$ one gets \begin{eqnarray} &&\left( 1-kx\right) ^{-\frac{\beta }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \gamma -\alpha \ ,\ \beta \\ \gamma \end{array}% ;-\frac{x}{1-kx}\right] \notag \\ &=&\left( 1-kx\right) ^{-\frac{\beta }{k}}\sum\limits_{n=0}^{\infty }\frac{% \left( \gamma -\alpha \right) _{n,k}\left( \beta \right) _{n,k}}{\left( \gamma \right) _{n,k}}\frac{\left( -1\right) ^{n}x^{n}}{n!\left( 1-kx\right) ^{n}} \notag \\ &=&\sum\limits_{m,n=0}^{\infty }\frac{\left( \gamma -\alpha \right) _{n,k}\left( \beta \right) _{n,k}\left( \beta +nk\right) _{m,k}}{\left( \gamma \right) _{n,k}}\frac{\left( -1\right) ^{n}x^{m+n}}{n!m!} \notag \\ &=&\sum\limits_{m=0}^{\infty }\sum\limits_{n=0}^{m}\frac{\left( \gamma -\alpha \right) _{n,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{n,k}}\frac{\left( -1\right) ^{n}x^{m}}{n!\left( m-n\right) !} \label{Euler1} \end{eqnarray}% Using the identity $\left( m-n\right) !=\frac{\left( -1\right) ^{n}m!}{% \left( -m\right) _{n}}\ $in \eqref{Euler1}, we thus find that% \begin{eqnarray} &&\left( 1-kx\right) ^{-\frac{\beta }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \gamma -\alpha \ ,\ \beta \\ \gamma \end{array}% ;-\frac{x}{1-kx}\right] \notag \\ &=&\sum_{m=0}^{\infty }\sum\limits_{n=0}^{m}\frac{\left( \gamma -\alpha \right) _{n,k}\left( -m\right) _{n}}{\left( \gamma \right) _{n,k}n!}% \frac{\left( \beta \right) _{m,k}x^{m}}{m!} \notag \\ &=&\sum_{m=0}^{\infty }\ _{2}F_{1,k}\left[ \begin{array}{c} \left( -m,1\right) \ ,\ \left( \gamma -\alpha ,k\right) \\ \left( \gamma ,k\right) \end{array}% ;1\right] \left( \beta
\right) _{m,k}\frac{x^{m}}{m!} \label{Euler2} \end{eqnarray}% Making use of \eqref{kummer2} in \eqref{Euler2}, we get the desired result. \end{proof} \begin{theorem} Let $k\in \mathbb{R} ^{+}$. Then we have \begin{equation} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) =\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{3,k}\left( \alpha ,\gamma -\alpha ,\beta ,\beta ^{\prime };\gamma ;x,-\frac{y}{1-ky}\right) \label{appk19} \end{equation} \end{theorem} \begin{proof} Using the expansion of $F_{1,k}$ given by \eqref{appk1} and making use of \eqref{Euler}, we can write% \begin{eqnarray*} F_{1,k}\left( \alpha ,\beta ,\beta ^{\prime };\gamma ;x,y\right) &=&\sum\limits_{m=0}^{\infty }\frac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha +mk\ ,\ \beta ^{\prime } \\ \gamma +mk% \end{array}% ;y\right] \frac{x^{m}}{m!} \\ &=&\sum\limits_{m=0}^{\infty }\frac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}}{\left( \gamma \right) _{m,k}}\left( 1-ky\right) ^{-% \frac{\beta ^{\prime }}{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \beta ^{\prime },\ \gamma -\alpha \\ \gamma +mk% \end{array}% ;-\frac{y}{1-ky}\right] \frac{x^{m}}{m!} \\ &=&\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}\sum\limits_{m,n=0}^{% \infty }\tfrac{\left( \alpha \right) _{m,k}\left( \beta \right) _{m,k}\left( \beta ^{\prime }\right) _{n,k}\left( \gamma -\alpha \right) _{n,k}}{\left( \gamma \right) _{m,k}\left( \gamma +mk\right) _{n,k}}\frac{x^{m}}{m!}\tfrac{% \left( -\frac{y}{1-ky}\right) ^{n}}{n!} \\ &=&\left( 1-ky\right) ^{-\frac{\beta ^{\prime }}{k}}F_{3,k}\left( \alpha ,\gamma -\alpha ,\beta ,\beta ^{\prime };\gamma ;x,-\frac{y}{1-ky}\right) \end{eqnarray*}% This completes the proof.
\end{proof} \section{Generating Relations Involving the Generalized Appell Functions} In this section, employing the theory of the Riemann--Liouville $k$-fractional derivative \cite{Rahman} and making use of the relations considered in the previous sections, we establish linear and bilinear generating relations for the $k$-analogues of the hypergeometric and Appell functions. \begin{theorem} We have the generating relation% \begin{equation} \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta% \end{array}% ;x\right] t^{n}=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda ,\ \ \ \ \alpha \\ \beta% \end{array}% ;\frac{x}{1-kt}\right] , \label{gf1} \end{equation}% where $\left\vert x\right\vert <\frac{1}{k}\min \left\{ 1,1-kt\right\} .$ \end{theorem} \begin{proof} To prove the result, consider the elementary identities given by% \begin{equation} \left( 1-kx-kt\right) ^{-\frac{\lambda }{k}}=\left( 1-kt\right) ^{-\frac{% \lambda }{k}}\left( 1-\frac{kx}{1-kt}\right) ^{-\frac{\lambda }{k}}, \label{gf1a} \end{equation}% \begin{equation*} \left( 1-kx-kt\right) ^{-\frac{\lambda }{k}}=\left( 1-kx\right) ^{-\frac{% \lambda }{k}}\left( 1-\frac{kt}{1-kx}\right) ^{-\frac{\lambda }{k}}.
\end{equation*}% From the series expansion associated with the Pochhammer $k$-symbol \cite{Diaz}, \begin{equation*} \sum\limits_{n=0}^{\infty }\left( \alpha \right) _{n,k}\frac{z^{n}}{n!}% =\left( 1-kz\right) ^{-\frac{\alpha }{k}}, \end{equation*}% we can write% \begin{eqnarray} \left( 1-kx-kt\right) ^{-\frac{\lambda }{k}} &=&\left( 1-kx\right) ^{-\frac{% \lambda }{k}}\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}% }{n!}\left( \frac{t}{1-kx}\right) ^{n} \notag \\ &=&\left( 1-kx\right) ^{-\frac{\lambda }{k}}\sum\limits_{n=0}^{\infty }% \frac{\left( \lambda \right) _{n,k}}{n!}\left( 1-kx\right) ^{-n}t^{n} \notag \\ &=&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}% \left( 1-kx\right) ^{-\frac{\lambda }{k}-n}t^{n}. \label{gf1b} \end{eqnarray}% From \eqref{gf1a} and \eqref{gf1b}, we have the equality \begin{equation} \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\left( 1-kx\right) ^{-\frac{\lambda }{k}-n}t^{n}=\left( 1-kt\right) ^{-\frac{% \lambda }{k}}\left( 1-\frac{kx}{1-kt}\right) ^{-\frac{\lambda }{k}} \label{gf1c} \end{equation}% where $k\left\vert t\right\vert <\left\vert 1-kx\right\vert .$ Multiplying both sides of \eqref{gf1c} by $x^{\frac{\alpha }{k}-1}$ and then applying $% _{k}D_{x}^{\alpha -\beta }$ to both sides, we reach% \begin{equation*} _{k}D_{x}^{\alpha -\beta }\left\{ \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{-% \frac{\lambda }{k}-n}t^{n}\right\} =_{k}D_{x}^{\alpha -\beta }\left\{ \left( 1-kt\right) ^{-\frac{\lambda }{k}}\ x^{\frac{\alpha }{k}-1}\left( 1-\frac{kx% }{1-kt}\right) ^{-\frac{\lambda }{k}}\right\} .
\end{equation*}% Since $\Re(\alpha )>0$ and $k\left\vert t\right\vert <\left\vert 1-kx\right\vert $, it is possible to change the order of the summation and differentiation, and we get \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{k}D_{x}^{\alpha -\beta }\left\{ x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{-\frac{\lambda }{k}-n}\right\} t^{n} \label{appk19d} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ _{k}D_{x}^{\alpha -\beta }\left\{ x^{\frac{\alpha }{k}-1}\left( 1-\frac{kx}{1-kt}\right) ^{-\frac{% \lambda }{k}}\right\} . \notag \end{eqnarray}% Finally, using relation \eqref{krl4} in \eqref{appk19d}, it follows that \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n}=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda ,\ \ \ \ \alpha \\ \beta \end{array}% ;\frac{x}{1-kt}\right] \end{equation*}% where $\left\vert x\right\vert <\frac{1}{k}\min \left\{ 1,1-kt\right\} .\ $Hence, we get the desired result. \end{proof} \begin{theorem} We have the generating relation% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ F_{1,k}\left[ \alpha ,\rho ,\lambda ;\beta ;x,-\frac{kxt}{1-kt}\right] \label{gf2} \end{eqnarray}% where $\left\vert x\right\vert <\frac{1}{k}, \ \left\vert \frac{kxt}{1-kt}\right\vert <\frac{1}{k}.$ \end{theorem} \begin{proof} Consider the identity% \begin{equation} \left( 1-k\left( 1-kx\right) t\right) ^{-\frac{\lambda }{k}}=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\left( 1+\frac{k^{2}xt}{1-kt}\right) ^{-% \frac{\lambda }{k}}.
\label{gf2a} \end{equation}% Under the assumption $\left\vert kt\right\vert <\left\vert 1-kx\right\vert ^{-1}$, we can rewrite \eqref{gf2a} as \begin{equation} \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\left( 1-kx\right) ^{n}t^{n}=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\left( 1+% \frac{k^{2}xt}{1-kt}\right) ^{-\frac{\lambda }{k}}. \label{gf2b} \end{equation}% Multiplying both sides of \eqref{gf2b} by $x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{-\frac{\rho }{k}}$ and applying $_{k}D_{x}^{\alpha -\beta }$, we obtain% \begin{eqnarray*} &&_{k}D_{x}^{\alpha -\beta }\left\{ \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{n-% \frac{\rho }{k}}t^{n}\right\} \\ &=&\ _{k}D_{x}^{\alpha -\beta }\left\{ x^{\frac{\alpha }{k}-1}\left( 1-kt\right) ^{-\frac{\lambda }{k}}\left( 1-kx\right) ^{-\frac{\rho }{k}% }\left( 1+k\frac{kxt}{1-kt}\right) ^{-\frac{\lambda }{k}}\right\} . \end{eqnarray*}% For $\Re(\alpha )>0$, interchanging the order of the summation and the operator $_{k}D_{x}^{\alpha -\beta },\ $we have \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{k}D_{x}^{\alpha -\beta }\left\{ x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{n-\frac{\rho }{k}}\right\} t^{n} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ _{k}D_{x}^{\alpha -\beta }\left\{ \ x^{\frac{\alpha }{k}-1}\left( 1-kx\right) ^{-\frac{\rho }{k}% }\left( 1+k\frac{kxt}{1-kt}\right) ^{-\frac{\lambda }{k}}\right\} .
\end{eqnarray*}% Assuming $\left\vert x\right\vert <\frac{1}{k}$ and $\left\vert \frac{kxt}{1-kt}\right\vert <\frac{1}{k}$, and using \eqref{krl4} and \eqref{krl5}, we arrive at \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n}=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ F_{1,k}\left[ \alpha ,\rho ,\lambda ;\beta ;x,-\frac{kxt}{1-kt}\right] , \end{equation*}% which proves the theorem. \end{proof} \begin{theorem} We have the generating relations \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta -\rho \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{\frac{\alpha +\rho -\beta }{k}}\ \left( 1-kt+k^{2}xt\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha ,\rho \\ \beta \end{array}% ;\frac{x}{1-kt+k^{2}xt}\right] \label{gf3} \\ &&\text{and} \notag \\ &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta \right) _{n,k}\left( \gamma \right) _{n,k}}{\left( \delta \right) _{n,k}n!}\ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n} \notag \\ &=&F_{1,k}\left( \gamma ,\beta -\alpha ,\alpha ;\delta ;t,\left( 1-kx\right) t\right) \label{gf4} \end{eqnarray} \end{theorem} \begin{proof} We use the result of the previous theorem.
Setting $\lambda =\beta -\rho $ in \eqref{gf2}, we find that% \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{\left( \beta -\rho \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n}=\left( 1-kt\right) ^{\frac{\rho -\beta }{k}}\ F_{1,k}\left[ \alpha ,\rho ,\beta -\rho ;\beta ;x,-\frac{kxt}{1-kt}\right] . \end{equation*}% If we use the reduction formula for $F_{1,k}$ given by \eqref{appk16}, we easily obtain the desired result as follows: \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta -\rho \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{\frac{\alpha +\rho -\beta }{k}}\ \left( 1-kt+k^{2}xt\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \alpha ,\rho \\ \beta \end{array}% ;\frac{x}{1-kt+k^{2}xt}\right] . \label{gf3a} \end{eqnarray}% For $\rho =0$, \eqref{gf3a} gives% \begin{equation} \sum\limits_{n=0}^{\infty }\frac{\left( \beta \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n}=\left( 1-kt\right) ^{\frac{\alpha -\beta }{k}}\ \left( 1-kt+k^{2}xt\right) ^{-\frac{\alpha }{k}}. \label{gf4a} \end{equation}% Multiplying both sides of \eqref{gf4a} by $t^{\frac{\gamma }{k}-1}$ and applying $_{k}D_{t}^{\gamma -\delta }$, one can easily obtain% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] _{k}D_{t}^{\gamma -\delta }\left\{ t^{n+\frac{\gamma }{k}% -1}\right\} \notag \\ &=&\ _{k}D_{t}^{\gamma -\delta }\left\{ t^{\frac{\gamma }{k}-1}\left( 1-kt\right) ^{\frac{\alpha -\beta }{k}}\ \left( 1-kt+k^{2}xt\right) ^{-\frac{\alpha }{k}% }\right\} .
\label{gf4b} \end{eqnarray}% In view of \eqref{krl3} and \eqref{krl5} on the right and left side of \eqref{gf4b}, respectively, we arrive at \begin{equation*} \sum\limits_{n=0}^{\infty }\frac{\left( \beta \right) _{n,k}\left( \gamma \right) _{n,k}}{\left( \delta \right) _{n,k} n!}\ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] t^{n}=F_{1,k}\left( \gamma ,\beta -\alpha ,\alpha ;\delta ;t,\left( 1-kx\right) t\right). \end{equation*} \end{proof} \begin{theorem} We have the generating relation% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}F_{2,k}\left( \lambda ,\alpha ,\gamma ;\beta ,\delta ;\frac{x}{1-kt},-\frac{kyt}{1-kt}\right) . \label{gf5} \end{eqnarray} \end{theorem} \begin{proof} Putting $\left( 1-ky\right) t$ in place of $t$ in \eqref{gf1}, we obtain% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \left( 1-ky\right) ^{n}t^{n} \notag \\ &=&\left( 1-k\left( 1-ky\right) t\right) ^{-\frac{\lambda }{k}}\ _{2}F_{1,k}% \left[ \begin{array}{c} \lambda ,\ \ \ \ \alpha \\ \beta \end{array}% ;\frac{x}{1-k\left( 1-ky\right) t}\right] .
\label{gf5a} \end{eqnarray}% Multiplying both sides of \eqref{gf5a} by $y^{\frac{\gamma }{k}-1}$, employing $_{k}D_{y}^{\gamma -\delta }$ and, under the assumption $\Re \left( \gamma \right) >0$, interchanging differentiation and summation, we can write \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{k}D_{y}^{\gamma -\delta }\left\{ y^{\frac{\gamma }{k}-1}\left( 1-ky\right) ^{n}\right\} t^{n} \label{gf5b} \\ &=&\ _{k}D_{y}^{\gamma -\delta }\left\{ y^{\frac{\gamma }{k}-1}\left( 1-k\left( 1-ky\right) t\right) ^{-\frac{\lambda }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda ,\ \ \ \ \alpha \\ \beta \end{array}% ;\tfrac{x}{1-k\left( 1-ky\right) t}\right] \right\} . \notag \end{eqnarray}% Making use of the formula \eqref{krl4}, we can easily simplify the left side of \eqref{gf5b} as follows: \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{k}D_{y}^{\gamma -\delta }\left\{ y^{\frac{\gamma }{k}-1}\left( 1-ky\right) ^{n}\right\} t^{n} \label{gf5c}\\ &=&\tfrac{\Gamma _{k}\left( \gamma \right) }{\Gamma _{k}\left( \delta \right) }y^{\frac{\delta }{k}-1}\sum\limits_{n=0}^{\infty }\tfrac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n}.
\notag \end{eqnarray} For the right side of \eqref{gf5b}, using the definition of $_{2}F_{1,k}$ and the formula \eqref{krl3}, one obtains \begin{eqnarray} &&_{k}D_{y}^{\gamma -\delta }\left\{ y^{\frac{\gamma }{k}-1}\left( 1-k\left( 1-ky\right) t\right) ^{-\frac{\lambda }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda ,\ \ \ \ \alpha \\ \beta \end{array}% ;\frac{x}{1-k\left( 1-ky\right) t}\right] \right\} \notag \\ &=&\frac{\Gamma _{k}\left( \gamma \right) }{\Gamma _{k}\left( \delta \right) }\left( 1-kt\right) ^{-\frac{\lambda }{k}}y^{\frac{\delta }{k}% -1}F_{2,k}\left( \lambda ,\alpha ,\gamma ;\beta ,\delta ;\frac{x}{1-kt},-% \frac{kyt}{1-kt}\right) \label{gf5d} \end{eqnarray}% where $\left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k}, \ \left\vert \frac{x}{1-kt}\right\vert +\left\vert \frac{kyt}{1-kt}\right\vert <\frac{1}{k},\ \left\vert \frac{1-ky}{1-x}t\right\vert <\frac{1}{k}$. Combining the relations \eqref{gf5c} and \eqref{gf5d}, we get the desired result. \end{proof} As a special case of \eqref{gf5}, we give the following theorem.
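Before stating this special case, we note that the intermediate relation \eqref{gf4a} lends itself to a quick numerical sanity check. The sketch below assumes the usual $k$-Pochhammer symbol $\left( a\right) _{n,k}=a\left( a+k\right) \cdots \left( a+\left( n-1\right) k\right) $ and the series definition $_{2}F_{1,k}\left[ a,b;c;x\right] =\sum_{m\geq 0}\frac{\left( a\right) _{m,k}\left( b\right) _{m,k}}{\left( c\right) _{m,k}\,m!}x^{m}$ (which terminates when $a=-nk$); the test point is arbitrary but lies inside the region $\left\vert x\right\vert <\frac{1}{k}$, $\left\vert t\right\vert <\frac{1}{k}$.

```python
from math import factorial

def k_poch(a, n, k):
    """k-Pochhammer symbol (a)_{n,k} = a(a+k)...(a+(n-1)k)."""
    r = 1.0
    for i in range(n):
        r *= a + i * k
    return r

def f21k(a, b, c, x, k, terms=120):
    """Truncated series for 2F1,k(a,b;c;x); terminates exactly when a = -n*k."""
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= (a + m * k) * (b + m * k) * x / ((c + m * k) * (m + 1))
    return s

# arbitrary test point with |x| < 1/k and |t| < 1/k
k, alpha, beta, x, t = 0.5, 0.7, 1.3, 0.2, 0.15

# left side of (gf4a): sum over n of (beta)_{n,k}/n! * 2F1,k(-nk, alpha; beta; x) * t^n
lhs = sum(k_poch(beta, n, k) / factorial(n)
          * f21k(-n * k, alpha, beta, x, k) * t**n
          for n in range(60))
# right side of (gf4a)
rhs = (1 - k * t)**((alpha - beta) / k) * (1 - k * t + k**2 * x * t)**(-alpha / k)
print(abs(lhs - rhs))  # difference at the level of machine precision
```

With $k=1$ this reduces to the classical generating relation for the Gauss hypergeometric polynomials ${}_2F_1(-n,\alpha;\beta;x)$, which the check above generalizes by simple rescaling.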
\begin{theorem} We have the generating relation% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\tfrac{\left( \beta -\rho \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta% \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \gamma \\ \delta% \end{array}% ;y\right] t^{n} \label{gf6} \\ &=&\left( 1-kx\right) ^{-\frac{\alpha }{k}}\left( 1-kt\right) ^{\frac{\rho -\beta }{k}}F_{2,k}\left( \beta -\rho ,\alpha ,\gamma ;\beta ,\delta ;-\tfrac{ x}{\left( 1-kx\right) \left( 1-kt\right) },-\tfrac{kyt}{1-kt}\right) . \notag \end{eqnarray} \end{theorem} \begin{proof} For $\lambda =\beta -\rho $ in \eqref{gf5}, we get \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta -\rho \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \beta -\rho +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \\ &=&\left( 1-kt\right) ^{\frac{\rho -\beta }{k}}F_{2,k}\left( \beta -\rho ,\alpha ,\gamma ;\beta ,\delta ;\frac{x}{1-kt},-\frac{kyt}{1-kt}\right) . \end{eqnarray*}% Using the Euler transformation given by \eqref{Euler} for $_{2}F_{1,k}$,% \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \beta -\rho \right) _{n,k}}{n!}% \left( 1-kx\right) ^{-\frac{\alpha }{k}}\ _{2}F_{1,k}\left[ \begin{array}{c} \rho -nk,\ \ \ \ \alpha \\ \beta \end{array}% ;-\frac{x}{1-kx}\right] \ _{2}F_{1,k}\left[ \begin{array}{c} -nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \\ &=&\left( 1-kt\right) ^{\frac{\rho -\beta }{k}}F_{2,k}\left( \beta -\rho ,\alpha ,\gamma ;\beta ,\delta ;\frac{x}{1-kt},-\frac{kyt}{1-kt}\right) \end{eqnarray*} and putting $-\frac{x}{1-kx}$ in place of $x$, we reach the desired result.
\end{proof} \begin{theorem} We have the generating relation% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\sum\limits_{n=0}^{\infty }% \frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}}{\left( \beta \right) _{n,k}n!}\left( -\frac{kxy}{1-kt}\right) ^{n} \label{gf7} \\ &&\times F_{2,k}\left( \lambda +nk,\alpha +nk,\gamma +nk;\beta +nk,\delta +nk;\frac{x}{1-kt},-\frac{ky}{1-kt}\right) \notag \end{eqnarray} \end{theorem} \begin{proof} Replacing $t$ by $\frac{t}{1-ky}$ in \eqref{gf1} and simplifying, we find that% \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \frac{t^{n}}{\left( 1-ky\right) ^{n+\frac{\lambda }{k}}} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\sum\limits_{n=0}^{\infty }% \frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}}{\left( \beta \right) _{n,k}n!}\left( \frac{x\left( 1-ky\right) }{1-kt}\right) ^{n}\left( 1-\frac{ky}{1-kt}\right) ^{-n-\frac{\lambda }{k}}. \end{eqnarray*}% Using the binomial expansion $\left( x+y\right) ^{n}=\sum\limits_{k_{1}=0}^{n}\left( \begin{array}{c} n \\ k_{1}% \end{array}% \right) x^{k_{1}}y^{n-k_{1}}$ (with summation index $k_{1}$ to avoid confusion with the parameter $k$),% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \frac{t^{n}}{\left( 1-ky\right) ^{n+\frac{\lambda }{k}}} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}} \notag \\ &&\times \sum\limits_{n=0}^{\infty }\sum\limits_{k_{1}=0}^{n}\frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}}{\left( \beta \right) _{n,k}n!}\left(
\begin{array}{c} n \\ k_{1}% \end{array}% \right) \left( -1\right) ^{n-k_{1}}\left( \frac{x}{1-kt}\right) ^{k_{1}}\left( \frac{xky}{1-kt}\right) ^{n-k_{1}}\left( 1-\frac{ky}{1-kt}% \right) ^{-n-\frac{\lambda }{k}} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}} \notag \\ &&\times \sum\limits_{n,k_{1}=0}^{\infty }\frac{\left( \lambda \right) _{n+k_{1},k}\left( \alpha \right) _{n+k_{1},k}}{\left( \beta \right) _{n+k_{1},k}\left( n+k_{1}\right) !}\left( \begin{array}{c} n+k_{1} \\ k_{1}% \end{array}% \right) \left( -1\right) ^{n}\left( \frac{x}{1-kt}\right) ^{k_{1}}\left( \frac{xky}{1-kt}\right) ^{n}\left( 1-\frac{ky}{1-kt}\right) ^{-n-k_{1}-\frac{% \lambda }{k}} \notag \end{eqnarray} \begin{eqnarray} &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}} \notag \\ &&\times \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}}{\left( \beta \right) _{n,k}n!}\left( -% \frac{xky}{1-kt}\right) ^{n}\left( 1-\frac{ky}{1-kt}\right) ^{-n-\frac{% \lambda }{k}} \notag \\ &&\times _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha +nk \\ \beta +nk% \end{array}% ;\frac{\frac{x}{1-kt}}{1-\frac{ky}{1-kt}}\right] \label{gf7a} \end{eqnarray} Multiplying by $y^{\frac{\gamma }{k}-1}$, operating with $_{k}D_{y}^{\gamma -\delta }$ and applying \eqref{krl3}, \eqref{krl4} and \eqref{krl5} to both sides of \eqref{gf7a} (in a similar way to the proof of \eqref{gf5}) for $\left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k},\ \left\vert \frac{x}{1-kt}\right\vert +\left\vert \frac{ky}{1-kt}\right\vert <\frac{1}{k}$, we complete the proof.
\end{proof} \begin{theorem} We have the generating relation% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\sum\limits_{n=0}^{\infty }% \frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}\left( \gamma \right) _{n,k}}{\left( \beta \right) _{n,k}\left( \delta \right) _{n,k}n!}% \left( \frac{k^{3}xyt}{\left( 1-kt\right) ^{2}}\right) ^{n} \label{gf8} \\ &&\times \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha +nk \\ \beta +nk% \end{array}% ;\frac{x}{1-kt}\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma +nk \\ \delta +nk% \end{array}% ;\frac{y}{1-kt}\right] . \notag \end{eqnarray}% For the special case, we have% \begin{eqnarray} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \lambda \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma \\ \lambda \end{array}% ;y\right] t^{n} \notag \\ &=&\left( 1-kt\right) ^{\frac{\gamma +\alpha -\lambda }{k}}\left( 1-kt-kx\right) ^{-\frac{\alpha }{k}}\left( 1-kt-ky\right) ^{-\frac{\gamma }{k% }}\ \notag \\ &&\times _{2}F_{1,k}\left[ \begin{array}{c} \alpha ,\ \ \ \ \gamma \\ \lambda \end{array}% ;\frac{k^{3}xyt}{\left( 1-kt-kx\right) \left( 1-kt-ky\right) }\right] . \label{gf9} \end{eqnarray} \end{theorem} \begin{proof} From the elementary identity, we find that% \begin{equation} \left( \left( 1-kx\right) \left( 1-ky\right) -kt\right) ^{-\frac{\lambda }{k}% }=\left( 1-kt\right) ^{-\frac{\lambda }{k}}\left( \left( 1-\frac{kx}{1-kt}% \right) \left( 1-\frac{ky}{1-kt}\right) -\frac{k^{3}xyt}{\left( 1-kt\right) ^{2}}\right) ^{-\frac{\lambda }{k}}. 
\label{gf8a} \end{equation}% for $\left\vert \frac{kt}{\left( 1-kx\right) \left( 1-ky\right) }\right\vert <\frac{1}{k}$ and $\left\vert \frac{k^{3}xyt}{\left( 1-kt-kx\right) \left( 1-kt-ky\right) }\right\vert <\frac{1}{k}$. Applying \eqref{kpoc5} to \eqref{gf8a}, multiplying by $x^{\frac{\alpha }{k}-1}y^{\frac{\gamma }{k}-1}$ and taking $_{k}D_{x}^{\alpha -\beta }\ _{k}D_{y}^{\gamma -\delta }$ of both sides of \eqref{gf8a}, we have \begin{eqnarray*} &&_{k}D_{x}^{\alpha -\beta }\ _{k}D_{y}^{\gamma -\delta }\left\{ \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}x^{\frac{% \alpha }{k}-1}\left( 1-kx\right) ^{-\frac{\lambda }{k}-n}y^{\frac{\gamma }{k}% -1}\left( 1-ky\right) ^{-\frac{\lambda }{k}-n}t^{n}\right\} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\ \\ &&\times _{k}D_{x}^{\alpha -\beta }\ _{k}D_{y}^{\gamma -\delta }\left\{ \sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}\left( k^{3}t\right) ^{n}}{n!\left( 1-kt\right) ^{2n}}x^{\frac{\alpha }{k}% +n-1}\left( 1-\frac{kx}{1-kt}\right) ^{-\frac{\lambda }{k}-n}y^{\frac{\gamma }{k}+n-1}\left( 1-\frac{ky}{1-kt}\right) ^{-\frac{\lambda }{k}-n}\right\} .
\end{eqnarray*}% Under the conditions $\Re\left( \alpha \right) >0,\ \Re\left( \gamma \right) >0, \ \left\vert x\right\vert <\frac{1}{k},\ \left\vert y\right\vert <\frac{1}{k}, \ \left\vert \frac{x}{1-kt}\right\vert <\frac{1}{k}$ and $% \left\vert \frac{y}{1-kt}\right\vert <\frac{1}{k},\ $directly from the properties \eqref{krl3}, \eqref{krl4}\ and \eqref{krl5}, we can obtain% \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \beta \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma \\ \delta \end{array}% ;y\right] t^{n} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}}\sum\limits_{n=0}^{\infty }% \frac{\left( \lambda \right) _{n,k}\left( \alpha \right) _{n,k}\left( \gamma \right) _{n,k}}{\left( \beta \right) _{n,k}\left( \delta \right) _{n,k}n!}% \left( \frac{k^{3}xyt}{\left( 1-kt\right) ^{2}}\right) ^{n}\ \\ &&\times _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha +nk \\ \beta +nk \end{array} ;\frac{x}{1-kt}\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma +nk \\ \delta +nk \end{array} ;\frac{y}{1-kt}\right] . 
\end{eqnarray*} For the special case $\beta =\delta =\lambda $ in \eqref{gf8}, we have \begin{eqnarray*} &&\sum\limits_{n=0}^{\infty }\frac{\left( \lambda \right) _{n,k}}{n!}\ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \alpha \\ \lambda \end{array}% ;x\right] \ _{2}F_{1,k}\left[ \begin{array}{c} \lambda +nk,\ \ \ \ \gamma \\ \lambda \end{array}% ;y\right] t^{n} \\ &=&\left( 1-kt\right) ^{-\frac{\lambda }{k}} \\ &&\times \sum\limits_{n=0}^{\infty }\frac{\left( \alpha \right) _{n,k}\left( \gamma \right) _{n,k}}{\left( \lambda \right) _{n,k}n!}\left( \frac{k^{3}xyt}{\left( 1-kt\right) ^{2}}\right) ^{n}\left( 1-\frac{kx}{1-kt}% \right) ^{-\frac{\alpha +nk}{k}}\left( 1-\frac{ky}{1-kt}\right) ^{-\frac{% \gamma +nk}{k}} \\ &=&\left( 1-kt\right) ^{\frac{\gamma +\alpha -\lambda }{k}}\left( 1-kt-kx\right) ^{-\frac{\alpha }{k}}\left( 1-kt-ky\right) ^{-\frac{\gamma }{k% }}\ \\ &&\times _{2}F_{1,k}\left[ \begin{array}{c} \alpha ,\ \ \ \ \gamma \\ \lambda \end{array}% ; \frac{k^{3}xyt}{\left( 1-kt-kx\right) \left( 1-kt-ky\right) }\right] . \end{eqnarray*} \end{proof}
\section{Introduction} \label{sec:intro} The Bethe-Salpeter equation (BSE) formalism \cite{Salpeter_1951,Strinati_1988} is to the $GW$ approximation \cite{Hedin_1965,Golze_2019} of many-body perturbation theory (MBPT) \cite{Onida_2002,Martin_2016} what time-dependent density-functional theory (TD-DFT) \cite{Runge_1984,Casida_1995} is to Kohn-Sham density-functional theory (KS-DFT), \cite{Hohenberg_1964,Kohn_1965} an affordable way of computing the neutral (or optical) excitations of a given electronic system. In recent years, it has been shown to be a valuable tool for computational chemists with a large number of systematic benchmark studies on large families of molecular systems appearing in the literature \cite{Boulanger_2014,Jacquemin_2015a,Bruneval_2015,Jacquemin_2015b,Hirose_2015,Jacquemin_2017a,Jacquemin_2017b,Rangel_2017,Krause_2017,Gui_2018,Liu_2020} (see Ref.~\onlinecite{Blase_2018} for a recent review). Qualitatively, taking the optical gap (\textit{i.e.}, the lowest optical excitation energy) as an example, BSE builds on top of a $GW$ calculation by adding up excitonic effects (\textit{i.e.}, the electron-hole binding energy $E_B$) to the $GW$ HOMO-LUMO gap \begin{equation} E_\text{g}^{GW} = \varepsilon_{\text{LUMO}}^{GW} - \varepsilon_{\text{HOMO}}^{GW}, \end{equation} which is itself a corrected version of the Kohn-Sham (KS) gap \begin{equation} E_\text{g}^{\text{KS}} = \varepsilon_{\text{LUMO}}^{\text{KS}} - \varepsilon_{\text{HOMO}}^{\text{KS}} \ll E_\text{g}^{GW} \approx \Eg^\text{fund}, \end{equation} in order to approximate the optical gap \begin{equation} \Eg^\text{opt} = E_1^{N} - E_0^{N} = \Eg^\text{fund} + E_B, \end{equation} where \begin{equation} \label{eq:Egfun} \Eg^\text{fund} = I^N - A^N \end{equation} is the fundamental gap, $I^N = E_0^{N-1} - E_0^N$ and $A^N = E_0^{N} - E_0^{N+1}$ being the ionization potential and the electron affinity of the $N$-electron system, respectively. 
Here, $E_S^{N}$ is the total energy of the $S$th excited state of the $N$-electron system, and $E_0^N$ corresponds to its ground-state energy. Because the excitonic effect corresponds physically to the stabilization implied by the attraction of the excited electron and its hole left behind, we have $\Eg^\text{opt} < \Eg^\text{fund}$. Due to the smaller amount of screening in molecules as compared to solids, a faithful description of excitonic effects is paramount in molecular systems. Most BSE implementations rely on the so-called static approximation, which approximates the dynamical (\textit{i.e.}, frequency-dependent) BSE kernel by its static limit. In complete analogy with the ubiquitous adiabatic approximation in TD-DFT where the exchange-correlation (xc) kernel is made static, one key consequence of the static approximation within BSE is that double (and higher) excitations are completely absent from the BSE spectrum. Indeed, a frequency-dependent kernel has the ability to create additional poles in the response function, which describe states with a multiple-excitation character, and, in particular, double excitations. Although these double excitations are usually experimentally dark (which means that they usually cannot be observed in photo-absorption spectroscopy), these states play, indirectly, a key role in many photochemistry mechanisms, \cite{Boggio-Pasqua_2007} as they strongly mix with the bright singly-excited states leading to the formation of satellite peaks. \cite{Helbig_2011,Elliott_2011} They are particularly important in the faithful description of the ground state of open-shell molecules, \cite{Casida_2005,Romaniello_2009a,Huix-Rotllant_2011,Loos_2020c} and they are, moreover, a real challenge for high-level computational methods. 
\cite{Loos_2018a,Loos_2019,Loos_2020b,Loos_2020c} Double excitations play also a significant role in the correct location of the excited states of polyenes that are closely related to rhodopsin, a biological pigment found in the rods of the retina and involved in the visual transduction. \cite{Olivucci_2010,Robb_2007,Manathunga_2016} In butadiene, for example, while the bright $1 ^1B_u$ state has a clear $\text{HOMO} \rightarrow \text{LUMO}$ single-excitation character, the dark $2 ^1A_g$ state includes a substantial fraction of doubly-excited character from the $\text{HOMO}^2 \rightarrow \text{LUMO}^2$ double excitation (roughly $30\%$), yet with dominant contributions from the $\text{HOMO}-1 \rightarrow \text{LUMO}$ and $\text{HOMO} \rightarrow \text{LUMO}+1$ single excitations. \cite{Maitra_2004,Cave_2004,Saha_2006,Watson_2012,Shu_2017,Barca_2018a,Barca_2018b,Loos_2019} Going beyond the static approximation is difficult and very few groups have been addressing the problem. \cite{Strinati_1988,Rohlfing_2000,Sottile_2003,Myohanen_2008,Ma_2009a,Ma_2009b,Romaniello_2009b,Sangalli_2011,Huix-Rotllant_2011,Sakkinen_2012,Zhang_2013,Rebolini_2016,Olevano_2019,Lettmann_2019} Nonetheless, it is worth mentioning the seminal work of Strinati \titou{(who originally derived the dynamical correction to the BSE)} on core excitons in semiconductors, \cite{Strinati_1982,Strinati_1984,Strinati_1988} in which the dynamical screening effects were taken into account through the dielectric matrix, and where he observed an increase of the binding energy over its value for static screening and a narrowing of the Auger width below its value for a core hole. Following Strinati's footsteps, Rohlfing and coworkers have developed an efficient way of taking into account, thanks to first-order perturbation theory, the dynamical effects via a plasmon-pole approximation combined with the Tamm-Dancoff approximation (TDA). 
\cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b} With such a scheme, they have been able to compute the excited states of biological chromophores, showing that taking into account the electron-hole dynamical screening is important for an accurate description of the lowest $n \rightarrow \pi^*$ excitations. \cite{Ma_2009a,Ma_2009b,Baumeier_2012b} Indeed, studying PYP, retinal and GFP chromophore models, Ma \textit{et al.}~found that \textit{``the influence of dynamical screening on the excitation energies is about $0.1$ eV for the lowest $\pi \rightarrow \pi^*$ transitions, but for the lowest $n \rightarrow \pi^*$ transitions the influence is larger, up to $0.25$ eV.''} \cite{Ma_2009b} A similar conclusion was reached in Ref.~\onlinecite{Ma_2009a}. Zhang \textit{et al.}~have studied the frequency-dependent second-order Bethe-Salpeter kernel and they have observed an appreciable improvement over configuration interaction with singles (CIS), time-dependent Hartree-Fock (TDHF), and adiabatic TD-DFT results. \cite{Zhang_2013} Rebolini and Toulouse have performed a similar investigation in a range-separated context, and they have reported a modest improvement over its static counterpart. \cite{Rebolini_2016,Rebolini_PhD} In these two latter studies, they also followed a (non-self-consistent) perturbative approach within the TDA with a renormalization of the first-order perturbative correction. It is important to note that, although all the studies mentioned above are clearly going beyond the static approximation of BSE, they are not able to recover additional excitations as the perturbative treatment accounts for dynamical effects only on excitations already present in the static limit. However, it does permit one to recover, for transitions with a dominant single-excitation character, additional relaxation effects coming from higher excitations. 
These higher excitations would be explicitly present in the BSE Hamiltonian by ``unfolding'' the dynamical BSE kernel, and one would recover a linear eigenvalue problem with, nonetheless, a much larger dimension. \cite{Loos_2020f} Based on a simple two-level model which permits the dynamical equations to be solved analytically, Romaniello and coworkers \cite{Romaniello_2009b,Sangalli_2011} evidenced that one can genuinely access additional excitations by solving the non-linear, frequency-dependent eigenvalue problem. For this particular system, it was shown that a BSE kernel based on the random-phase approximation (RPA) produces indeed double excitations but also unphysical excitations. \cite{Romaniello_2009b} The appearance of these spurious excitations was attributed to the self-screening problem. \cite{Romaniello_2009a} This was fixed in a follow-up paper by Sangalli \textit{et al.} \cite{Sangalli_2011} thanks to the design of a number-conserving approach based on the folding of the second-RPA Hamiltonian, \cite{Wambach_1988} which includes explicitly both single and double excitations. By computing the polarizability of two unsaturated hydrocarbon chains, \ce{C8H2} and \ce{C4H6}, they showed that their approach produces the correct number of physical excitations. Finally, let us mention efforts to borrow ingredients from BSE in order to go beyond the adiabatic approximation of TD-DFT. For example, Huix-Rotllant and Casida \cite{Casida_2005,Huix-Rotllant_2011} proposed a nonadiabatic correction to the xc kernel using the formalism of superoperators, which includes as a special case the dressed TD-DFT method of Maitra and coworkers, \cite{Maitra_2004,Cave_2004,Elliott_2011,Maitra_2012} where a frequency-dependent kernel is built \textit{a priori} and manually for a particular excitation. 
Following a similar strategy, Romaniello \textit{et al.} \cite{Romaniello_2009b} took advantage of the dynamically-screened Coulomb potential from BSE to obtain a dynamic TD-DFT kernel. In this regard, MBPT provides key insights about what is missing in adiabatic TD-DFT, as discussed in detail by Casida and Huix-Rotllant in Ref.~\onlinecite{Casida_2016}. In the present study, we extend the work of Rohlfing and coworkers \cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b} by proposing a renormalized first-order perturbative correction to the static BSE excitation energies. Importantly, our correction goes beyond the plasmon-pole approximation as the dynamical screening of the Coulomb interaction is computed exactly. In order to assess the accuracy of the present scheme, we report singlet and triplet excitation energies of various natures for small- and medium-size molecules. Our calculations are benchmarked against high-level coupled-cluster (CC) calculations, allowing us to clearly evidence the systematic improvement brought by the dynamical correction. In particular, we found that, although $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ transitions are systematically red-shifted by $0.3$--$0.6$ eV, dynamical effects have a much smaller magnitude for charge transfer (CT) and Rydberg states. Unless otherwise stated, atomic units are used. \section{Theory} \label{sec:theory} In this Section, following Strinati's seminal work, \cite{Strinati_1988} we first discuss in some details the theoretical foundations leading to the dynamical BSE. We present, in a second step, the perturbative implementation of the dynamical correction as compared to the standard static approximation. 
\subsection{General dynamical BSE} The two-body correlation function $L(1,2; 1',2')$ --- a central quantity in the BSE formalism --- relates the variation of the one-body Green's function $G(1,1')$ with respect to an external non-local perturbation $U(2',2)$, \textit{i.e.}, \begin{equation} iL(1,2; 1',2') = \pdv{G(1,1')}{U(2',2)}, \end{equation} where, \textit{e.g.}, $1 \equiv (\mathbf{x}_1 t_1)$ is a space-spin plus time composite variable. The relation between $G$ and the one-body charge density $\rho(1) = -i G(1,1^+)$ provides a direct connection with the density-density susceptibility $\chi(1,2) = L(1,2;1^+,2^+)$ at the core of TD-DFT. (The notation $1^+$ means that the time $t_1$ is taken at $t_1^{+} = t_1 + 0^+$, where $0^+$ is a positive infinitesimal.) The two-body correlation function $L$ satisfies the self-consistent BSE \cite{Strinati_1988} \begin{multline} \label{eq:BSE} L(1,2; 1',2') = L_0(1,2;1',2') \\ + \int d3456 \, L_0(1,4;1',3) \Xi(3,5;4,6) L(6,2;5,2'), \end{multline} where \begin{subequations} \begin{align} \label{eq:L0} iL_0(1, 4; 1', 3) & = G(1, 3)G(4, 1'), \\ \label{eq:L} iL(1,2; 1',2') & = - G_2(1,2;1',2') + G(1,1') G(2,2'), \end{align} \end{subequations} can be expressed as a function of the one- and two-body Green's functions \begin{subequations} \begin{align} \label{eq:G1} G(1,2) & = - i \mel{N}{T [ \Hat{\psi}(1) \Hat{\psi}^{\dagger}(2) ] }{N}, \\ \label{eq:G2} G_2(1,2;1',2') & = - \mel{N}{T [ \Hat{\psi}(1) \Hat{\psi}(2) \Hat{\psi}^{\dagger}(2') \Hat{\psi}^{\dagger}(1') ]}{N}, \end{align} \end{subequations} and \begin{equation} \Xi(3,5;4,6) = i \fdv{[v_\text{H}(3) \delta(3,4) + \Sigma_\text{xc}(3,4)]}{G(6,5)} \end{equation} is the BSE kernel that takes into account the self-consistent variation of the Hartree potential \begin{equation} v_\text{H}(1) = - i \int d2 \, v(1,2) G(2,2^+), \end{equation} [where $\delta$ is Dirac's delta function and $v$ is the bare Coulomb operator] and the xc self-energy $ \Sigma_\text{xc}$ with respect to the 
variation of $G$. In Eqs.~\eqref{eq:G1} and \eqref{eq:G2}, the field operators $\Hat{\psi}(\mathbf{x} t)$ and $\Hat{\psi}^{\dagger}(\mathbf{x}'t')$ remove and add (respectively) an electron to the $N$-electron ground state $\ket{N}$ in space-spin-time positions ($\mathbf{x} t$) and ($\mathbf{x}'t'$), while $T$ is the time-ordering operator. The resolution of the dynamical BSE starts with the expansion of $L_0$ and $L$ [see Eqs.~\eqref{eq:L0} and \eqref{eq:L}] over the complete orthonormalized set of $N$-electron excited states $\ket{N,S}$ (with $\ket{N,0} \equiv \ket{N}$). \cite{Strinati_1988} In the optical limit of instantaneous electron-hole creation and destruction, imposing $t_{2'} = t_2^+$ and $t_{1'} = t_1^+$, and using the relation between the field operators in their time-dependent (Heisenberg) and time-independent (Schr\"{o}dinger) representations, \textit{e.g.}, \begin{equation} \label{Eisenberg} \Hat{\psi}(1) = e^{ i \Hat{H} t_1 } \Hat{\psi}(\mathbf{x}_1) e^{-i \Hat{H} t_1 }, \end{equation} ($\Hat{H}$ being the exact many-body Hamiltonian), one gets \begin{equation} \begin{split} iL(1,2; 1',2') & = \theta(+\tau_{12}) \sum_{S > 0} \chi_S(\mathbf{x}_1,\mathbf{x}_{1'}) \Tilde{\chi}_S(\mathbf{x}_2,\mathbf{x}_{2'}) e^{ - i \Om{S}{} \tau_{12} } \\ & - \theta(-\tau_{12}) \sum_{S > 0} \chi_S(\mathbf{x}_2,\mathbf{x}_{2'}) \Tilde{\chi}_S(\mathbf{x}_1,\mathbf{x}_{1'}) e^{ + i \Om{S}{} \tau_{12} }, \end{split} \end{equation} where $\tau_{12} = t_1 - t_2$, $\theta$ is the Heaviside step function, and \begin{subequations} \begin{align} \chi_S(\mathbf{x}_1,\mathbf{x}_{1'}) & = \mel{N}{T [\Hat{\psi}(\mathbf{x}_1) \Hat{\psi}^{\dagger}(\mathbf{x}_{1'})] }{N,S}, \\ \Tilde{\chi}_S(\mathbf{x}_1,\mathbf{x}_{1'}) & = \mel{N,S}{T [\Hat{\psi}(\mathbf{x}_1) \Hat{\psi}^{\dagger}(\mathbf{x}_{1'})] }{N}. \end{align} \end{subequations} The $\Om{S}{}$'s are the neutral excitation energies of interest (with $\Om{S}{} = E^N_S - E^N_0$). 
Picking up the $e^{+i \Om{S}{} t_2 }$ component of both $L(1,2; 1',2')$ and $L(6,2;5,2')$, simplifying further by $\Tilde{\chi}_S(\mathbf{x}_2,\mathbf{x}_{2'})$ on both sides of the BSE [see Eq.~\eqref{eq:BSE}], we seek the $e^{-i \Om{S}{} t_1 }$ Fourier component associated with the right-hand side of a modified dynamical BSE, which reads \begin{multline} \label{eq:BSE_2} \mel{N}{T [ \Hat{\psi}(\mathbf{x}_1) \Hat{\psi}^{\dagger}(\mathbf{x}_{1}') ] } {N,S} e^{ - i \Om{S}{} t_1 } \theta ( \tau_{12} ) \\ = \int d3456 \, L_0(1,4;1',3) \Xi(3,5;4,6) \\ \times \mel{N}{T [\Hat{\psi}(6) \Hat{\psi}^{\dagger}(5)] }{N,S} \theta [\min(t_5,t_6) - t_2]. \end{multline} For the neutral excitation energies falling in the fundamental gap of the system (\textit{i.e.}, $\Om{S}{} < \Eg^\text{fund}$ due to excitonic effects), $L_0(1,2;1',2')$ cannot contribute to the $e^{-i \Om{S}{} t_1 }$ response term since its lowest excitation energy is precisely the fundamental gap [see Eq.~\eqref{eq:Egfun}]. Consequently, special care has to be taken for high-lying excited states (like core or Rydberg excitations) where additional terms have to be taken into account (see Refs.~\onlinecite{Strinati_1982,Strinati_1984}). Dropping the space/spin variables, the Fourier component with respect to $t_1$ of $L_0(1,4;1',3)$ reads \begin{align} \label{eq:iL0} [iL_0]( \omega_1 ) = \int \frac{d\omega}{2\pi} \; G\qty(\omega - \frac{\omega_1}{2} ) G\qty( {\omega} + \frac{\omega_1}{2} ) e^{ i \omega \tau_{34} } e^{ i \omega_1 t^{34} }, \end{align} with $\tau_{34} = t_3 - t_4$ and $t^{34} = (t_3 + t_4)/2$. We now adopt the Lehmann representation of the one-body Green's function in the quasiparticle approximation, \textit{i.e.}, \begin{equation} \label{eq:G-Lehman} G(\mathbf{x}_1,\mathbf{x}_2 ; \omega) = \sum_p \frac{ \MO{p}(\mathbf{x}_1) \MO{p}^*(\mathbf{x}_2) } { \omega - \e{p} + i \eta \times \text{sgn} (\e{p} - \mu) }, \end{equation} where $\eta$ is a positive infinitesimal and $\mu$ is the chemical potential. 
The $\e{p}$'s in Eq.~\eqref{eq:G-Lehman} are quasiparticle energies (\textit{i.e.}, proper addition/removal energies) and the $\MO{p}(\mathbf{x})$'s are their associated one-body (spin)orbitals. In the following, $i$ and $j$ are occupied orbitals, $a$ and $b$ are unoccupied orbitals, while $p$, $q$, $r$, and $s$ indicate arbitrary orbitals. Projecting the Fourier component $L_0(\mathbf{x}_1,4;\mathbf{x}_{1'},3; \omega_1 = \Om{S}{} )$ onto $\MO{a}^*(\mathbf{x}_1) \MO{i}(\mathbf{x}_{1'})$ yields \begin{multline} \label{eq:iL0bis} \iint d\mathbf{x}_1 d\mathbf{x}_{1'} \, \MO{a}^*(\mathbf{x}_1) \MO{i}(\mathbf{x}_{1'}) L_0(\mathbf{x}_1,4;\mathbf{x}_{1'},3; \Om{S}{}) \\ = \frac{ \MO{a}^*(\mathbf{x}_3) \MO{i}(\mathbf{x}_4) e^{i \Om{S}{} t^{34} }} { \Om{S}{} - ( \e{a} - \e{i} ) + i \eta } \qty[ \theta( \tau_{34} ) e^{i \qty( \e{i} + \frac{\Om{S}{}}{2}) \tau_{34} } + \theta( - \tau_{34} ) e^{i \qty(\e{a} - \frac{\Om{S}{}}{2}) \tau_{34} } ]. \end{multline} More details are provided in Appendix \ref{app:A}. As a final step, we express the terms $\mel{N}{T [\Hat{\psi}(\mathbf{x}_1) \Hat{\psi}^{\dagger}(\mathbf{x}_{1}')] }{N,S}$ and $\mel{N}{T [\Hat{\psi}(6) \Hat{\psi}^{\dagger}(5)] }{N,S}$ from Eq.~\eqref{eq:BSE_2} in the standard electron-hole product (or single-excitation) space. This is done by expanding the field operators over a complete orbital basis of creation/destruction operators. For example, we have (see derivation in Appendix \ref{app:B}) \begin{multline} \label{eq:spectral65} \mel{N}{T [\Hat{\psi}(6) \Hat{\psi}^{\dagger}(5)] }{N,S} \\ = - \qty( e^{ -i \Om{S}{} t^{65} } ) \sum_{pq} \MO{p}(\mathbf{x}_6) \MO{q}^*(\mathbf{x}_5) \mel{N}{\Hat{a}_q^{\dagger} \Hat{a}_p}{N,S} \\ \times \qty[ \theta( \tau_{65} ) e^{- i \qty( \e{p} - \frac{\Om{S}{}}{2} ) \tau_{65} } + \theta( - \tau_{65} ) e^{ - i \qty( \e{q} + \frac{\Om{S}{}}{2}) \tau_{65} } ], \end{multline} with $t^{65} = (t_5 + t_6)/2$ and $\tau_{65} = t_6 -t_5$. 
The $\mel{N}{\Hat{a}_q^{\dagger} \Hat{a}_p}{N,S}$ are the unknown particle-hole amplitudes. \subsection{Dynamical BSE within the $GW$ approximation} Adopting now the $GW$ approximation \cite{Hedin_1965} for the xc self-energy, \textit{i.e.}, \begin{equation} \Sigma_\text{xc}^{GW}(1,2) = i G(1,2) W(1^+,2), \end{equation} leads to the following simplified BSE kernel \begin{equation} \label{eq:Xi_GW} \Xi(3,5;4,6) = v(3,6) \delta(3,4) \delta(5,6) - W(3^+,4) \delta(3,6) \delta(4,5), \end{equation} where $W$ is the dynamically-screened Coulomb operator. The $GW$ quasiparticle energies $\eGW{p}$ are usually good approximations to the removal/addition energies $\e{p}$ introduced in Eq.~\eqref{eq:G-Lehman}. Substituting Eqs.~\eqref{eq:iL0bis}, \eqref{eq:spectral65}, and \eqref{eq:Xi_GW} into Eq.~\eqref{eq:BSE_2}, and projecting onto $\MO{a}^*(\mathbf{x}_1) \MO{i}(\mathbf{x}_{1'})$, one gets, after a few tedious manipulations, the dynamical BSE: \begin{equation} \label{eq:BSE-final} \begin{split} ( \eGW{a} - \eGW{i} - \Om{S}{} ) X_{ia,S} & + \sum_{jb} \qty[ \kappa \ERI{ia}{jb} - \widetilde{W}_{ij,ab}(\Om{S}{}) ] X_{jb,S} \\ & + \sum_{jb} \qty[ \kappa \ERI{ia}{bj} - \widetilde{W}_{ib,aj}(\Om{S}{}) ] Y_{jb,S} = 0, \end{split} \end{equation} with $X_{jb,S} = \mel{N}{\Hat{a}_j^{\dagger} \Hat{a}_b}{N,S}$ and $Y_{jb,S} = \mel{N}{\Hat{a}_b^{\dagger} \Hat{a}_j}{N,S}$, and where $\kappa = 2$ or $0$ for singlet and triplet excited states, respectively. \titou{This equation is identical to the one presented by Rohlfing and coworkers. \cite{Rohlfing_2000,Ma_2009a,Ma_2009b}} Neglecting the anti-resonant terms, $Y_{jb,S}$, in the dynamical BSE, which are (usually) much smaller than their resonant counterparts, $X_{jb,S}$, leads to the well-known TDA.
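In the static limit and within the TDA, the resonant block of Eq.~\eqref{eq:BSE-final} takes a simple matrix form, $A_{ia,jb} = \delta_{ij} \delta_{ab} (\eGW{a} - \eGW{i}) + \kappa \ERI{ia}{jb} - \W{ij,ab}{\text{stat}}$ [the zeroth-order problem of Eq.~\eqref{eq:BSE-A0}]. The snippet below assembles it for toy inputs; all array shapes, names, and numbers are hypothetical, and the screened integrals are assumed precomputed.

```python
import numpy as np

def bse_tda_static(eps_occ, eps_vir, eri_iajb, W_ijab, kappa=2):
    """Static-TDA BSE Hamiltonian (sketch of the resonant block):
    A_{ia,jb} = d_ij d_ab (e_a - e_i) + kappa*(ia|jb) - W_{ij,ab}.
    eri_iajb[i,a,j,b] = (ia|jb); W_ijab[i,j,a,b] = W_{ij,ab} (precomputed)."""
    O, V = len(eps_occ), len(eps_vir)
    # Reorder W_{ij,ab} -> [i,a,j,b] so both kernels share the (ia,jb) layout.
    A = (kappa * eri_iajb - W_ijab.transpose(0, 2, 1, 3)).reshape(O * V, O * V)
    # Diagonal quasiparticle energy differences e_a - e_i.
    A += np.diag(np.add.outer(-np.asarray(eps_occ), np.asarray(eps_vir)).ravel())
    return A

# Sanity check with all interactions switched off: the excitation energies
# collapse onto the bare quasiparticle energy differences e_a - e_i.
O, V = 1, 2
A = bse_tda_static([-1.0], [1.0, 2.0], np.zeros((O, V, O, V)), np.zeros((O, O, V, V)))
om = np.linalg.eigvalsh(A)
```

Diagonalizing this $O V \times O V$ matrix yields the static TDA excitation energies that serve as the zeroth-order reference of the perturbative treatment.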
In Eq.~\eqref{eq:BSE-final}, \begin{equation} \ERI{pq}{rs} = \iint d\mathbf{r} d\mathbf{r}' \, \MO{p}(\mathbf{r}) \MO{q}(\mathbf{r}) v(\mathbf{r} -\mathbf{r}') \MO{r}(\mathbf{r}') \MO{s}(\mathbf{r}'), \end{equation} are the bare two-electron integrals in the (real-valued) spatial orbital basis $\lbrace \MO{p}(\mathbf{r}{}) \rbrace$, and \begin{multline} \label{eq:wtilde} \widetilde{W}_{pq,rs}(\Om{S}{}) = \frac{ i }{ 2 \pi} \int d\omega \; e^{-i \omega 0^+ } W_{pq,rs}(\omega) \\ \times \qty[ \frac{1}{ \Om{ps}{S} - \omega + i \eta } + \frac{1}{ \Om{qr}{S} + \omega + i\eta } ], \end{multline} is an effective dynamically-screened Coulomb potential, \cite{Romaniello_2009b} where $\Om{pq}{S} = \Om{S}{} - ( \eGW{q} - \eGW{p} )$ and \begin{equation} W_{pq,rs}({\omega}) = \iint d\mathbf{r} d\mathbf{r}' \, \MO{p}(\mathbf{r}) \MO{q}(\mathbf{r}) W(\mathbf{r} ,\mathbf{r}'; \omega) \MO{r}(\mathbf{r}') \MO{s}(\mathbf{r}'). \end{equation} \subsection{Dynamical screening} \label{sec:dynW} In the present study, we consider the exact spectral representation of $W$ at the RPA level, \titou{consistent with the underlying $GW$ calculation}: \begin{multline} \label{eq:W-RPA} W_{ij,ab}(\omega) = \ERI{ij}{ab} + 2 \sum_m \sERI{ij}{m} \sERI{ab}{m} \\ \times \qty[ \frac{1}{ \omega-\Om{m}{\text{RPA}} + i\eta } - \frac{1}{ \omega + \Om{m}{\text{RPA}} - i\eta } ], \end{multline} where $m$ labels single excitations, and \begin{equation} \label{eq:sERI} \sERI{pq}{m} = \sum_{ia} \ERI{pq}{ia} (\bX{m}{\text{RPA}} + \bY{m}{\text{RPA}})_{ia} \end{equation} are the spectral weights.
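A toy numerical sketch of Eqs.~\eqref{eq:W-RPA} and \eqref{eq:sERI} for a single matrix element (one RPA pole; the integrals, weights, and pole energy are hypothetical numbers): at $\omega = 0$ the spectral form must reproduce the static expression $\ERI{ij}{ab} - 4 \sum_m \sERI{ij}{m} \sERI{ab}{m} / \Om{m}{\text{RPA}}$ up to terms of order $\eta^2$.

```python
import numpy as np

def screened_coulomb(omega, eri_ijab, s_ij, s_ab, om_rpa, eta=1e-3):
    """RPA spectral representation of W_{ij,ab}(w), Eq. (W-RPA), for a single
    (ij,ab) element. s_ij, s_ab: spectral weights [ij|m] and [ab|m];
    om_rpa: RPA excitation energies O_m (arrays over the pole index m)."""
    poles = 1.0 / (omega - om_rpa + 1j * eta) - 1.0 / (omega + om_rpa - 1j * eta)
    return eri_ijab + 2.0 * np.sum(s_ij * s_ab * poles)

def spectral_weights(eri_pqia, XpY):
    """Eq. (sERI): [pq|m] = sum_ia (pq|ia) (X_m + Y_m)_ia, via einsum.
    eri_pqia: (O,V) integrals for a fixed pq; XpY: (M,O,V) RPA vectors."""
    return np.einsum('ia,mia->m', eri_pqia, XpY)

# One-pole toy model: (ij|ab) = 1, [ij|m] = [ab|m] = 0.5, O_m = 2.
w0 = screened_coulomb(0.0, 1.0, np.array([0.5]), np.array([0.5]), np.array([2.0]))
# Static value: 1 - 4 * 0.5 * 0.5 / 2 = 0.5, recovered at omega = 0.
s = spectral_weights(np.array([[1.0, 2.0]]), np.array([[[0.5, 0.5]]]))
```

In a real implementation the weights would be formed for all $pq$ pairs at once with a larger `einsum` contraction over the full integral tensor.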
In Eqs.~\eqref{eq:W-RPA} and \eqref{eq:sERI}, $\OmRPA{m}{}$ and $(\bX{m}{\text{RPA}} + \bY{m}{\text{RPA}})$ are RPA neutral excitations and their corresponding transition vectors computed by solving the (static) linear response problem \begin{equation} \label{eq:LR-RPA} \begin{pmatrix} \bA{\text{RPA}} & \bB{\text{RPA}} \\ -\bB{\text{RPA}} & -\bA{\text{RPA}} \\ \end{pmatrix} \cdot \begin{pmatrix} \bX{m}{\text{RPA}} \\ \bY{m}{\text{RPA}} \\ \end{pmatrix} = \OmRPA{m} \begin{pmatrix} \bX{m}{\text{RPA}} \\ \bY{m}{\text{RPA}} \\ \end{pmatrix}, \end{equation} with \begin{subequations} \begin{align} \label{eq:LR_RPA-A} \A{ia,jb}{\text{RPA}} & = \delta_{ij} \delta_{ab} (\e{a} - \e{i}) + 2 \ERI{ia}{jb}, \\ \label{eq:LR_RPA-B} \B{ia,jb}{\text{RPA}} & = 2 \ERI{ia}{bj}, \end{align} \end{subequations} where the $\e{p}$'s are taken as the HF orbital energies in the case of $G_0W_0$ \cite{Hybertsen_1985a, Hybertsen_1986} or as the $GW$ quasiparticle energies in the case of self-consistent schemes such as ev$GW$. \cite{Hybertsen_1986,Shishkin_2007,Blase_2011,Faber_2011,Rangel_2016,Kaplan_2016,Gui_2018} The RPA matrices $\bA{\text{RPA}}$ and $\bB{\text{RPA}}$ in Eq.~\eqref{eq:LR-RPA} are of size $O V \times O V$, where $O$ and $V$ are the number of occupied and virtual orbitals (\textit{i.e.}, $N_\text{orb} = O + V$ is the total number of spatial orbitals), respectively, and $\bX{m}{\text{RPA}}$, and $\bY{m}{\text{RPA}}$ are (eigen)vectors of length $O V$. The analysis of the poles of the integrand in Eq.~\eqref{eq:wtilde} yields \begin{multline} \widetilde{W}_{ij,ab}( \Om{S}{} ) = \ERI{ij}{ab} + 2 \sum_m \sERI{ij}{m} \sERI{ab}{m} \\ \times \qty[ \frac{1}{\Om{ib}{S} - \Om{m}{\text{RPA}} + i\eta} + \frac{1}{\Om{ja}{S} - \Om{m}{\text{RPA}} + i\eta} ]. 
\end{multline} One can verify that, in the static limit where $\Om{m}{\text{RPA}} \to \infty$, the matrix elements $\widetilde{W}_{ij,ab}$ correctly reduce to their static expression \begin{equation} \label{eq:Wstat} \W{ij,ab}{\text{stat}} \equiv W_{ij,ab}(\omega = 0) = \ERI{ij}{ab} - 4 \sum_m \frac{\sERI{ij}{m} \sERI{ab}{m}}{\OmRPA{m}{} }, \end{equation} evidencing that the standard static BSE problem is recovered from the present dynamical formalism in this limit. Due to excitonic effects, the lowest BSE excitation energy, $\Om{1}{}$, lies below the lowest RPA excitation energy, $\Om{1}{\text{RPA}}$, so that $\Om{ib}{S} - \Om{m}{\text{RPA}} < 0$ and $\widetilde{W}_{ij,ab}(\Om{S}{})$ has no resonances. This property holds for low-lying excitations, but special care must be taken for higher ones. Furthermore, $\Om{ib}{S}$ and $\Om{ja}{S}$ are necessarily negative quantities for in-gap low-lying BSE excitations. Thus, we have $\abs*{\Om{ib}{S} - \Om{m}{\text{RPA}}} > \Om{m}{\text{RPA}}$. As a consequence, we observe a reduction of the electron-hole screening, \textit{i.e.}, an enhancement of the electron-hole binding energy, as compared to the standard static BSE, and consequently smaller (red-shifted) excitation energies. This will be numerically illustrated in Sec.~\ref{sec:resdis}. \subsection{Dynamical Tamm-Dancoff approximation} The analysis of the (off-diagonal) screened Coulomb potential matrix elements multiplying the $Y_{jb,S}$ coefficients in Eq.~\eqref{eq:BSE-final}, \textit{i.e.}, \begin{multline} \label{eq:W-Y} \widetilde{W}_{ib,aj}(\Om{S}{}) = \ERI{ib}{aj} + 2 \sum_m \sERI{ib}{m} \sERI{aj}{m} \\ \times \qty[ \frac{1}{\Om{ij}{S} - \Om{m}{\text{RPA}} + i\eta} + \frac{1}{\Om{ba}{S} - \Om{m}{\text{RPA}} + i\eta} ], \end{multline} reveals strong divergences even for low-lying excitations when, for example, $\Om{ba}{S} - \Om{m}{\text{RPA}} = \Om{S}{} - \Om{m}{\text{RPA}} - ( \eGW{a} - \eGW{b} ) \approx 0$.
Such divergences may explain why, in previous studies, dynamical effects were only accounted for at the TDA level. \cite{Strinati_1988,Rohlfing_2000,Ma_2009a,Ma_2009b,Romaniello_2009b,Sangalli_2011,Zhang_2013,Rebolini_2016} To avoid confusion here, enforcing the TDA for the dynamical correction (which corresponds to neglecting the dynamical correction originating from the anti-resonant part of the BSE Hamiltonian) will be labeled as dTDA in the following. Going beyond the dTDA is outside the scope of the present study but shall be addressed eventually. \subsection{Perturbative dynamical correction} From a more practical point of view, Eq.~\eqref{eq:BSE-final} can be recast as a non-linear eigenvalue problem and, to compute the BSE excitation energies of a closed-shell system, one must solve the following dynamical (\textit{i.e.}, frequency-dependent) response problem \cite{Strinati_1988} \begin{equation} \label{eq:LR-dyn} \begin{pmatrix} \bA{}(\Om{S}{}) & \bB{}(\Om{S}{}) \\ -\bB{}(-\Om{S}{}) & -\bA{}(-\Om{S}{}) \\ \end{pmatrix} \cdot \begin{pmatrix} \bX{S}{} \\ \bY{S}{} \\ \end{pmatrix} = \Om{S}{} \begin{pmatrix} \bX{S}{} \\ \bY{S}{} \\ \end{pmatrix}, \end{equation} where the dynamical matrices $\bA{}$ and $\bB{}$ have the same $O V \times O V$ size as their RPA counterparts, and we assume real quantities from here on. The same comment applies to the eigenvectors $\bX{S}{}$ and $\bY{S}{}$ of length $O V$. Note that, due to its non-linear nature, Eq.~\eqref{eq:LR-dyn} may provide more than one solution for each value of $S$. \cite{Romaniello_2009b,Sangalli_2011,Martin_2016} According to Eq.~\eqref{eq:BSE-final}, the BSE matrix elements in Eq.~\eqref{eq:LR-dyn} read \begin{subequations} \begin{align} \label{eq:BSE-Adyn} \A{ia,jb}{}(\Om{S}{}) & = \delta_{ij} \delta_{ab} (\eGW{a} - \eGW{i}) + \kappa \ERI{ia}{jb} - \tW{ij,ab}{}(\Om{S}{}), \\ \label{eq:BSE-Bdyn} \B{ia,jb}{}(\Om{S}{}) & = \kappa \ERI{ia}{bj} - \tW{ib,aj}{}(\Om{S}{}).
\end{align} \end{subequations} Now, let us decompose, using basic Rayleigh-Schr\"odinger perturbation theory, the non-linear eigenproblem \eqref{eq:LR-dyn} as a zeroth-order static (hence linear) reference and a first-order dynamic (hence non-linear) perturbation, such that \begin{multline} \label{eq:LR-PT} \begin{pmatrix} \bA{}(\Om{S}{}) & \bB{}(\Om{S}{}) \\ -\bB{}(-\Om{S}{}) & -\bA{}(-\Om{S}{}) \\ \end{pmatrix} \\ = \begin{pmatrix} \bA{(0)} & \bB{(0)} \\ -\bB{(0)} & -\bA{(0)} \\ \end{pmatrix} + \begin{pmatrix} \bA{(1)}(\Om{S}{}) & \bB{(1)}(\Om{S}{}) \\ -\bB{(1)}(-\Om{S}{}) & -\bA{(1)}(-\Om{S}{}) \\ \end{pmatrix}, \end{multline} with \begin{subequations} \begin{align} \label{eq:BSE-A0} \A{ia,jb}{(0)} & = \delta_{ij} \delta_{ab} (\eGW{a} - \eGW{i}) + \kappa \ERI{ia}{jb} - \W{ij,ab}{\text{stat}}, \\ \label{eq:BSE-B0} \B{ia,jb}{(0)} & = \kappa \ERI{ia}{bj} - \W{ib,aj}{\text{stat}}. \end{align} \end{subequations} and \begin{subequations} \begin{align} \label{eq:BSE-A1} \A{ia,jb}{(1)}(\Om{S}{}) & = - \tW{ij,ab}{}(\Om{S}{}) + \W{ij,ab}{\text{stat}}, \\ \label{eq:BSE-B1} \B{ia,jb}{(1)}(\Om{S}{}) & = - \tW{ib,aj}{}(\Om{S}{}) + \W{ib,aj}{\text{stat}}. \end{align} \end{subequations} According to perturbation theory, the $S$th BSE excitation energy and its corresponding eigenvector can then be expanded as \begin{subequations} \begin{gather} \Om{S}{} = \Om{S}{(0)} + \Om{S}{(1)} + \ldots, \\ \begin{pmatrix} \bX{S}{} \\ \bY{S}{} \\ \end{pmatrix} = \begin{pmatrix} \bX{S}{(0)} \\ \bY{S}{(0)} \\ \end{pmatrix} + \begin{pmatrix} \bX{S}{(1)} \\ \bY{S}{(1)} \\ \end{pmatrix} + \ldots. 
\end{gather} \end{subequations} Solving the zeroth-order static problem \begin{equation} \label{eq:LR-BSE-stat} \begin{pmatrix} \bA{(0)} & \bB{(0)} \\ -\bB{(0)} & -\bA{(0)} \\ \end{pmatrix} \cdot \begin{pmatrix} \bX{S}{(0)} \\ \bY{S}{(0)} \\ \end{pmatrix} = \Om{S}{(0)} \begin{pmatrix} \bX{S}{(0)} \\ \bY{S}{(0)} \\ \end{pmatrix}, \end{equation} yields the zeroth-order (static) $\Om{S}{(0)}$ excitation energies and their corresponding eigenvectors $\bX{S}{(0)}$ and $\bY{S}{(0)}$. Thanks to first-order perturbation theory, the first-order correction to the $S$th excitation energy is \begin{equation} \label{eq:Om1} \Om{S}{(1)} = \T{\begin{pmatrix} \bX{S}{(0)} \\ \bY{S}{(0)} \\ \end{pmatrix}} \cdot \begin{pmatrix} \bA{(1)}(\Om{S}{(0)}) & \bB{(1)}(\Om{S}{(0)}) \\ -\bB{(1)}(-\Om{S}{(0)}) & -\bA{(1)}(-\Om{S}{(0)}) \\ \end{pmatrix} \cdot \begin{pmatrix} \bX{S}{(0)} \\ \bY{S}{(0)} \\ \end{pmatrix}. \end{equation} From a practical point of view, if one enforces the dTDA, one obtains the very simple expression \begin{equation} \label{eq:Om1-TDA} \Om{S}{(1)} = \T{(\bX{S}{(0)})} \cdot \bA{(1)}(\Om{S}{(0)}) \cdot \bX{S}{(0)}. \end{equation} This correction can be renormalized by computing, at basically no extra cost, the renormalization factor, which reads, in the dTDA, \begin{equation} \label{eq:Z} Z_{S} = \qty[ 1 - \T{(\bX{S}{(0)})} \cdot \left. \pdv{\bA{(1)}(\Om{S}{})}{\Om{S}{}} \right|_{\Om{S}{} = \Om{S}{(0)}} \cdot \bX{S}{(0)} ]^{-1}. \end{equation} This finally yields \begin{equation} \Om{S}{\text{dyn}} = \Om{S}{\text{stat}} + \Delta\Om{S}{\text{dyn}} = \Om{S}{(0)} + Z_{S} \Om{S}{(1)}, \end{equation} with $\Om{S}{\text{stat}} \equiv \Om{S}{(0)}$ and $\Delta\Om{S}{\text{dyn}} = Z_{S} \Om{S}{(1)}$. This is our final expression. \titou{As mentioned in Sec.~\ref{sec:intro}, the present perturbative scheme does not give access to double excitations, as only excitations calculated within the static approach can be dynamically corrected.
We hope to report a genuine dynamical treatment of the BSE in a forthcoming work.} In terms of computational cost, if one decides to compute the dynamical correction of the $M$ lowest excitation energies, one must perform, first, a conventional (static) BSE calculation and extract the $M$ lowest eigenvalues and their corresponding eigenvectors [see Eq.~\eqref{eq:LR-BSE-stat}]. These are then used to compute the first-order correction from Eq.~\eqref{eq:Om1-TDA}, which also requires constructing and evaluating the dynamical part of the BSE Hamiltonian for each excitation one wants to dynamically correct. The static BSE Hamiltonian is computed once during the static BSE calculation and does not depend on the targeted excitation. Searching iteratively for the lowest eigenstates, via Davidson's algorithm for instance, can be performed at $\order*{N_\text{orb}^4}$ computational cost. Constructing the static and dynamic BSE Hamiltonians is much more expensive as it requires the complete diagonalization of the $(O V \times O V)$ RPA linear response matrix [see Eq.~\eqref{eq:LR-RPA}], which corresponds to a $\order*{O^3 V^3} = \order*{N_\text{orb}^6}$ computational cost. Although it might be reduced to $\order*{N_\text{orb}^4}$ operations with standard resolution-of-the-identity techniques, \cite{Duchemin_2019,Duchemin_2020} this step is the computational bottleneck in the current implementation. \section{Computational details} \label{sec:compdet} All systems under investigation have a closed-shell singlet ground state. We therefore adopt a restricted formalism throughout this work. The $GW$ calculations, performed to obtain the screened Coulomb operator and the quasiparticle energies, use a (restricted) HF starting point. Perturbative $GW$ (or {\GOWO}) \cite{Hybertsen_1985a,Hybertsen_1986,vanSetten_2013} quasiparticle energies are employed as starting points to compute the BSE neutral excitations.
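The perturbative recipe of the previous section [solve Eq.~\eqref{eq:LR-BSE-stat}, then apply Eqs.~\eqref{eq:Om1-TDA} and \eqref{eq:Z}] can be condensed into a short sketch; the one-dimensional $\bA{(1)}$ and the central finite-difference evaluation of the derivative in $Z_S$ are illustrative assumptions of this sketch, not of the actual implementation.

```python
import numpy as np

def dtda_correction(X0, om0, A1, h=1e-4):
    """Renormalized first-order dynamical correction in the dTDA,
    Eqs. (Om1-TDA) and (Z): Om_dyn = Om0 + Z * X0^T A1(Om0) X0.
    A1(om) returns the (OV x OV) first-order matrix; the derivative in Z
    is taken by central finite differences (an assumption of this sketch)."""
    om1 = X0 @ A1(om0) @ X0                       # Eq. (Om1-TDA)
    dA1 = (A1(om0 + h) - A1(om0 - h)) / (2.0 * h)
    Z = 1.0 / (1.0 - X0 @ dA1 @ X0)               # Eq. (Z)
    return om0 + Z * om1, Z

# Hypothetical one-dimensional toy: A1(om) = 0.1*om, normalized X0, Om0 = 2.
omega_dyn, Z = dtda_correction(np.array([1.0]), 2.0,
                               lambda om: np.array([[0.1 * om]]))
```

For this linear toy model the exact corrected energy is $2/(1 - 0.1) = 20/9$, which the renormalized first-order formula reproduces.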
These quasiparticle energies are obtained by linearizing the frequency-dependent quasiparticle equation, and the entire set of orbitals is corrected. Further details about our implementation of {\GOWO} can be found in Refs.~\onlinecite{Loos_2018b,Veril_2018}. Note that, for the present (small) molecular systems, {\GOWO}@HF and ev$GW$@HF yield similar quasiparticle energies and fundamental gaps. Moreover, {\GOWO} allows one to avoid rather laborious iterations as well as the significant additional computational effort of ev$GW$. In the present study, the zeroth-order Hamiltonian [see Eq.~\eqref{eq:LR-PT}] is always the ``full'' BSE static Hamiltonian, \textit{i.e.}, without TDA. The dynamical correction, however, is computed in the dTDA throughout. As one-electron basis sets, we employ the Dunning families cc-pVXZ and aug-cc-pVXZ (X = D, T, and Q) defined with Cartesian Gaussian functions. Finally, the infinitesimal $\eta$ is set to $100$ meV for all calculations. It is important to mention that the small molecular systems considered here are particularly challenging for the BSE formalism, \cite{Hirose_2015,Loos_2018b} which is known to work best for larger systems where the amount of screening is larger. \cite{Jacquemin_2017b,Rangel_2017} For comparison purposes, we employ the theoretical best estimates (TBEs) and geometries of Refs.~\onlinecite{Loos_2018a,Loos_2019,Loos_2020b} from which CIS(D), \cite{Head-Gordon_1994,Head-Gordon_1995} ADC(2), \cite{Trofimov_1997,Dreuw_2015} CC2, \cite{Christiansen_1995a} CCSD, \cite{Purvis_1982} and CC3 \cite{Christiansen_1995b} excitation energies are also extracted. Various statistical quantities are reported in the following: the mean signed error (MSE), mean absolute error (MAE), root-mean-square error (RMSE), and the maximum positive [Max($+$)] and maximum negative [Max($-$)] errors.
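These statistical quantities can be obtained with a small helper (a sketch; the sample energies below are made-up numbers, not data from the tables), where signed errors are taken as computed minus reference values.

```python
import numpy as np

def error_stats(computed, reference):
    """MSE, MAE, RMSE, Max(+), and Max(-) of the signed errors
    err = computed - reference (hypothetical helper for illustration)."""
    err = np.asarray(computed) - np.asarray(reference)
    return {'MSE': err.mean(), 'MAE': np.abs(err).mean(),
            'RMSE': np.sqrt((err ** 2).mean()),
            'Max(+)': err.max(), 'Max(-)': err.min()}

# Made-up excitation energies (eV) against made-up reference values.
stats = error_stats([7.5, 9.2, 10.0], [7.0, 9.5, 9.9])
```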
All the static and dynamic BSE calculations have been performed with the software \texttt{QuAcK}, \cite{QuAcK} freely available on \texttt{github}, where the present perturbative correction has been implemented. \section{Results and Discussion} \label{sec:resdis} \begin{squeezetable} \begin{table*} \caption{ Singlet and triplet excitation energies (in eV) of \ce{N2} computed at the BSE@{\GOWO}@HF level for various basis sets. \label{tab:N2} } \begin{ruledtabular} \begin{tabular}{lcddddddddd} & & \multicolumn{3}{c}{cc-pVDZ ($E_\text{g}^{GW} = 20.71$ eV)} & \multicolumn{3}{c}{cc-pVTZ ($E_\text{g}^{GW} = 20.21$ eV)} & \multicolumn{3}{c}{cc-pVQZ ($E_\text{g}^{GW} = 20.05$ eV)} \\ \cline{3-5} \cline{6-8} \cline{9-11} State & Nature & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} \\ \hline $^1\Pi_g(n \rightarrow \pi^*)$ & Val. & 9.90 & 9.58 & -0.32 & 9.92 & 9.53 & -0.40 & 10.01 & 9.59 & -0.42 \\ $^1\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & 9.70 & 9.37 & -0.33 & 9.61 & 9.19 & -0.42 & 9.69 & 9.25 & -0.44 \\ $^1\Delta_u(\pi \rightarrow \pi^*)$ & Val. & 10.37 & 10.05 & -0.31 & 10.27 & 9.88 & -0.39 & 10.34 & 9.93 & -0.41 \\ $^1\Sigma_g^+$ & Ryd. & 15.67 & 15.50 & -0.17 & 15.04 & 14.84 & -0.21 & 14.72 & 14.43 & -0.21 \\ $^1\Pi_u$ & Ryd. & 15.00 & 14.79 & -0.21 & 14.75 & 14.48 & -0.27 & 14.80 & 14.59 & -0.29 \\ $^1\Sigma_u^+$ & Ryd. & 22.88\footnotemark[1] & 22.73 & -0.15 & 19.03 & 18.95 & -0.08 & 16.78 & 16.71 & -0.06 \\ $^1\Pi_u$ & Ryd. & 23.62\footnotemark[1] & 23.51 & -0.11 & 19.15 & 19.04 & -0.11 & 16.93 & 16.85 & -0.09 \\ \\ $^3\Sigma_u^+(\pi \rightarrow \pi^*)$ & Val. & 7.39 & 6.91 & -0.48 & 7.46 & 6.87 & -0.59 & 7.59 & 6.97 & -0.62 \\ $^3\Pi_g(n \rightarrow \pi^*)$ & Val. 
& 8.07 & 7.65 & -0.42 & 8.14 & 7.62 & -0.52 & 8.24 & 7.70 & -0.54 \\ $^3\Delta_u(\pi \rightarrow \pi^*)$ & Val. & 8.56 & 8.15 & -0.41 & 8.52 & 8.00 & -0.52 & 8.62 & 8.07 & -0.55 \\ $^3\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & 9.70 & 9.37 & -0.33 & 9.61 & 9.19 & -0.42 & 9.69 & 9.25 & -0.44 \\ \hline \\ & & \multicolumn{3}{c}{aug-cc-pVDZ ($E_\text{g}^{GW} = 19.49$ eV)} & \multicolumn{3}{c}{aug-cc-pVTZ ($E_\text{g}^{GW} = 19.20$ eV)} & \multicolumn{3}{c}{aug-cc-pVQZ ($E_\text{g}^{GW} = 19.00$ eV)} \\ \cline{3-5} \cline{6-8} \cline{9-11} State & Nature & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} \\ \hline $^1\Pi_g(n \rightarrow \pi^*)$ & Val. & 10.18 & 9.77 & -0.41 & 10.42 & 9.99 & -0.42 & 10.52 & 10.09 & -0.43 \\ $^1\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & 9.95 & 9.51 & -0.44 & 10.11 & 9.66 & -0.45 & 10.20 & 9.75 & -0.45 \\ $^1\Delta_u(\pi \rightarrow \pi^*)$ & Val. & 10.57 & 10.16 & -0.41 & 10.75 & 10.33 & -0.42 & 10.85 & 10.42 & -0.42 \\ $^1\Sigma_g^+$ & Ryd. & 13.72 & 13.68 & -0.04 & 13.60 & 13.57 & -0.03 & 13.54 & 13.52 & -0.02 \\ $^1\Pi_u$ & Ryd. & 14.07 & 14.02 & -0.05 & 13.98 & 13.94 & -0.04 & 13.96 & 13.93 & -0.03 \\ $^1\Sigma_u^+$ & Ryd. & 13.80 & 13.72 & -0.08 & 13.98 & 13.91 & -0.07 & 14.08 & 14.03 & -0.06 \\ $^1\Pi_u$ & Ryd. & 14.22 & 14.19 & -0.04 & 14.24 & 14.21 & -0.03 & 14.26 & 14.23 & -0.03 \\ \\ $^3\Sigma_u^+(\pi \rightarrow \pi^*)$ & Val. & 7.75 & 7.12 & -0.63 & 8.02 & 7.38 & -0.64 & 8.12 & 7.48 & -0.64 \\ $^3\Pi_g(n \rightarrow \pi^*)$ & Val. & 8.42 & 7.88 & -0.54 & 8.66 & 8.10 & -0.56 & 8.75 & 8.20 & -0.56 \\ $^3\Delta_u(\pi \rightarrow \pi^*)$ & Val. & 8.86 & 8.32 & -0.54 & 9.04 & 8.48 & -0.56 & 9.14 & 8.57 & -0.56 \\ $^3\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. 
& 9.95 & 9.51 & -0.44 & 10.11 & 9.66 & -0.45 & 10.20 & 9.75 & -0.45 \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Excitation energy larger than the fundamental gap.} \end{table*} \end{squeezetable} First, we investigate the basis set dependence of the dynamical correction. The singlet and triplet excitation energies of the nitrogen molecule \ce{N2} computed at the BSE@{\GOWO}@HF level for the cc-pVXZ and aug-cc-pVXZ families of basis sets are reported in Table \ref{tab:N2}, where we also report the $GW$ gap, $E_\text{g}^{GW}$, to show that corrected transitions are usually well below this gap. The \ce{N2} molecule is a very convenient example for this kind of study as it contains $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ valence excitations as well as Rydberg transitions. As we shall further illustrate below, the magnitude of the dynamical correction is characteristic of the type of transitions. One key result of the present investigation is that the dynamical correction is quite basis set insensitive with a maximum variation of $0.03$ eV between aug-cc-pVDZ and aug-cc-pVQZ. It is only for the smallest basis set (cc-pVDZ) that one can observe significant differences. \titou{We note further that due to its unbound LUMO, the $GW$ gap of \ce{N2}, and to a lesser extent its BSE excitation energies, are very sensitive to the presence of diffuse orbitals. However, the dynamical correction is again very stable, being insensitive to the presence of diffuse orbitals (at least for the lowest optical excitations).} We can then safely conclude that the dynamical correction converges rapidly with respect to the size of the one-electron basis set, a triple-$\zeta$ or an augmented double-$\zeta$ basis being enough to obtain near complete basis set limit values. This is quite a nice feature as it means that one does not need to compute the dynamical correction in a very large basis to get a meaningful estimate of its magnitude. 
\begin{squeezetable} \begin{table*} \caption{ Singlet excitation energies (in eV) for various molecules obtained with the aug-cc-pVTZ basis set computed at various levels of theory. CT stands for charge transfer. \label{tab:BigTabSi} } \begin{ruledtabular} \begin{tabular}{llcdddddddddd} & & & \multicolumn{5}{c}{BSE@{\GOWO}@HF} & \multicolumn{5}{c}{Wave function-based methods} \\ \cline{4-8} \cline{9-13} Mol. & State & Nature & \tabc{$E_\text{g}^{GW}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$Z_{S}$} & \tabc{CIS(D)} & \tabc{ADC(2)} & \tabc{CC2} & \tabc{CCSD} & \tabc{TBE} \\ \hline \ce{HCl} & $^1\Pi$ & CT & 13.43 & 8.30 & 8.19 & -0.11 & 1.009 & 6.07 & 7.97 & 7.96 & 7.91 & 7.84 \\ \\ \ce{H2O} & $^1B_1(n \rightarrow 3s)$ & Ryd. & 13.58 & 8.09 & 8.00 & -0.09 & 1.007 & 7.62 & 7.18 & 7.23 & 7.60 & 7.17 \\ & $^1A_2(n \rightarrow 3p)$ & Ryd. & & 9.79 & 9.72 & -0.07 & 1.005 & 9.41 & 8.84 & 8.89 & 9.36 & 8.92 \\ & $^1A_1(n \rightarrow 3s)$ & Ryd. & & 10.42 & 10.35 & -0.07 & 1.006 & 9.99 & 9.52 & 9.58 & 9.96 & 9.52 \\ \\ \ce{N2} & $^1\Pi_g(n \rightarrow \pi^*)$ & Val. & 19.20 & 10.42 & 9.99 & -0.42 & 1.031 & 9.66 & 9.48 & 9.44 & 9.41 & 9.34 \\ & $^1\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & & 10.11 & 9.66 & -0.45 & 1.029 & 10.31 & 10.26 & 10.32 & 10.00 & 9.88 \\ & $^1\Delta_u(\pi \rightarrow \pi^*)$ & Val. & & 10.75 & 10.33 & -0.42 & 1.030 & 10.85 & 10.79 & 10.86 & 10.44 & 10.29 \\ & $^1\Sigma_g^+$ & Ryd. & & 13.60 & 13.57 & -0.03 & 1.003 & 13.67 & 12.99 & 12.83 & 13.15 & 12.98 \\ & $^1\Pi_u$ & Ryd. & & 13.98 & 13.94 & -0.04 & 1.004 & 13.64 & 13.32 & 13.15 & 13.43 & 13.03 \\ & $^1\Sigma_u^+$ & Ryd. & & 13.98 & 13.91 & -0.07 & 1.008 & 13.75 & 13.07 & 12.89 & 13.26 & 13.09 \\ & $^1\Pi_u$ & Ryd. & & 14.24 & 14.21 & -0.03 & 1.002 & 14.52 & 14.00 & 13.96 & 13.67 & 13.46 \\ \\ \ce{CO} & $^1\Pi(n \rightarrow \pi^*)$ & Val. 
& 16.46 & 9.54 & 9.19 & -0.34 & 1.029 & 8.78 & 8.69 & 8.64 & 8.59 & 8.49 \\ & $^1\Sigma^-(\pi \rightarrow \pi^*)$ & Val. & & 10.25 & 9.90 & -0.35 & 1.023 & 10.13 & 10.03 & 10.30 & 9.99 & 9.92 \\ & $^1\Delta(\pi \rightarrow \pi^*)$ & Val. & & 10.71 & 10.39 & -0.32 & 1.023 & 10.41 & 10.30 & 10.60 & 10.12 & 10.06 \\ & $^1\Sigma^+$ & Ryd. & & 11.88 & 11.85 & -0.03 & 1.005 & 11.48 & 11.32 & 11.11 & 11.22 & 10.95 \\ & $^1\Sigma^+$ & Ryd. & & 12.39 & 12.37 & -0.02 & 1.003 & 11.71 & 11.83 & 11.63 & 11.75 & 11.52 \\ & $^1\Pi$ & Ryd. & & 12.37 & 12.32 & -0.05 & 1.004 & 12.06 & 12.03 & 11.83 & 11.96 & 11.72 \\ \\ \ce{C2H2} & $^1\Sigma_{u}^-(\pi \rightarrow \pi^*)$ & Val. & 12.28 & 7.37 & 7.05 & -0.32 & 1.026 & 7.28 & 7.24 & 7.26 & 7.15 & 7.10 \\ & $^1\Delta_{u}(\pi \rightarrow \pi^*)$ & Val. & & 7.74 & 7.46 & -0.29 & 1.025 & 7.62 & 7.56 & 7.59 & 7.48 & 7.44 \\ \\ \ce{C2H4} & $^1B_{3u}(\pi \rightarrow 3s)$ & Ryd. & 11.49 & 7.64 & 7.62 & -0.03 & 1.004 & 7.35 & 7.34 & 7.29 & 7.42 & 7.39 \\ & $^1B_{1u}(\pi \rightarrow \pi^*)$ & Val. & & 8.18 & 8.03 & -0.15 & 1.022 & 7.95 & 7.91 & 7.92 & 8.02 & 7.93 \\ & $^1B_{1g}(\pi \rightarrow 3p)$ & Ryd. & & 8.29 & 8.26 & -0.03 & 1.003 & 8.01 & 7.99 & 7.95 & 8.08 & 8.08 \\ \\ \ce{CH2O} & $^1A_2(n \rightarrow \pi^*)$ & Val. & 12.00 & 5.03 & 4.68 & -0.35 & 1.027 & 4.04 & 3.92 & 4.07 & 4.01 & 3.98 \\ & $^1B_2(n \rightarrow 3s)$ & Ryd. & & 7.87 & 7.85 & -0.02 & 1.001 & 6.64 & 6.50 & 6.56 & 7.23 & 7.23 \\ & $^1B_2(n \rightarrow 3p)$ & Ryd. & & 8.76 & 8.72 & -0.04 & 1.003 & 7.56 & 7.53 & 7.57 & 8.12 & 8.13 \\ & $^1A_1(n \rightarrow 3p)$ & Ryd. & & 8.85 & 8.84 & -0.01 & 1.000 & 8.16 & 7.47 & 7.52 & 8.21 & 8.23 \\ & $^1A_2(n \rightarrow 3p)$ & Ryd. & & 8.87 & 8.85 & -0.02 & 1.002 & 8.04 & 7.99 & 8.04 & 8.65 & 8.67 \\ & $^1B_1(\sigma \rightarrow \pi^*)$ & Val. & & 10.18 & 9.77 & -0.42 & 1.032 & 9.38 & 9.17 & 9.32 & 9.28 & 9.22 \\ & $^1A_1(\pi \rightarrow \pi^*)$ & Val. 
& & 10.05 & 9.81 & -0.24 & 1.026 & 9.08 & 9.46 & 9.54 & 9.67 & 9.43 \\ \hline MAE & & & & 0.64 & 0.50 & & & 0.43 & 0.24 & 0.25 & 0.15 & 0.00 \\ MSE & & & & 0.64 & 0.48 & & & 0.14 & 0.02 & 0.03 & 0.14 & 0.00 \\ RMSE & & & & 0.70 & 0.58 & & & 0.55 & 0.33 & 0.33 & 0.20 & 0.00 \\ Max($+$) & & & & 1.08 & 0.91 & & & 1.06 & 0.54 & 0.57 & 0.44 & 0.00 \\ Max($-$) & & & & 0.20 & -0.22 & & & -1.77 & -0.76 & -0.71 & -0.02 & 0.00 \\ \end{tabular} \end{ruledtabular} \end{table*} \end{squeezetable} \begin{squeezetable} \begin{table*} \caption{ Triplet excitation energies (in eV) for various molecules obtained with the aug-cc-pVTZ basis set computed at various levels of theory. \label{tab:BigTabTr} } \begin{ruledtabular} \begin{tabular}{llcdddddddddd} & & & \multicolumn{5}{c}{BSE@{\GOWO}@HF} & \multicolumn{5}{c}{Wave function-based methods} \\ \cline{4-8} \cline{9-13} Mol. & State & Nature & \tabc{$E_\text{g}^{GW}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$Z_{S}$} & \tabc{CIS(D)} & \tabc{ADC(2)} & \tabc{CC2} & \tabc{CCSD} & \tabc{TBE} \\ \hline \ce{H2O} & $^3B_1(n \rightarrow 3s)$ & Ryd. & 13.58 & 7.62 & 7.48 & -0.14 & 1.009 & 7.25 & 6.86 & 6.91 & 7.20 & 6.92 \\ & $^3A_2(n \rightarrow 3p)$ & Ryd. & & 9.61 & 9.50 & -0.11 & 1.007 & 9.24 & 8.72 & 8.77 & 9.20 & 8.91 \\ & $^3A_1(n \rightarrow 3s)$ & Ryd. & & 9.80 & 9.66 & -0.14 & 1.008 & 9.54 & 9.15 & 9.20 & 9.49 & 9.30 \\ \\ \ce{N2} & $^3\Sigma_u^+(\pi \rightarrow \pi^*)$ & Val. & 19.20 & 8.02 & 7.38 & -0.64 & 1.032 & 8.20 & 8.15 & 8.19 & 7.66 & 7.70 \\ & $^3\Pi_g(n \rightarrow \pi^*)$ & Val. & & 8.66 & 8.10 & -0.56 & 1.031 & 8.33 & 8.20 & 8.19 & 8.09 & 8.01 \\ & $^3\Delta_u(\pi \rightarrow \pi^*)$ & Val. & & 9.04 & 8.48 & -0.56 & 1.031 & 9.30 & 9.25 & 9.30 & 8.91 & 8.87 \\ & $^3\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & & 10.11 & 9.66 & -0.45 & 1.029 & 10.29 & 10.23 & 10.29 & 9.83 & 9.66 \\ \\ \ce{CO} & $^3\Pi(n \rightarrow \pi^*)$ & Val. 
& 16.46 & 6.80 & 6.25 & -0.55 & 1.031 & 6.51 & 6.45 & 6.42 & 6.36 & 6.28 \\ & $^3\Sigma^+(\pi \rightarrow \pi^*)$ & Val. & & 8.56 & 8.06 & -0.50 & 1.025 & 8.63 & 8.54 & 8.72 & 8.34 & 8.45 \\ & $^3\Delta(\pi \rightarrow \pi^*)$ & Val. & & 9.39 & 8.96 & -0.43 & 1.024 & 9.44 & 9.33 & 9.56 & 9.23 & 9.27 \\ & $^3\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val. & & 10.25 & 9.90 & -0.35 & 1.023 & 10.10 & 10.01 & 10.27 & 9.81 & 9.80 \\ & $^3\Sigma_u^+$ & Ryd. & & 11.17 & 11.07 & -0.10 & 1.008 & 10.98 & 10.83 & 10.60 & 10.71 & 10.47 \\ \\ \ce{C2H2} & $^3\Sigma_{u}^+(\pi \rightarrow \pi^*)$ & Val. & 12.28 & 5.83 & 5.32 & -0.51 & 1.031 & 5.79 & 5.75 & 5.76 & 5.45 & 5.53 \\ & $^3\Delta_{u}(\pi \rightarrow \pi^*)$ & Val. & & 6.64 & 6.23 & -0.41 & 1.028 & 6.62 & 6.57 & 6.60 & 6.41 & 6.40 \\ & $^3\Sigma_{u}^-(\pi \rightarrow \pi^*)$ & Val. & & 7.37 & 7.05 & -0.32 & 1.026 & 7.31 & 7.27 & 7.29 & 7.12 & 7.08 \\ \\ \ce{C2H4} & $^3B_{1u}(\pi \rightarrow \pi^*)$ & Val. & 11.49 & 4.95 & 4.49 & -0.46 & 1.032 & 4.62 & 4.59 & 4.59 & 4.46 & 4.54 \\ & $^3B_{3u}(\pi \rightarrow 3s)$ & Ryd. & & 7.46 & 7.42 & -0.04 & 1.004 & 7.26 & 7.23 & 7.19 & 7.29 & 7.23 \\ & $^3B_{1g}(\pi \rightarrow 3p)$ & Ryd. & & 8.23 & 8.19 & -0.04 & 1.004 & 7.97 & 7.95 & 7.91 & 8.03 & 7.98 \\ \\ \ce{CH2O} & $^3A_2(n \rightarrow \pi^*)$ & Val. & 12.00 & 4.28 & 3.87 & -0.40 & 1.027 & 3.58 & 3.46 & 3.59 & 3.56 & 3.58 \\ & $^3A_1(\pi \rightarrow \pi^*)$ & Val. & & 6.31 & 5.75 & -0.56 & 1.033 & 6.27 & 6.20 & 6.30 & 5.97 & 6.06 \\ & $^3B_2(n \rightarrow 3s)$ & Ryd. 
& & 7.60 & 7.56 & -0.05 & 1.002 & 6.66 & 6.39 & 6.44 & 7.08 & 7.06 \\ \hline MAE & & & & 0.41 & 0.27 & & & 0.27 & 0.21 & 0.24 & 0.10 & 0.00 \\ MSE & & & & 0.41 & 0.06 & & & 0.23 & 0.10 & 0.14 & 0.05 & 0.00 \\ RMSE & & & & 0.45 & 0.33 & & & 0.31 & 0.27 & 0.30 & 0.13 & 0.00 \\ Max($+$) & & & & 0.70 & 0.60 & & & 0.63 & 0.57 & 0.63 & 0.29 & 0.00 \\ Max($-$) & & & & 0.11 & -0.39 & & & -0.40 & -0.67 & -0.62 & -0.11 & 0.00 \\ \end{tabular} \end{ruledtabular} \end{table*} \end{squeezetable} Tables \ref{tab:BigTabSi} and \ref{tab:BigTabTr} report, respectively, singlet and triplet excitation energies for various molecules computed at the BSE@{\GOWO}@HF level and with the aug-cc-pVTZ basis set. For comparative purposes, excitation energies obtained with the same basis set and several second-order wave function methods [CIS(D), ADC(2), CC2, and CCSD] are also reported. The highly-accurate TBEs of Refs.~\onlinecite{Loos_2018a,Loos_2019,Loos_2020b} (computed in the same basis) serve as references, and statistical quantities [MAE, MSE, RMSE, Max($+$), and Max($-$)] are defined with respect to these references. For each excitation, we report the static and dynamic excitation energies, $\Om{S}{\text{stat}}$ and $\Om{S}{\text{dyn}}$, as well as the value of the renormalization factor $Z_S$ defined in Eq.~\eqref{eq:Z}. As one can see in Tables \ref{tab:BigTabSi} and \ref{tab:BigTabTr}, the value of $Z_S$ is always quite close to unity, which shows that the perturbative expansion behaves nicely, and that a first-order correction is probably quite a good estimate of the non-perturbative result. Moreover, we have observed that an iterative, self-consistent resolution [where the dynamically-corrected excitation energies are re-injected in Eq.~\eqref{eq:Om1}] yields basically the same results as its (cheaper) renormalized version.
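This equivalence can be illustrated on a toy model: for a first-order matrix that is linear in the frequency, the self-consistent fixed point of the re-injection procedure coincides exactly with the $Z_S$-renormalized first-order value (the one-dimensional $\bA{(1)}$ below is a hypothetical example, not an actual BSE matrix).

```python
import numpy as np

def self_consistent_omega(X0, om0, A1, tol=1e-10, maxiter=200):
    """Iterative resolution: re-inject the corrected energy,
    om <- om0 + X0^T A1(om) X0, until self-consistency (toy sketch)."""
    om = om0
    for _ in range(maxiter):
        om_new = om0 + X0 @ A1(om) @ X0
        if abs(om_new - om) < tol:
            break
        om = om_new
    return om_new

# Toy linear A1(om) = 0.1*om: the fixed point is om0/(1 - 0.1) = 20/9, which
# matches the Z-renormalized first-order value om0 + Z*(0.1*om0) with Z = 1/0.9.
om_sc = self_consistent_omega(np.array([1.0]), 2.0,
                              lambda om: np.array([[0.1 * om]]))
om_rn = 2.0 + (1.0 / 0.9) * (0.1 * 2.0)
```

Beyond the strictly linear regime the two schemes differ at second order, which is why the (cheaper) renormalized correction is adopted here.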
Note that, unlike in $GW$ where the renormalization factor lies between $0$ and $1$, the dynamical BSE renormalization factor $Z_S$ defined in Eq.~\eqref{eq:Z} can be smaller or greater than unity. A clear general trend is the consistent red shift of the static BSE excitation energies induced by the dynamical correction, as anticipated in Sec.~\ref{sec:dynW}. \begin{figure*} \includegraphics[width=\linewidth]{fig1a} \includegraphics[width=\linewidth]{fig1b} \caption{Error (in eV) with respect to the TBEs of Refs.~\onlinecite{Loos_2018a,Loos_2019,Loos_2020b} for singlet (top) and triplet (bottom) excitation energies of various molecules obtained with the aug-cc-pVTZ basis set computed within the static (red) and dynamic (blue) BSE formalism. CT and R stand for charge transfer and Rydberg state, respectively. See Tables \ref{tab:BigTabSi} and \ref{tab:BigTabTr} for raw data. \label{fig:SiTr-SmallMol}} \end{figure*} The results gathered in Tables \ref{tab:BigTabSi} and \ref{tab:BigTabTr} are depicted in Fig.~\ref{fig:SiTr-SmallMol}, where we report the error (with respect to the TBEs) for the singlet and triplet excitation energies computed within the static and dynamic BSE formalisms. From this figure, it is quite clear that the dynamically-corrected excitation energies systematically improve upon their static analogs, especially for singlet states. (In the case of triplets, one notices a few cases where the excitation energy is underestimated.) In particular, the MAE is reduced from $0.64$ to $0.50$ eV for singlets, and from $0.41$ to $0.27$ eV for triplets. The MSE and RMSE are also systematically improved when one takes into account dynamical effects. The second important observation extracted from Fig.~\ref{fig:SiTr-SmallMol} is that the (singlet and triplet) Rydberg states are rather unaltered by the dynamical effects, with a correction of a few hundredths of an eV in most cases. The same comment applies to the CT excited state of \ce{HCl}.
The magnitude of the dynamical correction for $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ transitions is much more important: $0.3$--$0.5$ eV for singlets and $0.3$--$0.7$ eV for triplets. Dynamical BSE does not quite reach the accuracy of second-order methods [CIS(D), ADC(2), CC2, and CCSD] for the singlet and triplet optical excitations of these small molecules. However, it is a clear improvement in performance over static BSE, especially for triplet states, where dynamical BSE reaches an accuracy close to that of CIS(D), ADC(2), and CC2. \begin{squeezetable} \begin{table} \caption{ Singlet and triplet excitation energies (in eV) for various molecules obtained with the aug-cc-pVDZ basis set computed at various levels of theory. \label{tab:BigMol} } \begin{ruledtabular} \begin{tabular}{llcdddddd} & & & \multicolumn{5}{c}{BSE@{\GOWO}@HF} \\ \cline{4-8} Molecule & State & Nature & \tabc{$E_\text{g}^{GW}$} & \tabc{$\Om{S}{\text{stat}}$} & \tabc{$\Om{S}{\text{dyn}}$} & \tabc{$\Delta\Om{S}{\text{dyn}}$} & \tabc{$Z_{S}$} & \tabc{CC3} \\ \hline acrolein & $^1A''(n \rightarrow \pi^*)$ & Val. & 11.67 & 4.62 & 4.28 & -0.35 & 1.030 & 3.77 \\ & $^1A'(n \rightarrow \pi^*)$ & Val. & & 6.86 & 6.70 & -0.16 & 1.023 & 6.67 \\ & $^1A'(n \rightarrow 3s)$ & Ryd. & & 7.57 & 7.53 & -0.04 & 1.004 & 6.99 \\ \\ & $^3A''(n \rightarrow \pi^*)$ & Val. & & 3.97 & 3.54 & -0.43 & 1.031 & 3.47 \\ & $^3A'(\pi \rightarrow \pi^*)$ & Val. & & 4.03 & 3.61 & -0.42 & 1.032 & 3.95 \\ \\ butadiene & $^1B_u(\pi \rightarrow \pi^*)$ & Val. & 9.88 & 6.25 & 6.13 & -0.12 & 1.019 & 6.25 \\ & $^1A_g(\pi \rightarrow \pi^*)$ & Val. & & 6.88 & 6.86 & -0.03 & 1.003 & 6.68 \\ \\ & $^3B_u(\pi \rightarrow \pi^*)$ & Val. & & 3.68 & 3.25 & -0.43 & 1.032 & 3.36 \\ & $^3A_g(\pi \rightarrow \pi^*)$ & Val. & & 5.51 & 5.01 & -0.50 & 1.040 & 5.21 \\ & $^3B_g(\pi \rightarrow 3s)$ & Ryd. & & 6.29 & 6.25 & -0.04 & 1.005 & 6.20 \\ \\ diacetylene & $^1\Sigma_u^-(\pi \rightarrow \pi^*)$ & Val.
& 11.01 & 5.62 & 5.35 & -0.28 & 1.025 & 5.44 \\ & $^1\Delta_u(\pi \rightarrow \pi^*)$ & Val. & & 5.87 & 5.63 & -0.25 & 1.024 & 5.69 \\ \\ & $^3\Sigma_u^+(\pi \rightarrow \pi^*)$ & Val. & & 4.30 & 3.82 & -0.49 & 1.031 & 4.06 \\ & $^3\Delta_u(\pi \rightarrow \pi^*)$ & Val. & & 5.04 & 4.68 & -0.36 & 1.027 & 4.86 \\ \\ glyoxal & $^1A_u(n \rightarrow \pi^*)$ & Val. & 10.90 & 3.46 & 3.14 & -0.33 & 1.028 & 2.90 \\ & $^1B_g(n \rightarrow \pi^*)$ & Val. & & 4.96 & 4.55 & -0.41 & 1.034 & 4.30 \\ & $^1B_u(n \rightarrow 3p)$ & Ryd. & & 7.90 & 7.86 & -0.04 & 1.004 & 7.55 \\ \\ & $^3A_u(n \rightarrow \pi^*)$ & Val. & & 2.77 & 2.38 & -0.39 & 1.028 & 2.49 \\ & $^3B_g(n \rightarrow \pi^*)$ & Val. & & 4.23 & 3.75 & -0.48 & 1.034 & 3.91 \\ & $^3B_u(\pi \rightarrow \pi^*)$ & Val. & & 5.01 & 4.47 & -0.55 & 1.034 & 5.20 \\ \\ streptocyanine & $^1B_2(\pi \rightarrow \pi^*)$ & Val. & 13.79 & 7.66 & 7.51 & -0.15 & 1.019 & 7.14 \\ \hline MAE & & & & 0.32 & 0.23 & & & 0.00 \\ MSE & & & & 0.30 & 0.00 & & & 0.00 \\ RMSE & & & & 0.38 & 0.29 & & & 0.00 \\ Max($+$) & & & & 0.85 & 0.54 & & & 0.00 \\ Max($-$) & & & & -0.19 & -0.73 & & & 0.00 \\ \end{tabular} \end{ruledtabular} \end{table} \end{squeezetable} \begin{figure*} \includegraphics[width=\linewidth]{fig2} \caption{Error (in eV) with respect to CC3 for singlet and triplet excitation energies of various molecules obtained with the aug-cc-pVDZ basis set computed within the static (red) and dynamic (blue) BSE formalism. R stands for Rydberg state. See Table \ref{tab:BigMol} for raw data. \label{fig:SiTr-BigMol}} \end{figure*} Table \ref{tab:BigMol} reports singlet and triplet excitation energies for larger molecules (acrolein \ce{H2C=CH-CH=O}, butadiene \ce{H2C=CH-CH=CH2}, diacetylene \ce{HC#C-C#CH}, glyoxal \ce{O=CH-CH=O}, and streptocyanine-C1 \ce{H2N-CH=NH2+}) at the static and dynamic BSE levels with the aug-cc-pVDZ basis set. 
We also report the CC3 excitation energies computed in Refs.~\onlinecite{Loos_2018a,Loos_2019,Loos_2020b} with the same basis set. These will be our reference as they are known to be extremely accurate ($0.03$--$0.04$ eV from the TBEs). \cite{Loos_2018a,Loos_2019,Loos_2020b,Loos_2020g} Errors associated with these excitation energies (with respect to CC3) are represented in Fig.~\ref{fig:SiTr-BigMol}. As expected, the static BSE excitation energies are much more accurate for these larger molecules, with an MAE of $0.32$ eV, an MSE of $0.30$ eV, and an RMSE of $0.38$ eV. Here again, the dynamical correction improves the accuracy of BSE by lowering the MAE, MSE, and RMSE to $0.23$, $0.00$, and $0.29$ eV, respectively. Rydberg states are again very slightly affected by dynamical effects, while the dynamical corrections associated with the $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ transitions are much larger and of the same magnitude ($0.3$--$0.6$ eV) for both types of transitions. This latter observation is somewhat different from the outcomes reached by Rohlfing and coworkers in previous works \cite{Ma_2009a,Ma_2009b} (see Sec.~\ref{sec:intro}), where they observed i) smaller corrections, and ii) that $n \rightarrow \pi^*$ transitions are more affected by the dynamical screening than $\pi \rightarrow \pi^*$ transitions. The larger size of the molecules considered in Refs.~\onlinecite{Ma_2009a,Ma_2009b} may play a role in the magnitude of the corrections, even though we do not observe here a significant reduction going from small systems (\ce{N2}, \ce{CO}, \ldots) to larger ones (acrolein, butadiene, \ldots). We emphasize further that previous calculations \cite{Ma_2009a,Ma_2009b} were performed within the plasmon-pole approximation for modeling the dynamical behaviour of the screened Coulomb potential, while we go beyond this approximation in the present study [see Eq.~\eqref{eq:wtilde}].
Finally, while errors were defined with respect to experimental data in Refs.~\onlinecite{Ma_2009a,Ma_2009b}, we consider here as reference high-level CC calculations performed with the very same geometries and basis sets as our BSE calculations. As pointed out in previous works, \cite{Loos_2018,Loos_2019b,Loos_2020g} a direct comparison between theoretical transition energies and experimental data is a delicate task, as many factors (such as zero-point vibrational energies and geometrical relaxation) must be taken into account for fair comparisons. Further investigations are required to better evaluate the impact of these considerations on the influence of dynamical screening. To provide further insight into the magnitude of the dynamical correction to valence, Rydberg, and CT excitations, let us consider a simple two-level system with $i = j = h$ and $a = b = l$, where $(h,l)$ stand for HOMO and LUMO. The dynamical correction associated with the HOMO-LUMO transition reads \begin{equation} \W{hh,ll}{\text{stat}} - \widetilde{W}_{hh,ll}( \Om{1}{} ) = - 4 \sERI{hh}{hl} \sERI{ll}{hl} \qty( \frac{1}{\OmRPA{hl}} - \frac{1}{\Om{hl}{1} - \OmRPA{hl}} ), \end{equation} where the only RPA excitation energy, $\OmRPA{hl} = \e{l} - \e{h} + 2 \ERI{hl}{lh}$, is again the HOMO-LUMO transition, \textit{i.e.}, $m=hl$ [see Eq.~\eqref{eq:sERI}]. For CT excitations with vanishing HOMO-LUMO overlap [\textit{i.e.}, $\ERI{h}{l} \approx 0$] \titou{and small excitonic binding energy}, $\sERI{hh}{hl} \approx 0$ and $\sERI{ll}{hl} \approx 0$, so that one can expect the dynamical correction to be weak. Likewise, Rydberg transitions, which are characterized by a delocalized LUMO state, that is, a small HOMO-LUMO overlap, are expected to undergo weak dynamical corrections.
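To make the two-level expression concrete, here is a small numerical sketch (all input values hypothetical, in eV; not part of the original study) of the correction $\W{hh,ll}{\text{stat}} - \widetilde{W}_{hh,ll}(\Om{1}{})$. It makes explicit the argument above: when either screened matrix element vanishes, as for CT or Rydberg states with negligible HOMO-LUMO overlap, the dynamical correction vanishes too.

```python
# Numerical sketch of the two-level dynamical correction above:
#   W_stat - W_dyn(Om1) = -4 * K_hh * K_ll * (1/Om_RPA - 1/(Om1 - Om_RPA))
# where K_hh and K_ll stand for the screened matrix elements [hh|hl] and
# [ll|hl], Om_RPA is the RPA HOMO-LUMO excitation energy, and Om1 the
# BSE excitation energy. All values below are hypothetical (eV).
def two_level_correction(K_hh, K_ll, om_rpa, om1):
    return -4.0 * K_hh * K_ll * (1.0 / om_rpa - 1.0 / (om1 - om_rpa))

# Vanishing HOMO-LUMO overlap (CT/Rydberg-like case): correction is zero.
assert two_level_correction(0.0, 0.3, 8.0, 5.0) == 0.0
```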
The discussion for $\pi \rightarrow \pi^*$ and $n \rightarrow \pi^*$ transitions is certainly more complex, and molecule-specific symmetry arguments must be invoked to understand the magnitude of the $\sERI{hh}{hl}$ and $\sERI{ll}{hl}$ terms. As a final comment, let us discuss the two singlet states of butadiene reported in Table \ref{tab:BigMol}.\cite{Maitra_2004,Cave_2004,Saha_2006,Watson_2012,Shu_2017,Barca_2018a,Barca_2018b,Loos_2019} As discussed in Sec.~\ref{sec:intro}, these correspond to a bright state of $^1B_u$ symmetry with a clear single-excitation character, and a dark $^1A_g$ state including a substantial fraction of double excitation character (roughly $30\%$). Although they are both of $\pi \rightarrow \pi^*$ nature, they are very slightly altered by dynamical screening, with corrections of $-0.12$ and $-0.03$ eV for the $^1B_u$ and $^1A_g$ states, respectively. The small correction on the $^1A_g$ state might be explained by its rather diffuse nature (similar to a Rydberg state). \cite{Boggio-Pasqua_2004} \section{Conclusion} \label{sec:conclusion} The BSE formalism is quickly gaining momentum in the electronic structure community thanks to its attractive computational scaling with system size and its overall accuracy for modeling single excitations of various natures in large molecular systems. It now stands as a genuine cost-effective excited-state method and is regarded as a valuable alternative to the popular TD-DFT method. However, the vast majority of BSE calculations are performed within the static approximation in which, in complete analogy with the ubiquitous adiabatic approximation in TD-DFT, the dynamical BSE kernel is replaced by its static limit. One key consequence of this static approximation is the absence of higher excitations from the BSE optical spectrum.
Following in Strinati's footsteps \titou{who originally derived the dynamical BSE equations}, \cite{Strinati_1982,Strinati_1984,Strinati_1988} several groups have explored the BSE formalism beyond the static approximation by retaining (or reviving) the dynamical nature of the screened Coulomb potential \cite{Sottile_2003,Romaniello_2009b,Sangalli_2011} or via a perturbative approach coupled with the plasmon-pole approximation. \cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b} In the present study, we have computed exactly the dynamical screening of the Coulomb interaction within the random-phase approximation, going effectively beyond both the usual static approximation and the plasmon-pole approximation. \titou{Dynamical corrections have been calculated using a renormalized first-order perturbative correction to the static BSE excitation energies following the work of Rohlfing and coworkers. \cite{Rohlfing_2000,Ma_2009a,Ma_2009b,Baumeier_2012b} Note that, although the present study goes beyond the static approximation of BSE, we do not recover additional excitations as the perturbative treatment accounts for dynamical effects only on excitations already present in the static limit. However, we hope to report results on a genuine dynamical approach in the near future in order to access double excitations within the BSE formalism.} In order to assess the accuracy of the present scheme, we have reported a significant number of calculations for various molecular systems. Our calculations have been benchmarked against high-level CC calculations, allowing us to clearly evidence the systematic improvements brought by the dynamical correction for both singlet and triplet excited states. We have found that, although $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ transitions are systematically red-shifted by $0.3$--$0.6$ eV thanks to dynamical effects, the magnitude of the correction is much smaller for CT and Rydberg states.
\acknowledgements The authors would like to thank Elisa Rebolini, Pina Romaniello, Arjan Berger, and Julien Toulouse for insightful discussions on dynamical kernels. PFL thanks the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481) for financial support. This work was performed using HPC resources from GENCI-TGCC (Grant No.~2019-A0060801738) and CALMIP (Toulouse) under allocation 2020-18005. Funding from the \textit{``Centre National de la Recherche Scientifique''} is acknowledged. This study has been (partially) supported through the EUR grant NanoX No.~ANR-17-EURE-0009 in the framework of the \textit{``Programme des Investissements d'Avenir''.} \section*{Data availability} The data that support the findings of this study are available within the article.
\section{Introduction} \label{sec:intro} Particle physics focuses on understanding fundamental laws of nature by observing elementary particles, either in controlled environments (collider physics) or in nature (astro-particle). The standard model of particle physics is a theory of the strong, weak and electromagnetic forces, and elementary particles (quarks and leptons). Physicists build experiments to measure elementary particles and, using statistical methods, test the validity of various models. The data from the experiments are generally a sparse sampling of a physics process in both time and space. Machine learning has historically played a significant role in particle physics \cite{Radovic:2018dip}, with classification and regression applications using classical techniques, such as boosted decision trees, support vector machines, simple multi-layer perceptrons, etc. Inspired by the success of deep learning at reaching super-human performance on various tasks, many domains in the physical sciences~\cite{Carleo:2019ptp}, including particle physics~\cite{Radovic:2018dip,Guest:2018yhq,Bourilkov:2019yoi,Larkoski:2017jix,livingreview}, have begun exploring deep learning as a unique tool for handling difficult scientific problems that go beyond straightforward classification, to organize and make sense of vast data sources, draw inferences about unobserved causal factors, and even discover physical principles underpinning complex phenomena~\cite{cranmer2019learning,cranmer2020discovering}. HEP experiments often use machine learning for learning complicated inverse functions, trying to infer something about the underlying physics process from the information measured in the detector. This scheme is illustrated in figure~\ref{fig:simulation_chain}.
\begin{figure}[h] \centering \includegraphics[]{fig/simulation_chain} \caption{Simulation is used in HEP experiments to create a ``truth record'' of the physics event which caused a certain detector response. This ``truth record'' is used to train supervised learning algorithms to invert the detector simulation and infer something about the underlying physics from the observed data. These algorithms are then applied to real data that were measured by the detector.} \label{fig:simulation_chain} \end{figure} While the most widely used trio of deep learning building blocks---the fully connected network (FC), convolutional neural network (CNN) and recurrent neural network (RNN)---have proven valuable across many scientific domains, the focus of this review is on a class of architectures called Graph Neural Networks (GNN) --- as described below, we regard self-attention as a graph-based architecture---, which can be trained from data to learn functions on graphs. Many problems involve data represented as unordered sets of elements with rich relations and interactions with one another, and can be naturally expressed as graphs. They are, however, not convenient to represent as vectors, grids, or sequences --- the formats required by FCs, CNNs, and RNNs, respectively --- except for data with a specific tree structure~\cite{mou2014convolutional,shen2018ordered}. Extensive reviews of GNNs are available in the literature \cite{bronstein2017geometric,gilmer2017neural,battaglia2018relational,zhou2018graph,wu2019comprehensive}. However, applications of GNNs in high energy physics (HEP) are evolving rapidly, and the purposes of this review are to outline the key principles and uses of GNNs for particle physics, and build bridges between physics and machine learning by exposing researchers on both sides to important, challenging problems in each other's domains.
\paragraph{Data Representation.}\label{sec:representation} Measurements in particle physics are commonly done in large accelerator facilities (CERN, KEK, Fermilab, etc), using detectors with sizes on the order of tens of meters, which capture millions of high-dimensional measurements each second. These detectors are composed of multiple sub-detectors --- tracking detector, calorimeters, muon detector, etc --- each using a different technology to measure the traces of particles. The data in particle physics are therefore heterogeneous. Detectors in astrophysics are typically bigger, with sizes up to kilometers (IceCube, Antares, etc), and are constructed around a single measurement technology; the data are therefore homogeneous. In both cases, the measurements are inherently sparse in space, due to the design of the geometry of the sensors. The measurements therefore do not a-priori fit homogeneous, grid-like data structures. Deep learning is often applied on high level features derived from particle physics data~\cite{Radovic:2018dip}. This can improve over more classical data analysis methods, but does not use the full potential of deep learning, which can be effective when operating on lower level information.
\begin{figure}[h] \begin{center} \subfigure[]{ \includegraphics[]{fig/tracker.pdf} } \subfigure[]{ \includegraphics[]{fig/calo_graph.pdf} } \subfigure[]{ \includegraphics[]{fig/event_as_graph.pdf} } \subfigure[]{ \includegraphics[]{fig/jet_as_graph.pdf} } \caption{HEP data lend itself to being represented as a graph for many applications: (a) clustering tracking detector hits into tracks, (b) segmenting calorimeter cells, (c) classifying events with multiple types of physics objects, (d) jet classification based on the particles associated to the jet.} \label{fig:data_as_graph} \end{center} \end{figure} Some data in particle physics can be partially interpreted as images, and hence computer vision techniques (CNNs) are being applied with improved performance \cite{ATLAS:2017dfg,Kasieczka:2017nvn,Macaluso:2018tck,Andrews:2018nwy,Lin:2018cin,ATLAS:2019fxb}. However, image representations face limitations with the irregular geometry of detectors or the sparsity of the applied projections. Because of the inherent loss of information, image representations may constrain the amount of information that can be extracted from the data. Measurements and reconstructed objects can be viewed as sequences, with an order imposed from theoretical or experimental understanding of the data. Methods otherwise applied to natural language processing (e.g., RNNs, LSTMs~\cite{hochreiter1997long}, GRUs~\cite{cho2014learning}, etc) have thus been explored \cite{ATLAS:2017gpy,Sirunyan:2020lcu}. While the ordering used can usually be justified experimentally, it is often imposed and therefore constrains how the data are presented to models. This ordering can also be learned \cite{Louppe:2017ipp} in some cases, using prior experimental knowledge of the physics process at stake. This is, however, not always the case, and one may expect an imposed ordering to reduce learning performance --- an ordering that, as we will see in the following, is not required.
For example, \cite{DIPs} shows evidence that a permutation invariant network outperforms a sequence based algorithm that uses the exact same input features, for the same classification task. At many levels the data are, by definition, sets (unordered collections) of items. If one considers relations between items (geometrical or physical), a set transforms into a graph with the addition of an adjacency matrix. There is a priori less limitation in applying deep learning on this intrinsic representation of the data than at the other levels mentioned above. A variety of HEP data and their formulation as graphs is illustrated in figure~\ref{fig:data_as_graph}. We concentrate in this review on the applications of GNNs to HEP. We argue why graphs are a very useful data representation, and review key architectures. Common traits in graph construction and model architecture will be linked to the specific requirements of the HEP problems under consideration. By providing a normalized description of the models through the formalism introduced in \cite{battaglia2018relational}, we hope to make the adoption and further development of GNNs for HEP simpler. This review paper is organized as follows. An overview of the field of geometrical deep learning is given in section~\ref{sec:geomdeeplearning}. Existing applications to particle physics are reviewed in section~\ref{sec:applications}. General guidelines for formulating HEP tasks for GNNs are given in section~\ref{sec:guidelines}. In particular, we detail the different approaches to building the graph connectivity in section~\ref{sec:graphconstruction} and the various model architectures adopted in section~\ref{sec:modelarch}. This paper concludes with a discussion on the various approaches and the remaining open questions in section~\ref{sec:discussion}.
\section{Geometric Deep Learning}\label{sec:geomdeeplearning} \subsection{Overview} Deep learning has been central to the past decade's advances in machine learning and artificial intelligence~\cite{schmidhuber2015deep,lecun2015deep}, and can be understood as the confluence of several key factors. First, large neural networks can express very complex functions. Second, valuable information in big data can be encoded into the parameters of large neural networks via gradient-based training procedures. Third, parallel computer hardware can perform such training in hours or days, which is efficient enough for many important use cases. Fourth, well-designed software frameworks, such as TensorFlow \cite{tensorflow2015-whitepaper} and PyTorch \cite{paszke2017automatic}, lower the technical bar to developing and distributing deep learning applications, making powerful machine learning tools broadly accessible to practitioners. Fully connected, convolutional, and recurrent \textit{layers} have been the primary building blocks in modern deep learning, each of which carries different \textit{inductive biases}, which incentivize or constrain the learning algorithm to prioritize one solution over another. For example, convolutional layers share their underlying kernel function across spatial dimensions of the input signal, while recurrent layers share across the temporal dimension of the input. These building blocks are most suitable for approximating functions on vectors, grids, and sequences, but when a problem involves data with richer structure, these modules are not always convenient or effective to apply. For example, consider learning functions over sets of particles --- while it is possible to order them, for example sorting by the transverse momentum $p_{T}$ of the particle, the imposed ordering is not unique, and it fails to reflect that particles are fundamentally unordered.
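The point about ordering can be made concrete with a toy example (the particle features below are hypothetical and not tied to any experiment): a permutation-invariant pooling such as a sum yields identical outputs for any ordering of the particle set, whereas an order-sensitive encoding --- used here as a crude stand-in for an RNN-style model --- does not.

```python
# Toy illustration: sum-pooling over a particle set is invariant under
# reordering, while an order-sensitive (position-weighted) encoding is not.
# Hypothetical (pT, eta) pairs, chosen to be exactly representable floats.
particles = [(100.0, 0.5), (40.0, -1.25), (75.0, 2.0)]

def sum_pool(ps):
    # Permutation-invariant: element-wise sum over the set.
    return tuple(sum(x) for x in zip(*ps))

def position_weighted(ps):
    # Order-sensitive stand-in: weights each particle by its position.
    return tuple(sum((i + 1) * p[j] for i, p in enumerate(ps)) for j in range(2))

shuffled = [particles[2], particles[0], particles[1]]
assert sum_pool(particles) == sum_pool(shuffled)                  # same output
assert position_weighted(particles) != position_weighted(shuffled)  # differs
```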
The aforementioned deep learning modules do not have appropriate inductive biases to exploit this richer graphical structure. Graph-structured data are ubiquitous across science, engineering, and many other problem domains. A graph is defined, minimally, as a set of nodes as well as a set of edges adjacent to pairs of nodes. Richer varieties and special cases include: trees, where there is exactly one sequence of edges connecting any two nodes; directed graphs, where the two nodes associated with an edge are ordered; attributed graphs, which include node-level, edge-level, or graph-level attributes; multigraphs, where more than one edge may exist between a pair of nodes; hypergraphs, where more than two nodes are associated with an edge; etc. Crucially, graphs are a natural and powerful way of representing many complex systems \cite{bronstein2017geometric,gilmer2017neural,battaglia2018relational,zhou2018graph,wu2019comprehensive}, e.g., trees for representing evolution of species, or the hierarchical structure of sentences; lattices and meshes for representing regular and irregular discretizations of space, respectively; dynamic networks for representing traffic on roads and social relationships over time. GNNs~\cite{scarselli2008graph,bronstein2017geometric,gilmer2017neural,battaglia2018relational} are a class of deep learning architectures which implement strong relational inductive biases for learning functions that operate on graphs. They implement a form of parameterized message-passing whereby information is propagated across the graph, allowing sophisticated edge-, node-, and graph-level outputs to be computed. Within a GNN there are one or more standard neural network building blocks, typically fully connected layers, which implement the message computations and propagation functions. 
The first GNNs~\cite{gori2005new,scarselli2008graph} were developed and applied for network analysis, especially on internet data, and were trained not with the back-propagation algorithm, but with fixed point iteration via the Almeida-Pineda algorithm~\cite{almeida1987learning,pineda1987generalization}. Li et al.'s~\cite{li2015gated} gated graph sequence neural networks helped integrate more recent deep learning innovations into GNNs, adding RNN modules for improving multiple rounds of message-passing and optimizing their parameters by the back-propagation learning rule~\cite{schmidhuber2015deep,lecun2015deep}. In recent years, the field of GNNs has grown very rapidly, with applications to science and engineering. For example, graph convolution has been used for molecular fingerprinting~\cite{kearnes2016molecular}. Message-passing neural networks~\cite{gilmer2017neural}, which provided a general formulation of GNNs capturing a number of previous methods, were introduced for quantum chemistry. Interaction networks~\cite{battaglia2016interaction} and graph networks~\cite{battaglia2018relational} have been developed for learning to simulate increasingly complex physical systems~\cite{battaglia2016interaction,sanchez2018graph,li2018dpi,sanchez2020learning}. GNNs are situated within the broader family of what Bronstein et al.~\cite{bronstein2017geometric} term \textit{geometric deep learning}, which, aside from GNNs, captures related deep learning methods which apply to data structures beyond vectors, tensors, sequences, etc. Their survey explores graph signal processing and how it can be connected to deep learning, with substantial discussion on how the general principles of CNNs applied to Euclidean signals can be transferred to graph-structured signals.
Key examples of spectral graph convolution approaches are~\cite{bruna2013spectral,defferrard2016convolutional,henaff2015deep}, which applied neural networks to the eigenvalues and eigenvectors of the graph Laplacian. Much work on GNs has focused on learning physical simulation \cite{battaglia2016interaction,sanchez2018graph,li2018dpi,Ummenhofer2020Lagrangian}, similar to Lagrangian methods for particle-based simulation in engineering and graphics. The system is represented as a set of particle vertices, whose interactions are represented by edges and computed via learned functions. Recent work by \cite{sanchez2020learning} highlights how far this sub-field has advanced: they trained models to predict systems of thousands of particles, which represent fluids, solids, sand, and ``goop'', and show generalization to orders of magnitude more particles and longer trajectories than experienced during training. Because GNs are highly parallelizable on modern deep learning hardware (GPUs, TPUs, FPGAs), their approach scaled well, and its speed was on par with heavily engineered state-of-the-art fluid simulation engines, even though they did not optimize for speed in their work. Recently, GNs have been extended by adding inductive biases derived from physics, adjusting their architectures to be consistent with Hamiltonian~\cite{sanchez2019hamiltonian} and Lagrangian mechanics~\cite{cranmer2020lagrangian}, which can improve performance and generalization on various physical prediction problems. Other recent work~\cite{cranmer2019learning} has shown symbolic physical laws can be extracted from the learned functions within a GN. \subsection{The Graph Network Formalism } \label{sec:GNformalism} Here we focus on the \textit{graph network} (GN) formalism~\cite{battaglia2018relational}, which generalizes various GNNs, as well as other methods (e.g., Transformer-style self-attention~\cite{vaswani2017attention}).
GNs are graph-to-graph functions, whose output graphs have the same node and edge structure as the input. Adopting \cite{battaglia2018relational}'s formalism, a graph can be represented by $G = (\mathbf{u}, V, E)$, with $N_v$ vertices and $N_e$ edges. The $\mathbf{u}$ represents graph-level attributes. The set of \textit{nodes} (or vertices) is $V = \{\mathbf{v}_i\}_{i=1:N_v}$, where $\mathbf{v}_i$ represents the $i$-th node's attributes. The set of edges is $E = \{\left(\mathbf{e}_k, r_k, s_k\right)\}_{k=1:N_e}$, where $\mathbf{e}_k$ represents the $k$-th edge's attributes, and $r_k$ and $s_k$ are the indices of the two ($r$eceiver and $s$ender, respectively) nodes connected by the $k$-th edge. A GN's stages of processing are as follows. \begin{align} \begin{split} \mathbf{e}'_k &= \phi^e\left(\mathbf{e}_k, \mathbf{v}_{r_k}, \mathbf{v}_{s_k}, \mathbf{u} \right) \\ \mathbf{v}'_i &= \phi^v\left(\mathbf{\bar{e}}'_i, \mathbf{v}_i, \mathbf{u}\right) \\ \mathbf{u}' &= \phi^u\left(\mathbf{\bar{e}}', \mathbf{\bar{v}}', \mathbf{u}\right) \end{split} \begin{split} \mathbf{\bar{e}}'_i &= \rho^{e \rightarrow v}\left(E'_i\right) \hspace{48pt} \triangleright \text{ Edge block} \\ \mathbf{\bar{e}}' &= \rho^{e \rightarrow u}\left(E'\right) \hspace{48pt} \triangleright \text{ Vertex block}\\ \mathbf{\bar{v}}' &= \rho^{v \rightarrow u}\left(V'\right) \hspace{48pt} \triangleright \text{ Global block} \end{split} \label{eq:gn-functions} \end{align} A GN block contains 6 internal functions: 3 \textit{update functions} ($\phi^e$, $\phi^v$, and $\phi^u$) and 3 \textit{aggregation functions} ($\rho^{e \rightarrow v}$, $\rho^{e \rightarrow u}$, and $\rho^{v \rightarrow u}$). The GN formalism is not a specific model architecture; it does not determine what exactly those functions are.
The update functions are functions with fixed-size inputs and fixed-size outputs, while the aggregation functions take in a variable-sized set of inputs (such as the set of edges connected to a particular node) and output a fixed-size representation of the input set. This is illustrated in figure~\ref{fig:gn_functions}. \begin{figure}[] \centering \subfigure[]{ \includegraphics[]{fig/GN_functions_update.pdf} } \subfigure[]{ \includegraphics[]{fig/GN_functions_aggregate.pdf} } \caption{The internal components of a GN block are \textit{update functions} and \textit{aggregation functions}. (a) The update functions take a set of objects with a fixed-size representation, and apply the same function to each of the elements in the set, resulting in an updated representation (also with a fixed size). (b) The aggregation functions take a set of objects and create one fixed-size representation for the entire set, by using some order-invariant function to group together the representations of the objects (such as an element-wise sum).} \label{fig:gn_functions} \end{figure} The \textit{edge block} computes one output for each edge, $\mathbf{e}'_k$, and aggregates them by their corresponding receiving node, $\mathbf{\bar{e}}'_i$, where $E'_i$ is the set of edges incident on the $i$-th node. The \textit{vertex block} computes one output for each node, $\mathbf{v}'_i$. The edge- and node-level outputs are all aggregated in order to compute the \textit{global block}. The output of the GN is the set of all edge-, node-, and graph-level outputs, $G'=(\mathbf{u}', V', E')$. See Figure~\ref{fig:gn-framework}a. In practice the $\phi^e$, $\phi^v$, and $\phi^u$ are often implemented as simple trainable neural networks, e.g.\ fully connected networks. The $\rho^{e \rightarrow v}$, $\rho^{e \rightarrow u}$, and $\rho^{v \rightarrow u}$ functions are typically implemented as permutation-invariant reduction operators, such as element-wise sums, means, or maximums.
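As a concrete illustration, the following Python sketch implements one pass of Equation~\ref{eq:gn-functions}, with element-wise sums for all three aggregations. The update functions are supplied by the caller (in practice, small neural networks); the function names, argument order, and shapes are illustrative assumptions, not tied to any particular library.

```python
import numpy as np

# Minimal sketch of one GN block pass following Equation (eq:gn-functions).
# phi_e, phi_v, phi_u stand in for the learned update functions; the
# aggregations rho are element-wise sums, a permutation-invariant choice.

def gn_block(u, V, E, senders, receivers, phi_e, phi_v, phi_u):
    num_nodes, num_edges = len(V), len(E)
    # Edge block: e'_k = phi_e(e_k, v_{r_k}, v_{s_k}, u)
    E_out = np.stack([phi_e(E[k], V[receivers[k]], V[senders[k]], u)
                      for k in range(num_edges)])
    # rho^{e->v}: sum the updated edges at their receiving node -> ebar'_i
    e_bar = np.zeros((num_nodes, E_out.shape[1]))
    for k in range(num_edges):
        e_bar[receivers[k]] += E_out[k]
    # Vertex block: v'_i = phi_v(ebar'_i, v_i, u)
    V_out = np.stack([phi_v(e_bar[i], V[i], u) for i in range(num_nodes)])
    # Global block: u' = phi_u(rho^{e->u}(E'), rho^{v->u}(V'), u)
    u_out = phi_u(E_out.sum(axis=0), V_out.sum(axis=0), u)
    return u_out, V_out, E_out
```

Because the aggregations are sums, relabeling the edges or nodes leaves the output unchanged up to the same relabeling, which is the permutation equivariance discussed below.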
The $\rho$ functions must be permutation invariant if the GN block is to maintain permutation equivariance. \begin{figure}[thbt] \begin{center} \subfigure[]{ \includegraphics[width=0.775\linewidth]{fig/GN-full-block} \label{fig:full_gn_block} } \subfigure[]{ \includegraphics[width=0.575\linewidth]{fig/GN-stack-config2} \label{fig:gn_stack} } \caption{(a) A GN block (from~\cite{battaglia2018relational}). An input graph, $G=(\mathbf{u}, V, E)$, is processed and a graph with the same edge structure but different attributes, $G'=(\mathbf{u}', V', E')$, is returned as output. The component functions are described in Equation~\ref{eq:gn-functions}. (b) GN blocks can be composed into more complex computational architectures. The top row shows a sequence of different GN blocks arranged in a series, or depth-wise, fashion. The bottom row replaces the distinct GN blocks with a shared, recurrent, configuration.} \label{fig:gn-framework} \end{center} \end{figure} A key benefit of GNs is that they are generic: if a problem can be expressed as requiring a graph to be mapped to another graph or to some summary output, GNs are often suitable. They also tend to generalize well to graphs not experienced during training, because the learning is focused on the edge and node level --- in fact, if the global block is omitted, the GN is not even aware of the full graph in any of its computations, as the edge and node blocks take only their respective localities as input. Yet when multiple GN blocks are arranged in deep or recurrent configurations, as in Figure~\ref{fig:gn-framework}b, information can be processed and propagated across the graph's structure, allowing more complex, long-range computations to be performed. The GN formalism is a general framework which can capture a variety of other GNN architectures.
Such architectures can be expressed by removing or rearranging internal components of the general GN block in Figure~\ref{fig:gn-framework}, and implementing the various $\phi$ and $\rho$ functions using specific functional forms. For example, one very popular GNN architecture is the Graph Convolutional Network (GCN)~\cite{kipf2016semi}. Using the GN formalism~\cite{battaglia2018relational,gilmer2017neural}, a GCN can be expressed as, \begin{alignat*}{2} \mathbf{e}'_k &= \phi^e\left(\mathbf{e}_k, \mathbf{v}_{s_k} \right) &&= \mathbf{e}_k \mathbf{v}_{s_k} \ , \qquad\text{where } \mathbf{e}_k = \frac{1}{\sqrt{\text{degree}(r_k)\text{degree}(s_k)}} \\ \mathbf{\bar{e}}'_i &= \rho^{e\rightarrow v}\left(E'_i\right) &&= \sum_{\{k \, \vert \, r_k = i\}} \mathbf{e}'_k \\ \mathbf{v}'_i &= \phi^v\left(\mathbf{\bar{e}}'_i \right) &&= \sigma \left(\mathbf{\bar{e}}'_i W\right) \end{alignat*} Figure~\ref{fig:gcn} shows the correspondence between the GCN and the GN depicted in Figure~\ref{fig:gn-framework}. \begin{figure}[!b] \centering \subfigure[]{ \includegraphics[height=4.0cm]{fig/gcn.pdf} } \subfigure[]{ \includegraphics[height=4.0cm]{fig/GN-global-pool.pdf} } \caption{(a) The Graph Convolutional Network (GCN)~\cite{kipf2016semi}, a type of message-passing neural network, can be expressed as a GN with no global attribute and a linear, non-pairwise edge function. (b) A more dramatic rearrangement of the GN's components gives rise to a model which pools vertex attributes and combines them with a global attribute, then updates the vertex attributes using the combined feature as context.} \label{fig:gcn} \end{figure} In section~\ref{sec:guidelines} we will discuss the considerations taken into account when deciding how to choose the actual implementation of the GN's internal functions. The choice of the specific architecture is motivated by the relationships that exist between the elements in the input data and the task one is trying to solve with the model.
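The GCN update above can be sketched in a few lines of Python. The degree handling (in-degree over a symmetric edge list) and the choice $\sigma = \mathrm{ReLU}$ are illustrative assumptions; this is a sketch of the equations, not a reference implementation of~\cite{kipf2016semi}.

```python
import numpy as np

# Sketch of the GCN layer expressed in GN form, matching the equations
# above: e'_k = e_k * v_{s_k} with the fixed (non-learned) normalization
# e_k = 1/sqrt(deg(r_k) deg(s_k)); ebar'_i sums incoming edges at node i;
# and v'_i = sigma(ebar'_i W), here with sigma = ReLU.

def gcn_layer(V, senders, receivers, W):
    n = V.shape[0]
    # In-degree per node; for a symmetric (both-directions) edge list this
    # equals the number of neighbors.
    deg = np.bincount(receivers, minlength=n).astype(float)
    agg = np.zeros_like(V)
    for s, r in zip(senders, receivers):
        e_k = 1.0 / np.sqrt(deg[r] * deg[s])  # fixed edge weight e_k
        agg[r] += e_k * V[s]                  # e'_k summed at the receiver
    return np.maximum(agg @ W, 0.0)           # v'_i = ReLU(ebar'_i W)
```

Note that the only learned parameters are in $W$: the edge function is a fixed scalar weight, which is what makes the GCN a restricted special case of the general GN block.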
\section{Survey of Applications to Particle Physics}\label{sec:applications} Beyond discriminating signals from background in physics analysis, machine learning can be applied in many of the steps of event processing: triggering, reconstruction, and simulation. GNNs are used to make predictions in three different ways: at the level of the graph, the node, or the edge, depending on the task at hand. We briefly describe below the challenges and the methods applied, returning to them in further detail in section~\ref{sec:guidelines}. All the presented methods were developed on simulated events, and no performance on real data has been reported so far. In each line of work described below, a decision was first made about how the data could be expressed as a graph: What are the entities and relations which would be represented as nodes and edges, respectively? What is the required output, i.e., edge-, node-, or graph-level predictions? From there, choices about the specific GNN architecture were made to reflect the desired computation: Is a global output network required to produce graph-level outputs? Should pairwise interactions among nodes be computed, or more GCN-like summation and non-linear transformation? How many message-passing steps should be used, in order to propagate information among distant nodes in the graph? \subsection{Graph Classification} \paragraph{Jet Classification.} \textit{Jets} or \textit{showers} are sprays of stable particles stemming from multiple successive interactions and decays of particles, originating from a single initial object. The identification of this original object is of paramount importance in particle physics. Because of the rather large lifetime of b-hadrons~\cite{Tanabashi:2018oca}, and hence a significantly displaced decay vertex, identification of b-jets (\textit{b-tagging}) using classical methods has been rather successful.
With the advent of deep learning methods, lower-level information has been used to improve the performance of b-tagging, and has opened the possibility of identifying jets coming from other particles (c-hadrons, top quarks, taus, etc.). Jets coming from pure hadronic interactions driven by quantum chromodynamics (QCD), the so-called \textit{QCD jets}, cover an extremely large phase space and constitute an irreducible background to other classes of jets. In particular, within the framework of the particle flow reconstruction \cite{Sirunyan:2017ulk}, the event is interpreted through a set of particle candidates. As such, in references \cite{henrionneural,Komiske_2019,qu2019particlenet,Moreno_2020,moreno2019interaction,mikuni2020abcnet,Bernreuther:2020vhm} the collection of particle candidates is represented as a graph and various methods are applied. The authors of~\cite{henrionneural} use a fully connected graph and a message-passing architecture to learn the adjacency matrix, comparing several directed and undirected graph constructions. The classification of jets originating from the hadronic decay of a $W$ boson versus QCD jets is shown to improve with the proposed method. The use of physics-based inductive biases to improve the learning of the adjacency matrix is left for future work. It should be noted that learning the adjacency matrix is related to learning attention in~\cite{mikuni2020abcnet}. In~\cite{qu2019particlenet} the authors use the \textit{edgeconv} method from~\cite{wang2018dynamic} to derive a point cloud architecture for jet tagging. The connectivity of the graph is defined dynamically by computing node neighborhoods over the distance in either the input space, or an intermediate latent space when graph layers are stacked. The architecture respects particle permutation invariance by averaging the contributions from the connected neighbors.
The performance of this model on the quark/gluon discrimination (separating jets originating from a quark or a gluon) and top tagging (discriminating hadronic top decays from QCD jets) tasks is reported to be better than that of other previously studied architectures. The learned edge function is constrained to take as input a node feature and the feature difference between this node and the connected node. In \cite{Bernreuther:2020vhm} the same model architecture is applied to the specific case of semi-visible jets originating from the cascade decay of hypothetical dark hadrons. The method outperforms neural networks that operate on images, as well as models including physical inductive biases \cite{Butter:2017cot}. The authors demonstrate an order of magnitude improvement in the sensitivity of a dark matter search when using this method. The authors of~\cite{Moreno_2020,moreno2019interaction} take inspiration from~\cite{battaglia2016interaction} and adapt the interaction network architecture to the purpose of graph categorisation. Using a fully connected graph over the particles of a jet and the primary vertices of the event, a graph category is extracted after one step of message passing. The performance of this model on a multi-class categorisation (light quarks, gluons, W and Z boson hadronic decays, and hadronic top jets) is better than that of the other non-graph-based architectures against which it was compared. On the specific use case of tagging jets which stem from Higgs bosons decaying into a pair of b quarks, the algorithm outperforms state-of-the-art methods, even when a proper mass decorrelation method \cite{ATLAS:2018ibz} is applied. The authors report some potential computational performance issues with running the model for predictions. The measurement, however, was done with a model obtained from a format conversion between major frameworks, and the performance could be improved with a native implementation instead.
In~\cite{Komiske_2019} the authors apply the \textit{Deep Sets} method from~\cite{NIPS2017_6931} to jet tagging. They propose a simplified model architecture with provable physics properties, such as infrared and collinear safety. The features of each particle are encoded into a latent space and the graph category is extracted from the summed representation in that latent space. The model has no connectivity, and thus no attention or message passing, and pools information globally across all the elements before the categorisation is output. Yet the performance of this simple model on the quark/gluon classification task is surprisingly on par with that of other, more complicated models. The authors provide ways of interpreting what the model has learned, and are able to extract closed-form observables from their trained model. In~\cite{mikuni2020abcnet} the \textit{graph attention network} from~\cite{velickovic2018graph} is adapted for graph categorisation. The node and edge features are created and updated by means of multiple fully connected neural networks operating on the graph, and an additional attention factor, equivalent to a weighted, directed adjacency matrix, is computed per directed edge and used in the update rule. A k-nearest-neighbor connectivity pattern is constructed using the distance over the edge features, initialized to the difference between node features, and later over a latent space when graph layers are stacked. Stability of the models is improved with the use of a multi-head mechanism, and skip connections at multiple levels are added to facilitate the information flow. Their model outperforms the model from~\cite{qu2019particlenet} on the quark/gluon classification task, indicating the importance of the attention mechanism --- to which we come back in sections~\ref{sec:modelarch} and~\ref{sec:discussion}.
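The Deep Sets construction of~\cite{Komiske_2019} described above --- encode each particle independently, sum the encodings, read the category off the pooled representation --- can be sketched in a few lines. The `encode` and `classify` functions stand in for the trained networks; their names are illustrative.

```python
import numpy as np

# Minimal sketch of Deep-Sets-style pooling: per-particle encoding, a
# permutation-invariant sum, then a classifier on the pooled representation.
# No edges, no message passing.

def deep_sets_output(particles, encode, classify):
    latent = np.sum([encode(p) for p in particles], axis=0)  # order-invariant
    return classify(latent)
```

The sum makes the output invariant to the ordering of the input particles by construction, which is the key structural property of this family of models.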
\paragraph{Event Classification.} Here we use the term \textit{event} for the capture by an experiment of the full history of a physics process. In astroparticle physics, for example, it is the collection of signals that covers the interaction of a high-energy particle with the atmosphere. The jet tagging task presented in the previous section is part of a full event identification in collider physics. Event classification is the task of predicting or inferring the physics process at the origin of the recorded data. The authors of~\cite{choma2018graph} applied a graph convolution method to the classification of the signal in the IceCube detector, to determine whether a muon originated from a cosmic neutrino or from a cosmic ray showering in the earth's atmosphere. The adjacency matrix of a fully connected graph of the detector sensors is constrained to a Gaussian kernel on the physical distance, with a learnable locality parameter. Node features are updated by application of the adjacency matrix and a non-linear activation. The graph property is extracted from the sum over the latent features of the nodes of the graph. This GNN model yields a signal-to-background ratio about three times that of the baseline analysis of this signal. In~\cite{Abdughani:2018wrw}, the \textit{message passing neural network} architecture from~\cite{gilmer2017neural} is used over a fully connected graph composed of the final-state particles and the missing transverse energy. Messages are computed from the node features and a distance in the azimuth-rapidity plane first, then in the node latent space for later iterations. Such messages are passed across the graph in two iterations, and each node receives a categorisation. The node-averaged value is used to predict the event category. The model is compared to densely connected models, and shows superior performance in terms of the $S/\sqrt{B}$ analysis significance.
From the same authors, in~\cite{Ren:2019xhp,Abdughani:2020xfo}, a similar architecture is applied to event classification for other signal topologies, demonstrating the versatility of the method. \subsection{Node Classification and Regression} \paragraph{Pileup Mitigation.} With a view to increasing the overall probability of producing rare processes and exotic events, the particle density of the bunches composing the colliding beams can be increased. This results in multiple possible interactions per beam crossing. The downside of this increased probability is that, when occurring, an interesting interaction will be accompanied by other spurious, less interesting interactions (\textit{pileup}), considered as noise for the analysis. Mitigation of pileup is of prime importance for analyses at colliders. While it is rather easy to suppress charged particles by virtue of the primary vertex they originate from, neutral particles are harder to suppress. In a particle flow reconstruction \cite{Sirunyan:2017ulk}, the state of the art is to compute a pileup weight per particle \cite{Bertolini:2014bba} and use it for mitigation. In~\cite{martinez2018pileup} the authors utilize the \textit{gated graph network architecture}~\cite{li2015gated} to predict a per-particle probability of belonging to the pileup part of the event. The graph is composed of one node per charged and neutral particle in the event, and the connectivity is restricted to $\Delta R \equiv \sqrt{\delta \phi^2 + \delta \eta^2} < 0.3$ in the azimuth-pseudorapidity plane. An averaged R-dependent message is computed and gated with each previous node representation by means of a gated recurrent unit (GRU) to form the new node representation. The per-particle pileup probability is extracted with a dense model, after three stacked graph layers and a skip connection into the last graph layer. The model outperforms other standard methods for pileup subtraction and improves the resolution of several physical observables.
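The $\Delta R$-based connectivity used above can be sketched as follows: two particles are connected when $\Delta R = \sqrt{\delta \phi^2 + \delta \eta^2} < 0.3$ in the azimuth-pseudorapidity plane. The explicit handling of the $\phi$ wrap-around and the function name are illustrative assumptions, not taken from~\cite{martinez2018pileup}.

```python
import numpy as np

# Sketch: build (senders, receivers) edge lists by connecting all particle
# pairs with Delta R = sqrt(dphi^2 + deta^2) below a threshold.

def delta_r_edges(phi, eta, r_max=0.3):
    senders, receivers = [], []
    n = len(phi)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Wrap the azimuthal difference into (-pi, pi].
            dphi = (phi[i] - phi[j] + np.pi) % (2.0 * np.pi) - np.pi
            deta = eta[i] - eta[j]
            if np.hypot(dphi, deta) < r_max:
                senders.append(j)
                receivers.append(i)
    return senders, receivers
```

Because the criterion is symmetric, every qualifying pair produces edges in both directions, which is the usual convention for message passing on undirected relations.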
The authors of~\cite{mikuni2020abcnet} take inspiration from the \textit{graph attention network} of~\cite{velickovic2018graph} to predict a per-particle pileup probability. An architecture very similar to the one used for jet classification (described previously) is used to create a global graph latent representation, which in turn is used to compute an output that is mapped back to each node, relying on a fixed ordering of the nodes. This method is shown to improve the resolution on the jet and di-jet mass observables, while being stable over a large range of pileup densities. \paragraph{Calorimeter Reconstruction.} A \textit{calorimeter} is a detector whose goal is to contain and measure the total energy of a system. In particle physics, a calorimeter is commonly composed of, on the one hand, inactive material inducing showering of particles and energy loss (the \textit{absorber}) and, on the other hand, a sensitive material that measures the energy collectively released in the absorber. Reconstruction of the energy of the incoming particle in such a sampling calorimeter involves calibration and clustering of the signals of the various cells. In~\cite{Qasim_2019} a graph-network-based approach is proposed to cluster the signals in a high-granularity calorimeter and assign them to two incoming particles. A latent edge representation is constructed in the latent space of the nodes, using a potential function of the distance, also in the latent space. Two methods are proposed for the graph connectivity, one --- \textit{GravNet} --- using nearest neighbors in a latent space, the other --- \textit{GarNet} --- using a fixed number of additional nodes (dubbed \textit{aggregators}) in the graph. Node features are updated using concatenated messages from multiple aggregation methods, and the model outputs the fraction of each cell's energy belonging to each particle.
The proposed methods slightly improve over more classical approaches, and could be beneficial in more complex detector geometries than the one studied. \paragraph{Particle Flow Reconstruction.} Typically, detectors in particle physics are composed of multiple sub-detectors with various sensing technologies. Each sub-detector targets the measurement of a specific characteristic of the particle. The assembly of all measurements allows for the characterisation of the particle properties. The \textit{particle flow} --- or \textit{energy flow} --- reconstruction is an algorithm that aims at assigning to a candidate particle all the measurements in each sub-detector \cite{Sirunyan:2017ulk}. Since all particles produced during a collision can potentially be reconstructed, \textit{particle flow reconstruction} allows for a fine-grained interpretation and analysis of collision events. The author of~\cite{kieseler2020object} proposes the \textit{object condensation} loss formulation, using a GNN method to extract the particles' information from the graph of individual measurements. In this context, the model is set to predict the properties of a smaller number of particles than there are measurements, in essence performing a graph reduction. A stacked-\textit{GravNet}-based model performs node-wise regression of a kinematic corrective factor together with a \textit{condensation weight}. The latter indicates whether a node of the graph is to be considered as representative of a particle in the event and have its regressed quantities assigned to that particle. The performance of this algorithm is compared with a baseline particle-flow algorithm in rather sparse Large Hadron Collider (LHC) environments. The proposed method is shown to be more efficient and to produce fewer fake particles than the standard approach.
\paragraph{Efficiency Parametrization.} The analysis of particle physics data --- in particular collider experiment data --- requires applying selection criteria to the large volume of data, with a view to enhancing the proportion of interesting signals. It is crucial to determine with as little uncertainty as possible the fraction of signal passing these selections, if one wants to measure the rate of production of that signal during the experiment. Much care is taken in determining these selection efficiencies, as they play significant roles in measuring the cross sections of known processes, or in setting limits on the production of unknown signals. The efficiencies can be measured from data or simulation, per event or for any component of it. It is often the case that the efficiency of a specific selection on a component of the full event also depends on the other components of the event. Taking into account the correlations between all components of an event is a hard task that machine learning can help with. The authors of~\cite{badiali2020efficiency} use GNNs to learn the per-jet tagging efficiency, from a fully connected graph representation of the jets in the event. The model is a message-passing GNN. The edge and node updates are both implemented as simple fully connected networks. The final node representation is used to predict the per-jet efficiency for each jet in an event. The GN allows taking into account the dependency of the per-jet efficiency on the other jets in the event. The comparison is made with the classical method of explicitly parametrizing the per-jet efficiency with a two-dimensional histogram, whose axes are the jet transverse momentum and pseudo-rapidity. The authors show how the graph representation and GNN parametrisation improve the determination of the per-jet efficiency, compared to the more traditional method.
\subsection{Edge Classification} \paragraph{Charged Particle Tracking.} Charged particles have the property of ionizing the material they traverse. This property is utilized in a tracking device (\textit{tracker}) to perform precise measurements of the passage of charged particles. Contrary to calorimeters, trackers should not alter the energy of the incoming particle too much; as such, they usually produce a sparse spatial sampling of the trajectory. The reconstruction of the trajectories of the original particles amounts to finding which sets of isolated measurements (\textit{hits}) belong to the same particle. Most tracking devices are embedded in a magnetic field that curves the trajectories and hence provides a handle for measuring the particle momentum component transverse to the magnetic field, since this quantity and the curvature are inversely proportional. The authors of~\cite{farrell2018novel} propose a GNN approach to charged particle tracking using edge classification. Each node of the graph represents one sparse measurement, or hit, with edges constructed between pairs of hits with geometrically plausible relations. Using multiple updates of the node representations and edge weights over the graph (using the edge weights as attention), the model learns which edges truly connect hits belonging to the same track. This approach transforms the clustering problem into an edge classification that defines the sub-graphs of hits belonging to the same trajectory. This method achieves high accuracy when applied in a simplified setting, and is promising for more realistic scenarios. In~\cite{Ju:2020xty}, a GNN model involving message passing is presented and provides improved performance. \paragraph{Secondary Vertex Reconstruction.} The particles within a jet often originate from various intermediate particles that are worth reconstructing for the purpose of identifying the origin of the jet (see the paragraph on jet identification above).
The decays of the intermediate particles are identified as secondary vertices within the jet, using clustering algorithms on the particles, such as the adaptive vertex reconstruction~\cite{5734880}. Based on their association to secondary vertices, the particles within a jet can thus be partitioned. In~\cite{serviansky2020set2graph}, the authors develop a general formalism for \textit{set-to-graph} neural networks and provide a mathematical proof that their formulation is a universal approximator of functions mapping an input set onto a graph structure, with all invariances taken into account. In particular, they apply a \textit{set-to-2-edge} --- predicting single edge characteristics from the input set --- approximation to the problem of particle association within a jet. The model is a composition of an embedding model, a fixed broadcasting mapping, and a graph-to-graph model. All components are actually rather simple, and the expressivity of the full model stems from the specific equivariant formulation. Their model outperforms the standard methods on jet partitioning by about 10\% over multiple metrics. \section{Formulating HEP tasks with GNN}\label{sec:guidelines} The articles described in section~\ref{sec:applications} make use of multiple graph connectivity schemes, model architectures, and loss functions. Experience shows that using our knowledge of the underlying physics to encode the relationships between the nodes --- whatever they may represent --- in both the input graph and the model architecture is key to developing successful algorithms. Unfortunately, it is not always clear which methods and model architectures will outperform the others. This section aims to clarify the choices made and provide a checklist of considerations for the particle physicist looking to develop a new application using a GNN. \subsection{Task Definition} The first step is to decide what function one wants to learn with the GNN.
In some applications this is trivial --- for example jet, event, or particle classification. In those cases a GNN is used to learn some representation of the node or the entire graph/set, and a standard classifier is trained on that representation. For tasks such as segmentation or clustering, there is a choice between formulating the task as edge classification or as something like the object condensation method, which uses node representations to formulate a partition of the input set. The object condensation method has an important advantage, in that it computes relationships between objects (the attractive or repulsive potential) only while training the algorithm, in the computation of the loss function. An edge classifier will learn an edge representation and use that to classify edges. The number of edges can be large, increasing the computation and memory requirements of the algorithm. The determination of the set partition in the object condensation method is a simple function of the node representation, which greatly reduces those requirements. None of the work presented in section~\ref{sec:applications} uses a mapping of the input onto the edges of the graph. Because an edge can only link two nodes --- while a node can be connected to as many edges as desirable --- the construction of such a graph would require a specific structure of the input. One such use case could be situations where observations arise from two concurrent measurements, such as hit positions in stereo strip detectors. The detector is composed of two rectangular modules with thin strips of sensors along one dimension, and the modules are tilted with respect to each other by a couple of degrees so as to have the strip sensors overlap and hence create a grid. With strip measurements positioned on the nodes, the important information would be located on the edges, as a combination of two such hits. Other examples in network communication might also be relevant.
\subsection{Graph Construction}\label{sec:graphconstruction} In most particle physics applications, the nature of the relationships between the different elements in the set is not clear-cut (as it would be for a molecule or a social network). Therefore a decision needs to be made about how to construct a graph from the set of inputs. Different graph construction methods are illustrated in figure~\ref{fig:graphconstruction}. Depending on the task, one might even want to avoid creating any pairwise relationships between nodes. If the objects have no pairwise conditional dependence, a DeepSet~\cite{Komiske_2019} architecture with only node and global properties might be more suitable. Edges in the graph serve three roles: \begin{enumerate} \item The edges are communication channels among the nodes. \item Input edge features can indicate a relationship between objects, and can encode physics-motivated variables about that relationship (such as the $\Delta R$ between objects). \item Latent edges store relational information computed during message-passing, allowing the network to encode whatever variables it deems relevant for the task. \end{enumerate} \begin{figure}[b] \centering \subfigure[]{ \includegraphics[]{fig/fully_connected} } \subfigure[]{ \includegraphics[]{fig/nearest_neighbors} } \subfigure[]{ \includegraphics[]{fig/dynamic} } \caption{Different methods for constructing the graph. (a) Connecting every node to every other node. (b) Connecting neighboring nodes in some predefined feature space. (c) Connecting neighboring nodes in a learned feature space.} \label{fig:graphconstruction} \end{figure} In cases where the input sets are small ($N_v \sim \mathcal{O}(10)$), the typical and easiest choice is to form a fully connected graph, allowing the network to learn which object relationships are important.
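Constructions (a) and (b) of figure~\ref{fig:graphconstruction} can be sketched as follows; the function names are illustrative, and the feature space used for the neighborhood search is left to the caller.

```python
import numpy as np

# Sketches of two graph construction methods: (a) a fully connected
# directed graph, and (b) connecting each node to its k nearest neighbors
# in a predefined feature space X (one row per node).

def fully_connected_edges(n_nodes):
    # Every ordered pair except self-loops.
    pairs = [(j, i) for i in range(n_nodes) for j in range(n_nodes) if i != j]
    senders, receivers = zip(*pairs)
    return list(senders), list(receivers)

def knn_edges(X, k):
    # Pairwise Euclidean distances; exclude self-distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    senders, receivers = [], []
    for i in range(len(X)):
        for j in np.argsort(d[i])[:k]:   # k nearest neighbors send edges to i
            senders.append(int(j))
            receivers.append(i)
    return senders, receivers
```

The fully connected construction produces $N_v(N_v-1)$ edges, which is what makes it impractical for large sets, while the k-nearest-neighbor construction caps the edge count at $k N_v$.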
In larger sets, as the number of edges between all nodes increases as $N_e \propto ({N_v})^2$, the computational load of using a neural network to create an edge representation or compute attention weights becomes prohibitive. One possible work-around is to choose a fixed edge feature that is easy to pre-compute --- such as the distance between detector modules. If an edge-level computation is required, it is necessary to form only some of the edges. Edges can be formed based on a relevant metric, such as the $\Delta R$ between particles in a detector, or the physical distance between detector modules. Given a distance measure between nodes, some criterion for connecting them needs to be formulated, such as connecting k-nearest neighbors in the feature space. The node features used to connect edges can also be based on a learned representation. This is sometimes referred to as \textit{dynamic} graph construction, and is used by the EdgeConv~\cite{qu2019particlenet} and GravNet~\cite{Qasim_2019} architectures, for example. We will discuss this in more detail in section~\ref{sec:modelarch}, showing the connection between the idea of dynamic graph construction and attention mechanisms. When the graph is constructed dynamically, such as by using the node representations to connect edges between k-nearest neighbors, the gradient of the neural network parameters is only affected by those nodes that have actually been connected. Since the node-neighborhood indexing is non-differentiable, its parameters cannot be learned with gradient descent, but they can be optimized via hyper-parameter search. In the initial stages of the training, the edge formation is essentially random, allowing the network to explore which node representations should be closer together in the latent space. During later stages of the training, one may wish to encourage further exploration by the network.
One possible way to do this is to inject random edges: for example, besides connecting nodes to their k-nearest neighbors in the latent space, adding a small number of random connections to nodes further away in the latent space. A recent paper~\cite{johnson2020learning} introduces a reinforcement learning agent which traverses an input graph to reach nodes which should be connected by new edges. Its policy is optimized for the performance of some downstream task, so that the nodes it chooses to connect with new edges improve the task performance. \subsection{Model Architecture} \label{sec:modelarch} The design of the model architecture should reflect a logical combination of the inputs towards the learning task. In the language of the GN formalism (section~\ref{sec:GNformalism}), we need to select a concrete implementation of the GN block update and aggregation functions $\phi$ and $\rho$, and decide how to configure their sequence inside the GN block. Additionally, we need to decide which kinds of GN blocks we want to combine and how to stack them together. As explained in section~\ref{sec:GNformalism}, different architectures, such as Graph Convolutional Networks and Graph Attention Networks, are specific choices for constructing a GNN, but they are all equivalent in the sense that their output is a graph with learned node/edge/graph representations which are then used to perform the actual task. \paragraph{GN block functions.} The key question here is: what logical steps would one take to form the GN block output in a way that serves the task, and which parts of this logical process should be modeled with neural networks? The most general GN block (as shown in figure~\ref{fig:full_gn_block}) could have all of its update functions implemented as neural networks, which allows the most flexibility in the learning process. This flexibility might not be required for the task, and it might carry computational costs that we wish to keep to a minimum. 
Therefore it is probably better to start with a simple architecture, and only add complexity gradually, until the algorithm's performance is satisfactory. Figure~\ref{fig:choosingGNblockfunctions} shows two examples of possible configurations: either creating an edge representation before aggregating edges and forming a node update, or using global aggregation before a node update. Both configurations result in an updated node representation, but one of them is based on a sum of pair-wise representations and the other on a global sum of node representations; the information content is the same, but the inductive bias is different. For example, the authors of~\cite{badiali2020efficiency} assumed that the jet-tagging efficiency is heavily affected by the $\Delta R$ between neighboring jets; therefore an edge update step created a representation of the pair-wise interaction between jets, which was then summed for each jet to create the updated node representation. In contrast, the authors of~\cite{Komiske_2019} used a DeepSet architecture, in which each node representation is created independently of its neighbors; the node representations, each weighted by the particle's energy, are then summed to create the graph representation. \begin{figure} \centering \includegraphics[]{fig/node_update.pdf} \caption{Possible architectures for a GN block that create an updated node representation. Using an edge representation as an intermediate step (upper diagram) gives a different inductive bias to the model, compared to using a global representation of the set (lower diagram). 
The function names are from equation \ref{eq:gn-functions} and figure \ref{fig:full_gn_block}.} \label{fig:choosingGNblockfunctions} \end{figure} \paragraph{Attention Mechanisms.} Another important component that can be used in defining the $\rho^{e\rightarrow v}$ and $\rho^{v,e\rightarrow u}$ aggregation functions is an \textit{attention mechanism}, as illustrated in figure~\ref{fig:attention}. The term \textit{attention} is rooted in the perceptual psychology and neuroscience literature, where it refers to the phenomenon and mechanisms by which a subset of incoming sensory information is selected for more extensive processing, while other information is deprioritized or filtered out. The key consideration when defining and adding an attention mechanism is whether different parts of the input data are more important than others. For example, when classifying jets, particles that originate from a secondary decay are an important footprint of a particular class of jets, so those particles may be more important for the classification task. There are a few different implementations of attention mechanisms. They all share the basic concept of using a neural network or a pre-defined function to compute weights which represent the relative importance of different elements in a set. In the GN block $\rho$ functions, these weights are used to create weighted sums of the representations of the different elements. \begin{figure} \centering \includegraphics[]{fig/attention} \caption{Attention mechanisms allow the network to learn the relative importance of different nodes/edges in the aggregation functions. 
The red node is a node whose neighbors are being aggregated by $\rho^{e\rightarrow v}$; the attention mechanism will learn to provide relative weights for the adjacent nodes/edges (the green highlights) such that the output of $\rho^{e\rightarrow v}$ is a weighted sum of either the node or edge representations.} \label{fig:attention} \end{figure} Here we want to draw attention to the connection between attention mechanisms and dynamic graph construction. Figure~\ref{fig:compareEdgeConvGarNet} shows the structure of two architectures discussed in section~\ref{sec:applications}, the EdgeConv and GravNet layers. These are both GN block implementations: they take as input a set of nodes (without explicit edges) and output an updated node representation. Both begin with a node embedding stage, which creates a node representation without exchanging information between the nodes. This node embedding (or only part of its feature vector, in the case of GravNet) is interpreted as a position of the node in a latent Euclidean space, and edges are formed between k-nearest neighbors. This can be thought of as a fully connected graph with an attention mechanism that assigns a weight of 1 to nodes within the set of k-nearest neighbors, and 0 otherwise. The advantage of this procedure over using a neural network to compute attention weights is the much lower computational cost of both computing the edge attention weight and the subsequent edge-related operations. \begin{figure} \centering \includegraphics[]{fig/GravGarEdgeConv.pdf} \caption{The GN block structure of the EdgeConv and GravNet layers as described in the GN formalism. The node embedding stage is a GN block which operates on the nodes independently (without any information exchange between them), followed by a GN block which creates an edge representation for every pair of vertices, aggregates the edges for each node and then updates the vertex representation. 
The edge update function $\phi^e$ does not use a neural network, but a pre-defined function of the node representation, leading to a reduction in computational cost.} \label{fig:compareEdgeConvGarNet} \end{figure} It is worth noting that the GarNet layer~\cite{Qasim_2019} can be described as a form of \textit{multi-headed} self-attention mechanism~\cite{vaswani2017attention}. The GarNet layer interprets the node embedding as $s$ different ``distances'' (with $s$ being the dimension of the embedding). These distances are attention weights over each node of the graph, and they are used to compute $s$ different weighted sums; these are the $s$ different \textit{heads} of the attention mechanism. The weighted sums are propagated back to the nodes again via the attention weights of each node to each of the $s$ attention heads. The reason GarNet is computationally affordable without a hard cutoff, such as k-nearest neighbors, is that $\phi^v$, the node embedding function, is the only one computed with a neural network. The attention weights are all computed with pre-defined functions of the node embedding (specifically, the function is $\exp(-|w|)$, where $w$ is the attention weight). \paragraph{Stacking GN blocks.} A stack of GN blocks (as described in figure~\ref{fig:gn_stack}) serves two purposes. First, in the same way that stacked layers in any neural network architecture (such as a CNN) can be thought of as gradually constructing a high-level representation of the data, GN blocks arranged sequentially serve the same purpose for constructing the node/edge and graph representations. Therefore, additional GN blocks increase the depth of the model and its expressive power. Second, after one iteration of message passing in a single GN block, a node has only exchanged information with its immediately connected neighbors. This is illustrated in figure~\ref{fig:message_passing}. 
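The node update performed by one such message-passing step can be sketched in a few lines; here $\phi^e$ and $\phi^v$ are hand-written toy functions standing in for the (normally learned) update networks, and scalar node features are an illustrative assumption:

```python
# One message-passing step over scalar node features. phi_e and phi_v are
# toy stand-ins for the learned update functions of a GN block.
def message_passing_step(node_feats, edges):
    """node_feats: list of scalars; edges: list of (sender, receiver) pairs."""
    def phi_e(sender, receiver):          # edge update (illustrative choice)
        return sender - receiver

    aggregated = [0.0] * len(node_feats)  # rho^{e->v}: sum of incoming messages
    for s, r in edges:
        aggregated[r] += phi_e(node_feats[s], node_feats[r])

    def phi_v(v, m):                      # node update (illustrative choice)
        return v + 0.5 * m

    return [phi_v(v, m) for v, m in zip(node_feats, aggregated)]

x = [1.0, 2.0, 4.0]
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
x1 = message_passing_step(x, edges)
```

After one step, node 2 has only heard from node 1; applying the same step again would extend its receptive field to node 0, mirroring the behavior illustrated in figure~\ref{fig:message_passing}.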
Multiple iterations with a GN block (either the same block applied multiple times, or different blocks applied in a sequence) increase each node's receptive field, as the representations of its neighboring nodes were previously updated with information from their own neighbors. Often skip or residual connections, which combine the input with the output, are used to prevent corruption of the updated representations and to preserve the gradient signal over many message-passing steps, as is common in CNNs and RNNs. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig/message_passing} \caption{Each iteration of message passing between nodes increases a node's receptive field. For example, the node in red communicates with its three connected neighbors (red outline) in the first message-passing step. The orange and yellow dotted lines represent the nodes that communicate after two and three iterations, respectively. Nodes outside the yellow line have not exchanged information with the red node even after three iterations.} \label{fig:message_passing} \end{figure} \section{Summary and Discussion}\label{sec:discussion} The papers reviewed in section~\ref{sec:applications} can be seen as the first wave of applications of graph neural network architectures to diverse tasks in high energy physics. The methods show superior performance over other model architectures, thanks to their inductive bias, a reduced number of parameters, more elaborate loss functions, and above all a much more natural data representation. Graphs are constructed from observables in various ways, often with sparse connectivity to lessen the computational requirements. While multiple architectures are presented under different names and with slightly different formalisms, they all share the core concept of exchanging information across the graph. We deciphered the variety of models in section~\ref{sec:guidelines} by providing some considerations on how the models were built. 
In the following, we provide some new directions to be considered for the next generation of graph neural network applications in high energy physics. \paragraph{Transformer, Reformer, etc.} Following the discussion of the GravNet and EdgeConv layers in section~\ref{sec:modelarch} and their relation to attention mechanisms, another class of models which are closely related to GNNs, and which perform a type of soft structural prediction, are Transformer architectures, based on the self-attention mechanism~\cite{vaswani2017attention}. In GNN language, a Transformer computes normalized edge weights in a complete graph (i.e., a graph with edges connecting all pairs of nodes), and passes messages along the edges in proportion to these weights, analogous to a hybrid of graph attention networks~\cite{velickovic2018graph} and GCNs~\cite{kipf2016semi}. In the GN notation described in~\cite{battaglia2018relational} and used explicitly in graph attention networks~\cite{velickovic2018graph}, the Transformer uses a $\phi^e$ which produces both a vector message and a scalar unnormalized weight, and the $\rho^{e\rightarrow v}$ function normalizes the weights before computing a weighted sum of the message vectors. This allows a set of input items to be treated as nodes in a graph, without observed input edges, with the edge structure inferred and used within the architecture for message passing. Different variants of attention mechanisms are a way to assign different weights in the pooling operations $\rho^{e\rightarrow v}$ and $\rho^{v,e\rightarrow u}$, as illustrated in figure~\ref{fig:attention}. 
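In this view, a scalar toy version of self-attention over a complete graph can be sketched as follows; the score function (a simple product of sender and receiver features) is an illustrative stand-in for the learned $\phi^e$:

```python
import math

# Toy scalar self-attention on a complete graph: each receiver computes an
# unnormalized score against every sender, and rho^{e->v} softmax-normalizes
# the scores before taking the weighted sum of sender features.
def attention_aggregate(node_feats):
    updated = []
    for x_r in node_feats:                          # receiver node
        scores = [x_r * x_s for x_s in node_feats]  # unnormalized edge weights
        m = max(scores)                             # subtract max for stability
        w = [math.exp(s - m) for s in scores]
        total = sum(w)
        w = [w_i / total for w_i in w]              # normalized attention weights
        updated.append(sum(w_i * x_s for w_i, x_s in zip(w, node_feats)))
    return updated

updated = attention_aggregate([0.0, 1.0, 2.0])
```

Each updated feature is a convex combination of the sender features, with receivers that score highly against large senders pulled towards them.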
The implementation of attention should reflect the nature of the interaction between the objects in the set, as they relate to the task. The Reformer~\cite{kitaev2020reformer} architecture overcomes the quadratic computational and memory costs that challenge traditional Transformer-based methods by projecting nodes into a learned high-dimensional embedding space, where nearest neighbors are efficiently computed to inform a sparse graph over which to pass messages. The recent Linformer~\cite{wang2020linformer} method is similar, but with a low-rank approximation to the soft adjacency matrix. \paragraph{Graph generative models.} Importantly, the GN does not predict structural changes directly. However, many recent papers use GNs (or other GNNs) to decide how to modify a graph's structure. For example, \cite{li2018learning} and \cite{nash2020polygen} are autoregressive graph generators, which use a GN or Transformer to predict whether a new vertex should be added to a graph (via the graph-level output), and which existing vertices to connect it to with edges (via the vertex-level outputs). The GraphRNN~\cite{you2018graphrnn} and Graphite~\cite{grover2018graphite} are generative models over edges that use an RNN for sequential prediction, and GraphGAN~\cite{wang2018graphgan} is an analogous method based on generative adversarial networks. \cite{kipf2018neural}'s Neural Relational Inference treats the existence of edges as latent random variables, and trains a posterior edge-inference front-end via variational autoencoding. In \cite{hamrick2018relational} and \cite{bapst2019structured}, a GN is used to guide the policy of a reinforcement learning agent and build graphs that represent physical scenes. The DiffPool~\cite{ying2018hierarchical} architecture (illustrated in figure~\ref{fig:graph_reduction}) is an attention-based soft edge-prediction mechanism, but over hierarchies of graphs, where lower-level ones are pooled to higher-level ones. 
\begin{figure} \centering \includegraphics[]{fig/partition} \caption{The DiffPool~\cite{ying2018hierarchical} layer and similar architectures allow the graph structure to be modified as an intermediate step of the model computation. In the illustration, nodes in the input graph are grouped together to form nodes in the output graph. Each node is colored according to the outline of the nodes associated with it in the input graph. The output graph adjacency matrix is also learned as part of the DiffPool layer output.} \label{fig:graph_reduction} \end{figure} Generative models of graphs have not been explored much in particle physics, though some unpublished work is on-going. The need for computational resources for simulation in particle physics is almost as large as the requirements for event reconstruction. There is a breadth of efforts on using machine learning models as surrogate simulators in particle physics. Since, for the reasons exposed in section~\ref{sec:intro}, data in particle physics can often be represented as graphs, it is natural to investigate the use of generative models operating on graphs as a possible solution. Models under development predict, for example, the energy deposition in the cells of a calorimeter or the particle candidates obtained from a particle-flow reconstruction algorithm. In all cases, the generated quantities are naturally represented as a set or graph, with fixed or variable size. \paragraph{Computation Performance.} An important consideration for building and efficiently training GNNs on hardware is whether to use dense or sparse implementations of the graph's edges. The number of edges in a graph usually defines the memory and speed bottleneck, because there are typically more edges than nodes and the $\phi^e$ function is applied the most times. A dense adjacency matrix supports fast, parallel matrix multiplication to compute $E'$, which, for example, is exploited in speed-efficient GCN- and Transformer-style models. 
The downside is that the adjacency matrix's memory footprint is quadratic in the number of nodes. Alternatively, using sparse adjacency matrices allows the memory to scale linearly in the number of edges, which allows much larger graphs to be processed. However, the sparse indexing operations required to implement sparse matrix multiplication can incur greater time costs than their dense counterparts; this is an active area of development for both software and hardware acceleration. Sparse operations are a key bottleneck in current deep learning hardware, and should next-generation hardware substantially improve their speed, this would potentially increase the relative advantage of sparse edge implementations of GNNs. In a typical HEP computing environment, one cannot expect to have access to dedicated accelerators (GPU, TPU, FPGA, etc.), although work is going in the direction of building such infrastructure, and one needs to take into consideration the time needed to run the model in production. \paragraph{Final Remarks.} Neural networks that operate on sets are increasing in popularity in high energy physics tasks, both in event reconstruction and in physics analysis. These neural networks are performing well in proof-of-concept studies, either surpassing or matching existing state-of-the-art techniques. They have not yet been tested in the field with real detector data. It is important to understand that all of the models use the same basic building blocks to perform their tasks, and that the most important consideration in designing the architecture of these neural networks is to correctly model the nature of the interaction between the objects in the input set. It is probably best practice to start with a simple graph model and architecture, and then gradually add complexity geared towards incorporating scientific understanding of the physical process at stake. \section*{Acknowledgement} We thank Thomas Keck for valuable feedback on the manuscript. 
J.S. is supported by the NSF-BSF Grant 2017600 and the ISF Grant 125756, and partially supported by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center. J-R.V. is partially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement n$^o$ 772369) and by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under award numbers DE-SC0011925, DE-SC0019227 and DE-AC02-07CH11359. Data sharing is not applicable to this article as no new data were created or analysed in this study. \vspace{1cm}
\section{Introduction} \label{Introduction} The simultaneous determination of multiple physical quantities can be very advantageous in many sensor applications, for example, when an in-situ or a remote acquisition is required. If the physical effect on which the measurement method is based presents cross-interference of more than one quantity, their simultaneous determination becomes a necessity. Optical luminescence sensing is particularly attractive for multiple sensing. Using the same measuring principle, several optical elements, like optical fibers and detectors, can be shared in the setup for the detection of more than one parameter, thus allowing a compact and simple sensor design. The typical approaches to multiple sensing are based on either the use of a single luminescence indicator (luminophore), whose luminescence is sensitive to more than one physical quantity, or the use of several luminophores, one for each quantity, embedded in a substrate and placed in close physical proximity \cite{Stich2010,Borisov2011novel,Kameya2014,Wang2014,Santoro2016,Biring2019}. To be able to determine each quantity separately, it may be necessary to determine more than one optical property (e.g., absorption spectrum, emission spectrum, luminescence intensity, decay time). Another possibility is to measure one single optical property using special detection schemes that take advantage of the emission properties of the used luminophores \cite{Wang2014,Biring2019,Collier2013,Stehning2004,Jorge2008,Moore2006}. The problem of dual sensing is particularly relevant in applications that involve oxygen sensing. The determination of oxygen partial pressure is of great interest in numerous fields, like medicine, biotechnology, environmental monitoring, or chemistry since oxygen plays an important role in many processes \cite{Papkovsky2013,Wang2014}. One of the most used optical measuring approaches uses the effect of the dynamical luminescence quenching by oxygen molecules. 
The measuring principle is based on the measurement of the luminescence of a specific luminophore, whose intensity and decay time are reduced due to collisions with molecular oxygen \cite{Lakowicz2006}. Sensors based on this principle must rely on approximate empirical models to parametrize the dependence of the measured sensing quantity (e.g., luminescence intensity or decay time) on influencing factors. Among these, temperature is the factor with the strongest influence, since both the luminescence and the quenching phenomena are strongly temperature-dependent. Therefore, in any optical oxygen sensor, the temperature must be continuously monitored, most frequently with a separate sensor, and used to correct the calculated oxygen concentration \cite{Li2015}. This task can be difficult in practical implementations and may become a significant source of error in sensors based on luminescence sensing. Another disadvantage of this approach is that the parametrization of the sensor response with temperature is system-specific, since it depends on how the sensing element was fabricated and on the sensor itself \cite{Xu1994,Draxler1995,Hartmann1996,Mills1998,Badocco2008,Dini2011}. In this work, we propose a new approach based on neural networks for parallel inference. The method enables accurate dual sensing, using a single luminophore and measuring a single quantity. Instead of describing the response of the sensor as a function of the relevant parameters through an analytical model, a neural network was designed and trained to predict both oxygen concentration and temperature simultaneously. This new approach is based on multi-task learning (MTL) neural network architectures. These are characterized by common hidden layers, whose output is then the input of multiple branches of task-specific hidden layers. MTL architectures were chosen because they can learn correlated tasks \cite{Argyriou2006, Thrun1996, Caruana1997, Zhang2017, Baxter2000, Thung2018}. 
In a previous, purely theoretical study that used only synthetic data, the authors showed that MTL architectures can be flexible enough to address multi-dimensional regression problems \cite{Michelucci2019_2}. This work demonstrates for the first time that this is indeed true, by building and characterizing a real physical optical sensor based on this principle. To train the MTL neural network and to test the performance of the sensor on unseen data, a very large amount of data is needed. Since such a collection cannot be performed by hand, a fully automated data-collection setup was developed and used both to vary the sensor's environmental conditions (gas concentration and temperature) and to collect the sensor response. This work proposes a paradigm shift from the classical description of the response of a sensor through an approximate model to MTL sensor learning with neural networks, which learn the complex inter-parameter dependencies and sensor-specific response characteristics from a large amount of automatically collected data. This new method will make it possible to build sensors even if the response of the system to the physical quantities is too complex to be comfortably described by a mathematical model. \section{Methods} \label{sec:methods} \subsection{Luminescence Quenching for Oxygen Determination} \label{Theory} Luminescence-based oxygen sensors usually consist of a luminophore whose luminescence intensity and decay time decrease for increasing O$_2$ concentrations. This reduction is due to collisions of the excited luminophore with molecular oxygen, which thus provides a radiationless deactivation process (collisional quenching). 
In the case of homogeneous media characterized by an intensity decay which is a single exponential, the decrease in intensity and lifetime are both described by the Stern-Volmer (SV) equation \cite{Lakowicz2006} \begin{equation} \frac{I_0}{I}=\frac{\tau_0}{\tau}=1+K_{SV} \cdot \left[O_2\right] \label{SVe} \end{equation} where $I_0$ and $I$, respectively, are the luminescence intensities in the absence and presence of oxygen, $\tau_0$ and $\tau$ the decay times in the absence and presence of oxygen, $K_{SV}$ the Stern–Volmer constant and $\left[O_2\right]$ indicates the oxygen concentration. For practical applications, the luminophore needs to be embedded in a supporting substrate, frequently a polymer. As a result, the SV curve deviates from the linear behavior of Eq. (\ref{SVe}). This deviation can be due, for example, to heterogeneities of the micro-environment of the luminophore, or to the presence of static quenching \cite{Wang2014}. A proposed scenario describes this non-linear behavior as due to the presence in the substrate of two or more environments, in which the luminescence is quenched at different rates \cite{Carraway1991,Demas1995}. This multi-site model describes the SV curve as the sum of $n$ contributions as \begin{equation} \frac{I_0}{I}=\bigg[ \sum_{i=1}^n \frac{f_i}{1+K_{SVi} \cdot \left[O_2\right]} \bigg]^{-1} \label{SVe2} \end{equation} where $f_i$'s are the fractions of the total emission for each component under unquenched conditions, and $K_{SVi}$'s are the associated effective Stern–Volmer constants. Depending on the luminophore and on the substrate material, the models proposed in the literature may be even more complex \cite{Demas1995,Hartmann1995,Mills1999}. In most industrial and commercial sensors, the decay time $\tau$ is frequently preferred to intensity measurement because of its higher reliability and robustness \cite{Wei2019}. 
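Numerically, the multi-site model of Eq. (\ref{SVe2}) is straightforward to evaluate; the following is a minimal sketch in which the values chosen for the $f_i$ and $K_{SVi}$ are purely illustrative:

```python
# Multi-site Stern-Volmer model: I0/I as an inverse sum over n quenching
# environments. The fractions and constants below are purely illustrative.
def sv_ratio(o2, fractions, k_sv):
    """I0/I for the multi-site model; the fractions must sum to 1."""
    return 1.0 / sum(f / (1.0 + k * o2) for f, k in zip(fractions, k_sv))

# A single site reduces to the linear law I0/I = 1 + K_SV * [O2]:
linear = sv_ratio(2.0, [1.0], [0.5])
# With two sites the curve bends below the single-site prediction:
two_site = sv_ratio(2.0, [0.8, 0.2], [0.5, 0.05])
```

The single-site case recovers Eq. (\ref{SVe}) exactly, while the second site with a smaller quenching constant produces the characteristic downward curvature of the SV plot.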
The determination of the decay time is done most easily in the frequency domain by modulating the intensity of the excitation. As a result, the emitted luminescence is also modulated but shows a phase shift $\theta$ due to the finite lifetime of the excited state. This method has the additional advantage of allowing very simple and low-cost implementation. Although the multi-site model was introduced for luminescence intensities, it is frequently also used to describe the oxygen dependence of the decay times \cite{Demas1995,Quaranta2012}. Therefore, in the simplest case of a two-sites scenario, the model can be rewritten in terms of phase shift as \cite{Michelucci2019} \begin{equation} \begin{aligned} \frac{\tan \theta_0 (\omega, T)}{\tan \theta (\omega, T, [O_2])}= & \bigg( \frac{f (\omega , T) }{1+K_{SV1} (\omega , T) \cdot \left[O_2\right]}+ \\ &\frac{1-f (\omega , T) }{1+K_{SV2} (\omega , T) \cdot \left[O_2\right]} \bigg)^{-1} \\ \label{theta_full} \end{aligned} \end{equation} where $\theta_0$ and $\theta$, respectively, are the phase shifts in the absence and presence of oxygen, $f$ and $1-f$ are the fractions of the total emission for each component under unquenched conditions, $K_{SV1}$ and $K_{SV2}$ are the associated Stern–Volmer constants for each component, and $\omega$ is the angular modulation frequency. It is to be noted that the quantities $\theta_0$, $f$, $K_{SV1}$, and $K_{SV2}$ are all non-linearly temperature dependent \cite{Ogurtsov2006,lo2008,Zaitsev2016}. Additionally, if the modulation frequency is varied, they may show a frequency dependence, an artifact due to the approximate nature of the model. Finally, Eq. (\ref{theta_full}) needs to be inverted to determine $[O_2]$ from the measured quantity $\theta$. The proposed approach not only solves the difficulties of finding an approximate mathematical model for a complex system, but also allows the determination of multiple quantities simultaneously. 
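For comparison with the classical route, Eq. (\ref{theta_full}) can be inverted numerically at fixed $\omega$ and $T$, since the tangent ratio is monotonic in $[O_2]$; the parameter values in this sketch are purely illustrative:

```python
# Numerically inverting the two-site phase-shift model for [O2] at fixed
# omega and T. The values of f, K_SV1 and K_SV2 below are illustrative only.
def tan_ratio(o2, f=0.7, k_sv1=0.4, k_sv2=0.05):
    """tan(theta_0)/tan(theta) predicted by the two-site model."""
    return 1.0 / (f / (1.0 + k_sv1 * o2) + (1.0 - f) / (1.0 + k_sv2 * o2))

def invert_for_o2(measured_ratio, lo=0.0, hi=100.0):
    """Bisection search: the ratio increases monotonically with [O2]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tan_ratio(mid) < measured_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

recovered = invert_for_o2(tan_ratio(12.0))  # recovers the assumed [O2] = 12
```

In a real sensor this inversion would have to be repeated with temperature- and frequency-dependent parameters, which is precisely the complication the neural network approach avoids.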
Even if it is an approximate description, the structure of Eq. (\ref{theta_full}) remains relevant to understand the structure of the data and optimize the architecture of the neural network. \subsection{Experimental Procedure} \label{Experimental} The optical setup used in this work for the luminescence measurements is shown schematically in Fig. \ref{fig:setup}. To be able to acquire a large number of data points, the program for both the instrument control and the data acquisition was written using the software LabVIEW by National Instruments. The acquisition procedure is described in detail in Section \ref{Data}. \begin{figure}[t!] \centering \includegraphics[keepaspectratio, width=8.3cm]{Setup_auto.eps} \caption{Schematic diagram of the experimental setup. Blue indicates the excitation optical path, red the luminescence one. SP: shortpass filter; LP: longpass filter; PD: photodiode; TIA: trans-impedance amplifier.} \label{fig:setup} \end{figure} \subsubsection{Experimental Setup} The sample used for the characterization and test is a commercially available Pt-TFPP-based oxygen sensor spot (PSt3, PreSens Precision Sensing). To control its temperature, the sample was placed in good thermal contact with a copper plate, set in a thermally insulated chamber. The temperature of this plate was adjusted and stabilized using a Peltier element with a temperature controller (PTC10, Stanford Research Systems). The thermally insulated chamber was connected to a self-made gas-mixing apparatus, which made it possible to vary the oxygen concentration between 0 $\%$ and 20 $\%$ vol $O_2$ by mixing nitrogen and dry air from two bottles. In the following, the concentration of oxygen will be given in $\%$ of the oxygen concentration of dry air and indicated with $\%$ air. This means, for example, that 20 $\%$ air was obtained by mixing 20 $\%$ dry air with 80 $\%$ nitrogen and therefore corresponds to 4 $\%$ vol $O_2$. 
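This unit convention amounts to a simple linear rescaling; a small sketch, treating dry air as 20 $\%$ vol O$_2$ consistently with the example above:

```python
# Convert the paper's "% air" unit (fraction of dry air in the dry-air /
# nitrogen mixture) to % vol O2. Dry air is taken here as 20 % vol O2,
# following the worked example in the text.
def percent_air_to_vol_o2(percent_air, o2_in_air_vol=20.0):
    return percent_air / 100.0 * o2_in_air_vol

vol = percent_air_to_vol_o2(20.0)  # the paper's example: 20 % air -> 4 % vol O2
```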
The absolute error on the oxygen concentration adjusted with the gas-mixing device is estimated to be below 1 $\%$ air. The excitation light was provided by a 405 nm LED (VAOL-5EUV0T4, VCC Visual Communications Company LLC), filtered by a shortpass (SP) filter with cut-off at 498 nm (498 SP BrightLine HC Shortpass Filter, Semrock) and focused on the surface of the sample with a collimation lens. The luminescence was focused by a lens and collected by a photodiode (SFH 213, Osram). To suppress stray light and light reflected by the sample surface, the emission channel was equipped with a longpass filter with cut-off at 594 nm (594 LP Edge Basic Longpass Filter, Semrock) and a shortpass filter with cut-off at 682 nm (682 SP BrightLine HC Shortpass Filter, Semrock). The driver for the LED and the trans-impedance amplifier (TIA) are self-made. For the frequency generation and the phase detection, a two-phase lock-in amplifier (SR830, Stanford Research Inc.) was used. \subsubsection{Automated Data Acquisition} \label{Data} \begin{figure}[b!] \centering \includegraphics[keepaspectratio, width=5.8 cm]{flow-chart.png} \caption{Flow chart of the automated data acquisition program.} \label{fig:auto-data} \end{figure} The large amount of data needed for the training and the testing of the neural network was acquired using an automated acquisition program which followed the flow chart shown in Fig. \ref{fig:auto-data}. First, the program fixed the temperature and the oxygen concentration. Then, the phase shift was measured for 50 modulation frequencies between 200 Hz and 15 kHz. This measurement was repeated 20 times. Next, keeping the temperature fixed, the program changed the oxygen concentration and the entire frequency loop was repeated. The oxygen concentration was varied between 0 $\%$ air and 100 $\%$ air in 5 $\%$ air steps. Finally, the temperature was changed, and the oxygen and frequency loops were repeated. 
The temperature was varied between 5 $^\circ$C and 45~$^\circ$C in 5 $^\circ$C steps. The total number of measurements was thus 50 (frequencies) $\times$ 20 (loops) $\times$ 21 (oxygen concentrations) $\times$ 9 (temperatures) = 189'000, which required a total acquisition time of approximately 65 hours. This number of measurements was chosen as a compromise between maximizing the amount of data and avoiding photodegradation, which naturally occurs when the sample is subjected to illumination. At the end of the acquisition, only a minimal change in the phase shift was observed. \subsection{Neural Network Approach} \label{NN} The software component of this new sensor type is based on a neural network model (NNM). An NNM is made of three components \cite{Michelucci2017}: a neural network architecture (that includes how neurons are connected, the activation functions and all the hyperparameters), a loss function (here indicated with $L$) and an optimizer algorithm. In this section, those three components are described in detail. \subsubsection{Neural Network Architecture} The neural network used in this work has a multi-task-learning architecture and is depicted schematically in Fig. \ref{fig:NN_MTL_O2_T}. It consists of three {\sl common hidden layers} with 50 neurons each, which generate a ``shared representation'' as output. The name shared representation comes from the fact that the output of the common hidden layers is used to predict both $[O_2]$ and $T$. These layers are followed by three branches: branch 1, without additional layers, predicts $[O_2]$ and $T$ at the same time, while branches 2 and 3 each add two {\sl task-specific hidden layers} to predict $[O_2]$ and $T$, respectively. The shared representation is thus the input of the task-specific hidden layers, which learn how to predict $[O_2]$ and $T$ better. This architecture uses the common hidden layers to find common features beneficial to each of the two tasks. 
During the training phase, learning to predict $[O_2]$ will influence the common hidden layers and, therefore, the prediction of $T$, and vice versa. The additional task-specific hidden layers learn features specific to each output and therefore improve the prediction accuracy. The number of neurons of each task-specific hidden layer used in this work is five. The activation function is the sigmoid function for all the neurons. A study of which network architecture works best with this kind of data can be found in \cite{Michelucci2019_2}. The network was trained with two types of input to test its effectiveness. In the first case, each observation consists of a vector of 50 values defined as \begin{equation} \label{input1} {\pmb \theta}_s = \left( \frac{\theta(w_1)}{90} , \frac{\theta(w_2)}{90} , ..., \frac{\theta(w_{50})}{90} \right) \end{equation} where $w_i$ are the 50 values of the angular modulation frequency of the excitation light (see Sec. \ref{Experimental}). The measured phase shifts were divided by 90 to normalize the inputs between 0 and 1. In the second case, each observation is \begin{equation} \label{input2} {\pmb \theta}_n = \left( \frac{\theta(w_1)}{\theta_0(w_1)} , \frac{\theta(w_2)}{\theta_0(w_2)} , ..., \frac{\theta(w_{50})}{\theta_0(w_{50})} \right) \end{equation} where $\theta_0(w_i)$ is the value of the measured phase shift without oxygen quenching at the angular modulation frequency $w_i$. \begin{figure}[t!] \centering \includegraphics[width=8.7 cm]{NN_MTL.png} \caption{Architecture of the multi-task learning neural network used in this paper. The common hidden layers generate a ``shared representation'' as output, which is used as input to task-specific branches that learn features specific to each quantity and therefore improve the prediction accuracy. $L_i$ are the task-specific loss functions; $[O_2]_{i,pred}$ and $T_{i,pred}$ are the oxygen concentration and temperature predictions of the corresponding branch $i$. 
Note that branches 2 and 3 have only one output.} \label{fig:NN_MTL_O2_T} \end{figure} \subsubsection{Loss Function} The task-specific loss function for each branch $i$ is indicated with $L_i$ and is the mean square error (MSE) defined as \begin{equation} L_i = \frac{1}{n} \sum_{j=1}^n \sum_{k=1}^{d_i} (y_{i,k}^{[j]}-\hat y_{i,k}^{[j]})^2, \ \ \ i=1,2,3 \label{MSE} \end{equation} where $n$ is the number of observations in the input dataset; ${\pmb y}_i^{[j]} \in \mathbb{R}^{d_i}$ is the measured value of the desired quantity for the $j^{th}$ observation, with $j=1, ..., n$, and $d_i$ is the dimension of the output of branch $i$. In this case, $d_1=2, d_2=1$ and $d_3=1$. $ \hat {\pmb y}_i^{[j]} \in \mathbb{R}^{d_i}$ is the output of the network branch $i$, when evaluated on the $j^{th}$ observation. Since there are multiple branches, a global loss function $L$ is defined as a linear combination of the task-specific loss functions with weights $\alpha_i$ \begin{equation} L = \sum_{i=1}^{n_T}\alpha_i L_i , \label{globalcf} \end{equation} where $n_T$ is the number of branches (here $n_T=3$). The parameters $\alpha_i$ have to be determined during the hyper-parameter tuning phase to optimize the network predictions. In this paper, since the task-specific loss functions are the MSE of Eq. (\ref{MSE}), the global loss function is \begin{equation} L = \sum_{i=1}^{3}\alpha_i \frac{1}{n} \sum_{j=1}^n \sum_{k=1}^{d_i} (y_{i,k}^{[j]}-\hat y_{i,k}^{[j]})^2 . \label{global_MSE} \end{equation} The global loss function weights used for this work were $\alpha_1 = 0.3$, $\alpha_2 = 5$ and $\alpha_3 = 1$. These parameters are the result of a hyper-parameter tuning for this architecture \cite{Michelucci2019_2}. \subsubsection{Optimizer Algorithm} \label{training} The loss function was minimized using the Adaptive Moment Estimation (Adam) optimizer \cite{Kingma2014, Michelucci2017}. The implementation was performed using the TensorFlow\texttrademark $\ $library. The training was performed with a starting learning rate of $10^{-3}$. 
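The global loss of Eq. (\ref{global_MSE}) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' TensorFlow code; all function names are ours:

```python
import numpy as np

# Task weights alpha_1..alpha_3 as given in the text.
ALPHAS = (0.3, 5.0, 1.0)


def branch_mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Task-specific loss L_i of Eq. (MSE): average over the n observations
    of the summed squared error across the branch's d_i outputs."""
    return float(np.mean(np.sum((y_true - y_pred) ** 2, axis=1)))


def global_loss(y_true_by_branch, y_pred_by_branch, alphas=ALPHAS) -> float:
    """Global loss L = sum_i alpha_i L_i over the three branches.
    Branch 1 has d_1 = 2 outputs ([O2], T); branches 2 and 3 have one each."""
    return sum(a * branch_mse(t, p)
               for a, t, p in zip(alphas, y_true_by_branch, y_pred_by_branch))
```

In a framework such as TensorFlow, the same weighted combination would typically be handed to the optimizer as a single scalar to minimize.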
Two types of training were investigated to compare the training efficiency and performance of the network. {\sl No-batch training}: with this method all the training data are used to perform an update of the weights and to evaluate the loss function. The loss function used is given by Eq. (\ref{global_MSE}). {\sl Mini-batch training}: with this method the weights are updated after the network has seen 32 observations. In this case, Eq. (\ref{global_MSE}) is used with $n=32$. For each update of the weights, 32 random observations are chosen from the training dataset without repetition, until all the training data have been fed to the network. The size of the mini-batch was chosen as a compromise between a good performance (a small value of the loss function) and the duration of the training. No-batch training has the advantage of stability and requires less time for each epoch, since it performs one update of the weights using the entire training dataset. Mini-batch training is normally more effective in reaching small values of the loss function in fewer epochs, but it requires more time for each epoch \cite{Michelucci2017}. In our experiments, for $20 \cdot 10^3$ epochs, no-batch training took roughly five minutes on a modern MacBook Pro, while mini-batch training with a batch size of 32 took approximately 1 hour, thus being roughly 12 times slower. \subsection{Performance Evaluation} To evaluate the performance of the sensor, different metrics were analyzed. These are discussed in the next sections. The dataset $S$ of measured data was divided into two parts: one containing 80\% of randomly chosen observations (indicated with $S_{train}$), and one containing the remaining 20\% of the data (indicated with $S_{test}$). All the results presented were obtained by measuring the different metrics on the $S_{test}$ dataset. 
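The random 80/20 split and the mini-batch scheme described above can be sketched as follows; this is a minimal illustration under our own naming, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility of the sketch


def train_test_split(n_obs: int, test_fraction: float = 0.2):
    """Randomly split observation indices into S_train (80 %) and S_test (20 %)."""
    idx = rng.permutation(n_obs)
    n_test = int(round(test_fraction * n_obs))
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)


def minibatches(train_idx: np.ndarray, batch_size: int = 32):
    """Yield random mini-batches without repetition until the whole training
    set has been seen once, i.e. one epoch of mini-batch training."""
    shuffled = rng.permutation(train_idx)
    for start in range(0, len(shuffled), batch_size):
        yield shuffled[start:start + batch_size]
```

No-batch training corresponds to the degenerate case of a single batch containing all of $S_{train}$, i.e. one weight update per epoch.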
\subsubsection{Absolute Error on the Prediction} The metric used to compare predictions with expected values is the absolute error ($AE$), defined as the absolute value of the difference between the predicted and the expected value for a given observation. Note that in the architecture described in the previous sections, only branches 1 and 2 can predict $[O_2]$, while only branches 1 and 3 can predict $T$. The $AE$ for the oxygen concentration for the $j^{th}$ observation $[O_2]^{[j]}$ is \begin{equation} \label{AE} AE^{{[j]}}_{[O_2]} = |[O_2]^{{[j]}}_{pred}-[O_2]^{[j]}_{meas}|, \end{equation} where $[O_2]^{{[j]}}_{pred}$ and $[O_2]^{{[j]}}_{meas}$ are respectively the $[O_2]$ network prediction and the measured value. A further quantity used to analyse the performance of the network is the mean absolute error ($MAE$), defined as the average of the $AE$s. For example, for the oxygen prediction using the training dataset $S_{train}$, the $MAE_{[O_2]}$ is defined as \begin{equation} \label{MAE} MAE_{[O_2]}(S_{train}) = \frac{1}{|S_{train}|} \sum_{j \in S_{train}}|[O_2]_{pred}^{[j]}-[O_2]_{meas}^{[j]}| \end{equation} where $|S_{train}|$ is the size (or cardinality) of the training dataset. $AE_{T}$ and $MAE_T$ are similarly defined, using the predicted and the measured temperature values. \subsubsection{Kernel Density Estimation} A fundamental quantity to study the performance of the network is the distribution of the $AE$s. This metric carries information on the probability of the network predicting the expected value. To better illustrate this distribution, the kernel density estimate ($KDE$) of the distribution of the $AE$s was also calculated for both the oxygen concentration and the temperature. $KDE$ is a non-parametric algorithm to estimate the probability density function of a random variable by inferring the population distribution based on a finite data sample \cite{Hastie2009}. 
In this work a Gaussian kernel and Scott's adaptive bandwidth estimation \cite{Sain1996}, as implemented in the seaborn Python package \cite{Waskom2020}, were used. \subsubsection{Error Limited Accuracy $\eta$} \label{sektion:ela} Generally, in a commercial sensor, the accuracy quantifies the performance of the sensor and helps to decide if the chosen device is appropriate for the application of interest. The above-defined metrics ($AE$, $MAE$ and $KDE$) are useful to compare the performance of different NNMs, but do not quantify the error that the neural network sensor will ultimately have in practice. For this reason, in this work we introduce a new metric, called Error Limited Accuracy ($ELA$) and indicated with $\eta$. \begin{definition*} In a regression problem, given the metric $AE$ and a chosen value $\hat{AE}$ of it, the $ELA$ $\eta$ limited by the error $\hat{AE}$ is defined as the number of predictions $\hat y$ of the NNM that lie in the range $|\hat y-y|\leq \hat{AE}$, with $y$ the expected value, divided by the total number of observations. It will be indicated with $\eta(\hat{AE})$. In more mathematical terms, given the set \begin{equation} E(\hat{AE}) = \{ \hat y^{[i]} \ \text{with} \ i = 1,..., n\ | \ \ |\hat y^{[i]}-y^{[i]}|\leq \hat{AE} \} \end{equation} $\eta(\hat{AE})$ is defined as \begin{equation} \eta(\hat{AE}) = \frac{|E(\hat{AE})|}{n} \end{equation} where $|E(\hat{AE})|$ is the cardinality of the set $E(\hat{AE})$, or in other words, the number of its elements. \end{definition*} This metric allows interpreting the regression problem as a classification one. $\eta(\hat{AE})$ simply describes how many observations are predicted by the NNM within a given value of the absolute error. In other words, it represents the fraction of predictions that are within a certain error $\hat{AE}$ of the expected values. Finally, if we take $\hat{AE}$ big enough, all the predictions will be classified perfectly, so $\eta(\hat{AE})$ is expected to approach 1. 
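The $MAE$ of Eq. (\ref{MAE}) and the $ELA$ defined above reduce to one-liners over the vector of absolute errors; the following sketch (our own illustrative names, not the authors' code) makes this explicit:

```python
import numpy as np


def mae(pred: np.ndarray, meas: np.ndarray) -> float:
    """Mean absolute error, Eq. (MAE): average of the AEs over a dataset."""
    return float(np.mean(np.abs(pred - meas)))


def ela(pred: np.ndarray, meas: np.ndarray, ae_hat: float) -> float:
    """Error Limited Accuracy eta(AE_hat): fraction of predictions whose
    absolute error does not exceed ae_hat."""
    return float(np.mean(np.abs(pred - meas) <= ae_hat))


def ae_bar(pred: np.ndarray, meas: np.ndarray) -> float:
    """Largest absolute error in the dataset, i.e. the smallest error bound
    for which eta equals 1 (the quantity written AE-bar in the text)."""
    return float(np.max(np.abs(pred - meas)))
```

Note that `ela` is monotonically non-decreasing in `ae_hat` and reaches 1 exactly at `ae_bar`.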
The smaller $\hat{AE}$ is, the smaller will be the number of predictions correctly classified. We finally define $\overline{AE}$ as the value for which $\eta(\overline{AE})=1$, i.e., the value of the absolute error within which the network predicts all the observations. This value ($\overline{AE}$) gives the largest error in the sensor predictions. \section{Results and Discussion} \label{Results} \subsection{Luminescence Experimental Results} \begin{figure}[b!] \centering \includegraphics[width=8.2 cm]{phase_O2_T.eps} \caption{Measured phase shift as a function of the oxygen concentration for selected temperatures at a fixed modulation frequency of 6 kHz. The arrow marks increasing temperatures.} \label{fig:expdata1} \end{figure} As described in Section \ref{Theory}, the phase shift depends non-linearly on the oxygen concentration according to the Stern-Volmer equation. It also depends on the temperature, which influences the luminescence and the collision mechanisms, and on the modulation frequency of the excitation light, as described in Eq. (\ref{theta_full}). The experimental observations for the phase shift under variations of these three quantities are shown in Figs. \ref{fig:expdata1} to \ref{fig:expdata3}. Fig. \ref{fig:expdata1} shows the measured phase shifts as a function of the oxygen concentration at a constant modulation frequency of 6 kHz and for increasing temperatures. For clarity, the results at only a few selected temperatures are shown. The decrease of the phase shift due to the collisional quenching is clearly visible in all curves. The phase shift is, as expected, also strongly temperature-dependent. For $[O_2]=0$, i.e., in the absence of oxygen, the reduction of the phase shift with increasing $T$ is due to temperature quenching; the influence of temperature becomes stronger at higher oxygen concentrations, as a result of the increase of the diffusion rates of oxygen through the sample. 
For a given oxygen concentration, the phase shift is strongly dependent on the modulation frequency, as can be seen in Fig. \ref{fig:expdata2}, where the shape of the frequency response is determined by the distribution of decay times of the sample. The figure also shows that the reduction of the phase shift with increasing temperature is not constant but depends on the modulation frequency. \begin{figure}[t!] \centering \includegraphics[width=8.2 cm]{phase_f_T.eps} \caption{Measured phase shift as a function of the modulation frequency for selected temperatures at a fixed oxygen concentration of $[O_2]=20 \ \%$ air. The arrow marks increasing temperatures.} \label{fig:expdata2} \end{figure} For completeness, the effect of the oxygen concentration on the frequency response at a fixed temperature is shown in Fig. \ref{fig:expdata3}. Compared to Fig. \ref{fig:expdata2}, the frequency response of the sample is affected more strongly by the oxygen concentration than by the temperature. In other words, the sample has a higher sensitivity to oxygen than to temperature. \begin{figure}[t!] \centering \includegraphics[width=8.2 cm]{phase_f_O2.eps} \caption{Measured phase shift as a function of the modulation frequency for selected oxygen concentrations at a fixed temperature of $T=25 \ ^{\circ}$C. The arrow marks increasing oxygen concentrations.} \label{fig:expdata3} \end{figure} The measurements of Figs. \ref{fig:expdata1} to \ref{fig:expdata3} show how similar the curves of the phase shift are for different values of oxygen concentration, temperature and modulation frequency. This helps to understand why it is not possible, from the measurement of the phase shift, or even of the phase shift at varying modulation frequencies, to simultaneously determine both the oxygen concentration and the temperature using Eq. (\ref{theta_full}). The temperature must be known in advance and used to compute the oxygen concentration. 
This is no longer the case with the neural network approach, as will be shown in the next section. \subsection{Sensor Performance} First, the effect of the training on the sensor performance was investigated. As described in Section \ref{training}, the neural network was trained with no batches and with mini-batches. For this comparison the network was trained for 20'000 epochs using the input observations ${\pmb \theta}_s$ as defined in Eq. (\ref{input1}). The results for $AE_{[O_2]}$ and $AE_T$ are shown in Fig. \ref{fig:KDE_results_all}(A) and \ref{fig:KDE_results_all}(B), respectively. The blue histogram shows the $AE$ distribution when using no batches, the gray one when using mini-batches of size 32. The $KDE$ profiles help to illustrate the features of the histograms. The effect of introducing mini-batches on the performance is significant. The prediction distributions become much narrower, and the mean absolute errors decrease from $MAE_{[O_2]}=2.4$ \% air and $MAE_{T}=3.6 \ ^\circ$C to $MAE_{[O_2]}=1.4$ \% air and $MAE_{T}=1.6 \ ^\circ$C. Although the performance is significantly improved, Fig. \ref{fig:KDE_results_all}(A) and \ref{fig:KDE_results_all}(B) also clearly show that errors as high as approximately 5~\%~air for $[O_2]$ or 12 $^\circ$C for $T$ are possible. \begin{figure*}[htbp] \centering \includegraphics[width=15 cm]{KDE_results_all.eps} \caption{Distributions of the neural network predictions for the oxygen concentration (panels (A), (C) and (E)) and for the temperature (panels (B), (D) and (F)). In all panels the normalized prediction distribution histogram (columns), the kernel density estimate ($KDE$) of the distribution of the $AE$s (solid line), and the $MAE$ (dashed vertical line) are shown. Panels (A) and (B): Comparison between training using no batches (NB) and using mini-batches (MB) with a batch size of 32 for 20'000 epochs; the input of the network is ${\pmb \theta}_s$. 
Panels (C) and (D): Comparison between training using mini-batches (MB) with a batch size of 32 for 100'000 and 20'000 epochs; the input of the network is ${\pmb \theta}_s$. Panels (E) and (F): training using mini-batches (MB) with a batch size of 32 for 20'000 epochs; the input of the network is ${\pmb \theta}_n$.} \label{fig:KDE_results_all} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=14 cm]{ELA_comparison_O2_T.eps} \caption{Comparison of the $ELA$ $\eta$: Panel (A) oxygen prediction, panel (B) temperature prediction. The black lines are the results obtained with a network that was trained with ${\pmb \theta}_n$ as input for 20'000 epochs with mini-batches of size 32, the red ones with ${\pmb \theta}_s$ as input for 100'000 epochs with mini-batches of size 32. The dashed lines indicate the values of $\overline{AE}$ for which the predictions would give $\eta=1$.} \label{fig:ELA_result_comparison} \end{figure*} Fig. \ref{fig:KDE_results_all}(C) and \ref{fig:KDE_results_all}(D) show the comparison between prediction distributions after 20'000 and 100'000 epochs (always using a mini-batch size of 32), using the input observations ${\pmb \theta}_s$ as defined in Eq. (\ref{input1}). The effect of longer training is a dramatic improvement in the performance. When the network was trained for 100'000 epochs the mean absolute errors are reduced to only $MAE_{[O_2]}=0.22$ \% air and $MAE_{T}=0.27 \ ^\circ$C. Additionally, all the absolute errors for $[O_2]$ lie below 0.94 \% air, and those for $T$ below 2.1 $^\circ$C. The results of Fig. \ref{fig:KDE_results_all}(C) and \ref{fig:KDE_results_all}(D) demonstrate two new findings: 1) with the proposed approach, it is possible to predict both $[O_2]$ and $T$ at the same time from the phase shift using a single luminophore; 2) the prediction has an expected error which is comparable to or below the typical accuracy of commercial sensors. 
The possibility of dual sensing paves the road to the development of a completely new generation of sensors. The price to pay is that the training of a network for 100'000 epochs requires approximately 5 hours on a modern laptop. To investigate if the training can be performed more efficiently, the normalized phase shift ${\pmb \theta}_n$ defined in Eq. (\ref{input2}) is used as input to the network. The performance of the network in this case, with a mini-batch size of 32 and a training of 20'000 epochs, is shown in Fig. \ref{fig:KDE_results_all}(E) and \ref{fig:KDE_results_all}(F). The performance is further improved: even though the number of epochs is only 20'000, the mean absolute errors are better than those obtained with ${\pmb \theta}_s$ and a training of 100'000 epochs, achieving $MAE_{[O_2]}=0.13$ \% air and $MAE_{T}=0.24 \ ^\circ$C. The distributions are also narrower, particularly for the temperature. Additionally, all the $AE_{[O_2]}$ lie below 0.87 \% air, and all the $AE_{T}$ below 1.7 $^\circ$C. This type of training is clearly more efficient. The reason may lie in the additional information which is fed to the network when using the input ${\pmb \theta}_n$, and in the simplified functional behavior of ${\pmb \theta}_n$ compared to ${\pmb \theta}_s$, as may be expected from Eq. (\ref{theta_full}). The performance of the different neural networks is summarized in Table \ref{TableMAE_summary}. 
\begin{table}[hbt] \centering \caption {\bf Summary of the performance of the neural network models} \begin{tabular}{ cccc} \smallskip Input & Epochs / Batch size & $MAE_{[O_2]}$ & $MAE_{T}$ \\ \hline ${\pmb \theta}_s$ & 20'000 / \textrm{no batch} & 2.4 \% air & 3.6 $^\circ C$\\ ${\pmb \theta}_s$ & 20'000 / 32 & 1.4 \% air & 1.6 $^\circ C$\\ ${\pmb \theta}_s$& 100'000 / 32 & 0.22 \% air & 0.27 $^\circ C$\\ ${\pmb \theta}_n$ & 20'000 / 32 & 0.13 \% air & 0.24 $^\circ C$\\ \end{tabular} \label{TableMAE_summary} \end{table} \subsection{Error Limited Accuracy} The metrics discussed in the previous sections are useful to compare the network performance and to measure how good the predictions are. However, they do not offer an understanding of what a sensor built with such a model could achieve. For practical applications, the relevant question is rather the maximum error that the sensor will make when predicting the oxygen concentration and temperature. To answer this question, the $ELA$ ($\eta$) defined in Section \ref{sektion:ela} can be used. As explained previously, $\eta$ is defined with respect to a chosen error metric; in this section, the metrics chosen are $AE_{[O_2]}$ for the oxygen concentration and $AE_{T}$ for the temperature. This new metric allows the determination of the maximum error of the sensor. Fig. \ref{fig:ELA_result_comparison} displays the $ELA$ $\eta(\widehat {AE})$ for the oxygen concentration (A) and for the temperature (B). In each panel, the results obtained using the input ${\pmb \theta}_n$ and a training for 20'000 epochs are shown in black, and the results obtained using the input ${\pmb \theta}_s$ and a training for 100'000 epochs in red. In both cases, the training was performed with mini-batches of size 32. The dashed lines indicate the values of $\overline{AE}_{[O_2]}$ and $\overline{AE}_{T}$ for which the error limited accuracy $\eta$ equals 1. 
In other words, all the predictions will have an error equal to or smaller than $\overline{AE}$. \begin{table}[t!] \centering \caption {\bf Summary of the values of $\overline{AE}$ for the cases shown in Fig. \ref{fig:ELA_result_comparison}(A) and \ref{fig:ELA_result_comparison}(B).} \begin{tabular}{ cccc} \smallskip Input & Epochs / Batch size & $\overline{AE}_{[O_2]}$ & $\overline{AE}_{T}$ \\ \hline ${\pmb \theta}_s$ & 100'000 / 32 & 0.95 \% air & 2.1 $^\circ C$\\ ${\pmb \theta}_n $ & 20'000 / 32 & 0.87 \% air & 1.7 $^\circ C$\\ \end{tabular} \label{table:ela} \end{table} Fig. \ref{fig:ELA_result_comparison}(A) shows that, for the network trained with ${\pmb \theta}_s$ as input, the model predicts all the oxygen concentrations within an error of 0.95 \% air. For the network trained with ${\pmb \theta}_n$ this value is further reduced to 0.87 \% air. $\overline{AE}_{[O_2]}$ can be interpreted as the accuracy that a sensor based on this NNM would have. Fig. \ref{fig:ELA_result_comparison}(B) shows the results of the same analysis for the temperature measurement. The interpretation is similar to the one given above for the oxygen concentration. For the network trained with ${\pmb \theta}_s$ as input, the model predicts all the temperature values within an error of $\overline{AE}_{T}=2.1 \ ^\circ$C. For the network trained with ${\pmb \theta}_n$ this value is $\overline{AE}_{T}=1.7 \ ^\circ$C. The values of $\overline{AE}_{[O_2]}$ and $\overline{AE}_{T}$ are summarized in Table \ref{table:ela}. \section{Conclusions} In this work, a new sensor learning approach to luminescence sensing is presented. The proposed method allows parallel inference, i.e., the extraction of multiple physical quantities simultaneously from a single dataset, without any {\sl a priori} mathematical model, even in the presence of cross interferences. 
Classical approaches to this type of problem in physics can be challenging or impossible if the mathematical models describing the functional dependencies are too complex or even unknown. The approach is demonstrated by realizing a luminescence sensor which uses a single luminophore and a single measuring channel, and can measure simultaneously both the oxygen concentration and the temperature of a medium. This is achieved using a multi-task learning neural network model, which was trained on a very large dataset. The results in the prediction of the oxygen concentration and temperature show unprecedented accuracy for both parameters, demonstrating that this approach can make a new generation of dual- or even multiple-parameter sensors possible. Determining the expected error or accuracy of a sensor based on a given NNM approach is intrinsically difficult. For this reason, the new metric Error Limited Accuracy $ELA$ ($\eta(\hat{AE})$) is proposed. The $ELA$ makes it possible to estimate how many predicted values lie within a certain absolute error from the expected measurement. This new metric therefore provides a maximum measurement error for the NNM results. The ability to predict both $[O_2]$ and $T$ at the same time, from a single set of data obtained with a single indicator, has profound implications for the development of luminescence sensors. Sensors will become easier and cheaper to build, since no separate temperature measurements are necessary anymore. More generally, the effect of interferences can be learned by the neural network and does not need to be corrected for in the data processing. This work opens the road to completely new optical sensing approaches for future generations of sensors. Those sensors will be able to extract multiple physical quantities from a common set of data at the same time, achieving consistent results that are both accurate and stable. 
The described approach is relevant for many practical applications in sensor science and demonstrates that this model-free approach has the potential to revolutionize optical sensing. \medskip \noindent\textbf{Disclosures.} The authors declare no conflicts of interest. \bibliographystyle{elsarticle-num-names}
\section{Introduction} The seminal developments of Shor's and Grover's algorithms, which showed a provable exponential and polynomial speedup with respect to their classical counterparts respectively, sparked the decades-long run to build a quantum computer. The first quantum computing devices with tens of noisy qubits, so-called Noisy Intermediate-Scale Quantum (NISQ) devices, have already been built \cite{barends2016digitized,dicarlo2009demonstration,debnath2016demonstration,reagor2018demonstration}. However, to outperform today's most powerful classical computers, Shor's and Grover's algorithms require a fully error-corrected device with on the order of $10^5$ qubits \cite{jones2012layered}. Finding algorithms for NISQ devices that are superior to their fastest known classical counterparts is therefore the next important step to bridge the gap to fully error-corrected quantum computing. To achieve this, a deep understanding of the relations and features of NISQ quantum algorithms is essential. Hybrid quantum-classical algorithms, such as parameterized quantum circuits that are optimized in a classical learning loop, are generally believed to be the strongest candidates in the NISQ era. Among these are algorithms like the Variational Quantum Eigensolver (VQE) \cite{peruzzo2014variational} for quantum chemistry calculations, Quantum Neural Networks (QNN) \cite{farhi2018classification,grant2018hierarchical} for machine learning tasks and the QAOA algorithm \cite{farhi2014quantum}. QAOA can be used to solve combinatorial optimization problems, e.g. MaxCut \cite{wang2018quantum} and Max E3LIN2 \cite{2014arXiv1412.6062F}, and for generative machine learning tasks such as sampling from Gibbs states \cite{verdon2017quantum}. 
Interestingly, there also exist QAOA versions of Shor's number factoring algorithm \cite{anschuetz2018variational} and of Grover's problem of searching an unstructured database \cite{jiang2017near} that substantially reduce the number of gates with respect to their counterparts for fully error-corrected quantum computers. Moreover, it has been shown that there is no efficient classical algorithm that can simulate sampling from the output of a QAOA circuit \cite{farhi2016quantum}. The performance and general characteristics of a heuristic like QAOA with a classical optimization loop are a nascent, vibrant, yet scattered research field \cite{streif2020training, zhou2018quantum, mbeng2019quantum, bapat2018bang, willsch2020benchmarking}. We add a piece to this puzzle by investigating the class of problems that can be solved exactly, i.e. problems for which a single run of the QAOA circuit suffices to measure one of the possible answers, which we will call target states henceforth. We coin the term ``deterministic QAOA'' to describe this specific setting. There are alternative classical and quantum algorithms for combinatorial optimization: Quantum Annealing (QA) and its classical counterpart Simulated Annealing (SA). For these algorithms, there already exists an extensive body of research separating the strengths and weaknesses of QA and SA. Classes of problems have been identified that are either tailored \cite{mandra2018deceptive,denchev2016computational} or randomly generated and post-selected \cite{katzgraber2015seeking} to show a quantum speedup of QA on existing hardware. In the present work we add QAOA to this framework of comparisons. If there is only a single target state, we are able to identify a set of problems, based on their spectral features, which can be solved exactly with QAOA with an at most polynomially growing number of gates as a function of the problem size. 
Among these, there are problems that cannot be solved with either QA or SA, which we corroborate with their overlap distribution. We further show that for these problem instances there exists an efficient classical algorithm that can find the solution. Therefore, our results provide us with a rich understanding of the nature of the algorithm and show how interference effects separate QAOA from SA and QA. In a recent work, Harrow and co-workers introduced an upper bound on the ability of shallow circuits to have support on outcomes which are separated in Hamming distance \cite{napp2019efficient}. To add to this finding, we show, for the case of two target states, that the depth of a level-1 QAOA circuit has to grow linearly with the Hamming distance of the target states. We further show that it is impossible to have a level-1 deterministic QAOA circuit with a number of target states greater than $2$ and smaller than $2^n$, where $n$ is the number of qubits. IQP circuits are a non-universal quantum computational paradigm for which it is known that there is no efficient classical algorithm that can simulate sampling from their output \cite{bremner2011classical, bremner2017achieving}. As such, IQP circuits are one example of recent proposals to show quantum supremacy, together with boson sampling \cite{lund2017quantum} and random quantum circuits \cite{arute2019quantum}. Because of the similarity of level-1 QAOA and IQP circuits, we are able to transfer all insights we provide in this work to IQP circuits as well. The present article is organized as follows: First, we briefly recapitulate the QAOA algorithm in section \ref{sec:qaoa}. Then we identify a set of equations that implicitly defines the instances that can be solved exactly with level-1 QAOA, cf. section \ref{sec:spectralprops}. We start by considering problems with a single target state, cf. section \ref{sec:singleTarget}. Subsequently we consider the case of two target states in section \ref{sec:2targets}. 
In section \ref{sec:manytargets} we show that there can be no genuine solution if the number of target states is between $2$ and $2^n$. In section \ref{sec:iqp} we introduce IQP circuits and transfer all findings for level-1 QAOA circuits. We conclude with a discussion and outlook in section \ref{sec:conclusion}. \section{The Quantum Approximate Optimization Algorithm (QAOA)}\label{sec:qaoa} The QAOA algorithm by Farhi et al.~\cite{farhi2014quantum} is a variational wavefunction ansatz with the goal of sampling from low-energy states of a Hamiltonian $H_\mathrm{P}$ which is diagonal in the computational basis. Computational states are the simultaneous eigenstates of the Pauli-z operators $\sigma_z^{(i)}$ for all qubits, $i \in \left\{1, \dots, n \right\}$. The algorithm consists of two steps: First, the expectation value $E_g$ of the Hamiltonian $H_\mathrm{P}$ for the variational state, \begin{align} E_g= \min_{\Vec{\beta},\Vec{\gamma}}\braket{\Psi(\Vec{\beta},\Vec{\gamma})|H_\mathrm{P}|\Psi(\Vec{\beta},\Vec{\gamma})}\,, \label{minimize} \end{align} is repeatedly evaluated with the help of the Quantum Processing Unit (QPU) while the variational parameters $\Vec{\beta}$ and $\Vec{\gamma}$ are adjusted in an outer learning loop to minimize $E_g$. When $E_g$ is sufficiently low, the variational state is repeatedly prepared and measured to produce candidates for low-energy states of the Hamiltonian $H_\mathrm{P}$. The energy of all candidates is calculated and the lowest-energy state is the outcome of the QAOA algorithm. The ansatz for the variational wavefunction $\ket{\Psi(\Vec{\beta},\Vec{\gamma})}$ is inspired by the quantum annealing protocol, where a system is initialized in an easy-to-prepare ground state of a local mixing Hamiltonian $H_\mathrm{X}=\sum _i\sigma_x^{(i)}$, with $\sigma_x^{(i)}$ the Pauli-x operator acting on qubit $i$, which is then slowly transformed into the problem Hamiltonian $H_\mathrm{P}$ \cite{kadowaki1998quantum}.
The QAOA variational wavefunction resembles a trotterized version of this procedure, \begin{align} \ket{\Psi(\Vec{\beta},\Vec{\gamma})}=\mathrm{e}^{-\mathrm{i}\beta_pH_\mathrm{X}}\mathrm{e}^{-\mathrm{i}\gamma_pH_\mathrm{P}}\dots\mathrm{e}^{-\mathrm{i}\beta_1H_\mathrm{X}}\mathrm{e}^{-\mathrm{i}\gamma_1H_\mathrm{P}}\ket{+}\,, \label{finalstate} \end{align} where the starting state $\ket{+}$ is the product state of eigenstates of $\sigma_x^{(i)}$ with eigenvalue $1$, $\ket{+} = \prod_i (\ket{0}_i + \ket{1}_i)/\sqrt{2}$, which is the equal superposition of all computational basis states. The number of repetitions $p$ of the fundamental block of QAOA is called its level. This means a level-1 QAOA circuit consists of a single application of a unitary generated by the problem Hamiltonian followed by the mixing operation generated by the mixing Hamiltonian. To solve an arbitrary combinatorial optimization problem, the first step is to reformulate its cost in terms of the energy of an Ising Hamiltonian. This diagonal Hamiltonian $H_\mathrm{P}$ should be chosen such that it is possible to infer solutions of the combinatorial optimization problem from low-energy eigenstates. This is always possible with polynomial classical computing overhead for NP-complete combinatorial optimization problems since the spin-glass ground-state problem is itself NP-complete. There exist various known embeddings of combinatorial optimization problems onto problem Hamiltonians $H_\mathrm{P}$ \cite{lucas2014ising}. Low-energy eigenstates of $H_\mathrm{P}$ can then be sampled with QAOA and mapped back to solutions of the combinatorial optimization problem. It can be shown that the QAOA algorithm is strictly superior to QA, since the step-like application of the mixing and problem Hamiltonian is the optimal solution of the optimal control problem of transforming the initial state $\ket{+}$ into any other target state \cite{yang2017optimizing}.
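To make the level-1 setting concrete, the following minimal NumPy sketch (ours, purely illustrative; all names are our own) prepares the state of Eq.~(\ref{finalstate}) for $p=1$ and a diagonal problem Hamiltonian, exploiting that the mixer factorizes into single-qubit rotations:

```python
import numpy as np

def qaoa_level1_state(energies, beta, gamma):
    """Level-1 QAOA state e^{-i beta H_X} e^{-i gamma H_P} |+> for a
    diagonal H_P given by its 2^n energies in the computational basis."""
    n = int(np.log2(len(energies)))
    psi = np.full(2**n, 1/np.sqrt(2**n), dtype=complex)      # |+>
    psi = psi * np.exp(-1j * gamma * np.asarray(energies))   # phase separator
    # the mixer factorizes: e^{-i beta X} = cos(beta) I - i sin(beta) X per qubit
    for q in range(n):
        psi = psi.reshape(2**q, 2, 2**(n - q - 1))
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        psi[:, 0, :] = np.cos(beta) * a - 1j * np.sin(beta) * b
        psi[:, 1, :] = np.cos(beta) * b - 1j * np.sin(beta) * a
        psi = psi.reshape(-1)
    return psi

# with H_P = 0 the mixer only adds a global phase to |+>
psi = qaoa_level1_state(np.zeros(8), beta=0.3, gamma=0.7)
```

Applying it with all energies set to zero reproduces $\ket{+}$ up to a global phase, which serves as a quick sanity check.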
Various types of outer learning loops have been used thus far, ranging from brute-force grid search \cite{farhi2014quantum} to gradient-based methods \cite{guerreschi2017practical} and, recently, methods inspired by supervised machine learning, where the parameters $\Vec{\beta}$ and $\Vec{\gamma}$ are trained on random samples of combinatorial optimization problems and afterwards kept fixed to solve instances of the same combinatorial optimization problem not seen during training \cite{crooks2018performance,2018arXiv181204170B,zhou2018quantum, streif2020training}. \section{Spectral Conditions for Deterministic QAOA}\label{sec:spectralprops} In the following we derive conditions on the spectrum of the problem Hamiltonian $H_\mathrm{P}$ such that a level-1 version of QAOA ($p=1$) succeeds exactly, i.e.~we consider a deterministic version of QAOA where we not only strive to minimize the expectation value $E_g$, cf. Eq.~(\ref{minimize}), but search for optimal values of $\beta$ and $\gamma$ such that we find perfect overlap, \begin{equation} 1 \overset{!}{=}\text{tr}\left[\sum\limits_{t=1}^{T}\ket{t}\bra{t} \ket{\Psi(\beta, \gamma)}\bra{\Psi(\beta, \gamma)}\right]\,, \end{equation} with $T$ target states $\{\ket{t}\}_{t=1}^T$ that can be ground states of a generic $n$-qubit Hamiltonian that is diagonal in the computational basis, $H_\mathrm{P}=\mathrm{diag}\left(\left\{E_l |\, l \in \{0,1\}^n\right\}\right)$. To find a parametrized version of the class of spectra that fulfill the above requirement, we slightly reformulate the above equality as the question of whether there exist complex values $\alpha_t$ for $t \in \{1, \dots, T\}$, normalized such that $\sum_t |\alpha_t|^2 = 1$, for which \begin{equation}\label{eq:overlap1} 1\overset{!}{=}|\left(\sum\limits_{t=1}^T \alpha_t \bra{t}\right) \ket{\Psi(\beta, \gamma)} |^2\,, \end{equation} holds.
Since $H_\mathrm{P}$ is diagonal, the overlap of the variational QAOA wavefunction with the subspace spanned by the target states can be reformulated as, \begin{multline}\label{eq:overlap2} \left(\sum\limits_{t=1}^T \alpha_t \bra{t} \right)\mathrm{e}^{-\mathrm{i}\beta H_\mathrm{X}}\mathrm{e}^{-\mathrm{i}\gamma H_\mathrm{P}}\ket{+}=\\ = \sum\limits_{l \in \{0,1\}^n} \underbrace{\frac{\mathrm{e}^{-\mathrm{i}\gamma E_l}}{\sqrt{2^n}}}_{x_l^*}\underbrace{\sum\limits_{t=1}^T \alpha_t \braket{t| \mathrm{e}^{-\mathrm{i} \beta H_\mathrm{X}}| l}}_{z_l}\,. \end{multline} With the definition of the complex vectors $\vec{x}$ and $\vec{z}$, cf. Eq.~(\ref{eq:overlap2}), the equality of Eq.~(\ref{eq:overlap1}) can be seen as the scalar product of two $2^n$-dimensional vectors, $1=|\vec{x}^* \cdot \vec{z}|$. It is easy to see that $|\vec{x}|^2 = 1$, and a small calculation, \begin{align*} |\vec{z}|^2 &= \sum\limits_{l \in \{0,1\}^n} \sum\limits_{t,t'= 1}^T \alpha_t \alpha_{t'}^* \braket{t|\mathrm{e}^{-\mathrm{i}\beta H_\mathrm{X}}|l}\braket{l|\mathrm{e}^{\mathrm{i}\beta H_\mathrm{X}}|t'} =\\ & = \sum\limits_{t,t' = 1}^T \alpha_t\alpha_{t'}^* \braket{t | t'} = 1 \end{align*} reveals $|\vec{z}|^2 = 1$ as well. From the Cauchy--Schwarz inequality we therefore know that $1=|\vec{x}^* \cdot \vec{z}|$ only holds for $\vec{x} = \vec{z}$, up to an overall phase factor. With this we can conclude for the phases of the complex numbers $z_l$, \begin{equation}\label{eq:condition1} (\gamma E_l + \textrm{arg}(z_l))\,\textrm{mod}\, 2\pi = C \qquad \forall \hspace{1mm} l \in \{0,1\}^n\,, \end{equation} and for their magnitudes \begin{equation}\label{eq:condition2} |z_l| = \frac{1}{\sqrt{2^n}} \qquad \forall \hspace{1mm} l \in \{0,1\}^n\,. \end{equation} The first conditions, cf. Eq.~(\ref{eq:condition1}), are the desired conditions on the spectrum, while the second conditions, cf. Eq.~(\ref{eq:condition2}), are necessary for such a spectrum to exist.
Since in our setting $\gamma$ is a mere rescaling of the energy spectrum, we will henceforth absorb $\gamma$ into the definition of the spectrum, $\gamma E_l = \epsilon_l$. \section{Single Target State}\label{sec:singleTarget} If there is only a single target state $t_1$, the conditions on the magnitudes $|z_l|$, Eq.~(\ref{eq:condition2}), are fulfilled if $\beta = \frac{1}{4}\pi, \frac{3}{4}\pi, \frac{5}{4}\pi, \frac{7}{4}\pi$, and the energy eigenvalues $E_l$ have to fulfill either the condition, \begin{equation}\label{eq:1target_condition} \left(\epsilon_l - \frac{1}{2}\pi\Delta(t_1,l) \right) \bmod 2 \pi = C \end{equation} for $\beta=\frac{1}{4}\pi$ and $\beta=\frac{5}{4}\pi$ or \begin{equation} \left(\epsilon_l - \frac{3}{2}\pi\Delta(t_1,l) \right) \bmod 2 \pi = C \end{equation} for $\beta=\frac{3}{4}\pi$ and $\beta=\frac{7}{4}\pi$. Here, $\Delta(t,l)$ is the Hamming distance between the computational basis state $l$ and the target state $t$, i.e. the number of spin flips required to change the state $l$ into the state $t$. $C$ is an arbitrary constant that reflects the fact that energy eigenvalues are defined up to an additive constant. In the following we will concentrate on the first case, Eq.~(\ref{eq:1target_condition}), since the generalization to the cases $\beta=\frac{3}{4}\pi$ and $\beta=\frac{7}{4}\pi$ is straightforward. \paragraph{Construction of the Hamiltonian} To convert the energy eigenvalues $E_l$ into a quantum circuit, we reformulate them as an Ising Hamiltonian, \begin{align} H_\mathrm{P}=&\sum_{i_1}^n h_{i_1} \sigma_z^{(i_1)} + \sum_{i_1,i_2}^n J_{i_1i_2}\sigma_z^{(i_1)}\sigma_z^{(i_2)}\nonumber\\&+ \sum_{i_1,i_2,i_3}^n J_{i_1i_2i_3}\sigma_z^{(i_1)} \sigma_z^{(i_2)} \sigma_z^{(i_3)} + \dots \label{gspinglass} \end{align} given in terms of on-site fields ($h_i$) and up to $k$-local interactions ($J_{i_1i_2}, J_{i_1i_2i_3}, \dots,J_{i_1i_2i_3\dots i_k}$), that fulfills the requirements of the instances found above.
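The single-target condition can be checked numerically. In the following sketch (ours; the problem size and target bitstring are illustrative) we take $\epsilon_l = \frac{\pi}{2}\Delta(t_1,l)$ with $C=0$ and scan $\beta$ over the four admissible values; for one of them the level-1 circuit concentrates the full probability on the target:

```python
import numpy as np
from functools import reduce

n, t1 = 6, 0b010110                        # illustrative size and target bitstring
ham = lambda a, b: bin(a ^ b).count("1")   # Hamming distance of two bitstrings
# spectrum fulfilling the single-target condition (C = 0, gamma absorbed)
eps = np.array([np.pi/2 * ham(t1, l) for l in range(2**n)])
phased = np.exp(-1j*eps) * np.full(2**n, 1/np.sqrt(2**n), dtype=complex)

def mixer(beta):  # e^{-i beta H_X} as an n-fold Kronecker product
    m = np.array([[np.cos(beta), -1j*np.sin(beta)],
                  [-1j*np.sin(beta), np.cos(beta)]])
    return reduce(np.kron, [m]*n)

betas = (np.pi/4, 3*np.pi/4, 5*np.pi/4, 7*np.pi/4)
probs = [abs((mixer(b) @ phased)[t1])**2 for b in betas]
```

The maximum of `probs` over the four admissible $\beta$ values evaluates to $1$ within floating-point accuracy, i.e.~the outcome is deterministic.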
Here $\sigma_z^{(i)}$ are Pauli-z matrices acting on qubit $i$. To implement the evolution generated by this Hamiltonian, we transform every term into a $k$-qubit gate. To fulfill the above defined conditions on the spectrum, it is necessary to group the states according to their Hamming distance with respect to the target state $\ket{t}$ we would like to find with QAOA. We construct the Ising Hamiltonian with the help of the term \begin{equation}\label{eq:Hammingpauli} \sum\limits_i^n \left(1 - 2 t^{(i)}\right)\sigma_z^{(i)}= n - 2 \tilde{\Delta}(t) \,, \end{equation} where $\tilde{\Delta}(t)$ is the Hamming distance operator, diagonal in the computational basis with eigenvalues given by the Hamming distance of the respective computational basis state to the target state $\ket{t}$, $\tilde{\Delta}(t) \ket{l} = \Delta(t, l) \ket{l}$, and $t^{(i)}$ is the $i$-th entry of the bitstring $t$. We decompose the Ising Hamiltonians for our instances into two parts, \begin{equation}\label{eq:sg-rep} H_p^{\textrm{1-target}} = \frac{\pi}{4}\sum\limits_i^n \left(1 - 2 t_1^{(i)}\right) \sigma_z^{(i)} + H_{2 \pi}\,. \end{equation} The first term fixes the conditions given in Eq.~(\ref{eq:condition1}), and the second term $H_{2 \pi}$ is an arbitrary Ising Hamiltonian with the sole condition that all its eigenvalues are multiples of $2\pi$, which can be arranged for any Ising Hamiltonian by rescaling the energies. This means that we can add a ``watermark'' state $\ket{t}$ to an arbitrary Ising Hamiltonian such that QAOA deterministically creates this state, which can be any computational basis state, not necessarily the ground state. \subsection{SA/QA-hard instances} \label{sec-hardinstances} Among the instances defined above there are problems that are hard to solve for both QA and SA. Both of these methods are heuristics designed to find a state that minimizes the energy of a given Ising Hamiltonian.
For SA one starts in a random computational basis state and performs a random walk in configuration space with Metropolis--Hastings updates, with the goal to relax into low-lying minima of the potential landscape. On the way to the solution, energy barriers can be overcome if their height is of the order of the thermal fluctuations or smaller. When the temperature is lowered slowly, SA finds, in the best-case scenario, the global minimum of the energy landscape. For QA, in comparison, a system is initialized in the superposition of all computational basis states and the magnitude of the quantum fluctuations is decreased until the system settles in a minimum of the potential landscape. Tunneling has been proven to be beneficial in this process \cite{denchev2016computational}. It is, however, known that tunneling through a barrier is exponentially suppressed as a function of the barrier width, while it is only proportional to the inverse of the barrier height. QA therefore shows advantages compared with SA for potential landscapes where minima are separated by thin and tall barriers, while both heuristics fail for minima separated by tall and wide barriers \cite{katzgraber2015seeking}. We therefore identify two requirements for instances that are hard to solve for QA and SA: First, the potential landscape should feature a large number of minima separated by wide barriers, the relevant metric in this case being the Hamming distance. Second, only one minimum should be the global minimum, with all other minima separated by an energy gap large enough that a non-optimal minimum cannot be considered an acceptable solution to the encoded problem. \begin{center} \begin{figure*}[t!]
\centering \includegraphics[width=\textwidth]{totalplot.pdf} \begin{minipage}[t!]{1\linewidth} \caption{(a) The dashed line shows an artificially constructed energy distribution, which fulfills the spectral conditions given in Eq.~(\ref{eq:condition1}) and employs at most 4-local interactions. The solid line shows the normalized density of states w.r.t. the Hamming distance to highlight that the energy landscape is dominated by many suboptimal minima. (b) Overlap distribution for the given spectrum. The peak at $q=1$ denotes the overlap of every minimum with itself. The peak in the red/dark area, however, is the overlap of all suboptimal minima with the global minimum. Its position at $q=0$ means that they are mainly located at a Hamming distance of $n/2$. Following \cite{katzgraber2015seeking}, the peaks at $|q|<0.75$ indicate that both SA and QA will struggle to find the global minimum. Both plots show numerical data for $n=100$.} \label{fig:4local} \end{minipage} \end{figure*} \end{center} In general, we can generate Ising Hamiltonians with arbitrary eigenenergies. However, this could lead to $k$-local interactions up to the maximal $n$-locality. This in turn leads to a decomposition of the problem-Hamiltonian block in the QAOA algorithm with an exponentially growing number of elementary gates. We therefore add the additional requirement of finite $k$-locality of the Ising Hamiltonian, where $k$ is independent of the size of the problem. The instances we found that fulfill the above requirements with at most 4-local terms are the following, \begin{equation} H_{2 \pi}= 2\pi \tilde{\Delta}(t_1)^2 (\tilde{\Delta}(t_1) - (n/2))^2 + H_{2\pi}'\,, \end{equation} where $H_{2\pi}'$ is another arbitrary Ising Hamiltonian with the sole requirement that its eigenenergies are multiples of $2\pi$ and that its interactions are at most 4-local.
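The shell structure of this construction can be tabulated directly (an illustrative sketch of ours for a small system; the figure uses $n=100$):

```python
import numpy as np
from math import comb, pi

n = 10                                   # illustrative size; Fig. 1 uses n = 100
d = np.arange(n + 1)                     # Hamming-distance shells around the target
quartic = 2*pi * d**2 * (d - n/2)**2     # spectrum of H_{2pi} with H'_{2pi} = 0
shell_sizes = np.array([comb(n, k) for k in d])
minima = d[np.isclose(quartic, 0.0)]     # shells on which the quartic term vanishes
```

The quartic term vanishes exactly on the two shells $d=0$ (the watermarked target) and $d=n/2$, the latter containing $\binom{n}{n/2}$ competing states, so already at $n=10$ there are $252$ suboptimal minima.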
The quartic polynomial in the Hamming distance operator ensures that the target state is also a ground state of the Ising Hamiltonian, while at the same time it generates an exponential number $\binom{n}{n/2}$ of minima with Hamming distance $n/2$. These minima are suboptimal because of the first part of Eq.~(\ref{eq:sg-rep}). In Fig.~\ref{fig:4local}~(a), we show the energy distribution as a function of the Hamming distance and the density of states w.r.t. the Hamming distance. The density of states visualizes that an exponentially large fraction of random starting points in classical methods will be close to suboptimal minima. To provide numerical evidence that these constructed instances are hard for both SA and QA, and to make contact with the notions introduced in \cite{katzgraber2015seeking}, we calculate their overlap distributions. The overlap distribution is the probability distribution of \begin{align} q=\frac{1}{n}\sum_i^n s^{(i)}_{\alpha} s^{(i)}_{\beta}\,, \end{align} defined over two replicas, $\alpha$ and $\beta$, of the system in a thermodynamic state. It has been shown that the overlap distribution allows one to draw conclusions about the hardness of combinatorial problems for both Simulated Annealing and Quantum Annealing. Instances with peaks in the overlap distribution at small values of $q$ have been identified as hard to solve for both QA and SA \cite{katzgraber2015seeking}. We calculated the overlap distributions exactly, cf. the Supplementary Material, for the 4-local instances found above with $H_{2\pi}' = 0$. We find perfect alignment of our instances with the heuristics found in \cite{katzgraber2015seeking} for instances that are hard for QA and SA, cf. Fig.~\ref{fig:4local}~(b). \subsection{Classical algorithm} The fully trained version of the QAOA circuit for instances with deterministic outcome as defined above, cf.
Eq.~(\ref{eq:condition1}), does not build up any entanglement, as can be seen from the following observation, \begin{equation} e^{-\mathrm{i}\left(\frac{\pi}{4}\sum\limits_i \left(1 - 2t_1^{(i)}\right)\sigma_z^{(i)} + H_{2\pi}\right)} = \prod_i e^{-\mathrm{i} \frac{\pi}{4} \left(1 - 2t_1^{(i)}\right) \sigma_z^{(i)}}\,, \end{equation} i.e. every gate in the fully optimized QAOA circuit is local. This suggests that there is an efficient classical algorithm to find solutions for these instances. In the following we present an efficient classical algorithm that finds the target state given oracle access to the energies of computational basis states of the Hamiltonian given in Eq.~(\ref{eq:sg-rep}): First, one queries the energy of a random computational basis state. Second, the first spin of the initial state is flipped. The Hamming distance of the resulting state w.r.t. the target state is thereby either increased or decreased by one. If the Hamming distance increased by one, the initial value of the spin was the correct one and we revert the flip; if it decreased by one, we keep the flipped value. To see whether the Hamming distance increased or decreased, we query the energy of the state with the flipped spin and examine the difference to the energy of the initial state modulo $2\pi$. Since the Hamming distance can only decrease or increase by one, the energy difference modulo $2\pi$ is either $\pi/2$ if the Hamming distance increased or $-\pi/2$ (i.e.~$3\pi/2$) if it decreased. We repeat this procedure for every spin and thereby find the target state with $n+1$ queries of the oracle. For a detailed description of the algorithm cf. Fig.~\ref{alg:mapping}. \begin{figure} \begin{algorithm}[H] \caption{Classical algorithm} \begin{algorithmic}[1] \Statex \Function{ClassicalAlgorithm}{$H_P$} \\ \hrulefill \State \textbf{Input}: Oracle access to the energies of an $n$-spin \State \phantom{\textbf{Input}: }Hamiltonian $H_\mathrm{P}$, cf.
Eq.~(\ref{eq:sg-rep}) \State \textbf{Output}: ground state of $H_\mathrm{P}$\\ \hrulefill \State Draw random bitstring $b=(b_1,b_2,\dots,b_n)$ \State Calculate energy $E_b=H_P(b)$ \For{$k \gets 1$ to $n$} \State{Flip spin $k \rightarrow \tilde{b}=(b_1,b_2,\dots,-b_k,\dots,b_n)$} \State Calculate energy $E_{\tilde{b}}=H_P(\tilde{b})$ \If{$(E_{\tilde{b}}-E_{b}) \mod 2\pi =-\pi/2$} \State $b=\tilde{b}$ \EndIf \EndFor \State \Return {b} \EndFunction \end{algorithmic} \end{algorithm} \caption{Pseudo code to find the exact solution of Eq.~(\ref{eq:sg-rep}) classically in $n+1$ queries.} \label{alg:mapping} \end{figure} \section{Two target states}\label{sec:2targets} We start the two-target-state case by showing under which circumstances a solution can exist. To this end we reformulate Eq.~(\ref{eq:condition2}) as, \begin{equation} | \vec{v}_l \cdot \vec{\alpha}| = \frac{1}{\sqrt{2^n}} \qquad \forall \hspace{1mm} l \in \{0,1\}^n\,, \end{equation} with the definition of the vector $[\vec{v}_l]_t=\bra{t} e^{-i\beta H_\mathrm{X}} \ket{l}$. The above equation has to hold for all computational basis states $l$; in particular, it has to hold for the two basis states given by the target states themselves. We start with the case where the Hamming distance between the two target states is odd; the case with even Hamming distance is a trivial adaptation of the following. We take the two equations corresponding to the two target states, i.e. $l_1 = t_1$ and $l_2=t_2$, and set up a linear system of equations for the unknown $\vec{\alpha}$, \begin{multline} \begin{pmatrix} \vec{v}_{t_1}\\ \vec{v}_{t_2} \end{pmatrix} \begin{pmatrix} \alpha_1\\ \alpha_2 \end{pmatrix} = \\ \cos(\beta)^n \begin{pmatrix} 1 & (-\mathrm{i} \tau)^{\Delta(t_1,t_2)} \\ (-\mathrm{i}\tau)^{\Delta(t_1,t_2)} & 1 \end{pmatrix} \begin{pmatrix} \alpha_1\\ \alpha_2 \end{pmatrix}= \begin{pmatrix} \frac{e^{i\varphi_1}}{\sqrt{2^n}}\\ \frac{e^{i\varphi_2}}{\sqrt{2^n}} \end{pmatrix}\,, \end{multline} for some $\varphi_1$ and $\varphi_2$, where $\tau = \tan(\beta)$.
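The linear system above can be checked numerically (our sketch; the bitstring pair with odd Hamming distance is illustrative): inverting the coefficient matrix at $\beta = \pi/4$ for arbitrary right-hand-side phases yields a normalized $\vec{\alpha}$:

```python
import numpy as np

n = 6
t1, t2 = 0b000000, 0b000111              # pair with odd Hamming distance (illustrative)
delta = bin(t1 ^ t2).count("1")
beta = np.pi/4
c = (-1j*np.tan(beta))**delta            # off-diagonal entry of the coefficient matrix
M = np.cos(beta)**n * np.array([[1, c], [c, 1]])
rng = np.random.default_rng(1)
rhs = np.exp(1j*rng.uniform(0, 2*np.pi, 2)) / np.sqrt(2**n)   # arbitrary phases
alpha = np.linalg.solve(M, rhs)
norm2 = np.linalg.norm(alpha)**2         # -> 1 at beta = pi/4, independent of the phases
```

Since for odd $\Delta(t_1,t_2)$ the off-diagonal entry is purely imaginary, the coefficient matrix is proportional to a unitary, so the resulting norm does not depend on the drawn phases.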
Since the Hamming distance $\Delta(t_1,t_2)$ is odd, the coefficient matrix of the above linear system of equations is non-singular and can be inverted to solve for $\vec{\alpha}$. The norm of $\vec{\alpha}$ resulting from this procedure is, \begin{equation} |\alpha_1|^2 + |\alpha_2|^2 = \frac{1}{2^n\cos(\beta)^{2n}}\frac{2}{1 + \tan(\beta)^{2\Delta(t_1,t_2)}}\,, \end{equation} which is equal to $1$ for the same values of $\beta$ as in the case with a single target state, $\beta = \frac{1}{4} \pi, \frac{3}{4} \pi, \frac{5}{4} \pi, \frac{7}{4} \pi$. If the Hamming distance between the target states is even, then we take other computational basis states such that the coefficient matrix is invertible, which is always possible. Once we have found such computational basis states we can proceed through the above steps in complete analogy, with the same feasible $\beta$-values as found above. We have thereby proven that a solution can only exist for the above-cited values of $\beta$, and we proceed to show that a solution actually exists by explicitly calculating it. To calculate the actual $\vec{\alpha}$, and consequently the parametrized spectrum, we take the square of Eq.~(\ref{eq:condition2}) and use the fact that the vector $\vec{\alpha}$ is normalized, \begin{equation}\label{eq:quadCondition} \vec{\alpha}^\dag \mathbf{M}_l\vec{\alpha} = 0\,, \end{equation} where, \begin{equation} \left[\mathbf{M}_l\right]_{t,t'} = \frac{1 - \delta(t,t')}{2^n} (-i)^{\Delta(t,l) - \Delta(t',l)}\,. \end{equation} The above equation has to be fulfilled for all computational basis states $l$. To find out the requirements for this to happen, we first need to make a couple of observations. First, consider the sum and difference of Hamming distances $\Delta(t,l) \pm \Delta(t',l)$. Let us assume we approach the state $l$ with a sequence of bit flips starting from the state $t$, $t\rightarrow l_0 \rightarrow l_1 \rightarrow \dots \rightarrow l$.
Every bit flip changes the sum and the difference of Hamming distances by either $-2$, $0$ or $2$, starting from the Hamming distance $\Delta(t,t')$ of the two target states. Therefore an even (odd) Hamming distance between target states $t$ and $t'$ implies an even (odd) sum and difference of Hamming distances $\Delta(t,l) \pm \Delta(t',l)$. Reformulated as a symmetric rule: among the three pairwise Hamming distances between any three states, either exactly two are odd or none is; the rest are even. Second, the equality above has to hold for all $l$; therefore all contributions from off-diagonal terms have to vanish. The matrix $\mathbf{M}_l$ is Hermitian, and due to the special relationship of the sum and difference of Hamming distances the entries are either $\left[\mathbf{M}_l\right]_{t,t'} = \left[\mathbf{M}_l\right]_{t',t}$ or $\left[\mathbf{M}_l\right]_{t,t'} = - \left[\mathbf{M}_l\right]_{t',t}$. This implies restrictions on the possible choices of $\alpha_t$, \begin{align}\label{eq:hammingdist_angle} \textrm{Re}[\alpha_1 \alpha_2^*] &= 0 \quad \textrm{if} \quad \Delta(t_1,t_2) \quad \textrm{is even} \\ \textrm{Im}[\alpha_1 \alpha_2^*] &= 0 \quad \textrm{if} \quad \Delta(t_1, t_2) \quad \textrm{is odd}\,. \end{align} This means that if the Hamming distance between the two target states is even (odd), then the complex numbers $\alpha_1$ and $\alpha_2$ are perpendicular (parallel) in the Gaussian plane. We proceed for even Hamming distance and parametrize the complex amplitudes according to $\alpha_1 = \cos(\varphi) e^{i \sigma}$ and $\alpha_2 = \pm \mathrm{i} \sin(\varphi)e^{i\sigma}$, which is the most generic parametrization that already fulfills Eq.~(\ref{eq:hammingdist_angle}) and the normalization of $\vec{\alpha}$. The $z_l$ are, \begin{align} z_l &= e^{i\sigma}\left(\cos(\varphi) \left(-i\right)^{\Delta(t_1,l)} \pm i \sin(\varphi)\left(-i\right)^{\Delta(t_2,l)}\right)\\ &= \exp(i(\sigma - \frac{\pi}{2}\Delta(t_1,l)\pm(-1)^{\frac{\Delta(t_2,l)-\Delta(t_1,l)}{2}} \varphi))\,.
\end{align} The spectrum is therefore defined by, \begin{equation}\label{eq:2targetspectrum} \epsilon_l = \frac{\pi}{2}\Delta(t_1,l)\pm (-1)^{\frac{\Delta(t_2,l) - \Delta(t_1,l)}{2}}\varphi\,, \end{equation} for arbitrary $\varphi$, where we have chosen to gauge the spectrum according to $\sigma = C$. With this definition of the energy eigenvalues we can deduce the gate sequence that needs to be executed on an all-to-all connected QPU to implement the propagation with the problem Hamiltonian. The first term on the right-hand side of Eq.~(\ref{eq:2targetspectrum}) can be implemented with local $\sigma_z$ rotations as explained above, cf. Eq.~(\ref{eq:Hammingpauli}). The gate sequence for the second term can be deduced with the technique of Walsh functions \cite{Welch2014}, \begin{equation} \label{eq:2targetwalsh} a_j = \sum\limits_{l \in \{0,1\}^n} (-1)^{\frac{\Delta(t_2,l)-\Delta(t_1,l)}{2}}(-1)^{\sum\limits_{i=1}^n j^{(i)} l^{(i)}}\,. \end{equation} All non-zero $a_j$ for $j\in \{0,1\}^n$ correspond to Walsh operators $\bigotimes\limits_{i=1}^n (\sigma_z^{(i)})^{j^{(i)}}$ that need to be implemented in order to obtain the problem Hamiltonian with the defined spectrum. Eq.~(\ref{eq:2targetwalsh}) can be reformulated as, \begin{equation} a_j = (-1)^{\sum\limits_{i=1}^n\frac{t^{(i)}_1 - t^{(i)}_2}{2}} \sum\limits_{l \in \{0,1\}^n} (-1)^{\sum\limits_{i=1}^n (t^{(i)}_2 - t^{(i)}_1 + j^{(i)})l^{(i)}}\,. \end{equation} From this we can deduce that there is only a single non-vanishing Walsh coefficient, represented by the binary string $j$ that is one at every digit where the target states differ and zero elsewhere. This means that the second term in Eq.~(\ref{eq:2targetspectrum}) can be generated by a $\Delta(t_2,t_1)$-local term that is the tensor product of local $\sigma_z$ operators on all qubits where target state $t_1$ and target state $t_2$ differ.
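The single non-vanishing Walsh coefficient can be verified with a brute-force Walsh transform (our sketch; the bitstrings are illustrative, and the overall normalization of Eq.~(\ref{eq:2targetwalsh}) is omitted):

```python
import numpy as np

n = 6
t1, t2 = 0b000000, 0b011110              # pair with even Hamming distance (illustrative)
ham = lambda a, b: bin(a ^ b).count("1")
f = np.array([(-1)**((ham(t2, l) - ham(t1, l)) // 2) for l in range(2**n)])
# unnormalized Walsh coefficients of the sign function f, cf. Eq. (2targetwalsh)
a = np.array([sum(f[l] * (-1)**bin(j & l).count("1") for l in range(2**n))
              for j in range(2**n)])
nonzero = np.flatnonzero(a)
```

The only non-zero coefficient sits at the bitwise XOR of the two targets, i.e.~the string that is one exactly on the digits where they differ.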
If we assume the generic decomposition of a $k$-local $\sigma_z$-rotation into two ladders of CNOTs and a local rotation $R_\mathrm{Z}$, cf. Fig.~\ref{fig:circuit}, this means that the depth of the QAOA circuit that generates a superposition of two target states scales linearly with the Hamming distance between the two states. \begin{figure}[t!] \centering \begin{tabular}{cc} \begin{tabular}{c}$\mathrm{e}^{\mathrm{i}\alpha\left( \sigma_z^{(1)}\sigma_z^{(2)}\sigma_z^{(3)}\cdots\sigma_z^{(k)}\right)}=$\vspace{2mm}\end{tabular} \begin{tabular}{c} \hspace*{0.25 cm}\Qcircuit @C=0.8em @R=.5em { \lstick{1}& \ctrl{1} & \qw & \qw & & & \qw & \ctrl{1} & \qw \\\lstick{2}& \targ & \ctrl{1} & \qw &\hdots & & \ctrl{1} & \targ & \qw \ \\\lstick{3}& \qw & \targ & \qw & & & \targ & \qw & \qw \\& & & & \vdots & & & & \\& & & & & & & & & \\\lstick{k} & \hdots & & \targ & \gate{R_\mathrm{Z}(\alpha)} & \targ &\qw & \hdots &} \end{tabular} & \end{tabular} \caption{Circuit realizing a $k$-local parametrized $\sigma_z$ rotation.} \label{fig:circuit} \end{figure} \section{More than two target states}\label{sec:manytargets} Based on the assumption that solutions can only be found for $\beta = \frac{1}{4} \pi, \frac{3}{4} \pi, \frac{5}{4} \pi, \frac{7}{4} \pi$, we show in the following that level-1 QAOA cannot map to a genuine superposition of more than $2$ but fewer than $2^n$ target states. By a genuine superposition of target states we mean a superposition in which no amplitude vanishes. Eq.~(\ref{eq:condition2}) can be reformulated as \begin{equation} \cos(\beta)^n |((-i\tau)^{\Delta(t_1,l)}, \dots, (-i\tau)^{\Delta(t_T,l)} ) \vec{\alpha}| = \frac{1}{\sqrt{2^n}}\,. \end{equation} If there is a combination of computational basis states $l$ and $l'$ with $\Delta(t,l) = m + \Delta(t, l')$ for all $t$ and some integer $m$, it is easy to see that $|\tan(\beta)| = 1$, which is only true for $\beta = \frac{1}{4} \pi, \frac{3}{4} \pi, \frac{5}{4} \pi, \frac{7}{4} \pi$.
Based on this observation, as well as the calculation for the two-target-state case and numerical investigations we conducted, we conjecture that the only possible solutions are $\beta = \frac{1}{4} \pi, \frac{3}{4} \pi, \frac{5}{4} \pi, \frac{7}{4} \pi$. Based on this conjecture, what is left to show is that Eq.~(\ref{eq:quadCondition}) can only be fulfilled if all but two entries of $\vec{\alpha}$ vanish. We start by rewriting the equation based on the above findings on Hamming distances, \begin{equation}\label{eq:quad_lsoe} \sum\limits_{t>t'} (-1)^{\left \lfloor{\frac{\Delta(t,l)-\Delta(t',l)}{2}}\right \rfloor } \text{Re/Im}[\alpha_t^*\alpha_{t'}] = 0 \end{equation} where the real or imaginary part of the product of $\vec{\alpha}$ entries is chosen according to the parity of the Hamming distance, cf. Eq.~(\ref{eq:Hammingdist_angle}). If we assume for the moment that all products of $\vec{\alpha}$ entries are independent, then Eq.~(\ref{eq:quad_lsoe}) can be seen as a homogeneous linear system of equations with coefficient matrix $[(-1)^{\left \lfloor{\frac{\Delta(t,l)-\Delta(t',l)}{2}}\right \rfloor }]_{\left\{t,t'\right\},l}$. The difference of Hamming distances is bounded from below and above by $-\Delta(t,t')\leq \Delta(t,l) -\Delta(t', l) \leq \Delta(t,t')$, i.e. the rows of the coefficient matrix run through all possible combinations of $-1$ and $1$. Therefore the coefficient matrix has full rank and the only possible solution to the linear system of equations is the trivial one, \begin{align}\label{eq:Hammingdist_angle} \textrm{Re}[\alpha_t \alpha_{t'}^*] &= 0 \quad \textrm{if} \quad \Delta(t,t') \quad \textrm{is even} \\ \textrm{Im}[\alpha_t \alpha_{t'}^*] &= 0 \quad \textrm{if} \quad \Delta(t, t') \quad \textrm{is odd}\,. \end{align} Assume now a situation where there are more than two target states. Then it is always possible to choose three of them. Further assume that all three complex amplitudes $\alpha_i$ for these three target states are non-vanishing.
As we already know, there are two cases: either all Hamming distances between them are even, or two Hamming distances are odd and the remaining one is even. If all Hamming distances are even, then the real part has to vanish for all products of complex amplitudes $\alpha_t^*\alpha_{t'}$. This means that the complex numbers $\alpha_1$, $\alpha_2$ and $\alpha_3$ must all be mutually perpendicular in the Gaussian plane, which is not possible in two dimensions. A straightforward adaptation of the above reasoning also precludes the possibility of a solution in the case where two of the Hamming distances are odd. \section{IQP circuits} \label{sec:iqp} IQP circuits are a non-universal quantum computational paradigm. They serve as a tool in complexity-theoretic proofs to discern quantum from classical computational power. Efficient classical sampling from the output distribution of IQP circuits has been shown to be \#P-hard, even for approximate sampling or sampling from IQP circuits with noise \cite{PhysRevLett.117.080501,bremner2017achieving}. IQP circuits are very similar to level-1 QAOA circuits: an arbitrary diagonal unitary transform $e^{-i H_p^{(\textrm{IQP})}}$, generated by the diagonal Hamiltonian $H_p^{(\textrm{IQP})}$, is applied to the equal superposition of all computational basis states $\ket{+}$, followed by Hadamard gates on all qubits and a measurement in the computational basis, i.e. the IQP output state is, \begin{equation} \ket{\text{IQP}}=\frac{1}{\sqrt{2}} \begin{pmatrix} 1& 1 \\ 1 & -1 \end{pmatrix}^{\otimes n} e^{-iH_p^{(\text{IQP})}}\ket{+}\,. \end{equation} We can transfer all findings of the preceding sections, cf. Sec.~\ref{sec:spectralprops}, to IQP circuits because of their similarity to level-1 QAOA circuits.
Therefore the question that we answer in the following is whether there are complex amplitudes $\alpha_t$ for $t=1, \dots, T$, normalized as $\sum_t |\alpha_t|^2 = 1$, such that \begin{equation} 1 \overset{!}{=}|\left(\sum\limits_{t=1}^T \alpha_t \left\langle t \right| \right)\ket{\text{IQP}}|^2\,. \end{equation} Similar to the findings for QAOA, we can derive from this two sets of equations: one that defines the spectrum, \begin{equation}\label{eq:IQP_cond2} \left(\epsilon_l + \textrm{arg}\left(\sum\limits_{t=1}^T \alpha_t (-1)^{\sum\limits_{i=1}^n t^{(i)} l^{(i)}}\right)\right) \,\textrm{mod}\, 2\pi = C \,, \end{equation} and one that ensures the existence of a solution, \begin{equation}\label{eq:IQP_cond1} |\sum\limits_{t=1}^T \alpha_t (-1)^{\sum\limits_{i=1}^n t^{(i)} l^{(i)}}| = 1\,. \end{equation} For the single-target-state case the condition for the existence of a solution, cf. Eq.~(\ref{eq:IQP_cond1}), is trivially fulfilled and the spectrum is defined by $\epsilon_l = \pi \sum_i t^{(i)} l^{(i)}$, which can be implemented by the 1-local IQP Hamiltonian \begin{equation} H_p^{(\textrm{IQP})}= \frac{\pi}{2} \sum\limits_{i=1}^n t^{(i)}\sigma_z^{(i)}\,, \end{equation} which has the same spectrum up to an overall phase factor. In the 2-target-state case the solution exists if $\mathrm{Re}[\alpha_1^*\alpha_2]=0$ holds. The parametrization $\alpha_1 = i \sin(\varphi)$, $\alpha_2 = \cos(\varphi)$ automatically ensures the existence and normalization of the solution.
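As a quick numerical sanity check (not part of the original derivation; the system size and target bitstring below are arbitrary illustrative choices), the single-target construction can be verified directly: with $\epsilon_l = \pi \sum_i t^{(i)} l^{(i)}$ the IQP circuit maps $\ket{+}$ exactly onto the target bitstring.

```python
import numpy as np
from itertools import product

def iqp_state(phases):
    """Hadamards on all qubits applied to e^{-i diag(phases)} |+>^n."""
    n = int(np.log2(len(phases)))
    amp = np.exp(-1j * phases) / np.sqrt(2**n)   # diagonal phases on |+>^n
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H1
    for _ in range(n - 1):
        Hn = np.kron(Hn, H1)
    return Hn @ amp

n = 3
t = np.array([1, 0, 1])                          # example target bitstring
basis = np.array(list(product([0, 1], repeat=n)))
phases = np.pi * (basis @ t)                     # eps_l = pi * sum_i t_i l_i
psi = iqp_state(phases)
target_index = int("".join(map(str, t)), 2)
print(abs(psi[target_index])**2)                 # unit overlap with |t>
```

The overlap evaluates to $1$ up to floating-point error, confirming that the 1-local phase pattern alone suffices in the single-target case.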
With this parametrization we can deduce the spectrum to be \begin{equation} \epsilon_l = \pi \sum\limits_{i=1}^n t_1^{(i)} l^{(i)} + (-1)^{\sum\limits_{i=1}^n t_1^{(i)} l^{(i)}-\sum\limits_{i=1}^n t_2^{(i)} l^{(i)}}\varphi\,. \end{equation} The first summand in the definition of the spectrum can be implemented with the Hamiltonian found in the 1-target-state case; for the second summand we again resort to the technique of Walsh functions, which yields the full Hamiltonian \begin{equation} H_p^{(\textrm{IQP})}=\pi \sum\limits_{i=1}^n(1-t^{(1)}_i)\sigma_z^{(i)} + \varphi \bigotimes\limits_{i=1}^n \left(\sigma_z^{(i)}\right)^{\frac{1-t^{(1)}_i t^{(2)}_i}{2}}\,. \end{equation} In complete analogy to level-1 QAOA we find that the depth of the circuit implementing the phase separator grows linearly with the Hamming distance between the two target states $t^{(1)}$ and $t^{(2)}$, assuming the standard decomposition into a ladder of CNOTs and a local parametrized $R_z$-gate on an all-to-all connected QPU. If there are more than two target states we have to find solutions to the quadratic equation \begin{equation}\label{eq:manytarget_IQP_existence} \vec{\alpha}^\dag M_l \vec{\alpha} = 0\,, \end{equation} where $[M_l]_{t,t'}= (-1)^{f(t,l)}(-1)^{f(t',l)}$. If we assume for a moment that all $\mathrm{Re}[\alpha_t^* \alpha_{t'}]$ are independent, we may think of Eq.~(\ref{eq:manytarget_IQP_existence}) as a linear equation for the independent $\mathrm{Re}[\alpha_t^* \alpha_{t'}]$. The coefficient matrix for this system of linear equations has full rank and consequently the only solution is $\mathrm{Re}[\alpha_t^* \alpha_{t'}]=0$ for all combinations of $t$ and $t'$ with $t\neq t'$. This, however, as argued above, is only possible if all but two of the $\alpha_t$ vanish. \section{Decay under Disorder} In this final section we investigate the effects of disorder in the implementation of the Ising Hamiltonian $H_p$ on the unit overlap of the deterministic QAOA and IQP settings introduced above.
To this end we assume a fixed but random shift acting on every eigenenergy, $\epsilon_l \to \tilde{\epsilon}_l = \epsilon_l + \delta_l$. As a generic assumption on QPU control errors we take all $\delta_l$ to be i.i.d. Gaussian random variables with mean $0$ and variance $\sigma^2$, i.e. $\delta_l \sim \frac{1}{\sqrt{2\pi \sigma^2}} e^{-\frac{\delta_l^2}{2\sigma^2}}$. To treat QAOA and IQP circuits on a common footing, we introduce the unitaries $U_{\textrm{QAOA}}$ and $U_{\textrm{IQP}}$ such that $\ket{\Psi(\beta,\gamma)}= U_{\textrm{QAOA}}\ket{+}$ and $\ket{\text{IQP}} = U_{\textrm{IQP}}\ket{+}$. With these definitions we can derive a relation between the unitaries $\tilde{U}_{\textrm{QAOA}}$ and $\tilde{U}_{\textrm{IQP}}$, defined with respect to the perturbed eigenenergies, and the respective versions with the unperturbed eigenenergies, \begin{equation} \tilde{U}_{\mathrm{QAOA/IQP}} \ket{+} = \frac{1}{\sqrt{2^n}}\sum\limits_{l\in\{0,1\}^n} e^{-i\delta_l} U_{\mathrm{QAOA/IQP}} \ket{l}\,. \end{equation} With this we can evaluate the expectation value, taken over all possible deviations from the exact eigenenergies, of the overlap of the output state of the perturbed QAOA or IQP circuit with the projector on the target subspace, \begin{align} &\mathbb{E}\left[\mathrm{tr}\left[\sum\limits_{t=1}^T\ket{t}\bra{t} \tilde{U}_{\mathrm{QAOA/IQP}}\ket{+}\bra{+}\tilde{U}_{\mathrm{QAOA/IQP}}^\dag\right]\right]\nonumber\\ &=\frac{T}{2^n} + e^{-\sigma^2}\left(1- \frac{T}{2^n}\right)\,. \end{align} As expected, we recover the unit overlap if the disorder variance $\sigma^2$ vanishes. The overlap decreases exponentially with the variance of the disorder, making the unit overlap a volatile property. For completely random eigenenergies, $\sigma \to \infty$, the probability to end up in one of the target states is the ratio of the dimension of the target subspace, spanned by one ($T=1$) or two ($T=2$) computational basis states, and the dimension of the entire Hilbert space, $2^n$.
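The decay law can be checked with a small Monte Carlo simulation (an illustrative sketch; the system size, disorder strength and sample count are arbitrary choices). For the single-target IQP construction, every amplitude contributing to $\bra{t}\tilde{U}_{\textrm{IQP}}\ket{+}$ equals $2^{-n}$, so the perturbed overlap reduces to $|2^{-n}\sum_l e^{-i\delta_l}|^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, samples = 3, 0.8, 40000
dim = 2**n

# For the 1-target IQP circuit the noiseless amplitudes are all 1/2^n,
# so the perturbed overlap per disorder realization is
# |(1/2^n) * sum_l exp(-i delta_l)|^2 with delta_l ~ N(0, sigma^2).
delta = rng.normal(0.0, sigma, size=(samples, dim))
overlap = np.abs(np.exp(-1j * delta).sum(axis=1) / dim) ** 2

mc = overlap.mean()
theory = 1 / dim + np.exp(-sigma**2) * (1 - 1 / dim)  # T = 1 case
print(mc, theory)
```

The sample mean agrees with $\frac{T}{2^n} + e^{-\sigma^2}(1-\frac{T}{2^n})$ to within statistical error.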
\section{Conclusion} \label{sec:conclusion} Based on the premise of deterministic QAOA, i.e. a setting in which the level-1 QAOA circuit is able to map to a genuine superposition of a set of target states, we were able to derive a number of fundamental insights. If there is a single target state, the set of specially constructed optimization problems contains cases where both Quantum Annealing (QA) and Simulated Annealing (SA) fail. Consequently, our results define a first demarcation line between QAOA on one side and SA and QA on the other. These results highlight the fundamental differences between heuristics designed to find the minimum of potential landscapes, such as QA and SA, and an interference-based algorithm such as QAOA, where all states that are not the target state interfere destructively while only the amplitudes of the target state add up constructively. This points to a new research direction: encodings of combinatorial optimization problems into problem Hamiltonians $H_p$ where the desired solution is not necessarily the ground state but rather exceptional in its interference. In the two-target-state case we were able to derive a parametrized family of problem Hamiltonians that generate level-1 QAOA circuits that can prepare an arbitrary superposition of two target states. With the technique of Walsh functions we were further able to show that the level-1 QAOA circuit depth has to grow linearly with the Hamming distance of the target states, which contributes to ongoing research on the relation between the depth of a circuit and its computational power \cite{napp2019efficient, bravyi2018quantum}. Finally, we argued that it is not possible for a level-1 QAOA circuit to map to a genuine superposition of more than $2$ but fewer than $2^n$ target states, and we transferred all of our findings to the quantum computational paradigm of IQP circuits.
\label{sec-summary} \section*{ACKNOWLEDGMENTS} The authors would like to thank Eddie Farhi and Jeffrey Goldstone for useful comments and enlightening discussions. We thank VW Group CIO Martin Hofmann and Florian Neukart, Director of Advanced Technologies and IT Strategy, Volkswagen Group of America, who enabled our research. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 828826. \bibliographystyle{unsrt} \section{Overlap distribution}\label{supp-mat:overlap} Here we calculate an exact expression for the overlap distribution in the case where the problem Hamiltonian is defined via the Hamming distance to the target state $\ket{t}$. The overlap is defined by \begin{align} q=\frac{1}{N}\sum_{i=1}^N s_i^{(\alpha)} s_i^{(\beta)}, \end{align} where $\alpha$ and $\beta$ label two replicas of the system. The probability distribution for each combination of states of the two replicas is given by the product of two Gibbs distributions at the same temperature. We calculate the probability distribution of $q$ by summing over all combinations of states of the two replicas that amount to the same value of $q$, \begin{align} P(q)=\sum_{i,j} \mathrm{e}^{-\beta H(s^{(i)})}\mathrm{e}^{-\beta H(s^{(j)})}\delta_{s^{(i)}\cdot s^{(j)},\,qN}. \label{overlapsum} \end{align} We note that the value $q$ is directly related to the Hamming distance, $\Delta H$, between two computational states, $q=(N-2\Delta H)/N$. For an arbitrary Hamiltonian, evaluating this sum requires exponentially many classical resources, as there are, in general, exponentially many products of different energy pairs. However, we consider a problem Hamiltonian that depends solely on the Hamming distance to the target state. Therefore we only have $N+1$ different energies and consequently only $N(N+1)/2$ different Gibbs weight pairs. In the following, we show how to calculate the overlap distribution with polynomial resources in this case.
To find all possibilities to create a certain value of $q$, we have to sum over all possibilities to create a certain distance $\Delta H$ from every possible computational state $s^{(i)}$. Let us have a look at an example where our target state is $\ket{t}=\ket{0}^{\otimes N}$. If the system is in a computational state $l$, then $\Delta_t(l)$ qubits are in the $\ket{1}$ state and consequently $N-\Delta_t(l)$ qubits are in the $\ket{0}$ state. Here $\Delta_t(l)$ denotes the Hamming distance of the state $l$ from the target state. We now have to sum over all possible combinations, leaving us with \begin{align} P(q)=&\sum_{\Delta_t=0}^{N}\sum_{K=0}^{\Delta H}\binom{N}{\Delta_t}\binom{N-\Delta_t}{\Delta H-K}\binom{\Delta_t}{K} \mathrm{e}^{-\beta E(\Delta_t)}\mathrm{e}^{-\beta E(\Delta_t+\Delta H-2K)}. \end{align} This expression has only polynomially many terms and can be computed efficiently. Usually, the overlap distribution is found by performing many simulated annealing runs at different temperatures. In our case, we can set the temperature arbitrarily. We have to make sure to choose the temperature such that we avoid both thermodynamic limits, where i) the ground state is occupied with certainty or ii) all states are equally likely. Therefore we tune the temperature to a regime where neither case i) nor case ii) occurs. In this regime, the overlap distribution gives insight into the energy landscape of the problem and the capability of QA and SA to solve the problem. \label{appendixoverlapdist}
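The polynomial-cost expression can be cross-checked against brute-force enumeration for a small system. The sketch below is illustrative only: the linear energy function $E(\Delta)=\Delta$, the system size $N=4$, and the inverse temperature $\beta=0.7$ are arbitrary choices, not values from the text. It compares the unnormalized weight at fixed Hamming distance $\Delta H$ with a direct sum over all pairs of states (target state $\ket{0}^{\otimes N}$).

```python
from itertools import product
from math import comb, exp

def weight_poly(N, dH, beta, E):
    # Appendix formula: sum over the distance-to-target dt of the first
    # replica and the number K of flipped bits that lie among those dt.
    total = 0.0
    for dt in range(N + 1):
        for K in range(min(dt, dH) + 1):
            if dH - K > N - dt:
                continue
            total += (comb(N, dt) * comb(N - dt, dH - K) * comb(dt, K)
                      * exp(-beta * E(dt)) * exp(-beta * E(dt + dH - 2 * K)))
    return total

def weight_brute(N, dH, beta, E):
    # Direct sum over all pairs of computational states at distance dH.
    states = list(product([0, 1], repeat=N))
    return sum(exp(-beta * E(sum(s))) * exp(-beta * E(sum(r)))
               for s in states for r in states
               if sum(a != b for a, b in zip(s, r)) == dH)

N, beta = 4, 0.7
E = lambda d: d  # energy = Hamming distance to target (illustrative)
for dH in range(N + 1):
    assert abs(weight_poly(N, dH, beta, E) - weight_brute(N, dH, beta, E)) < 1e-9
print("polynomial formula matches brute force")
```

The combinatorial factors count the $\binom{N}{\Delta_t}$ first-replica states at distance $\Delta_t$ and, for each, the ways to reach a second replica at distance $\Delta H$.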
\section{Introduction} Dirac semimetals (DSMs) are currently drawing intense interest in condensed matter and materials physics~\cite{wang2012dirac,liu2014discovery,liu2014stable,liang2015ultrahigh,borisenko2014experimental,chang2017type,huang2017black,milicevic2018tilted,young2012dirac,gibson2015three,young2015dirac,chen2015optical,wieder2016double,novak2015large,zhu2019composite}. They possess four-fold-degenerate Dirac points (DPs) close to the Fermi level. A Dirac point can be regarded as two Weyl points~\cite{wan2011topological,burkov2011weyl,xu2015discovery,weng2015weyl,lu2015experimental,lv2015experimental,soluyanov2015type} with opposite chirality meeting at the same $k$-point. DSMs provide a fertile ground for exploring relativistic particles and high-energy phenomenology at the far more accessible scale of solid-state physics. Existing research shows that the Dirac fermions in crystals include linear Dirac fermions and higher-order Dirac fermions. To date, three kinds of linear Dirac fermions have been discovered in crystals, all of which can be described by one model~\cite{huang2017black}: \begin{equation}\label{eq:model type 1 to 3} H(\mathbf{k}) = \left[ \begin{array}{cc} h(\mathbf{k}) & \mathbf{0} \\ \mathbf{0} & h^*(\mathbf{-k}) \end{array} \right ] \end{equation} with \begin{equation} h(\mathbf{k})=\mathbf{v}\cdot \mathbf{k}\sigma_{0}+\sum_{i,j}k_{i}A_{ij}\sigma_{j}, \end{equation} where $\sigma_{0}$ is the identity matrix and $\sigma_{j}$ are Pauli matrices. The energy spectrum of a DP is \begin{equation} \begin{aligned} E_{\pm}(\mathbf{k}) & = \sideset{}{_i}\sum v_{i}k_{i} \pm \sqrt{\sideset{}{_j}\sum (\sideset{}{_i}\sum k_{i}A_{ij})^2} \\ & = T(\mathbf{k}) \pm U(\mathbf{k}).
\end{aligned} \end{equation} It is known that the band crossing point is a type-\uppercase\expandafter{\romannumeral2} DP if there exists a direction for which $T > U$~\cite{chang2017type,yan2017lorentz,huang2016type,zhang2017experimental}; otherwise it is a type-\uppercase\expandafter{\romannumeral1} DP~\cite{wang2012dirac,liu2014stable,xiao2017manipulation,liu2019engineering,li2014gapless,ulstrup2014ultrafast}. If and only if for a particular direction $\hat{k}$ in reciprocal space $T(\hat{k}) = U(\hat{k})$, but $T(\hat{k}) < U(\hat{k})$ for all other directions, the DPs are connected by a line-like Fermi surface, which is the Dirac line of the type-\uppercase\expandafter{\romannumeral3} DSM~\cite{huang2017black,milicevic2019type,jin2020hybrid,gong2020theoretical}. The bands at the energy of the DPs and the corresponding Fermi surfaces are shown in Fig. \ref{type 1 to 3}. One can find that the bands of these three types of Dirac semimetals near the DPs are all cones and the Fermi surfaces are line-like (for a type-\uppercase\expandafter{\romannumeral1} DSM, the Fermi surface degenerates to a point). \begin{figure}[h] \centering \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type1band} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type2band} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type3band} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type1fs} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type2fs} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.5cm]{type3fs} \end{minipage} } \centering \caption{(a) Type-\uppercase\expandafter{\romannumeral1} DP with a point-like Fermi surface.
(b) Type-\uppercase\expandafter{\romannumeral2} DP as the contact point between an electron and a hole pocket. (c) Type-\uppercase\expandafter{\romannumeral3} DP as the touching point between Dirac lines. The light yellow semitransparent plane corresponds to the position of the Fermi level. (d)-(f) Fermi surfaces of the three types of DSMs.} \label{type 1 to 3} \end{figure} Dirac points with nonlinear (higher-order) energy dispersion have also been studied in several works. Zihao Gao $et\,al$. studied the classification of stable Dirac and Weyl semimetals with reflection and rotational symmetry and pointed out that there are two kinds of Dirac semimetals, created via ABC and TBC~\cite{gao2016classification}. Bohm-Jung Yang and Naoto Nagaosa proposed a framework to classify three-dimensional (3D) DSMs in systems having time-reversal, inversion and uniaxial rotational symmetries, and found that quadratic and cubic Dirac points can exist in systems having $C_6$ rotational symmetry with respect to the $z$ axis~\cite{yang2014classification}. Weikang Wu $et\,al$. performed a systematic search over all 230 space groups with time-reversal symmetry and spin-orbit coupling considered, and found that the order of dispersion cannot be higher than three, i.e., only quadratic and cubic Dirac points (QDPs and CDPs) are possible~\cite{wu2020higher}. Wing Chi Yu $et\,al$. showed that a cubic Dirac point involving a threefold axis can be realized close to the Fermi level in the non-ferroelectric phase of LiOs$\mathrm{O}_3$~\cite{yu2018nonsymmorphic}. Qihang Liu and Alex Zunger found that only materials in two space groups, P$6_3/$m (No. 176) and P$6/$mcc (No. 192), have the potential to host cubic Dirac fermions~\cite{liu2017predicted}. Recently, scientists discovered type-\uppercase\expandafter{\romannumeral3} Weyl points in (TaSe$_4$)$_2$I~\cite{li2019type}.
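The distinction between the types can be made concrete with a small numerical sketch (illustrative only; the tilt vector $\mathbf{v}$ and matrix $A$ below are hypothetical examples, not material parameters): sample directions $\hat{k}$ and compare the tilt term $T(\hat{k}) = \mathbf{v}\cdot\hat{k}$ with $U(\hat{k}) = \sqrt{\sum_j (\sum_i \hat{k}_i A_{ij})^2}$.

```python
import numpy as np

def classify_dp(v, A, ndirs=2000, seed=0):
    """Classify a Dirac point from E±(k) = T(k) ± U(k): it is type-II if
    the tilt dominates, T > U, along some direction; otherwise type-I."""
    rng = np.random.default_rng(seed)
    k = rng.normal(size=(ndirs, 3))
    k /= np.linalg.norm(k, axis=1, keepdims=True)    # random unit directions
    T = k @ np.asarray(v, dtype=float)               # tilt term v . k
    U = np.linalg.norm(k @ np.asarray(A, dtype=float), axis=1)
    return "type-II" if np.any(T > U) else "type-I"

print(classify_dp([0, 0, 0], np.eye(3)))   # untilted cone -> type-I
print(classify_dp([0, 0, 2], np.eye(3)))   # strongly tilted -> type-II
```

The type-III boundary case ($T=U$ along exactly one direction) is a measure-zero condition and would require a dedicated equality check rather than random sampling.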
In (TaSe$_4$)$_2$I, the Fermi surface near the Weyl point consists of either two electron or two hole pockets; both pockets originate from the same band, which has a saddle-like shape. This discovery inspired us to ask whether a similar situation exists in DSMs. In this work, we investigate the coexistence of type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} Dirac fermions in SrAgBi. Research interest in SrAgBi family materials has remained strong~\cite{gibson2015three,chen2017ternary,chen2017hybrid,sasmal2020magnetotransport,chaiconfirmation,mardanya2019prediction,xu2020crystal,nakayama2020observation,tai2020anisotropic}. SrAgBi family materials were first predicted to host DPs in Ref. \cite{gibson2015three}. Later, CaAgBi was predicted to host coexisting type-\uppercase\expandafter{\romannumeral1} and type-\uppercase\expandafter{\romannumeral2} Dirac fermions in Ref. \cite{chen2017ternary}. Here, we study a new type of Dirac fermion in SrAgBi, which we dub the type-\uppercase\expandafter{\romannumeral4} Dirac fermion. At the energy of the type-\uppercase\expandafter{\romannumeral4} Dirac fermions, the Fermi surface consists of an electron pocket and a hole pocket, similar to type-\uppercase\expandafter{\romannumeral2} Dirac fermions, but the bands are non-linear. The electron pocket lies inside the hole pocket and the two touch at the DPs. The Fermi arcs of both DPs are clearly resolved. Meanwhile, we use an eight-band $k{\cdot}p$ model to describe the band crossings near the Fermi level along $\Gamma$-A. More importantly, we reproduce the bands of SrAgBi near the Fermi level with a tight-binding model. The Fermi surface of this model is consistent with the result of first-principles band-structure calculations.
\section{COMPUTATION METHODS} In this paper, density-functional theory (DFT) calculations with the projector augmented-wave (PAW) method are performed as implemented in VASP~\cite{perdew1981self,kresse1996efficiency} with the generalized gradient approximation (GGA)~\cite{perdew1996generalized}. Spin-orbit coupling (SOC) was included in the electronic structure calculations. The tight-binding model matrix elements are calculated by projecting onto Wannier orbitals~\cite{marzari1997maximally,souza2001maximally,mostofi2014updated}. We used Sr $d$-orbitals, Ag $d$-orbitals and Bi $p$-orbitals as initial wave functions for maximizing localization. Surface spectra were calculated based on the iterative Green's function with the help of WannierTools~\cite{wu2018wanniertools}. The eight-band tight-binding model is calculated by PythTB~\cite{yusufaly2013tight}. \section{RESULTS} \subsection{Electronic structure} The SrAgBi family of compounds crystallizes in a hexagonal Bravais lattice with space group $D_{6h}^{4}$ ($P6_{3}/mmc$, No. 194), as shown in Figs. \ref{bravais lattice}(a) and \ref{bravais lattice}(b). The crystal structure can be viewed as stacked graphene-like layers; the Sr$^{2+}$ cations are stacked between the honeycomb [Ag$^{1+}$Bi$^{3-}$]$^{2-}$ networks. Fig. \ref{bands and pocket}(a) shows the bulk and surface Brillouin zones (BZ) of the SrAgBi crystal. The type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} DPs are marked by green points and blue points, respectively.
\begin{figure}[h] \centering \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3.5cm]{unitcell} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3.5cm]{topcell} \end{minipage} } \centering \caption{(a) Side view and (b) top view of the crystal structure of SrAgBi.} \label{bravais lattice} \end{figure} \begin{figure}[htp] \centering \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=7.0cm]{BZboth} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=5.0cm]{banddft} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3cm,height=2cm]{banddftGA} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3cm]{banddft3D} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=4cm]{pocket3D} \end{minipage} } \centering \caption{(a) Bulk BZ and the projected surface BZ of the $(100)$ plane. (b), (c) Band structure along high-symmetry lines and (d) on the $k_y$-$k_z$ plane.
(e) Bulk-state equal-energy contours at $E = 0.1043$ eV.} \label{bands and pocket} \end{figure} \begin{figure*}[htp] \centering \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs0000} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs0037} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs0050} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs0070} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs0080} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=6cm,height=5cm]{fs1043} \end{minipage} } \centering \caption{Bulk constant-energy contours in $(k_y,k_z)$ space at $k_x = 0$ for different energies around the Fermi level.} \label{2d pocket 6} \end{figure*} We now present the calculated band structure of SrAgBi to reveal the DPs and their type-\uppercase\expandafter{\romannumeral4} character. The calculated bulk band structure along high-symmetry directions [Fig. \ref{bands and pocket}(b)] reveals the semimetallic ground state. The band structure shows that there are two DPs along the $\Gamma$-A direction. We mark these two DPs in Fig. \ref{bands and pocket}(c): the blue one is the type-\uppercase\expandafter{\romannumeral4} DP and the green one is the type-\uppercase\expandafter{\romannumeral2} DP. Figure \ref{bands and pocket}(d) shows the 3D band structure of the valence and conduction bands, which touch at the type-\uppercase\expandafter{\romannumeral4} DPs. One can find that the band structure near a type-\uppercase\expandafter{\romannumeral4} DP is not a perfect cone and cannot be described by equation (\ref{eq:model type 1 to 3}).
Because a type-\uppercase\expandafter{\romannumeral4} Dirac node exists between the $\Gamma$ and A points, and the two planes $k_z$ = 0 and $k_z$ = $\pi$ are fully gapped, the two-dimensional (2D) topological invariants of the two planes must be topologically distinct. For illustration, one can consider their 2D $Z_2$ invariants~\cite{kane2005z}, $\nu_0$ and $\nu_\pi$. Using the Fu-Kane method~\cite{fu2007topological}, one finds that ($\nu_0$, $\nu_\pi$) = (1, 0). These two invariants indicate a single band inversion at $\Gamma$. Because the type-\uppercase\expandafter{\romannumeral2} DP and the type-\uppercase\expandafter{\romannumeral4} DP are formed by the intersection of three adjacent bands and are very close to each other in the BZ, we think it is meaningful to study the relationship between these two points. Fig. \ref{2d pocket 6} shows the evolution of the bulk-state equal-energy contours for different values of $E$; the blue ``+'' marks the position of the type-\uppercase\expandafter{\romannumeral4} DP and the green ``+'' marks the position of the type-\uppercase\expandafter{\romannumeral2} DP. $e^+$ and $e^-$ denote the hole pocket and the electron pocket, respectively. When $E$ = 0.0 eV [Fig. \ref{2d pocket 6}(a)], there is an electron pocket inside a hole pocket. When $E$ = 0.037 eV [Fig. \ref{2d pocket 6}(b)], the energy of the type-\uppercase\expandafter{\romannumeral4} DP, the electron pocket touches the hole pocket; the touching points are the type-\uppercase\expandafter{\romannumeral4} DPs. As the energy continues to increase, the electron pocket disappears, and when $E$ = 0.1043 eV [Fig. \ref{2d pocket 6}(f)], the other two pockets touch at the type-\uppercase\expandafter{\romannumeral2} DPs. \subsection{Fermi arcs and surface states} Fermi-arc electron states on the surface of the crystal are a signature of the Dirac semimetal state. In our case, we present calculations of the (100) surface states.
Figures \ref{surface state}(a) and \ref{surface state}(b) show the (100) Fermi surfaces at the energies of the type-\uppercase\expandafter{\romannumeral4} and type-\uppercase\expandafter{\romannumeral2} DPs, respectively. There exist Fermi arcs, marked by the arrows, which terminate at the projected DPs. Figure \ref{surface state}(c) shows an enlarged view of the area highlighted by the black box in Fig. \ref{surface state}(b). Figure \ref{surface state}(d) shows the energy dispersion of the surface band structure along the $\bar{\Gamma}$-$\bar A$ ($k_z$) direction. The projected Dirac points are denoted by a blue ``+'' and a green ``+'', respectively. We observe surface states that emerge out of the DPs at $k_z$=0.406($2\pi/c$) and $k_z$=0.438($2\pi/c$). \begin{figure}[] \centering \includegraphics[width=.49\textwidth]{arctype4} \includegraphics[width=.49\textwidth]{arctype2} \includegraphics[width=.49\textwidth]{arctype2large} \includegraphics[width=.49\textwidth]{surfacestate} \caption{Surface Fermi surface with SOC (a) at the energy of the type-\uppercase\expandafter{\romannumeral4} DPs and (b) at the energy of the type-\uppercase\expandafter{\romannumeral2} DPs. (c) An enlarged view of the area highlighted by the black box in (b). (d) Surface band structure of SrAgBi along the $\bar{\Gamma}$-$\bar A$ direction in the $(100)$ surface BZ.} \label{surface state} \end{figure} \section{MODELS} \subsection{Effective model of type-\uppercase\expandafter{\romannumeral4} Dirac fermions} To characterize the type-\uppercase\expandafter{\romannumeral4} Dirac fermion (marked by D1 in Fig. \ref{bands and pocket}(c)), we construct a $k{\cdot}p$ effective model around it, subject to the symmetry constraints~\cite{wang2012dirac,bradley2009mathematical,mardanya2019prediction}. On the $\Gamma$-A path, the point group symmetry is $C_{6v}$.
Using the $\Delta_{9}$ and $\Delta_{7}$ states of the $C_{6v}$ group as the basis with components $\Delta_{9}(+3/2)$, $\Delta_{9}(-3/2)$, $\Delta_{7}(+1/2)$, $\Delta_{7}(-1/2)$, the Hamiltonian around D1 to $\mathcal{O}(k_{x,y}^2)$ and $\mathcal{O}(k_z)$ is given by \begin{equation} H(\mathbf{k}) = \left[ \begin{array}{cccc} \epsilon_{1}(\mathbf{k}) & 0 & Ck_{+}^{2} & D(k_z)k_{+} \\ 0 & \epsilon_{1}(\mathbf{k}) & -D(k_z)k_{-} & Ck_{-}^{2} \\ Ck_{-}^{2} & -D(k_z)k_{+} & \epsilon_{2}(\mathbf{k}) & 0 \\ D(k_z)k_{-} & Ck_{+}^{2} & 0 & \epsilon_{2}(\mathbf{k}) \end{array} \right ], \end{equation} where $k_{\pm}=k_{x}\ {\pm}\ ik_{y}$, $\epsilon_{1}(\mathbf{k})=A_{1}k_{+}k_{-}+B_{1}k_{z}$, $\epsilon_{2}(\mathbf{k})=A_{2}k_{+}k_{-}+B_{2}k_{z}$, and $D(k_{z})=D_{0}+D_{1}k_{z}$. The eigenvalues of the above Hamiltonian are \begin{equation} \begin{aligned} &E(\mathbf{k}) = \frac{\epsilon_{1}(\mathbf{k})+\epsilon_{2}(\mathbf{k})}{2} \\ &\ \ \ \ \pm\sqrt{\left(\frac{\epsilon_{1}(\mathbf{k})-\epsilon_{2}(\mathbf{k})}{2}\right)^2+C^{2}(k_{+}k_{-})^{2}+D(k_{z})^{2}k_{+}k_{-}}. \end{aligned} \end{equation} By fitting the energy spectrum of the effective Hamiltonian to that of the \emph{ab initio} calculation, the parameters in the effective model can be determined. For the type-\uppercase\expandafter{\romannumeral4} Dirac fermions in SrAgBi, our fitting leads to $A_{1}=-344.688\ \mathrm{eV\AA}^{2}$, $B_{1}=-1.45388\ \mathrm{eV\AA}$, $A_{2}=238.175\ \mathrm{eV\AA}^2$, $B_{2}=0.437982\ \mathrm{eV\AA}$, $C=-0.009549\ \mathrm{eV\AA}^2$, $D_{0}=0.000635\ \mathrm{eV\AA}$, $D_{1}=44.7332\ \mathrm{eV\AA}^2$. The band at the energy of the type-\uppercase\expandafter{\romannumeral4} DP in the $k_y{-}k_z$ plane and the corresponding Fermi surface are shown in Fig. \ref{type 4}. One can find that the Fermi surface resembles two parabolas touching at the type-\uppercase\expandafter{\romannumeral4} Dirac point.
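As a consistency check (a numerical sketch; the evaluation point $\mathbf{k}$ is an arbitrary choice), the closed-form eigenvalues can be compared with direct diagonalization of the $4\times4$ Hamiltonian using the fitted parameters. Each branch should appear twice, reflecting the Kramers degeneracy.

```python
import numpy as np

# fitted parameters from the text (eV, Angstrom units)
A1, B1 = -344.688, -1.45388
A2, B2 = 238.175, 0.437982
C, D0, D1 = -0.009549, 0.000635, 44.7332

def H(kx, ky, kz):
    kp, km = kx + 1j * ky, kx - 1j * ky
    e1 = A1 * (kx**2 + ky**2) + B1 * kz
    e2 = A2 * (kx**2 + ky**2) + B2 * kz
    D = D0 + D1 * kz
    return np.array([[e1, 0, C * kp**2, D * kp],
                     [0, e1, -D * km, C * km**2],
                     [C * km**2, -D * kp, e2, 0],
                     [D * km, C * kp**2, 0, e2]])

kx, ky, kz = 0.03, -0.02, 0.05
numeric = np.linalg.eigvalsh(H(kx, ky, kz))     # Hermitian diagonalization

k2 = kx**2 + ky**2                               # k+ k- = kx^2 + ky^2
e1, e2 = A1 * k2 + B1 * kz, A2 * k2 + B2 * kz
D = D0 + D1 * kz
root = np.sqrt(((e1 - e2) / 2) ** 2 + C**2 * k2**2 + D**2 * k2)
analytic = np.sort([(e1 + e2) / 2 - root] * 2 + [(e1 + e2) / 2 + root] * 2)

print(np.allclose(numeric, analytic))            # doubly degenerate branches
```

The agreement follows because the off-diagonal block $Q$ satisfies $QQ^\dag = \left(C^2(k_+k_-)^2 + D^2 k_+k_-\right)\mathbb{1}$.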
\begin{figure}[h] \centering \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3.5cm]{type4band} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=3.5cm]{type4fs} \end{minipage} } \centering \caption{(a) Dispersion around the type-\uppercase\expandafter{\romannumeral4} DP from the $k{\cdot}p$ model fitting. (b) Fermi surface of the type-\uppercase\expandafter{\romannumeral4} DSM.} \label{type 4} \end{figure} \subsection{Coexistence of type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} Dirac fermions} We now illustrate the coexistence of the type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} DSM phases in a simplified model. Consider two pairs of DPs located on $\Gamma$-A in the BZ. Here, $\Gamma$-A is the principal rotation axis, which offers the symmetry protection needed for the DPs~\cite{yang2014classification}. The nontrivial band inversion of SrAgBi is determined by the low-energy bands along $\Gamma$-A, while the bands elsewhere are far from the Fermi level. To describe SrAgBi, we need an eight-band model, because of the following considerations. (\romannumeral1) There are two pairs of bands near the Fermi level. (\romannumeral2) One band crosses the other two bands, which have different symmetry characters, to form the DPs. (\romannumeral3) Each band should be Kramers degenerate in the presence of both time-reversal ($\mathcal T$) and inversion ($\mathcal P$) symmetry.
For the reasons above, we consider the following model defined around $\Gamma$-A~\cite{zhu2019composite}: \begin{equation}\label{eq:eight band kp model} \mathcal{H}_{\mathrm{eff}} = \left[ \begin{array}{cc} H_{\uparrow \uparrow} & \mathbf{0} \\ \mathbf{0} & H_{\downarrow \downarrow} \end{array} \right ], \end{equation} where $H_{\uparrow \uparrow}$ and $H_{\downarrow \downarrow}$ are $4 \times 4$ matrices with \begin{equation} H_{\uparrow \uparrow} = \left[ \begin{array}{cccc} M_{1} & B_{1}\mathrm{cos}\frac{k_z}{2} & 0 & Ak_{+} \\ B_{1}\mathrm{cos}\frac{k_z}{2} & M_1 & Ak_{+} & 0 \\ 0 & Ak_{-} & M_2 & B_{2}\mathrm{cos}\frac{k_z}{2} \\ Ak_{-} & 0 & B_{2}\mathrm{cos}\frac{k_z}{2} & M_2 \end{array} \right ] \end{equation} and $H_{\downarrow \downarrow} = H_{\uparrow \uparrow}^{\ast}$. Here, $k_{\pm} = k_x \pm ik_y$ and $A$, $B_i$, and $M_i$ $(i = 1, 2)$ are the parameters of the model. On the $\Gamma$-A path, on which $k_{\pm} = 0$, one obtains the following Kramers-degenerate spectrum: \begin{equation} \varepsilon_{i,\pm}(k_z) = M_i \pm B_i\mathrm{cos}\frac{k_z}{2}. \end{equation} At $\Gamma$ and A, the band energies respectively are \begin{equation} \varepsilon_{i,\pm}^{\Gamma} = M_i \pm B_i, \qquad\varepsilon_{i,\pm}^{A} = M_i. \end{equation} \begin{figure}[htp] \centering \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=4cm]{kpmodel0} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.46\linewidth} \centering \includegraphics[width=4cm]{kpmodel1} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.8\linewidth} \centering \includegraphics[width=6cm]{kpmodel2} \end{minipage} } \centering \caption{Three types of phases with distinct band ordering along the $\Gamma$-$A$ path, as described by Eq. (\ref{eq:eight band kp model}).
(c) is consistent with the bands of SrAgBi along $\Gamma$-$A$ near the Fermi energy.} \label{eight band kp pic} \end{figure} By tuning $M_i$ and $B_i$, the band structures of equation (\ref{eq:eight band kp model}) fall into three distinct cases, as shown in Fig. \ref{eight band kp pic}. We assume that in the atomic limit the pair $\varepsilon_{1,\pm}$ lies energetically above $\varepsilon_{2,\pm}$, and consider the bands at half filling. The case in Fig. \ref{eight band kp pic}(a) is a trivial insulating phase adiabatically connected to the atomic limit. For the case in Fig. \ref{eight band kp pic}(b), there is only one band inversion at $\Gamma$, leading to the formation of DPs. For the case in Fig. \ref{eight band kp pic}(c), there are two band inversions at $\Gamma$; this situation is consistent with SrAgBi. \begin{figure}[htp] \centering \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=3.0cm]{tbmodelcell} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=4.0cm]{BZtbmodel} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=6cm]{bandtbmodel} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=6.0cm]{bandtbmodel3D} \end{minipage} } \centering \caption{(a) 3D lattice model. The arrows indicate two interlayer hopping processes. (b) The bulk BZ of this model. (c) Band structure of the model. (d) 3D band structure on the ${k_y}-{k_z}$ plane; we have used $\epsilon_a = -1.1083$ eV, $\epsilon_b = 0.1087$ eV, $t_1 = 0.2533$ eV, $t_2^a=0.0166$ eV, $t_2^b = -0.0766$ eV, $t_3^a = -0.3950$ eV, $t_3^b = -0.1030$ eV.} \label{tbmodel pic} \end{figure} The above analysis provides an intuitive picture of the band inversion and band crossing along $\Gamma$-A of SrAgBi, yet the symmetry protections cannot be captured within the simplified model.
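The Kramers-degenerate spectrum $\varepsilon_{i,\pm}(k_z) = M_i \pm B_i\cos\frac{k_z}{2}$ on the $\Gamma$-A path can be verified numerically for the $H_{\uparrow\uparrow}$ block. The sketch below uses arbitrary illustrative parameter values (not fitted to SrAgBi).

```python
import numpy as np

M1, M2, B1, B2, A = 0.5, -0.3, 0.8, 0.6, 1.0   # illustrative parameters

def H_up(kx, ky, kz):
    kp, km = kx + 1j * ky, kx - 1j * ky
    c = np.cos(kz / 2)
    return np.array([[M1, B1 * c, 0, A * kp],
                     [B1 * c, M1, A * kp, 0],
                     [0, A * km, M2, B2 * c],
                     [A * km, 0, B2 * c, M2]])

kz = 0.7  # any point on the Gamma-A path, where k+ = k- = 0
numeric = np.linalg.eigvalsh(H_up(0.0, 0.0, kz))
c = np.cos(kz / 2)
analytic = np.sort([M1 - B1 * c, M1 + B1 * c, M2 - B2 * c, M2 + B2 * c])
print(np.allclose(numeric, analytic))
```

At $k_\pm = 0$ the matrix is block diagonal in the $(1,2)$ and $(3,4)$ subspaces, so the eigenvalues $M_i \pm B_i\cos\frac{k_z}{2}$ follow immediately; the numerical check confirms this.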
To fully characterize the type-\uppercase\expandafter{\romannumeral4} DSM phase, we extend the simplified equation (\ref{eq:eight band kp model}) to a tight-binding model. SrAgBi has three important symmetries in addition to $\mathcal{T}$ and $\mathcal{P}$: a sixfold screw rotation ${\tilde C_6} : (x,y,z) \rightarrow (x/2-\sqrt{3}y/2,\sqrt{3}x/2+y/2,z+1/2)$, a horizontal mirror $M_z : (x,y,z) \rightarrow (x,y,-z+1/2)$, and a vertical glide mirror ${\tilde M_y} : (x,y,z) \rightarrow (x,-y,z+1/2)$. According to the first-principles calculations, we find that almost all of the electronic states near the Fermi energy come from the Ag $d$ orbitals and the Bi $p$ orbitals. We therefore consider a 3D lattice consisting of 2D honeycomb layers stacked along $z$, as sketched in Fig. \ref{tbmodel pic}(a). For each layer, the $A$ and $B$ sites are occupied by two different types of atoms, Ag and Bi, respectively. Each unit cell contains two layers, between which $A$ and $B$ are switched. We assume that each site has two basis orbitals forming a Kramers pair: $\vert p_{+},\uparrow\rangle$ and $\vert p_{-},\downarrow\rangle$ on $A$, whereas $\vert d_{+2},\uparrow\rangle$ and $\vert d_{-2},\downarrow\rangle$ on $B$, where $p_{\pm} = p_x \pm ip_y$ and $d_{\pm 2} = d_{x^2-y^2} \pm 2id_{xy}$. Based on these, we use the following tight-binding model~\cite{zhu2019composite}: \begin{equation}\label{eq:eightband tight binding model} \begin{aligned} \mathcal{H}&=\sum_{\alpha,i}(\epsilon_{a}a_{\alpha,i}^{\dag}a_{\alpha,i}+\epsilon_{b}b_{\alpha,i}^{\dag}b_{\alpha,i})\\ &\quad +\sum_{\alpha,i,m}t_1(-1)^{\alpha}(a_{\alpha,i+\mathbf{R}_m}^{\dag}\sigma_{z}e^{i(2m-1)\pi/3}b_{\alpha,i}+\mathbf{H.c.}) \\ &\quad +\sum_{\alpha,i,n}(t_{2}^{a}a_{\alpha,i+\mathbf{R}_n^{'}}^{\dag}a_{\alpha,i}+t_{2}^{b}b_{\alpha,i+\mathbf{R}_n^{'}}^{\dag}b_{\alpha,i})\\ &\quad +\sum_{i,m}(t_{3}^{a}a_{0,i+\mathbf{R}_m}^{\dag}a_{1,i}+t_{3}^{b}b_{1,i+\mathbf{R}_m}^{\dag}b_{0,i}+\mathbf{H.c.}).
\end{aligned} \end{equation} \begin{figure}[htp] \centering \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=4.5cm,height=4cm]{fstbmodel1} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.9\linewidth} \centering \includegraphics[width=4.5cm]{fstbmodel2} \end{minipage} } \centering \caption{(a) ${k_y}-{k_z}$ and (b) ${k_x}-{k_y}$ plane bulk-state equal-energy contours.} \label{tbmodel pocket} \end{figure} Here, $a^{\dag}=(a_{\vert p_{+},\uparrow\rangle}^{\dag},a_{\vert p_{-},\downarrow\rangle}^{\dag})$ and $b^{\dag}=(b_{\vert d_{+2},\uparrow\rangle}^{\dag},b_{\vert d_{-2},\downarrow\rangle}^{\dag})$ are the electron creation operators, $\alpha = 0, 1$ label the two layers in a unit cell, $i$ labels the sites within a layer, $\mathbf{R}_m$ $(m = 1, 2, 3)$ correspond to the vectors connecting to the three nearest neighbors in a layer, $\mathbf{R}_n^{'}$ $(n = 1,...,6)$ correspond to the vectors connecting to the six next-nearest neighbors in a layer, $\epsilon_a$ and $\epsilon_b$ are the on-site energies, and the $t$'s are various hopping amplitudes. In model (\ref{eq:eightband tight binding model}), the first term represents an on-site energy difference, and the second and third terms are hoppings within a honeycomb layer. The last term represents the strongest interlayer hopping. This model retains the main symmetries of SrAgBi. If the Sr atoms in SrAgBi were removed, the crystal lattice would become identical to the lattice for this tight-binding model. Figure \ref{tbmodel pic}(c) shows the coexistence of the type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} DSM phases in the tight-binding model (\ref{eq:eightband tight binding model}), and the low-energy physics resembles that in Fig. \ref{bands and pocket}(b). One can see that there is a part of the valence band (P1-P2) along $\Gamma$-K in Fig.
\ref{tbmodel pic}(c), which is higher than the Fermi level; this is crucial for the formation of the type-\uppercase\expandafter{\romannumeral4} DPs. The band near the type-\uppercase\expandafter{\romannumeral4} DP (D1) is shown in Fig. \ref{tbmodel pic}(d), which is consistent with Fig. \ref{bands and pocket}(d). Figures \ref{tbmodel pocket}(a) and \ref{tbmodel pocket}(b) show the bulk-state equal-energy contours in the ${k_y}$-${k_z}$ and $k_z = 0$ planes; there is an electron pocket inside a hole pocket, and the two pockets touch at the type-\uppercase\expandafter{\romannumeral4} DPs. Figure \ref{tbmodel pocket}(a) is similar to Fig. \ref{2d pocket 6}(b), which confirms that the model we chose is effective in describing SrAgBi. \section{Conclusion} Using first-principles calculations with SOC, we propose that type-\uppercase\expandafter{\romannumeral4} Dirac fermions exist in the bulk states of SrAgBi. Distinct from type-\uppercase\expandafter{\romannumeral1}, type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral3} Dirac semimetals such as Na$_3$Bi~\cite{wang2012dirac}, VGa$_3$~\cite{chang2017type} and Zn$_2$In$_2$S$_5$~\cite{huang2017black}, SrAgBi has nonlinear bands near the type-\uppercase\expandafter{\romannumeral4} DPs. The $k{\cdot}p$ model shows that the type-\uppercase\expandafter{\romannumeral4} Dirac fermion is a kind of higher-order Dirac fermion~\cite{wu2020higher}. Because there is a type-\uppercase\expandafter{\romannumeral2} DP near the type-\uppercase\expandafter{\romannumeral4} DP, SrAgBi provides a platform for exploring the interplay between type-\uppercase\expandafter{\romannumeral2} and type-\uppercase\expandafter{\romannumeral4} Dirac fermions; we used two models to describe the coexistence of these two kinds of Dirac fermions. The topological surface states associated with these two DPs are also calculated.
The bulk DPs and the surface states can be experimentally probed by angle-resolved photoemission spectroscopy (ARPES)~\cite{liu2014discovery,xu2015discovery}. In addition, previous studies have pointed out that, by doping with other atoms, a DP in this type of material can be split into two triple points~\cite{mondal2018broken,mardanya2019prediction}, which suggests a direction for future work. \section*{References} \bibliographystyle{unsrt}
\section{\bf 1. Introduction} In earlier works, [\putref{dowgjmsren}], I discussed the construction of the (one loop) effective actions for higher derivative scalars and spinors using the spherical product forms of the GJMS--type propagation operators (kinetic operators) for all even sphere dimensions. Working in even dimensions, the interest was in calculating the coefficient of the logarithmic (or divergent) term in the effective action (`free energy'). Technically this amounted to evaluating the relevant $\zeta$--function\ at 0. For $p$--forms at that time, [\putref{dowCTpform}], I was able only to discuss second order operators owing to the lack of (knowledge of) a conformal higher derivative GJMS--type operator. I now propose to take the Branson--Gover operators as the appropriate ones. The genesis of these operators lies in papers by Branson, [\putref{Branson}], and was developed by Branson and Gover, [\putref{BandG}], but for convenience I will refer to Fischmann and Somberg, [\putref{FandS}], who have a combinatorial approach and a useful set of references. The mathematical motivation comes from conformal differential geometry, which has been subject to very substantial development. I will, therefore, simply posit the relevant formulae and proceed to a workaday calculation of the `conformal anomaly'. I would also wish to discuss the formal functional determinant but I leave this for another time. I calculate coexact form quantities and combine them to give the unconstrained form ones. These I assemble into a `gauge invariant' combination, although I give no detailed field theoretic justification of this. I will, throughout, refer to a conformal anomaly and an effective action as shorthand terminology for $\sim\zeta(0)$ and $\sim\zeta'(0)$ of the field in question. I work on a $q$--conically deformed sphere, which is the $d$--dimensional periodic spherical $q$--lune described in [\putref{dowgjmsren}] to which I refer for basic explanation. \section{\bf2.
The operators} The construction of the Branson--Gover higher--derivative, conformally covariant operators on a general manifold is somewhat complicated and need not detain us. However, on conformal Einstein manifolds they have the remarkable property of being factorisable. For spheres, this has been known for a long time, [\putref{Branson}]. Perhaps the reason why the operators have not appeared much in the physics literature is that the $\delta d$ and $d\delta$ parts are weighted differently, which seems unnaturally different from the de Rham operator but which is necessary for conformal covariance, [\putref{Branson}]. Special amongst the operators are the `critical' ones, where the derivative order, $2k$, of the propagation operator is related to the form order, $p$, and manifold dimension $d$ by $$ 2k=d-2p $$ so that when $k=1$ (the usual case), $p=d/2-1$, the conformal value. In odd dimensions operators exist for all values of the derivative order. The general operator factorises on Einstein manifolds, at least, but I consider here only the sphere S$^d$ for even $d$. Although spectral resolutions exist for the other manifolds, the sphere will be quite enough to begin with. Several expressions are given in [\putref{FandS}] \footnotecheck\defaultoption[]\@footnote{ Note the ArXiv and published versions differ in format.} in Theorems 4.3 and 4.4 corresponding to various restrictions on the parameters. For example, defining $\beta=d/2-p$, an expression is given in equn.(4.7) (or (38)) in [\putref{FandS}] which, for even $d$, holds only in the non--supercritical case, {\it i.e. }\ $\beta>k-1$. I copy it here, choosing a unit $d$--sphere, $$ L^{(p)}_{2k}=\beta\prod_{j=0}^{k-1}\bigg[{\beta+j+1\over \beta+j}\delta d +{\beta-j-1\over\beta-j} d\delta +\big({\beta}-j-1\big)\big({\beta}+j+1\big)\bigg]\,. \eql{genop} $$ From this, one can extract the part which acts on the range of $\delta, {\cal R}(\delta)$, {\it i.e. 
} on coexact forms, $$\eqalign{ L^{CE(p)}_{2k}&=\big({\beta}+k\big)\prod_{j=0}^{k-1}\bigg[\delta d +\big(\beta-{1\over2}\big)^2 - (j+1/2)^2\bigg]\cr &=\big({\beta}+k\big)\prod_{j=0}^{k-1}\big(B^2-(j+1/2)^2\big) \equiv\big({\beta}+k\big) \prod_0^{k-1}\big(B^2-\alpha_j^2\big)\cr &=\big({\beta}+k\big)\prod_0^{k-1} \big(B-\alpha_j\big)\big(B+\alpha_j\big)\,, \quad k=1,2,\ldots\,, } \eql{OmegaC} $$ where the pseudo--operator $B$ is defined by, \footnotecheck\defaultoption[]\@footnote{ $\delta d$ is taken to be positive.}, $$ B=\sqrt{\delta d +\alpha^2(a,p)}\,, $$ with $$ \alpha(a,p)=\beta-1/2=a-p\,,\quad\quad a\equiv (d-1)/2\,. $$ Likewise, acting on ${\cal R}(d)$, exact forms, the operator turns into $$\eqalign{ L^{E(p)}_{2k}&= \big({\beta}-k\big)\prod_{j=0}^{k-1}\bigg[d\delta +\big(\beta+{1\over2}\big)^2 - \alpha_j^2\bigg]\cr &=\big({\beta}-k\big)\prod_0^{k-1}\big(C^2-\alpha_j^2\big)\cr &=\big({\beta}-k\big)\prod_0^{k-1} \big(C-\alpha_j\big)\big(C+\alpha_j\big)\,, \quad k=1,2,\ldots\,, } \eql{OmegaE} $$ with $ C=\sqrt{d\delta+(\beta+1/2)^2} $. These expressions are given in Branson, [\putref{Branson}], Remark (3.30) and I will refer to them as the Branson operators. When $\beta=k$, the exact part disappears {\it algebraically} from (\puteqn{genop}) leaving just the coexact one. Since we are just at the lower limit of the condition $\beta>k-1$, the operators have been termed `critical', $$ L^{(p)}_{d-2p}=L^{CE(p)}_{d-2p}\,, $$ and could be considered as generalisations of the four dimensional Maxwell operator, $\delta d$. The critical exact operator is the Hodge star dual of the coexact one ($p\to d-p$). Critical operators possess important factorisations involving the $Q$ curvature and the gauge companion, $G$. 
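On a coexact eigenform one may set $d\delta\to 0$ and $\delta d\to\lambda$, so the collapse of the general operator (\puteqn{genop}) onto the coexact Branson operator (\puteqn{OmegaC}) is a finite algebraic identity, checkable exactly in rational arithmetic. The following sketch (not part of the original derivation; beta, k and lam stand for $\beta$, $k$ and the $\delta d$ eigenvalue) verifies it for a few sample values:

```python
from fractions import Fraction as F

def genop_coexact(beta, k, lam):
    """(genop) acting on a coexact eigenform: d*delta -> 0, delta*d -> lam."""
    out = F(beta)
    for j in range(k):
        out *= F(beta + j + 1, beta + j) * lam + (beta - j - 1) * (beta + j + 1)
    return out

def omega_c(beta, k, lam):
    """(OmegaC): (beta + k) * prod_j [lam + (beta - 1/2)^2 - (j + 1/2)^2]."""
    out = F(beta + k)
    for j in range(k):
        out *= lam + F(2 * beta - 1, 2) ** 2 - F(2 * j + 1, 2) ** 2
    return out

for beta in (1, 2, 5):
    for k in (1, 2, 3):
        for lam in (F(3), F(7, 2)):
            assert genop_coexact(beta, k, lam) == omega_c(beta, k, lam)
```

The prefactor works by telescoping, $\beta\prod_j(\beta+j+1)/(\beta+j)=\beta+k$, and each bracket uses $(\beta-j-1)(\beta+j)=(\beta-1/2)^2-(j+1/2)^2$.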
It is easily seen from (\puteqn{OmegaC}) that $$\eqalign{ L^{(p)}_{2k}&=2k\,\delta\circ\prod_{j=0}^{k-2}\bigg[d\delta +\big(\beta-{1\over2}\big)^2- (j+1/2)^2\bigg]\circ d\,,\quad 2k=d-2p\cr &\equiv \delta\circ Q^{(p+1)}_{k-2}\circ d\cr &\equiv \delta \circ G^{(p+1)}_{k-1}\,. } \eql{Fact} $$ I do not use these. When restricted to ${\cal R}(d)$ or to ${\cal R}(\delta)$, the remaining operators in Theorem 4.4 also reduce to the Branson operators, (\puteqn{OmegaC}) and (\puteqn{OmegaE}), which can therefore be taken as valid for {\it all} $\beta$. In addition, there is the further condition that $k$ must be less than $d/2$. This is the point at which a zero eigenvalue first appears (for even $d$), {\it i.e. }\ the same limitation as in the {\it scalar} GJMS case. This is because the eigenvalues are independent of the form order, $p$. (Only the degeneracy depends on $p$.) For this reason operators with $k=d/2$ could also be deemed even more `critical'. For odd $d$, $k$ can be continued beyond $d/2$. Most mathematical activity is concerned with even dimensional manifolds and I concentrate on these here too. \section{\bf 3. The geometry} The simplest course would be to take the full sphere as the manifold but, in order ultimately to discuss R\'enyi and entanglement entropies, I will, as in the earlier works, take the $q$--deformed sphere. More precisely, I take a $q$--lune which doubles up to make a periodic $q$--deformed sphere. The complete mode set is then obtained by combining those for absolute and relative boundary conditions on the boundary of the lune. When $q=1$, the lune is a hemisphere and the doubling gives the full round sphere. \section{\bf 4. The calculation} The immediate aim is to calculate various coexact spectral invariants for the critical operators. I will embed these in the more general coexact ones, (\puteqn{OmegaC}), so that the analysis in [\putref{dowCTpform}] can be drawn upon.
I will assume that, because of the Hodge decomposition, ${\cal R}(\delta)\oplus{\cal R}(d)\oplus {\cal N}(d)\cap{\cal N}(\delta)$, the coexact quantities are spectrally sufficient. Then, for example, the total $p$--form conformal anomaly is the combination,\footnotecheck\defaultoption[]\@footnote{ This combination would ensure Hodge star duality in $d$ dimensions for the anomaly. For the zeta function, the scaling coefficients would have to be inserted.} $$ \zeta_{tot}(0,p)=\zeta^{CE}(0,p)+\zeta^{CE}(0,p-1)\,, \eql{totcomb} $$ since one still has, $$ \zeta^{E}(0,p)=\zeta^{CE}(0,p-1)\,, $$ although there would be scaling considerations for the determinant. It should be noted that the system here analysed is not quite the same as that in [\putref{dowCTpform}], for which the relevant operator is $\delta d$ for all allowed dimensions and form orders. The second order Branson operator, $ \sim \delta d +\alpha^2-1/4 $, depends explicitly on the form order and manifold dimension. It could be looked upon as a `conformally improved' de Rham Laplacian because, for scalars (0--forms), it is the Penrose--Yamabe operator, while $\delta d$ is the minimal one. There is, however, agreement for critical forms, and their duals.
Since I am presently interested only in the conformal anomaly, the overall normalisation, $(\beta+k)$, of the operator (\puteqn{OmegaC}) plays no role and I will ignore it.\footnotecheck\defaultoption[]\@footnote{ This would not strictly be so for the functional determinants but can be allowed for as a change of scale involving the conformal anomaly and hence absorbed by renormalisation.} The computation is given in [\putref{dowCTpform, dowgjmsren}] but I develop a few essentials here.\footnotecheck\defaultoption[]\@footnote{ The general trend and much of the detail of the calculation are valid for even and odd dimensions but I break up the analysis and just consider even dimensions here.} The important fact for the furtherance of the analysis is that $B$ and $C$ have linear eigenvalues, $$ a +1+m\,,\quad m=0,1,2,\ldots \,, $$ which can usefully be organised into an associated $\zeta$--function, $$ \zeta(s,a,p,q)\equiv\sum_{m=0}^{\infty} {d(m)\over (a+1+m)^s}\,, \eql{auxzf} $$ where $d(m)$ is the degeneracy of the sphere eigenlevel labelled by $m$ and depends on the boundary conditions and $p$, $d$ and $q$. The exact degeneracy can be obtained from the coexact one by the replacement $p\to p-1$. The degeneracy is best encoded in a generating function which acts as a square--root `heat kernel' and allows the construction of the auxiliary $\zeta$--function, (\puteqn{auxzf}), by Mellin transform {\it directly}. The details are given in [\putref{dowCTpform}] and result in a linear combination of Barnes $\zeta$--functions, $\zeta_d$, $$\eqalign{ \zeta^{CE}_a(s,a,p,q) &=(-1)^{p+1}\!\!\sum_{r=p+1}^d(-1)^r\bigg[ \comb{d-1}r\zeta_d(s,a-p+r\mid\bom)\cr &\hspace{****************}+ \comb{d-1}{r-1}\,\zeta_d(s,a-p+r+q-1\mid\bom)\bigg]\,, } \eql{zet4} $$ where the vector, $\bom$, stands for the $d$--dimensional set $\bom=(q,1,\ldots,1)$. Equation (\puteqn{zet4}) is for absolute boundary conditions.
Coexact duality gives the relative case, $$ \zeta^{CE}_r(s,a,p,q)=\zeta^{CE}_a(s,a,d-1-p,q)\,, $$ which has to be added to the absolute expression in order to get the coexact $p$--form value on the {\it periodic} (double) lune, which is the sphere when $q=1$. One can also subtract them and expect to get a quantity defined on the lune boundary, a ($d-1)$--sphere. The expressions for more general spherical factorings have been given earlier, under a different name, in [\putref{dowpform1}] equns (23) and (24). In the case of the full sphere, formulae and comments can also be found in [\putref{DandKii}]. The importance of $\zeta(s,a)$ is that it is thus readily evaluated, and that the derivatives of the full $\zeta$--function\ of $L$ at $s=0$ can be given in terms of it, as I now briefly outline. The eigenproblem for $L^{CE(p)}_{2k}$ is solved by that for $B$ and its $\zeta$--function\ is given by\footnotecheck\defaultoption[]\@footnote{ The factors of ${\beta\pm k}$ in (\puteqn{OmegaC}), (\puteqn{OmegaE}) have now been discarded.}, $$ Z(s,k,p,d)=\sum_{m=0}^\infty\prod_{j=0}^{k-1}{d(m) \over[(a+1+\alpha_j+m)(a+1-\alpha_j+m)]^s}\,. $$ A formal expansion, [\putref{Dowcmp}], in the $\alpha_j$ allows one to find the value at zero as the average $$ Z(0,k,p,d)={1\over 2k}\sum_{j=0}^{k-1}\bigg( \zeta(0,a+\alpha_j)+\zeta(0,a-\alpha_j)\bigg)\,, \eql{confanom} $$ and the derivative at 0 as the `corrected' sum, $$ Z'(0,k,p,d)=\sum_{j=0}^{k-1}\bigg(\zeta'(0,a+\alpha_j)+\zeta'(0,a-\alpha_j)\bigg)\,+\, M(k)\,, \eql{effact} $$ where the polynomial $M(k)$ could be termed a multiplicative anomaly, but is really part and parcel of the evaluation. \footnotecheck\defaultoption[]\@footnote{ I have simplified the notation a little. The $\zeta$--function\ on the right is that displayed in (\puteqn{zet4}) omitting `$p,q$'.} These are my calculational expressions for the coexact `conformal anomaly' and `effective action' (up to divergences) in even dimensions. 
Apart from $M$, they have the form of a linear sum over spectral quantities associated with each linear factor in the product operator $L^{CE (p)}_{2k}$, (\puteqn{OmegaC}). Importantly, the GJMS sum over the $\alpha_j$ parameters in (\puteqn{OmegaC}) can be transferred to the Barnes $\zeta$--functions\ making up the auxiliary $\zeta$--function, $\zeta(s,a,p,q)$. It is instructive and computationally quickening to make the effect of this geometric sum explicit at this stage by the relation, [\putref{dowgjmsren}], $$\eqalign{ &\sum_{j=0}^{k-1}\big(\zeta_d(s,a+j+1/2\mid{\bom}) +\zeta_d(s,a-j-1/2\mid{\bom})\big)\cr &\hspace{****}=\zeta_{d+1}(s,a+1/2-k\mid \bom,1)-\zeta_{d+1}(s,a+1/2+k\mid\bom,1)\,, } \eql{holog} $$ which converts the sum of $k$ $d$--dimensional quantities into the {\it difference} of two $(d+1)$--dimensional quantities in a holographic fashion. Note that on the left of (\puteqn{holog}), $k$ is an integer while the right allows an extension off the integers. In particular, one can easily differentiate with respect to $k$. \section{\bf 5. 
The conformal anomaly} Doing the GJMS sum produces for the (absolute) coexact conformal anomaly on the $q$--lune, (\puteqn{confanom}) using (\puteqn{zet4}), $$\eqalign{ {k}\,Z_a(0,k,p,d,q)&\equiv C_a(p,d,q,k)\cr &={(-1)^{p+1}\over2}\!\!\sum_{r=p+1}^d(-1)^r\bigg[ \comb{d-1}r\zeta_{d+1}(0,\alpha+1/2+k+r\mid\bom,1)\cr &\hspace{****}+ \comb{d-1}{r-1}\,\zeta_{d+1}(0,\alpha-1/2+k+r+q\mid\bom,1)\bigg]\cr &\hspace{******}-(k\to -k)\cr &={(-1)^{p+d}\over2(d+1)!q}\sum_{r=p+1}^d(-1)^r\!\bigg[ \comb{d-1}r \big[B^{(d+1)}_{d+1}(\alpha+1/2+k+r|\bom,1)\cr &\hspace{*************}-B^{(d+1)}_{d+1}(\alpha+1/2-k+r\mid\bom,1)\big]\cr &\hspace{**********}+ \comb{d-1}{r-1}\,\big[B^{(d+1)}_{d+1}(d/2+1+p-r-k\mid\bom,1)\cr &\hspace{**************}-B^{(d+1)}_{d+1}(d/2+1+p-r+k\mid\bom,1)\big]\bigg]\,,\cr } \eql{zet41} $$ in terms of generalised Bernoulli functions\footnotecheck\defaultoption[]\@footnote{ I have applied a transformation to the arguments of the last two in order to simplify the $q$--dependence.}, which are easily evaluated as polynomials in $q$ and $k$, and so can be extended off the integers. Particular examples are given in the next section. \section{\bf 6. Some results} The results are in much the same form as those displayed in [\putref{dowgjmsren}]. I first present some examples for the absolute coexact conformal anomaly on the single lune, (\puteqn{zet41}), as bi--polynomials in $k$ and $q$ for given $p$ and $d$.
$$\eqalign{ C_a(1,4,q,k)&=-{k\over240 q}\big(q^4+10(1-k^2)q^2 +40(k^2-6)q-6k^4+20k^2-11\big)\cr C_a(2,4,q,k)&=-{k\over240q}\big(q^4+10(1-k^2)q^2+40k^2q-6k^4+20k^2-11\big)\cr C_a(1,6,q,k)&={k\over6048q}\big(10q^6+35(3-2k^2)q^4+210(k^2-1)(k^2-4)q^2\cr &\hspace{***}-168(30k^4-5k^2-36)q+60k^6-630k^4+1680k^2-955\big)\cr C_a(2,6,q,k)&={k\over30240q}(10q^6+35(3-2k^2)q^4+210(k^2-1)(k^2-4)q^2\cr &\hspace{***}-252(3k^4-25k^2+12)q+60k^6-630k^4+1680k^2-955\big)\,.\cr } \eql{calunea} $$ The relative value can be obtained using Hodge coexact duality, that is by sending $p\to d-1-p$. Added to the absolute value it gives the (coexact) conformal anomaly on the doubled lune, {\it i.e. }\ on the $q$--deformed sphere. Subtraction yields, $$ C_a(p,d,q,k)-C_r(p,d,q,k)=(-1)^{d/2}\,k\,. $$ That this is independent of $q$ reflects the fact that it is a quantity associated with the boundary of the $q$--lune, which is a full $(d-1)$-sphere. Although not immediately enlightening, the formulae (\puteqn{calunea}) do reveal the important fact that the coefficient of $q$ vanishes when $k$ is an allowed value, $k<d/2$. This is a consequence of conformal invariance. It is also the case for scalars. On the full sphere some values for $C^{CE}=C_a+C_r$ are, $$\eqalign{ C^{CE}(1,4,1,k)&={k\over60}(3k^4-25k^2+60)\cr C^{CE}(2,6,1,k)&={k\over3780}(15k^6-294k^4+1715k^2-3780)\cr C^{CE}(3,8,1,k)&= {k\over181440}(35k^8-1350k^6+17199k^4-86100k^2+181440)\,. } \eql{casph} $$ As a non-trivial check, the corresponding results for the scalar $0$--form agree with those in [\putref{dowgjmsren}].\footnotecheck\defaultoption[]\@footnote{ Allowance has then to be made for the zero scalar modes which are `missing' from the coexact spectrum. This amounts to the addition of a term $k$ to $C$ for all $q$. The factor of $k$ arises because the missing mode contributes $1$ to $Z(0)\sim C/k$.} I pass on to the entanglement entropy (its universal part), and the derivative with respect to $q$ taken at the round limit $q=1$.
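The displayed polynomials can be cross-checked against each other. For $d=4$, adding the absolute value to the relative one ($p\to d-1-p$) at $q=1$ should reproduce the sphere value (\puteqn{casph}), while subtracting should give the $q$--independent boundary quantity $(-1)^{d/2}k=k$. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

def Ca(p, q, k):
    """C_a(p, 4, q, k) from (calunea): absolute coexact anomaly on the single q-lune, d = 4."""
    if p == 1:
        poly = q**4 + 10*(1 - k**2)*q**2 + 40*(k**2 - 6)*q - 6*k**4 + 20*k**2 - 11
    else:  # p == 2
        poly = q**4 + 10*(1 - k**2)*q**2 + 40*k**2*q - 6*k**4 + 20*k**2 - 11
    return -k * poly / (240 * q)

# relative value by coexact duality: C_r(p, 4, q, k) = C_a(3 - p, 4, q, k)
for k in range(1, 6):
    k = F(k)
    # sphere (q = 1): C^CE = C_a + C_r reproduces (casph)
    assert Ca(1, F(1), k) + Ca(2, F(1), k) == k * (3*k**4 - 25*k**2 + 60) / 60
    # boundary quantity: C_a - C_r = (-1)^{d/2} k = k, independently of q
    assert Ca(1, F(7, 3), k) - Ca(2, F(7, 3), k) == k
```

The difference check works at an arbitrary rational $q$ because the two brackets in (\puteqn{calunea}) differ only in the term linear in $q$.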
Higher derivatives in the complete field theory would lead onto the central charges. I will later also re--express the sphere coexact conformal anomalies, (\puteqn{casph}), more interestingly. \section{\bf 7. Entanglement entropy. Derivatives.} At this point, it is convenient to introduce the R\'enyi entropy, $S_n$, defined generally by, $$ S_n={nW(1)-W(1/n)\over1-n}\,, \eql{renyi} $$ where $W(q)$ here is the effective action on the periodic $q$--lune. $n=1/q$ is the R\'enyi, or replica, index. $S_1$ is the entanglement entropy\footnotecheck\defaultoption[]\@footnote{ I define the entanglement entropy by the replica trick.} and $S'_1$ determines the central charge, $C_T$, [\putref{Perlmutter}]. In even dimensions, the universal component of $S_n$ is obtained by substituting the value $\zeta(0)$ for the effective action, $W$, in (\puteqn{renyi}). $\zeta(s)$ is the spectral $\zeta$--function\ of the propagating operator on the conically deformed manifold. \begin{ignore} In the situation here, we have all the necessary quantities obtained in the previous sections and a few examples of $\mbox{{\goth\char83}}_q(p,d,k)$ follow from (\puteqn{renyi}) and (\puteqn{zet41}), $$ \eqalign{ \mbox{{\goth\char83}}_q(1,4,k)&={k\over120}\big(q^3+q^2-(10k^2-11)q+30k^2-109\big)\cr \mbox{{\goth\char83}}_q(2,6,k)&={k\over15120}\big(10q^5+10q^4-5(14k^2-13)(q+1)q^2\cr &\hspace{*******}+5(42k^4-224k^2+191)q-546k^4+5180k^2-14165\big) } \eql{renyi2} $$ \end{ignore} At the round sphere ($q=1$), the R\'enyi entropy equals the entanglement entropy and this, as calculation shows, equals (minus) the conformal anomaly, $k\zeta(0)=C$ but {\it only} when $k$ is an integer not above the larger critical limit {\it i.e. }\ $k\le d/2$. An equivalent statement is that the conformal anomaly on the $q$--deformed sphere is an extremum, as $q$ varies, at the full sphere only for these non--supercritical $k$ integers. The same holds for scalar and Dirac fields, [\putref{dowgjmsren}]. 
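The extremum statement can be made concrete for $d=4$ using (\puteqn{calunea}). The only $p$--dependent term in either bracket is linear in $q$; after dividing by $q$ it is constant and so drops out of each derivative, making both $C_a$'s contribute equally. A sketch in exact rational arithmetic (restricted to $d=4$, not a general proof):

```python
from fractions import Fraction as F

def dCCE_dq_at1(k):
    """Exact q-derivative at q = 1 of C^CE(1,4,q,k) = C_a(1,4,q,k) + C_a(2,4,q,k),
    differentiating the Laurent polynomial -k/240 (q^3 + 10(1-k^2) q + const + c0/q)
    term by term; the p-dependent (constant) term drops out, so the two C_a's match."""
    k = F(k)
    c0 = -6 * k**4 + 20 * k**2 - 11          # q^0 coefficient of either bracket
    return 2 * (-k) * (3 + 10 * (1 - k**2) - c0) / 240

assert dCCE_dq_at1(1) == 0          # allowed k < d/2: extremum at the round sphere
assert dCCE_dq_at1(2) == 0          # k = d/2, the 'more critical' value
assert dCCE_dq_at1(3) != 0          # supercritical k: no extremum
```

The derivative vanishes exactly at the non-supercritical integers $k=1,2$ and not at $k=3$, in line with the statement above.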
I give a few details for present circumstances. The derivative with respect to $q$ can be calculated generally from (\puteqn{zet41}) or individually from any particular expression, such as (\puteqn{calunea}). The coexact result at the round sphere is found to be, $$ {\partial\over\partial q} C^{CE}(p,d,q,k)\bigg|_{q=1}={2\over(d+1)!}\comb{d-1} p\big((d/2)^2-k^2\big)\cdots k^2\,. $$ Sending $p\to p-1$ and adding gives the quantity for a free $p$--form (\puteqn{totcomb}), $$ {\partial\over\partial q} C^{tot}(p,d,q,k)\bigg|_{q=1}={2\over(d+1)!}\comb dp \big((d/2)^2-k^2\big)\cdots k^2\,. $$ These show an explicit vanishing only for the allowed values of $k$. This property carries through to any quantity constructed linearly from coexact ones. \section{\bf8. Alternative factorisation} For $k$ integral, as it mostly has been so far, the product can be written as the ratio of two Gamma functions, in a familiar way, $$ L^{CE(p)}_{2k}=\big({\beta}+k\big){\Gamma(B+k+1/2)\over\Gamma(B-k+1/2)}\,, \eql{gammar} $$ which permits continuation in $k$, in particular to $k$ a half integer when the expression again factorises, this time into a pseudo--operator, $$ L^{CE(p)}_{2k}=\big({\beta}+k\big)B\,\prod_{h=1}^l\big(B^2-h^2\big)\,, \quad k\equiv l+1/2\,. $$ Lowest examples are $\sqrt{\delta d}\,$, $\sqrt{\delta d+1}\,\delta d\,$, $ \sqrt{\delta d+4}\,(\delta d+3)\delta d$ and should be considered as analogous to boundary or Neumann--Dirichlet operators. Taking the critical condition $2k=d-2p$ seriously suggests that these pseudo--operators are relevant for coexact forms in odd dimensions. Both factorisations are economically combined in the product, [\putref{Branson}], $$ \big({\beta}+k\big)\prod_{m=1}^{2k}\big(B-k-{1\over2}+m\big)\,, $$ where $k$ is either an integer or a half--integer. The ${\cal R}(d)$ operators can likewise be expressed as $$ \big({\beta}-k\big)\prod_{m=1}^{2k}\big(C-k-{1\over2}+m\big)\,. 
$$ By continuation, conformal anomalies computed from these pseudo--operators would agree with those displayed above, such as (\puteqn{casph}). Furthermore, we can investigate how the spectral quantities vary as $k$ varies, in particular the derivatives with respect to $k$, an example of which follows. \section{\bf9. Plancherel form of the sphere conformal anomaly.} It is productive to consider the $k$--derivative \footnotecheck\defaultoption[]\@footnote{ Compare the AdS/CFT calculations of Diaz and Dorn, [\putref{DandD}].} of the $C$ quantities, {\it e.g.}, $$ {\partial\over\partial k}C^{tot}(p,d,q,k)\,, $$ for the complete (free) $p$--form, (\puteqn{totcomb}), on the full sphere. A slightly involved calculation reveals that this has a product structure which can then be integrated to yield the more congenial expression, ($\beta=d/2-p$), $$ C^{tot}(p,d,1,k)={2(-1)^{d/2}\over d!}\comb d p\int_0^kdt\,{1\over \beta^2 -t^2}\prod_{i=0}^{d/2}(i^2-t^2)\,, \eql{planch} $$ which again exhibits the correct dimension of a free $p$--form and Hodge duality (under which, $\beta\to -\beta$). It agrees with known expressions for $0$--forms, [\putref{Diaz,DowGJMS}]. The integrand is (proportional to) the continuation of the {\it coexact} Plancherel measure for $p$--forms on H$^{d+1}$, the Cartan dual of S$^{d+1}$, [\putref{CandH}].\footnotecheck\defaultoption[]\@footnote{ This integral occurs in the functional relation for a Selberg $\zeta$--function, [\putref{Kurokawa}].} A similar representation holds also for spinors. The Plancherel measure plays a basic role in the hyperbolic cylinder approach, [\putref{DandM}]. Note that the product lacks the terms corresponding to critical forms, $\beta=\pm t$. At these points the derivative takes the value $(-1)^p$ except for $\beta=0$ ({\it i.e. } a zero order operator) when it equals $2(-1)^{d/2}$. Perhaps the representation (\puteqn{planch}) is not unexpected. 
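The representation (\puteqn{planch}) can be checked against the earlier sphere polynomials. For $d=4$, $p=2$ one has $\beta=0$; by coexact duality $C^{CE}(2,4,1,k)=C_a(2)+C_a(1)=C^{CE}(1,4,1,k)$, so (\puteqn{totcomb}) gives $C^{tot}(2,4,1,k)=2\,C^{CE}(1,4,1,k)$ with $C^{CE}(1,4,1,k)$ taken from (\puteqn{casph}). A sketch that integrates the polynomial integrand exactly (valid only when the $i=|\beta|$ factor of the product cancels the denominator, as here):

```python
from fractions import Fraction as F
from math import comb, factorial

def C_tot_planch(p, d, k):
    """Evaluate (planch) for integer k by exact rational integration of the
    polynomial prod_{i=0}^{d/2}(i^2 - t^2) / (beta^2 - t^2), beta = d/2 - p."""
    beta = F(d, 2) - p
    poly = [F(1)]                              # coefficients in powers of t
    for i in range(d // 2 + 1):
        if i == abs(beta):
            continue                           # cancel (i^2 - t^2) against (beta^2 - t^2)
        new = [F(0)] * (len(poly) + 2)         # multiply poly by (i^2 - t^2)
        for n, c in enumerate(poly):
            new[n] += c * i * i
            new[n + 2] -= c
        poly = new
    integral = sum(c * F(k) ** (n + 1) / (n + 1) for n, c in enumerate(poly))
    return F(2 * (-1) ** (d // 2), factorial(d)) * comb(d, p) * integral

for k in range(1, 5):
    # compare with (casph): C^tot(2,4,1,k) = 2 * C^CE(1,4,1,k)
    assert C_tot_planch(2, 4, k) == 2 * F(k) * (3 * k**4 - 25 * k**2 + 60) / 60
```

Hodge duality is manifest in the code as well: $p\to d-p$ sends $\beta\to-\beta$, leaving both the binomial prefactor and the integrand unchanged.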
A stiffer test would be the corresponding evaluation of functional determinants in odd dimensions. This is not attempted here. \section{ \bf 10. The complete field theory $\zeta(0)$} Having the single (coexact) $p$--form quantities, it is possible to assemble a ghosts--for--ghosts sum to get the complete physical, gauge invariant $p$--form free energy for the correct number of degrees of freedom, presumably $\comb {d-2}p$. Assuming a standard Lagrangian formulation, I simply write down the Obukhov construction in terms of the unconstrained quantities. These could be eliminated in favour of just the coexact ones, [\putref{CandA}], but I won't make use of this. (See the Appendix.) The expression is, {\it e.g.}\ [\putref{CandT2}],[\putref{CandA}], $$ {\cal F}(p,q,d,k)=\sum_{l=0}^p(-1)^l(1+l)\mbox{{\goth\char70}} (p-l,q,d,k)\,, \eql{gfg} $$ where ${\cal F}$ and $\mbox{{\goth\char70}}$ stand, generically, for free energies, that is, for a conformal anomaly, or for a functional determinant, or for an interpolation between these, if such is possible. This is my construction of the quantised theory. ${\cal F}$ is the gauge invariant, `physical' free energy constructed out of the `geometric', free--form quantity, $\mbox{{\goth\char70}}$, which is, in the case under discussion here, the conformal anomaly, $C^{tot}$. The manifold could be the $q$--lune with absolute or relative boundary conditions, but I consider just the $q$--deformed sphere for brevity. As a preliminary, simple check, I remark that (\puteqn{gfg}) reproduces the well known conformal scalar anomalies on the sphere. The end results are again rational functions of $q$ and $k$ for given $p$ and $d$. I now display some $q$--sphere conformal anomalies, and make some observations.
Taking the forms in 6 dimensions on the $q$--sphere, as typical, I find, $$\eqalign{ {\cal F}(2,q,6,k)={1\over5040}&k\big(2q^5-7(2k^2-3)q^3+42(k^2-1)(k^2-4)q\big)\cr &-{1\over90}k^3(3k^2+35)+{k\over 5040q}(12k^6-126k^4+336k^2-191)\cr {\cal F}(1,q,6,k)={1\over7560}&k\big(2q^5-7(2k^2-3)q^3+42 (k^2-1) (k^2-4 )q\big)\cr &-{1\over180}k^3 (3k^2+5 )+{k\over7560q}(12 k^6-126 k^4+336 k^2-191)\cr {\cal F}(0,q,6,k)={1\over30240}&\big(k(2q^5-7(2k^2-3)q^3+42(k^2-1)(k^2-4)q\big)\cr &+{k\over30240q}(12k^6-126k^4+336k^2-191)\,. } \eql{6d2} $$ The coefficient of $q$ is, generally, $$ {1\over2}{\partial^2\over\partial q ^2} \big(q\,{\cal F}(p,q,d,k)\big)\bigg|_{q=0} ={1\over6(d-1)!}\comb{d-2}p\prod_{j=0}^{d/2-1}(k^2-j^2)\,, $$ which exhibits the conformal vanishing at allowed $k$ values. One also sees in this particular formula evidence of the expected reduction in propagating components. In fact, calculation produces for the top two components the values, $$\eqalign{ {\cal F}(d,q,d,k)&=(d+2)k\cr {\cal F}(d-1,q,d,k)&=(-1)^p d \,k\,. } $$ One might expect a physical Hodge duality, $p\to d-2-p$, under which these two components have no dual, and so should vanish, or have a topological or cohomological value. More generally, calculation reveals that the Hodge `anomaly' is, $$ {\cal F}(p,q,d,k)-{\cal F}(d-2-p,q,d,k)=(-1)^p\,2k\big(p-{d\over2}+1\big)\,, \eql{hanom} $$ antisymmetrical about the $k=1$ critical value. A similar circumstance occurs in the $p$-form calculations in [\putref{dowCTpform}] and more especially in [\putref{Raj}]. Further analysis of Hodge duality is given in the Appendix justifying (\puteqn{hanom}). The paper [\putref{DMW}] contains a more thorough investigation into duality and (\puteqn{hanom}) fits in with their results. It is important to note that when $k=1$, {\it i.e. } for a usual second order operator, the expressions (\puteqn{6d2}) do not agree with those in [\putref{dowCTpform}] and [\putref{Raj}] even in the critical case.
This should be expected since even if the basic $p$--form is critical, the ghosts are not, having different operators in the two cases. I therefore list some numbers for the conformal anomaly on the full sphere, for the present theory when $k=1$ and $p=d/2-1$, the critical, conformal value, ($p$ runs from 0 to 7), \footnotecheck\defaultoption[]\@footnote{ I also record some values on the present scheme for $k=2$, ({\it i.e. } $p=d/2-2$), $ [14/45,$\break$-326/945,113/315,-171277/467775].$} $$\eqalign{ &{\cal F}_{here}\cr &=\bigg[{1\over3},\,-{16\over45},\,{229\over630},\,-{1042\over2835},\, {276929\over748440},\,-{45201643\over121621500},\,{108829363\over291891600},\, -{121702602491\over325641566250}\bigg]\,.\cr } \eql{crit1} $$ These are to be compared with the corresponding values in [\putref{CandA}] for $p=0$ to $p=7$, copied \footnotecheck\defaultoption[]\@footnote{ I have extended the list by the $p=0$ and $p=7$ values. The $p=8$ value is 9098897310129059/\break 2771861011920000.} here for convenience, $$\eqalign{ &F_{standard}\cr &=\bigg[{1\over3},\, -{31\over45},\,{221\over210},\,-{8051\over5670}, \,{1339661\over748440},\, -{525793111\over243243000},\,{3698905481\over1459458000} ,\,-{7576167103513\over2605132530000}\bigg]\,.\cr } \eql{canda} $$ It is remarkable that contact with these earlier results can be made by the following observation. It will be found that an alternating sum of the critical values, (\puteqn{crit1}), $$ F=\sum_{j=0}^p (-1)^j\,{\cal F}(p-j,1,2p+2-2j,1)\,, \eql{altsum} $$ reproduces the standard ones, (\puteqn{canda}). Conversely, adding two adjacent values in (\puteqn{canda}) yields a term in (\puteqn{crit1}). I have no deep justification for this numerical fact. However, it seems reasonable to generalise it by defining, $$ F_{crit}(p,q,k)\bigg|_{d=2p+2k}\equiv\sum_{j=0}^p (-1)^j\,{\cal F}(p-j,q,2p+2k-2j,k)\,, \eql{fcrit} $$ just for critical $p$--forms of a certain propagation order, $2k$.
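The alternating-sum relation, and its converse, can be confirmed with exact rational arithmetic on the two lists as quoted above. A minimal Python sketch (list and function names are mine):

```python
from fractions import Fraction as Fr

# Critical anomalies (crit1), p = 0..7, and the standard ones (canda).
crit = [Fr(1, 3), Fr(-16, 45), Fr(229, 630), Fr(-1042, 2835),
        Fr(276929, 748440), Fr(-45201643, 121621500),
        Fr(108829363, 291891600), Fr(-121702602491, 325641566250)]
std = [Fr(1, 3), Fr(-31, 45), Fr(221, 210), Fr(-8051, 5670),
       Fr(1339661, 748440), Fr(-525793111, 243243000),
       Fr(3698905481, 1459458000), Fr(-7576167103513, 2605132530000)]

def alt_sum(vals, p):
    # F = sum_{j=0}^p (-1)^j vals(p-j), cf. (altsum)
    return sum((-1) ** j * vals[p - j] for j in range(p + 1))

forward = all(alt_sum(crit, p) == std[p] for p in range(8))   # (altsum)
converse = all(std[p] + std[p - 1] == crit[p] for p in range(1, 8))
```

Both checks succeed for all eight listed values.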
In the present calculational scheme, it is not possible to find $F_{crit}$ as a polynomial in $k$. Case by case evaluation is needed. I list a few values on the full sphere for $p=0$ to $p=3$, and $k=2$ to $k=4$, $$\eqalign{ F_{crit}(p,1,2)&={14\over45},\,{124\over189},\,{137\over135},\,{645982\over467775}\cr \noalign{\vskip 3pt} F_{crit}(p,1,3)&={41\over140},\,{437\over700},\,{1873\over1925} ,\,{466497\over350350}\cr \noalign{\vskip3pt} F_{crit}(p,1,4)&={3956\over14175},\,{56008\over93555},\,{725692\over773955},\, {493493584\over383107725}\,.\cr } $$ I do not discuss these quantities any further at present. Lengthier comments can be found in Section 12 concerning (\puteqn{crit1}) and (\puteqn{altsum}). For the full sphere, I have not succeeded in finding a form like (\puteqn{planch}) for the gauge invariant conformal anomaly, so I simply give a few of the resulting $k$--polynomials, $$\eqalign{ {\cal F}(1,1,4,k)&={1\over90}k^3(3k^2-35)\cr {\cal F}(2,1,4,k)&={1\over180}k(3k^4-5k^2+360)\cr {\cal F}(1,1,6,k)&={1\over3780}k^3(6k^4-105k^2+161)\cr {\cal F}(2,1,6,k)&={1\over1260}k^3(3k^4-63k^2+518)\cr {\cal F}(3,1,8,k)&={1\over45360}k^3(5k^6-198k^4+2709k^2-19188)\,. } $$ \section{\bf 11. Casimir energy on the Einstein cylinder} In earlier works, [\putref{dowqretspin}], [\putref{dowgjmsren}], it was noticed that the coefficient of the $1/q$ term in the free energy, ${\cal F}$ (the conformal anomaly), was minus twice the Casimir energy of the field on the Einstein Universe\footnotecheck\defaultoption[]\@footnote{ Branson, [\putref{Branson}], is mostly concerned with propagation on this.} (generalised cylinder), T$\times$S$^{d-1}$. A direct confirmation of this in the present setup would provide evidence that the system is consistent. This task will be undertaken at another time.
For now, as a prediction, I list some vacuum energies, ${\cal E}_0(p,d,k)$, obtained on the basis of the above correspondence, $$\eqalign{ {\cal E}_0(p,4,k)&=-{1\over720}\comb2p k(6k^4-20k^2+11)\cr {\cal E}_0(p,6,k)&=-{1\over60480}\comb4pk(12k^6-126k^4+336k^2-191)\cr {\cal E}_0(p,8,k)&=-{ 1\over3628800}\comb6p k(10k^8-240k^6+1764k^4-4320k^2+2497)\,.\cr } \eql{ezero} $$ The scalar $p=0$ values agree with those computed in [\putref{dowqretspin}]. Our present results verify that the general $p$--form energies are simply weighted with the dynamical degrees of freedom, $\comb{d-2}p$. Some graphs of the $k$--dependence are to be found in [\putref{dowqretspin}]. The $p>0$ results do not agree with the existing ones, as anticipated. It would be expected, however, that the {\it critical} values are related by (\puteqn{fcrit}), at least for $k=1$, the only case available up to now. To make this evident, I again give some numbers for the critical forms calculated from, say, (\puteqn{ezero}) at $k=1$ (displayed for $p=0$ to 6), $$\eqalign{ &{\cal E}_0^{crit}=\cr \noalign{\vskip4truept} &\bigg[ -{1\over12},\,{1\over120},\,-{31\over10080},\,{289\over181440},\, -{2219\over2280960},\,{6803477\over10378368000},\, -{3203699\over6793113600}\ldots\bigg] } \eql{cashere} $$ and quote an extended list of the corresponding ones, $E_0$, given in [\putref{GKT}], computed directly on the Einstein cylinder with standard `Maxwell' theory. An efficient general formula can be found in [\putref{dowqretspin}]. For $p=1$ to 7, one has, $$\eqalign{ &E_0^{crit}=\cr \noalign{\vskip4truept} &\bigg[{11\over120},\,-{191\over2016},\,{2497\over25920} ,\,-{14797\over152064},\,{92427157\over943488000} ,\,-{36740617\over373248000},\,{61430943169\over621831168000},\,\ldots\bigg]\,. } \eql{casstand} $$ As before, an alternating sum of (\puteqn{cashere}) yields (\puteqn{casstand}) and the addition of two adjacent terms in (\puteqn{casstand}) gives one in (\puteqn{cashere}).
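The same two numerical relations can be verified exactly for the Casimir energies, including the stated extensions of the two lists ($-1/12$ on the left of (casstand), $73691749/207277056000$ on the right of (cashere)). A minimal Python sketch (list names are mine):

```python
from fractions import Fraction as Fr

# (cashere) for p = 0..7 (last entry is the right extension) and
# (casstand) extended to the left by -1/12 at p = 0.
here = [Fr(-1, 12), Fr(1, 120), Fr(-31, 10080), Fr(289, 181440),
        Fr(-2219, 2280960), Fr(6803477, 10378368000),
        Fr(-3203699, 6793113600), Fr(73691749, 207277056000)]
std = [Fr(-1, 12), Fr(11, 120), Fr(-191, 2016), Fr(2497, 25920),
       Fr(-14797, 152064), Fr(92427157, 943488000),
       Fr(-36740617, 373248000), Fr(61430943169, 621831168000)]

ok_alt = all(sum((-1) ** j * here[p - j] for j in range(p + 1)) == std[p]
             for p in range(8))
ok_adj = all(std[p] + std[p - 1] == here[p] for p in range(1, 8))
```

Both relations hold for all listed entries.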
Using this, the numbers in (\puteqn{casstand}) can be extended correctly to the left by $-1/12$ for $p=0$ and $0$ for $p=-1$, while (\puteqn{cashere}) can be extended to the right by $73691749/207277056000$. See Appendix B for a more analytical connection between (\puteqn{cashere}) and (\puteqn{casstand}). \section{\bf12. Comments and Conclusion} One aspect (the conformal anomaly) of a quantised field propagating according to the Branson--Gover operators on the sphere has been investigated. Even for second order propagation the results differ from the conventional ones for $p>0$ because of the differing ghost system.
Both the present theory and the conventional one display the correct degrees of freedom, $\comb {d-2}p$. Numerically, it is observed that the two sets of values are simply related when the forms are `critical', {\it i.e. } when the derivative order, form order and dimension satisfy $2k=d-2p$. This relation, (\puteqn{altsum}), has the appearance of the sum involving edge mode `corrections' that yields, for example, the Maxwell entanglement entropy as the conformal anomaly [\putref{DMW}]. \footnotecheck\defaultoption[]\@footnote{ A similar interpretation of the relation between the Casimir energies is not so obvious. More information is given in Appendix B.} Why this should be is not entirely clear. The numbers (\puteqn{crit1}) agree with those in Table 2 of [\putref{DandM}]. They show that both theories deliver the entanglement entropy devoid of any corrections. The numbers for any $p$ in [\putref{DandM}] were constructed using just the coexact forms, as an extrapolation from the detailed gauge calculations for $p=2$. Also, in connection with the hyperbolic approach in [\putref{DandM}], it seems that the Branson operators, for $k=1$, saturate the Breitenlohner--Freedman bound. A clean mathematical result is that the unconstrained $p$--form conformal anomaly on S$^d$ can be expressed as an integral over the (continued) Plancherel measure for coexact $p$--forms on the odd hyperbolic space, H$^{d+1}$, indicating connections with a form Selberg $\zeta$--function, [\putref{Kurokawa}]. It is also shown that the entanglement entropy for the sphere equals minus the conformal anomaly, technically because the conformal anomaly on the $q$--deformed sphere is an extremum at the round value, $q=1$ (but only for allowed values of $k$). For brevity, I have not extended the analysis beyond that involving the first derivatives with respect to $q$.
Further investigations include finding the functional determinants in odd dimensions, where one would want to discuss bounded manifolds, like the hemisphere, in more detail. \section{\bf Appendix A. Duality} There is some general interest in the validity of Hodge duality. In the present situation a simple analysis can be given starting from the ghost sum, (\puteqn{gfg}), as in most other treatments, including the historic ones. For variety, I use the alternative formulation in terms of the coexact conformal anomalies on the $q$--lune, [\putref{CandA}], $$ {\cal F}^b(p)=\sum_{l=0}^p(-1)^{p+l}\mbox{{\goth\char70}}^b_{CE}(l)+(-1)^p\,(p+1)k\delta^{br} \eql{gfg2} $$ where I have dropped the inessential arguments, $d,q,k$. The final term is a zero mode effect which exists only for relative conditions, $b=r$. One makes the duality replacement $p\to d-2-p$ in this expression and, to begin with, I add the absolute and relative values so that the system is a $p$--form on the periodic lune. To make the notation more streamlined for this discussion, I denote the single lune by ${\cal M}$ and its periodic double by $2{\cal M}$. In terms of $\zeta$--functions, the relation is, $$\eqalign{ \zeta(2{\cal M},p)&=\zeta^a({\cal M},p)+\zeta^r({\cal M},p)\cr &=\zeta^a({\cal M},p)+\zeta^a({\cal M},d-1-p)\,. } $$ The ghost sum then reads, $$\eqalign{ {\cal F}(2{\cal M},p)=&\sum_{l=0}^p(-1)^{p+l}\mbox{{\goth\char70}}_{CE}(2{\cal M},l)+(-1)^p\,(p+1)k\cr &=(-1)^{d-1}\sum_{l=d-1-p}^{d-1}(-1)^{p+l}\mbox{{\goth\char70}}_{CE}(2{\cal M},l)+(-1)^p\,(p+1)k\,,\cr } \eql{gfg3} $$ on redefining $l$ and using the coexact $a\leftrightarrow r$ duality, {\it i.e. } $\mbox{{\goth\char70}}_{CE}(2{\cal M},l)=\mbox{{\goth\char70}}_{CE}(2{\cal M},d-1-l)$. This is an identity.
Now set $p\to d-2-p$ in the second term to get $$\eqalign{ {\cal F}(2{\cal M},d-2-p)&=-\sum_{l=p+1}^{d-1}(-1)^{p+l}\mbox{{\goth\char70}}_{CE}(2{\cal M},l) +(-1)^p\,(d-1-p)k\cr } $$ so that the difference in Hodge duals is, $$\eqalign{ {\cal F}(2{\cal M},p)-{\cal F}(2{\cal M},d-2-p)&=\sum_{l=0}^{d-1}(-1)^{p+l}\mbox{{\goth\char70}}_{CE}(2{\cal M},l) +(-1)^p\,(2p+2-d)k\cr &=(-1)^p\,2k(p+1-d/2)\,, } $$ the full alternating $l$--sum of the coexact quantities vanishing by the duality for even $d$. This is the value in the main text. For the more general single $q$--lune with boundary conditions, a similar manipulation reveals the Hodge dual relation, $$\eqalign{ {\cal F}^a({\cal M},p)-{\cal F}^r({\cal M},d-2-p) &=-(-1)^p(3d/2-1-p)k\,. } $$ Also, $$ {\cal F}^a({\cal M},p)-{\cal F}^r({\cal M},p)=-(-1)^p\,2(p+1)k+k\,\delta_{pd}\,. $$ Therefore one arrives at the curiously asymmetric Hodge--like dualities, $$\eqalign{ {\cal F}^r({\cal M},p)-{\cal F}^r({\cal M},d-2-p) &=(-1)^p(2p+2-(3d/2-1-p))k-k\,\delta_{pd}\cr &=3(-1)^p(p+1-d/2)k-k \delta_{pd}\cr } $$ and $$\eqalign{ {\cal F}^a({\cal M},p)-{\cal F}^a({\cal M},d-2-p) &=-(-1)^p(p+1-d/2)k+k\delta_{pd}\,,\cr } $$ $$\eqalign{ {\cal F}^r({\cal M},d-2-p)-{\cal F}^r({\cal M},p) &=3(-1)^{d-2-p}(d-2-p+1-d/2)k\cr &=3(-1)^p(d/2-1-p)k\,. } $$ These relations give the particular values when $p=d-1$ and $p=d$.
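The periodic-lune duality gap can be checked mechanically: for any coexact input satisfying the duality $G(l)=G(d-1-l)$, with $d$ even, the ghost sum (gfg2) gives a difference of Hodge duals equal to $(-1)^p\,2k(p+1-d/2)$, independently of the input values. A minimal Python sketch (names and the sample values are mine):

```python
from fractions import Fraction as Fr

def F_periodic(G, k, p):
    # Ghost sum on the periodic lune 2M, cf. (gfg2) with b = r:
    # F(2M,p) = sum_{l=0}^p (-1)^{p+l} G(l) + (-1)^p (p+1) k
    return sum((-1) ** (p + l) * G(l) for l in range(p + 1)) \
        + (-1) ** p * (p + 1) * k

d, k = 8, Fr(3, 2)
vals = [Fr(5), Fr(-2), Fr(7), Fr(1)]      # arbitrary coexact inputs
G = lambda l: vals[min(l, d - 1 - l)]     # enforce G(l) = G(d-1-l)

gaps = [F_periodic(G, k, p) - F_periodic(G, k, d - 2 - p)
        for p in range(d - 1)]
```

The arbitrary values drop out of `gaps`, as the vanishing of the full alternating sum guarantees.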
That result is not needed here. Also in [\putref{DandKii}], using recursions, the $p$--form coexact $\zeta$--function\ on a sphere S$^{d-1}$ was expressed as an alternating sum of conformal \footnotecheck\defaultoption[]\@footnote{ Remember that the system is conformally covariant in $d$ dimensions, not $d-1$.} scalar, $0$--form $\zeta$--functions, $\zeta^{conf}_0(s)$, on spheres of different dimensions. In the critical, conformal case, {\it i.e. } $p=d/2-1$, it reads, $$ \zeta^{CE}(s,p)\bigg|_{S^{2p+1}}=\sum_{j=0}^p(-1)^{j+p} \comb{2j}{j} \zeta^{conf}_0(s)\bigg|_{S^{2j+1}}\,. \eql{dandkir} $$ Before proceeding, it should be noted that, on the Einstein cylinder, the single coexact $\zeta$--function\ is sufficient as the total ghost--for--ghost sum collapses through t\'elescopage, [\putref{dowzero}].\footnotecheck\defaultoption[]\@footnote{ Something similar seems to occur, though in a more complicated fashion, in the hyperbolic approach, [\putref{DandM}].} Evaluation of (\puteqn{dandkir}) at $s=-1/2$ gives a formula for the standard $p$--form Casimir energy on the Einstein cylinder in terms of conformal scalar Casimir energies on Einstein cylinders of varying dimensions, $$ E_0^{crit}(p)\big|_{S^{2p+1}}=\sum_{j=0}^p(-1)^{p+j} \comb{2j}{j} E_0^{conf}(0)\big|_{S^{2j+1}}\,. \eql{eezcrit} $$ The conformal scalar Casimir energy on the odd--sphere cylinder was calculated some time ago, [\putref{ChandD}], with the result, $$ E_0^{conf}(0)\big|_{S^n}=-{1\over(n+1)!}B^{(n)}_{n+1}\big((n-1)/2\big)\,, \quad n\,\,{\rm odd}\,. $$ The relation between (\puteqn{cashere}) and (\puteqn{casstand}) is then expressed by, $$ E^{crit}_0(p)\big|_{S^{2p+1}}=\sum_{j=0}^p (-1)^{p+j}{\cal E}^{crit}_0(j)\,, \eql{ecrit} $$ where, $$ {\cal E}^{crit}_0(j)\equiv-{1\over 2(j+1)(2j+1)\,j!^2}B^{(2j+1)}_{2j+2}(j)\,, $$ which evaluates to the list (\puteqn{cashere}).
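These Bernoulli-polynomial expressions are straightforward to evaluate exactly. The sketch below computes the generalized (N\"orlund) Bernoulli polynomials $B^{(n)}_k(x)$ from their generating function $(t/(e^t-1))^n e^{xt}$ by truncated power-series arithmetic, and reproduces the first entries of (cashere), the familiar conformal scalar value $1/240$ on T$\times$S$^3$, and the degree-of-freedom weighting by $\comb{2j}j$ (function names are mine):

```python
from fractions import Fraction as Fr
from math import comb, factorial

def norlund_B(n, k, x):
    # B^{(n)}_k(x): (t/(e^t - 1))^n e^{x t} = sum_k B^{(n)}_k(x) t^k / k!
    N = k + 1
    a = [Fr(1, factorial(m + 1)) for m in range(N)]   # series of (e^t-1)/t
    inv = [Fr(1)] + [Fr(0)] * k                        # its reciprocal
    for m in range(1, N):
        inv[m] = -sum(a[j] * inv[m - j] for j in range(1, m + 1))
    def mul(p, q):                                     # truncated product
        r = [Fr(0)] * N
        for i in range(N):
            for j in range(N - i):
                r[i + j] += p[i] * q[j]
        return r
    pw = [Fr(1)] + [Fr(0)] * k
    for _ in range(n):
        pw = mul(pw, inv)                              # (t/(e^t-1))^n
    ex = [Fr(x) ** m / factorial(m) for m in range(N)] # e^{x t}
    return mul(pw, ex)[k] * factorial(k)

def E_conf(n):
    # conformal scalar Casimir energy on T x S^n, n odd (ChandD)
    return -norlund_B(n, n + 1, Fr(n - 1, 2)) / factorial(n + 1)

def E_crit(j):
    # cal E^crit_0(j), the entries of (cashere)
    return -norlund_B(2 * j + 1, 2 * j + 2, Fr(j)) / (
        2 * (j + 1) * (2 * j + 1) * factorial(j) ** 2)
```

For instance `E_conf(3)` gives $1/240$ and `E_crit(j)` equals $\comb{2j}j$ times `E_conf(2j+1)`.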
This makes the relation with (\puteqn{casstand}) more precise but sheds no physical light on why (\puteqn{cashere}) should emerge from the system defined on the even $q$--sphere as $q\to0$. In fact, equation (\puteqn{ecrit}) could be looked upon as just another way of computing (\puteqn{casstand}). Different ways of expressing the $p$--form $\zeta$--function\ correspond to different arrangements of the generating function (or `$q$--series' or `character'), no one way seemingly being more significant than any other.\footnotecheck\defaultoption[]\@footnote{ Similar considerations would hold in the hyperbolic calculation, [\putref{DandM}], regarding manipulations with the Plancherel measure, {\it e.g.}\ expansion.} I note that ${\cal E}^{crit}_0(j)$ has the correct number of degrees of freedom for a critical $j$--form, being equivalent to $\comb {2j}j$ single scalars, and that the construction of (\puteqn{ecrit}) is an alternating sum over these scalars, which have conformal dimension, $j$. \vskip15truept \noindent{\bf References.} \vskip5truept \begin{putreferences} \reference{FandS}{Fischmann,M. and Somberg,P., {\it The boundary value problem for Laplacian on differential forms and conformally Einstein infinity}, {\it J.Gen.Lie Th.Appl.} {\bf 11} (2017) 256, ArXiv:1508.01511 [math.DG].} \reference{DandM}{David,J.R. and Mukherjee,J. {\it Hyperbolic cylinders and Entanglement entropy: gravitons, higher spins and $p$--forms}, ArXiv:2005.08402.} \reference{BandC}{Benedetti,V. and Casini,H. {\it Entanglement entropy of linearized gravitons in a sphere}, ArXiv:1908.01800.} \reference{DandKii}{Dowker,J.S. and Kirsten,K. {\it Spinors and forms on the ball and the generalised cone}, {\it Comm. in Anal. and Geom.} {\bf7} (1999) 641, ArXiv:hep--th/9608189.} \reference{Kurokawa}{Kurokawa,N. {\it Gamma Factors and Plancherel Measures}, {\it Proc.Jap.
Acad.} {\bf 68} (1992) 256.} \reference{dowCTpform}{Dowker,J.S. {\it R\'enyi entropy and $C_T$ for $p$--forms on even spheres}, ArXiv:1706.04574.} \reference{Branson}{Branson,T. {\it Group Representations Arising from Lorentz Conformal Geometry}, {\it J.Func.Anal.} {\bf 74} (1987) 199.} \reference{BandG}{Branson,T. and Gover,A.R. {\it Conformally invariant operators, differential forms, cohomology and a generalization of $Q$--curvature}, {\it Comm.Partial Diff.Equns} {\bf30} (2005) 1669.} \reference{BandT}{Beccaria,M. and Tseytlin,A.A. {\it $C_T$ for higher derivative conformal fields and anomalies of (1,0) superconformal 6d theories}, ArXiv:1705.00305.} \reference{NandZ}{Nian,J. and Zhou,Y. {\it R\'enyi entropy of free (2,0) tensor multiplet and its supersymmetric counterpart}, \prD{93}{2016}{125010}, ArXiv:1511.00313.} \reference{Huang}{Huang,K--W. {\it Central Charge and Entangled Gauge Fields}, \prD{92}{2015}{025010}, ArXiv:1412.2730.} \reference{NFM}{De Nardo,L., Fursaev,D.V. and Miele,G. \cqg{14}{1997}{1059}, ArXiv:hep-th/9610011.} \reference{Fursaev}{Fursaev,D.V. {\it Entanglement R\'enyi Entropies in Conformal Field Theories and Holography}, {\it JHEP} 0609:018, 2006. ArXiv:1201.1702.} \reference{Raj}{Raj,H. {\it A note on sphere free energy of $p$--form gauge theory and Hodge duality}, \cqg{34}{2017}{247001}, ArXiv:1611.02507.} \reference{DMW}{Donnelly,W., Michel,B. and Wall,A.C. {\it Electromagnetic duality and entanglement entropy}, ArXiv:1611.0592.} \reference{GKT}{Giombi,S., Klebanov,I.R. and Tan,Z.M. {\it The ABC of Higher--Spin AdS/CFT}, {\it Universe} {\bf 4} (2018) 1, ArXiv:1608.07611.} \reference{dowzero}{Dowker,J.S. {\it Zero modes, entropy bounds and partition functions}, \cqg{20}{2003}{L105}, ArXiv:hep-th/0203026.} \reference{dowqretspin}{Dowker,J.S. {\it Revivals and Casimir energy for a free Maxwell field (spin-1 singleton) on $R\times S^d$ for odd $d$}, ArXiv:1605.01633.} \reference{Wunsch}{W\"unsch,V.
{\it On Conformally Invariant Differential Operators}, {\it Math. Nachr.} {\bf 129} (1989) 269.} \reference{BEMPSS}{Buchel,A.,Escobedo,J.,Myers,R.C.,Paulos,M.F.,Sinha,A. and Smolkin,M. {\it Holographic GB gravity in arbitrary dimensions}, {\it JHEP} 1003:111,2010, ArXiv:0911.4257.} \reference{dowpform1}{Dowker,J.S. {\it $p$--forms on $d$--spherical tessellations}, {\it J. Geom. and Phys.} {\bf 57} (2007) 1505, ArXiv:math/0601334.} \reference{dowpform2}{Dowker,J.S. {\it $p$--form spectra and Casimir energies on spherical tessellations}, \cqg{23}{2006}{1}, ArXiv:hep-th/0510248.} \reference{dowgjmsren}{Dowker,J.S. {\it R\'enyi entropy and $C_T$ for higher derivative scalars and spinors on even spheres}, ArXiv:1706.01369.} \reference{GPW}{Guerrieri, A.L., Petkou, A. C. and Wen, C. {\it The free $\sigma$CFTs}, ArXiv:1604.07310.} \reference{GGPW}{Gliozzi,F., Guerrieri, A.L., Petkou, A.C. and Wen,C. {\it The analytic structure of conformal blocks and the generalized Wilson--Fisher fixed points}, {\it JHEP }1704 (2017) 056, ArXiv:1702.03938.} \reference{YandZ}{Yankielowicz, S. and Zhou,Y. {\it Supersymmetric R\'enyi Entropy and Anomalies in Six--Dimensional (1,0) Superconformal Theories}, ArXiv:1702.03518.} \reference{OandS}{Osborn.H. and Stergiou,A. {\it $C_T$ for Non--unitary CFTs in higher dimensions}, {\it JHEP} {\bf06} (2016) 079, ArXiv:1603.07307.} \reference{Perlmutter}{Perlmutter,E. {\it A universal feature of CFT R\'enyi entropy} {\it JHEP} {\bf03} (2014) 117. ArXiv:1308.1083.} \reference{Norlund}{N\"orlund,N.E. {\it M\'emoire sur les polynomes de Bernoulli}, \am{43} {1922}{121}.} \reference{Dowpiston}{Dowker,J.S. {\it Spherical Casimir pistons}, \cqg{28}{2011}{155018}, ArXiv:1102.1946.} \reference{Dowchem}{Dowker,J.S.
{\it Charged R\'enyi entropy for free scalar fields}, \jpa{50} {2017}{165401}, ArXiv:1512.01135.} \reference{Dowconfspins}{Dowker,J.S. {\it Effective action of conformal spins on spheres with multiplicative and conformal anomalies}, \jpa{48}{2015}{225402}, ArXiv:1501.04881.} \reference{Dowhyp}{Dowker,J.S. {\it Hyperspherical entanglement entropy}, \jpa{43}{2010}{445402}, ArXiv:1007.3865.} \reference{dowrenexp}{Dowker,J.S.{\it Expansion of R\'enyi entropy for free scalar fields}, ArXiv:1412.0549.} \reference{CaandH}{Casini,H. and Huerta,M. {\it Entanglement entropy for the $n$-sphere}, \plb{694}{2010}{167}.} \reference{Apps}{Apps,J.S. {\it The effective action on a curved space and its conformal properties} PhD thesis (University of Manchester, 1996).} \reference{Dowcen}{Dowker,J.S., {\it Central differences, Euler numbers and symbolic methods}, \break ArXiv:1305.0500.} \reference{KPSS}{Klebanov,I.R., Pufu,S.S., Sachdev,S. and Safdi,B.R. {\it JHEP} 1204 (2012) 074.} \reference{moller}{M{\o}ller,N.M. \ma {343}{2009}{35}.} \reference{BandO}{Branson,T., and Oersted,B \jgp {56}{2006}{2261}.} \reference{BaandS}{B\"ar,C. and Schopka,S. {\it The Dirac determinant of spherical space forms},\break {\it Geom.Anal. and Nonlinear PDEs} (Springer, Berlin, 2003).} \reference{EMOT2}{Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G. { \it Higher Transcendental Functions} Vol.2 (McGraw-Hill, N.Y. 1953).} \reference{Graham}{Graham,C.R. SIGMA {\bf 3} (2007) 121.} \reference{Morpurgo}{Morpurgo,C. \dmj{114}{2002}{477}.} \reference{DandP2}{Dowker,J.S. and Pettengill,D.F. \jpa{7}{1974}{1527}} \reference{Diaz}{Diaz,D.E. {\it Polyakov formulas for GJMS operators from AdS/CFT}, {\it JHEP} {\bf 0807} (2008) 103.} \reference{DandD}{Diaz,D.E. and Dorn,H. {\it Partition functions and double trace deformations in AdS/CFT}, {\it JHEP} {\bf 0705} (2007) 46.} \reference{AaandD}{Aros,R. and Diaz,D.E. 
{\it Determinant and Weyl anomaly of Dirac operator: a holographic derivation}, ArXiv:1111.1463.} \reference{CandA}{Cappelli,A. and D'Appollonio, G. {\it On the trace anomaly as a measure of degrees of freedom}, \pl{487B}{2000}{87}.} \reference{CandT2}{Copeland,E. and Toms,D.J. {\it Quantized antisymmetric tensor fields and self-consistent dimensional reduction in higher--dimensional spacetimes}, \cqg {3}{1986}{431}.} \reference{Allais}{Allais, A. {\it JHEP} {\bf 1011} (2010) 040.} \reference{Tseytlin}{Tseytlin,A.A. {\it On Partition function and Weyl anomaly of conformal higher spin fields} ArXiv:1309.0785.} \reference{KPS2}{Klebanov,I.R., Pufu,S.S. and Safdi,B.R. {\it JHEP} {\bf 1110} (2011) 038.} \reference{CaandWe}{Candelas,P. and Weinberg,S. \np{237}{1984}{397}.} \reference{ChandD}{Chang,P. and Dowker,J.S. {\it Vacuum energy on orbifold factors of spheres}, \np{395}{1993}{407}, ArXiv:hep-th/9210013.} \reference{Steffensen}{Steffensen,J.F. {\it Interpolation}, (Williams and Wilkins, Baltimore, 1927).} \reference{Barnesa}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.} {\bf 19} (1903) 374.} \reference{DowGJMS}{Dowker,J.S. {\it Determinants and conformal anomalies of GJMS operators on spheres}, \jpa{44}{2011}{115402}.} \reference{Dowren}{Dowker,J.S. {\it R\'enyi entropy on spheres}, \jpamt {46}{2013}{2254}.} \reference{MandD}{Mansour,T. and Dowker,J.S. {\it Evaluation of spherical GJMS determinants}, 2014, Submitted for publication.} \reference{GandK}{Gubser,S.S and Klebanov,I.R. \np{656}{2003}{23}.} \reference{Dow30}{Dowker,J.S. \prD{28}{1983}{3013}.} \reference{Dowcmp}{Dowker,J.S. {\it Effective action on spherical domains}, \cmp{162}{1994}{633}, ArXiv:hep-th/9306154.} \reference{DowGJMSE}{Dowker,J.S. {\it Numerical evaluation of spherical GJMS operators for even dimensions} ArXiv:1310.0759.} \reference{Tseytlin2}{Tseytlin,A.A. \np{877}{2013}{632}.} \reference{Tseytlin}{Tseytlin,A.A. \np{877}{2013}{598}.} \reference{Dowma}{Dowker,J.S. 
{\it Calculation of the multiplicative anomaly} ArXiv: 1412.0549.} \reference{CandH}{Camporesi,R. and Higuchi,A. {\it The Plancherel measure for $p$--forms in real Hyperbolic space}. {\it J.Geom. and Physics} {\bf 15} (1994) 57.} \reference{Allen}{Allen,B. \np{226}{1983}{228}.} \reference{Dowdgjms}{Dowker,J.S. \jpamt{48}{2015}{125401}.} \reference{Dowsphgjms}{Dowker,J.S. {\it Numerical evaluation of spherical GJMS determinants for even dimensions}, ArXiv:1310.0759.} \end{putreferences} \bye
\section{Introduction}\label{sec:introduction} The nucleon-nucleon ($NN$) interaction is one of the most important inputs in nuclear physics and nuclear astrophysics. Since the seminal work of Yukawa~\cite{Yukawa:1935xg}, a variety of formulations of the $NN$ interaction have been proposed and thoroughly studied. Nowadays, a number of formulations of the $NN$ interaction, both phenomenological and more microscopic, are already of high precision, in the sense that they can describe $NN$ scattering data with $T_\mathrm{lab}<350$ MeV with a $\chi^2/\mathrm{d.o.f.}\approx 1$. In the phenomenological group, an accurate description of $NN$ scattering data has been achieved by the Reid93~\cite{Stoks:1994wp}, Argonne $V_{18}$~\cite{Wiringa:1994wb}, and (CD-)Bonn potentials~\cite{Machleidt:2000ge}. Although these phenomenological potentials work very well in describing the $NN$ scattering data, there is no strong connection between these interactions and the underlying theory of the strong interaction, Quantum Chromodynamics (QCD). As for the microscopic ones, chiral effective field theory has achieved astonishing success. In the 1990s, Weinberg proposed that one can construct the $NN$ interaction using Heavy Baryon (HB) chiral effective field theory (ChEFT)~\cite{Weinberg:1990rz,Weinberg:1991um}. Following this idea, numerous studies have been performed, and the description of $NN$ scattering data has become comparable to that of the phenomenological forces since 2003~\cite{Entem:2003ft,Epelbaum:2004fk,Epelbaum:2008ga,Machleidt:2011zz, Epelbaum:2014sza,Entem:2015xwa,Entem:2017gor}. In the chiral $NN$ interaction, the low energy constants (LECs) responsible for the short-range part of the $NN$ interaction play an important role in the description of partial waves with $L\leq 2$, while the two-pion and one-pion exchanges, responsible for the medium- and long-range parts, respectively, almost saturate the higher partial waves~\cite{Kaiser:1997mw}.
Nonetheless, it was shown in Ref.~\cite{Kaiser:1997mw} that a non-negligible discrepancy between chiral $NN$ phase shifts and the Nijmegen partial wave analysis can still be observed in F-waves, especially in the $^3\text{F}_2$ and $^3\text{F}_4$ partial waves. In a recent work~\cite{Ren:2016jna}, a study of the $NN$ interaction in covariant baryon chiral effective field theory was carried out. Interestingly, it was shown that the leading order (LO) relativistic $NN$ interaction can describe the $NN$ phase shifts as well as the next-to-leading order (NLO) nonrelativistic one. Of course, there is no mystery here, because the covariant formulation can be viewed as a more efficient ordering of the chiral expansion series. This can be seen from the fact that at LO the covariant formulation has four LECs~\footnote{In fact, there are five LECs in Ref.~\cite{Ren:2016jna}, but according to the power counting of Ref.~\cite{Xiao:2018jot}, there should be four; both choices provide very similar descriptions of the $J=0$ and 1 partial wave phase shifts~\cite{Wang:2020myr}.}, which lies between the two of the LO and the nine of the NLO nonrelativistic ChEFT. It remains to be seen whether relativistic effects or corrections are important in the two-pion exchange contributions. As no unknown LECs are involved in these contributions, they are particularly suited for checking the relevance of relativistic corrections. More specifically, it would be interesting to investigate whether the F partial waves can be better described than in Ref.~\cite{Kaiser:1997mw}. Although a covariant calculation was already performed in Ref.~\cite{Kaiser:1997mw}, the potentials were subsequently expanded and only contributions up to the third order were kept; higher order corrections were neglected. We prefer not to perform the nonrelativistic expansion, so that all relativistic corrections are properly kept and Lorentz invariance is maintained. 
The main purpose of this work is to study the medium-range part of the $NN$ interaction, or more specifically, the two-pion exchange contributions, in covariant baryon chiral effective field theory. We start from the covariant chiral $\pi N$ Lagrangians up to the second order~\cite{Fettes:2000gb} and construct the relativistic TPE $T$-matrix up to the third order of the chiral expansion. All power counting breaking (PCB) terms are removed in the spirit of the extended on-mass-shell (EOMS) scheme~\cite{Gegelia:1999gf,Fuchs:2003qc}, which has been well established in the one-baryon sector (see Ref.~\cite{Geng:2013xn} for a short review). We then compute the $NN$ phase shifts and mixing angles with $L\geq2$ and $J\geq2$. The paper is organized as follows. In Sec.~II, the chiral Lagrangians needed for computing the two-pion exchange contributions are briefly discussed. The TPE $T$-matrix up to the third order is presented in Sec.~III. In Sec.~IV, we compare the so-obtained $NN$ phase shifts with the Nijmegen partial wave analysis and with those of Ref.~\cite{Kaiser:1997mw}. A short summary and outlook is given in the last section. \section{Chiral Lagrangian}\label{sec:lagrangians} First, we briefly explain the power counting rule used in constructing the covariant baryon chiral Lagrangians; for more details, see, e.g., Refs.~\cite{Gegelia:1999gf,Fuchs:2003qc}. The core of an effective field theory lies in its power counting rule, which ranks the importance of Feynman diagrams for a given process. In this work we adopt naive dimensional analysis, in which amplitudes are expanded in powers of $(p/\Lambda_\chi)$, where $p$ refers to a low energy scale, including the three-momentum of the nucleons and the pion mass, and $\Lambda_\chi$ refers to the chiral symmetry breaking scale. 
The chiral order $\nu$ of a Feynman diagram (after proper renormalization) with $L$ loops is defined as \begin{equation}\label{eq:pc} \nu= 4L - 2N_\pi - N_n +\sum_k k V_k, \end{equation} where $N_{\pi}$ and $N_n$ refer to the numbers of pion and nucleon propagators, and $V_k$ denotes the number of $k$-th order vertices. It was realized very early that such a counting is not fulfilled in the one-baryon sector, because the large non-zero baryon mass in the chiral limit leads to the so-called power counting breaking problem~\cite{Gasser:1987rb}. Many approaches have been proposed to recover the power counting defined in Eq.~(\ref{eq:pc}); the most studied ones are the heavy baryon formulation~\cite{Jenkins:1990jv,Bernard:1992qa}, the infrared approach~\cite{Becher:1999he}, and the EOMS approach~\cite{Gegelia:1999gf,Fuchs:2003qc}. In the present work, we adopt the EOMS approach; the exact procedure of removing the PCB terms will be explained later. In order to calculate the contributions of two-pion exchanges, we need the following LO and NLO $\pi N$ Lagrangians, \begin{equation}\label{eq:eff} \mathcal{L}=\mathcal{L}_{\pi N}^{(1)}+\mathcal{L}_{\pi N}^{(2)}, \end{equation} where the superscript refers to the chiral order, and they read~\cite{Fettes:2000gb,Chen:2012nx}, respectively, \begin{align}\label{eq:lagpiN} \mathcal{L}_{\pi N}^{(1)} &=\bar{N}\left( {\rm{i}} \slashed{D}-m + \frac{g_A}{2} \slashed{u} \gamma_5 \right) N ,\\ \mathcal{L}_{\pi N}^{(2)} &=c_{1}\trace{\chi_{+}}\bar{N}N- \frac{c_{2}}{4m^2}\trace{u^{\mu}u^\nu}\left(\bar{N}D_{\mu}D_{\nu}N +\mathrm{h.c.}\right)+\frac{c_3}{2}\trace{u^2}\bar{N} N-\frac{c_4}{4}\bar{N}\gamma^{\mu}\gamma^{\nu}\left[u_{\mu},u_{\nu}\right] N, \end{align} where the nucleon field $N=(p,n)^{T}$, and the covariant derivative $D_\mu$ is defined as $D_{\mu}=\partial_{\mu}+\Gamma_{\mu}$ with \begin{align} \nonumber \Gamma_{\mu}=\frac{1}{2}\left(u^\dag \partial_{\mu} u+ u \partial_{\mu} 
u^\dag\right),~~~~~~~u={\rm{exp}}\left(\frac{{\rm{i}}\Phi}{2f_\pi}\right). \end{align} The pion field $\Phi$ is a $2 \times 2$ matrix of the following form, \begin{align} \nonumber\Phi=\left( \begin{matrix} \pi^0 & \sqrt{2} \pi^{+}\\ \sqrt{2} \pi^{-} & -\pi^0\\ \end{matrix} \right), \end{align} and the axial-vector quantity $u_{\mu}$ is defined as \begin{align} \nonumber u_{\mu}={\rm{i}}\left(u^\dag \partial_{\mu} u- u \partial_{\mu} u^\dag\right), \end{align} where $\chi_{+}=u^\dag \chi u + u \chi u^\dag$ with $\chi=\mathcal{M}=\mathrm{diag}\left(m_\pi^2,m_\pi^2\right)$. The following values for the relevant LECs and masses are adopted in the numerical calculation: the pion decay constant $f_{\pi}=92.4$ MeV, the axial coupling constant $g_A=1.29$~\cite{Machleidt:2011zz}\footnote{This choice is made in order to be consistent with Refs.~\cite{Entem:2003ft,Epelbaum:2004fk,Epelbaum:2008ga,Machleidt:2011zz, Epelbaum:2014sza,Entem:2015xwa,Entem:2017gor}. Using the more standard value $g_A=1.267$ yields almost the same results.}, the nucleon mass $m_n=939$ MeV, the pion mass $m_\pi=139$ MeV~\cite{Tanabashi:2018oca}, and the low-energy constants $c_1 =-1.39$, $c_2=4.01$, $c_3=-6.61$, $c_4=3.92$, all in units of GeV$^{-1}$, taken from Ref.~\cite{Chen:2012nx}. The values of $c_{1,2,3,4}$ employed in this work differ from the standard HB ChPT values because of the different renormalization schemes adopted. In Ref.~\cite{Chen:2012nx}, pion-nucleon scattering is studied in the EOMS scheme, which is also the scheme adopted in this work, while previous calculations are mainly based on the HB scheme. We therefore took the values of $c_{1,2,3,4}$ from Ref.~\cite{Chen:2012nx} for self-consistency; using the HB values of $c_{1,2,3,4}$ would ruin the description of pion-nucleon scattering. One should note that the complete $\mathcal{L}_{\pi N}^{(2)}$ contains more terms than those relevant here. 
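As a small illustration of the counting rule of Eq.~(\ref{eq:pc}), the chiral orders of the TPE diagrams discussed below can be tallied in a few lines. This is only a sketch for orientation, not part of the actual calculation; the diagram inventories (loops, propagators, vertex orders) are our reading of the one-loop topologies shown in the figures.

```python
# Chiral order of a diagram by naive dimensional analysis:
#   nu = 4L - 2*N_pi - N_n + sum_k k*V_k
# Inventories below are read off the one-loop TPE topologies:
# one loop, two pion propagators, and 0/1/2 internal nucleon
# propagators for football/triangle/box diagrams.

def chiral_order(loops, n_pion_props, n_nucleon_props, vertex_orders):
    """Naive-dimensional-analysis chiral order of a Feynman diagram."""
    return 4 * loops - 2 * n_pion_props - n_nucleon_props + sum(vertex_orders)

# O(p^2) diagrams: all vertices from the first-order Lagrangian
football = chiral_order(1, 2, 0, [1, 1])        # two pi-pi-N-N vertices
triangle = chiral_order(1, 2, 1, [1, 1, 1])     # one pi-pi-N-N + two pi-N-N
box      = chiral_order(1, 2, 2, [1, 1, 1, 1])  # four pi-N-N vertices

# O(p^3) diagrams: one vertex promoted to a second-order (c_i) vertex
football_nlo = chiral_order(1, 2, 0, [1, 2])
triangle_nlo = chiral_order(1, 2, 1, [1, 1, 2])

print(football, triangle, box)     # all 2
print(football_nlo, triangle_nlo)  # all 3
```

The same bookkeeping reproduces why no box or crossed diagram appears at third order: promoting one vertex requires a second-order vertex, and there is no $\pi NN$ vertex at that order.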
\section{Two-pion exchange contributions} \label{sec:oneloop} \subsection{Leading order ($O(p^2)$) results} The two-pion exchange $T$-matrix is evaluated in the center-of-mass frame and in the isospin limit $m_u=m_d$. The leading order TPE diagrams are shown in Fig.~\ref{fig:tpelo}. They contribute at order $O(p^2)$. All of them can be calculated directly in the EOMS scheme~\cite{Gegelia:1999gf,Fuchs:2003qc}, which is just the conventional dimensional regularization scheme with the further removal of PCB terms. There is no so-called pinch singularity~\cite{Weinberg:1990rz,Weinberg:1991um,Kaiser:1997mw} in this case, due to the finite nucleon mass. Note that only direct diagrams need to be computed in the $NN$ $T$-matrix because of the Pauli exclusion principle~\cite{Kaiser:1997mw}. In principle, the box diagram includes contributions from irreducible TPE and iterated OPE. As a matter of fact, we have calculated the iterated OPE by inserting the relativistic OPE potential~\cite{Ren:2016jna} into the Lippmann-Schwinger equation and found that the result is numerically identical to that of Ref.~\cite{Kaiser:1997mw}. For this reason, the contributions from iterated OPE are not presented explicitly. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{tpeop2.eps} \caption{Two-pion exchange diagrams at $O(p^2)$. The pion-nucleon vertices are from $\mathcal{L}_{\pi N}^{(1)}$.} \label{fig:tpelo} \end{figure} The complete TPE contributions to the on-shell $NN$ $T$-matrix are decomposed into the scalar integrals $A_0, B_0, C_0$ and $D_0$, multiplied by polynomials and fermion bilinears, using FeynCalc~\cite{Shtabovenko:2020gxv,Shtabovenko:2016sxi,Mertig:1990an}, and then calculated numerically with the help of OneLOop~\cite{vanHameren:2009dr,vanHameren:2010cp}. However, there is still a tricky problem. 
That is, because of the large non-zero nucleon mass $m_n$ in the chiral limit, lower order analytical terms may appear in higher order loop calculations and break the naive power counting, namely the PCB problem~\cite{Gasser:1987rb}. The procedure to remove the PCB terms is rather standard and has been discussed in detail in Refs.~\cite{Gegelia:1999gf,Fuchs:2003qc,Geng:2013xn,Lu:2018zof}. In the end, the total TPE contributions to the on-shell $NN$ $T$-matrix at $O(p^2)$ take the following form, \begin{equation}\label{eq:tpelo} \mathcal{T}_{NN}^{(2)}=\frac{1}{16\pi^2 f_\pi^4}\sum_{i}N_i \mathcal{T}_i^{(2)}, \end{equation} where $i$ refers to the $i$-th Feynman diagram contributing at this order, $N_i$ denotes the product of isospin and coupling factors summarized in Table~\ref{tb:isfoptwo}, and $\mathcal{T}_i^{(2)}$ refers to the $T$-matrix from each Feynman diagram, with \begin{table} \centering \caption{Products of isospin and coupling factors of the two-pion exchange diagrams at $O(p^2)$.} \label{tb:isfoptwo} \begin{tabular}{c|c|c|c|c|c} \hline \hline & football & triangle L & triangle R& box & crossed\\ \hline $I=1$ & $\frac{1}{8}$ & $\frac{1}{8} g_A^2$ &$\frac{1}{8} g_A^2$ & $\frac{1}{16} g_A^4$ & $\frac{5}{16}g_A^4$ \\ \hline $I=0$ & $-\frac{3}{8}$ & $-\frac{3}{8}g_A^2$ & $-\frac{3}{8}g_A^2$ & $\frac{9}{16}g_A^4$ &$-\frac{3}{16}g_A^4$ \\ \hline \hline \end{tabular} \end{table} \begin{equation} \mathcal{T}_i^{(2)} = \mathcal{T}_i^{\prime(2)} - \mathcal{T}^{\prime \prime(2)}_i, \end{equation} where $\mathcal{T}_i^{\prime(2)}$ denotes the total contribution to the $T$-matrix and $\mathcal{T}^{\prime \prime(2)}_i$ is the contribution from the PCB term. 
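For bookkeeping, the factors $N_i$ of Eq.~(\ref{eq:tpelo}) can be transcribed into a small lookup table. This is an illustrative sketch only: the entries are copied verbatim from Table~\ref{tb:isfoptwo}, and the overall prefactor uses the $f_\pi$ value quoted in Sec.~\ref{sec:lagrangians}.

```python
import math
from fractions import Fraction as F

# Isospin-coupling products N_i for the O(p^2) TPE diagrams,
# transcribed from Table 1 as pairs (rational coeff, power of g_A):
# N_i = coeff * g_A**power.
N_FACTORS = {
    1: {"football":   (F(1, 8),  0),
        "triangle L": (F(1, 8),  2),
        "triangle R": (F(1, 8),  2),
        "box":        (F(1, 16), 4),
        "crossed":    (F(5, 16), 4)},
    0: {"football":   (F(-3, 8),  0),
        "triangle L": (F(-3, 8),  2),
        "triangle R": (F(-3, 8),  2),
        "box":        (F(9, 16),  4),
        "crossed":    (F(-3, 16), 4)},
}

def N_i(isospin, diagram, g_A=1.29):
    """Numerical factor multiplying the diagram's T-matrix in the
    decomposition of the total O(p^2) TPE contribution."""
    coeff, power = N_FACTORS[isospin][diagram]
    return float(coeff) * g_A**power

# Overall prefactor 1/(16 pi^2 f_pi^4) of the decomposition,
# with f_pi = 92.4 MeV expressed in GeV.
f_pi = 0.0924
prefactor = 1.0 / (16 * math.pi**2 * f_pi**4)
```

Keeping the coefficients as exact rationals until the final evaluation avoids silently mistranscribing the table.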
The total contributions to the $T$-matrix for the football, triangle, crossed, and box diagrams read explicitly \begin{align}\label{eq:tpepotentials} \mathcal{T}^{\prime(2)}_{\text{Football}} &= -\frac{1}{18}{\cal{B}}_2 \left[3 \left(4 m_{\pi }^2-t\right) B_0\left(t,m_{\pi }^2,m_{\pi }^2\right)+6 \text{A}_0\left(m_{\pi }^2\right)+12 m_{\pi }^2-2 t\right],\\\nonumber \mathcal{T}^{\prime(2)}_{\text{TrigL}} &= 4 m_n^2 \left\{{\cal{B}}_3 m_n \left[2 \left(2 {f_1} +{f_2}+{f_3}\right) +{C}_0\left(\text{A}\right)\right]+2 {\cal{B}}_2 {f_4}\right\} + 2{\cal{B}}_2{f_5},\\\nonumber \mathcal{T}^{\prime(2)}_{\text{TrigR}} & = \mathcal{T}^{\prime(2)}_{\text{TrigL}}({\cal{B}}_3\mapsto {\cal{B}}_4),\\\nonumber \mathcal{T}^{\prime(2)}_{\text{Cross}} & = -\left\{2 m_n^2 \left\{m_n \left[2\left({\cal{B}}_3+{\cal{B}}_4\right)\left({f_2} +{f_3}\right)+4 \left(4 {\cal{B}}_1 m_n+{\cal{B}}_3+{\cal{B}}_4\right) {f_1} \right.\right.\right.\\\nonumber &\left.\left.\left.+\left(4 {\cal{B}}_1 m_n+{\cal{B}}_3\right){C}_0\left(\text{A}\right) +\left(4 {\cal{B}}_1 m_n+{\cal{B}}_4\right) {C}_0\left(\text{B}\right)\right]+4\left\{{\cal{B}}_2{f_4}+m_n^2\left[\left({\cal{B}}_3 + {\cal{B}}_4\right) \right.\right.\right.\right.\\\nonumber &\left.\left.\left.\left. \times D_{22} m_n +D_{23} \left(2 {\cal{B}}_1 m_n^2+{\cal{B}}_5\right)+2 {\cal{B}}_2 D_{00}\right]\right\}+2 {\cal{B}}_1{B_0}\left(t,m_{\pi }^2,m_{\pi}^2\right)\right\} +{\cal{B}}_2{f_5}\right\},\\\nonumber \mathcal{T}^{\prime(2)}_{\text{Box}} &=-\left\{2 m_n^2 \left\{m_n \left[-2\left( {\cal{B}}_3 + {\cal{B}}_4\right)\left({f_2}+{f_3}\right) +4 \left(4 {\cal{B}}_1 m_n-{\cal{B}}_3-{\cal{B}}_4\right) {f_1} \right.\right.\right.\\\nonumber &\left.\left.\left.+\left(4 {\cal{B}}_1 m_n-{\cal{B}}_3\right){C}_0\left(\text{A}\right) +\left(4 {\cal{B}}_1 m_n-{\cal{B}}_4\right) {C}_0\left(\text{B}\right)\right] -4\left\{ {\cal{B}}_2{f_4} + m_n^2\left[\left({\cal{B}}_3+{\cal{B}}_4\right)\right.\right.\right.\right.\\\nonumber &\left.\left.\left.\left. 
\times D_{22} m_n -D_{23} \left(2 {\cal{B}}_1 m_n^2+{\cal{B}}_5\right)+2 {\cal{B}}_2 D_{00}\right]\right\}+2 {\cal{B}}_1{B_0}\left(t,m_{\pi }^2,m_{\pi}^2\right)\right\}-{\cal{B}}_2{f_5}\right\}, \end{align} with \begin{align} f_1 & = {\text{PaVe}}\left(1,\left\{m_n^2,t,m_n^2\right\},\left\{m_n^2,m_{\pi}^2,m_{\pi}^2\right\}\right),\\\nonumber f_2 & = {\text{PaVe}}\left(1,1,\left\{m_n^2,t,m_n^2\right\}, \left\{m_n^2,m_{\pi }^2,m_{\pi}^2\right\}\right),\\\nonumber f_3 & ={\text{PaVe}}\left(1,2,\left\{m_n^2,t,m_n^2\right\},\left\{m_n^2,m_{\pi}^2,m_{\pi}^2\right\}\right),\\\nonumber f_4 & = {\text{PaVe}}\left(0,0,\left\{m_n^2,t,m_n^2\right\},\left\{m_n^2,m_{\pi}^2,m_{\pi}^2\right\}\right),\\\nonumber f_5 & = {\text{PaVe}}\left(0,0,\{t\},\left\{m_{\pi }^2,m_{\pi}^2\right\}\right),\\\nonumber C_0(\text{A}) & = {C}_0\left(m_n^2,m_n^2,t,m_{\pi}^2,m_n^2,m_{\pi }^2\right),\\\nonumber C_0(\text{B}) & = {C}_0\left(m_n^2,t,m_n^2,m_n^2,m_{\pi }^2,m_{\pi }^2\right), \end{align} where $D_{ij}$ and PaVe are library functions of FeynCalc~\cite{Shtabovenko:2020gxv,Shtabovenko:2016sxi,Mertig:1990an} that can be reduced to the scalar integrals $A_0, B_0, C_0, D_0$, and ${\cal{B}}_{1-5}$ denote the following fermion bilinears, \begin{align} {\cal{B}}_1 & = \bar u(\bm{p}')u(\bm{p}) \bar u(-\bm{p}')u(-\bm{p}),\\\nonumber {\cal{B}}_2 & = \bar u(\bm{p}') \gamma^\mu u(\bm{p}) \bar u(-\bm{p}') \gamma_\mu u(-\bm{p}),\\\nonumber {\cal{B}}_3 & = \bar u(\bm{p}') (\slashed{p}_2+\slashed{p}_4) u(\bm{p}) \bar u(-\bm{p}')u(-\bm{p}),\\\nonumber {\cal{B}}_4 & = \bar u(\bm{p}') u(\bm{p}) \bar u(-\bm{p}') (\slashed{p}_1+\slashed{p}_3) u(-\bm{p}),\\\nonumber {\cal{B}}_5 & = \bar u(\bm{p}') (\slashed{p}_2 +\slashed{p}_4 )u(\bm{p}) \bar u(-\bm{p}') \slashed{p}_1 u(-\bm{p}), \end{align} where $\bm{p}$ and $\bm{p}'$ are the incoming and outgoing three-momenta, $p_1^\mu=(E, \bm p)$, $p_2^\mu=(E,-\bm {p})$, $p_3^\mu=(E',\bm {p}')$, $p_4^\mu=(E',-\bm{ p}' )$, $E=\sqrt{\bm{p}^2+m_n^2}$, 
$E'=\sqrt{\bm{p}'^2+m_n^2}$, $t=(p_1-p_3)^2$, and $u(\bm {p})$ and $\bar{u}(\bm {p})$ are Dirac spinors, \begin{align} u(\bm {p}) = N \left( \begin{matrix} \mathbbm{1}\\ \frac{{\bm{\sigma}} \cdot \bm{p}}{E+m_n} \end{matrix} \right) \chi_s,~~~~~~~N = \sqrt{\frac{E+m_n}{2m_n}}, \end{align} where $\chi_s$ denotes the Pauli spinor and ${\bm{\sigma}}$ are the Pauli matrices. To obtain the $T$-matrix from the PCB terms, it is convenient to project the $T$-matrix from momentum space to helicity space, so that it becomes a scalar free of Pauli matrices and can easily be expanded in powers of the small parameters. The detailed procedure for this projection is explained later. However, because of their complexity, we do not show the explicit expressions of $\mathcal{T}'$ in helicity space here. Note that the above bilinears are all parity even. This can easily be shown by utilizing momentum conservation and the Dirac equation, e.g., \begin{align} \nonumber {\cal{B}}_5 &= \bar u(\bm{p}') \left(\slashed{p}_2 + \slashed{p}_4 \right)u(\bm{p}) \bar u(-\bm{p}') \slashed{p}_1 u(-\bm{p}) \\\nonumber &= \bar u(\bm{p}') \left(\slashed{p}_2 + \slashed{p}_4\right) u(\bm{p})\bar u(-\bm{p}') \frac{1}{2}\left(\slashed{p}_1 + \slashed{p}_3+\slashed{p}_4 - \slashed{p}_2\right) u(-\bm{p}) \\\nonumber &= \frac{1}{2}\bar u(\bm{p}')\left(\slashed{p}_2 + \slashed{p}_4\right) u(\bm{p}) \bar u(-\bm{p}') \left(\slashed{p}_1 + \slashed{p}_3\right) u(-\bm{p})\\\nonumber &\xrightarrow{\text{Parity}} {\cal{B}}_5. 
\end{align} The $T$-matrices from the PCB terms in helicity space read \begin{align}\label{eq:pcblo} \mathcal{T}^{\prime \prime(2)}_{\text{Football}} & = 0,\\\nonumber \mathcal{T}^{\prime \prime(2)}_{\text{TrigL}} & = 4{H_1} m_n^2 \ln \left(\frac{\mu}{m_n}\right),\\\nonumber \mathcal{T}^{\prime \prime(2)}_{\text{TrigR}} & = \mathcal{T}_{\text{TrigL}}^{\prime \prime(2)},\\\nonumber \mathcal{T}^{\prime \prime(2)}_{\text{Cross}} & = -4H_1 m_n^2 \left[3 \ln \left(\frac{\mu}{m_n}\right)-1\right],\\\nonumber \mathcal{T}^{\prime \prime(2)}_{\text{Box}} & =- 4H_1 m_n^2 \left[ \ln \left(\frac{\mu}{m_n}\right)+1\right], \end{align} where $\mu$ refers to the renormalization scale and is set to 1 GeV in our numerical study unless otherwise stated, and $H_{1}$ reads \begin{align} H_1 = \left[|\bar{\lambda}_1 +\lambda_1| \cos\left(\frac{\theta}{2}\right) +|\bar{\lambda}_1 -\lambda_1| \sin\left(\frac{\theta}{2}\right) \right] \left[ |\bar{\lambda}_2 +\lambda_2| \cos\left(\frac{\theta}{2}\right) - |\bar{\lambda}_2 -\lambda_2| \sin\left(\frac{\theta}{2}\right) \right], \end{align} where $\lambda_{1,2}$ and $\bar{\lambda}_{1,2}$ denote the helicities of the incoming and outgoing particles, respectively, and $\theta$ refers to the scattering angle. \subsection{Next-to-leading order ($O(p^3)$) results} The next-to-leading order TPE diagrams are shown in Fig.~\ref{fig:tpenlo}. These diagrams are the same as the corresponding diagrams of Fig.~\ref{fig:tpelo} with the $\pi N$ vertices of $\mathcal{L}_{\pi N}^{(1)}$ replaced by those of $\mathcal{L}_{\pi N}^{(2)}$. Note that there is no box or crossed diagram at this order because there is no $\pi NN$ vertex at order $O(p^2)$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{tpeop3.eps} \caption{Two-pion exchange diagrams at $O(p^3)$. The black dots denote vertices from $\mathcal{L}_{\pi N}^{(2)}$.} 
\label{fig:tpenlo} \end{figure} The next-to-leading order TPE contributions to the $T$-matrix read \begin{equation} \mathcal{T}_{NN}^{(3)} = \mathcal{T}_{\text{FootballL}}^{(3)}+ \mathcal{T}_{\text{FootballR}}^{(3)}+ \mathcal{T}_{\text{TrigL}}^{(3)}+ \mathcal{T}_{\text{TrigR}}^{(3)}, \end{equation} where the notation is the same as above, and the products of isospin and coupling factors have been absorbed into the $T$-matrix at this order for simplicity, \begin{align}\label{eq:nlotpe} \mathcal{T}_{\text{FootballL}}^{(3)\prime} &= \frac{(4I-3)c_4}{72} \left(2 {\cal{B}}_2 m_n-{\cal{B}}_4\right) \left[3 \left(t-4 m_{\pi }^2\right) B_0\left(t,m_{\pi }^2,m_{\pi }^2\right)-6 {A}_0\left(m_{\pi }^2\right)+2 \left(t-6 m_{\pi }^2\right)\right],\\\nonumber \mathcal{T}_{\text{FootballR}}^{(3)\prime} &= \mathcal{T}_{\text{FootballL}}^{(3)\prime}({\cal{B}}_4\rightarrow {\cal{B}}_3),\\\nonumber \mathcal{T}_{\text{TrigL}}^{(3)\prime} &= 6 c_1 m_n m_\pi^2 {\cal{B}}_1 \left[ 2m_n^2\left(2 f_1 + C_0(\text{A}) \right) + B_0(t,m_\pi^2,m_\pi^2) \right]+...\quad,\\\nonumber \mathcal{T}_{\text{TrigR}}^{(3)\prime} &= 6 c_1 m_n m_\pi^2 {\cal{B}}_1 \left[ 2m_n^2\left(2 f_1 + C_0(\text{A}) \right) + B_0(t,m_\pi^2,m_\pi^2) \right]+...\quad , \end{align} where $I=0,1$ refers to the total isospin. The $T$-matrices from the PCB terms in helicity space read \begin{align} \mathcal{T}_{\text{FootballL}}^{(3)\prime\prime} &= 0,\\\nonumber \mathcal{T}_{\text{FootballR}}^{(3)\prime\prime} &= 0,\\\nonumber \mathcal{T}_{\text{TrigL}}^{(3)\prime\prime} &= 6 c_1 H_1 m_n m_\pi^2 \left[ 2\ln \left( \frac{\mu}{m_n}\right) + 1\right]+ ...\quad,\\\nonumber \mathcal{T}_{\text{TrigR}}^{(3)\prime\prime} &= 6c_1 H_1 m_n m_\pi^2 \left[ 2\ln \left( \frac{\mu}{m_n}\right) + 1\right]+ ... \quad. \end{align} 
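For orientation, the helicity factor $H_1$ entering the PCB terms can be evaluated directly from its definition; a minimal sketch (not part of the actual calculation) checking two simple limits: for equal helicities $\lambda_i=\bar{\lambda}_i=1/2$ it reduces to $\cos^2(\theta/2)$, and it equals 1 in the forward direction for aligned helicities.

```python
import math

def H1(lam1, lam2, lam1bar, lam2bar, theta):
    """Helicity factor H_1 from the PCB terms: a product of two
    brackets built from cos(theta/2), sin(theta/2) and |helicity sums|."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    first = abs(lam1bar + lam1) * c + abs(lam1bar - lam1) * s
    second = abs(lam2bar + lam2) * c - abs(lam2bar - lam2) * s
    return first * second

# Equal helicities: H_1 = cos^2(theta/2)
print(H1(0.5, 0.5, 0.5, 0.5, 1.0))  # cos(0.5)^2 ≈ 0.770
# Forward scattering with aligned helicities: H_1 = 1
print(H1(0.5, 0.5, 0.5, 0.5, 0.0))  # 1.0
```

Such limits give a quick sanity check on the sign conventions of the two brackets (note the relative minus sign in the second one).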
The full expressions of the $T$-matrices of the triangle diagrams at this order are not given explicitly due to their complexity~\footnote{They can be obtained as a Mathematica notebook from the authors upon request.}. In order to compute the phase shifts, we need to transform the $T$-matrix into the $LSJ$ basis, where $L$ is the total orbital angular momentum, $S$ is the total spin, and $J$ is the total angular momentum. The procedure for the partial wave projection is rather standard~\cite{Erkelenz:1971caz,Erkelenz:1974uj}; we refer to Ref.~\cite{Erkelenz:1974uj} for details. First, we compute the $T$-matrix directly in momentum space. Then, we transform it to the helicity basis. Next, it is rotated to the total angular momentum space $|JM\rangle$ using the Wigner d-functions. Last, it is projected onto the $LSJ$ basis. To compute the phase shifts and mixing angles, we follow the procedure of Ref.~\cite{Gasser:1990ku}, \begin{align} \delta_{LSJ} &=-\frac{m_n^2 |\bm{p}|}{16\pi^2 E}\text{ Re}\langle LSJ| \mathcal{T}_{NN} |LSJ\rangle,\\\nonumber \epsilon_J &= \frac{m_n^2 |\bm{p}|}{16\pi^2 E}\text{ Re}\langle J-1,1,J| \mathcal{T}_{NN} |J+1,1,J\rangle. \end{align} Note that the kinematical prefactors here differ from those of Ref.~\cite{Kaiser:1997mw} due to a different sign convention for the $NN$ $T$-matrix and a different normalization of the matrix elements obtained by the partial wave projection. \section{Results and discussions} \label{sec:results} In this section, the phase shifts with $2\leq L\leq 6$ and mixing angles with $2\leq J\leq 6$ in the relativistic framework are presented and compared with the nonrelativistic results of Ref.~\cite{Kaiser:1997mw}. {But before showing the phase shifts, the choice of the LECs $c_{1,2,3,4}$ needs to be clarified. As stated above, the values of $c_{1,2,3,4}$ adopted in this work are larger than those of Ref.~\cite{Bernard:1996gq} because of the different renormalization schemes. 
In Ref.~\cite{Chen:2012nx}, the four LECs are determined by fitting to $\pi N$ scattering phase shifts in the EOMS approach at order $O(p^3)$, while in Ref.~\cite{Bernard:1996gq}, they are fixed from a fit to nine $\pi N$ observables in the HB scheme at one-loop order $O(p^3)$. For the sake of self-consistency, we therefore took the values of $c_{1,2,3,4}$ from Ref.~\cite{Chen:2012nx} for the EOMS case and those of Ref.~\cite{Bernard:1996gq} for the HB case; otherwise the descriptions of $\pi N$ scattering in both schemes would be ruined. As a matter of fact, we have performed the nonrelativistic calculation with $c_{1,2,3,4}$ fixed at the values of Ref.~\cite{Chen:2012nx} and found that the corresponding results are worse than those of Ref.~\cite{Kaiser:1997mw}. } \subsection{D-wave} The D-wave phase shifts and mixing angle $\epsilon_2$ are shown in Fig.~\ref{tb:Dwave}. {The black dots refer to the Nijmegen partial wave phase shifts. The green dashed lines refer to the contributions from the relativistic OPE, the blue dash-dotted lines represent the contributions from the leading order TPE, the red curves contain the next-to-leading order TPE, while the black curves are their nonrelativistic counterparts with $g_A$ fixed at 1.29. The bands are generated by varying $\mu$ from 0.5 GeV to 1.5 GeV. The relativistic OPE is independent of $\mu$ since it contributes at tree level. The relativistic leading order TPE is also independent of $\mu$ for partial waves with $L \geq 2$. Nonetheless, the next-to-leading order relativistic TPE depends considerably on $\mu$ for the $^3\text{D}_1$, $^1\text{D}_2$ and $^3 \text{D}_2$ partial waves and shows little dependence on $\mu$ for the $^3\text{D}_3$ partial wave and mixing angle $\epsilon_2$. 
On the other hand, the nonrelativistic results are independent of $\mu$~\cite{Kaiser:1997mw}.} In all cases the chiral $NN$ phase shifts are in good agreement with the data up to $T_{\text{lab}}=50$ MeV. The relativistic results show the same tendency as their nonrelativistic counterparts, but the TPE contributions are more moderate, so for all the D-waves the relativistic results are in better agreement with the Nijmegen phase shifts, perhaps with the exception of $^{3}\text{D}_1$ for $T_{\text{lab}} \geq 150$ MeV, where both descriptions show a much stronger u-turn shape, inconsistent with the data. The nonrelativistic result for $^{3}\text{D}_1$ is in fair agreement with the data up to $T_{\text{lab}}=200$ MeV due to the cancellation of irreducible TPE and iterated OPE~\cite{Kaiser:1997mw}, while in the relativistic case, the contribution of the next-to-leading order TPE is somewhat larger than its nonrelativistic counterpart, so that the curve shifts somewhat upwards. The mixing angle $\epsilon_2$ in the relativistic approach is in better agreement with the data due to the moderate contribution from the irreducible parts of the TPE. Although the relativistic corrections are sizeable in the D-waves and improve the description of the data, the still relatively large discrepancy indicates the need for short-range contributions, namely the contact terms controlled by LECs. A few words are in order on the convergence pattern. For the coupled channels, because of the cancellation between the irreducible and the iterated parts in the leading order TPE, the contribution of the next-to-leading order TPE is very large compared with that of the leading order TPE. For the singlet channel $^1\text{D}_2$, in contrast to our expectation, the iterated part contributes negligibly to the phase shifts while the next-to-leading order TPE contributes a lot; moreover, the contribution of the next-to-leading order TPE seems to be larger than that of the OPE. 
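The lab energies quoted throughout translate into the center-of-mass momentum entering the phase-shift formula of Sec.~III; a minimal kinematics sketch, assuming elastic equal-mass $NN$ scattering so that $|\bm{p}|^2 = m_n T_{\text{lab}}/2$, and reading the maximal momentum transfer as $q = 2|\bm{p}|$ (backward angle):

```python
import math

M_N = 939.0   # nucleon mass in MeV (value used in the text)
M_PI = 139.0  # pion mass in MeV

def cm_momentum(T_lab):
    """CM momentum |p| for equal-mass elastic NN scattering:
    s = 4 m_n^2 + 2 m_n T_lab  =>  |p|^2 = s/4 - m_n^2 = m_n T_lab / 2."""
    return math.sqrt(M_N * T_lab / 2.0)

def cm_energy(T_lab):
    """Single-nucleon CM energy E = sqrt(|p|^2 + m_n^2), as in the
    prefactor of the phase-shift formula."""
    return math.sqrt(cm_momentum(T_lab) ** 2 + M_N ** 2)

# At T_lab = 150 MeV the maximal momentum transfer q = 2|p| is already
# around 530 MeV, i.e. roughly 3.8 m_pi, consistent with the ~530 MeV
# quoted in the F-wave discussion as the reason convergence deteriorates.
p = cm_momentum(150.0)
print(2 * p, 2 * p / M_PI)  # ≈ 530.8 MeV, ≈ 3.82
```

This makes explicit why $T_{\text{lab}}\gtrsim 150$ MeV strains the chiral expansion: the relevant scale is no longer small compared with $\Lambda_\chi$.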
All in all, although the relativistic results are quantitatively better than the nonrelativistic ones, pion-exchange contributions alone are not enough to explain the D-wave data, as concluded in Ref.~\cite{Kaiser:1997mw}. \begin{figure}[htbp] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{1D2.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3D1.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3D2.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3D3.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{E2.eps} } \caption{D-wave phase shifts and mixing angle $\epsilon_2$ as a function of $T_{\text{lab}}$. The black dots refer to the Nijmegen partial wave phase shifts~\cite{Stoks:1993tb}. The green dashed curves correspond to the contributions from the relativistic OPE~\cite{Ren:2016jna}, the blue dash-dotted curves represent the contributions of the leading order TPE, the red solid curves contain the next-to-leading order TPE, while the black curves are their nonrelativistic counterparts~\cite{Kaiser:1997mw} with $g_A$ fixed at 1.29. The bands are generated by varying $\mu$ from 0.5 GeV to 1.5 GeV.} \label{tb:Dwave} \end{figure} \subsection{F-wave} The F-wave phase shifts and mixing angle $\epsilon_3$ are depicted in Fig.~\ref{tb:Fwave}. {The relativistic chiral phase shifts show only a moderate dependence on $\mu$ for the $^3\text{F}_2$ partial wave, and the variation of $\mu$ yields indistinguishable differences for the others.} As in the D-wave case, the relativistic TPE is moderate, so that overall the phase shifts are in better agreement with the data. For the $^1\text{F}_3$ partial wave, the relativistic results are almost identical to the data up to $T_{\text{lab}}=200$ MeV. For the $^3\text{F}_3$ partial wave, the relativistic phase shifts are slightly better than the nonrelativistic ones. For the $^3\text{F}_4$ partial wave, the two results are almost identical. 
However, for the $^3\text{F}_2$ partial wave, the contributions from the next-to-leading order TPE are very small due to the cancellation between the contributions from $c_3$ and $c_4$, which leads to a fair agreement with the data for $T_{\text{lab}}\leq210$ MeV. In addition, the fact that the contributions of the leading order relativistic TPE are relatively small indicates a good convergence at least up to $O(p^2)$. The contributions of the next-to-leading order TPE are somewhat large for the $^1\text{F}_3$, $^3\text{F}_3$ and $^3\text{F}_4$ partial waves when $T_{\text{lab}}\geq150$ MeV because of the large contribution from $c_3$, while the contributions of the next-to-leading order TPE are still larger than those of the leading order for the mixing angle $\epsilon_3$ due to the relatively large contribution from $c_4$. This may not be too surprising, because at this energy the momentum transfer $q$ is already about $3.85m_\pi \approx 530$ MeV and therefore may not be regarded as a good low energy scale. \begin{figure}[htbp] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{1F3.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3F2.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3F3.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3F4.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{E3.eps} } \caption{Same as Fig.~\ref{tb:Dwave}, but for the F-wave phase shifts and mixing angle $\epsilon_3$.} \label{tb:Fwave} \end{figure} \subsection{G-wave} The G-wave phase shifts and mixing angle $\epsilon_4$ are depicted in Fig.~\ref{tb:Gwave}. Again, the variation of $\mu$ hardly affects the phase shifts for any of the partial waves, and in all cases the relativistic phase shifts are in better agreement with the data. For the $^1\text{G}_4$, $^3\text{G}_4$, $^3\text{G}_3$ partial waves and mixing angle $\epsilon_4$, the relativistic results are almost identical to the data up to $T_{\text{lab}}=280$ MeV. 
For the $^3\text{G}_5$ partial wave, the relativistic phase shift is also in perfect agreement with the data up to $T_{\text{lab}}=250$ MeV. Although the description of the phase shifts is much improved with the inclusion of the next-to-leading order TPE, the contributions from the next-to-leading order TPE are still relatively large compared with the leading order TPE for the $^1\text{G}_4$, $^3\text{G}_4$, $^3\text{G}_5$ partial waves and mixing angle $\epsilon_4$, for the same reasons as explained in the F-wave case. \begin{figure}[htbp] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{1G4.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3G3.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3G4.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3G5.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{E4.eps} } \caption{Same as Fig.~\ref{tb:Dwave}, but for the G-wave phase shifts and mixing angle $\epsilon_4$.} \label{tb:Gwave} \end{figure} \subsection{H-wave} The H-wave phase shifts and mixing angle $\epsilon_5$ are depicted in Fig.~\ref{tb:Hwave}. For the H-waves, the results are independent of $\mu$, and although the TPE contributions are much smaller, the relativistic corrections still improve the description of the data. For the $^1\text{H}_5$ and $^3\text{H}_5$ partial waves, the relativistic and nonrelativistic phase shifts are almost indistinguishable. For the $^3\text{H}_4$ partial wave, the relativistic results are slightly better. Only for $^3\text{H}_6$ does the contribution of the next-to-leading order TPE seem to be a bit large when $T_{\text{lab}}\geq150$ MeV. 
Moreover, the contributions from the next-to-leading order TPE are still larger than those from the leading order for the $^1\text{H}_5$, $^3\text{H}_5$, $^3\text{H}_6$ partial waves and mixing angle $\epsilon_5$, as explained above. \begin{figure}[htbp] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{1H5.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3H4.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3H5.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3H6.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{E5.eps} } \caption{Same as Fig.~\ref{tb:Dwave}, but for the H-wave phase shifts and mixing angle $\epsilon_5$.} \label{tb:Hwave} \end{figure} \subsection{I-wave} The I-wave phase shifts and mixing angle $\epsilon_6$ are depicted in Fig.~\ref{tb:Iwave}. The results are again independent of $\mu$, and the relativistic phase shifts are nearly identical to the nonrelativistic ones and in perfect agreement with the data for these partial waves, owing to the negligible contribution of the TPE. Notice that for the $^3\text{I}_7$ partial wave, the Nijmegen partial wave phase shifts~\cite{Stoks:1993tb} are larger than those of Ref.~\cite{Arndt:1986jb}. 
As expected, the contributions from the next-to-leading order TPE are larger than those from the leading order for the $^1\text{I}_6$, $^3\text{I}_6$, $^1\text{I}_7$ partial waves and mixing angle $\epsilon_6$. \begin{figure}[htbp] \centering \subfloat{ \includegraphics[width=0.45\textwidth]{1I6.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3I5.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3I6.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{3I7.eps} } \quad \subfloat{ \includegraphics[width=0.45\textwidth]{E6.eps} } \caption{Same as Fig.~\ref{tb:Dwave}, but for I-wave phase shifts and mixing angle $\epsilon_6$.} \label{tb:Iwave} \end{figure} \section{Summary and outlook} Based on the covariant $\pi N$ Lagrangians, we calculated the relativistic TPE $T$-matrix up to $\mathcal{O}(p^3)$. With this $T$-matrix, we further calculated the chiral $NN$ phase shifts with $2\leq L\leq6$ and mixing angles with $2\leq J \leq6$, and then compared our results with those of the nonrelativistic expansion. We found that for all the partial waves the contributions of the relativistic TPE are more moderate than their nonrelativistic counterparts, and therefore the obtained $NN$ phase shifts are in better agreement with the Nijmegen partial wave analysis than the nonrelativistic results~\cite{Kaiser:1997mw}, especially for the F partial waves. Moreover, we showed that the large discrepancies between the nonrelativistic phase shifts and data in the $^3\text{F}_2$ partial wave can be eliminated by including the relativistic corrections. For the $^3\text{F}_4$ partial wave, however, the relativistic corrections are insignificant.
We found that the contributions of the relativistic TPE at the next-to-leading order, similar to their nonrelativistic counterparts, are somewhat large for the $^1 J_J$, $^3 J_J$, $^3 (J-1)_J$ partial waves and mixing angles when $T_{\text{lab}}\geq150$ MeV because of the large contributions from $c_3$ and $c_4$, which indicates that perturbation theory up to $\mathcal{O}(p^3)$ may not work well in this energy region. To summarize, although the relativistic corrections are found to improve the description of data as expected, they are not significant enough to alter the results of Ref.~\cite{Kaiser:1997mw}, at least at a qualitative level, thus supporting all the existing studies using the nonrelativistic two-pion exchange contributions of Ref.~\cite{Kaiser:1997mw} as inputs. On the other hand, given the covariant nature of the two-pion exchanges presented in this work, they can easily be utilized in the recent series of works~\cite{Ren:2016jna,Li:2016mln,Ren:2017yvw,Song:2018qqm,Li:2018tbt,Xiao:2018jot,Wang:2020myr,Bai:2020yml} that need such two-pion exchanges as inputs; their relevance in such settings remains to be explored. \section{Acknowledgements} Yang Xiao thanks Ubirajara L. van Kolck for useful discussions. This work is supported in part by the National Natural Science Foundation of China under Grant Nos.~11735003, 11975041, and 11961141004. Yang Xiao acknowledges the support from China Scholarship Council.
\section{Introduction} As the demand for processing of big data accelerates, new architectures and computing paradigms are receiving heightened attention. Many of the most challenging computational problems fall into the categories of NP-Hard and NP-Complete, for which no exact polynomial-time solution is known. For this reason, these problems would benefit the most from acceleration through novel architectures on FPGAs, ASICs, and new devices \cite{Colwell2013TheLaw, Waldrop2016MoreMoore}. In this context, the Ising Problem has long been known to be in the class of NP-Hard problems. Because of this, a large class of combinatorial optimization problems can be reformulated as Ising problems and solved by finding the ground state of that system \cite{Barahona1982OnModels,Kirkpatrick1983OptimizationAnnealing,Lucas2014IsingProblems}. The Boltzmann Machine \cite{Ackley1985AMachines} was originally introduced as a constraint satisfaction network based on the Ising model, where the weights encode global constraints and stochastic units are used to escape local minima. The original Boltzmann Machine found favor as a method to solve various combinatorial optimization problems \cite{Korst1989CombinatorialMachine}. However, convergence of the sampling and training schemes on the Boltzmann Machine has been slow \cite{Hinton2002TrainingDivergence}. To address this, the Restricted Boltzmann Machine (RBM), a special form of the more generic Boltzmann Machine, provides a scalable hardware architecture by eliminating intra-layer connections, while maintaining the ability to fully approximate a probability distribution over binary variables \cite{Hinton2002TrainingDivergence}. Nonetheless, even for the RBM, the convergence of the Markov Chain Monte Carlo (MCMC) algorithm, which is typically used for sampling, has proved to be challenging \cite{Tieleman2008TrainingGradient, Tieleman2009UsingDivergence}.
On the other hand, the structure of the RBM lends itself to hardware acceleration. Therefore, just as the continued advancement of Moore's Law has enabled a revolution in the deep learning community, it is conceivable that hardware acceleration will make it possible to overcome the sampling problem in the RBM and allow rapid solutions of combinatorial optimization problems. In this work, we present an end-to-end RBM implementation that combines advances in training, model quantization, and an efficient hardware implementation for inference to demonstrate substantial acceleration over standard CPU and GPU implementations. First, we propose a generative model composed of multiple learned modules that is able to solve a larger problem than the individually trained parts. This circumvents the problem of training large models, minimizing training time and enabling a higher degree of model accuracy. The advance in training is accomplished through a novel merging procedure, based on the idea of training two models with overlapping states and then combining them along the intersection states. This is similar to combining circuits in digital logic, with the output of one circuit sharing the same state as the input of the next. We have combined this method of training large models with algorithmic improvements that allow for compressed weights and therefore enable the use of lower-precision representations to develop an efficient FPGA-based accelerator. Altogether, the RBM shows approximately a $10^4$X speedup and a 32X power improvement for a 16 bit integer factorization problem, which is equivalent to searching a $2^{32}$ phase space, in comparison to implementing the same algorithm on a CPU or GPU.
\section*{Algorithm Details} \label{sec:alg} The RBM is a binary stochastic neural network, which can be understood as a Markov random field of Bernoulli random variables arranged in a bipartite graph, with the two layers called the visible and hidden units, graphically demonstrated in Figure \ref{fig:alg} A). We denote by $v$ the visible states, by $h$ the hidden states, and by $E(v, h)$ the energy associated with those states. The probability assigned to a given state is $p(v, h) = \frac{1}{Z} e^{-E(v, h)}$, where $Z = \sum_{v, h} e^{-E(v, h)}$ is the normalizing constant of the distribution. The weight matrix $W$ and biases $a$ and $b$ define the connections between the hidden and visible layers and determine the probability distribution. The bipartite structure means the visible and hidden states can be factored as $p(v | h) = \prod_i p(v_i|h)$ and $p(h | v) = \prod_j p(h_j|v)$ due to the conditional independence of states within the same layer. Sampling on the RBM is performed via Block Gibbs Sampling \cite{Hinton2002TrainingDivergence, Geman1987StochasticImages}, where the units in each layer are sampled in parallel. In Figure \ref{fig:alg} B) we show how the RBM preferentially moves to higher-probability states as it stochastically explores the state space. Each unit has a sigmoidal activation with activation probability $p(v_i = 1 | h) = \sigma(w_i^Th+b_i)$, where $\sigma(x) = (1 + e^{-x})^{-1}$. \subsection{Merging RBMs} \label{sec:merge} The difficulty of training large RBMs means that new innovations are necessary at the training step. In this work, we propose merging smaller, already-trained models to form an initial condition for larger models as a way of improving the training of large RBMs. This methodology is inspired by digital logic, where larger functions can be constructed by combining small functional blocks \cite{Camsari2017StochasticLogic}.
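As a concrete illustration, the block Gibbs sampling sweep described above can be sketched in NumPy; the sizes, variable names, and toy model below are illustrative, not our implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, a, b, rng):
    """One block Gibbs sweep: sample h given v, then v given h.

    W: (n_visible, n_hidden) weights, a: hidden biases, b: visible biases.
    Conditional independence within a layer lets each layer be sampled
    entirely in parallel.
    """
    p_h = sigmoid(v @ W + a)                      # p(h_j = 1 | v) for all j
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)                    # p(v_i = 1 | h) for all i
    v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

# toy chain on a random 6-visible / 4-hidden RBM
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(6, 4))
a, b = np.zeros(4), np.zeros(6)
v = (rng.random(6) < 0.5).astype(float)
for _ in range(100):
    v, h = gibbs_step(v, W, a, b, rng)
```

Each sweep samples all hidden units in parallel given the visible units and then all visible units in parallel given the hidden units, which is exactly the layer-level parallelism the FPGA implementation exploits.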
Notably, all NP-Hard problems can be formulated through the Boolean Satisfiability problem. Therefore, constructing the RBM in the aforementioned way provides a natural approach to solving hard optimization problems. An example of this is shown in Figure \ref{fig:alg} F), where we solve a toy example of a 3SAT Boolean Satisfiability problem. This shows that the method has the representational power to solve a wide variety of NP-Hard problems. Merging is performed by combining RBMs along their common visible neuron connection. We show the mechanics of the merging process in Figure \ref{fig:alg} C) and D), where we combine digital logic gates together in this manner. Merging across the visible neurons like this retains the bipartite, \emph{product of experts} nature of the RBM while giving the distribution expected from combining gates in logic synthesis. Using this as inspiration, we construct adders and multipliers as shown in Figure \ref{fig:alg} E), combining trained n-bit adder and multiplier units into 2n-bit multiplier units. Details of this merging protocol are provided in the Methods section. After smaller models have been trained and merged, we retrain the larger models to fine-tune them. As shown in Methods, Equation \ref{eq:KL}, the merged models are good approximations of the correct distribution we are interested in, and provide good initial conditions for training of the final model. Importantly, merging in this way retains the bi-directional nature of the network. The same RBM can be queried for the output corresponding to a given input (the ``forward'' direction), and for the inputs that produce a given output (the ``reverse'' direction). In Figure \ref{fig:fact} A), B), and C) we see the consequence of this bi-directional nature: the same model can perform multiplication, division, and factorization, as it learns the full joint distribution over variables.
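A minimal sketch of merging two RBMs along shared visible units is given below. The index convention (a map from B's visible indices to A's) and the choice to sum the biases of shared visible units are our own illustrative assumptions, not the exact protocol of the Methods section:

```python
import numpy as np

def merge_rbms(W_A, b_A, a_A, W_B, b_B, a_B, shared):
    """Merge two RBMs along shared visible units (illustrative sketch).

    `shared` maps a visible index of B to the visible index of A that
    represents the same variable. Hidden layers are concatenated, so the
    merged model stays bipartite, and on the shared units the joint
    distribution is the (renormalized) product of the two experts.
    """
    nA, rA = W_A.shape
    nB, rB = W_B.shape
    keep_B = [i for i in range(nB) if i not in shared]   # B's private visibles
    n = nA + len(keep_B)
    W = np.zeros((n, rA + rB))
    b = np.zeros(n)
    W[:nA, :rA] = W_A
    b[:nA] = b_A
    # B's rows: shared rows fold onto A's visibles, private rows are appended.
    new_index = {i: nA + k for k, i in enumerate(keep_B)}
    for i in range(nB):
        row = shared.get(i, new_index.get(i))
        W[row, rA:] = W_B[i]
        b[row] += b_B[i]
    a = np.concatenate([a_A, a_B])
    return W, b, a

# toy merge: B's visible unit 0 is the same variable as A's visible unit 2
W_A, W_B = np.ones((3, 2)), 2 * np.ones((3, 2))
b_A, b_B = np.zeros(3), np.ones(3)
a_A, a_B = np.zeros(2), np.zeros(2)
W, b, a = merge_rbms(W_A, b_A, a_A, W_B, b_B, a_B, shared={0: 2})
```

The block structure of the merged weight matrix mirrors wiring the output of one logic gate to the input of the next.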
In these tasks, the model must produce the exact solution as the mode of the sampled distribution. Performance is reported as $p_{correct}$, the fraction of 300 randomly generated instances for which, after the given number of samples, the mode of the sampled distribution corresponds to the correct answer to the problem of interest. We additionally show in those figures that merging and retraining gives significantly better performance than training alone or merging alone. \section*{FPGA Acceleration} \label{sec:FPGA} The massive parallelism present in the RBM algorithm makes it especially efficient on the FPGA. The RBM algorithm also contains no branches or explicit memory accesses while sampling, removing expensive branch-misprediction and DRAM-fetch cycles. Furthermore, unlike other deep neural network accelerators, this algorithm is not memory-bandwidth limited for any of its operations \cite{Jouppi2017In-DatacenterUnit}, as can be seen from the FPGA utilization table (Table \ref{tab:utilization} in Supplementary Section), further increasing the algorithmic performance on hardware. The bipartite structure means that many neurons can be sampled in parallel on the FPGA, allowing each neuron's activation probability to be computed in parallel. There has been much work on accelerating RBM training through FPGA implementations \cite{Ly2009AMachines, Kim2009AImplementation, Kim2010AMachines}, but by focusing on inference only, we reduce the necessary hardware requirements to the essential components, fully unlocking the inherent parallelism in the network architecture. \subsection{Model Quantization} In addition to taking advantage of the inherent parallelism, to fit larger models on the FPGA we have performed model quantization to lower the precision used during inference.
There has been much work on model quantization for deep neural networks; however, most of it focuses on Convolutional Neural Networks and Multi-Layer Perceptrons \cite{Han2016DeepCoding, Ullrich2019SoftCompression, Chen2015CompressingTrick} and cannot be directly used for RBMs. We developed a quantization scheme explicitly for RBMs, accomplished via two additional training steps. The first adds a constraint on the maximum magnitude of the weights and retrains with this constraint, ensuring that no weight can overflow the target fixed-point representation. As demonstrated in Figure \ref{fig:fact} D), this does not change the overall accuracy of the model, as the retraining causes several weights of smaller magnitude to compensate for a single weight of large magnitude. The second retraining step adds an extra quantization loss term to the training objective (see Eqn. \ref{eq:quant_decay} in Methods). In Figure \ref{fig:fact} E) we show that we can quantize from 32 bit floating point to 6 bit fixed point without a large increase in error. Figure \ref{fig:fact} F) shows the final weight distribution after these two quantization training steps, demonstrating how the retraining preferentially pushes weights towards their post-quantization values. The exact training steps are detailed in the Methods section. In Figure \ref{fig:fact} H) we see how this quantization affects the performance of a factorization task. The modest increase in error is accompanied by massive increases in speed and power efficiency. \subsection{Matrix Multiplication with Mask} With the use of a fixed-point representation of the model parameters and our restriction to binary node values, the matrix multiplication step of the algorithm is greatly simplified.
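Before detailing the hardware datapath, the two quantization retraining steps just described can be sketched as follows; the symmetric fixed-point grid, the $L_1$ penalty form, and the hyperparameter `lam` are illustrative assumptions, not the exact terms of Eqn. \ref{eq:quant_decay}:

```python
import numpy as np

def clip_weights(W, w_max):
    """Step 1: constrain weight magnitude so it fits the fixed-point range."""
    return np.clip(W, -w_max, w_max)

def quantize(W, w_max, bits=6):
    """Round each weight to the nearest representable fixed-point value."""
    levels = 2 ** (bits - 1) - 1          # symmetric signed grid
    return np.round(W / w_max * levels) / levels * w_max

def quantization_penalty_grad(W, w_max, bits=6, lam=0.01):
    """Step 2 (sketch): gradient of an L1 pull toward post-quantization values.

    Added to the usual contrastive-divergence gradient during retraining so
    that weights cluster near their quantized values before deployment.
    """
    return lam * np.sign(W - quantize(W, w_max, bits))
```

After retraining with the penalty, applying `quantize` is nearly lossless because most weights already sit on (or very near) the fixed-point grid.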
Instead of instantiating many binary multiplication circuits, we perform the computation by passing the weight values through a binary mask, i.e.\ a series of 2-to-1 muxes controlled by the node values. Combining this with fixed-point rather than floating-point arithmetic yields a large decrease in area cost and FPGA resource usage. The estimated area cost of a 32 bit floating-point multiplication is 27x that of an 8 bit multiplication, and an 8 bit multiplication is 8x costlier than an 8 bit addition. This shows that the benefits of eliminating the multiplications are very large, allowing more of the calculation to happen on the FPGA in one cycle \cite{Dally2015High-PerformanceLearning}. \subsection{Sigmoid Approximation} Exact calculation of the sigmoidal activation function $f(x) = \frac{1}{1 + e^{-x}}$ is computationally expensive. Direct calculation requires at least three extra hardware operations (exponentiation, addition, and division), all of which incur a large hardware cost in terms of both latency and area. Instead, sigmoid values are precomputed and enumerated in a look-up table (LUT) for use in the FPGA. This implementation allows for fast evaluation of the activation function without expensive hardware resources. After matrix multiplication and bias addition, the computed value is passed through the LUT-based activation function to approximate the sigmoid. \subsection{Pseudo-Random Number Generation} We have found that the quality of the random numbers is not critical to the algorithm working effectively; the only necessary condition is that the random numbers are not correlated between individual neurons. To accomplish this we use a 32 bit Linear Feedback Shift Register (LFSR) pseudo-random number generator, which produces sufficiently uncorrelated pseudo-random numbers.
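Putting these pieces together, a single neuron update (masked weight accumulation, LUT-based sigmoid, LFSR threshold) can be sketched in software as follows; the LUT size, input range, and LFSR tap choice (32, 22, 2, 1, a standard maximal-length polynomial) are illustrative, not our exact hardware parameters:

```python
import numpy as np

# --- masked accumulation: binary states select weight rows, no multipliers ---
def masked_preactivation(h, W, b):
    """Sum the weight rows selected by the binary hidden state (muxes in HW)."""
    return W[h.astype(bool)].sum(axis=0) + b

# --- LUT sigmoid: precomputed activation table, saturating at the edges ---
LUT_BITS, X_MAX = 8, 8.0
_LUT_X = np.linspace(-X_MAX, X_MAX, 2 ** LUT_BITS)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_LUT_X))

def lut_sigmoid(x):
    i = int(np.clip((x + X_MAX) / (2 * X_MAX) * (2 ** LUT_BITS - 1),
                    0, 2 ** LUT_BITS - 1))
    return SIGMOID_LUT[i]

# --- 32 bit Fibonacci LFSR, taps (32, 22, 2, 1): one pseudo-random bit/step ---
def lfsr32(state):
    bit = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
    return ((state << 1) | bit) & 0xFFFFFFFF, bit

def lfsr_uniform(state, nbits=16):
    """Assemble nbits LFSR outputs into a uniform value in [0, 1)."""
    x = 0
    for _ in range(nbits):
        state, bit = lfsr32(state)
        x = (x << 1) | bit
    return state, x / float(1 << nbits)

def neuron_update(h, W, b, j, state):
    """Sample visible unit j: preactivation -> LUT sigmoid -> LFSR threshold."""
    p = lut_sigmoid(masked_preactivation(h, W, b)[j])
    state, u = lfsr_uniform(state)
    return state, 1 if u < p else 0
```

In hardware each of these stages is a small combinational block, and one such pipeline is instantiated per neuron, with independently seeded LFSRs.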
The total cost of these LFSR-based random number generators amounts to just 5\% of the design's flip-flop usage and 2\% of its lookup-table usage, making them relatively cheap in comparison to other, higher-quality random number generators. Each neuron has its own LFSR, seeded with a different value to minimize the possibility of correlation. Longer sampling runs ($>10^8$ samples) would cause the neuron LFSRs to start looping, correlating samples at the start of the run with later samples, but this can be mitigated simply by increasing the length of the LFSR. \section*{Results and Discussion} \subsection{Trained Model Performance} By merging together small RBMs using the principles of logic synthesis, many complex and NP-Hard problems can be solved using the bi-directional nature of RBMs. In Figure \ref{fig:alg} D) we see the mechanics of this procedure, while Figure \ref{fig:alg} E) shows how it applies to integer factorization and Figure \ref{fig:alg} F) shows how it applies to 3SAT and Boolean satisfiability. We note that the logic gates formed in part \ref{fig:alg} C) can be operated in reverse to find solutions to the given 3SAT instance. 3SAT is the canonical example of an NP-Hard problem, which maps directly to all other NP-Hard problems, showing that these networks have the representational power to solve a variety of such problems \cite{Cook1971TheProcedures, Karp1972ReducibilityProblems}. These problems are at the heart of many computationally difficult tasks. We additionally show how this type of training can outperform directly trained models. Figure \ref{fig:fact} A), B) and C) show that merging smaller models and retraining them drastically outperforms directly training the model, with errors 3--10x lower on the problems of interest. In Figure \ref{fig:fact} G) we show how these models can factor a semi-prime number into its two factors with high accuracy.
The factorization of a semi-prime number into its two prime factors is at the heart of the RSA cryptography algorithm and is the basis of most modern encryption systems. To take advantage of the parallel resources of the FPGA and unlock hardware performance, we have shown generalizable methods for quantization that can be applied to all RBM problem instances. This is shown in Figure \ref{fig:fact} D), E) and F), where we show that adding a maximum weight constraint followed by a quantization loss while training leads to better performance once quantized on the FPGA. By showing significant model performance down to 6 bit integer representations, we demonstrate that this approach allows for hardware-efficient model representations for RBMs. \subsection{Scaling} We have demonstrated scaling of the factorization algorithm up to 16 bit numbers. Markov Chain Monte Carlo based sampling methods for optimization problems fall into the class of ``Stochastic Local Search'' and are expected to scale exponentially with problem size \cite{Hoos2005StochasticSearch}. This exponential scaling is shown in Figures \ref{fig:fpga_perf} C) and D). Even so, we see a $10^4$ constant-factor speed increase when the algorithm is implemented on the FPGA, in Figure \ref{fig:fpga_perf} B) for 16 bit factorization and in Figure \ref{fig:fpga_perf} E) across all problem instances. This massive speed increase across the whole spectrum of bit sizes has real-world consequences, as it implies that other algorithms mapped onto this general framework can become very efficient at finding ground-state solutions that would otherwise be difficult to obtain. The FPGA implementation of our sampling algorithm has shown a $10^4$ speed increase compared to a dual-CPU system, and a $10^3$ speed increase compared to a GPU. This comes with a 32x power decrease compared to the CPU and a 6x power decrease compared to the GPU. The CPU baseline is a dual-CPU machine running the highly optimized, industry-standard PyTorch machine learning framework, and the GPU baseline is a hand-optimized GPU algorithm using the CUDA and cuBLAS libraries. We note that the performance improvement of the GPU algorithm over the CPU is minimal due to thread synchronization, limited cache sizes, and the relatively small RBM size presented here \cite{Ly2008NeuralMachines}. Although our implementation takes up much of the resources of the FPGA, there are many areas where our design could be modified to scale for performance. Our goal was not to create the most optimized hardware design, but to demonstrate that parallel hardware running our very hardware-friendly algorithm has the potential for drastic improvement. With focused effort on the improvement of the hardware architecture \cite{Li2015AnStreams,Ly2009AMachinesb, Lo2011BuildingMPI}, the speed and performance improvements are expected to become much larger. Although we demonstrate a sampling speed increase for inference, the same FPGA-accelerated sampling algorithm can also decrease training time. The major bottleneck in the training step is creating a series of uncorrelated samples, which takes a large number of samples for a highly correlated sampler. Using FPGA acceleration of the sampling algorithm could give a lower-variance estimate of the model probabilities in a much faster and more energy-efficient manner than a CPU or GPU.
\subsection{Time Domain Analysis} Markov chain samplers (and specifically the Gibbs sampler we are using) have been proven to converge at a geometric rate in the number of samples. In addition, the total-variation distance between the sampled distribution and the model distribution strictly decreases with each Gibbs sampling step \cite{Bremaud1999GibbsSimulation}. This implies that the quality of our solution should increase as the sampled distribution approaches equilibrium, which is especially useful for optimization problems, where run time can be traded directly for better solutions. The time-domain sampler also shows a large difference between the time taken to model the full distribution (the mixing time) and the time taken to sample the correct factors once (the hitting time) (see Fig. \ref{fig:hit} A and B in comparison to Figure \ref{fig:fpga_perf} C and D). A demonstrative example of this phenomenon is shown directly in Figure \ref{fig:hit} C), with the difference in times being close to 4x. By adding a sample verification step, the time taken to identify correct factors can be drastically decreased. For NP-Hard problems, a solution can be verified in polynomial time even though finding one takes exponential time, a feature also present in this factorization problem. This means the overhead of checking candidate factors is low, and checking can be done on a regular basis to reduce the sampling time significantly. This method of adding heuristics to the sampler applies to many different problem types, although it must be tailored to each. Here we demonstrate both that the sampler approaches the correct distribution, and that effective heuristics using the hitting time are available to reduce the time to solution.
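The verification heuristic amounts to checking each decoded sample against the problem definition and stopping at the first hit, rather than waiting for the sampler to mix. A sketch (the sample stream below is a stand-in for factor pairs decoded from the visible units):

```python
def first_hit(sample_stream, N):
    """Stop at the first sampled (p, q) with p * q == N (the hitting time),
    instead of waiting for the sampled distribution to equilibrate.

    Verification is one multiply per sample, so its overhead is negligible
    relative to the sampling itself.
    """
    for t, (p, q) in enumerate(sample_stream, start=1):
        if p * q == N and p > 1 and q > 1:
            return t, (p, q)
    return None

# toy stream standing in for decoded RBM samples; 143 = 11 * 13
stream = iter([(3, 5), (7, 9), (11, 13), (2, 2)])
hit = first_hit(stream, 143)
```

The return value pairs the hitting time with the verified factors, which is the quantity histogrammed in Fig. \ref{fig:hit} A).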
\section*{Conclusions} \label{sec:conclusion} Our choice of an integer factorization problem stems from two motivations: (i) as we are exploring a new training methodology, the factorization problem gives access to a large amount of training data without concerns about quality, and (ii) integer factorization is at the same time an example of a hard problem. It should be noted, nonetheless, that many combinatorial optimization problems can be broken down into associated sub-problems (e.g., the nearest-neighbor step in the Travelling Salesman Problem, multiplying single digits in a larger multiplication, or evaluating one Boolean logic statement in a Boolean satisfiability problem, as shown in Figure \ref{fig:alg}). Solving these sub-problems greedily can produce non-optimal results, while evaluating all permutations of possible solutions to find the optimal one can be computationally intractable in a large problem space. By combining the sub-problems using the method proposed here, we bypass the problems associated with both approaches. We encode possible solutions as probabilities indicating their local optimality, and combine the sub-problems by merging the visible units of their RBMs. This combination mechanism multiplies the probabilities such that the globally optimal solution is encoded as the mode of the distribution modeled by the larger RBM. In addition, as the size of the phase space is $2^{v}$, where $v$ is the number of visible units, we can encode a large problem space using a minimal number of units and decrease the hardware cost of the approach.
As the Boolean satisfiability problem (shown by the 3SAT example) can be mapped to a variety of NP-Hard problems \cite{Cook1971TheProcedures, Karp1972ReducibilityProblems}, it is expected that variants of this RBM framework can be applied to a large variety of graph-based optimization tasks such as MAX-CUT, the Travelling Salesman Problem, the Quadratic Assignment Problem, and any problem that can be mapped to a series of binary variables. Much work has been done to develop accelerators for the Ising model problem, such as specialized ASICs \cite{Yamaoka2016AAnnealing, Boyd2018SiliconNews, Schneider1993AnalogCircuits}, FPGA designs \cite{Belletti2009Janus:Computing, Ko2019Flexgibbs:Graphs}, memristor-based accelerators \cite{Bojnordi2016MemristiveLearning, Wan2020AModels}, quantum-mechanical accelerators based on quantum adiabatic processes \cite{Dridi2017PrimeGeometry}, optical parametric oscillators \cite{Wang2013CoherentOscillators, McMahon2016AConnections}, magnetic tunnel junctions \cite{Camsari2017StochasticLogic, Camsari2017ImplementingMTJ, Borders2019IntegerJunctions}, and many others. Our work is distinct from these existing results in that we showed a new way of constructing large models from smaller building blocks. We further demonstrated that its implementation on an FPGA, which exploits the intrinsically parallel architecture and sparsity, can lead to orders-of-magnitude improvements in speed and power for a sufficiently large problem ($2^{32}$ phase space). Importantly, these improvements in performance cannot be decoupled from the algorithmic advances that we have proposed. At the same time, our approach could also benefit from the emerging hardware proposed in the aforementioned reports for further improvements in speed and energy. Further scaling can also be accomplished by moving to Deep Boltzmann Machines \cite{Salakhutdinov2009DeepMachines, Salakhutdinov2010EfficientMachines}, which may be easier to train.
While we demonstrated an inference problem, the underlying method is equally applicable to, and expected to accelerate, training problems \cite{Dally2015High-PerformanceLearning, Lo2011BuildingMPI, Savich2011ResourceMNIST}. Substantial acceleration of generative models such as the RBM could lead to unsupervised, life-long learning machines. \clearpage \begin{figure*} \begin{centering} \includegraphics[width=\linewidth]{Figures/Fig_Alg.pdf} \par\end{centering} \caption{\label{fig:alg} \textbf{Demonstration of RBM structure and sampling algorithm} \\ {\bf (A)\/} Structure of the RBM neural network. The Restricted Boltzmann Machine is a binary neural network arranged in a bipartite graph. {\bf (B)\/} The RBM maps out the non-convex state space of a probability distribution. Low-energy states map to high-probability states, which the network identifies through a Markov Chain Monte Carlo (MCMC) algorithm. {\bf (C)\/ } A graphical mapping of RBMs to gate-level digital circuits. The visible nodes correspond to the inputs and outputs of the logic gate, and the hidden nodes are the internal representation of the logic gate. {\bf (D)\/} Graphical demonstration of the merging procedure, showing how two RBMs representing an AND gate and an OR gate can be merged together to form a connection. {\bf (E)\/} We can create arbitrary adders and multipliers by merging smaller units to create the logical equivalent of larger units. The leftmost image shows how we create a $2n$ bit multiplier using $n$ bit multiplications and $n$ bit additions. The color coding shows how the partial products are broken apart amongst the adders and multipliers. To the right of that we show how we perform four $n$-bit-input, $2n$-bit-output multiplications and then accumulate the result. {\bf (F)\/} Using this strategy of merging logical units to solve a simple 3SAT combinatorial optimization problem.
} \end{figure*} \clearpage \begin{figure*} \begin{centering} \includegraphics[width=\linewidth]{Figures/Fig_RBMTrain.pdf} \par\end{centering} \caption{\label{fig:fact} {\bf Performance on 16 Bit Multiplication, Division and Factorization} \\ {\bf (A), (B), (C)\/} Performance on multiplication, division, and factorization for a directly trained 16 bit network (trained 16 bit), two merged 8 bit networks (merged 8 bit), and two merged 8 bit networks with retraining (retrained 8 bit). {\bf (D)\/} The effect of retraining with a maximum weight constraint. We see no performance degradation from retraining the model with this extra constraint. {\bf (E)\/} Retraining the network with an added $L_1$ quantization loss. By retraining for 6 bit quantization, we see a large increase in performance compared to naive quantization. {\bf (F)\/} A histogram of the RBM weights before and after retraining for quantization. The weights are strongly clustered around the 6 bit values. {\bf (G)\/} An example of a factorization distribution after $10^7$ samples, showing factorization of a 16 bit number into its two prime factors. The sampled distribution shows clear peaks at the two correct answers to the factorization problem. {\bf (H)\/} An example probability mass function showing what the sampling procedure returns after running on the FPGA for $10^7$ samples. Although there is more mass on incorrect factors compared to part (G), there are still two clear peaks in the distribution indicating the correct factors. } \end{figure*} \clearpage \begin{figure*} \begin{centering} \includegraphics[width=0.95\linewidth]{Figures/Fig_FPGACompare_GPU.pdf} \par\end{centering} \caption{\label{fig:fpga_perf} \textbf{Performance of the FPGA implementation vs the CPU implementation on factorization} \\ The sampling algorithm scales approximately exponentially with the bit size (and approximately linearly with the phase space).
We see a $10^4$ speed improvement across all model sizes compared to the CPU algorithm and a $10^3$ speed improvement compared to the GPU algorithm. {\bf (A)\/} The sample efficiency of the FPGA implementation is similar to that of the CPU implementation, even after quantizing to 8 bit weights and biases and using the various approximation schemes detailed. {\bf (B)\/} When the time taken to reach a solution is compared for the FPGA vs. the CPU and GPU, the FPGA outperforms both by orders of magnitude. {\bf (C)\/} The scaling of the algorithm when measured at various accuracy levels on the CPU. The RBM for each bit number is run until it hits the given accuracy on a set of random factorization problems. {\bf (D)\/} Scaling of the sampling algorithm when run on the FPGA. The difference in sample number from part (C) is due to approximations necessary to efficiently port the model onto the FPGA. {\bf (E)\/} Time scaling of the factorization problem measured at the 70\% accuracy level. The FPGA performs 4 orders of magnitude faster than the CPU and 3 orders of magnitude faster than the GPU across all bit counts for the outlined sampling algorithm. } \end{figure*} \clearpage \begin{figure*} \begin{centering} \includegraphics[width=0.88\linewidth]{Figures/Fig_Hit.pdf} \par\end{centering} \caption{\label{fig:hit}{\bf Time Domain Analysis of 16 Bit Factorization Algorithm} \\ {\bf (A)\/} Hitting-time histogram for the factorization of a 16 bit number. The experiment is repeated 1000 times, and aggregate results are shown. Most hits happen fairly early, with a long tail to the distribution. In the bottom panel, zooming into the hits that occur in the first 100,000 samples, we see a large number of hits in the first bin. {\bf (B)\/} Histogram showing the cumulative distribution of the hitting time on the 16 bit factorization task.
The smoothness of this distribution allows us to fit an empirical model for the samplers and effectively parallelize by starting many samplers. {\bf (C)\/} A trace plot showing the time domain behavior of the sampler. The sampler stochastically explores the state space, but preferentially remains in the minimal energy state. This means that the first hit of the correct factors occurs much sooner than the minimal energy/maximal probability state is correctly identified. } \end{figure*} \clearpage \newpage \begin{methods} \begin{subsection}{Model Training} In this paper we trained the RBMs by contrastive divergence as described by \cite{Hinton2002TrainingDivergence}. Each of the models in this paper was validated by checking its performance on the problem it was trying to solve (i.e., addition and subtraction for an adder, multiplication and factorization for a multiplier). This method was also used to assess model complexity (i.e., number of hidden units) and evaluate learning parameters (learning rate, batch size, etc.). The final results for RBM sizes and approximate training times are shown in Table \ref{tab:models}. Models were trained using the PyTorch library in Python, one of the standard libraries used in machine learning research and production. Training was conducted on a computer with 2 Intel Xeon E5-2620 processors and 2 Nvidia Titan V GPUs. Each RBM was trained until the test error stopped decreasing and the model had converged to its best solution. The learning rate was varied across models and tuned as a hyperparameter. The training set consisted of all possible multiplication or addition problems, encompassing the full state space. In cases where the state space was too large to train on in full (such as the 16 bit and 32 bit adders), a random sample of the training set was used, and a new random sample was drawn every epoch of training.
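As an illustration of the training procedure, a minimal CD-1 loop for a Bernoulli--Bernoulli RBM might look as follows. This is a NumPy sketch only; the actual models were trained in PyTorch, and the layer sizes, learning rate, and toy data set here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.a = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.a)

    def cd1_step(self, v0):
        # positive phase: hidden activations driven by the data
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one Gibbs step back to the visibles
        pv1 = sigmoid(h0 @ self.W.T + self.b)
        ph1 = self.hidden_probs(pv1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b += self.lr * (v0 - pv1).mean(axis=0)
        self.a += self.lr * (ph0 - ph1).mean(axis=0)

    def free_energy(self, v):
        # lower free energy = higher marginal probability p(v)
        return -v @ self.b - np.logaddexp(0, v @ self.W + self.a).sum()

# toy data set with two modes: all-ones and all-zeros
data = np.array([[1, 1, 1, 1], [0, 0, 0, 0]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=8)
for epoch in range(300):
    rng.shuffle(data)
    rbm.cd1_step(data)
```

After training, the free energy of the training patterns should be lower than that of unseen patterns, which is the validation criterion analogous to checking the model's performance on its arithmetic problem.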
The training time tends to increase with the number of bits in the adder or multiplier due to both the size of the data set (which increases exponentially with the number of bits) and the number of hidden and visible units (which both increase approximately linearly). At the 16 bit adder level, the data set was so large that it could not be used in its entirety for training ($\approx 8$ billion data points) and a randomized sample had to be taken. Since generalization is not perfect, we attribute the decrease in these models' performance to this sampling. For the 32 bit adder this problem was exacerbated, and the 32 bit adder was outperformed by most units even after training for a full week. For multipliers, the 8 bit multiplier has a tractable amount of data, but a good joint density model could not be formed even after a large training time. We believe this is due to an inherent difficulty in the multiplication problem that is not present in the addition problem. The multiplication model requires a higher number of hidden units because the correlation between higher level bits is less distinct in the 8 bit multiplication problem than in the addition problem, so first level correlations (the kind an RBM with one layer of hidden units can find) are more difficult to discover. We believe that using deep Boltzmann machines might help fix the problem of training large multipliers. \end{subsection} \begin{subsection}{Merging RBMs} Given two RBMs we wish to merge along a common connection (see Figure \ref{fig:alg}) with the following parameters: $W_A \in \mathbb{R}^{n\times r}$ and $W_B \in \mathbb{R}^{m\times s}$, visible biases $b_A \in \mathbb{R}^{n}$ and $b_B \in \mathbb{R}^{m}$, and hidden biases $a_A \in \mathbb{R}^{r}$ and $a_B \in \mathbb{R}^{s}$.
The energies and probabilities of these are as follows: \begin{equation} \label{eq:energies} \begin{split} E_A(v, h) &= -v^TW_Ah - a_A^Th - b_A^Tv; \;\; p_A(v, h) = \frac{1}{Z_A} e^{-E_A(v, h)} \\ E_B(v, h) &= -v^TW_Bh - a_B^Th - b_B^Tv; \;\; p_B(v, h) = \frac{1}{Z_B} e^{-E_B(v, h)} \end{split} \end{equation} We can write the weight matrices as a series of row vectors, each corresponding to one visible unit's connections to a set of hidden units. \begin{equation} \label{eq:weights} \begin{split} W_A = \begin{bmatrix} \rule[.5ex]{1em}{0.4pt} w^A_1 \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^A_n \rule[.5ex]{1em}{0.4pt} \end{bmatrix}, \;\; W_B = \begin{bmatrix} \rule[.5ex]{1em}{0.4pt} w^B_1 \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^B_m \rule[.5ex]{1em}{0.4pt} \end{bmatrix} \end{split} \end{equation} With this definition, the merge operation is shown below. If unit $k$ of RBM $A$ is merged with unit $l$ of RBM $B$, the associated weight matrix $W_{A+B} \in \mathbb{R}^{(n+m-1)\times (r+s)}$, visible bias $b_{A+B} \in \mathbb{R}^{n+m - 1}$ and hidden bias $a_{A+B} \in \mathbb{R}^{r+s}$ dictate the probabilities and energies for the merged RBM. Merging $d$ units between the two RBMs corresponds to moving $d$ row vectors from $W_B$ to $W_A$, with the associated decrease in dimensionality of $W_{A+B}$ and $b_{A+B}$ (so that $W_{A+B} \in \mathbb{R}^{(n+m-d)\times (r+s)}$ and $b_{A+B} \in \mathbb{R}^{n+m - d}$).
\begin{equation} \label{eq:mergedweights} W_{A+B} = \left[\begin{array}{@{}c | c@{}} \mbox{\normalfont $W_A$} & \begin{matrix} \mbox{\normalfont 0} \\ \rule[.5ex]{1em}{0.4pt} w^B_l \rule[.5ex]{1em}{0.4pt} \\ \mbox{\normalfont 0} \\ \end{matrix} \\ \hline \bigzero & \mbox{\normalfont $W_{B\setminus l}$} \end{array}\right] = \left[\begin{array}{@{}c | c@{}} \begin{matrix} \rule[.5ex]{1em}{0.4pt} w^A_1 \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^A_k \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^A_n \rule[.5ex]{1em}{0.4pt} \\ \end{matrix} & \begin{matrix} \bigzero \\ \\ \rule[.5ex]{1em}{0.4pt} w^B_l \rule[.5ex]{1em}{0.4pt} \\ \\ \bigzero \\ \end{matrix} \\ \hline \bigzero & \begin{matrix} \rule[.5ex]{1em}{0.4pt} w^B_1 \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^B_{l-1} \rule[.5ex]{1em}{0.4pt} \\ \rule[.5ex]{1em}{0.4pt} w^B_{l+1} \rule[.5ex]{1em}{0.4pt} \\ \vdots \\ \rule[.5ex]{1em}{0.4pt} w^B_m \rule[.5ex]{1em}{0.4pt} \\ \end{matrix} \end{array}\right] \end{equation} \begin{equation} \label{eq:mergedbiases} b_{A+B} = \begin{bmatrix} b^A_1 \\ \vdots \\ b^A_k + b^B_l \\ \vdots \\ b^A_n \\ b^B_1 \\ \vdots \\ b^B_{l-1} \\ b^B_{l+1} \\ \vdots \\ b^B_{m} \\ \end{bmatrix} \; \; a_{A+B} = \begin{bmatrix} a^A_1 \\ \vdots \\ a^A_r \\ a^B_1 \\ \vdots \\ a^B_{s} \\ \end{bmatrix} \end{equation} Below, we show how this relates to the original energies and probabilities. The vectors $v$ and $h$ are the visible and hidden state vectors of the combined RBM, while $v_A$, $v_B$, $h_A$ and $h_B$ are the equivalent state vectors that would be input to the individual RBMs. Using these equations, we see that the combined RBM energy decomposes into the sum of the original RBM energies, and the probability is the product of the original probabilities.
\begin{align} \label{eq:mergedvis} v & = \begin{bmatrix} v_1 \\ \vdots \\ v_{n+m-1}\end{bmatrix} \;\; h = \begin{bmatrix} h_1 \\ \vdots \\ h_{r+s} \end{bmatrix} \;\; \\ v_A & = \begin{bmatrix} v_1 \\ \vdots \\ v_k \\ \vdots \\ v_n\end{bmatrix}, v_B = \begin{bmatrix} v_{n+1} \\ \vdots \\ v_{n+l-1} \\ v_{k} \\ v_{n+l} \\ \vdots \\ v_{n+m - 1}\end{bmatrix}, \\ h_A & = \begin{bmatrix} h_1 \\ \vdots \\ h_r \end{bmatrix} h_B = \begin{bmatrix} h_{r+1} \\ \vdots \\ h_{r+s} \end{bmatrix} \end{align} \begin{equation} \label{eq:mergedEnergy} E_{A+B}(v, h) = E_A(v_A, h_A) + E_B(v_B, h_B), \end{equation} \begin{equation} \label{eq:mergedprobs} p_{A + B}(v, h) = \frac{1}{Z_{A+B}} e^{-E_{A+B}(v, h)} \propto p_A(v_A, h_A)p_B(v_B, h_B) \end{equation} Because the probabilities approximately multiply (Eq. \ref{eq:prob_mult}), if each of the distributions differs from the ``ideal'' distribution (denoted here by $q$), we expect the error (as measured by the KL divergence, Eq. \ref{eq:KL}) to increase approximately linearly with the number of distributions combined. As contrastive divergence learning approximately follows the gradient of the KL divergence, the merged model represents a good initial condition for training of the larger model \cite{Carreira-Perpinan2005OnLearning}. This means that only small corrections in CD training are needed on the merged model to produce a well-trained larger network. Training the merged model is possible because intermediate nodes are represented as extra visible units and can be calculated from the inputs and outputs of the dataset. The data for the merged models can be computed as if we were propagating values through a digital circuit and keeping track of intermediate values, which become the data for the merged model to be trained on.
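The block structure of Eq.~\eqref{eq:mergedweights} and the additivity of the merged energy (Eq.~\eqref{eq:mergedEnergy}) can be checked numerically. The following NumPy sketch uses arbitrary small sizes and 0-based indexing; it merges one visible unit and verifies that the merged energy equals the sum of the original energies:

```python
import numpy as np

def merge_rbms(WA, bA, aA, WB, bB, aB, k, l):
    """Merge visible unit k of RBM A with visible unit l of RBM B,
    following Eqs. (mergedweights)/(mergedbiases); indices are 0-based."""
    n, r = WA.shape
    m, s = WB.shape
    W = np.zeros((n + m - 1, r + s))
    W[:n, :r] = WA                        # A's block
    W[k, r:] = WB[l]                      # shared unit keeps B's row
    W[n:, r:] = np.delete(WB, l, axis=0)  # B's block without row l
    b = np.concatenate([bA, np.delete(bB, l)])
    b[k] += bB[l]                         # biases of the merged unit add
    a = np.concatenate([aA, aB])
    return W, b, a

def energy(W, b, a, v, h):
    return -v @ W @ h - a @ h - b @ v

rng = np.random.default_rng(1)
n, r, m, s, k, l = 3, 2, 4, 5, 1, 2
WA, WB = rng.standard_normal((n, r)), rng.standard_normal((m, s))
bA, bB = rng.standard_normal(n), rng.standard_normal(m)
aA, aB = rng.standard_normal(r), rng.standard_normal(s)
W, b, a = merge_rbms(WA, bA, aA, WB, bB, aB, k, l)

# a random joint state of the merged RBM and the equivalent single-RBM states
v = rng.integers(0, 2, size=n + m - 1).astype(float)
h = rng.integers(0, 2, size=r + s).astype(float)
vA, hA, hB = v[:n], h[:r], h[r:]
vB = np.insert(v[n:], l, v[k])  # the shared value re-enters B at slot l
```

The energy check holds exactly, not just approximately, because the merged weight matrix has no cross-connections between A's hidden units and B's visible units (or vice versa).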
\begin{align} \label{eq:prob_mult} \quad p = p_A(v_A, h_A)p_B(v_B, h_B), \;\; q & = q_A(v_A, h_A)q_B(v_B, h_B) \\ \label{eq:KL} D_{\mathrm{KL}}(p \| q) \approx D_{\mathrm{KL}}(p_A \| q_A) & + D_{\mathrm{KL}}(p_B \| q_B); \end{align} \end{subsection} \begin{subsection}{Retraining for Quantization} To train for quantization, the optimized loss function $L(W, d)$ is modified so that in addition to the regular contrastive divergence loss between the weights $W$ and the data $d$, denoted by $CD(W, d)$, we have a loss term that pushes weights closer to their quantized values. The hyperparameter $\lambda$ is slowly increased during training to force the weights progressively closer to their quantized values. This method allows the contrastive divergence term to gradually fix the errors created by quantizing the weights during training. Although taking the exact gradient of this loss term is not possible (as $Q(W)$ is not a smooth function of $W$, see Eqn. \ref{eq:quant_decay2}), we find that by assuming the quantization gradient $\frac{\partial Q(W)}{\partial W} \approx 0$, we obtain sufficiently good performance. \begin{align} \label{eq:quant_decay} L(W, d) & = \epsilon CD(W, d) - \lambda ||W - Q(W)||_1 \\ \label{eq:quant_decay2} \frac{\partial L(W, d)}{\partial W} & = \epsilon \frac{\partial CD(W, d)}{\partial W} - \lambda\, \mathrm{sign}(W - Q(W)) \left(1 - \frac{\partial Q(W)}{\partial W}\right) \end{align} \end{subsection} \begin{subsection}{FPGA Programming} At the heart of the FPGA is the RBM computing core, which performs the Gibbs sampling algorithm. All programming was done using the Xilinx Vivado suite on the Xilinx Virtex UltraScale+ XCVU9P-L2FLGA2104. The core was designed to maximize sample throughput for our RBM sizes. The weights and biases are stored in on-chip SRAM to decrease access time.
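A minimal sketch of the quantization term $Q(W)$ and of the gradient assumption $\partial Q(W)/\partial W \approx 0$ from Eqs.~\eqref{eq:quant_decay}--\eqref{eq:quant_decay2}. The uniform 6 bit grid and the weight range are illustrative assumptions, not the trained model's actual values:

```python
import numpy as np

def quantize(W, bits=6, w_max=1.0):
    """Q(W): round each weight to the nearest of 2**bits uniformly
    spaced levels in [-w_max, w_max] (the grid is an assumed choice)."""
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.clip(np.round(W / step) * step, -w_max, w_max)

def quant_penalty_grad(W, lam, bits=6, w_max=1.0):
    """Gradient of the -lambda * ||W - Q(W)||_1 term under the
    assumption dQ/dW ~ 0; following this gradient pulls each weight
    toward its nearest quantization level."""
    return -lam * np.sign(W - quantize(W, bits, w_max))
```

In training, this penalty gradient would be added to the CD gradient, with $\lambda$ ramped up over epochs so the weights settle onto the 6 bit grid without losing the learned distribution.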
The values are broadcast to the node update modules each cycle; these modules perform the necessary sampling operations and take up the bulk of the computation resources. There are no pipeline or data hazards, removing the need for any complex timing schemes. Thus, if we instantiate a node update module for every node register, a new sample is taken from the visible node registers every clock cycle, taking full advantage of the RBM's parallelism. Each node update module contains the logic to perform a matrix-row multiplication, a sigmoid function, and a comparison with a random number. The matrix-row multiplication is performed via a binary mask and an adder module that accumulates the surviving weight values. Once each weight value is masked appropriately, the values are passed into an adder tree which accumulates the results of each multiplication. The accumulators take the majority of LUT resources on the FPGA and represent the bottleneck in the computation. In our implementation we use single cycle accumulation, but multi-cycle accumulation is possible to save on hardware resources while scaling up to larger RBM sizes. A fixed point sigmoid function is implemented as a LUT, which allows for speed without expensive hardware operations. A Python script generates the LUT Verilog code in order to test different bit lengths and fixed point locations. Finally, a linear feedback shift register (LFSR) generates a pseudo-random number. The number is compared to the output of the sigmoid LUT and the relevant node is updated with the boolean result. Results from the bank of visible node registers are buffered to an IO controller with a FIFO. The IO controller also uses a memory-mapped interface that can program the weights, clamps, and biases. The controller communicates with our desktop via PCIe. A simple PCIe link is provided through a Xillybus IP Core \cite{Preuer2014ReadyFPGAs}, which provides up to an 800 MB/s data transfer rate.
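The LUT generation step can be sketched as follows. The address width, fixed-point format, and input range here are illustrative assumptions; the actual script emits Verilog case statements rather than a Python list:

```python
import math

def sigmoid_lut(addr_bits=8, frac_bits=6, x_range=8.0):
    """Tabulate a fixed-point sigmoid: addresses map uniformly onto
    [-x_range, x_range) and outputs are unsigned integers with
    frac_bits fractional bits, ready to emit as Verilog case entries."""
    n = 2 ** addr_bits
    table = []
    for i in range(n):
        x = -x_range + 2.0 * x_range * i / n
        y = 1.0 / (1.0 + math.exp(-x))
        table.append(round(y * (2 ** frac_bits)))
    return table

lut = sigmoid_lut()
# each entry could then be printed as a case line, e.g.
# f"8'd{i}: out <= 7'd{lut[i]};"
```

Because the table is parameterized by bit widths, the same generator can sweep different fixed-point formats to trade LUT size against sampling accuracy.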
This speed is sufficient to stream all of the sample data off the FPGA, and faster speeds are possible in future implementations. To handle the large data stream from the bus, a C backend was created to serve data to existing Python code for RBM analysis. This backend is used to analyze the data and to judge the quality of the solution on the FPGA. This FPGA pipeline provides an efficient method for solving the problems of interest, where the limiting factor in computation speed can become the FPGA sampling speed. \end{subsection} \end{methods} \newpage \bibliographystyle{naturemag}
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width=0.98\linewidth]{figs/sec_1.pdf} \setlength{\abovecaptionskip}{3pt} \caption{We propose a debiasing method that uses adversarial attacks to balance the data distribution.} \label{fig1} \end{figure} The last few years have witnessed human-level AI accuracy for tasks like image recognition, speech processing and reading comprehension~\cite{he2016deep,rajpurkar2016squad}. The increasing deployment of AI systems drives researchers to pay attention to criteria beyond accuracy, such as security~\cite{zhang2018emotion}, privacy~\cite{xu2018trust,cciftcci2017reliable} and fairness, which are critical for large-scale robust real-world applications~\cite{escalante2018explainable}. Among these criteria, fairness concerns the potential discrimination of models towards protected or sensitive groups (e.g., gender, skin color). A biased model can lead to unintended social consequences in domains such as online advertisement, banking and criminal justice~\cite{sweeney2013discrimination,hardt2016equality,angwin2016machine}; e.g., the popular COMPAS algorithm for recidivism prediction was found biased against black inmates and prone to make unfair sentencing decisions~\cite{angwin2016machine}. Machine learning fairness studies involve two types of variables: target and bias variables. In the COMPAS example, the target variable is the recidivism label and the bias variable is skin color. Existing model debiasing attempts, which aim to make the prediction of the target variables independent of the bias variables, can be roughly categorized by their intervention period: (1) Pre-processing debiasing modifies the training dataset before learning the model. Typical pre-processing solutions resort to sampling or reweighing the training samples~\cite{drummond2003c4,zhou2005training,kamiran2012data} so that the target task model does not learn the correlation between target and bias variables.
(2) In-processing debiasing modifies the standard model learning during the training process~\cite{d2017conscientious,bellamy2018ai}. Adversarial debiasing is the most popular in-processing method; it adversarially optimizes between the target and bias tasks with the goal of extracting fair representations that contribute only to the target task~\cite{wadsworth2018achieving,beutel2017data,ganin2014unsupervised}. (3) Post-processing debiasing changes output labels to force fairness after model inference, enhancing outcomes for unprivileged groups and weakening outcomes for privileged groups. Post-processing solutions are usually employed only when it is difficult to modify the training data or the training process~\cite{bellamy2018ai, berk2017convex, mehrabi2019survey}. Among the three model debiasing categories, in-processing and post-processing both compromise between fairness and accuracy by either imposing additional constraints or explicitly altering model outputs. Pre-processing has the advantage of addressing the intrinsic cause of model bias, the imbalanced training data distribution, which is widely recognized and further validated in our data analysis. However, conventional pre-processing debiasing solutions employing sampling and reweighing strategies either fail to make full use of the training data or only guarantee superficial data balance. This explains the decreased accuracy reported in previous debiasing studies. The problem thus becomes how to supplement and balance the data distribution without damaging the original target task learning. Adversarial examples have recently drawn massive attention regarding model robustness and AI system security~\cite{szegedy2013intriguing}. Beyond the endless game between adversarial attack and defense, an interesting line of related studies observed that models trained on adversarial examples discover useful features unrecognizable by humans~\cite{ilyas2019adversarial}.
This inspires us to employ adversarial examples to supplement and balance the data distribution for model debiasing. Specifically, given a target task (e.g., predicting the facial attribute of ``arched eyebrow'' from images) with a training set imbalanced over bias variables (e.g., many more female than male samples with the ``arched eyebrow'' annotation), an adversarial attack is conducted to alter the bias variable and construct a balanced training set (illustrated in Figure~\ref{fig1}). To guarantee adversarial generalization and cross-task transferability, we propose an online coupled adversarial attack mechanism that iteratively optimizes between target task training, bias task training and adversarial example generation. This can be regarded as a debiasing attempt between pre-processing and in-processing that favorably combines the advantages of both. We summarize our main contributions as follows: \begin{itemize} \item We propose to employ adversarial examples to balance the training data distribution in the manner of data augmentation. Simultaneously improved accuracy and fairness are validated in simulated and real-world debiasing evaluations. \item We provide an online coupled adversarial example generation mechanism, which ensures both adversarial generalization and cross-task transferability. \item We explore the potential of adversarial examples as supplementary samples, which provides an alternative perspective on employing adversarial attacks and opens up the possibility of addressing data scarcity issues in new ways. \end{itemize} \section{Data Analysis and Motivation Justification} In this section, we first examine the attribution of model bias in image classification tasks, and then analyze the potential and challenges of using adversarial examples to address the model bias problem.
\subsection{Bias Attribution Analysis}\label{sec2.1} \noindent\textbf{Classification bias definition.}\hspace{2mm} People can be divided into different groups based on social attributes such as \emph{gender} and \emph{skin color}. Model bias refers to the unfair and unequal treatment of individuals with certain social attributes, e.g., correlating \emph{arched eyebrow} more with \emph{female} than with \emph{male} in the task of facial attribute prediction. We use \emph{equality of opportunity} in this work to evaluate model bias, where the ideal fairness outcome requires that each group of people has an equal opportunity of correct classification. For classification tasks, model bias is formally defined as follows: \textsc{Definition 1} (\textsc{Classification Bias}). \emph{We define the bias of a classifier for target task class $t \in \mathbb{T}$ in terms of group difference as: } \begin{equation} \label{eqn1} \begin{aligned} &bias(\theta,t) \\=&{|P(\hat{t} = t|b = 0,t^* = t) - P(\hat{t} = t|b = 1,t^* = t)|} \end{aligned} \end{equation} where $\theta$ indicates the parameters of the examined classifier, $\hat{t} \in \mathbb{T}$ denotes the predicted target variable of the classifier, $t^* \in \mathbb{T}$ represents the ground-truth target variable like \emph{arched eyebrow}, and $b$ represents the bias variable such as \emph{gender} or \emph{skin color}. A smaller $bias(\theta,t)$ means that the classifier tends not to be affected by the bias variables when making predictions. The biases of all target variables are summed to measure the overall model bias: $bias(\theta) = \sum_t bias(\theta,t)$. \noindent\textbf{Attribution in imbalanced data distribution.}\hspace{2mm} With the definition of model bias, we then use the CelebA dataset~\cite{liu2018large} to examine its attribution in the data distribution.
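The metric of Definition~1 can be computed directly from model predictions, as in the following sketch (the array names and toy values are illustrative):

```python
import numpy as np

def classification_bias(y_pred, y_true, b, t):
    """Eq. (1): absolute gap in correct-classification rate for class t
    between the two bias groups (equality of opportunity)."""
    g0 = (y_true == t) & (b == 0)
    g1 = (y_true == t) & (b == 1)
    return abs((y_pred[g0] == t).mean() - (y_pred[g1] == t).mean())

def overall_bias(y_pred, y_true, b, classes):
    """Sum of per-class biases: bias(theta) = sum_t bias(theta, t)."""
    return sum(classification_bias(y_pred, y_true, b, t) for t in classes)

# toy check: perfect on group b=0, half correct on group b=1 -> bias 0.5
y_true = np.array([1, 1, 1, 1])
b      = np.array([0, 0, 1, 1])
y_pred = np.array([1, 1, 1, 0])
```

Note that only samples whose ground truth is $t$ enter the computation, so the metric is a true-positive-rate gap rather than an overall accuracy gap.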
Specifically, using the 34 facial attributes~\footnote{~\small{Within the 40 annotated attributes of CelebA, we excluded the ones not related to the face (e.g., wearing hat) or essentially belonging to a certain gender (e.g., mustache). This leaves 18 attributes, among which the gender attribute is selected as the bias variable. Regarding each of the remaining 17 facial attributes, the bias data distribution is very different for sample sets w/ and w/o this attribute. To facilitate data analysis and later experimental evaluation, we consider each facial attribute in two target tasks (e.g., the attribute of arched eyebrow involves two target variables: arched eyebrow and non-arched eyebrow), which leads to finally $17*2=34$ target tasks.}} as target variables to predict and gender as the bias variable, we trained facial attribute classifiers and calculated their corresponding gender bias according to Eqn.~\eqref{eqn1}. Figure~\ref{fig2} shows the calculated model bias ($y$-axis) for different facial attributes and their corresponding female training image ratio ($x$-axis). It is easy to find the strong correlation between model bias and imbalanced data distribution: for facial attributes with a larger ratio of \emph{female} in the training set ($>0.5$ on the $x$-axis), \emph{female} images are more easily correctly classified than \emph{male} images, and vice versa for \emph{male} ($<0.5$ on the $x$-axis). For example, there are more female training images for the facial attribute of ``arched eyebrows'', and the corresponding classifier is observed to derive more correct predictions for female images, while male images with \emph{arched eyebrows} are likely to be incorrectly predicted. The observation suggests that the classifier learns the correlation between facial attribute and gender from the imbalanced data, and thus utilizes the gender bias variable for target variable prediction.
This observation validates the motivation of previous pre-processing debiasing attempts to balance the training data distribution, so that the learned model will not utilize the bias variables for target task prediction. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/sec_2_1.pdf} \setlength{\belowcaptionskip}{-2mm} \caption{Model bias vs. imbalanced data distribution. The $x$-axis denotes the female ratio of all people with a certain facial attribute in the training set, and the $y$-axis denotes the model bias over gender in predictions on the testing set.} \label{fig2} \end{figure} \subsection{The Potential of Adversarial Example in Balancing Data Distribution}\label{sec2.2} \noindent\textbf{Feasibility of adversarial example for attack class training.}\hspace{2mm} The above analysis attributes model bias to the imbalanced data distribution over bias variables. However, with the training data reflecting the real-world distribution, it is difficult to explicitly collect more samples with minority-bias variables~\footnote{~\small{We use ``minority-bias variable'' to denote the value of the bias variable with fewer samples, e.g., male in gender bias.}} (e.g., it is indeed rare to see many males with ``arched eyebrows''). Conventional pre-processing debiasing solutions resort to down-sampling and up-sampling~\cite{drummond2003c4,zhou2005training,kamiran2012data}, which either fail to make full use of the training data or only guarantee superficial data balance. It has been observed in recent studies that adversarial examples contain generalized features of the attack class~\cite{ilyas2019adversarial}, i.e., a model trained solely on adversarial examples with attack labels performs well on unmodified real testing data.
Inspired by this, we are interested in employing adversarial attacks to generate pseudo samples for the minority-bias variables, e.g., with the target task of predicting ``arched eyebrows'', adversarial perturbation is added to attack \emph{female} images into \emph{male} ones. In this way, the generated pseudo samples can be seen as augmented data for data distribution balancing and thus contribute to model debiasing. We conducted a preliminary experiment to justify the feasibility of adversarial examples in switching gender labels and generalizing to original real samples. Specifically, we first trained a binary gender classifier $g_{ori}$ with original face images from the CelebA dataset, and then employed I-FGSM~\cite{kurakin2016adversarial} to attack each original image into an adversarial image with the opposite gender label. Denoting the original image set as $\mathcal{X}_{ori}$ and the attacked adversarial image set as $\mathcal{X}_{adv}$, we constructed the following two training datasets: (1) \emph{Hard switch}: original image set $\mathcal{X}_{ori}$ with manually switched gender labels~\footnote{~\small{E.g., the training label is set as \emph{male} if the ground-truth label is \emph{female}.}}; (2) \emph{ADV switch}: adversarial image set $\mathcal{X}_{adv}$ with attacked gender labels~\footnote{~\small{E.g., if a \emph{male} image is attacked as \emph{female}, its training label is set as \emph{female}.}}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figs/sec_2_2.pdf} \setlength{\abovecaptionskip}{3pt} \setlength{\belowcaptionskip}{-1mm} \caption{Gender classification accuracy for different training settings with switched labels.} \label{fig3} \end{figure} We utilized the above two datasets to train gender classifiers $g_{switch}^{hard}$ and $g_{switch}^{ADV}$ respectively. Figure~\ref{fig3} (top 2 rows) shows their classification accuracy on the original image testing set.
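The targeted attack can be sketched for a linear softmax model standing in for the gender classifier. This is a NumPy illustration only: I-FGSM in the experiments attacks a deep network, and the linear model, step sizes, and perturbation budget here are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_ifgsm(W, x, target, eps=0.1, alpha=0.02, steps=10):
    """Iterative FGSM toward a chosen (attack) label: repeatedly step
    against the gradient of the cross-entropy w.r.t. the input,
    clipping to an eps-ball and to the valid pixel range."""
    x0, n_classes = x.copy(), W.shape[0]
    for _ in range(steps):
        p = softmax(W @ x)
        grad = W.T @ (p - np.eye(n_classes)[target])  # d CE(target) / d x
        x = np.clip(x - alpha * np.sign(grad), x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)
    return x

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 6))
x = rng.uniform(0.2, 0.8, size=6)   # keep pixels away from the bounds
x_adv = targeted_ifgsm(W, x, target=1)
```

The attacked sample stays within an $\epsilon$-ball of the original image while the model's confidence in the attack label rises, which is the property that lets the perturbed images carry the switched gender label.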
It is easy to understand the extremely poor performance of $g_{switch}^{hard}$, as the manually switched labels make the image-label correlation exactly the opposite between the training and testing sets. By replacing the original images with adversarially attacked images, gender classification accuracy increases from $5.7\%$ to $55.6\%$, verifying that adversarial examples contain useful information about the attack class and have the potential to generalize to original real data. \noindent\textbf{Stronger adversarial examples contributing to improved attack class training.}\hspace{2mm} However, the accuracy of $55.6\%$ indicates that the adversarial examples are still far from adequate to directly replace real attack-class samples. Studies have found that the generalization of an adversarial example to the attack class largely depends on its attack strength~\cite{nakkiran2019discussion}. That is, adversarial examples that fool more classifiers are likely to contain more generalized features of the attack class. Following this spirit, we expect that a more robust bias classifier can generate stronger adversarial examples that generalize well to the attack class. Therefore, we first conducted adversarial training on $g_{ori}$ to improve its robustness and acquire the robust classifier $g_{robust}$, and then employed I-FGSM to attack this robust classifier $g_{robust}$ to derive a new training set \emph{ADV switch (robust)} with attacked gender labels. The gender classifier learned from this new training set is denoted as $g_{switch}^{ADV (robust)}$, whose classification accuracy is shown at the bottom of Figure~\ref{fig3}. The significant increase from $55.6\%$ to $89.1\%$ demonstrates the superior generalization potential of adversarial examples from robust models, which motivates us to design more robust bias classifiers when generating adversarial examples for data augmentation.
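The robustification step can be sketched in the same linear-model setting. This is an illustrative stand-in for adversarially training the deep bias classifier into $g_{robust}$; the FGSM budget, learning rate, and single-sample loop are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adv_train_step(W, x, y, eps=0.1, lr=0.5):
    """One adversarial-training step for a linear softmax classifier:
    perturb x by FGSM *against* the true label, then take a gradient
    step on the perturbed example."""
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (softmax(W @ x) - onehot)           # ascent direction
    x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
    grad_W = np.outer(softmax(W @ x_adv) - onehot, x_adv)
    return W - lr * grad_W

rng = np.random.default_rng(1)
W0 = rng.standard_normal((2, 6))
x = rng.uniform(0.2, 0.8, size=6)
W = W0.copy()
for _ in range(100):
    W = adv_train_step(W, x, 0)   # fit the worst-case neighbor of x
```

Training on the worst-case perturbation of each sample forces the decision boundary away from the data, which is what makes the attacks on $g_{robust}$ carry stronger, more transferable attack-class features.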
\subsection{Cross-task Transferability}\label{sec2.3} The above analysis verifies that adversarial examples hold some generalization to the attack class in the bias classification task. For model debiasing, two tasks are involved, i.e., the bias task like gender classification and the target task like ``arched eyebrows'' prediction. Therefore, in addition to adversarial generalization within the bias task, it is desirable that the adversarial examples maintain their generalization ability to original real data during training of the target task. We refer to the adversarial examples' capability of maintaining attack-class information from the bias task to the target task as \emph{cross-task transferability}. This subsection examines cross-task transferability by utilizing adversarial examples for data augmentation-based visual debiasing. We first introduce one straightforward way: the adversarial examples $\mathcal{X}_{adv}$ derived in the previous subsection are added into the original training image set to train the target facial attribute classifiers. Specifically, the resultant training set $\mathcal{X}_{augment}$ consists of $\mathcal{X}_{adv}$ and the original images, and the target classifier is realized with VGG-16, containing a feature extractor $f(\cdot)$ and a classification module $h_t(\cdot)$. To examine cross-task transferability, using the target classifier's feature extractor $f(\cdot)$, we trained an additional bias classification module $h_b(\cdot)_{generalize}$ based on $\mathcal{X}_{adv}$ with attacked gender labels, and calculated the gender classification accuracy of $\mathcal{X}_{ori}$ on the bias classifier $\{f(\cdot);h_b(\cdot)_{generalize}\}$. Since $\{f(\cdot);h_b(\cdot)_{generalize}\}$ shares the feature extractor $f(\cdot)$ with the target task prediction, if the original images can be correctly classified by this bias classifier, we consider that $\mathcal{X}_{adv}$ contains information about the attack class and possesses cross-task transferability to some extent.
\begin{figure}[t] \centering \includegraphics[width=0.93\linewidth]{figs/sec_2_3.pdf} \caption{Cross-task transferability of adversarial examples during the training process of the target task classifier.} \label{fig4} \end{figure} To further track whether cross-task transferability is maintained during the training process, for every training epoch $m$, we repeated the following three operations: (1) using $\mathcal{X}_{augment}$ to update $f(\cdot)^{(m)}$ and $h_t(\cdot)^{(m)}$ for the target task; (2) fixing $f(\cdot)^{(m)}$ and using $\mathcal{X}_{adv}$ with attacked gender labels to update $h_b(\cdot)_{generalize}^{(m)}$ for the bias task; (3) using $\{f(\cdot)^{(m)};h_b(\cdot)_{generalize}^{(m)}\}$ to test $\mathcal{X}_{ori}$ and calculating the generalization accuracy $r^{(m)}$. Figure~\ref{fig4} illustrates the generalization accuracy (left $y$-axis) for every training epoch, where \emph{ADV switch} and \emph{ADV switch (robust)} correspond to the results of using $g_{ori}$ and $g_{robust}$ to generate adversarial examples respectively. It is shown that the adversarial examples generated from $g_{ori}$ and $g_{robust}$ both gradually lose cross-task transferability as training proceeds. We explain this result as follows: under the optimization goal of minimizing the target task loss, the feature extractor $f(\cdot)$ tends to ignore the adversarial information in the adversarial examples, and the $\{f(\cdot)^{(m)};h_b(\cdot)_{generalize}^{(m)}\}$ trained on $\mathcal{X}_{adv}$ largely fails to classify original samples. To demonstrate the influence of cross-task transferability on model bias, we also calculated the gender bias of the target task model during training, which is illustrated in Figure~\ref{fig4} with the right $y$-axis. It is shown that model bias generally increases as adversarial examples lose their cross-task transferability.
Combining this with the above explanation for the decreased generalization accuracy, we interpret the correlation between model bias and cross-task transferability as follows: when $f(\cdot)$ tends to ignore the useful adversarial information in the adversarial examples, augmenting with adversarial examples reduces to replicating the original samples and has a trivial effect in balancing the data distribution. Therefore, adjusting the generated adversarial examples to fit the ever-updating feature extractor, and thus maintaining cross-task transferability, is critical for adversarial example-based data augmentation for visual debiasing. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figs/sec_3_1.pdf} \caption{Illustration of adversarial perturbation in feature space: (a) only considering the bias task (e.g., female/male); (b) jointly considering the target task (e.g., ``arched eyebrow'' prediction).} \label{fig5} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.92\linewidth]{figs/sec_3_3.pdf} \caption{Framework of \emph{AEDA$\_$online} and \emph{AEDA$\_$robust}. \emph{AEDA$\_$robust} adds an adversarial training module (highlighted with a red dashed line) to \emph{AEDA$\_$online}. } \label{fig-framework} \end{figure*} \section{Methodology} \subsection{Overview} We formally define the visual debiasing problem as follows: \textsc{Definition 2} (\textsc{Visual Debiasing}). \emph{Given an image data set $\mathcal{D}_{ori}=\{\mathbf{x}_i,t_i,b_i\}_{i=1:N}$, where $\mathbf{x}_i\in \mathcal{X}_{ori}$ denotes the $i^{th}$ image's original feature, $t_i\in \mathbb{T}$ denotes its target task label, and $b_i\in \mathbb{B}$ denotes its bias task label, visual debiasing aims to learn an unbiased target task classifier with parameters $\theta$ satisfying that the prediction of the target task label $\hat{t}$ is independent of the bias labels: $P(\hat{t} = t_i|b,t^*;\theta) = P(\hat{t} = t_i|t^*;\theta)$.
} The data analysis above justifies attributing model bias to the imbalanced data distribution, as well as the potential of adversarial examples in balancing the data distribution. Following these observations, in this section we propose a visual debiasing solution via Adversarial Example-based Data Augmentation (AEDA). The direct way to realize \emph{AEDA} is to separately generate adversarial examples to balance the data distribution as pre-processing, and then use the augmented dataset to train the target task classifier. This leads to the basic version of our solution, which we call \emph{AEDA$\_$pre} and introduce in Section~\ref{sec3.2}. To address the cross-task transferability issue, we propose to couple target task classifier training and adversarial example generation, which we call \emph{AEDA$\_$online} and introduce in Section~\ref{sec3.3}. A complete version of our solution, called \emph{AEDA$\_$robust}, is elaborated in Section~\ref{sec3.4} to further address the adversarial generalization issue: there, we conduct an additional adversarial training operation when updating the bias classifier to improve its robustness. \subsection{AEDA$\_$pre}\label{sec3.2} In typical visual debiasing problems, among the training samples with a specific target task label $t\in \mathbb{T}$, the ratio of samples with different bias labels $b\in \mathbb{B}$ is usually heavily imbalanced. For example, regarding the classification of ``arched eyebrows'' in CelebA, the ratio of female to male samples is above $15:1$. To balance the data distribution for visual debiasing, the direct way to realize \emph{AEDA} consists of two steps: (1) Bias variable adversarial attack: adding perturbation to the original sample $\mathbf{x}_i \in \mathcal{X}_{ori}$ to generate an adversarial example $\mathbf{x}_i^{adv}$ with the altered bias label $b_i^{'}$.
In this way, the adversarial attack supplements the shortage of minority-bias samples (e.g., male ``arched eyebrows'' images) to construct a balanced dataset ${\mathcal{D}_{augment}=\{\mathcal{D}_{ori};\{\mathbf{x}_i^{adv}, t_i, b_i^{'}\}\}}$. (2) Target task classifier training: using the balanced dataset $\mathcal{D}_{augment}$ to train an unbiased target task classifier. Regarding the first step, using I-FGSM as the adversarial attack method, we alter the bias label of the $i^{th}$ image as follows: \begin{equation}\label {eqn2} \mathbf{x}_i^{adv}=\underset{\mathbf{x}_i}{\arg\min } \; L_{bias}\left(\mathbf{x}_i, b_{attack}\right) \end{equation} where $L_{bias}$ is the loss function of the bias classifier and $b_{attack}$ is the bias attack class. While a standard adversarial attack changes the bias label of the original sample, the added adversarial perturbation risks also affecting the feature representation used for target task prediction. As shown in Figure~\ref{fig5}(a), with the perturbation direction $\mathbf{n}_1$ identified by considering only the bias classifier gradients, the adversarial perturbation may also move the sample across the target task classification boundary. This tends to deteriorate the training of the target task classifier. Therefore, we revise the adversarial attack step and require the generated adversarial examples to also possess the following property: the added adversarial perturbation should not affect the target task variable of the original sample.
To implement this, we modify Eqn.~\eqref{eqn2} and generate adversarial examples as follows: \begin{equation}\label {eqn3} \mathbf{x}_i^{adv}=\underset{\mathbf{x}_i}{\arg\min } \Big( \lambda L_{bias}(\mathbf{x}_i, b_{attack})+(1-\lambda)L_{target}(\mathbf{x}_i, t)\Big) \end{equation} where $L_{target}$ is the loss function of the target task classifier~\footnote{~\small{A preliminary target task classifier needs to be learned first from the original training set before adversarial data augmentation.}}, $t$ is the target task label, and $\lambda$ is the weighting parameter controlling the two terms. The added term prevents the target task features from being damaged by the adversarial perturbation and thus preserves the target task label. As shown in Figure~\ref{fig5}(b), by jointly considering the target task classifier, the identified perturbation direction $\mathbf{n}_2$ is guaranteed to preserve the feature and label for target task prediction. \subsection{AEDA$\_$online}\label{sec3.3} As observed in Section~\ref{sec2.3}, adversarial examples have poor cross-task transferability when training target task classifiers. This leads to a ``fake'' balanced data distribution, which contributes little to data augmentation and visual debiasing. To address this, instead of separating adversarial attack and target task classifier training as in \emph{AEDA$\_$pre}, we propose to couple these two steps by restricting the bias classifier to utilize the feature extractor of the target task classifier. As shown in Figure~\ref{fig-framework}, the target task and bias task share the same feature extractor $f(\cdot)$, with two additional classification modules $h_t(\cdot)$ and $h_b(\cdot)$ on top of $f(\cdot)$. By attacking the coupled bias classifier $\{f(\cdot);h_b(\cdot)\}$, the generated examples are expected to carry discriminative features through $f(\cdot)$ and thus ensure cross-task transferability to the target task.
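The joint attack objective of Eqn.~\eqref{eqn3} can be illustrated on a toy differentiable model. The sketch below is not the paper's VGG-16 implementation: two independent logistic classifiers stand in for the bias and target heads, and targeted I-FGSM-style signed gradient steps minimize the weighted joint loss; the weight vectors, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint_ifgsm(x, w_bias, w_target, b_attack, t_label,
                lam=0.5, alpha=0.1, steps=40):
    """Targeted I-FGSM-style attack on a weighted joint objective:
    minimize lam*L_bias(x, b_attack) + (1-lam)*L_target(x, t_label)
    for two toy logistic classifiers with weights w_bias, w_target."""
    x = x.astype(float).copy()
    for _ in range(steps):
        # gradient of binary cross-entropy w.r.t. x is (sigma(w.x) - y) * w
        g_bias = (sigmoid(w_bias @ x) - b_attack) * w_bias
        g_target = (sigmoid(w_target @ x) - t_label) * w_target
        grad = lam * g_bias + (1.0 - lam) * g_target
        x -= alpha * np.sign(grad)  # signed descent step on the joint loss
    return x

# Toy setup: the bias head looks at feature 0, the target head at feature 1.
w_b, w_t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x0 = np.array([2.0, 1.5])          # bias class 1, target class 1
x_adv = joint_ifgsm(x0, w_b, w_t, b_attack=0, t_label=1)
```

In this toy geometry, the attack flips the bias head's prediction to the attack class while the target head's prediction is preserved, mirroring the role of the $(1-\lambda)L_{target}$ term.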
However, this one-time coupled adversarial attack can only guarantee cross-task transferability for a few epochs. The observation in Section~\ref{sec2.3} demonstrates that adversarial examples gradually lose their cross-task transferability and fail to continuously balance the data distribution during training of the target task classifier. To address this, we design an online coupled adversarial attack where the target task classifier and the bias task classifier are updated simultaneously. Specifically, the following three steps iterate during the training process (at epoch $m$): \begin{itemize} \item Target task classifier training: \begin{equation}\label{eq-online1} \min _{f, h_{t}} L_{\text{target}}(\{\mathcal{X}_{ori},\mathcal{X}_{adv}^{(m-1)}\}, t) \end{equation} \item Bias task classifier training: \begin{equation}\label{eq-online2} \min_{h_{b}} L_{bias}(\mathcal{X}_{ori},b;f(\cdot)) \end{equation} \item Adversarial attack: $\mathcal{X}_{adv}^{(m)}\leftarrow \mathcal{X}_{ori}$ following Eqn.~\eqref{eqn3} \end{itemize} In this way, the adversarial examples are adaptively generated based on the current target task feature extractor and are thus guaranteed to transfer well to the next epoch's target task training. \subsection{AEDA$\_$robust}\label{sec3.4} Section~\ref{sec2.2} observes that the capability of adversarial examples to generalize to the attack class largely depends on the robustness of the attacked classifier used to generate them. To improve adversarial generalization, based on \emph{AEDA$\_$online}, we further introduce an adversarial training module when updating the bias task classifier. As highlighted by the red dashed line in Figure~\ref{fig-framework}, the adversarial examples generated at the previous epoch are employed in an adversarial training setting towards a robust bias classifier.
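The alternation of the three steps above (and, with the robust bias update swapped in, the \emph{AEDA$\_$robust} variant) can be summarized as a training-loop skeleton. The following is a minimal sketch under the assumption that the three updates are supplied as callables; the actual implementation operates on mini-batches of a deep network rather than whole sets.

```python
def aeda_online(train_target, train_bias_head, attack, x_ori, epochs):
    """One epoch = three coupled steps:
    1) update f, h_t on originals plus last epoch's adversarial set;
    2) update h_b on top of the (frozen) current feature extractor f;
    3) regenerate the adversarial set against the coupled {f; h_b}."""
    x_adv = []                                # no adversarial data at epoch 0
    for m in range(epochs):
        train_target(x_ori + x_adv)           # step 1: target task training
        train_bias_head(x_ori)                # step 2: bias head on frozen f
        x_adv = attack(x_ori)                 # step 3: X_adv^(m) via Eqn. (3)
    return x_adv

# Minimal smoke run with counting stand-ins for the three steps.
calls = {"t": 0, "b": 0, "a": 0}
out = aeda_online(
    train_target=lambda d: calls.__setitem__("t", calls["t"] + 1),
    train_bias_head=lambda d: calls.__setitem__("b", calls["b"] + 1),
    attack=lambda d: (calls.__setitem__("a", calls["a"] + 1) or ["adv"] * len(d)),
    x_ori=["img1", "img2"], epochs=3)
```

The key design choice is that step 3 runs inside the loop, so each epoch's adversarial set is generated against the current feature extractor rather than a stale one.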
Specifically, the training process is modified to iterate the following three steps: \begin{itemize} \item Target task classifier training: same as Eqn.~\eqref{eq-online1}; \item Robust bias task classifier training (the adversarial samples are employed in training at intervals of $k$ mini-batches, so that $1/k$ represents the frequency of adversarial training): \begin{equation} \min_{h_{b}} L_{bias}(\{\mathcal{X}_{ori},\mathcal{X}_{adv}^{(m-1)}\},b;f(\cdot)) \end{equation} \item Adversarial attack: $\mathcal{X}_{adv}^{(m)}\leftarrow \mathcal{X}_{ori}$ following Eqn.~\eqref{eqn3} \end{itemize} By iterating over the above three steps, the generated adversarial examples are guaranteed to continuously hold the generalization capability to the bias attack class as well as to maintain cross-task transferability to the target task. The optimization ends when the training loss of the target task converges. \section{Experiments} \subsection{Experiment Setup} We evaluated the proposed \emph{AEDA} solution on both simulated and real-world bias scenarios: (1) For the simulated bias, we used the C-MNIST dataset~\cite{lu2018attribute}. Its 10 handwritten digit classes (0-9) were regarded as the target task variables, and the associated background colors were regarded as the bias variable. When classifying digit classes, the model may rely on the background color of the image. The goal is to remove the background color bias in digit recognition. (2) For the real-world bias, we used the face dataset CelebA with multiple facial attributes as the target task variables. The goal is to remove the model's gender bias in facial attribute classification. \noindent\textbf{Baselines}\hspace{2mm} We consider several typical debiasing solutions for comparison. \noindent\textbf{$\bullet$} Down-sampling: discarding samples with the majority bias variable to construct a balanced dataset before target task training~\cite{drummond2003c4,zhou2005training}.
\noindent\textbf{$\bullet$} Reweighting: assigning different weights to samples and modifying the training objectives to softly balance the data distribution~\cite{kamiran2012data}. \noindent\textbf{$\bullet$} Adv debiasing: the typical in-processing debiasing solution that adversarially learns between the target and bias tasks~\cite{wadsworth2018achieving,beutel2017data}. The goal is to extract a fair representation that contributes to the target task but invalidates the bias task. We followed previous studies and implemented Adv debiasing using the gradient reversal technique~\cite{ganin2014unsupervised}. \noindent\textbf{$\bullet$} CycleGAN: generating synthetic data by altering bias variables~\cite{zhu2017unpaired}. We implemented this to compare \emph{AEDA} with the line of solutions that explicitly supplement minority samples. \noindent\textbf{Evaluation metrics}\hspace{2mm} For target task performance, since the examined datasets involve multi-label classification, we use balanced average accuracy (bACC) for evaluation: bACC is defined as the average accuracy over the groups with different target and bias variables. From a fairness perspective, people with different bias variables are equally important, so an unbiased evaluation of accuracy should assign the same weight to groups with different bias variables. The higher the bACC, the more accurate the target task classifier. For the model bias $bias(\theta)$, the lower the $bias(\theta)$, the fairer the model. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figs/sec_4_2a.pdf} \setlength{\abovecaptionskip}{3pt} \setlength{\belowcaptionskip}{-4mm} \caption{The training and testing setting of simulated debiasing.} \label{fig7a} \end{figure} \subsection{Simulated Debiasing Evaluation} We modified the C-MNIST data set for background color debiasing performance evaluation.
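For concreteness, the two evaluation metrics defined in the setup above can be sketched numerically. The bACC computation follows the group-averaged definition; the model-bias computation shown here (mean absolute accuracy gap between bias groups per target class) is an assumed stand-in for the paper's $bias(\theta)$ of Eqn.~\eqref{eqn1}, which is defined earlier in the paper.

```python
import numpy as np

def bacc(t_true, t_pred, b):
    """Balanced average accuracy: mean accuracy over the groups defined by
    each (target label, bias label) pair, weighting all groups equally."""
    groups = {(t, bb) for t, bb in zip(t_true, b)}
    accs = [np.mean(t_pred[(t_true == t) & (b == bb)] == t)
            for (t, bb) in groups]
    return float(np.mean(accs))

def model_bias(t_true, t_pred, b):
    """Assumed stand-in for bias(theta): per target class, the accuracy gap
    between bias groups, averaged over target classes."""
    gaps = []
    for t in np.unique(t_true):
        acc_per_group = [np.mean(t_pred[(t_true == t) & (b == bb)] == t)
                         for bb in np.unique(b)]
        gaps.append(max(acc_per_group) - min(acc_per_group))
    return float(np.mean(gaps))

# Toy labels: two target classes, two bias groups, four samples per group pair.
t_true = np.array([0, 0, 1, 1, 0, 0, 1, 1])
b      = np.array([0, 0, 0, 0, 1, 1, 1, 1])
t_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
```

On this toy input the four group accuracies are 0.5, 1.0, 1.0 and 0.5, so bACC is their unweighted mean and the bias is the mean per-class gap.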
In the training data, digits 0$\sim$4 are associated with a red background color, digits 5$\sim$9 with a brown background color, and all other data are dropped. The data with $\left \langle0\sim9, red\right \rangle$ and $\left \langle0\sim9, brown\right \rangle$ are both used for testing. The modified training and testing settings are illustrated in Figure~\ref{fig7a}. It is easy to see that the background color of the digits is the simulated bias variable, which carries highly predictive information but is actually independent of the target digit classes. Since the testing set follows a fair distribution, we used it to evaluate the target task accuracy and model bias. The debiasing methods aim to make the model independent of the background color of the image in digit recognition. Table~\ref{tab1} summarizes the performance of the different methods. It is shown that, given the very different distributions between the training and testing datasets, the \emph{Original} classifier achieves a rather low bACC of $55.62\%$. Moreover, since the modified C-MNIST training set is extremely imbalanced, with only one background color for each digit, it is impossible to employ \emph{Down-sampling} and \emph{Reweighting}. Other main observations include: (1) all three proposed settings of \emph{AEDA} obtain superior bACC and model bias compared with \emph{Original}. By simultaneously robustly training the bias classifier and coupling it with the target task classifier training, \emph{AEDA\_robust} demonstrates the best performance over all baselines in both bACC and model bias. (2) \emph{CycleGAN} obtains performance similar to \emph{AEDA\_pre}, showing a very limited debiasing result due to the separation of pseudo sample generation and target task training. (3) \emph{Adv debiasing} obtains a noticeable improvement in both bACC and model bias.
We attribute this result to the relatively easy disentanglement between shape and color in the C-MNIST dataset, where \emph{Adv debiasing} manages to extract a fair representation focusing on shape features and correlating only with the target task. \begin{table} \caption{Performance comparison on the C-MNIST data set. } \label{tab1} \begin{tabular}{l|c c} \toprule Methods&bACC (\%)&Model bias\\ \midrule Original& 55.62& 7.84\\ Down-sampling~\cite{drummond2003c4,zhou2005training}& --& --\\ Reweighting~\cite{kamiran2012data}& --& --\\ Adv debiasing~\cite{ganin2014unsupervised,wadsworth2018achieving,beutel2017data}& 89.93& 1.37\\ CycleGAN~\cite{zhu2017unpaired}& 65.23& 5.42\\ \midrule AEDA$\_$pre& 64.53& 5.89\\ AEDA$\_$online& 80.57& 3.20\\ AEDA$\_$robust& \textbf{91.80}& \textbf{0.53}\\ \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{figs/sec_4_2b.pdf} \setlength{\belowcaptionskip}{-3mm} \caption{Digit classification confusion matrices for the testing subsets of $\left \langle0\sim9, red\right \rangle$ (left) and $\left \langle0\sim9, brown\right \rangle$ (right).} \label{fig7b} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.99\linewidth]{figs/Sec4exampleimage.pdf} \setlength{\abovecaptionskip}{-3pt} \setlength{\belowcaptionskip}{-5mm} \caption{Example images of 5 facial attributes to be predicted. The CelebA data set also provides gender annotation for each image.} \label{fig8a} \end{figure} To further evaluate the influence of the training distribution on model performance for different target task variables, we examined the classification accuracy for each digit class on the $\left \langle0\sim9, red\right \rangle$ and $\left \langle0\sim9, brown\right \rangle$ testing subsets respectively. The results obtained by \emph{Original} and the proposed \emph{AEDA\_robust} are summarized as confusion matrices in Figure~\ref{fig7b}.
Observations include: (1) The classification accuracy of \emph{Original} depends severely on the background color of the testing subset, while \emph{AEDA\_robust} obtains consistent classification performance between the two subsets. This coincides with the definition of model bias in Eqn.~\eqref{eqn1} and validates the model debiasing effectiveness. (2) For the \emph{Original} classifier, when the training and testing sets have very different data distributions (e.g., bottom-left for the $\left \langle0\sim9, red\right \rangle$ subset and upper-right for the $\left \langle0\sim9, brown\right \rangle$ subset), the testing accuracy tends to be very low. This also demonstrates that the imbalanced training data distribution heavily influences not only the model bias but also the target task performance. \begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth]{figs/Sec43abc.pdf} \setlength{\abovecaptionskip}{1pt} \setlength{\belowcaptionskip}{-7pt} \caption{(a) Cross-task transferability during training the target task. (b) Model bias and cross-task transferability in \emph{AEDA\_pre} and \emph{AEDA\_robust}.
(c) The debiasing effectiveness and target task accuracy for each examined facial attribute.} \label{fig9} \end{figure*} \begin{table}[t] \caption{Performance comparison on the CelebA data set.} \label{tab2} \begin{tabular}{l|c c} \toprule Methods&bACC (\%)&Model bias\\ \midrule Original& 73.57& 5.48\\ Down-sampling~\cite{drummond2003c4,zhou2005training}& 66.35& \textbf{2.35}\\ Reweighting~\cite{kamiran2012data}& 73.82& 4.39\\ Adv debiasing~\cite{ganin2014unsupervised,wadsworth2018achieving,beutel2017data}& 72.82& 4.23\\ CycleGAN~\cite{zhu2017unpaired}& 73.65& 4.75\\ \midrule AEDA$\_$pre& 73.68& 5.23\\ AEDA$\_$online& 74.03& 4.22\\ AEDA$\_$robust& \textbf{74.30}& 3.27\\ \bottomrule \end{tabular} \end{table} \subsection{Real-world Debiasing Evaluation} We evaluated the real-world gender debiasing performance on the CelebA data set and used VGG-16~\cite{simonyan2014very} as the backbone network. The same 34 facial attributes are employed as the target tasks. Example images of 5 facial attributes to be predicted are illustrated in Figure~\ref{fig8a}. Moreover, in the original CelebA, different images of the same person may appear in both the training and testing sets. To prevent person-specific features from contributing to facial attribute prediction, we keep one image per person in either the training or the testing set. Table~\ref{tab2} summarizes the performance of the different methods on CelebA. All reported results are averaged from 5-fold cross validation. The observations consistent with the above simulated debiasing evaluation include: (1) Among the three \emph{AEDA} settings, \emph{AEDA\_robust} performs best in both bACC and model bias. (2) \emph{CycleGAN} demonstrates limited debiasing performance. New observations include: (1) \emph{Original} has a normal bACC of $73.57\%$.
In this case, instead of sacrificing accuracy for fairness, it is interesting to see that the proposed \emph{AEDA} solutions still obtain superior performance to \emph{Original} in both accuracy and fairness. In the next subsection, we discuss further the role of the data augmentation-based solution in improving accuracy as well as fairness. (2) \emph{Down-sampling} and \emph{Reweighting} are effective in reducing the model bias for facial attribute prediction, with \emph{Down-sampling} obtaining remarkable debiasing performance by explicitly balancing the data distribution. However, by discarding samples with the majority bias variable and employing a fragmented training set, \emph{Down-sampling} achieves a very poor target task performance (bACC=$66.35\%$). (3) \emph{Adv debiasing} shows some debiasing performance, but not as good as in the simulated evaluation. This difference is due to the complicated coupling between the target and bias variables in real-world debiasing scenarios. \noindent\textbf{The necessity of online coupled adversarial attack.}\hspace{2mm} To further understand the mechanism of the online coupled adversarial attack, we report additional results from examining the training process. Recalling from Section~\ref{sec2.3} that cross-task transferability is critical for debiasing, we first calculated the bias generalization accuracy $r^{(m)}$ at each training epoch for the proposed \emph{AEDA} solutions, which is shown in Figure~\ref{fig9}(a). While \emph{AEDA\_pre} shows rather poor bias generalization accuracy, \emph{AEDA\_online} and \emph{AEDA\_robust} demonstrate a stable and superior capability in maintaining cross-task transferability.
To analyze the role of the online continuous attack, a fourth setting named \emph{AEDA\_once} is also implemented and compared, which is similar to \emph{AEDA\_online} except that the adversarial examples $\mathcal{X}_{adv}$ are generated only once instead of being continuously updated during the training process. As shown in Figure~\ref{fig9}(a), although better than \emph{AEDA\_pre}, the cross-task transferability of \emph{AEDA\_once} fails to persist as training proceeds, validating the necessity of the online coupled adversarial attack. We then analyzed the influence of cross-task transferability on model debiasing. Using \emph{AEDA\_pre} and \emph{AEDA\_robust} as examples, we examined both the bias generalization accuracy (right y-axis) and the target model bias (left y-axis) in Figure~\ref{fig9}(b). After around 15 training epochs, the model bias of \emph{AEDA\_robust} stabilizes at a relatively low level ($3.1$). However, the model bias of \emph{AEDA\_pre} continues to increase as its cross-task transferability is gradually lost, with a final model bias as high as $5.3$. Moreover, we modified the training process of \emph{AEDA\_robust} and examined the changes in cross-task transferability and model bias when terminating the online update of adversarial example generation at the $40^{th}$ epoch. As shown by the green curves in Figure~\ref{fig9}(b), the cross-task transferability and model bias approach those of \emph{AEDA\_pre} after the $40^{th}$ epoch, further demonstrating the effectiveness of the online coupled adversarial attack in retaining cross-task transferability and reducing model bias. \subsection{Discussion} \subsubsection{The consistency between accuracy and fairness} In addition to alleviating model bias, the proposed \emph{AEDA} solution gives rise to two interesting discussions. The first involves the accuracy-fairness tradeoff.
It is recognized in conventional debiasing attempts~\cite{zhao2019inherent,fish2016confidence,menon2017cost,feldman2015certifying} that there exists a tradeoff between accuracy and fairness, and the goal is to reduce model bias under the condition of equal, if not slightly decreased, accuracy. However, the results in both the simulated and the real-world debiasing evaluations demonstrate \emph{AEDA}'s capability of simultaneously improving model accuracy and fairness. In Figure~\ref{fig9}(c), we show the model bias and target task accuracy for each examined facial attribute. Beyond the consistently reduced model bias relative to \emph{Original}, it is interesting to observe the close relation between distribution balance, model bias and accuracy: facial attributes with a more balanced data distribution tend to have lower model bias and higher accuracy. This inspires us to examine the common attribution of accuracy and fairness to the data distribution. By generating supplementary samples to augment the training data, a balanced dataset contributes to removing the erroneous dependence on bias variables and thus makes the inference of target tasks more dependent on intrinsic features. Without discarding training data or adding constraints that affect target task learning, the proposed data augmentation-based solution provides an alternative perspective on the model debiasing problem and validates the possibility of simultaneously improving fairness and accuracy. \subsubsection{Adversarial attack-based pseudo sample generation} Previous study has discovered that humans and models may rely on very different features~\cite{ilyas2019adversarial}. In the above experiments, we verified this in the context of model debiasing tasks: a model can extract useful information from training samples that humans hardly recognize. In most supervised learning scenarios, the main challenge is to collect adequate samples that are recognizable from a human's perspective.
Without the restriction of human recognizability, adversarial attack actually provides an alternative solution for generating pseudo samples and makes up for common data shortage issues. In this spirit, we examined the potential of adversarial attack-generated samples in few-shot learning. Specifically, the proposed \emph{AEDA\_robust} is implemented in the C-MNIST settings with 4 handwritten digit classes treated as few-shot classes. A $27\%$ average improvement over \emph{Original} is obtained on the few-shot classes, showing the feasibility of adversarial attack-generated pseudo samples in addressing problems beyond debiasing. Moreover, adversarial attack enjoys the advantage that human prior knowledge can readily be incorporated to generate pseudo samples with desired properties and thus increase sample diversity. \section{Conclusion} In this work, we propose to balance the data distribution via supplementary adversarial examples towards visual debiasing. The proposed solution couples the operations of target task model training, bias task model training, and adversarial sample generation in an online learning fashion. The experimental results in both simulated and real-world debiasing evaluations demonstrate its effectiveness in consistently improving fairness and accuracy. In addition to studying the common attribution behind fairness and accuracy, it will also be interesting to employ adversarial examples to assist learning tasks other than model debiasing in the future. \begin{acks} This work is supported by the National Key R\&D Program of China (Grant No. 2018AAA0100604), and the National Natural Science Foundation of China (Grant No. 61632004, 61832002, 61672518). \end{acks} \bibliographystyle{ACM-Reference-Format} \balance
\section{\label{intro} Introductory Comment} The energy exchange between an impinging light beam and virus particles is a cornerstone of several biophotonic operations with important applications in Microbiology, Pharmacology, Medical Physics and Biochemistry. The study of this interaction between virions and electromagnetic fields has been greatly assisted by a large set of analytical methods and physical concepts reported for inorganic particles \cite{BohrenBook}. That inevitably led to their implementation and translation to understand how microbes absorb and re-emit visible light in various directions \cite{MicrobialLaser} or what biophysical processes underlie ultraviolet germicidal irradiation \cite{KowalskiHandbook}. Furthermore, applications of diverse types of scattering techniques to systems of microorganisms have been reviewed \cite{ApplicationsLightScat}, while the diffusion behavior of viral macromolecules in liquids has been extensively elaborated \cite{DynamicLightScat}. Importantly, highly sensitive virus detection has become possible based on optical trapping \cite{OpticalTrap} or via the reactive shift of a whispering-gallery mode \cite{SingleVirusDetection}, facilitating convenient medical diagnosis and food inspection. In addition, the coupling of incident beams with virions has been utilized in measuring the refractive index of cells with high precision \cite{AnimalVirus} and in analyzing single viruses with a resolution comparable to that of electron microscopy \cite{LabelFreeAnalysis}. Mie theory \cite{HulstBook}, admitting a rigorous solution to the scattering of electromagnetic waves by multilayered spheres, makes a powerful tool for treating similar problems involving radiation impinging on virus particles, due to their quasi-spherical shapes.
In particular, simple formulas have been derived for the interpretation of the characteristic anomalies in the optical activity of membrane suspensions \cite{OpticalActivity} or for the evaluation of collective backscattering from abundant viruses in sea water \cite{ViralSuspensions}. Mie scattering has also been used to model the light intensity produced by virion-like nano-objects in biosensors \cite{VirusLikeParticles}, flow cytometers \cite{CharacterizingLight} and phase microscope setups \cite{NuclearRefIndex}. Finally, analytical expressions have been provided for the shifts in the resonance frequencies of spherical dielectric microresonators owing to plasmonic nanoparticles \cite{TheoryResonanceShifts} or protein binding \cite{UltrasensitiveDetection}, paving the way to highly efficient bioimaging. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[height=4.2cm]{Fig1a} \label{fig:Fig1a}} \subfigure[]{\includegraphics[height=4.2cm]{Fig1b} \label{fig:Fig1b}} \caption{(a) Illustrative representation revealing the coronavirus morphology when viewed via electron microscope. Both the gray core and the brown spikes, which adorn the outer surface of the virus and impart the look of a corona surrounding the virion, are made of proteins. The image has been taken from the Public Health Image Library (\href{https://phil.cdc.gov/Details.aspx?pid=23311}{PHIL}) and is free of any copyright restrictions. (b) The adopted model in the present study; the spikes are replaced by an isotropic shell of permittivity $\e_s$ and thickness $(b-a)$ surrounding a homogeneous core of permittivity $\e_c$ and radius $a$; the structure is hosted by a background medium of permittivity $\e_b$.
The virion is bombarded by an electromagnetic (EM) pulse of high intensity and central wavelength $\lambda$.} \label{fig:Figs1} \end{figure} Coronaviruses constitute a special category of viruses whose genome is hosted in protein cells, with rod-shaped spikes projecting from their surfaces; these elongated bumps, when seen through an electron microscope, create an image mimicking the solar corona, to which the viruses owe their name. Since the 2020 global pandemic outbreak \cite{ANewCoronavirus, NovelCoronavirus}, the whole world has become familiar with the term ``coronavirus'', while medical scientists struggle to handle \cite{StrategicApproach} that continuing threat, responsible for hundreds of thousands of deaths and unprecedented socio-economic damage. Due to the alarming situation, numerous experimental efforts have been devoted to testing the photonic response of that specific coronavirus (SARS-CoV-2) for various objectives, such as fast biosensing that secures reliable viral disease diagnosis \cite{DualFunctional} or the development of cellular nanosponges that are allegedly able to neutralize the virus \cite{CellularNanosponge}. Interestingly, several studies also indicate that phototherapy has immense potential to reduce the impact of coronavirus diseases \cite{LightAsTreatment}, and suggest ways in which the healthcare industry can integrate modern light technologies in the fight against SARS-CoV-2 and its mutant versions. However, analytical modeling via Mie theory is very rarely \cite{PhotopolarimetricalProperties} involved in works studying light-coronavirus interactions, despite its rigorous yet simple formulation. In this paper, we systematically examine the Mie scattering of a protein nanosphere covered by a suitably homogenized shell emulating the presence of amine radial spikes.
The size of the core \cite{ArchitectureSelfAssembly} and the length of the spikes \cite{SpikeProteins} vary within their realistic limits, according to the adopted conceptual layouts \cite{SpikeProteins} and images taken with electron microscopes \cite{FusionCore}. The background media hosting the virions are likewise selected on the basis of presently available data, since the virus can be transmitted through the air \cite{AirHost} and can also exist in blood \cite{BloodHost} and in human organs such as the liver \cite{LiverHost}. The impinging electromagnetic pulse is harmonic, with a frequency that spans from the hard ultraviolet to the long-infrared part of the spectrum; the dispersion of the incorporated materials across this extensive band is taken into account, based on experimental measurements contained in well-established references \cite{ProteinData, BloodData, LiverData}. Our aim is to maximize the extinction power of the core-shell nanoparticle, which is a prerequisite for any action against the virion, from disintegration to isolation. The influence of the geometric characteristics of the considered model on this observable quantity is identified, and the optimal mid-infrared wavelengths leading to substantial extinction are determined. Importantly, the reported resonances are found to be highly insensitive to structural changes regardless of the background and are thus applicable to ensembles of corona-virions of diverse features. Our findings may inspire clinical research towards the development of diagnostic products and devices that require significant power interaction with SARS-CoV-2 particles to dissolve or neutralize them. \section{\label{appr} Proposed Approach} \subsection{Core-Shell Model} As shown in the illustrative sketch of Fig.
\ref{fig:Fig1a}, a typical corona-virion is composed of a homogeneous core (nucleocapsid) containing a mixture of proteins \cite{ArchitectureSelfAssembly}, surrounded by radial spikes of glycoprotein \cite{StructureFunction}. We treat all proteins as diisopropylamine, also known as DIPA, whose dispersive permittivity $\e_c=\e_c(\lambda)$ can be easily found. As far as the crown of protein rods is concerned, the simplest way to model it is as a homogeneous shell of permittivity given by a weighted sum of the protein dielectric constant $\e_c$ and that of the background medium $\e_b$: \begin{eqnarray} \e_s=(1-s)\e_b+s \e_c, \label{ShellPermittivity} \end{eqnarray} where $0<s<1$ is the filling factor indicating the fraction of the corona volume occupied by the spikes. We could follow alternative and more accurate approaches to model the photonic setup of the virion, such as approximating the shell by quasi-homogeneous multilayers \cite{QuasiHomog}, considering radial anisotropy \cite{HomogMulti, SihvolaSphereA} or even assuming systropic properties for the fabric of the spherical particles \cite{SihvolaSphereB}. In that case, the scalar $\e_s$ would be replaced by a uniaxial tensor $[\e_s]={\rm diag}(\e_r,\e_t,\e_t)$ in spherical coordinates $(r,\theta,\f)$. The radial permittivity $\e_r=\e_s$ would still be given by \eqref{ShellPermittivity}, since the corresponding depolarization factor of a needle vanishes. On the contrary, the transversal constant (along the local $\theta,\f$ directions), $\e_t$, would read \cite{SihvolaMixingForm}: \begin{eqnarray} \e_t=\e_b\frac{(\e_b+\e_c)-s(\e_b-\e_c)}{(\e_b+\e_c)+s(\e_b-\e_c)}. \label{TransversalPermittivity} \end{eqnarray} The solution of the wave equation in such a medium involves cylindrical Bessel functions $J_{\nu},Y_{\nu}$ of orders \cite{ElectromagneticTransparency}: \begin{eqnarray} \nu=\nu_n=\frac{1}{2}\sqrt{1+4\frac{\e_t}{\e_r}n(n+1)}, \label{BesselOrders} \end{eqnarray} for $n\in\mathbb{N}^*$.
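As a quick numerical sketch of the homogenization step above (the Python setting and function names are ours, not part of the paper), the mixing rules \eqref{ShellPermittivity}, \eqref{TransversalPermittivity} and the Bessel orders \eqref{BesselOrders} can be coded as:

```python
import cmath

def eps_shell_isotropic(eps_b, eps_c, s):
    # Eq. (ShellPermittivity): weighted average of background and protein
    # permittivities, with filling factor s
    return (1.0 - s) * eps_b + s * eps_c

def eps_shell_transversal(eps_b, eps_c, s):
    # Eq. (TransversalPermittivity): transversal permittivity of the radially
    # anisotropic alternative model
    num = (eps_b + eps_c) - s * (eps_b - eps_c)
    den = (eps_b + eps_c) + s * (eps_b - eps_c)
    return eps_b * num / den

def bessel_order(n, eps_r, eps_t):
    # Eq. (BesselOrders): order nu_n of the Bessel functions in a radially
    # anisotropic shell (complex when the permittivities are lossy)
    return 0.5 * cmath.sqrt(1.0 + 4.0 * (eps_t / eps_r) * n * (n + 1))
```

Both mixing rules reduce to the expected limits: for $s=0$ the shell becomes pure background, for $s=1$ pure protein, and for an isotropic medium ($\e_t=\e_r$) the orders reduce to the familiar half-integers $\nu_n=n+\tfrac12$.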
However, the size of the virion is small (average radius $b\cong 60$ nm) and the regarded wavelengths are large (average $\lambda\cong 3500$ nm, mid-infrared); accordingly, the incoming beams are not expected to ``feel'' a more advanced setup emphasizing the geometric details. In other words, a slowly oscillating wave perceives the very complicated actual structure of the protein spikes in the same way as a homogeneous cladding of texture determined by \eqref{ShellPermittivity}. Thus, we advocate that the model of Fig.~\ref{fig:Fig1b}, where the core (permittivity $\e_c$) of radius $a$ is engulfed by the shell (permittivity $\e_s$) of thickness $(b-a)$ and hosted in a homogeneous background (permittivity $\e_b$), sufficiently captures the electromagnetic interactions and the underlying photonic power interplay between the incident fields and the particle. Note finally that the Bessel order \eqref{BesselOrders} becomes complex if the permittivities \eqref{ShellPermittivity},\eqref{TransversalPermittivity} have non-zero imaginary parts, which is the case in our consideration; as a result, numerical issues \cite{WatsonTransform} may emerge if the shell is assumed anisotropic. This is an additional reason to follow the isotropic and homogeneous modeling via \eqref{ShellPermittivity}. \subsection{Mie-Theory Formulation} We assume that the virion of Fig. \ref{fig:Fig1b} is illuminated by an electromagnetic beam in the form of a monochromatic plane wave (with free-space oscillating wavelength $\lambda\equiv 2\pi/k_0$), traveling in the background host. As mentioned above, the symbols $(r,\theta,\f)$ are used for the related spherical coordinates centered at the particle, while the equivalent Cartesian ones read $(x,y,z)$; the suppressed harmonic time dependence is of the form $e^{+i 2 \pi c t/\lambda}$, where $c$ is the speed of light in free space.
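The smallness argument invoked for the homogenization above can be quantified with one line of arithmetic (a sketch; the background permittivity value is our assumption, roughly water-like for blood, while $b$ and $\lambda$ follow the averages quoted in the text):

```python
import math

# Optical size k_b * b = 2*pi*sqrt(eps_b)*b/lambda for an average virion
b = 60e-9        # average outer radius (m), from the text
lam = 3500e-9    # average mid-infrared wavelength (m), from the text
eps_b = 1.77     # assumed background permittivity (water-like blood)
x = 2 * math.pi * math.sqrt(eps_b) * b / lam  # dimensionless size parameter
```

This gives $k_bb\approx0.14\ll1$, consistent with the claim that mid-infrared beams cannot resolve the fine geometry of the spikes.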
For simplicity and without loss of generality (due to the spherical symmetry), we assume that the incident wave propagates along the $+z$ axis and that its electric field vector is always parallel to the $x$ axis, oscillating with amplitude $E_0>0$ (measured in $Volt/meter$). This background field can be decomposed into two terms, each of which satisfies Maxwell's laws: one term with no radial electric component (TE) and another with no radial magnetic component (TM). These terms can be expressed as series of spherical harmonics, which dictate the $\theta-$ and $\f-$ dependence of the field quantities in all the regions defined by the concentric spherical surfaces, according to the rigorous Mie theory \cite{HulstBook}. After imposing the necessary boundary conditions, the scattered fields for $r>b$ (the electric field vector for the TE set and the magnetic one for the TM set) are written as \cite{NANOSPHERE}: \begin{eqnarray} \textbf{E}_{scat}^{TE}=E_0 \sum_{n=1}^{+\infty}i^{-n}S_n^{TE}h_n(k_br) \left\{\begin{array}{c}-\hat{\bm{\theta}} \csc\theta p_n(\theta)\cos\f \\ +\hat{\bm{\f}}p_n'(\theta)\sin\f \end{array}\right\} \nonumber, \label{ETEscat}\\ \textbf{H}_{scat}^{TM}=\frac{E_0\sqrt{\e_b}}{\eta_0} \sum_{n=1}^{+\infty}i^{-n}S_n^{TM}h_n(k_br) \left\{\begin{array}{c}-\hat{\bm{\theta}} \csc\theta p_n(\theta)\sin\f \\ -\hat{\bm{\f}}p_n'(\theta)\cos\f \end{array}\right\} \nonumber, \label{HTMscat} \end{eqnarray} where $p_n(\theta)=P_n^1(\cos\theta)$ is the associated Legendre function of first order and degree $n$ with argument $\cos\theta$; in addition, $h_n$ is the spherical Hankel function of order $n$ and second kind. The symbol $k_b=k_0\sqrt{\e_b}=2\pi\sqrt{\e_b}/\lambda$ stands for the wavenumber in the background medium and $\eta_0=120\pi~{\rm \Omega}$ for the wave impedance of free space. The coefficients $S_n^{TE/TM}$ are complex dimensionless quantities, not shown here for brevity \cite{AchievingTransparency, QUANTUM}.
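For completeness, the angular functions $p_n(\theta)=P_n^1(\cos\theta)$ entering the series above can be generated by the standard three-term recurrence for fixed order $m=1$; the sketch below uses the Condon--Shortley sign convention, which may differ from the paper's by an overall sign, and the helper name is ours:

```python
import math

def p_n(n, theta):
    # Associated Legendre function P_n^1(cos(theta)) via the recurrence
    #   k * P_{k+1}^1 = (2k+1) * x * P_k^1 - (k+1) * P_{k-1}^1,
    # seeded with P_0^1 = 0 and P_1^1(x) = -sqrt(1 - x^2) (Condon-Shortley phase)
    x = math.cos(theta)
    p_prev, p_curr = 0.0, -math.sqrt(max(0.0, 1.0 - x * x))  # P_0^1, P_1^1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - (k + 1) * p_prev) / k
    return p_curr
```

For instance, $p_2(\theta)=-3\cos\theta\sin\theta$ is reproduced by the recurrence, so only the two seeds need to be supplied.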
The power $P_{scat}$ carried by the TE and TM scattered components, which constitute a self-consistent electromagnetic field in the background host, expresses how strongly the sphere perturbs the background field distribution outside it. It can be easily computed with use of Poynting's theorem and expansions of $h_n(k_br)$ for large arguments $k_br\gg 1$ (in the far region), as follows \cite{SingleSeries}: \begin{eqnarray} P_{scat}=P_0\sum_{n=1}^{+\infty}\frac{n^2(n+1)^2}{2n+1}\left(\left|S_n^{TE}\right|^2+\left|S_n^{TM}\right|^2\right), \label{Pscat} \end{eqnarray} where $P_0=\frac{\pi E_0^2}{k_0^2 \eta_0\sqrt{\e_b}}>0$ is a quantity measured in $Watt$ and $k_0$ is the free-space wavenumber. The power absorbed by the particle, given the presence of lossy constituent media, is evaluated by applying Poynting's theorem again, this time for the total field. Indeed, if we integrate the spatial power density across any sphere of radius $r>b$ (even the infinite one, $k_b r \rightarrow +\infty$), we obtain: \begin{eqnarray} P_{abs}=-P_{scat}-P_0\sum_{n=1}^{+\infty}n(n+1)\Re\left[S_n^{TE}+S_n^{TM}\right]. \label{Pabs} \end{eqnarray} Obviously, $P_{scat}, P_{abs}>0$, which means that the series in \eqref{Pabs} must converge to a negative value smaller than $(-P_{scat}/P_0)$. In the absence of any losses, we have $P_{abs}=0$ and the aforementioned sum equals $(-P_{scat}/P_0)$. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[height=4.2cm]{Fig12a} \label{fig:Fig12a}} \subfigure[]{\includegraphics[height=4.2cm]{Fig12b} \label{fig:Fig12b}} \caption{Dispersive permittivities $\e$ as functions of the incident wavelength $\lambda$. (a) Real and imaginary parts of the permittivity $\e_c$ for protein (diisopropylamine, DIPA). The data are obtained from a well-established source \cite{ProteinData} and extended to the short-wavelength limit under a lossless assumption \cite{ProteinShortWavelength}.
(b) Real parts of the background permittivity $\e_b$ for alternative hosts: air, human blood and human organs (like the liver). The various backgrounds are assumed to be lossless, and the corresponding data have been obtained from reliable experimental measurements \cite{LiverData, BloodData}.} \label{fig:Figs12} \end{figure} \subsection{Parameters and Observables} Before proceeding to the numerical results and the discussion, let us first clarify the value ranges of the parameters incorporated into our model. In particular, the incident electromagnetic pulse is taken with a free-space central wavelength $\lambda$ belonging to an extensive band spanning from hard-ultraviolet short waves ($\lambda=150$ nm, UV-C) to long infrared radiation ($\lambda=15$ $\mu$m, IR-C). In addition, we consider an external radius for the virion varying in the interval $30~{\rm nm}<b<100~{\rm nm}$, representing an assortment of various sizes \cite{NovelCoronavirus}, and a radii ratio with $0.5<a/b<0.9$, corresponding to different lengths of the protein spikes \cite{SpikeInfo}. When it comes to the density $s$ of the spikes, we regard all possible values $0\leq s\leq1$, from an absent crown ($s=0$) to a large homogeneous protein sphere of radius $b$ ($s=1$). The variation of the dispersive permittivity of the homogeneous core $\e_c=\e_c(\lambda)$ is depicted in Fig. \ref{fig:Fig12a}, where the data have been taken from a reliable source \cite{ProteinData} and extended to the short-wavelength limit \cite{ProteinShortWavelength}. We notice that the regarded protein (diisopropylamine, DIPA) is lossless across large parts of the wavelength spectrum except for two bands around $\lambda\cong 3.5~\mu{\rm m}$ and $\lambda\cong 9.5~\mu{\rm m}$; these losses are responsible for the corresponding variations of the real part $\Re[\e_c]$, according to the Kramers--Kronig relations that demand causal responses \cite{BohrenBook}.
As far as the permittivity of the host is concerned, it is lossless and non-dispersive for air and for human organs like the liver \cite{LiverData}, as shown in Fig. \ref{fig:Fig12b}. In the case of blood \cite{BloodData}, $\e_b$ exhibits some variation accompanied by moderate losses, which are ignored ($\Im[\e_b]=0$) for a better formulation of the primary plane-wave excitation; otherwise, a modification should be performed \cite{AbsorbingMedium}. Note that the real parts of the dielectric constants for the proteins, the human organs and the human blood are very close to each other, making a configuration of low textural contrast in which photonic power concentration is particularly challenging. Such a feature ``pushed'' us towards large operational frequencies (ultra-small wavelengths $\lambda$), as indicated above; only then can the incoming light interact strongly or resonate \cite{GeneralScattering} with an inclusion constituting such a mild perturbation of the refractive index. When it comes to the output of the system, we normalize the absorbed and scattered powers by the power of the incident illumination passing through the geometrical cross section of the scatterer, namely $P_{inc}=\frac{\pi E_0^2 b^2\sqrt{\e_b}}{2 \eta_0}$; in this way, the response of the virion becomes more meaningful. The most important quantity, however, is the extinction power $P_{ext}\equiv P_{scat}+P_{abs}$, namely the sum of scattering and absorption from \eqref{Pscat},\eqref{Pabs}, which represents the total effect of the particle on radiation traveling in the background medium; thus, the basic metric of our study is given by: \begin{eqnarray} \frac{P_{ext}}{P_{inc}}=-\frac{2}{(k_0b)^2\e_b}\sum_{n=1}^{+\infty}n(n+1)\Re\left[S_n^{TE}+S_n^{TM}\right]. \label{MyMetric} \end{eqnarray} The series \eqref{MyMetric} converges rapidly once the optical size $a/\lambda$ of the virion is small \cite{WatsonTransform}, which is the case in the present formulation.
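Given precomputed coefficient lists $S_n^{TE}$, $S_n^{TM}$ (starting at $n=1$), the normalized powers \eqref{Pscat}, \eqref{Pabs} and the metric \eqref{MyMetric} amount to short truncated sums; note that, from the definitions of $P_0$ and $P_{inc}$ above, $P_0/P_{inc}=2/((k_0b)^2\e_b)$, which is exactly the prefactor in \eqref{MyMetric}. A hedged sketch (function names are ours):

```python
def p_scat_over_p0(S_TE, S_TM):
    # Eq. (Pscat): P_scat / P_0, coefficient lists assumed to start at n = 1
    return sum(n**2 * (n + 1)**2 / (2 * n + 1) * (abs(ste)**2 + abs(stm)**2)
               for n, (ste, stm) in enumerate(zip(S_TE, S_TM), start=1))

def ext_series(S_TE, S_TM):
    # The extinction sum: sum_n n (n+1) Re[S_n^TE + S_n^TM]
    return sum(n * (n + 1) * (ste + stm).real
               for n, (ste, stm) in enumerate(zip(S_TE, S_TM), start=1))

def p_abs_over_p0(S_TE, S_TM):
    # Eq. (Pabs): P_abs / P_0 = -P_scat / P_0 - ext_series
    return -p_scat_over_p0(S_TE, S_TM) - ext_series(S_TE, S_TM)

def ext_over_inc(S_TE, S_TM, k0, b, eps_b):
    # Eq. (MyMetric); equals (P_scat + P_abs) / P_inc since
    # P_0 / P_inc = 2 / ((k0 * b)^2 * eps_b)
    return -2.0 / ((k0 * b)**2 * eps_b) * ext_series(S_TE, S_TM)
```

The identity $P_{ext}/P_{inc}=(P_{scat}+P_{abs})/P_{inc}$ then holds term by term, which provides a useful self-check when the coefficients are supplied by an external Mie routine.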
Our aim will be to identify the conditions under which the ratio $P_{ext}/P_{inc}$ is maximized, so that the external electromagnetic beam couples optimally with the virion, enabling its disintegration, conversion or neutralization. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[height=4.2cm]{Fig2a} \label{fig:Fig2a}} \subfigure[]{\includegraphics[height=4.2cm]{Fig2b} \label{fig:Fig2b}} \caption{The extinction power $P_{ext}$ by the virion normalized by the incident power through the spherical cross section $P_{inc}$ for various backgrounds $\e_b$, as a function of: (a) the core-shell radii ratio $a/b$ ($s=0.5$) and (b) the filling factor $s$ of the shell ($a/b=0.7$). Plot parameters: $\lambda=250$ nm, $b=80$ nm. The effect of the material contrast is identified.} \label{fig:Figs2} \end{figure} \section{\label{resdis} Results and Discussion} \subsection{Maximal Extinction Power} In Fig. \ref{fig:Fig2a}, we represent the extinction power $P_{ext}$ normalized by the incident power $P_{inc}$ with respect to the core-shell radii ratio $a/b$ for the three different background media indicated in Fig. \ref{fig:Fig12b}. We notice that this quantity, showing how much the particle absorbs the radiation and ``shakes'' the local field distribution, increases as the virion's corona becomes thinner, indicating a more substantial power concentration around the core. In addition, the beneficial influence of the material contrast between the spherical cell and the background on $P_{ext}/P_{inc}$ can be identified, since the highest values are recorded for airborne particles; on the contrary, the extinction power is small when $\e_b\cong\e_c$, as happens in the case of virions hosted in human organs. Quantitatively speaking, the magnitude of $P_{ext}/P_{inc}$, at least for the adopted short wavelength ($\lambda\cong 250$ nm), is quite high and, for a crown densely populated by protein spikes, it surpasses unity.
In other words, the particle participates in a huge power exchange with its environment, involving the whole electromagnetic radiation passing through its cross section. A ratio $P_{ext}/P_{inc}>1$ is obviously feasible, since the object may interact with rays that are not directly incident on its spherical surface but travel in its vicinity \cite{MAXABS}. In Fig. \ref{fig:Fig2b}, we show the metric $P_{ext}/P_{inc}$ from \eqref{MyMetric} as a function of the filling factor $s$ for the alternative hosts. A rapidly increasing trend is observed, demonstrating once more the amplifying effect of the needle-shaped rods on the particle-beam interaction. Furthermore, the textural contrast is again recognized as a factor that boosts the extinction power, while the measured quantity is somewhat higher compared to Fig. \ref{fig:Fig2a}. It is also noticed that, despite the small difference between the refractive indices of blood and organs shown in Fig. \ref{fig:Fig12b}, the corresponding extinction power in a human-blood background is much higher; indeed, what counts is the spread between $\e_b$ and the protein's permittivity $\e_c$, as depicted in Fig. \ref{fig:Fig12a}. Given the fact that Fig. \ref{fig:Figs2} concerns the interplay of the particle with the incoming illumination at ultraviolet frequencies, it is important to understand the response of the spherical virion across the whole considered band. In particular, in Fig. \ref{fig:Fig3a}, we represent the variation of the ratio $P_{ext}/P_{inc}$ with respect to the oscillation wavelength $\lambda$ for the three regarded hosts. When the wavelengths are tiny ($\lambda<1$ $\mu$m), it is natural to spot a declining trend, since the particle is not optically big enough to interact substantially with the incident electromagnetic pulse. However, beyond 1.5 $\mu$m, where proteins exhibit significant losses (see Fig.
\ref{fig:Fig12a}), the metric increases by orders of magnitude to reach a strong local maximum at $\lambda\cong 3.37$ $\mu$m, regardless of the background. Therefore, in order to engage maximally with that specific type of corona-virions, one should concentrate the impinging power in the spectral vicinity of that wavelength; this conclusion is one of the major findings of our study. Similar resonances are exploited for the detection of protein-molecule monolayers covering a sensor, which produce a controllable amount of resonance redshift \cite{ProteinDetection}. It should finally be stressed that the results are host-indifferent only when the operating wavelengths $\lambda$ are large enough; for $\lambda<1$ $\mu$m, the relative order of $P_{ext}/P_{inc}$ for different hosts is dictated by the material contrast of Fig. \ref{fig:Figs12}, as happens in Fig. \ref{fig:Figs2}. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[height=4.8cm]{Fig3a} \label{fig:Fig3a}} \subfigure[]{\includegraphics[height=4.8cm]{Fig3b} \label{fig:Fig3b}} \caption{The extinction power $P_{ext}$ by the virion normalized by the incident power through the spherical cross section $P_{inc}$ as a function of the incoming wavelength $\lambda$ for: (a) various host environments $\e_b$ ($b=60$ nm), (b) various external sizes $b$ of the virions (in blood). Plot parameters: $a/b=0.7$, $s=0.5$. The strong local optimum at mid-infrared ($\lambda\cong 3370$ nm) is spotted, regardless of background and size.} \label{fig:Figs3} \end{figure} In Fig. \ref{fig:Fig3b}, we repeat the calculations of Fig. \ref{fig:Fig3a} but for different sizes $b$ of the virions, all existing in human blood. The variation of $P_{ext}/P_{inc}$ is similar to that of Fig. \ref{fig:Fig3a}, but the larger size makes a difference and increases, on average, the represented metric, even at lower frequencies ($\lambda>1.5$ $\mu$m).
However, the major maximum at $\lambda\cong 3.37$ $\mu$m discussed above is present no matter how small the virion is and, importantly, it gives almost the same relative extinction power. Such a feature further underlines the significance of the reported resonance, since an electromagnetic beam concentrated around a single wavelength allows for maximal interplay with an ensemble of virus particles possessing various sizes. Note that the mid-infrared frequency range constitutes a privileged band for biosensing \cite{SvetaPaper} and for the chemical identification of biomolecules through their vibrational fingerprints; hence, photonic operation at this resonance is experimentally feasible \cite{PlasmonicBiosensing}. \subsection{Power Spatial Distribution} After having understood the influence of the structural ($a/b$, $s$), textural (different backgrounds) and spectral ($\lambda$) parameters on the way that corona-virions interact with the incident beams, it is meaningful to show the spatial distribution of electromagnetic power inside and outside the core-shell particle for characteristic cases. In Fig. \ref{fig:Fig4a}, we show the relative field quantity $|\textbf{E}/E_0|^2$ across the $zx$ plane when the structure is excited at the optimal mid-infrared wavelength ($\lambda\cong 3.37$ $\mu$m) in a human-blood host; the represented quantity may be discontinuous as one crosses an interface between two different media, due to the change in texture. By inspection of Fig. \ref{fig:Fig4a}, one directly notices that the values of the electric field magnitude $|\textbf{E}|$ are very close to that of the incident plane wave $E_0$; this is natural, however, given the very low permittivity contrast between the virion and its environment. On the contrary, the power concentration in the interior of the spherical volume is counter-intuitive and noteworthy, since it clearly demonstrates the substantial interaction of the entire virion with the incoming pulse.
As recently reported \cite{OpticalFocusing}, such physical focusing of light is useful in photomedicine and can be directly utilized for thermal damage in biological applications. That significant property is also illustrated when the $zy$ plane is considered (Fig. \ref{fig:Fig4b}), but the distribution is not identical to that of Fig. \ref{fig:Fig4a} due to the vectorial nature of the incident plane wave, which is polarized along the $x$ axis. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=4.1cm]{Fig4a} \label{fig:Fig4a}} \subfigure[]{\includegraphics[width=4.1cm]{Fig4b} \label{fig:Fig4b}} \subfigure[]{\includegraphics[width=4.1cm]{Fig4c} \label{fig:Fig4c}} \subfigure[]{\includegraphics[width=4.1cm]{Fig4d} \label{fig:Fig4d}} \caption{Spatial distribution of the total electric field $|\textbf{E}/E_0|^2$ across $zx$ and $zy$ planes when the incoming pulse travels along $+z$ axis for: (a,b) $\lambda=3370$ nm (optimal mid-infrared wavelength) (c,d) $\lambda=2200$ nm (arbitrary smaller wavelength). Plot parameters: $a/b=0.7$, $s=0.5$, $b=65$ nm, human blood background. The blue lines indicate the boundaries between two different media.} \label{fig:Figs4} \end{figure} To show that the bright spots of Figs \ref{fig:Fig4a},\ref{fig:Fig4b} are not easily achieved, in Figs \ref{fig:Fig4c},\ref{fig:Fig4d} we show the distribution of $|\textbf{E}/E_0|^2$ when the incoming wavelength is arbitrarily picked ($\lambda=2.2$ $\mu$m). Despite the fact that the shorter wavelength makes the virion look optically larger and allows it to develop more complex dynamics, the field inside the object is lower than the background level, forming a bipolar pattern; as a result, the ability of the primary excitation to engage with the particle is severely diminished compared to the optimal case of Fig. \ref{fig:Fig4a}.
Similar conclusions can be drawn by juxtaposing Figs \ref{fig:Fig4b},\ref{fig:Fig4d}, which refer to the $zy$ plane; in the scenario of a randomly selected wavelength, the pattern is omni-directional and the power inside the scatterer is weak, contrary to the signal focusing exhibited in Fig. \ref{fig:Fig4b}. In Fig. \ref{fig:Figs5}, higher frequencies are examined; more specifically, in Fig. \ref{fig:Fig5a} we show the electric field across the $zy$ plane under violet-color illumination ($\lambda=350$ nm). One directly notices that the field is stronger at the rear side of the virion but, if an averaging is performed, the relative signal within the volume of the particle is comparable to that of Fig. \ref{fig:Fig4a}, even though the incident wavelength is ten times smaller and much less harmful for the surroundings (healthy cells, tissues, organs) in the human body. It is also found that the field increases along the $z$ axis inside the homogeneous core, contrary to what happens in Figs \ref{fig:Fig4a},\ref{fig:Fig4b}, where the opposite trend is recorded. In Fig. \ref{fig:Fig5b}, a larger wavelength corresponding to red color ($\lambda=700$ nm) is assumed; we observe a substantially poorer relative power concentration compared to both Figs \ref{fig:Fig4a},\ref{fig:Fig5a}, for a pulse oscillating much faster than the one at the optimal regime. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=4.1cm]{Fig5a} \label{fig:Fig5a}} \subfigure[]{\includegraphics[width=4.1cm]{Fig5b} \label{fig:Fig5b}} \caption{Field concentration at shorter wavelengths. Spatial distribution of the total electric field $|\textbf{E}/E_0|^2$ across $zy$ plane when the incoming pulse travels along $+z$ axis for: (a) $\lambda=350$ nm, (b) $\lambda=700$ nm. Plot parameters: $a/b=0.7$, $s=0.5$, $b=100$ nm, human blood background. The blue lines indicate the boundaries between two different media.} \label{fig:Figs5} \end{figure} In Fig.
\ref{fig:Figs6}, we examine the effect of the radii ratio $a/b$ on the spatial distribution of the electric field $|\textbf{E}/E_0|^2$ when working close to the reported optimal point ($\lambda\cong 3.37$ $\mu$m). It is apparent that the power accumulation is significant in all the considered cases and gets higher for an increasing $a/b$, as also indicated by Fig. \ref{fig:Fig2a}. In the same way that Fig. \ref{fig:Fig3b} shows the insensitivity of the identified effect to the viral size $b$, Fig. \ref{fig:Figs6} designates that the resonance is practically indifferent to the corona thickness $(b-a)$; thus, a universal effectiveness against large families of virions possessing diverse characteristics is demonstrated. Such an optical trap allows for manipulation of and energy exchange with individual viral nanoparticles, which can lead to their disintegration or neutralization; similar biomagnifying effects are reported for different applications \cite{SingleCell}, like optical imaging and the assembly of bionanomaterials. The experimental potential of the described effect is further underlined by the fact that this type of resonance makes typical spectrophotometers sensitive to the activity of individual enzymes; thus, it renders the device suitable \cite{OnChipSpectro} for on-chip antibody or antigen detection and chromogenic-based operations such as bacteria detection. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=4.1cm]{Fig6alfa} \label{fig:Fig6a}} \subfigure[]{\includegraphics[width=4.1cm]{Fig6be} \label{fig:Fig6b}} \subfigure[]{\includegraphics[width=4.1cm]{Fig6ce} \label{fig:Fig6c}} \subfigure[]{\includegraphics[width=4.1cm]{Fig6delta} \label{fig:Fig6d}} \caption{Effect of aspect ratio on the field concentration at optimal mid-infrared wavelength.
Spatial distribution of the total electric field $|\textbf{E}/E_0|^2$ across the $zx$ plane when the incoming pulse travels along the $+z$ axis for: (a) $a/b=0.6$, (b) $a/b=0.7$, (c) $a/b=0.8$, (d) $a/b=0.9$. Plot parameters: $s=0.6$, $b=80$ nm, $\lambda=3367$ nm, human blood background.} \label{fig:Figs6} \end{figure} \section{\label{concl} Concluding Remarks} A single corona-virion is modeled as a homogeneous spherical core surrounded by a conformal isotropic shell, under illumination by electromagnetic pulses. Rigorous Mie theory is applied to obtain the analytical solution of the formulated boundary value problem and to compute the total extinction power of the particle. This quantity indicates how successfully the considered cell interacts with the incoming fields and is found to be maximal at a specific mid-infrared resonance that is independent of the structural characteristics and the background host. A substantial power exchange between impinging beams and the object is necessary for a series of actions dealing with the virion, from thermal damage and dissolution to neutralization and isolation; therefore, the reported findings may pave the way to more efficient radiation treatments against SARS-CoV-2. The proposed model can be refined to include anisotropic multilayers, both at the core for a more detailed description of the engulfed genome and at the shell to take into account the radial distribution of the protein spikes. Such a process will involve Bessel functions of complex order calling for careful computation \cite{WatsonTransform}; similarly, alternative excitation beams or time-restricted causal pulses instead of plane waves can be taken into account by properly evaluating complex Fourier integrals \cite{GaussianPulse}.
Importantly, an interesting follow-up of our approach would be to consider clusters of cells and investigate their collective dynamics by implementing suitable transforms for the summation \cite{Summation1, Summation2} of the responses from randomly or deterministically placed cells \cite{MontiPaper}. In this way, the work at hand can be considered a first step towards the successful modeling of corona-virions and the derivation of analytical formulas for the exchanged power, which will significantly simplify subsequent optimizations and facilitate the identification of optimal operation regimes. \begin{acknowledgments} This work was partially supported by Nazarbayev University Grant No. 090118FD5349 entitled: ``Super transmitters, radiators and lenses via photonic synthetic matter''. Funding from MES RK state-targeted program BR05236454 is also acknowledged. The data that support the findings of this study are available within the article. \end{acknowledgments}
\section{Introduction}\label{sec:introduction} ``Quasimodular forms'' as a notion were introduced by M.~Kaneko and D.~Zagier in \cite{Kaneko_Zagier1995:generalized_jacobi_theta}. They have found applications in various areas of mathematics and are of interest in their own right. For excellent introductions to the subject we refer to \cite{Royer2012:quasimodular_forms_introduction, Zagier2008:elliptic_modular_forms,Choie_Lee2019:jacobi_like_forms}. In \cite{Kaneko_Koike2006:extremal_quasimodular_forms} M.~Kaneko and M.~Koike introduced the notion of \emph{extremal quasimodular forms}. These are quasimodular forms of weight $w$ and depth $r$ which exhibit the extremal order of vanishing at $z=i\infty$ amongst all forms of that weight and depth. The authors conjectured certain arithmetic properties of the Fourier coefficients of these forms for depth $r\leq4$. These were established in \cite{Grabner2020:quasimodular_forms}; for $r=1$ they were proved independently in \cite{Pellarin_Nebe2020:extremal_quasi_modular} and \cite{Mono2020:conjecture_kaneko_koike}. A second part of the conjecture stated in \cite{Kaneko_Koike2006:extremal_quasimodular_forms} concerns the positivity of the Fourier coefficients of extremal quasimodular forms. In this paper we prove that for any $w$ and $r\leq4$ all but possibly finitely many Fourier coefficients are positive. Using a bound given by P.~Jenkins and R.~Rouse \cite{Jenkins_Rouse2011:bounds_coefficients_cusp} we could verify the conjecture for $1\leq r\leq4$ and $w\leq200$. \section{Notation and preliminary results} \label{sec:notat-prel-results} In this section we collect some basic facts about modular and quasimodular forms. \subsection{Modular forms}\label{sec:modular-forms} The modular group $\Gamma$ is the group of $2\times2$-matrices with integer entries and determinant $1$ \begin{equation*} \Gamma=\mathrm{PSL}(2,\mathbb{Z})=\left\{ \begin{pmatrix} a&b\\c&d \end{pmatrix}\Bigm| a,b,c,d\in\mathbb{Z}, ad-bc=1 \right\}/\{\pm I\}.
\end{equation*} It acts on the upper half plane $\mathbb{H}=\{z\in\mathbb{C}\mid\Im z>0\}$ by M\"obius transformations \begin{equation*} \begin{pmatrix} a&b\\c&d \end{pmatrix}z=\frac{az+b}{cz+d}. \end{equation*} The group $\Gamma$ is generated by \begin{equation} \label{eq:ST} Sz=-\frac1z,\qquad Tz=z+1, \end{equation} which satisfy the relations $S^2=\mathrm{id}$ and $(ST)^3=\mathrm{id}$. A holomorphic function $f:\mathbb{H}\to\mathbb{C}$ is called a \emph{holomorphic modular form of weight $w$} if it satisfies \begin{equation} \label{eq:modular} (cz+d)^{-w}f\left(\frac{az+b}{cz+d}\right)=f(z) \end{equation} for all $z\in\mathbb{H}$ and all $\bigl( \begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\bigr)\in\Gamma$, and the limit \begin{equation*} f(i\infty):=\lim_{\Im z\to+\infty}f(z) \end{equation*} exists. The vector space $\mathcal{M}_w(\Gamma)$ of holomorphic modular forms is non-trivial only for even $w\geq4$ and $w=0$. Its dimension equals \begin{equation*} \dim\mathcal{M}_w(\Gamma)= \begin{cases} \left\lfloor\frac w{12}\right\rfloor&\text{for }w\equiv 2\pmod{12}\\ \left\lfloor\frac w{12}\right\rfloor+1&\text{otherwise.} \end{cases} \end{equation*} Prominent examples of modular forms are the Eisenstein series \begin{equation} \label{eq:eisenstein-2k} E_{2k}(z)=\frac1{2\zeta(2k)}\sum\limits_{(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}} \frac1{(mz+n)^{2k}} \end{equation} for $k\geq2$, which are modular forms of weight $2k$. They admit a Fourier expansion (setting $q=e^{2\pi iz}$ as usual in this context) \begin{equation}\label{eq:eisenstein-fourier} E_{2k}=1-\frac{4k}{B_{2k}}\sum_{n=1}^\infty\sigma_{2k-1}(n)q^n, \end{equation} where $\sigma_\alpha(n)=\sum_{d\mid n}d^\alpha$ denotes the divisor sum of order $\alpha$ and $B_{2k}$ are the Bernoulli numbers. The defining series \eqref{eq:eisenstein-2k} does not converge for $k=1$ in the given form. Nevertheless, the series \eqref{eq:eisenstein-fourier} converges for $k\geq1$.
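Both the dimension formula and the Fourier expansion \eqref{eq:eisenstein-fourier} are easy to check numerically. The sketch below (function names ours) uses exact rational arithmetic; the Bernoulli numbers are generated by the standard recurrence $\sum_{j=0}^{n}\binom{n+1}{j}B_j=0$ for $n\geq1$:

```python
from fractions import Fraction
from math import comb

def dim_modular(w):
    # dim M_w for the full modular group; 0 for odd or negative w.
    # The case w = 2 (dim 0) is already covered by the w ≡ 2 (mod 12) branch.
    if w < 0 or w % 2:
        return 0
    return w // 12 + (0 if w % 12 == 2 else 1)

def bernoulli(m):
    # Bernoulli number B_m (convention B_1 = -1/2), via the standard recurrence
    B = []
    for n in range(m + 1):
        B.append(Fraction(1) if n == 0
                 else -sum(Fraction(comb(n + 1, j)) * B[j] for j in range(n)) / (n + 1))
    return B[m]

def sigma(alpha, n):
    # Divisor sum sigma_alpha(n)
    return sum(d**alpha for d in range(1, n + 1) if n % d == 0)

def eisenstein_coeffs(k, N):
    # First N+1 Fourier coefficients of E_{2k} (valid for k >= 1)
    c = -Fraction(4 * k) / bernoulli(2 * k)
    return [Fraction(1)] + [c * sigma(2 * k - 1, n) for n in range(1, N + 1)]
```

This reproduces the classical expansions $E_2=1-24q-72q^2-\cdots$, $E_4=1+240q+2160q^2+\cdots$ and $E_6=1-504q-16632q^2-\cdots$.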
This entails a slightly more complicated transformation behaviour under the action of $S$ \begin{equation} \label{eq:E2S} z^{-2}E_2(Sz)=E_2(z)+\frac6{\pi iz}. \end{equation} Every holomorphic modular form can be expressed as a polynomial in $E_4$ and $E_6$ with complex coefficients; more precisely, \begin{equation*} \bigoplus_{k=0}^\infty \mathcal{M}_{2k}(\Gamma)=\mathbb{C}[E_4,E_6]. \end{equation*} By the invariance under $T$, every holomorphic modular form $f$ has a Fourier expansion \begin{equation*} f(z)=\sum_{n=0}^\infty a_f(n)e^{2\pi inz}=\sum_{n=0}^\infty a_f(n)q^n. \end{equation*} In the sequel we will freely switch between regarding a form as a function of $z$ or of $q$. A holomorphic form $f$ is called a \emph{cusp form}, if $f(i\infty)=0$. The prototypical example of a cusp form is \begin{equation} \label{eq:Delta} \Delta=\frac1{1728}\left(E_4^3-E_6^2\right). \end{equation} The space of cusp forms is denoted by $\mathcal{S}_w(\Gamma)$. Since we only deal with modular forms for the full modular group $\Gamma$, we will omit reference to the group in the sequel. For a detailed introduction to the theory of modular forms we refer to \cite{Shimura2012:modular_forms_basics, Berndt_Knopp2008:heckes_theory_modular, Bruinier_Geer_Harder+2008:1_2_3-modular, Stein2007:modular_forms_computational, Diamond_Shurman2005:first_course_modular, Iwaniec1997:topics_classical_automorphic, Lang1995:introduction_to_modular}. \subsection{Quasimodular forms}\label{sec:quasimodular-forms} The vector space of quasimodular forms of weight $w$ and depth $\leq r$ is given by \begin{equation} \label{eq:quasi-space} \mathcal{QM}_w^{r}=\bigoplus_{\ell=0}^rE_2^\ell\mathcal{M}_{w-2\ell}. \end{equation} Quasimodular forms occur naturally as derivatives of modular forms (see \cite{Royer2012:quasimodular_forms_introduction, Zagier2008:elliptic_modular_forms,Choie_Lee2019:jacobi_like_forms}). This aspect will be used and elaborated later.
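A small self-contained Python sketch (helper names ours, Bernoulli numbers hard-coded) checks the expansion of $\Delta$ from \eqref{eq:Delta} against Ramanujan's values $\tau(n)=1,-24,252,-1472,4830,\ldots$ and verifies $D\Delta=E_2\Delta$, a first instance of the derivative of a modular form being quasimodular:

```python
from fractions import Fraction as F

N = 5  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42)}  # Bernoulli numbers (standard values)

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    """E_w = 1 - (2w/B_w) sum_n sigma_{w-1}(n) q^n, truncated."""
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    """Cauchy product of truncated q-expansions."""
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    """D = q d/dq acting on a truncated q-expansion."""
    return [n * c for n, c in enumerate(f)]

E2, E4, E6 = eis(2), eis(4), eis(6)
Delta = [(a - b) / 1728 for a, b in zip(mul(mul(E4, E4), E4), mul(E6, E6))]
```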
Throughout this paper we use the notation \begin{equation*} Df=\frac1{2\pi i}\frac{df}{dz}=q\frac{df}{dq}. \end{equation*} Higher derivatives are always expressed as powers of $D$. Upper indices will never denote derivatives. With this notation Ramanujan's identities read \begin{equation} \label{eq:ramanujan} \begin{split} DE_2&=\frac1{12}\left(E_2^2-E_4\right)\\ DE_4&=\frac13\left(E_2E_4-E_6\right)\\ DE_6&=\frac12\left(E_2E_6-E_4^2\right)\\ D\Delta&=E_2\Delta. \end{split} \end{equation} These give rise to the definition of the Ramanujan-Serre derivative \begin{equation*} \partial_wf=Df-\frac w{12}E_2f, \end{equation*} where $w$ is (related to) the weight of $f$. We will use the product rule \begin{equation*} \partial_{w_1+w_2}(fg)=\left(\partial_{w_1}f\right)g+ f\left(\partial_{w_2}g\right) \end{equation*} and also make frequent use of the following immediate consequences of \eqref{eq:ramanujan} \begin{equation} \label{eq:serre-ramanujan} \begin{split} \partial_1E_2&=-\frac1{12}E_4\\ \partial_4E_4&=-\frac13E_6\\ \partial_6E_6&=-\frac12E_4^2\\ \partial_{12}\Delta&=0. \end{split} \end{equation} From the second and third equation together with the fact that every holomorphic modular form is a polynomial in $E_4$ and $E_6$, it follows immediately that for a form $f\in\mathcal{M}_w$ we have $\partial_wf\in\mathcal{M}_{w+2}$, and for $f\in\mathcal{S}_w$ we have $\partial_wf\in\mathcal{S}_{w+2}$. We set \begin{equation*} \mathcal{QS}_w^r=\bigoplus_{\ell=0}^rE_2^\ell\mathcal{S}_{w-2\ell}=\Delta\mathcal{QM}_{w-12}^r \end{equation*} for the space of quasimodular forms with cusp form coefficients for all powers of $E_2$. For the spaces of quasimodular forms we have the alternative descriptions as \begin{equation}\label{eq:QM-D} \mathcal{QM}_w^r=\bigoplus_{\ell=0}^rD^\ell\mathcal{M}_{w-2\ell}, \end{equation} see for instance \cite[Proposition~14.3]{Choie_Lee2019:jacobi_like_forms}, and \begin{equation}\label{eq:QS-D} \mathcal{QS}_w^r=\bigoplus_{\ell=0}^rD^\ell\mathcal{S}_{w-2\ell}.
\end{equation} The second decomposition follows from the last equation in \eqref{eq:ramanujan}. The direct sum in \eqref{eq:QM-D} can be further refined as \begin{equation}\label{eq:QM-decomp} \mathcal{QM}_w^r=\bigoplus_{\ell=0}^rD^\ell\left(\mathcal{S}_{w-2\ell}\oplus\mathbb{C} E_{w-2\ell}\right)= \mathcal{QS}_w^r\oplus\bigoplus_{\ell=0}^r\mathbb{C} D^\ell E_{w-2\ell}. \end{equation} We define the ``Eisenstein space'' as the quotient \begin{equation} \label{eq:eisenstein} \mathcal{QE}_w^r=\mathcal{QM}_w^r/\mathcal{QS}_w^r. \end{equation} We write $\overline{f}$ for $f+\mathcal{QS}_w^r$. Notice that this notation implicitly depends on $w$ and $r$. Then $D$ maps $\mathcal{QE}_w^r$ to $\mathcal{QE}_{w+2}^{r+1}$. Similarly, $\partial_{w-r}$ maps $\mathcal{QE}_{w}^r$ to $\mathcal{QE}_{w+2}^r$ by \cite[Lemma~2.2]{Grabner2020:quasimodular_forms}. Notice that it makes sense to define $\overline{f}(i\infty)$ for $\overline{f}\in\mathcal{QE}_w^r$. Thus we can define \begin{equation*} \mathcal{Q}\hspace*{-1pt}^{^0}\hspace*{-3pt}\mathcal{E}_w^r=\{\overline{f}\in\mathcal{QE}_w^r\mid \overline{f}(i\infty)=0\}. \end{equation*} Then for $w\geq2r+4$ \begin{equation*} \{\overline{D^\ell E_{w-2\ell}}\mid \ell=0,\ldots,r\} \end{equation*} is a basis of $\mathcal{QE}_w^r$, and \begin{equation*} \{\overline{D^\ell E_{w-2\ell}}\mid \ell=1,\ldots,r\} \end{equation*} is a basis of $\mathcal{Q}\hspace*{-1pt}^{^0}\hspace*{-3pt}\mathcal{E}_w^r$. Notice that for $v,w\geq4$ \begin{equation*} E_{v+w}-E_vE_w \end{equation*} is a cusp form of weight $v+w$, which allows us to write \begin{equation}\label{eq:EwEv} \overline{E_vE_w}=\overline{E_{v+w}}. \end{equation} Using the definition of the Serre derivative, we obtain \begin{equation*} \partial_wE_w=DE_w-\frac w{12}E_2E_w, \end{equation*} which is a modular form of weight $w+2$ with constant coefficient $-\frac w{12}$.
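Since $\partial_8E_8$ is a modular form of weight $10$ with constant coefficient $-\frac8{12}$ and $\mathcal{S}_{10}=\{0\}$, we even get the exact identity $\partial_8E_8=-\frac23E_{10}$. A self-contained Python sketch (helper names ours) verifies this on truncated $q$-expansions:

```python
from fractions import Fraction as F

N = 4  # truncate q-expansions after q^N

B = {2: F(1, 6), 8: F(-1, 30), 10: F(5, 66)}  # Bernoulli numbers (standard values)

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    return [n * c for n, c in enumerate(f)]

E2, E8, E10 = eis(2), eis(8), eis(10)
# Serre derivative of E_8 (weight 8):  partial_8 E_8 = D E_8 - (8/12) E_2 E_8
serre_E8 = [d - F(8, 12) * p for d, p in zip(D(E8), mul(E2, E8))]
```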
Thus we have \begin{equation*} \partial_wE_w=-\frac w{12}E_{w+2}+\text{cusp form}, \end{equation*} which we write as \begin{equation} \label{eq:serre-Ew} \overline{\partial_wE_w}=\partial_w\overline{E_w}= -\frac w{12}\overline{E_{w+2}}. \end{equation} Using this we obtain \begin{equation*} \partial_{w-r}\overline{E_{w-2\ell}E_2^\ell}= \overline{\left(\partial_{w-2\ell}E_{w-2\ell}\right)E_2^\ell}+ \overline{E_{w-2\ell}\left(\partial_{2\ell-r}E_2^\ell\right)}, \end{equation*} from which we derive \begin{equation} \label{eq:serre-EwE2} \begin{split} \partial_{w-r}\overline{E_{w-2\ell}E_2^\ell}&= -\frac\ell{12}\overline{E_{w-2\ell}E_4E_2^{\ell-1}} -\frac{w-2\ell}{12}\overline{E_{w-2\ell+2}E_2^\ell}- \frac{\ell-r}{12}\overline{E_{w-2\ell}E_2^{\ell+1}}\\ &=-\frac\ell{12}\overline{E_{w-2\ell+4}E_2^{\ell-1}} -\frac{w-2\ell}{12}\overline{E_{w-2\ell+2}E_2^\ell}- \frac{\ell-r}{12}\overline{E_{w-2\ell}E_2^{\ell+1}}. \end{split} \end{equation} Here we have used \begin{equation*} \partial_{2\ell-r}E_2^\ell=-\frac\ell{12}E_4E_2^{\ell-1} -\frac{\ell-r}{12}E_2^{\ell+1}. \end{equation*} Consider the forms \begin{equation}\label{eq:f_wk} f_w^{(k)}=\sum_{\ell=0}^k(-1)^\ell\binom k\ell E_{w-2\ell}E_2^\ell, \end{equation} where we set $E_0=1$ and omit the term for $w-2\ell=2$, which only occurs if $w\leq2k+2$. We compute using \eqref{eq:serre-EwE2} \begin{align*} &\partial_{w-r}\overline{f_w^{(k)}}=\sum_{\ell=0}^k(-1)^\ell\binom k\ell\\ &\times \left(-\frac\ell{12}\overline{E_{w-2\ell+4}E_2^{\ell-1}} -\frac{w-2\ell}{12}\overline{E_{w-2\ell+2}E_2^\ell} -\frac{\ell-r}{12}\overline{E_{w-2\ell}E_2^{\ell+1}}\right). 
\end{align*} Expanding the binomial coefficients and shifting the summation index gives \begin{align*} &\partial_{w-r}\overline{f_w^{(k)}}=-\frac1{12} \sum_{\ell=0}^{k+1}(-1)^\ell \overline{E_{w-2\ell+2}E_2^\ell}\\ &\times\left(-k\binom{k-1}\ell+w\binom k\ell -2k\binom{k-1}{\ell-1}-k\binom{k-1}{\ell-2}+r\binom k{\ell-1}\right), \end{align*} where we have set $\binom km=0$ for $m<0$ and $m>k$. The term in parenthesis is then equal to \begin{equation*} (w-r)\binom k\ell+(r-k)\binom{k+1}\ell, \end{equation*} which gives \begin{equation}\label{eq:Serre-fk} \partial_{w-r}\overline{f_w^{(k)}}=-\frac{w-r}{12}\overline{f_{w+2}^{(k)}} -\frac{r-k}{12}\overline{f_{w+2}^{(k+1)}} \end{equation} for $k=0,\ldots,r$ and $w\geq2r+2$ (the case $k=r$ and $w=2r+2$ has to be checked separately). We also have for $w\geq4$ \begin{equation}\label{eq:DEw} \overline{D^kE_w}=(-1)^k\frac{(w)_k}{12^k}\overline{f_{w+2k}^{(k)}}, \end{equation} where $(w)_k=w(w+1)\cdots(w+k-1)$ denotes the Pochhammer symbol. We prove \eqref{eq:DEw} by induction. For $k=0$ it obviously holds. The induction step reads as (using \eqref{eq:Serre-fk} for $r=k$) \begin{align*} \overline{D^{k+1}E_w}&=(-1)^k\frac{(w)_k}{12^k}\overline{Df_{w+2k}^{(k)}}\\ &= (-1)^k\frac{(w)_k}{12^k}\left(\overline{\partial_{w+k}f_{w+2k}^{(k)}} +\frac{w+k}{12}\overline{E_2f_{w+2k}^{(k)}}\right)\\ &= (-1)^{k+1}\frac{(w)_k}{12^{k+1}}\left((w+k)\overline{f_{w+2k+2}^{(k)}} -(w+k)\overline{E_2f_{w+2k}^{(k)}}\right)\\ &=(-1)^{k+1}\frac{(w)_{k+1}}{12^{k+1}}\overline{f_{w+2k+2}^{(k+1)}}. \end{align*} Furthermore, we have \begin{equation} \label{eq:f_w+l} \overline{E_{v}f_w^{(k)}}=\overline{f_{w+v}^{(k)}} \end{equation} for $w\geq2k+4$ and $v\geq4$, which follows from \eqref{eq:EwEv}. For later reference we notice that \begin{equation}\label{eq:D-eisenstein} D^kE_w(z)=-\frac{2w}{B_w}\sum_{n=1}^\infty n^k\sigma_{w-1}(n)q^n \end{equation} for $k\geq1$ and even $w\geq2$. 
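Since $\mathcal{QS}_8^2=\Delta\,\mathcal{QM}_{-4}^2=\{0\}$, the congruence \eqref{eq:DEw} for $w=4$, $k=2$ is an exact identity, $D^2E_4=\frac{4\cdot5}{144}f_8^{(2)}$ with $f_8^{(2)}=E_8-2E_2E_6+E_2^2E_4$. The following self-contained Python sketch (helper names ours) verifies this on truncated $q$-expansions:

```python
from fractions import Fraction as F

N = 3  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42), 8: F(-1, 30)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    return [n * c for n, c in enumerate(f)]

E2, E4, E6, E8 = eis(2), eis(4), eis(6), eis(8)
# f_8^{(2)} = E_8 - 2 E_2 E_6 + E_2^2 E_4, cf. the definition of f_w^{(k)}
f8_2 = [a - 2 * b + c
        for a, b, c in zip(E8, mul(E2, E6), mul(mul(E2, E2), E4))]
```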
We will use the following convention for iterated Serre derivatives \begin{equation*} \partial_w^0f=f,\quad \partial_w^{k+1}f=\partial_{w+2k}\left(\partial_w^kf\right). \end{equation*} With this we get the following expressions for higher derivatives in terms of Serre derivatives, which we will need later \begin{equation} \label{eq:D-serre} \begin{split} Df&=\partial_{w-2}f+E_2\frac{w-2}{12}f\\ D^2f&=\left(\partial_{w-4}^2f-\frac{w-4}{144}E_4f\right) +E_2\frac{w-3}6\partial_{w-4}f +E_2^2\frac{(w-3)(w-4)}{144}f\\ D^3f&=\left(\partial_{w-6}^3f-\frac{3w-16}{144}E_4\partial_{w-6}f+ \frac{w-6}{432}E_6f\right)\\ &+E_2\left(\frac{w-4}4\partial_{w-6}^2f-\frac{(w-4)(w-6)}{576}E_4f\right)\\ &+E_2^2\frac{(w-4)(w-5)}{48}\partial_{w-6}f+ E_2^3\frac{(w-4)(w-5)(w-6)}{1728}f\\ D^4f&=\left(\partial_{w-8}^4f-\frac{3w-20}{72}E_4\partial_{w-8}^2f +\frac{2w-15}{216}E_6\partial_{w-8}f+\frac{(w-8)(w-14)}{6912}E_4^2f \right)\\ &+E_2\left(\frac{w-5}3\partial_{w-8}^3f -\frac{(3w-22)(w-5)}{432}E_4\partial_{w-8}f +\frac{(w-5)(w-8)}{1296}E_6f\right)\\ &+E_2^2\left(\frac{(w-5)(w-6)}{24}\partial_{w-8}^2f -\frac{(w-5)(w-6)(w-8)}{3456}E_4f\right)\\ &+E_2^3\frac{(w-5)(w-6)(w-7)}{432}\partial_{w-8}f +E_2^4\frac{(w-5)(w-6)(w-7)(w-8)}{20736}f. \end{split} \end{equation} \begin{prop}\label{prop1} Let $g\in\mathcal{QS}_w^r$ be given by its Fourier expansion \begin{equation*} g(z)=\sum_{n=1}^\infty a(n)q^n. \end{equation*} Then $a(n)=\mathcal{O}(n^{\frac{w-1}2}\sigma_0(n))$. \end{prop} \begin{proof} Let $g$ first be in $D^\ell\mathcal{S}_{w-2\ell}$ for some $\ell\geq0$. Then $g$ is the $\ell$-th derivative of a cusp form $G\in\mathcal{S}_{w-2\ell}$. By Deligne's estimate \cite[Th\'eor\`eme~8.2]{Deligne1974:la_conjecture_weil} (see also \cite[Section~14.9]{Iwaniec_Kowalski2004:analytic_number_theory}) the Fourier coefficients of $G$ are bounded by $\mathcal{O}(n^{\frac{w-1}2-\ell}\sigma_0(n))$. The effect of $\ell$-fold differentiation is multiplication by $n^\ell$, which gives the desired estimate for this special case.
Since the same estimate holds for all spaces in the direct sum \eqref{eq:QS-D}, it holds for every $g$ in $\mathcal{QS}_w^r$. \end{proof} \begin{prop}\label{prop2} Let $f_w^{(k)}$ be the form given by \eqref{eq:f_wk} and let \begin{equation*} f_w^{(k)}=\delta_{k,0}+\sum_{n=1}^\infty a_w^{(k)}(n)q^n \end{equation*} be its Fourier expansion. Then the asymptotic expansion \begin{equation} \label{eq:a_wk} a_w^{(k)}(n)= \begin{cases} -\frac{2w}{B_w}\sigma_{w-1}(n)&\text{for }k=0,\\[4pt] (-1)^{k+1}\frac{2\cdot12^k}{(w-2k+1)_{k-1}B_{w-2k}}n^k\sigma_{w-2k-1}(n) +\mathcal{O}\left(n^{\frac{w-1}2}\sigma_0(n)\right) &\text{for }k>0 \end{cases} \end{equation} holds. \end{prop} \begin{proof} The case $k=0$ is just the Fourier expansion of the Eisenstein series given in \eqref{eq:eisenstein-fourier}. For $k>0$ we use \eqref{eq:DEw} to obtain \begin{equation*} f_w^{(k)}=(-1)^k\frac{12^k}{(w-2k)_k}D^kE_{w-2k}+h_{w}^{(k)}, \end{equation*} where $h_{w}^{(k)}\in\mathcal{QS}_{w}^{k}$. The Fourier coefficient of the first term equals \begin{equation*} (-1)^{k+1}\frac{12^k}{(w-2k)_k}\frac{2(w-2k)}{B_{w-2k}}n^k\sigma_{w-2k-1}(n) \end{equation*} using \eqref{eq:D-eisenstein}. The Fourier coefficient of $h_{w}^{(k)}$ is estimated using Proposition~\ref{prop1} to obtain \eqref{eq:a_wk}. \end{proof} We recall the dimension formulas for the spaces $\mathcal{QM}_w^r$ for $1\leq r\leq4$: \begin{equation} \label{eq:dimqm} \begin{split} \dim\mathcal{QM}_w^1&=\left\lfloor\frac w{6}\right\rfloor+1\\ \dim\mathcal{QM}_w^2&=\left\lfloor\frac w{4}\right\rfloor+1\\ \dim\mathcal{QM}_w^3&=\left\lfloor\frac w{3}\right\rfloor+1\\ \dim\mathcal{QM}_w^4&= \begin{cases} \left\lfloor\frac {5w}{12}\right\rfloor&\text{if }w\equiv10\pmod{12}\\ \left\lfloor\frac {5w}{12}\right\rfloor+1&\text{otherwise;} \end{cases} \end{split} \end{equation} see, for instance \cite[Proposition~2.1]{Grabner2020:quasimodular_forms}.
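The formulas \eqref{eq:dimqm} can be cross-checked against $\dim\mathcal{QM}_w^r=\sum_{\ell=0}^r\dim\mathcal{M}_{w-2\ell}$, which follows from \eqref{eq:quasi-space}. A short Python sketch (function names ours):

```python
def dim_M(k):
    """dim M_k for the full modular group (0 for negative, odd, or k = 2)."""
    if k < 0 or k % 2 == 1 or k == 2:
        return 0
    return k // 12 if k % 12 == 2 else k // 12 + 1

def dim_QM(w, r):
    """dim QM_w^r computed from the direct sum over powers of E_2."""
    return sum(dim_M(w - 2 * l) for l in range(r + 1))

def dim_formula(w, r):
    """The closed dimension formulas for depth r <= 4."""
    if r == 1:
        return w // 6 + 1
    if r == 2:
        return w // 4 + 1
    if r == 3:
        return w // 3 + 1
    return 5 * w // 12 if w % 12 == 10 else 5 * w // 12 + 1
```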
\section{Extremal quasimodular forms} \label{sec:extr-quas-forms} The notion of an \emph{extremal quasimodular form} was introduced in \cite{Kaneko_Koike2006:extremal_quasimodular_forms}. They are defined as quasimodular forms achieving the maximal possible order of vanishing at $z=i\infty$ for given weight $w$ and depth $r$. It follows from a simple dimension argument that the order of vanishing $\dim\mathcal{QM}_w^r-1$ can be achieved. It was shown in \cite[Theorem~1.3]{Pellarin_Nebe2020:extremal_quasi_modular} and independently in \cite[Remark~4.7]{Grabner2020:quasimodular_forms} that for $r\leq4$ this is actually the precise order of vanishing for such forms. In \cite{Kaneko_Koike2006:extremal_quasimodular_forms} differential equations satisfied by extremal quasimodular forms are found for $r=1$ and $r=2$. Furthermore, two conjectures about these forms for $r\leq4$ are stated: \begin{itemize} \item if the first non-zero Fourier coefficient of the extremal quasimodular form equals $1$ (the form is called \emph{normalised} then), the denominators of all Fourier coefficients are then divisible only by primes less than the weight. \item if the first non-zero Fourier coefficient of the extremal quasimodular form is positive, then all Fourier coefficients are positive. \end{itemize} The first conjecture has been proved for $r=1$ in \cite{Pellarin_Nebe2020:extremal_quasi_modular} and \cite{Mono2020:conjecture_kaneko_koike}. It has been proved in full generality for $1\leq r\leq4$ in \cite{Grabner2020:quasimodular_forms}. In the course of the following section we will prove the following theorem, which partially settles the second conjecture. \begin{theorem}\label{thm1} Let $g_w^{(r)}$ be a normalised extremal quasimodular form of weight $w$ and depth $r\leq4$. Then all but possibly finitely many Fourier coefficients are positive. 
\end{theorem} \section{Proof of Theorem~\ref{thm1}} The proof of Theorem~\ref{thm1} proceeds separately for each value of the depth parameter $r$. We will make frequent use of recursive relations for extremal quasimodular forms derived in \cite{Grabner2020:quasimodular_forms}. Notice that by definition an extremal quasimodular form is in $\mathcal{Q}\hspace*{-1pt}^{^0}\hspace*{-3pt}\mathcal{E}_w^{r}\oplus\mathcal{QS}_w^{r}$. The proofs follow the general scheme \begin{itemize} \item express the form $g_w^{(r)}$ in terms of a linear recurrence obtained in \cite{Grabner2020:quasimodular_forms}; \item use this recurrence to obtain a linear recurrence for the coefficients of $\overline{f_w^{(\ell)}}$ $(\ell=1,\ldots,r)$ in the decomposition of $\overline{g_w^{(r)}}$; \item rewrite this decomposition in terms of $\overline{D^\ell E_{w-2\ell}}$ and observe the positivity of the asymptotic main term originating from $DE_{w-2}$. \end{itemize} The recursions obtained in \cite[Section~6]{Grabner2020:quasimodular_forms} contain a positive factor that ensures that the forms are normalised, which is important in the context there. In this section we use these recursions without this factor and at some occasions change this factor, which does not affect the sign of the coefficients. \subsection{Depth $1$}\label{sec:depth-1} Using \cite[Proposition~6.1]{Grabner2020:quasimodular_forms} we define a sequence of quasimodular forms by \begin{equation}\label{eq:recurr1} \begin{split} g_6^{(1)}&=E_2E_4-E_6=-f_6^{(1)}=3DE_4\\ g_{w+6}^{(1)}&=E_4\partial_{w-1}g_w^{(1)}-\frac{w+1}{12}E_6g_w^{(1)}\\ g_{w+2}^{(1)}&=\frac{12}{w-1}\partial_{w-1}g_w^{(1)}\\ g_{w+4}^{(1)}&=E_4g_w^{(1)} \end{split} \end{equation} for $w\equiv0\pmod6$. These forms are then extremal quasimodular forms of weight $w$ and depth $1$ with positive coefficient of the first non-vanishing term of their Fourier expansions.
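A numerical sanity check of the recurrence \eqref{eq:recurr1} (a self-contained Python sketch on truncated $q$-expansions; helper names ours): starting from $g_6^{(1)}=3DE_4$, the second line with $w=6$ produces $g_{12}^{(1)}$, which indeed vanishes to order $\dim\mathcal{QM}_{12}^1-1=2$ and has positive coefficients:

```python
from fractions import Fraction as F

N = 3  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    return [n * c for n, c in enumerate(f)]

E2, E4, E6 = eis(2), eis(4), eis(6)

def serre(f, w):
    """Ramanujan-Serre derivative partial_w f = D f - (w/12) E_2 f."""
    return [d - F(w, 12) * p for d, p in zip(D(f), mul(E2, f))]

g6 = [a - b for a, b in zip(mul(E2, E4), E6)]          # g_6^{(1)} = E_2 E_4 - E_6
# second line of the recurrence with w = 6:
g12 = [a - F(7, 12) * b for a, b in zip(mul(E4, serre(g6, 5)), mul(E6, g6))]
```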
By the fact that $\mathcal{Q}\hspace*{-1pt}^{^0}\hspace*{-3pt}\mathcal{E}_w^1$ is one dimensional we set \begin{equation*} \overline{g_w^{(1)}}=C_w\overline{f_w^{(1)}}. \end{equation*} Inserting this into \eqref{eq:recurr1} and using \eqref{eq:Serre-fk} and \eqref{eq:f_w+l} gives \begin{align*} \overline{g_{w+6}^{(1)}}&=C_w\left(\overline{E_4\partial_{w-1}f_w^{(1)}} -\frac{w+1}{12}\overline{E_6f_w^{(1)}}\right)\\ &= -C_w\left(\frac{w-1}{12}\overline{E_4f_{w+2}^{(1)}}+ \frac{w+1}{12}\overline{E_6f_w^{(1)}}\right)= -\frac{w}{6}C_w\overline{f_{w+6}^{(1)}}, \end{align*} from which we derive \begin{equation*} C_{w}=(-1)^{w/6}\left(\frac w6-1\right)!. \end{equation*} Together with \eqref{eq:DEw} this gives \begin{equation*} \overline{g_{6k}^{(1)}}=(-1)^{k-1}\frac{6(k-1)!}{3k-1}\overline{DE_{6k-2}}. \end{equation*} Applying the third equation in \eqref{eq:recurr1} we obtain \begin{equation*} \overline{g_{6k+2}^{(1)}}=\frac{12}{6k-1}\partial_{6k-1}\overline{g_{6k}^{(1)}} =(-1)^k\frac{2(k-1)!}{k}\overline{DE_{6k}} \end{equation*} and \begin{equation*} \overline{g_{6k+4}^{(1)}}=\overline{E_4g_{6k}^{(1)}}= (-1)^{k-1}\frac{6(k-1)!}{3k+1}\overline{DE_{6k+2}}, \end{equation*} which gives \begin{equation}\label{eq:g_w-final} \overline{g_w^{(1)}}=(-1)^{\frac w2-1} \frac{12\left(\lfloor\frac w6\rfloor-1\right)!}{w-2}\overline{DE_{w-2}}. \end{equation} The Fourier coefficients of $g_{w}^{(1)}$ are then given by \begin{equation*} \frac{24\left(\lfloor\frac w6\rfloor-1\right)!} {|B_{w-2}|}n\sigma_{w-3}(n)+ \mathcal{O}\left(n^{\frac{w-1}2}\sigma_0(n)\right). \end{equation*} Notice that the first term is of order $n^{w-2}$. Thus we have proved Theorem~\ref{thm1} for $r=1$. 
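The closed form \eqref{eq:g_w-final} can be tested for $w=12$: since $\mathcal{QS}_{12}^1=\mathbb{C}\Delta$, the difference $g_{12}^{(1)}+\frac65DE_{10}$ must be a multiple of $\Delta$ (numerically the multiple is $-\frac{1584}5$). A self-contained Python sketch (helper names ours):

```python
from fractions import Fraction as F

N = 3  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42), 10: F(5, 66)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    return [n * c for n, c in enumerate(f)]

E2, E4, E6, E10 = eis(2), eis(4), eis(6), eis(10)

def serre(f, w):
    return [d - F(w, 12) * p for d, p in zip(D(f), mul(E2, f))]

g6 = [a - b for a, b in zip(mul(E2, E4), E6)]
g12 = [a - F(7, 12) * b for a, b in zip(mul(E4, serre(g6, 5)), mul(E6, g6))]
Delta = [(a - b) / 1728 for a, b in zip(mul(mul(E4, E4), E4), mul(E6, E6))]
lhs = [a + F(6, 5) * b for a, b in zip(g12, D(E10))]   # should lie in C * Delta
```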
\subsection{Depth $2$}\label{sec:depth-2} Using \cite[Proposition~6.2]{Grabner2020:quasimodular_forms} we define a sequence of quasimodular forms by \begin{equation} \label{eq:gw-2} \begin{split} g_4^{(2)}&=E_4-E_2^2=-12DE_2=2f_4^{(1)}-f_4^{(2)}\\ g_{w+4}^{(2)}&=w(w+1)E_4g_w^{(2)}-36\partial_{w-2}^2g_w^{(2)}\\ g_{w+2}^{(2)}&=\frac{12}{w-2}\partial_{w-2}g_w^{(2)} \end{split} \end{equation} for $w\equiv0\pmod4$. The form $g_w^{(2)}$ is then an extremal quasimodular form of weight $w$ and depth $2$ with positive coefficient of the first non vanishing term in its Fourier expansion. We make the ansatz \begin{equation*} \overline{g_{4k}^{(2)}} =a_{4k}\overline{f_{4k}^{(1)}}+b_{4k}\overline{f_{4k}^{(2)}} \end{equation*} with $a_4=2$ and $b_4=-1$. Applying \eqref{eq:Serre-fk} twice gives \begin{align*} \overline{\partial_{4k-2}^2f_{4k}^{(1)}}& =\frac{2k(2k-1)}{36}\overline{f_{4k+4}^{(1)}}+ \frac{4k-1}{72}\overline{f_{4k+4}^{(2)}}\\ \overline{\partial_{4k-2}^2f_{4k}^{(2)}}& =\frac{2k(2k-1)}{36}\overline{f_{4k+4}^{(2)}}. \end{align*} Inserting this into the recurrence \eqref{eq:gw-2} then gives \begin{equation}\label{eq:matrix-recurr2} \begin{pmatrix} a_{4k+4}\\b_{4k+4} \end{pmatrix}= \begin{pmatrix} 6k(2k+1)&0\\ -\frac{4k-1}2&6k(2k+1) \end{pmatrix} \begin{pmatrix} a_{4k}\\b_{4k} \end{pmatrix}. \end{equation} This recurrence has the solutions \begin{align*} a_{4k}&=2\cdot3^{k-1}(2k-1)!\\ b_{4k}&=-3^k(2k-1)!\left(\frac13+ \sum_{\ell=1}^{k-1}\frac{4\ell-1}{18\ell(2\ell+1)}\right), \end{align*} which can be seen from \begin{equation*} a_{4k+4}=6k(2k+1)a_{4k}=3(2k+1)2k\cdot 2\cdot 3^{k-1}(2k-1)!=2\cdot3^k(2k+1)! \end{equation*} and \begin{align*} b_{4k+4}&=6k(2k+1)b_{4k}-\frac{4k-1}{2}a_{4k}\\ &= -3^{k+1}(2k+1)!\left(\frac13+ \sum_{\ell=1}^{k-1}\frac{4\ell-1}{18\ell(2\ell+1)}\right)- (4k-1)3^{k-1}(2k-1)!\\ &= -3^{k+1}(2k+1)!\left(\frac13+ \sum_{\ell=1}^{k-1}\frac{4\ell-1}{18\ell(2\ell+1)} +\frac{4k-1}{18k(2k+1)}\right). 
\end{align*} Similarly we obtain \begin{align*} a_{4k+2}&=-2\cdot3^{k-1}(2k-1)!\\ b_{4k+2}&=3^k(2k-1)!\left(\frac13+ \sum_{\ell=1}^{k-1}\frac{4\ell-1}{18\ell(2\ell+1)}-\frac1{3(2k-1)}\right). \end{align*} Thus we have \begin{equation*} \overline{g_w^{(2)}}=-\frac{12}{w-2}a_w\overline{DE_{w-2}} +\frac{144}{(w-4)(w-3)}b_w\overline{D^2E_{w-4}}, \end{equation*} where we have used \eqref{eq:DEw} to rewrite $\overline{f_w^{(k)}}$ ($k=1,2$) in terms of $\overline{DE_{w-2}}$ and $\overline{D^2E_{w-4}}$. Observing that the sign of $\frac{a_w}{B_{w-2}}$ is always positive, and that the same holds for $\frac{b_w}{B_{w-4}}$, we derive the $n$-th Fourier coefficient of $g_w^{(2)}$ using \eqref{eq:DEw} and \eqref{eq:D-eisenstein} \begin{equation*} \frac{24|a_w|}{|B_{w-2}|}n\sigma_{w-3}(n) -\frac{288|b_w|}{(w-3)|B_{w-4}|}n^2\sigma_{w-5}(n) +\mathcal{O}(n^{\frac{w-1}2}\sigma_0(n)). \end{equation*} Notice that the first term is asymptotically dominating and positive, whereas the second term is negative. This proves Theorem~\ref{thm1} for $r=2$. \subsection{Depth $3$} Using \cite[Proposition~6.3]{Grabner2020:quasimodular_forms} we define a sequence of quasimodular forms by \begin{equation} \label{eq:gw-3} \begin{split} g_6^{(3)}&=5E_2^3-3E_2E_4-2E_6= -12f_6^{(1)}+15f_6^{(2)}-5f_6^{(3)}\\ g_{w+6}^{(3)}&=48(7w^2+42w+60)\partial_{w-3}^3g_w^{(3)}\\ &- (15 w^4+96 w^3+ 151 w^2-30 w-116)E_4\partial_{w-3}g_w^{(3)} \\ &-\frac16(w+1)(9w^4+45w^3+40w^2+24w+144)E_6g_w^{(3)}\\ g_{w+2}^{(3)}&=\partial_{w-3}g_w^{(3)}\\ g_{w+4}^{(3)}&=(w+1)(3w+1)E_4g_w^{(3)}-48\partial_{w-3}^2g_w^{(3)} \end{split} \end{equation} for $w\equiv0\pmod6$. These forms are then extremal quasimodular forms of weight $w$ and depth $3$ with positive coefficient of the first non-vanishing term of their Fourier expansions.
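As a sanity check (a self-contained Python sketch on truncated $q$-expansions; helper names ours): $g_6^{(3)}=5E_2^3-3E_2E_4-2E_6$ indeed vanishes to order $\dim\mathcal{QM}_6^3-1=2$, with positive coefficients $51840$ and $414720$:

```python
from fractions import Fraction as F

N = 3  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

E2, E4, E6 = eis(2), eis(4), eis(6)
# g_6^{(3)} = 5 E_2^3 - 3 E_2 E_4 - 2 E_6
g6 = [5 * a - 3 * b - 2 * c
      for a, b, c in zip(mul(E2, mul(E2, E2)), mul(E2, E4), E6)]
```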
We make the ansatz \begin{equation*} \overline{g_{w}^{(3)}}= a_{w}\overline{f_{w}^{(1)}}+b_{w}\overline{f_{w}^{(2)}} +c_{w}\overline{f_{w}^{(3)}} \end{equation*} with $a_6=-12$, $b_6=15$, and $c_6=-5$ and first consider the case $w=6k$. Applying \eqref{eq:Serre-fk} thrice gives \begin{align*} \overline{\partial_{6k-3}^3f_{6k}^{(1)}}&= -\frac{(6k+1)(6k-1)(2k-1)}{576}\overline{f_{6k+6}^{(1)}} -\frac{108k^2-36k-1}{864}\overline{f_{6k+6}^{(2)}}\\ &\quad- \frac{6k-1}{288}\overline{f_{6k+6}^{(3)}}\\ \overline{\partial_{6k-3}^3f_{6k}^{(2)}}&= -\frac{(6k+1)(6k-1)(2k-1)}{576}\overline{f_{6k+6}^{(2)}} -\frac{108k^2-36k-1}{1728}\overline{f_{6k+6}^{(3)}}\\ \overline{\partial_{6k-3}^3f_{6k}^{(3)}}&= -\frac{(6k+1)(6k-1)(2k-1)}{576}\overline{f_{6k+6}^{(3)}}. \end{align*} Then a computation similar to the one which gave \eqref{eq:matrix-recurr2} gives the recurrence \begin{multline*} \begin{pmatrix} a_{6k+6}\\b_{6k+6}\\c_{6k+6} \end{pmatrix}\\= \begin{pmatrix} \scriptscriptstyle-96 k(2k+1)^2 (3k+1)(3k+2) & 0 & 0 \\ \scriptscriptstyle8 (2k+1) \left(108 k^3+99 k^2+17 k-2\right) & \scriptscriptstyle-96 k(2k+1)^2 (3k+1)(3k+2) & 0 \\ \scriptscriptstyle-(6k-1)(42k^2+42k+10) & \scriptscriptstyle4 (2k+1) \left(108 k^3+99 k^2+17 k-2\right) & \scriptscriptstyle-96 k(2k+1)^2 (3k+1)(3k+2) \\ \end{pmatrix} \begin{pmatrix} a_{6k}\\b_{6k}\\c_{6k} \end{pmatrix}. \end{multline*} Notice that this recurrence implies that $(-1)^ka_{6k}$, $(-1)^{k-1}b_{6k}$, and $(-1)^kc_{6k}$ are positive for all $k\geq1$. 
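The fourth line of \eqref{eq:gw-3} can be tested numerically for $w=6$: the form $g_{10}^{(3)}=7\cdot19\,E_4g_6^{(3)}-48\partial_{3}^2g_6^{(3)}$ should vanish to order $\dim\mathcal{QM}_{10}^3-1=3$ with positive leading coefficient. A self-contained Python sketch (helper names ours):

```python
from fractions import Fraction as F

N = 3  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

def D(f):
    return [n * c for n, c in enumerate(f)]

E2, E4, E6 = eis(2), eis(4), eis(6)

def serre(f, w):
    return [d - F(w, 12) * p for d, p in zip(D(f), mul(E2, f))]

g6 = [5 * a - 3 * b - 2 * c
      for a, b, c in zip(mul(E2, mul(E2, E2)), mul(E2, E4), E6)]
# fourth line of the recurrence with w = 6:  (w+1)(3w+1) E_4 g - 48 partial^2 g
g10 = [133 * a - 48 * b for a, b in zip(mul(E4, g6), serre(serre(g6, 3), 5))]
```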
Applying the third equation in \eqref{eq:gw-3} and using \eqref{eq:Serre-fk} gives \begin{multline*} \overline{g_{6k+2}^{(3)}}= -\frac{2k-1}4a_{6k}\overline{f_{6k+2}^{(1)}}- \left(\frac{2k-1}4b_{6k}+\frac16a_{6k}\right)\overline{f_{6k+2}^{(2)}}\\ -\left(\frac{2k-1}4c_{6k}+\frac1{12}b_{6k}\right)\overline{f_{6k+2}^{(3)}}; \end{multline*} similarly, the fourth equation in \eqref{eq:gw-3} gives \begin{multline*} \overline{g_{6k+4}^{(3)}}= 32k(3k+1)a_{6k}\overline{f_{6k+4}^{(1)}} +\left(32k(3k+1)b_{6k} -\frac{8(3k-1)}3a_{6k}\right)\overline{f_{6k+4}^{(2)}}\\+ \left(32k(3k+1)c_{6k} -\frac23a_{6k}-\frac{4(3k-1)}3b_{6k}\right)\overline{f_{6k+4}^{(3)}}. \end{multline*} Together with \eqref{eq:DEw} and \eqref{eq:D-eisenstein} this gives \begin{multline*} a_w\frac{24}{B_{w-2}}n\sigma_{w-3}(n) -b_w\frac{288}{(w-3)B_{w-4}}n^2\sigma_{w-5}(n)\\ +c_w\frac{3456}{(w-4)(w-5)B_{w-6}}n^3\sigma_{w-7}(n) +\mathcal{O}(n^{\frac{w-1}2}\sigma_0(n)). \end{multline*} The first term is positive by our discussion of the signs of $a_w$ and of the Bernoulli numbers. It is of order $n^{w-2}$ and thus dominates the other terms. This implies the assertion of Theorem~\ref{thm1} for $r=3$.
\subsection{Depth $4$}\label{sec:depth-4} Using \cite[Proposition~6.4]{Grabner2020:quasimodular_forms} we define a sequence of quasimodular forms by \begin{align*} g_{12}^{(4)}&=13025 E_4^3-12796 E_6^2+ 3852 E_2E_4E_6-2706 E_2^2 E_4^2\\ &+27500 E_2^3 E_6- 28875 E_2^4 E_4\\&=34560f_{12}^{(1)}-93456f_{12}^{(2)}+88000f_{12}^{(3)} -28875f_{12}^{(4)}-\frac{15377966208}{691}\Delta\\ g_{w+12}^{(4)}&=-p_0(w)E_4\partial_{w-4}^4g_w^{(4)}+ \frac{(w+4)^4}{12}p_1(w)E_6\partial_{w-4}^3g_w^{(4)}\\ &+\frac1{720}p_2(w) E_4^2\partial_{w-4}^2g_w^{(4)} +\frac1{8640}p_3(w) E_4E_6\partial_{w-4}g_w^{(4)}\\ &+\left(\frac{w+1}{25920}p_4(w)E_4^3+ \frac{(w+1)(w+4)^4}{15}p_5(w)\Delta\right)g_w^{(4)} \end{align*} \begin{align*} g_{w+2}^{(4)}&=\partial_{w-4}g_w^{(4)}\\ g_{w+4}^{(4)}&=(w+1)(2w+1)E_4g_w^{(4)}-18\partial_{w-4}^2g_w^{(4)}\\ g_{w+6}^{(4)}&=\scriptstyle\left(17 w^2+78 w+90\right)\partial_{w-4}^3g_w^{(4)} - \frac1{144}\left(191 w^4+1008 w^3+1504 w^2+192 w-576\right)E_4\partial_{w-4}g_w^{(4)}\\ &\scriptstyle-\frac1{432}(w+1) \left(81 w^4+376 w^3+560 w^2+528 w+576\right)E_6g_w^{(4)} \end{align*} \begin{align*} g_{w+8}^{(4)}&=\scriptstyle-\left(1313 w^6+28678 w^5+255122 w^4+1183008 w^3 +3016512 w^2+ 4012416 w+2177280\right)\partial_{w-4}^4g_w^{(4)}\\ &\scriptstyle +\frac1{144}\bigl(13423 w^8+295800 w^7+2645368 w^6+12166080 w^5+29311504 w^4+29020416 w^3-15653376 w^2\\ &\quad\scriptstyle-56692224 w-33094656\bigr)E_4\partial_{w-4}^2g_w^{(4)}\\ &\scriptstyle+\frac1{432}\bigl(6561 w^9+136994 w^8+1139536 w^7+4759344 w^6+10294016 w^5+11541472 w^4+14671104 w^3 \\&\quad\scriptstyle+41398272 w^2+63016704 w+31974912\bigr)E_6\partial_{w-4}g_w^{(4)}\\ &\scriptstyle+\frac1{2592}(w+1) \bigl(2048 w^9+38685 w^8+287792 w^7+1130616 w^6+3110288 w^5+8497968 w^4\\ &\quad\scriptstyle+18484992 w^3+14141952w^2-20570112 w-30855168\bigr) E_4^2g_w^{(4)}\\ g_{w+10}&=\scriptstyle\left(293 w^4+4332 w^3+22968 w^2+51192 w+40824\right)E_4 \partial_{w-4}^3g_w^{(4)}\\ &\scriptstyle-\frac43\left(w^5+15 w^4+90 w^3+270 
w^2+405 w+243\right) E_6\partial_{w-4}^2g_w^{(4)}\\ &\scriptstyle-\frac1{144}\left(3311 w^6+51234 w^5+291550 w^4+731040 w^3+717696 w^2-2592 w-256608\right)E_4^2\partial_{w-4}g_w^{(4)}\\ &\scriptstyle-\frac1{432}(w+1) \left(1313 w^6+19430 w^5+104354 w^4+251616 w^3+310464 w^2+300672 w+248832\right)E_4E_6g_w^{(4)} \end{align*} for $w\equiv0\pmod{12}$. These forms are then extremal quasimodular forms of weight $w$ and depth $4$ with positive coefficient of the first non vanishing term of its Fourier expansion. The polynomials $p_0,\ldots,p_5$ are given by \begin{align*} p_0(w)&=\scriptstyle 53567 w^{14}+4499628 w^{13}+173318340 w^{12}+4055616864 w^{11}+ 64374205218 w^{10}\notag\\ &\scriptstyle+732790207224 w^9+6165100658404 w^8+38914973459904 w^7+ 185044363180416 w^6\notag\\ &\scriptstyle +659055640624128 w^5+1729058937394176 w^4+ 3237068849283072 w^3\notag\\ &\scriptstyle +4084118362128384 w^2+3105388005949440 w+1072718335180800\notag\\ \end{align*} \begin{align*} p_1(w)&=\scriptstyle21257 w^{11}+1465884 w^{10}+45186990 w^9+821051740 w^8+9759703548 w^7\notag\\ &\scriptstyle + 79588527156 w^6 +453687847200 w^5 +1804779218520 w^4+4900200364800 w^3\notag\\ &\scriptstyle+ 8628400143360 w^2 +8845395333120 w+3990767616000\notag\\ \end{align*} \begin{align*} p_2(w)&=\scriptstyle2662740 w^{16}+224120550 w^{15}+8648003840 w^{14}+202621853220 w^{13}\notag\\ &\scriptstyle+ 3217542322665 w^{12} +36586266504480 w^{11}+306658234963680 w^{10}+\notag\\ &\scriptstyle 1919356528986240 w^9 +8970889439482816 w^8+30866477857195008 w^7\notag\\ &\scriptstyle+ 75319919247624192 w^6 +118664936756305920 w^5+83296021547483136 w^4\notag\\ &\scriptstyle -82769401579438080 w^3 -258790551639293952 w^2-245119018746249216 w \notag\\ &\scriptstyle -86822757140004864\notag\\ \end{align*} \begin{align*} p_3(w)&=\scriptstyle4272785 w^{17}+351970350 w^{16}+13234823080 w^{15}+300533087760 w^{14}\notag\\ &\scriptstyle+4592608729932 w^{13} +49787752253076 w^{12}+392868254956864 w^{11}\notag\\ &\scriptstyle 
+2274866661846720 w^{10}+9597118952486912 w^9 +28789901067644544 w^8 \notag\\ &\scriptstyle +58741997991303168 w^7 +79017091035181056 w^6 +100071999240486912w^5\notag\\ &\scriptstyle+278562611915587584 w^4+779359222970449920 w^3 +1260737947219525632 w^2\notag\\ &\scriptstyle+1054463073573666816 w+355736061701259264\notag\\ \end{align*} \begin{align*} p_4(w)&=\scriptstyle517135 w^{17}+40772970 w^{16} +1455719580 w^{15}+31076826800 w^{14}+441034824168 w^{13}\notag\\ &\scriptstyle +4375275488634 w^{12}+31084796008256 w^{11} +160090786631040 w^{10}+608772267089664 w^9\notag\\ &\scriptstyle +1834128793979392 w^8 +5229385586024448 w^7+15775977503047680 w^6\notag\\ &\scriptstyle +40287913631023104 w^5+57115900062203904 w^4-19258645489385472 w^3\notag\\ &\scriptstyle -224285038806564864 w^2 -343616934723452928 w-182090547421249536\notag\\ \end{align*} \begin{align*} p_5(w)&=\scriptstyle531441 w^{13}+36690686 w^{12}+1133566168 w^{11}+20680195920 w^{10}+247548700336 w^9\notag\\ &\scriptstyle +2043291298652 w^8+11897624359104 w^7+49185666453888 w^6 +143692776009216 w^5\notag\\ &\scriptstyle +293687697411072 w^4+418695721574400 w^3+426532499288064 w^2\notag\\ &\scriptstyle +316421756411904 w+135523565862912.\notag \end{align*} As before we make the ansatz \begin{equation*} \overline{g_{w}^{(4)}}=a_{w}\overline{f_{w}^{(1)}} +b_{w}\overline{f_{w}^{(2)}}+c_{w}\overline{f_w^{(3)}} +d_w\overline{f_w^{(4)}}, \end{equation*} which gives a recurrence \begin{equation*} \begin{pmatrix} a_{12(k+1)}\\b_{12(k+1)}\\c_{12(k+1)}\\d_{12(k+1)}\\ \end{pmatrix}= \begin{pmatrix} \lambda_k&0&0&0\\ -*&\lambda_k&0&0\\ +*&-*&\lambda_k&0\\ -*&+*&-*&\lambda_k \end{pmatrix} \begin{pmatrix} a_{12k}\\b_{12k}\\c_{12k}\\d_{12k}\\ \end{pmatrix}, \end{equation*} where $\lambda_k$ is a polynomial of degree $18$, which factors into rational linear factors and $\pm*$ denotes positive/negative entries. 
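As a check on the initial form: the form $g_{12}^{(4)}$ given above should vanish to order $\dim\mathcal{QM}_{12}^4-1=5$ with positive leading coefficient. A self-contained Python sketch (helper names ours) confirms this on truncated $q$-expansions:

```python
from fractions import Fraction as F

N = 5  # truncate q-expansions after q^N

B = {2: F(1, 6), 4: F(-1, 30), 6: F(1, 42)}

def sigma(n, a):
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def eis(w):
    return [F(1)] + [-F(2 * w) / B[w] * sigma(n, w - 1) for n in range(1, N + 1)]

def mul(f, g):
    return [sum(f[i] * g[n - i] for i in range(n + 1)) for n in range(N + 1)]

E2, E4, E6 = eis(2), eis(4), eis(6)
E2sq, E4sq = mul(E2, E2), mul(E4, E4)
E2cube, E2four = mul(E2sq, E2), mul(E2sq, E2sq)
# g_12^{(4)} = 13025 E4^3 - 12796 E6^2 + 3852 E2 E4 E6
#              - 2706 E2^2 E4^2 + 27500 E2^3 E6 - 28875 E2^4 E4
g12_4 = [13025 * a - 12796 * b + 3852 * c - 2706 * d + 27500 * e - 28875 * f
         for a, b, c, d, e, f in zip(mul(E4sq, E4), mul(E6, E6),
                                     mul(E2, mul(E4, E6)), mul(E2sq, E4sq),
                                     mul(E2cube, E6), mul(E2four, E4))]
```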
The triangular structure of this recurrence, together with the signs of the initial values $a_{12}=34560$, $b_{12}=-93456$, $c_{12}=88000$, and $d_{12}=-28875$, shows that $a_{12k}$, $-b_{12k}$, $c_{12k}$, and $-d_{12k}$ are all positive. From this it follows that $(-1)^{\lfloor\frac w2\rfloor}a_w$ is positive. Finally, this gives the asymptotic formula \begin{multline*} \frac{24a_w}{B_{w-2}}n\sigma_{w-3}(n) -\frac{288b_w}{(w-3)B_{w-4}}n^2\sigma_{w-5}(n)\\+ \frac{3456c_w}{(w-4)(w-5)B_{w-6}}n^3\sigma_{w-7}(n)\\ -\frac{41472d_w}{(w-5)(w-6)(w-7)B_{w-8}}n^4\sigma_{w-9}(n)+ \mathcal{O}\left(n^{\frac{w-1}2}\sigma_0(n)\right) \end{multline*} for the Fourier coefficients of $g_w^{(4)}$, where we have used \eqref{eq:DEw} and \eqref{eq:D-eisenstein} for the explicit expression of the terms coming from $f_w^{(k)}$ ($k=1,\ldots,4$). The first term asymptotically dominates and is positive by our discussion of the sign of $a_w$ and the signs of the Bernoulli numbers. This implies the theorem for $r=4$. \section{Numerical experiments}\label{sec:numer-exper} In \cite{Jenkins_Rouse2011:bounds_coefficients_cusp} an explicit bound for the Fourier coefficients of cusp forms has been derived. \begin{theorem}[Theorem~1 in \cite{Jenkins_Rouse2011:bounds_coefficients_cusp}] \label{thm-jenkins-rouse} Let \begin{equation*} G(z)=\sum_{n=1}^\infty g(n)q^n \end{equation*} be a cusp form of weight $w$. Then \begin{equation} \label{eq:jenkins-rouse} \begin{split} |g(n)|&\leq\sqrt{\log w}\Biggl( 11\sqrt{\sum_{m=1}^\ell\frac{|g(m)|^2}{m^{w-1}}}\\ &+ \frac{e^{18.72}(41.41)^{w/2}}{w^{(w-1)/2}} \left|\sum_{m=1}^\ell g(m)e^{-7.288m}\right|\Biggr)n^{\frac{w-1}2}\sigma_0(n), \end{split} \end{equation} where $\ell$ is the dimension of the space of cusp forms of weight $w$.
\end{theorem} For an application of this theorem we write an extremal quasimodular form of depth $r$ as \begin{equation}\label{eq:gwr-decomp} g_w^{(r)}=\sum_{\ell=1}^r c_\ell D^\ell E_{w-2\ell}+ \sum_{\ell=0}^rD^\ell\alpha_{w-2\ell}, \end{equation} where $c_1,\ldots,c_r$ are the coefficients computed in Sections~\ref{sec:depth-1} to~\ref{sec:depth-4} for the corresponding values of $r$, and $\alpha_{w-2r},\ldots,\alpha_w$ are cusp forms of weights $w-2r,\ldots,w$. Rewriting the forms $g_w^{(r)}$ is done using the expressions for derivatives given in \eqref{eq:D-serre}. To make this clearer, we give the corresponding conversion formulas for the cases $r=1,2$ \begin{align*} &A_w+E_2B_{w-2}=\left(A_w-\frac{12}{w-2}\partial_{w-2}B_{w-2}\right)+ D\left(\frac{12}{w-2}B_{w-2}\right)\\ &A_w+B_{w-2}E_2+C_{w-4}E_2^2\\ &=\left(A_w-\frac{12}{w-2}\partial_{w-2}B_{w-2} +\frac{144}{(w-2)(w-3)}\partial_{w-4}^2C_{w-4}+\frac1{w-3}E_4C_{w-4}\right)\\ &+D\left(\frac{12}{w-2}B_{w-2} -\frac{288}{(w-2)(w-4)}\partial_{w-4}C_{w-4}\right)\\ &+D^2\left(\frac{144}{(w-3)(w-4)}C_{w-4}\right). \end{align*} The cases $r=3,4$ are much more complex; the computations were done using \texttt{Mathematica}. The \texttt{Mathematica} source code is available at \cite{Grabner2020:mathematica_files}. Theorem~\ref{thm-jenkins-rouse} can then be applied to the forms $\alpha_{w-2\ell}$ ($\ell=0,\ldots,r$) to derive bounds of the form $C_\ell n^{\frac{w-1}2}\sigma_0(n)$ for the Fourier coefficients of the forms $D^\ell\alpha_{w-2\ell}$. This gives the bound $(C_0+\cdots+C_r)n^{\frac{w-1}2}\sigma_0(n)$ for the Fourier coefficients of the second sum in \eqref{eq:gwr-decomp}. The Fourier coefficients of the terms in the first sum are \begin{equation*} c_\ell \frac{2(w-2\ell)}{B_{w-2\ell}}n^\ell\sigma_{w-2\ell-1}(n).
\end{equation*} For these we use the bounds \begin{equation*} n^{w-2\ell-1}\leq\sigma_{w-2\ell-1}(n)\leq n^{w-2\ell-1}\sum_{d\mid n}d^{2\ell+1-w}\leq\zeta(w-2\ell-1)n^{w-2\ell-1} \end{equation*} and $\sigma_0(n)\leq2\sqrt n$ to derive an explicit lower bound for the Fourier coefficients of $g_w^{(r)}$. This bound is positive for $n\geq N_0$ for an explicitly computable value $N_0$. For the remaining finitely many Fourier coefficients, positivity can be checked with the help of a computer. We have performed these computations for $1\leq r\leq4$ and $w\leq200$. \begin{acknowledgement} The author is grateful to an anonymous referee for the many valuable comments that improved the readability of the paper. \end{acknowledgement}
\section{Introduction} In condensed matter physics, fluctuations, whether thermal or quantum, usually suppress order. However, this is not a rigorous rule. Some systems undergo an ``Order-by-Disorder'' (ObD) transition in which the fluctuations restore order in an otherwise disordered ground state~\cite{villain1980order,shender1996order}. This ObD transition is, more precisely, the mechanism whereby a system with a non-trivially degenerate ground state develops long-range order by the effect of classical or quantum fluctuations. Therefore, a classical system exhibiting this transition has no long-range order when the temperature is strictly zero and develops some at non-vanishing temperature. This phenomenon was first exhibited in the classical 2D Domino Model~\cite{andre1979frustration}. An experimental 3D realisation, in the form of Ising pyrochlores with staggered antiferromagnetic order frustrated by an applied magnetic field, was recently proposed~\cite{guruciaga2016field,guruciaga2019monte} (concrete examples could be Nd$_2$Hf$_2$O$_7$ or Nd$_2$Zr$_2$O$_7$). Indeed, the zero temperature ObD transition is relatively common in highly frustrated magnetic models~\cite{diep2016theoretical,Chalker11}. In this context, the geometry of the lattice and/or the nature of the interactions make the simultaneous minimisation of each term contributing to the energy impossible~\cite{moessner2006geometrical}. Two consequences of frustration are the increase of the ground-state energy compared to that of the unfrustrated model, and the growth of the number of degenerate ground states with the size of the system (an entropy that scales sublinearly with the number of spins). Although the reason for the classical ObD transition is clear, it has been very difficult to exhibit experimental evidence for it.
One of the reasons is that the transition occurs at zero temperature and it is therefore difficult to establish whether order is selected through the ObD mechanism or is due to energetic contributions not taken into account that actually lift the ground state degeneracy. The aim of this paper is to propose a way to probe the ObD transition in an indirect way which should be relatively easy to implement in the lab. The idea, as we explain in the main part of the article, is to use external magnetic fields to transform the zero temperature transition into a sharp finite temperature crossover, or maybe even a genuine phase transition, and then detect the latter with usual methods. For concreteness, we explain how this is achieved in the context of the 2D Domino Model. The paper is organized as follows. In Sec.~\ref{sec:domino} we recall the definition and main properties of the Domino Model. In particular, we establish the effective 1D model that describes its low energy properties \cite{villain1980order}, which we will use in the rest of our study. In Sec.~\ref{sec:random} we add quenched disorder in the form of columnar random magnetic fields as a first attempt to displace the ObD transition to a finite temperature. We start by showing, with an Imry-Ma argument~\cite{imry1975random}, that such a 2D disordered model cannot have a finite temperature phase transition but just a crossover. Still, we characterise the pseudo ferromagnetic order thus achieved by studying a random field 1D effective model with the renormalization group approach. The next strategy, described in Sec.~\ref{sec:staggered}, is to use alternate columnar magnetic fields. With them we achieve the goal of finding a finite critical temperature, but we lose a bit of the phenomenology of the ObD transition, as we explain in the body of the paper.
In each Section we analyse the quench dynamics of the pure and disordered Domino Models using Monte Carlo simulations, and we describe how the temporal evolution confirms the static behaviour expected asymptotically. A Section with our conclusions closes the article. \section{The Domino Model} \label{sec:domino} The Domino Model is a 2D model defined on a square lattice with two kinds of ions A and B that carry Ising spins and are placed on alternating columns~\cite{villain1980order,andre1979frustration}. There are thus three different interactions $J_{AA}$, $J_{BB}$ and $J_{AB}$ between nearest neighbor spins. $J_{AA}$ and $J_{AB}$ are ferromagnetic ($J_{AA}>0$, $J_{AB} >0$) while $J_{BB}$ is antiferromagnetic ($J_{BB}<0$). With these parameters, all plaquettes in the lattice are frustrated. The system has size $N\times N$ ($N/2$ columns A and $N/2$ columns B each of length $N$) and we assume periodic boundary conditions. Therefore, the Hamiltonian is \begin{equation} H = -J_{AB}\sum_{i,j} s_{i,j} s_{i,j+1} - J_{AA}\sum_{\substack{i\\j\,even}} s_{i,j} s_{i+1,j} - J_{BB}\sum_{\substack{i\\j\,odd}} s_{i,j} s_{i+1,j} \; , \end{equation} with $s_{i,j} = \pm 1$ the Ising spins sitting on the vertices of the square lattice; with this sign convention positive couplings favour alignment. Henceforth, the rows are labeled $i=1,2,...,N$ and the columns are labeled $j=1,2,...,N$. We assume that the interactions respect the hierarchy \begin{equation} J_{AA}\gg \lvert J_{BB} \rvert > J_{AB} \; . \label{eq:hierarchy} \end{equation} \subsection{Ground and first excited states} With the choice of parameters in Eq.~(\ref{eq:hierarchy}), the ground states have ferromagnetic order on each A column and antiferromagnetic order on each B column. Moreover, A and B columns are effectively uncoupled because half of the spins are up and half down in a B column. As a consequence, an A column costs the same energy whether it points up or down.
The ground states are frustrated because only half of the horizontal bonds with coupling constant $J_{AB}$ can be satisfied in an optimal configuration. Moreover, looking at a typical ground state such as the one displayed in Fig.~1(a), we can see that the A columns are either up or down and the spins on the B columns alternate between up and down, yielding a vanishing global magnetization $M = 0$ (in the $N\to\infty$ limit). The same occurs in all $T=0$ ground states. It is easy to see that there are $2^N$ such ground states. The ground state entropy is then sub-extensive, $S\propto N$, but still much larger than the usual ${\mathcal O}(1)$ one of, say, the 2D ferromagnetic Ising model. \begin{figure}[h!] \hspace{2.5cm} (a) \hspace{5cm} (b) \begin{center} \includegraphics[scale=0.45]{figures/domino.png} \hspace{0.5cm} \includegraphics[scale=0.45]{figures/domino_excited.png} \caption{\small (a) A typical ground state of the Domino Model. Ferromagnetic interactions are represented by full lines, antiferromagnetic interactions by dashed lines. The hierarchy in Eq.~(\ref{eq:hierarchy}) is illustrated with bold and thin lines. (b) Two possible excitations are highlighted in red and green. The red one has lower energy than the green one because it is sandwiched between two A columns of the same sign. Flipping a spin of the A columns would cost even more energy because of the hierarchy in the coupling constants in Eq.~(\ref{eq:hierarchy}). } \end{center} \end{figure} Starting from the ground state, we can construct the first excited state by taking a B column sandwiched in between two A columns of the same sign and turning one of its spins from being anti-aligned to being aligned with the A spins, see the red + in Fig.~1(b). This process costs $4\lvert J_{BB} \rvert$ from the two broken vertical bonds and gains $4J_{AB}$ from the two horizontal ones. Fixing $E_{\rm GS}=0$ for the ground state energy, the excited state has energy $E=\epsilon_1 = 4(\lvert J_{BB} \rvert - J_{AB})$.
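The $2^N$ ground-state count can be checked by exhaustive enumeration on a very small lattice. The sketch below is illustrative (not the authors' code); it assumes even Python column indices are A columns and the conventional sign $E=-\sum J\,ss$, under which positive couplings are ferromagnetic:

```python
import itertools
import numpy as np

JAA, JBB, JAB = 2.0, -1.0, 0.75   # parameters used later in the text
N = 4                              # 4x4 lattice with periodic boundaries

def energy(bits):
    s = np.array(bits).reshape(N, N) * 2 - 1   # rows i, columns j
    E = 0.0
    for j in range(N):
        Jv = JAA if j % 2 == 0 else JBB        # even columns: A, odd: B
        E -= Jv * np.sum(s[:, j] * np.roll(s[:, j], -1))   # vertical bonds
        E -= JAB * np.sum(s[:, j] * s[:, (j + 1) % N])     # horizontal bonds
    return E

energies = [energy(b) for b in itertools.product((0, 1), repeat=N * N)]
Emin = min(energies)
print(sum(1 for E in energies if abs(E - Emin) < 1e-9))    # expect 2**N = 16
```

The minimal energy is reached exactly by the $2^N=16$ configurations with uniform A columns and antiferromagnetic B columns.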
The other possible excitation in a B column is one in which the flipped spin is in between two anti-aligned A columns, see the green + in Fig.~1(b), which has energy $\epsilon_2 = 4 |J_{BB}|$ and is a higher excited state than the previous one. So, at low (but finite) temperature, when only the first excited states are statistically relevant, A columns tend to be aligned for these excitations to exist. This entropic effect forces the system to have long range ferromagnetic order of the A columns at low temperature and thus exhibit the zero temperature ObD transition~\cite{villain1980order}. Order is maintained until the critical temperature $T_c=1/\beta_c$ (we set $k_B=1$) given by \begin{equation} \sinh(2\beta_c J_{AB}) \sinh (\beta_c |J_{AA}+J_{BB}|) = 1 \label{eq:critical-temp} \end{equation} beyond which the system becomes a conventional paramagnet. \subsection{The effective 1D model} \label{sec:eff_1D} An effective 1D model for the low temperature, $T\ll \epsilon_2$, properties of the system that focuses on the A columns was derived in~\cite{villain1980order}. The argument goes as follows. First, since $J_{AA}$ is much stronger than the two other couplings, see Eq.~(\ref{eq:hierarchy}), one assumes that the A columns are perfectly aligned and then represents them as macro-spins of value $N$. Second, \\ -- If a B chain is sandwiched in between two A chains with parallel spins, the first excitations have energy \begin{equation} \epsilon_1 = 4(\lvert J_{BB} \rvert - J_{AB}) \; , \end{equation} and $N/2$ of them are possible, as explained in the previous Subsection. The partition function of the B chain in this background (that we indicate with the subscript $F$) is $$ Z_F \simeq [1 + \exp(-\beta \epsilon_1)]^{N/2} \; . $$ -- If, instead, the two A chains have opposite orientation, the $N/2$ excitations of the second kind have energy $\epsilon_2 = 4\lvert J_{BB} \rvert$.
In this other background (that we label $AF$) the partition function is $$Z_{AF} \simeq [1 + \exp(-\beta \epsilon_2)]^{N/2} \; . $$ We can now integrate out the spins of the B columns to get an effective nearest-neighbor coupling $J_{\rm eff}$ between the spins of two nearby A chains. Thinking in terms of a 1D effective model of size $N/2$, the probabilities $P_F$ of two neighboring A chains (of size $N$) being parallel, and $P_{AF}$ of two neighboring A chains being anti-parallel, are $$ P_F = \frac{\exp(\beta J_{\rm eff})}{2\cosh(\beta J_{\rm eff})} \qquad \text{and} \qquad P_{AF} = \frac{\exp(-\beta J_{\rm eff})}{2\cosh(\beta J_{\rm eff})} \; , $$ respectively. On the other hand, the same probabilities in the original model are $$ P_F = \frac{Z_F}{Z_F + Z_{AF}} \qquad \text{and} \qquad P_{AF} = \frac{Z_{AF}}{Z_F + Z_{AF}} \; . $$ Using these equations we find that $$ \frac{P_F}{P_{AF}} = \exp(2\beta J_{\rm eff}) = \frac{(1 + \exp(-\beta \epsilon_1))^{N/2}}{(1 + \exp(-\beta \epsilon_2))^{N/2}} \simeq (1 + \exp(-\beta \epsilon_1))^{N/2} $$ since we choose $\lvert J_{BB} \rvert$ of the same order as $J_{AB}$, which makes $\epsilon_1 = 4(\lvert J_{BB} \rvert - J_{AB}) \ll \epsilon_2 = 4\lvert J_{BB} \rvert$. In conclusion, we find a temperature dependent and ${\mathcal O}(N)$ effective coupling constant \begin{equation} J_{\rm eff}(\beta,N) = \frac{N}{4\beta}\ln[1+ \exp(-\beta \epsilon_1)] \label{eq:Jeff} \end{equation} and the effective Hamiltonian of the 1D system is \begin{equation} H_{\rm eff}(T) = -J_{\rm eff}(T,N) \sum_{j=1}^{N/2} s_j s_{j+1} \label{eq:Heff1D} \end{equation} with the new ${\mathcal O}(1)$ Ising spins, $s_j=\pm 1$, representing the $N/2$ A columns. We see that although the model in Eq.~(\ref{eq:Heff1D}) is one dimensional, the coupling constant is of macroscopic order ($\propto N$), allowing for long-range order in the effective model that represents the ordering of the 2D system.
In this way, as soon as $T>0$, $J_{\rm eff} >0$, forcing the system into a ferromagnetic phase just like a regular 2D ferromagnetic Ising model, even though only the A columns are ferromagnetically ordered: in the thermodynamic limit, the global magnetisation density $m=N^{-1}\sum_{j=1}^{N/2} s_j$ jumps from $0$ to $1/2$ in a discontinuous way. This approximation is valid as long as we use the hierarchy of coupling constants in Eq.~(\ref{eq:hierarchy}) and $\lvert J_{BB} \rvert \sim J_{AB}$. Indeed, we need $J_{AA} \gg (\lvert J_{BB}\rvert, J_{AB})$ to consider the A columns as macro-spins and $\epsilon_1 = 4(\lvert J_{BB} \rvert - J_{AB}) \ll \epsilon_2 = 4\lvert J_{BB} \rvert$ to keep only the first excitation accessible at the temperatures we study. \subsection{Dynamics} As far as we know, the stochastic evolution of the kinetic 2D Domino Model has not been studied in detail yet. We will do so in later Sections of this paper, where we will compare it to the dynamics of the disordered models. \section{Columnar random fields} \label{sec:random} Let us add quenched disorder to the 2D Domino Model in the form of $N/2$ columnar random magnetic fields $h_{i,j}$ that couple bilinearly to the spins, $-\sum_{i,j} h_{i,j} s_{i,j}$, but only to those on the A columns and independently of the row index. In other words, $h_{i,j} = h_j \neq 0$ only for $j$ even. The $h_j$'s are random i.i.d. variables drawn from a Gaussian distribution $\mathcal{N}(0,\sigma^2)$. The typical local random fields take absolute values of the order of $\sigma = {\mathcal O}(1)$. The Imry-Ma argument can be easily applied to show that such a disordered 2D model cannot have a phase transition, as we discuss below (Sec.~\ref{sec:ImryMa}).
Nevertheless, the finite size model can still present a finite temperature crossover from a disordered low temperature state to a quasi ferromagnetically ordered state at a higher temperature, in a way that mimics the ObD transition but at a non-zero temperature, before disordering again at a still higher temperature. We analyse the first crossover in the context of the effective 1D model that we assume remains the same as the one derived in Sec.~\ref{sec:eff_1D}, represented by the $J_{\rm eff}$'s in Eq.~(\ref{eq:Jeff}), even under the random fields, which are, therefore, supposed to be very weak compared to $J_{\rm eff}$ (Sec.~\ref{subsec:1Ddisordered}). Finally, we study the quench dynamics of the 2D model using different initial states and final temperatures mostly in the region with quasi ferromagnetic order (Sec.~\ref{subsec:dynamics}). \subsection{The Imry-Ma argument} \label{sec:ImryMa} Here we show, by extending the Imry-Ma argument~\cite{imry1975random} to this model, that the ferromagnetic phase of the 2D system should be destroyed by the addition of the columnar random magnetic fields. \comments{ The Imry-Ma argument states that, for a usual RFIM in dimension $d$ with Hamiltonian $$H = -\sum_{\langle i,j \rangle}Js_i s_j - \sum_{i}h_is_i$$ with $\langle i,j\rangle$ designating the nearest-neighbors and $h_i$ i.i.d. random variables, with $\mathbb{E}[h_i] = 0$, and $\mathbb{E}[h_i^2] = \sigma^2$, if we flip a spin domain $\Omega$ of characteristic length $l$ starting from an ordered phase we lose: $\epsilon_{boundary} \simeq 2J\partial\Omega$ but we can gain energy from the bulk: $\epsilon_{bulk} \simeq -2\sum_{i \in \Omega} h_i$ giving a total variation of energy \begin{equation} \Delta E \simeq 2J\partial\Omega - 2\sum_{i \in \Omega} h_i \sim 2Jl^{d-1} - 2l^{d/2} Y \end{equation} with $\partial\Omega$ designating the border of the $\Omega$ domain and $Y = \mathcal{N}(0,\sigma^2)$ making use of the CLT in the second line.
Comparing both terms we see that for $d>2$, there is no macroscopic domain of size $l$ that we could flip without increasing the energy of the system ($\Delta E$ is positive for large $l$). Instead, for $d=2$ (and lower), we can have $\Delta E < 0$ so we can lower the energy by flipping a macroscopic domain of spins. Therefore, no ordered phase can exist in the thermodynamic limit. This argument places the lower critical dimension of the RFIM at $d=2$. } Let us sketch why this is so. In order to simplify the discussion, take a homogeneous ferromagnetically coupled ($J_{AB}=J_{AA}=J_{BB}=J>0$) Ising model with columnar random fields. The energy variation due to the reversal of an isotropic domain of aligned spins with linear size $l$ in $D$ dimensions is of the order \begin{equation} \Delta E \sim 2J l^{D-1} - 2l^{\frac{D+1}{2}} Y \; , \label{eq:ImryMa} \end{equation} with $Y$ representing a Gaussian random variable, $Y = \mathcal{N}(0,\sigma^2)$. The first term is the energy cost due to the inclusion of a domain wall with area of the order of $l^{D-1}$ and the second term is the energy gain that one can achieve from the bulk of the domain due to the correlated random fields. The excess energy $\Delta E$ in Eq.~(\ref{eq:ImryMa}) is interpreted as a function of $l$. The bulk term dominates at large $l$, making $\Delta E$ negative, whenever $\frac{D+1}{2} > D-1$, that is, for $D<3$. Accordingly, the lower critical dimension in a RFIM with columnar correlated fields is $D_\ell=3$, higher than the one with i.i.d. local random fields, which is $D_\ell=2$. Therefore the correlated random fields are even more efficient in destroying the ferromagnetic order than the perfectly random ones, as could have been expected. Still, this reasoning only applies in the thermodynamic limit. We may still see a pseudo ferromagnetic phase in small systems.
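The exponent comparison behind this estimate is elementary: the wall term grows as $l^{D-1}$ while the columnar-field bulk term grows as $l^{(D+1)/2}$, and the bulk gain wins at large $l$ exactly when $D<3$. A trivial numerical check of the crossover dimension:

```python
# Wall cost exponent (D-1) vs bulk random-field gain exponent (D+1)/2.
# Ferromagnetic order is stable against domain flips only when the
# wall exponent is strictly larger; D = 3 is the marginal case.
for D in (2, 3, 4):
    wall, bulk = D - 1, (D + 1) / 2
    verdict = "stable" if wall > bulk else "destroyed/marginal"
    print(D, wall, bulk, verdict)
```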
For this reason, we will propose that a pseudo transition survives under the columnar random fields and study it with an effective 1D model before presenting a dynamic analysis that gives support to this assumption. \subsection{The 1D disordered model} \label{subsec:1Ddisordered} We now have an effective 1D Random Field Ising Model (RFIM), with the A columns of size $N$ considered as $N/2$ spins taking values $N s_j=N (\pm 1)$, leading to an effective coupling constant $J_{\rm eff}\propto N$ between the spins $s_j = \pm 1$, and i.i.d. random fields with absolute value of order $\sigma= {\mathcal O}(1)$ that couple linearly to the Ising spins. Its Hamiltonian is \begin{equation} H_{\rm eff}(T) = -J_{\rm eff}(T,N) \sum_{j=1}^{N/2} s_j s_{j+1} - N\sum_{j=1}^{N/2} h_j s_j \; . \label{eq:1DRFIM} \end{equation} For each choice of the $h_j$'s we can compute the partition function, the free energy density and the magnetisation and then average over the different realisations of disorder. Because $J_{\rm eff}$ vanishes at $T=0$, see Eq.~(\ref{eq:Jeff}), the macro-spins are uncoupled at zero temperature and simply align with their associated magnetic field $h_j$. This single ground state still has magnetisation $M'_{GS} = 0$ because in the infinite size limit, half of the $h_j$'s point up and half down. Nonetheless, this ground state now has a lower energy than the one of the model without disorder ($E_{\rm GS} = 0$); more precisely, \begin{equation} E'_{\rm GS} = -N\sum_{j=1}^{N/2} \lvert h_j \rvert \; . \end{equation} In the $N \gg 1$ limit, using $\mathbb{E}[\lvert h_j \rvert] = \sqrt{2/\pi} \ \sigma$ and the law of large numbers \begin{equation} E'_{\rm GS} \simeq -\sqrt{1/(2\pi)} \ \sigma \; N^2 \; .
\end{equation} At very low temperatures $J_{\rm eff}$ is very weak and the system is expected to stay in this zero magnetization ground state until a sufficiently high temperature is reached, making $J_{\rm eff}(T,N)$ strong enough, for some ferromagnetic order to appear despite some of the spins having to be anti-aligned with their magnetic field. The energy gain by aligning the $N/2$ macro-spins is $E_F = -(N/2)J_{\rm eff} (T_{\rm ObD}^{\rm ran})$. We can estimate the crossover temperature $T_{\rm ObD}^{\rm ran}$ by comparing $E_F$ and $E'_{\rm GS}$, leading to \begin{equation} J_{\rm eff}(T_{\rm ObD}^{\rm ran},N) \sim \sqrt{2/\pi} \; \sigma N \label{eq:JeffTobd} \end{equation} with $J_{\rm eff}$ still given by Eq.~(\ref{eq:Jeff}). Using this equation and setting $\epsilon_1 = 1$, we find that for $\sigma = 0.01$ we should have $T_{\rm ObD}^{\rm ran} \sim 0.4$ and for $\sigma = 0.005$, $T_{\rm ObD}^{\rm ran} \sim 0.33$. More generally, $T_{\rm ObD}^{\rm ran}$ is an increasing function of $\sigma$ that vanishes at $\sigma=0$. We insist upon the fact that the effective 1D RFIM that we constructed does not have a genuine phase transition, just as the conventional 1D RFIM does not have one. Still, we can use the random fields to create a sharp crossover at a finite temperature. Our intuition is that this crossover should be reminiscent of a first order phase transition because the two lowest-energy states before and after the crossover are not continuously connected. We can estimate the length of the system $N_{IM}$ beyond which the pseudo ferromagnetic phase ceases to exist, that is, when flipping a macroscopic domain can lower the energy of the system ($\Delta E < 0$). Thinking in terms of the 1D effective model with the coupling constant $J_{\rm eff}$, the reversal of a domain of length $L$ implies an energy cost equal to $4J_{\rm eff}$ and a possible energy gain equal to $-N\sigma \sqrt{L/(2\pi)}$ due to the random fields.
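Equation~(\ref{eq:JeffTobd}) can be solved numerically for $T_{\rm ObD}^{\rm ran}$; note that $N$ drops out since both sides are ${\mathcal O}(N)$. A minimal sketch (ours, not the authors' code), assuming $\epsilon_1=1$ as in the text:

```python
import math

def j_eff_per_n(T, eps1=1.0):
    # Effective coupling per unit N, from Eq. (eq:Jeff):
    # J_eff / N = (T/4) * ln(1 + exp(-eps1 / T)); monotonically increasing in T.
    return 0.25 * T * math.log(1.0 + math.exp(-eps1 / T))

def t_obd(sigma, eps1=1.0):
    # Solve (T/4) ln(1 + e^{-eps1/T}) = sqrt(2/pi) * sigma by bisection,
    # i.e. Eq. (eq:JeffTobd) after cancelling the common factor N.
    target = math.sqrt(2.0 / math.pi) * sigma
    lo, hi = 1e-6, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if j_eff_per_n(mid, eps1) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(t_obd(0.01), 2))    # ~0.40
print(round(t_obd(0.005), 2))   # ~0.33
```

The two printed values reproduce the estimates quoted in the text for $\sigma=0.01$ and $\sigma=0.005$.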
These two scales are equal for \begin{equation} L_{IM} \sim \bigg(\frac{4J_{\rm eff} \sqrt{2\pi}}{N \sigma}\bigg)^2 \; , \label{eq:Nc} \end{equation} which gives an order of magnitude of the system length, $N_{IM}\simeq L_{IM}$, beyond which ferromagnetic ordering cannot be sustained. A simple way to compute the equilibrium properties of the 1D effective model with periodic boundary conditions and random fields is to use the exact renormalisation decimation procedure~\cite{le1999random,dasgupta1980low,IgloiMonthus}. Starting from the partition function \begin{equation} Z = \sum_{s_0,s_1,...s_{N/2-1}} \exp \bigg({\sum\limits_{j=0}^{N/2-1}K_j s_j s_{j+1}+ \sum\limits_{j=0}^{N/2-1} H_j s_j}\bigg) \; , \label{eq:Z-disordered} \end{equation} where we set $K_j = \beta J_{\rm eff}$ and $H_j =\beta N h_j$, we can sum over the odd spins and rewrite it in the same form \begin{equation} \begin{split} Z &= \sum_{s_0,s_2,...s_{N/2-2}} \exp\Bigg(\sum_{k=0}^{\frac{N}{4}-1} H_{2k} s_{2k}\Bigg)\prod_{j=0}^{\frac{N}{4}-1} \sum_{s_{2j+1} = \pm 1} \exp\big((K_{2j}s_{2j} + K_{2j+1} s_{2j+2} + H_{2j+1})s_{2j+1}\big)\\ &= \Bigg(\prod_{k=0}^{\frac{N}{4}-1} c_{2k+1} \Bigg) \sum_{s_0,s_2,...s_{N/2-2}} \exp\Bigg(\sum_{j=0}^{\frac{N}{4}-1}\Big(K_{2j}^{'}s_{2j} s_{2j+2} + H_{2j} s_{2j} + H^{'}_{2j} s_{2j} + H^{'}_{2j+2} s_{2j+2}\Big)\Bigg) \end{split} \label{eq:Z-fact-disordered} \end{equation} with $K^{'}$ the renormalised coupling constants and $H^{'}$ the field corrections generated by the decimation.
Equating Eq.~(\ref{eq:Z-disordered}) and Eq.~(\ref{eq:Z-fact-disordered}) we find the system of equations \begin{equation} \begin{split} &c_{2j+1} \ e^{K^{'}_{2j} + H^{'}_{2j} + H^{'}_{2j+2}} \ \, = 2\cosh(K_{2j} + K_{2j+1} + H_{2j+1}) \; , \\ &c_{2j+1} \ e^{K^{'}_{2j} - H^{'}_{2j} - H^{'}_{2j+2}} \ \, = 2\cosh( -K_{2j} - K_{2j+1} + H_{2j+1}) \; , \\ &c_{2j+1} \ e^{-K^{'}_{2j} + H^{'}_{2j} - H^{'}_{2j+2}} = 2\cosh( K_{2j} - K_{2j+1} + H_{2j+1}) \; , \\ &c_{2j+1} \ e^{-K^{'}_{2j} - H^{'}_{2j} + H^{'}_{2j+2}} = 2\cosh(- K_{2j} + K_{2j+1} + H_{2j+1}) \; , \end{split} \end{equation} for $j = 0,...,\frac{N}{4} -1$. We iterate the decimation until there are only two spins left in the system, $s_0$ and $s_{N/4}$, and we then compute $Z$ for the $4$ configurations of the decimated system. From it we derive the free energy and the mean magnetization \begin{equation} F = -\frac{1}{\beta} \ln Z \qquad \text{and} \qquad M = -\left. \frac{ \partial F}{\partial (\delta h)} \right|_{{\delta h}=0} \; , \end{equation} where $\delta h$ is an infinitesimal shift added as a global perturbing magnetic field. \begin{figure}[h!] \begin{center} \subfloat[]{\includegraphics[scale=0.55]{figures/RG_M_withsmall}} \subfloat[]{\includegraphics[scale=0.55]{figures/RG_M_sigma510-3}} \caption{\small Renormalisation group calculation of the mean magnetization density $m = M/N$ in the 1D Random Field Ising model (\ref{eq:1DRFIM}) with different system sizes given in the key. (a) $\sigma = 0.01$ and (b) $\sigma = 0.005$ with $\sigma^2$ the variance of the Gaussian distribution from which the magnetic fields are drawn. } \label{fig:magn-dens-1D} \end{center} \end{figure} In Fig.~\ref{fig:magn-dens-1D} we observe that, in all cases, the magnetisation density $m$ smoothly increases from $0$ to a value close to $1$. For small $N$, $N\leq 128$, the curves are non-monotonic and $m$ decays again after reaching a maximum. Instead, for sufficiently large $N$, say $N\geq 512$, $m$ monotonically approaches $1$.
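A single decimation step amounts to solving the four equations above for $\ln c$, $K'$ and the two field corrections. A sketch of this step (our helper names `Ha`, `Hb` stand for the corrections $H'_{2j}$, $H'_{2j+2}$), which can be verified against the direct sum over the decimated spin:

```python
import numpy as np

def decimate(K1, K2, H):
    # Trace out the middle spin s between neighbours s_l, s_r, with bond
    # couplings K1 (left), K2 (right) and field H on s.  Returns
    # (lnc, Kp, Ha, Hb) such that, for all s_l, s_r = +-1,
    #   c * exp(Kp*s_l*s_r + Ha*s_l + Hb*s_r)
    #     = sum_{s=+-1} exp((K1*s_l + K2*s_r + H) * s).
    A = np.log(2 * np.cosh(K1 + K2 + H))
    B = np.log(2 * np.cosh(K1 + K2 - H))
    C = np.log(2 * np.cosh(K1 - K2 + H))
    D = np.log(2 * np.cosh(K1 - K2 - H))
    return ((A + B + C + D) / 4,   # ln c
            (A + B - C - D) / 4,   # K'
            (A - B + C - D) / 4,   # H' on the left spin
            (A - B - C + D) / 4)   # H' on the right spin

# sanity check: compare with the explicit sum over the decimated spin
lnc, Kp, Ha, Hb = decimate(0.7, 0.3, 0.2)
for sl in (1, -1):
    for sr in (1, -1):
        lhs = np.exp(lnc + Kp * sl * sr + Ha * sl + Hb * sr)
        rhs = sum(np.exp((0.7 * sl + 0.3 * sr + 0.2) * s) for s in (1, -1))
        assert abs(lhs - rhs) < 1e-12
```

Iterating this map over the surviving even spins halves the chain at each step, exactly as described in the text.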
However, for these large system sizes there is no crossing of curves, of the kind expected in a phase transition. This confirms that the random fields may destroy the 2D ferromagnetic phase in the limit $N \rightarrow \infty$, as the Imry-Ma argument presented in Sec.~\ref{sec:ImryMa} predicts. For these sizes, the curves still approach $m=1$ because $J_{\rm eff}$ increases with temperature. However, at fixed $T$, the ferromagnetic order is lowered as the size of the system is increased and one can argue that it will disappear in the infinite size limit. If we ignore the fact that there is a strong size dependence in our results, we can still reckon that the magnetisation reaches, say, $0.5$ in a system with $N=256$ and $\sigma=0.01$ (a) at $T_{\rm ObD}^{\rm ran} \sim 0.75$ while it takes the same value in a system with the same system size and $\sigma=0.005$ (b) at a lower temperature, $T_{\rm ObD}^{\rm ran} \sim 0.6$. This trend is in agreement with the estimate in Eq.~(\ref{eq:JeffTobd}), and the numerical values are not too far from the ones given in the text right below this equation. \subsection{Dynamics} \label{subsec:dynamics} In order to confirm the quasi ferromagnetic order reached by the ObD mechanism in a finite range of non-zero temperatures, we focus now on the quench dynamics of the bidimensional model with random columnar magnetic fields, following the evolution of different initial conditions at the target temperatures. To study the 2D model we implement a Monte Carlo simulation using the Metropolis algorithm \cite{metropolis1953equation}. A time-step is defined as $N^2$ random flip attempts since the system is of size $N\times N$.
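A minimal sketch of such a Metropolis sweep (illustrative, not the authors' implementation; we take even Python column indices as A columns and the convention $E=-\sum J\,ss$, so that positive couplings are ferromagnetic):

```python
import numpy as np

JAA, JBB, JAB = 2.0, -1.0, 0.75       # parameters used in the simulations
rng = np.random.default_rng(0)

def local_field(s, i, j):
    # Sum of J * neighbour over the four bonds touching site (i, j),
    # with periodic boundary conditions; vertical coupling depends on column type.
    N = s.shape[0]
    Jv = JAA if j % 2 == 0 else JBB   # even columns: A, odd columns: B
    h = Jv * (s[(i + 1) % N, j] + s[(i - 1) % N, j])
    h += JAB * (s[i, (j + 1) % N] + s[i, (j - 1) % N])
    return h

def energy(s):
    # Total energy E = -sum over bonds of J * s * s (each bond counted once).
    N = s.shape[0]
    E = 0.0
    for j in range(N):
        Jv = JAA if j % 2 == 0 else JBB
        E -= Jv * np.sum(s[:, j] * np.roll(s[:, j], -1))
        E -= JAB * np.sum(s[:, j] * s[:, (j + 1) % N])
    return E

def sweep(s, T):
    # One Monte Carlo time-step = N^2 random single-spin flip attempts.
    N = s.shape[0]
    for _ in range(N * N):
        i, j = rng.integers(0, N, 2)
        dE = 2.0 * s[i, j] * local_field(s, i, j)   # cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

# quench a small random initial state to T = 0.35
s = 2 * rng.integers(0, 2, size=(32, 32)) - 1
for _ in range(10):
    sweep(s, T=0.35)
```

The flip cost $\Delta E = 2 s_{i,j} \sum_{\rm nn} J\, s_{\rm nn}$ follows directly from the bond-sum form of the energy.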
For this simulation, we took the parameters $J_{AA} = 2$, $J_{BB} = -1$ and $J_{AB} = 0.75$ to keep the energy of the first excitation at $\epsilon_1 = 4(\lvert J_{BB} \rvert - J_{AB}) = 1$, and to make the energy gap between the ground state and the second excited state much larger, $\epsilon_2 = 4\lvert J_{BB} \rvert = 4$. The critical temperature between the ferromagnetic and paramagnetic phases of the pure Domino Model, see Eq.~(\ref{eq:critical-temp}), is $T^{\rm pure}_c \simeq 1.40$ for these parameters. \subsubsection{Quenches from high temperatures} \label{subsec:highTquench} Here we investigate the dynamics following the usual quench protocol~\cite{bray2002theory,Puri09-article,CorberiPoliti}: starting from a completely random high temperature initial state, $s_{i,j} =\pm 1$ with probability $1/2$, we evolve it with the Metropolis rule at $T=1$, where the system should tend to order ferromagnetically for the finite system sizes used here. Indeed, we estimated the temperature above which no ferromagnetic ordering should be reached to be $T^{\rm ran}_c \sim 1.35$ using several runs of the Monte Carlo code for different temperatures and sizes (not shown). This value is close to the one found using Eq.~(\ref{eq:critical-temp}), $T^{\rm pure}_c = 1.4$, for the pure Domino Model, considering that it should be a bit lower in our case as an effect of disorder. Also, using Eq.~(\ref{eq:Nc}), we find that the Imry-Ma length is of order $N_{IM}\sim 10^4$, ensuring that we are below this length in the simulations and that the system should tend to order ferromagnetically for the sizes accessible in numerical simulations. We recall that, for the model with random columnar fields, $T^{\rm ran}_{\rm ObD} \simeq 0.4$ for $\sigma = 0.01$ and $T^{\rm ran}_{\rm ObD} \simeq 0.33$ for $\sigma = 0.005$. \vspace{0.25cm} \noindent {\it Snapshots} \vspace{0.25cm} \begin{figure}[h!]
\captionsetup[subfigure]{labelformat=empty} \begin{center} \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_pur_t0}}\hspace{0.1cm} \subfloat[$t=2^8$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_pur_t8}}\hspace{0.1cm} \subfloat[$t=2^{16}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_pur_t16}}\hspace{0.1cm} \subfloat[$t=2^{20}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_pur_t20}}\hspace{0.1cm}\\ \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_dis_t0}}\hspace{0.1cm} \subfloat[$t=2^8$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_dis_t8}}\hspace{0.1cm} \subfloat[$t=2^{16}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_dis_t16}}\hspace{0.1cm} \subfloat[$t=2^{20}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_dis_t20}}\hspace{0.1cm} \caption{\small Snapshots of the system with $N=128$ after a quench from a random initial condition across the ferromagnetic transition (pseudo in the random problem) to $T=0.35$, which is, moreover, also lower than $T_{\rm ObD}$ in the disordered model. The first line shows four representative snapshots of the instantaneous state of the pure model and the second line the same for the model with quenched random columnar fields with $\sigma = 0.08$ and $T^{\rm ran}_{\rm ObD}\simeq 0.90$. The times at which the images were stored are indicated below them. } \label{fig:snapshots} \end{center} \end{figure} The dynamics of frustrated magnets are expected to be slower than those of their pure counterparts~\cite{Walter2008,Walter2009} and in many cases they can also be anisotropic~\cite{Grousson01,Mulet07,Levis12,Levis13,Cannas18,Udagawa18}. Indeed, the Domino Model is essentially anisotropic and the growth of order should reflect this anisotropy.
More precisely, ferromagnetic ordering along the A columns in the horizontal and vertical directions may, in principle, occur on different time scales, as well as anti-ferromagnetic ordering along the B columns. We focus on the growth of ferromagnetic order on A columns. Figure~\ref{fig:snapshots} displays the evolution of a system with $N=128$, quenched from a totally disordered initial condition and evolved at $T=0.35$. Red and white cells represent up and down spins. The first row presents four snapshots of the pure Domino Model at the times written below the images ($T=0.35 <T^{\rm pure}_c$ in this case). The initial state is fully disordered with as many up as down spins placed at random in the box. The system progressively orders and, as is clear from the later images, it does so faster in the vertical direction. A typical length of domains in the horizontal direction also grows, but more slowly. Once flat interfaces between the up and down domains are created, it will take much longer to annihilate them and fully order the sample ferromagnetically on all A columns. More details of the configurations can be seen in the zoom in Fig.~\ref{fig:zoom}(a). The snapshots of the pure model can be compared with those of the model with the columnar random fields that are shown in the second row of Fig.~\ref{fig:snapshots} ($T=0.35 <T^{\rm ran}_{\rm ObD}\simeq 0.90$ in this case). Globally, the evolution is similar to that of the pure model, although some quantitative differences, such as the fact that the horizontal extent of the ferromagnetic domains is shorter in the random model, are easy to spot. The reason for this is the pinning character of the random fields, which is further exhibited in Fig.~\ref{fig:zoom}(b). \begin{figure} \hspace{1cm} \subfloat[]{\includegraphics[scale=0.7]{figures/Snapshot/zoom}} \hspace{1.5cm} \subfloat[]{\includegraphics[scale=0.7]{figures/alignment}} \caption{\small (a) Zoom on a snapshot.
The different behaviour of A and B columns is clear here. (b) Fraction of the spins of A columns aligned with the local random magnetic field following the quench dynamics of the system with columnar random fields. The upper curve corresponds to the case shown in Fig.~\ref{fig:snapshots}.} \label{fig:zoom} \end{figure} The plot in Fig.~\ref{fig:zoom}(b) shows the evolution of the fraction of spins of A columns that are aligned with their local columnar magnetic field. This fraction increases with time as the system approaches equilibrium and with the typical strength of the fields, $\sigma$. The blue curve is associated with the evolution of the system we follow in the second row of Fig.~\ref{fig:snapshots} and shows that, in the last snapshot ($t=2^{20}$), more than $90\%$ of the spins are already aligned with their magnetic field, demonstrating the pinning (and disordering) character of the latter. We deduce that the spin domains in this snapshot are mostly due to parts of the system where the $h_j$ have the same sign. The figures suggest that while the horizontal length scale of the domains between flat walls is of the order of the system size in the pure model, the domains are of finite horizontal size in the disordered case. \vspace{0.25cm} \noindent {\it Magnetisation and correlations} \vspace{0.25cm} After the generic discussion of the snapshots in Fig.~\ref{fig:snapshots}, in Fig.~\ref{M-evol-PMFM} we show the time evolution of $m_A$ defined as \begin{equation} m_A(t) = \frac{2}{N^2}\Big\langle\Big\lvert\sum_{k_A=1}^{N^2/2} s_{k_A}(t) \Big\rvert\Big\rangle \label{eq:magn-A} \end{equation} with $\langle \dots \rangle$ the average over many realisations of the dynamics and $k_A$ running over the A spin indices only. For $N=512$ the magnetisation remains smaller than 0.1 until $t \simeq 10^4$. The analysis of the coarsening process will be done for this linear system size, ensuring that the evolution remains sufficiently far from any possible equilibration.
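For a stored configuration, the observable inside the average of Eq.~(\ref{eq:magn-A}) reduces to a one-liner; a minimal sketch (the A columns are assumed to sit at even column indices):

```python
import numpy as np

rng = np.random.default_rng(0)

def m_A(s):
    """Single-configuration A-column magnetisation density,
    (2/N^2) |sum of spins on the even (A) columns|; the average over
    realisations of Eq. (eq:magn-A) is taken outside this function."""
    N = s.shape[0]
    return 2.0 / N**2 * abs(s[:, ::2].sum())

# a fully ordered set of A columns gives m_A = 1 ...
assert m_A(np.ones((8, 8))) == 1.0

# ... while random configurations give a value of order 1/N
mA_avg = np.mean([m_A(rng.choice([-1, 1], size=(64, 64))) for _ in range(200)])
```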
\begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{figures/M_MC} \caption{\small Monte Carlo dynamics at $T_{\rm ObD} < T=1 < T^{\rm ran}_c$ of the 2D Domino Model with columnar random fields with $\sigma=0.01$. Time evolution of the mean magnetisation density of the A columns, $m_A(t)$ defined in Eq.~(\ref{eq:magn-A}), after quenches from a fully random initial condition across the PM-FM crossover at $T^{\rm ran}_c$ and above the one at $T^{\rm ran}_{\rm ObD}$. Different curves correspond to different sizes given in the key. } \label{M-evol-PMFM} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \subfloat[]{\includegraphics[scale=0.55]{figures/C_PMFM_L512}} \subfloat[]{\includegraphics[scale=0.55]{figures/radius_quench_compare}} \caption{\small Monte Carlo dynamics after a quench (cooling) from a completely random initial condition across the PM-FM and above the Order by Disorder crossovers to $T=1$. Square system with linear length $N=512$. (a) Horizontal correlation function $C_x(x,t)$ as a function of $x$ for a system with columnar random fields ($\sigma=0.01$), at different times given in the key. (b) Evolution of the typical growing correlation length $R_x(t)$ for systems with different standard deviation of the random fields given in the key. } \label{fig:corr-PMFM} \end{center} \end{figure} The plot in Fig.~\ref{fig:corr-PMFM}(a) is representative of the coarsening dynamics across a second order phase transition~\cite{bray2002theory,Puri09-article,CorberiPoliti}.
We display the horizontal correlation function of the spins sitting on the A columns \begin{equation} \begin{split} C_x(x,t) &=\frac{ \frac{2}{N^2} \sum_{i,j} s_{2i,j} s_{2(i+x),j} - \Big( \frac{2}{N^2}\sum_{i,j} s_{2i,j} \Big)^2} {1 - \Big( \frac{2}{N^2}\sum_{i,j} s_{2i,j} \Big)^2} = \frac{\frac{2}{N^2} \sum_{i,j} s_{2i,j} s_{2(i+x),j} - m_A(t)^2}{1-m_A(t)^2} \end{split} \end{equation} of a system with $N=512$ for which the ferromagnetic magnetisation density of these columns at the longest time $t\simeq 10^5$ should be of order $m_A \simeq 0.1$, see Fig.~\ref{M-evol-PMFM}. The system progressively orders, and this is reflected in a $C_x(x,t)$ that decays to $0$ with distance more and more slowly as time increases. These curves can be compared, for example, to the ones in Fig. 17 in Ref.~\cite{sicilia2007domain}, where similar data for the 2D Ising Model are shown. We obtain the typical growing correlation length in the $x$-direction from the standard criterion $C_x(R_x(t),t)\sim1/e$ (see the horizontal dotted line in Fig.~\ref{fig:corr-PMFM}(a)). We then plot the evolution of $R_x(t)$ with time in panel (b). We find that at short time scales the pure and disordered Domino Models have $R_x(t) \propto t^{1/2}$, as expected for the curvature driven dynamics of a non-conserved scalar order parameter system. The various curves correspond to different strengths of the random fields, as quantified by their standard deviation $\sigma$ given in the key. At the longest time scales that we show, the growth in the model with random fields saturates to a value that decreases with increasing $\sigma$. The annihilation of these domain walls should involve much longer time scales (see, for example~\cite{Redner02,Blanchard17}, for their study in the pure 2D Ising model) and it needs thermal activation to create a bump on the otherwise flat interfaces that, moreover, are pinned by the random fields.
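The correlation analysis can be sketched as follows. This is a minimal illustration rather than the authors' analysis code; it assumes the A columns sit at even indices, uses periodic boundaries in the A-column index, and requires the system not to be fully ordered ($m_A<1$):

```python
import numpy as np

def C_x(s, x):
    """Connected horizontal correlation of the A-column spins,
    normalised as in the text: average of s_{2i,j} s_{2(i+x),j}
    minus m_A^2, divided by 1 - m_A^2 (single configuration)."""
    A = s[:, ::2]                                   # spins of the A columns
    raw = float(np.mean(A * np.roll(A, -x, axis=1)))
    mA2 = float(np.mean(A)) ** 2                    # ((2/N^2) sum s)^2
    return (raw - mA2) / (1.0 - mA2)

def R_x(s):
    """Growing length from the 1/e criterion C_x(R_x(t), t) ~ 1/e."""
    n_A = s.shape[1] // 2
    for x in range(1, n_A):
        if C_x(s, x) < 1.0 / np.e:
            return x
    return n_A

rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(64, 64))   # uncorrelated test configuration
```

On an uncorrelated configuration $C_x(0,t)=1$ by construction and $C_x$ drops below $1/e$ at the first step, so $R_x=1$; on coarsening configurations $R_x$ tracks the domain size.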
Figure~\ref{vertical_corr_PMFM} confirms that the system orders faster vertically than horizontally. In panel (a) we present the vertical correlation function of the spins belonging to columns A \begin{equation} C_y(y,t) = \frac{\frac{2}{N^2} \sum_{i,j} s_{2i,j} s_{2i,j+y} - m_A(t)^2}{1-m_A(t)^2} \end{equation} while in panel (b) we represent the corresponding growing length as a function of time. \begin{figure}[H] \hspace{-0.5cm} \subfloat[]{\includegraphics[scale=0.55]{figures/Cv1_L512_s2e4}} \subfloat[]{\includegraphics[scale=0.55]{figures/Cv1_quench_compare}} \caption{\small Monte Carlo dynamics of the 2D square Domino Model, of linear length $N=512$, under columnar random fields with $\sigma=0.01$. Dynamics after a quench from a completely random initial condition across the PM-FM, and above the Order by Disorder, crossovers to $T=1$. (a) Vertical correlation function on A columns, $C_{y}(y,t)$, as a function of $y$ for different times given in the key. (b) The growing correlation length, $R_y(t)$, in the vertical direction. } \label{vertical_corr_PMFM} \end{figure} \subsubsection{Heating from the disordered ground state} \label{subsec:ObDquench} We now investigate the dynamics across the ObD crossover itself, starting from the ground state (corresponding to $T=0$) and fixing the working temperature to $T=1$ as in the sub-critical quenches discussed in Sec.~\ref{subsec:highTquench} where the system, for the sizes we use, should eventually approach a ferromagnetic configuration. Snapshots in Fig.~\ref{fig:snapshots-heat} show an example of these dynamics. In the pure system (top panels) the final configuration is one in which the system ordered ferromagnetically on the A columns with -1 spins. In the disordered case (bottom panels) the dynamics is slower and the stationary state has not been reached yet. Data for the pure model are gathered using, for each Monte Carlo run, an initial state chosen randomly among the collection of all possible ones. 
Instead, the simulations with random fields are started from the unique ground state, which is determined for each run by the magnetic fields that we draw from the Gaussian distribution. The average is then computed over random fields and/or Monte Carlo random numbers. \begin{figure}[h!] \captionsetup[subfigure]{labelformat=empty} \begin{center} \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_pure_heat_t0}} \subfloat[$t=2^8$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_pure_heat_t8}} \subfloat[$t=2^{16}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_pure_heat_t16}} \subfloat[$t=2^{22}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_pure_heat_t22}}\\ \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_heat_08_t0}} \subfloat[$t=2^8$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_heat_08_t8}} \subfloat[$t=2^{16}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_heat_08_t16}} \subfloat[$t=2^{22}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_heat_08_t22}} \caption{\small Snapshots of a system of size $N=128$ after a sudden increase in temperature from the disordered ground state to $T=1$. In the pure model, this temperature is below $T^{\rm pure}_c$. In the disordered one ($\sigma=0.01$), it is in between the pseudo critical temperatures $T_{\rm ObD}^{\rm ran}$ and $T_c^{\rm ran}$. The times at which the images were stored are indicated below them.} \label{fig:snapshots-heat} \end{center} \end{figure} Panel (a) of Fig.~\ref{fig:corr-ObD} shows the horizontal correlation functions as functions of distance for different times. The curves have the same qualitative behaviour as the ones already shown for the quenches from the infinite temperature state. However, the growing length behaves quite differently from, and increases much more slowly than, the usual $t^{1/2}$ curvature driven form, as can be seen in panel (b).
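The initial condition of these heating runs can be built directly from the drawn fields. The sketch below assumes the ground-state structure discussed in the text: each A column is uniformly ordered along the sign of its local field $h_j$, each B column is vertically antiferromagnetic, and the (degenerate) B-column sublattice phase is drawn at random.

```python
import numpy as np

rng = np.random.default_rng(1)

def ground_state(h, N):
    """Sketch of the T=0 initial condition for the heating protocol:
    A columns (even j) uniform with the sign of h[j], B columns
    (odd j) strictly alternating in the vertical direction."""
    s = np.empty((N, N), dtype=int)
    alt = (-1) ** np.arange(N)               # +1, -1, +1, ... down a column
    for j in range(N):
        if j % 2 == 0:                       # A column, pinned by the field
            s[:, j] = 1 if h[j] >= 0 else -1
        else:                                # B column, antiferromagnetic
            s[:, j] = rng.choice([1, -1]) * alt
    return s

N = 128
h = rng.normal(0.0, 0.01, size=N)            # sigma = 0.01 columnar fields
s0 = ground_state(h, N)
```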
\begin{figure}[H] \begin{center} \subfloat[]{\includegraphics[scale=0.55]{figures/C-heat-fit_L512}} \subfloat[]{\includegraphics[scale=0.55]{figures/radius_heat_compare}} \caption{\small Heating across the ObD crossover in a square 2D system with linear length $N=512$, and columnar random fields with $\sigma=0.01$, evolving from one of the zero temperature ground states. (a) Horizontal correlation $C_x(x,t)$ as a function of $x$, at $T=1$, $\sigma=0.01$, and for different times given in the key. (b) Evolution of the typical growing correlation length $R_x(t)$ for different values of the disorder strength. } \label{fig:corr-ObD} \end{center} \end{figure} \section{Staggered columnar magnetic fields} \label{sec:staggered} In order to have a phase transition towards a ferromagnetically ordered state upon increasing temperature, circumventing the Imry-Ma argument, we no longer use random fields, but alternate columnar magnetic fields $h_j = (-1)^j h$. Because the $h_j$ are not random but staggered, the formation of macroscopic reversed ferromagnetic domains is no longer possible. The drawback is that we lose some specificity of the ObD phenomenon because the zero-temperature ground state is now antiferromagnetic, as imposed by the staggered magnetic fields. Still, the strategy is to use these fields as a probe to exhibit the underlying conventional ObD transition. The idea is to impose an antiferromagnetic equilibrium state at very low temperature, which would be replaced by the ferromagnetic one at a first order phase transition taking place at a finite temperature below the one at which the system reaches the paramagnetic high temperature phase. \subsection{The 1D model} The Hamiltonian of the effective 1D model under staggered local fields is \begin{equation} H_{\rm eff}(T) = -J_{\rm eff}(T,N) \sum_{j=1}^{N/2} s_j s_{j+1} - Nh\sum_{j=1}^{N/2} (-1)^j s_j \end{equation} with $J_{\rm eff}(T,N)\propto N$, as given in Eq.~(\ref{eq:Jeff}).
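The transition temperature found below can be anticipated numerically from the balance between the effective coupling and the field energy. In this sketch, the explicit form $J_{\rm eff}(T,N)/N=\frac{T}{4}\ln\big(1+e^{-1/T}\big)$ is an assumption read off from the exponents of the transfer matrices used in the text, and the balance condition $J_{\rm eff}/N=h/2$ is solved by bisection:

```python
from math import exp, log

def J_eff_over_N(T):
    # assumed form of the effective coupling per spin, read off from the
    # diagonal/off-diagonal exponents of the transfer matrices:
    # J_eff(T, N) / N = (T/4) ln(1 + e^{-1/T})
    return 0.25 * T * log(1.0 + exp(-1.0 / T))

def T_ObD_col(h, lo=1e-3, hi=2.0, tol=1e-10):
    """Bisection for J_eff(T)/N = h/2; the left side grows with T."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if J_eff_over_N(mid) < 0.5 * h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"T_ObD ~ {T_ObD_col(0.01):.2f}")   # ~ 0.35 for h = 0.01
```

For $h=0.01$ the root lies very close to the value $T_{\rm ObD}^{\rm col}=0.35$ quoted in the text.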
\vspace{0.2cm} \begin{figure}[H] \begin{center} \includegraphics[scale=0.65]{figures/M_stagg2} \caption{\small Mean magnetisation of the 1D model with staggered magnetic fields with amplitude $h=0.01$, for different system sizes given in the key. } \label{fig:magn-stagg} \end{center} \end{figure} We compute the mean magnetisation using the transfer matrix method with a matrix $\mathcal{T}=W_1 W_2$ representing a block of two columns with $W_1$ for a column with a positive magnetic field and $W_2$ for a negative one \begin{align*} & W_1 = \begin{pmatrix} e^{\frac{N}{4}\ln(1+e^{-1/T}) + \frac{N\delta h}{T}} & e^{-\frac{N}{4} \ln(1+e^{-1/T}) + \frac{Nh}{T}} \\ e^{-\frac{N}{4} \ln(1+e^{-1/T}) - \frac{Nh}{T}} & e^{\frac{N}{4}\ln(1+e^{-1/T}) - \frac{N\delta h}{T}} \end{pmatrix} \; , \\ & W_2 = \begin{pmatrix} e^{\frac{N}{4}\ln(1+e^{-1/T}) + \frac{N\delta h}{T}} & e^{-\frac{N}{4} \ln(1+e^{-1/T}) - \frac{Nh}{T}} \\ e^{-\frac{N}{4} \ln(1+e^{-1/T}) + \frac{Nh}{T}} & e^{\frac{N}{4}\ln(1+e^{-1/T}) - \frac{N\delta h}{T}} \end{pmatrix} \; , \end{align*} with $\delta h>0$ an infinitesimal magnetic field we add to compute the magnetisation. Denoting by $\lambda_+$ and $\lambda_-$ the eigenvalues of $\mathcal{T}$, the free energy per spin is \begin{equation} f = -\frac{2T}{N^2} \; \ln\Big( \lambda_+^{N/4} + \lambda_-^{N/4}\Big) \end{equation} and the mean magnetisation per spin $m$ is \begin{equation} m = - \frac{\partial f}{\partial(\delta h)}\bigg|_{\delta h=0} \; . \end{equation} We find a transition temperature $T_{\rm ObD}^{\rm col} = 0.35$ with $h=0.01$ (see Fig.~\ref{fig:magn-stagg}), which corresponds to what we expect by comparing the interaction energy governed by the effective coupling constant at $T_{\rm ObD}^{\rm col}$ and the energetic contribution of the magnetic field \begin{equation} \frac{J_{\rm eff}(T_{\rm ObD}^{\rm col}, N)}{N} = \frac{h}{2} \; .
\end{equation} We note that, apart from numerical constants, this is the same equation as (\ref{eq:JeffTobd}), where $\sigma$ has been replaced by $h$. $T_{\rm ObD}^{\rm col}$ for the columnar field model is also an increasing function of $h$, starting from $0$. The numerical data displayed in Fig.~\ref{fig:magn-stagg}, which represent $m$ as a function of $T$, confirm that the system undergoes a first order phase transition at $T_{\rm ObD}^{\rm col}$. \subsection{Dynamics} \label{subsec:dynamics-alternate} We now turn to the analysis of the quench dynamics of the 2D model with alternate columnar magnetic fields. \begin{figure}[h!] \captionsetup[subfigure]{labelformat=empty} \begin{center} \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_stagg_t0}} \subfloat[$t=2^6$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_stagg_t6}} \subfloat[$t=2^{12}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_stagg_t12}} \subfloat[$t=2^{18}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_PMFM_stagg_t18}}\\ \subfloat[$t=0$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_stagg_PMObD_t0}} \subfloat[$t=2^{6}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_stagg_PMObD_t6}} \subfloat[$t=2^{12}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_stagg_PMObD_t12}} \subfloat[$t=2^{18}$]{\includegraphics[scale=0.5]{figures/Snapshot/Snap_stagg_PMObD_t18}} \caption{\small Snapshots of the system with $N=128$ and $h=0.08$ after a quench from a random initial condition to $T=0.35<T_{\rm ObD}^{\rm col}$ (first line) and $T_{\rm ObD}^{\rm col}<T=1<T_c^{\rm col}$ (second line). The times at which the images were stored are indicated below them. In the last image of the first line, the configuration is one of the ground states with alternate ordering of A columns, whereas in the last image of the second line, the ordering of A columns is ferromagnetic.
} \label{fig:snapshots-stagg} \end{center} \end{figure} Figure \ref{fig:snapshots-stagg} shows the evolution of a system with $N=128$ and staggered columnar magnetic fields of strength $h=0.08$, quenched from a disordered initial condition and evolved at $T=0.35<T_{\rm ObD}^{\rm col}$. If we compare these snapshots with those in Fig.~\ref{fig:snapshots}, we see that, as for the two systems studied before (the pure Domino Model and the one with random magnetic fields), the system progressively orders in the vertical direction. The difference here is that the domains are not growing in the horizontal direction because of the pinning character of the alternate magnetic fields. In the last snapshot, we can see that the system reached its equilibrium state at $T=0.35$, which is also a ground state of the pure Domino Model at $T=0$. In Fig.~\ref{fig:corr-stagg-ObD} we display the horizontal correlation functions of the Domino Model with columnar alternate fields of strength $h=0.01$ for increasing times given in the key of (a). In panel (b) the growing correlation length is reported and compared to the $t^{1/2}$ law as well as to the growing length of the random field case with $\sigma = 0.01$. The data confirm that the system orders ferromagnetically on the A columns in between $T_{\rm ObD}^{\rm col}$ and $T_c^{\rm col}$. \clearpage \begin{figure}[H] \begin{center} \subfloat[]{\includegraphics[scale=0.55]{figures/C_stagg_PMFM_L512}} \subfloat[]{\includegraphics[scale=0.55]{figures/radius_quench_stagg}} \caption{\small Monte Carlo dynamics of the 2D Domino Model with columnar alternate fields ($h=0.01$). Dynamics after a quench from a completely random initial condition across the PM-FM transition. The evolution is followed at $T=1$, that is, $T_{\rm ObD} < T < T_c^{\rm col}$. (a) Horizontal correlation $C_x(x,t)$ as a function of $x$ for different times given in the key. (b) Evolution of the typical growing correlation length $R_x(t)$.
Square system of linear length $N=512$. } \label{fig:corr-stagg-ObD} \end{center} \end{figure} \section{Conclusion} The goal of our work was to find a way to displace the thermal ObD transition from zero to a non-vanishing temperature, thus making the experimental observation of this phenomenon easier. To reach this aim we followed two routes, using the Domino Model as the testing ground. On the one hand, we added well-tuned quenched columnar random fields. These fields lift the degeneracy of the ground states, selecting one that still has zero magnetisation but lower energy than the one under no fields. Consequently, the system is stuck in this state until the temperature is high enough for it to access the large number of first excited states. In this case, the ObD crossover happens at a finite temperature $T_{\rm ObD} > 0$, but long-range order is suppressed by this type of disorder in the thermodynamic limit. Still, we observed an ObD crossover at a finite temperature for small system sizes using various numerical and theoretical methods that were in good agreement with our predictions. In the second approach, we used alternate columnar magnetic fields that do indeed displace the transition to a finite temperature. In this case we computed the theoretical ObD transition temperature using the transfer matrix method and we confirmed it with dynamic measurements. We also mentioned some indications that the ObD transition is first order. Even though both random and alternate fields impose the ground state, the finite temperature crossover or transition can be used to probe the ObD phenomenon in the model without applied fields. Since the crossover or transition temperatures can be tuned at will, our procedure allows one to probe the ObD mechanism without going to too low a temperature, where other kinds of energetic contributions might interfere with it.
In conclusion, we think that these methods should be useful to check whether a system exhibits the ObD transition. \vspace{0.5cm} \noindent {\bf Acknowledgements} We are grateful to P. Guruciaga, J. Restrepo and A. Tartaglia for early discussions of this problem. \bibliographystyle{ieeetr}
\section{Introduction} \label{S1} The Gelfand-Shilov spaces of type $S$ provide a natural framework for studying infinite order pseudodifferential operators whose symbols have faster than polynomial growth at infinity. Various classes of pseudodifferential operators of this kind were investigated in recent papers~\cite{ACT,CPP,CT,P} with special attention to their continuity, composition, and invariance properties. Similar issues arise in the context of noncommutative quantum field theory, where spaces of type $S$ were used to characterize the violations of locality and causality~\cite{Ch,S2007-II} and to analyze the behavior of propagators in some noncommutative models~\cite{FS,FS11,Zahn}. It is crucial for these applications that, under natural restrictions specified in~\cite{S2007}, the Gelfand-Shilov spaces of functions on the linear symplectic space $\oR^{2d}$ are algebras under the Weyl-Moyal product \begin{equation} (f\star_\hbar g)(x)= (\pi\hbar)^{-2d}\int_{\oR^{4d}} f(x-x')g(x-x^{\prime\prime})e^{(2i/\hbar)x'\cdot Jx^{\prime\prime}}dx'dx^{\prime\prime}, \label{1.1} \end{equation} where $J=\begin{pmatrix} 0&I_d\\-I_d&0\end{pmatrix}$ is the standard symplectic matrix, $x=(q,p)$, $x'\!\cdot\! Jx''= q'p''-q'' p'$, and $\hbar$ is the Planck constant. In this and previous papers~\cite{S2019,S2019-2,S2020}, we study the algebras of multipliers of generalized Gelfand-Shilov spaces with respect to the noncommutative product~\eqref{1.1}. Their importance is due to the fact that these algebras extend this operation to the maximum possible class of functions, including some elements of the duals of spaces of type $S$. From the viewpoint of the Weyl symbol calculus, the multiplier algebras consist of the symbols of the operators that map the corresponding Fourier-invariant spaces of type $S$ continuously into themselves.
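For orientation, a formal expansion of~\eqref{1.1} in powers of $\hbar$ recovers the pointwise product at leading order and the Poisson bracket at the first order (up to convention-dependent signs),
\[
f\star_\hbar g = fg+\frac{i\hbar}{2}\,\{f,g\}+O(\hbar^2),
\qquad
\{f,g\}=\partial_q f\cdot\partial_p g-\partial_p f\cdot\partial_q g,
\]
which makes precise the sense in which $\star_\hbar$ deforms the ordinary product.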
These algebras generalize the Moyal multiplier algebra $\mathscr M_\hbar(S)$ for the Schwartz space $S(\oR^{2d})$ of all infinitely differentiable rapidly decreasing functions. The algebra $\mathscr M_\hbar(S)$ has been studied in many papers starting from~\cite{A1,G-BV1,Mail,G-BV2} and its applications to quantum field theory on noncommutative spaces have been discussed in~\cite{G2004,G-BV3}. We treat the product~\eqref{1.1} as a deformation of the ordinary pointwise product and use the notation $\star_\hbar$, accepted in deformation quantization theory, instead of the notation $\#$ which is used for the composition of Weyl symbols in~\cite{Fol,G2001,H3} and corresponds to $\hbar=1$. Typically, applications use Gelfand-Shilov spaces of a particular type, denoted in~\cite{GS2} by $S^\beta_\alpha$. The algebras of their pointwise multipliers have been explicitly described by Palamodov~\cite{P1962} in terms of spaces of type $\mathscr E$. The noncommutative deformation violates the equalities established in~\cite{P1962}, but preserves some inclusion relations which are the subject of our study. The starting point for us is that the Moyal multiplier algebras of the spaces of type $S$ contain the duals of their convolutor spaces. This inclusion was proved for $S^\beta_\alpha$ in~\cite{S2011} and holds true in the general case, as shown in~\cite{S2019, S2020}. The spaces of convolutors for Gelfand-Shilov spaces were studied in~\cite{DPV,DPPV} and, most thoroughly, by Debrouwere and Vindas in~\cite{DV2018,DV2019}, where the short-time Fourier transform and a projective description of inductive limits were used for this purpose. Here we develop an alternative approach based on the continuous extension of the convolutors of spaces of type $S$ to the corresponding spaces of type $\mathscr E$. A simple and natural way of such an extension was proposed earlier in~\cite{S2012-I} and was also used in~\cite{S2020}.
We show that this approach provides a complete characterization of the convolutor spaces for the generalized spaces of type $S$. Significantly, it is also well suited for the cases that have not yet been considered, for example, where a space of type $S$ is nontrivial, whereas its projective counterpart is trivial. In combination with Theorem~4 of~\cite{S2019-2}, this approach provides a direct and simple proof of the continuous embedding of generalized Palamodov spaces of type $\mathscr E$ into the Moyal multiplier algebras of the corresponding spaces of type $S$, which is the main result of this paper. The paper is structured as follows. In Section~\ref{S2}, we give the basic definitions concerning spaces of type $S$ and type $\mathscr E$ and introduce the notation. We try to follow the original notation in~\cite{GS2,P1962} and let $a_n$ and $b_n$ denote the sequences defining the function spaces and specifying the behavior at infinity and the degree of smoothness of their elements. But instead of the notation $S^{b_n}_{a_k}$ introduced in~\cite{GS2}, we use $S^{\{b\}}_{\{a\}}$, where the curly brackets mean that this space is the inductive limit of a family of normed spaces. The projective limit of the same family is denoted by $S^{(b)}_{(a)}$. Such a rule was used by Komatsu~\cite{K1973} and in many subsequent papers, and we apply it also to the spaces of type~$\mathscr E$. In Section~\ref{S3}, we prove that the convolutor spaces for $S^{\{b\}}_{\{a\}}$ and $S^{(b)}_{(a)}$ contain respectively the duals of the Palamodov spaces $\eab$ and $\eabp$ and that these inclusions are continuous. In Section~\ref{S4}, we give a complete characterization of the convolutor spaces $C\bigl(S^{\{b\}}_{\{a\}}\bigr)$ and $C\bigl(S^{(b)}_{(a)}\bigr)$ in terms of the spaces of type $\mathscr E$.
In particular, we show that $\eab$ is canonically isomorphic to the strong dual of $C\bigl(S^{\{b\}}_{\{a\}}\bigr)$ and $\eabp$ is canonically isomorphic to the strong dual of $C\bigl(S^{(b)}_{(a)}\bigr)$. In Section~\ref{S5}, we define the left, right, and two-sided multipliers for the noncommutative algebras $\bigl(\Sab,\star_\hbar\bigr)$ and $\bigl(\sab,\star_\hbar\bigr)$ and prove that these algebras have approximate identities. This allows us to prove the equivalence of three different definitions of their corresponding multiplier algebras. In Section~\ref{S6}, we show that $\eab$ is continuously embedded in the algebra ${\mathscr M}_\hbar\bigl(\Sab\bigr)$ of two-sided Moyal multipliers for $\Sab$ and $\eabp$ is continuously embedded in ${\mathscr M}_\hbar\bigl(\sab\bigr)$. In the same section we extend these theorems to other translation invariant star products. Additional inclusion relations are established in the case of the Fourier-invariant spaces $S_{\{a\}}^{\{a\}}$ and $S_{(a)}^{(a)}$. Section~\ref{S7} contains concluding remarks. \section{Preliminaries and notation} \label{S2} Let $a=(a_n)_{n\in\oZ_+}$ be a sequence of positive numbers such that \begin{equation} a_0=1,\qquad a_{n+1}\ge a_n, \label{2.1} \end{equation} \begin{equation} a_n^2\le a_{n-1} a_{n+1}, \label{2.2} \end{equation} \begin{equation} a_{k+n}\le K H^{k+n}a_ka_n, \label{2.3} \end{equation} where $K$ and $H$ are positive constants. The logarithmic convexity condition~\eqref{2.2} coupled with the normalization condition $a_0=1$ implies the inequality \begin{equation} a_ka_n\le a_{k+n}, \label{2.4} \end{equation} which will be also used throughout the paper. Let $b=(b_n)_{n\in\oZ_+}$ be another sequence of positive numbers satisfying the same conditions. 
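Conditions~\eqref{2.1}--\eqref{2.4} are easy to verify numerically for Gevrey-type sequences $a_n=n^{\alpha n}$ (with $a_0=1$). The following sketch works with $\log a_n$ to avoid overflow; the values $\alpha=3/2$, $K=1$ and $H=(2e)^\alpha$ are illustrative choices, not quantities from the text.

```python
import numpy as np

# Check (2.1)-(2.4) for a_n = n^{alpha n}, a_0 = 1, in log-scale.
alpha, nmax = 1.5, 60
n = np.arange(nmax + 1)
loga = np.where(n > 0, alpha * n * np.log(np.maximum(n, 1)), 0.0)
logH = alpha * np.log(2.0 * np.e)          # H = (2e)^alpha, K = 1

assert np.all(np.diff(loga) >= 0.0)                      # (2.1): a_{n+1} >= a_n
assert np.all(2.0 * loga[1:-1] <= loga[:-2] + loga[2:])  # (2.2): log-convexity
for k in range(nmax + 1):
    for m in range(nmax + 1 - k):
        assert loga[k] + loga[m] <= loga[k + m] + 1e-9                    # (2.4)
        assert loga[k + m] <= (k + m) * logH + loga[k] + loga[m] + 1e-9   # (2.3)
```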
The Gelfand-Shilov space $\Sab(\oR^d)$ consists of all infinitely differentiable functions $f(x)$ defined on $\oR^d$ and satisfying the inequalities \begin{equation} |x^\alpha\partial^\beta f(x)|\le C A^{|\alpha|}B^{|\beta|} a_{|\alpha|}b_{|\beta|}\qquad \forall \alpha,\beta\in \oZ_+^d, \notag \end{equation} where $C$, $A$, and $B$ are positive constants depending on $f$, $\oZ^d_+$ is the set of $d$-tuples of nonnegative integers, and the standard multi-index notation is used. In what follows, we write for brevity $\Sab$ instead of $\Sab(\oR^d)$ when this cannot cause confusion. This space is the union of a family of Banach spaces $\{S^{b,B}_{a,A}\}_{A,B>0}$ whose norms are given by \begin{equation} \|f\|_{A,B}= \sup_{x,\alpha,\beta} \frac{|x^\alpha\partial^\beta f(x)|}{A^{|\alpha|}B^{|\beta|} a_{|\alpha|}b_{|\beta|}}, \label{2.5} \end{equation} and its topology is defined to be the inductive limit topology with respect to the inclusion maps $S^{b,B}_{a,A}\to \Sab$. The most frequently used Gelfand-Shilov spaces $S^\beta_\alpha$ are defined in~\cite{GS2} by sequences of the form \begin{equation} a_n=n^{\alpha n},\quad b_n=n^{\beta n}, \label{2.6} \end{equation} where in this case $\alpha$ and $\beta$ are nonnegative numbers, which should not be confused with the multi-indices in~\eqref{2.5}. We will also consider the spaces \begin{equation} \sab=\bigcap_{A\to0,B\to0} S^{b,B}_{a,A} \label{2.7} \end{equation} equipped with the projective limit topology. If $\varliminf_{n\to\infty}a_n^{1/n}=0$, then the spaces $\Sab$ and $\sab$ are trivial, i.e., contain only the identically zero function. If $0<\varliminf_{n\to\infty}a_n^{1/n}<\infty$, then all functions in $\Sab$ are of compact support, and this space coincides with the space defined by $b_n$ and $a_n\equiv 1$, which is usually denoted by $\mathcal D^{\{b\}}$. The space $\sab$ is trivial in this case, and considering the spaces~\eqref{2.7} we always assume that \begin{equation} \lim_{n\to\infty}a_n^{1/n}=\infty. 
\label{2.8} \end{equation} There are also other non-triviality conditions for the spaces of type $S$, see~\cite{GS2}. Their precise description is not needed for what follows, but we assume throughout the paper that the spaces under consideration are nontrivial. A non-quasianalyticity condition is often imposed on $b_n$ to ensure that the space contains sufficiently many functions of compact support (see~\cite{GS2,K1973}), but this condition is not used in the proofs given below. The Fourier transformation \begin{equation} F\colon f(x)\to \widehat f(\zeta)=(2\pi)^{-d/2}\int_{\oR^d}e^{-ix\cdot\zeta}f(x)dx \notag \end{equation} maps $\Sab$ isomorphically onto $\Sba$ and maps $\sab$ isomorphically onto $\sba$. Using~\eqref{2.4}, it is easy to see that $\Sab$ and $\sab$ are algebras under pointwise multiplication and that this operation is continuous in their topologies. As a consequence, these spaces are also topological algebras under convolution. The norm~\eqref{2.5} can be written as \begin{equation} \|f\|_{A,B}= \sup_{x,\beta}\frac{w_a(|x|/A)|\partial^\beta f(x)|}{B^{|\beta|} b_{|\beta|}}, \label{2.9} \end{equation} where $|x|=\max\limits_{1\le j\le d}|x_j|$ and \begin{equation} w_a(t)\coloneq\sup_{n\in\oZ_+}\frac{t^n}{a_n},\qquad t\ge 0. \label{2.10} \end{equation} This function is often called a weight function. We note in this connection that the replacement $\sup_x|\cdot|\to\|\cdot\|_{L^1}$ in~\eqref{2.9} gives an equivalent system of norms (see, e.g., Lemma A.2 in~\cite{S2019} for a proof) and then $w_a(t)$ plays the role of a weight in the integral. Under condition~\eqref{2.8}, the function $w_a(t)$ is finite and continuous. If $a_n\equiv 1$, then the corresponding function $w_1(t)$ is equal to 1 for $0\le t\le1$ and is infinite for $t>1$, i.e., $1/w_1(t)$ is the characteristic function of the interval $[0,1]$.
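The weight function~\eqref{2.10} is straightforward to evaluate numerically. The sketch below uses the illustrative choice $a_n=n^n$ (i.e.\ $\alpha=1$), works with $\log w_a$, and restricts $n$ to a finite range, which is exact for the moderate values of $t$ used here because the supremum is attained at $n$ of order $t$.

```python
import numpy as np

# log w_a(t) = sup_n ( n log t - log a_n ) for a_n = n^n, a_0 = 1.
nmax = 200
n = np.arange(nmax + 1)
loga = np.where(n > 0, n * np.log(np.maximum(n, 1)), 0.0)

def log_w(t):
    if t <= 0.0:
        return 0.0                  # w_a(0) = 1: only the n = 0 term survives
    return float(np.max(n * np.log(t) - loga))

ts = [0.0, 0.5, 1.0, 2.0, 5.0, 10.0]
assert all(log_w(t) >= 0.0 for t in ts)                        # w_a(t) >= 1
assert all(log_w(a) <= log_w(b) for a, b in zip(ts, ts[1:]))   # monotone
# the submultiplicative bound w_a((t1 + t2)/2) <= w_a(t1) w_a(t2)
for t1 in ts:
    for t2 in ts:
        assert log_w(0.5 * (t1 + t2)) <= log_w(t1) + log_w(t2) + 1e-9
```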
It follows from the definition and from~\eqref{2.1} that $w_a(t)\ge 1$ and that this function is convex and monotonically increases faster than $t^n$ for any $n$. Therefore, \begin{equation} w_a\left(\frac{t_1+ t_2}{A_1+A_2}\right)\le w_a\left(\frac{t_1}{A_1}\right)\, w_a\left(\frac{t_2}{A_2}\right) \label{2.11} \end{equation} for any positive $A_1$ and $A_2$. In particular, $w_a((t_1+ t_2)/2)\le w_a(t_1)\, w_a(t_2)$. Setting $t_1=|x|$ and $t_2=|y|$, we obtain the inequality \begin{equation} w_a\left(\frac12|x+y|\right)\le w_a(|x|)\, w_a(|y|), \label{2.12} \end{equation} which is most often used below. The condition~\eqref{2.3} implies that \begin{equation} w_a(t)^2\le K w_a(Ht). \label{2.13} \end{equation} Along with the spaces of rapidly decreasing functions, we will consider spaces of rapidly increasing functions with the same degree of smoothness. Namely, let $\mathcal E^{b,B}_{a,A}$ be the Banach space of all functions with the finite norm\footnote{In the case where $a_n\equiv 1$, the functions in $\mathcal E^{b,B}_{a,A}$ are defined only for $|x|\le A$ and are regarded as zero if they are zero in this domain.} \begin{equation} \|h\|^A_B= \sup_{x,\beta} \frac{|\partial^\beta h(x)|}{w_a(|x|/A)\,B^{|\beta|} b_{|\beta|}}. \label{2.14} \end{equation} Using the family $\{\mathcal E^{b,B}_{a,A}\}_{A,B>0}$, we define the spaces \begin{gather} \Eab\coloneq \bigcap_{A\to\infty}\bigcup_{B\to\infty}\mathcal E^{b,B}_{a,A},\qquad \Eabp\coloneq \bigcap_{B\to0}\bigcup_{A\to0}\mathcal E^{b,B}_{a,A}, \label{2.15}\\ \eab\coloneq \bigcup_{B\to\infty}\bigcap_{A\to\infty}\mathcal E^{b,B}_{a,A},\qquad \eabp\coloneq \bigcup_{A\to0}\bigcap_{B\to0}\mathcal E^{b,B}_{a,A}, \label{2.16} \end{gather} where the intersections are endowed with the projective limit topology and the unions are endowed with the inductive limit topology. 
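To illustrate the definitions~\eqref{2.14}--\eqref{2.16}, we note (for orientation only) that every polynomial $P(x)$ belongs to $\eab$. Indeed, the derivatives $\partial^\beta P$ vanish for $|\beta|>\deg P$ and, because $w_a(t)$ increases faster than any power of $t$, for any $A,B>0$ there is a constant $C_{A,B}$ such that \begin{equation} |\partial^\beta P(x)|\le C_{A,B}\, w_a(|x|/A)\,B^{|\beta|}b_{|\beta|}\qquad \forall \beta\in\oZ_+^d. \notag \end{equation} In the Gevrey case~\eqref{2.6}, the elements of $\eab$ are smooth functions whose growth, together with that of all their derivatives, is dominated by $\exp(\varepsilon|x|^{1/\alpha})$ for every $\varepsilon>0$.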
The spaces~\eqref{2.15} and~\eqref{2.16} play the same role in the theory of ultradistributions defined on $\Sab$ and $\sab$ as the spaces $\mathcal O_M$ and $\mathcal O_C$ in the Schwartz theory of tempered distributions~\cite{Schwartz}. They were introduced by Palamodov~\cite{P1962} (with somewhat different notation) for the case~\eqref{2.6} and were called spaces of type $\mE$. This terminology is quite natural because the notation $\mE(\oR^d)$ was used by Schwartz for the space of all infinitely differentiable functions on $\oR^d$. In the case where $a_n\equiv 1$, we write $\mathcal E^{\{b\}}$ instead of $\mathcal E^{\{b\}}_{(1)}$ and $\breve{\mathcal E}^{\{b\}}$ instead of $\breve{\mathcal E}^{\{b\}}_{(1)}$, which is in agreement with the notation in~\cite{K1973}. Obviously, we have the continuous inclusions \begin{equation} \eab\hookrightarrow \Eab,\qquad \eabp\hookrightarrow \Eabp. \label{2.17} \end{equation} It should be noted that the spaces $\mathcal O_C^{\{M_p\},\{A_p\}}(\oR^d)$ and $\mathcal O_C^{(M_p), (A_p)}(\oR^d)$ considered in~\cite{DV2019} are respectively $\breve{\mathcal E}^{\{M\}}_{(A)}$ and $\breve{\mathcal E}^{(M)}_{\{A\}}$ in our notation, and the symbol classes $\Gamma_s^\infty(\mathbf R^d)$ and $\Gamma_{0,s}^\infty(\mathbf R^d)$ studied in~\cite{CT} coincide respectively with $\breve{\mathcal E}^{\{a\}}_{(a)}$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$, where $a_n=n^{sn}$. The spaces $S_{\{a\}}^{\{a\}}$ and $S_{(a)}^{(a)}$ with $a_n=n^{sn}$ are denoted in~\cite{CT} by $\mathcal S_s(\mathbf R^d)$ and $\Sigma_s(\mathbf R^d)$. We also note that the notation in~\cite{S2019,S2019-2,S2020} is related to the notation used here as follows: $S^b_a\leftrightarrow \Sab$, $\mS^b_a\leftrightarrow \sab$, $E^b_a \leftrightarrow\Eab$, $\mE^b_a\leftrightarrow\Eabp$, $\breve E^b_a \leftrightarrow\eab$, $\breve \mE^b_a\leftrightarrow\eabp$, $a(t)\leftrightarrow w_a(t)$. 
The spaces of type $S$ and type $\mE$ have nice topological properties which follow from the fact that the inclusion maps $S^{b,B}_{a,A}\hookrightarrow S^{b,\bar B}_{a,\bar A}$ and $\mathcal E^{b,B}_{a,\bar A}\hookrightarrow \mathcal E^{b,\bar B}_{a, A}$, where $A<\bar A$ and $B<\bar B$, are compact. This is well known for the spaces of type $S$ and is proved in the same manner as in~\cite{GS2} for $S^\beta_\alpha$. In the case of spaces of type $\mE$, the compactness of the inclusion maps can also be proved analogously, see, e.g., Lemma~2 in~\cite{S2019-2}. It follows that $\sab$ is an (FS)-space (Fr\'echet-Schwartz space) and $\Sab$ belongs to the dual class of (DFS)-spaces (see~\cite{K1967,MV1997} for the basic properties of (FS) and (DFS)-spaces). In particular, these spaces are complete, barrelled, reflexive, and Montel. An important consequence is that the inductive limit $\Sab$ is regular, i.e., every bounded subset of it is contained and bounded in some $S^{b,B}_{a,A}$. We let $\dSab$ and $\dsab$ denote the dual spaces of $\Sab$ and $\sab$ and assume that the duals are endowed with the strong topology. Then the first of them is an (FS)-space and the second is a (DFS)-space. The inductive system of spaces $\mathcal E_{(a)}^{b,B}=\bigcap_{A\to\infty}\mathcal E^{b,B}_{a,A}$, $B>0$, is equivalent to the system $\mathcal E_{(a)}^{b,B+}=\bigcap_{A\to\infty,\epsilon\to0}\mathcal E^{b,B+\epsilon}_{a,A}$, because there are continuous inclusions $\mathcal E_{(a)}^{b,B}\subset\mathcal E_{(a)}^{b,B+}\subset \mathcal E_{(a)}^{b,\bar B}$ for $B<\bar B$. In turn, the inductive system $\mathcal E_{a,A}^{(b)}=\bigcap_{B\to0}\mathcal E^{b,B}_{a,A}$, $A>0$, is equivalent to the system $\mathcal E_{a,A-}^{(b)}=\bigcap_{B\to0, \epsilon\to0} \mathcal E^{b,B}_{a,A-\epsilon}$.
As a consequence, the projective system of the duals $\bigl(\mathcal E_{(a)}^{b,B}\bigr)'$ is equivalent to the system $\bigl(\mathcal E_{(a)}^{b,B+}\bigr)'$ and the projective system $\bigl(\mathcal E_{a,A}^{(b)}\bigr)'$ is equivalent to the system $\bigl(\mathcal E_{a,A-}^{(b)}\bigr)'$. By Lemma~2 in~\cite{S2019-2}, $\mathcal E_{(a)}^{b,B+}$ and $\mathcal E_{a,A-}^{(b)}$ are (FS)-spaces and their strong duals are hence (DFS)-spaces. The representations \begin{equation} \eab=\varinjlim_{B\to\infty}\mathcal E_{(a)}^{b,B+} ,\qquad \eabp=\varinjlim_{A\to0}\mathcal E_{a,A-}^{(b)} \label{2.18} \end{equation} play an essential role in our study. For any topological vector space $E$, we let $\mathcal L(E)$ denote the space of all continuous linear maps $E\to E$, equipped with the topology of uniform convergence on bounded subsets. If $E$ is an (FS)-space or a (DFS)-space, then $\mathcal L(E)$ is complete (see Sect.~39.6 in~\cite{K1979}). If $E$ is a test-function space on $\oR^d$, then a functional $u\in E'$ is called a convolutor for $E$ if the function \begin{equation} (u\ast f)(x)= \langle u,f(x-\cdot)\rangle \notag \end{equation} belongs to $E$ for any $f\in E$ and the map $f\to u\ast f$ is continuous on $E$. In the case of the spaces of type $S$, the continuity condition is automatically satisfied by the closed graph theorem (Theorem~8.5 in Ch.~IV in~\cite{Sch}). The set of all convolutors for $E$ is denoted by $C(E)$ and is equipped with the topology induced by that of $\mathcal L(E)$. The Fourier transformation maps $C\bigl(\Sab\bigr)$ and $C\bigl(\sab\bigr)$ isomorphically onto the spaces of pointwise multipliers for $\Sba$ and $\sba$, which we respectively denote by $M\bigl(\Sba\bigr)$ and $M\bigl(\sba\bigr)$. \section{The duals of spaces of type $\mathscr E$ as spaces of convolutors } \label{S3} Clearly, we have the canonical continuous inclusions \begin{equation} \Sab\hookrightarrow \eab, \qquad \sab\hookrightarrow \eabp. 
\label{3.1} \end{equation} \begin{proposition} \label{P3.1} If the spaces $\Sab$ and $\sab$ are nontrivial, then the inclusion maps~\eqref{3.1} have dense ranges. \end{proposition} This was proved in~\cite{S2019} and we reproduce the proof below in the course of proving Theorems~\ref{T4.1} and~\ref{T4.3}. It follows that the adjoint maps $\deab\to \dSab$ and $\deabp\to \dsab$ are injective. Hence $\deab$ and $\deabp$ are naturally identified with vector subspaces of $\dSab$ and $\dsab$, respectively. \begin{proposition} \label{P3.2} The space $\deab$ is contained in the convolutor space $C\bigl(\Sab\bigr)$ and $\deabp$ is contained in $C\bigl(\sab\bigr)$. \end{proposition} \begin{proof} Standard arguments show that for any $u\in \dSab$ and $f\in \Sab$, the convolution $(u\ast f)(x)$ is infinitely differentiable and $\partial^\beta(u\ast f)=u\ast\partial^\beta f$ (see Lemma~A.4 in~\cite{S2019}). If $u$ belongs to $\deab$, then it is continuous on every space $\mathcal E^{b,B}_{(a)}=\varprojlim_{A\to\infty}\mathcal E^{b,B}_{a,A}$, $B>0$, and is bounded in some norm $\|\cdot\|^A_B$, where $A$ depends on $B$. Hence, \begin{equation} |\partial^\beta(u\ast f)(x)|\le \|u\|^A_B\, \|\partial^\beta f(x-\cdot)\|^A_B. \label{3.2} \end{equation} Let $f\in S^{b,B_0}_{a,A_0}$ and $B\ge HB_0$. Using~\eqref{2.3} applied to $b_n$ and~\eqref{2.11}, we obtain \begin{multline} \|\partial^\beta f(x-\cdot)\|^A_B=\sup_{y,\gamma} \frac{|\partial^{\beta+\gamma}f(x-y)|}{w_a(|y|/A)B^{|\gamma|}b_{|\gamma|}}\\ \le \|f\|_{A_0,B_0} \sup_{y,\gamma} \frac{B_0^{|\beta+\gamma|}b_{|\beta+\gamma|}}{w_a(|y|/A)w_a(|x-y|/A_0)B^{|\gamma|}b_{|\gamma|}} \le K \|f\|_{A_0,B_0} \frac{(H B_0)^{|\beta|}b_{|\beta|}} {w_a(|x|/(A+A_0))}. \label{3.3} \end{multline} It follows from~\eqref{3.2} and~\eqref{3.3} that $u\ast f\in S^{b,HB_0}_{a,A+A_0}$ and \begin{equation} \|u\ast f\|_{A+A_0,HB_0}\le K\|u\|^A_B\, \|f\|_{A_0,B_0}. \label{3.4} \end{equation} Hence, $\deab\subset C\bigl(\Sab\bigr)$. 
The proof holds true for $a_n\equiv 1$. In this case, the supports of the functions $1/w_1(|y|/A)$ and $1/w_1(|x-y|/A_0)$ are disjoint for $|x|> A+A_0$, and $(u\ast f)(x)=0$ for each $x$ in this region. If $u\in\deabp$, then $u$ is continuous on every space $\mathcal E_{a,A}^{(b)}=\varprojlim_{B\to0}\mathcal E^{b,B}_{a,A}$, $A>0$, and is bounded in some norm $\|\cdot\|^A_B$, where $B$ depends on $A$. For $f\in\sab$, the norm $\|f\|_{A_0,B_0}$ is finite for arbitrarily small $A_0$ and $B_0$. Choosing $B_0\le B/H$, we again arrive at~\eqref{3.4}, which implies the inclusion $\deabp\subset C\bigl(\sab\bigr)$. \end{proof} \begin{proposition} \label{P3.3} The inclusion maps \begin{equation} \deab\hookrightarrow C\bigl(\Sab\bigr),\qquad \deabp\hookrightarrow C\bigl(\sab\bigr) \label{3.5} \end{equation} are continuous. \end{proposition} \begin{proof} By the duality between projective and inductive topologies (Sect.~IV.4.5 in~\cite{Sch}), it follows from~\eqref{2.18} that $\deab$ is algebraically identified with $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$. A base of neighborhoods of the origin in $\deab$ is formed by the polars of the bounded subsets of $\eab$, and this topology is finer than the projective limit topology because every bounded subset of $\mathcal E^{(b,B)}_{(a)}$ is bounded in $\eab$. We show that the projective limit topology is in turn finer than the topology induced by $C\bigl(\Sab\bigr)$. Let $Q$ be a bounded set in $\Sab$, let $V$ be a $0$-neighborhood in $\Sab$, and let $W_{Q,V}$ be the set of the operators in $\mathcal L\bigl(\Sab\bigr)$ that map $Q$ into $V$. The family of all sets $W_{Q,V}$ with various $Q$ and $V$ forms a base of $0$-neighborhoods in $\mathcal L\bigl(\Sab\bigr)$. We assert that for every $W_{Q,V}$, there exists a $0$-neighborhood $U$ in $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ such that all operators of convolution by elements of $U$ are contained in $W_{Q,V}$. 
Taking the projective limit $\mathcal E^{b,B+}_{(a)}=\varprojlim_{A\to\infty,\epsilon\to0} \mathcal E^{b,B+\epsilon}_{a,A}$ in the reduced form, i.e., replacing every space $\mathcal E^{b,B+\epsilon}_{a,A}$, $B>0$, with the closure of $\mathcal E^{b,B+}_{(a)}$ in this space and letting $E^{b,B+\epsilon}_{a,A}$ denote this closure, we have $\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'=\varinjlim_{A\to\infty,\epsilon\to0}\bigl(E^{b,B+\epsilon}_{a,A}\bigr)'$. By Theorem~11 in ~\cite{K1967}, this identity holds algebraically and topologically because the inclusion maps $E^{b,B}_{a,\bar A}\hookrightarrow E^{b,\bar B}_{a, A}$, where $A<\bar A$ and $B<\bar B$, are compact. To simplify the notation, we use an injective sequence of spaces that is equivalent to the system $S^{b,B}_{a,A}$. Since the inductive limit $\Sab$ is regular, the set $Q$ is contained in $S_{a,k}^{b,k/H}$ with sufficiently large $k\in\oN$ and is bounded in its norm by a constant $C$. By the definition of the inductive topology, $V$ contains a set of the form $\bigcup_{n\in \oN}\sum_{m\le n}V_m$, where $V_m=\{f\in S_{a,m}^{b,m}\colon \|f\|_{m,m}\le\varepsilon_m\}$ with some $\varepsilon_m>0$. Let $U$ be the intersection of $\deab$ with a $0$-neighborhood in $\bigl(\mathcal E^{b,k+}_{(a)}\bigr)'=\varinjlim_{m\to\infty}\bigl(E^{b,k+1/m}_{a,m}\bigr)'$ which has the form $\bigcup_{n\in \oN}\sum_{m\le n}U_m$, where $U_m=\{u\in \bigl(\mathcal E^{b,k+}_{(a)}\bigr)'\colon \|u\|^{m}_{k+1/m}\le\varepsilon_{k+m}/(KC) \}$. It follows from~\eqref{3.4} that $u\ast f\in S^{b,k}_{a,k+m}$ and $\|u\ast f\|_{k+m,k}\le\varepsilon_{k+m}$ for all $f\in Q$ and $u\in U_m$. Because $\|u\ast f\|_{k+m,k+m}\le \|u\ast f\|_{k+m,k}$, we conclude that the operator of convolution by $u\in U_m$ maps $Q$ into $V_{k+m}$. Therefore, all the operators of convolution by elements of $U$ map $Q$ into $V$, as claimed. We now show that the inclusion map $\varprojlim_{A\to 0}\bigl(\mathcal E_{a,A-}^{(b)}\bigr)'\to C\bigl(\sab\bigr)$ is continuous. 
Each bounded set $Q\subset\sab$ is bounded in the norm of every space $S_{a,1/m}^{b,1/(Hm)}$ by a constant $C_m$. Any $0$-neighborhood $V$ in $\sab$ contains the intersection of $\sab$ with a set of the form $V_{k,\varepsilon}=\{f\in S_{a,1/k}^{b,1/k}\colon \|f\|_{1/k,1/k}\le \varepsilon\}$, where $k\in\oN$ and $\varepsilon>0$. We take $U$ to be the intersection of $\deabp$ with the $0$-neighborhood in $\bigl(\mathcal E^{(b)}_{a,1/k-}\bigr)'$, which is the absolutely convex hull of the union of the sets $U_m=\{u\in \bigl(\mathcal E^{(b)}_{a,1/k-}\bigr)'\colon \|u\|^{1/k-1/m}_{1/m}\le\varepsilon/(KC_m) \}$, $m> k$. It follows from~\eqref{3.4} that $\|u\ast f\|_{1/k, 1/m}\le\varepsilon$ for all $f\in Q$ and $u\in U_m$. Because $\|u\ast f\|_{1/k,1/k}\le \|u\ast f\|_{1/k,1/m}$ for $m> k$, we conclude that the operators of convolution by elements of $U_m$ map $Q$ into $V_{k,\varepsilon}$. The set $V_{k,\varepsilon}$ is absolutely convex, and all the operators of convolution by elements of $U$ hence belong to $W_{Q,V}$. This completes the proof. \end{proof} \section{Complete characterization of the convolutor spaces for the spaces of type $S$} \label{S4} Theorem~1 in~\cite{S2020} establishes that every functional $u\in C\bigl(\Sab\bigr)$ has a unique continuous extension to $\eab$ and every $u\in C\bigl(\sab\bigr)$ extends uniquely to a continuous linear functional on $\eabp$. Therefore, there are natural inclusion maps $C\bigl(\Sab\bigr)\to\deab$ and $C\bigl(\sab\bigr)\to\deabp$. Clearly, their compositions with the respective inclusions~\eqref{3.5} are the identity on $C\bigl(\Sab\bigr)$ and on $C\bigl(\sab\bigr)$. Since the extensions are unique, the compositions of these maps in reverse order are the identity on $\deab$ and on $\deabp$. Hence, the convolutor space $C\bigl(\Sab\bigr)$ consists of the same elements of $\dSab$ as $\deab$ and $C\bigl(\sab\bigr)$ coincides algebraically with $\deabp$. 
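A simple illustration of these identifications: the derivatives of the delta functional are convolutors. Indeed, for $u=\partial^\gamma\delta$ we have $\langle u,h\rangle=(-1)^{|\gamma|}(\partial^\gamma h)(0)$ and, by~\eqref{2.14}, \begin{equation} |\langle u,h\rangle|\le w_a(0)\,B^{|\gamma|}b_{|\gamma|}\,\|h\|^A_B \notag \end{equation} for all $A,B>0$, so that $u$ is continuous on $\eab$ and on $\eabp$. The corresponding convolution operator is just $f\to\partial^\gamma f$, which obviously maps $\Sab$ and $\sab$ continuously into themselves.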
We now show that the extension procedure used in~\cite{S2012-I,S2020} makes clear the relation between the topologies of these spaces. \begin{theorem} \label{T4.1} The space $C\bigl(\sab\bigr)$ of convolutors for $\sab$ is canonically isomorphic to $\varprojlim_{A\to 0}\bigl(\mathcal E_{a,A}^{(b)}\bigr)'$. \end{theorem} \begin{proof} If $u$ belongs to $C\bigl(\sab\bigr)$, then it can be extended to a continuous functional on $\eabp$ in the following way. Let $f_0$ be a function in $\sab$ such that $\int f_0(\xi)d\xi=1$ and let $h\in \eabp$. We set \begin{equation} \langle\tilde u,h\rangle\coloneq\int \langle u,h(\cdot) f_0(\xi-\cdot)\rangle d\xi\equiv\int(u*h_\xi)(\xi) d\xi,\quad \text{where $h_\xi(x)\coloneq h(\xi-x)f_0(x)$}. \label{4.1} \end{equation} The integrand in~\eqref{4.1} is well-defined and continuous in $\xi$ because translations act continuously on $\sab$ and $h$ is a pointwise multiplier of this space by Theorem~2 in~\cite{S2019-2}. We examine the behaviour of $(u*h_\xi)(\xi)$ at infinity. The norm $\|f_0\|_{A_0,B_0}$ is finite for any $A_0,B_0>0$ and there is an $A$ such that $\|h\|^A_B<\infty$ for any $B>0$. Using~\eqref{2.4} and \eqref{2.12}, we obtain \begin{multline} |\partial^\beta_x h_\xi(x)|\le \sum_{\gamma\le\beta}\binom{\beta}\gamma |\partial^\gamma h(\xi-x)\partial^{\beta-\gamma} f_0(x)|\\\le \|h\|^A_B\,\|f_0\|_{A_0,B_0}\sum_{\gamma\le\beta}\binom{\beta}\gamma B^{|\gamma|}B_0^{|\beta-\gamma|}b_{|\gamma|} b_{|\beta-\gamma|}\frac{w_a(|\xi-x|/A)}{w_a(|x|/A_0)} \\\le K\|h\|^A_B\,\|f_0\|_{A_0,B_0} (B+B_0)^{|\beta|}b_{|\beta|}\frac {w_a(2|\xi|/A)w_a(2|x|/A)}{w_a(|x|/A_0)}. \label{4.2} \end{multline} Choosing $A_0\le A/(2H)$ and using~\eqref{2.13}, the $x$-dependent factor in~\eqref{4.2} is estimated as follows \begin{equation} \frac{w_a(2|x|/A)}{w_a(|x|/A_0)}\le \frac{w_a(|x|/HA_0)}{w_a(|x|/A_0)}\le \frac{K}{w_a(|x|/HA_0)}. 
\notag \end{equation} Consequently, $h_\xi\in S^{b,B+B_0}_{a,HA_0}$ and \begin{equation} \|h_\xi\|_{HA_0,B+B_0}\le K\|h\|^A_B\,\|f_0\|_{A_0,B_0} w_a(2|\xi|/A). \label{4.3} \end{equation} Let $U=\{f\in \sab\colon\|f\|_{A/2H,\,1}<1\}$. Since the map $f\to u\ast f$ is continuous on $\sab$, there is a neighborhood $U_1=\{f\in \sab\colon\|f\|_{A_1,B_1}\le\varepsilon\}$ such that $u*f\in U$ for all $f\in U_1$. Choose $A_0$, $B_0$, and $B$ such that $HA_0\le A_1$ and $B+B_0\le B_1$. Then $\|f\|_{A_1,B_1}\le \|f\|_{HA_0,B+B_0}$ and~\eqref{4.3} implies that \begin{equation} \|u\ast h_\xi\|_{A/2H,\,1}\le \varepsilon^{-1}K\|h\|^A_B\,\|f_0\|_{A_0,B_0} w_a(2|\xi|/A). \notag \end{equation} Using~\eqref{2.13} again, we obtain \begin{multline} |(u\ast h_\xi)(\xi)|\le \varepsilon^{-1}K\|h\|^A_B\,\|f_0\|_{A_0,B_0}\frac{w_a(2|\xi|/A)}{w_a(2H|\xi|/A)}\\ \le \varepsilon^{-1}K^2\|h\|^A_B\,\|f_0\|_{A_0,B_0}\frac{1}{w_a(2|\xi|/A)}. \label{4.4} \end{multline} Therefore, the integral in~\eqref{4.1} is absolutely convergent and defines a linear functional $\tilde u$ on $\eabp$. Since the right-hand side of~\eqref{4.4} contains $\|h\|_B^A$, this functional is continuous on every space $\mathcal E^{(b)}_{a,A}$, $A>0$, and therefore on $\eabp$. Now we show that $\tilde u|_{\sab}=u$. If $h\in \sab$, then for any positive $A_0$, $B_0$, $A_1$, and $B_1$, we have the inequality \begin{equation} |\partial_x^\beta(h(x)f_0(\xi-x))|\le \|h\|_{A_1,B_1}\|f_0\|_{A_0,B_0}(B_1+B_0)^{|\beta|}b_{|\beta|}\frac{1}{w_a(|x|/A_1)w_a(|\xi-x|/A_0)}. \label{4.5} \end{equation} If $HA_1\le A_0$, then $1/w_a(|x|/A_1)\le K/w_a^2(|x|/A_0)$ and~\eqref{4.5} coupled with~\eqref{2.12} gives \begin{equation} \|h(\cdot)f_0(\xi-\cdot)\|_{A_0,B_1+B_0}\le K \|h\|_{A_1,B_1}\|f_0\|_{A_0,B_0}\frac{1}{w_a(|\xi|/2A_0)}.
\notag \end{equation} We see that in this case, the integral in~\eqref{4.1} remains absolutely convergent when $u$ is replaced with any functional in $\dsab$ and hence the sequence of Riemann sums \begin{equation} s_n(x)=\sum_{\alpha\in\oZ^d, |\alpha|\le n^2}h(x) f_0(\alpha /n-x)/n^d \label{4.6} \end{equation} for $\int_{\oR^d} h(x)f_0(\xi-x)d\xi$ is weakly Cauchy in $\sab(\oR^d)$. Since $\sab$ is a Montel space, it is weakly sequentially complete and weak sequential convergence implies convergence in its topology. Hence the sequence $s_n(x)$ converges in $\sab$. Its limit cannot be anything other than $h(x)$, because the topology of this space is finer than the topology of simple convergence. Therefore, $\langle \tilde u,h\rangle=\lim_{n\to \infty}\langle u,s_n\rangle=\langle u, h\rangle$ if $h\in\sab$. Similar arguments show that $\sab$ is dense in $\eabp$. Namely, if $h\in\mathcal E^{(b)}_{a,A}$, then for any positive $A_0$, $B_0$, and $B$ we have \begin{equation} |\partial_x^\beta(h(x)f_0(\xi-x))|\le \|h\|^A_B\|f_0\|_{A_0,B_0}(B+B_0)^{|\beta|}b_{|\beta|}\frac{w_a(|x|/A)}{w_a(|\xi-x|/A_0)}, \label{4.7} \end{equation} where $w_a(|x|/A)\le K w_a(H|x|/A)/w_a(|x|/A)$. Choosing $A_0\le A$ and using~\eqref{2.12}, we obtain \begin{equation} \|h(\cdot)f_0(\xi-\cdot)\|^{A/H}_{B+B_0}\le K \|h\|^A_B\|f_0\|_{A_0,B_0}\frac{1}{w_a(|\xi|/2A)}. \label{4.7*} \end{equation} Hence in this case, the integral in~\eqref{4.1} is absolutely convergent for any $u$ in the dual of the Montel space $\mathcal E^{(b)}_{a,A/H-}$ and the sequence~\eqref{4.6} converges to $h$ in this space and, a fortiori, in $\eabp$. This proves Proposition~\ref{P3.1}, or more precisely, its part concerning the inclusion $\sab\hookrightarrow \eabp$ and shows that the continuous extension of $u$ to $\eabp$ is unique. In combination with Proposition~\ref{P3.2}, this also completes the proof of the algebraic isomorphism between $C\bigl(\sab\bigr)$ and $\deabp$.
It remains to show that the map $u\to\tilde u$ from $C\bigl(\sab\bigl)$ to $\varprojlim_{A\to 0}\bigl(\mathcal E_{a,A}^{(b)}\bigr)'$ is continuous, since the continuity of its inverse was proved in the proof of Proposition~\ref{P3.3}. Let $\mathscr B_A$ be the set of all bounded subsets of $\mathcal E^{(b)}_{a,A}$ and $\mathscr B=\bigcup_{A>0}\mathscr B_A$. The polars of sets in $\mathscr B$ form a $0$-neighborhood base for $\varprojlim_{A\to 0}\bigl(\mathcal E_{a,A}^{(b)}\bigr)'$. Therefore it suffices to show that for any given $G\in\mathscr B_A$, there are a bounded set $Q$ and a $0$-neighborhood $V$ in $\sab$ such that $u\ast Q\subset V$ implies $\sup_{h\in G}|\langle\tilde u,h\rangle|\le 1$. It follows from~\eqref{4.3} that the set $Q=\{w_a(2|\xi|/A)^{-1} h_\xi\colon h\in G, \xi\in\oR^d\}$ is bounded in $\sab(\oR^d)$. Let $V=\{f\in \sab\colon \|f\|_{A/2H,1}< \varepsilon\}$ and let $\varepsilon^{-1}=K\int_{\oR^d} w_a(2|\xi|/A)^{-1}d\xi$. If $u\ast Q\subset V$, then \begin{equation} |(u\ast h_\xi)(\xi)|\le \varepsilon\frac{w_a(2|\xi|/A)}{w_a(2H|\xi|/A)}\le \frac{\varepsilon K}{w_a(2|\xi|/A)}. \notag \end{equation} Hence, $|\langle\tilde u,h\rangle|\le\int_{\oR^d}|(u\ast h_\xi)(\xi)|d\xi\le 1$, which completes the proof. \end{proof} We turn to the case of spaces $\Sab$. To prove a theorem similar to Theorem~\ref{T4.1}, we need the following simple lemma. \begin{lemma} \label{L4.2} The weight function $w_a(t)$ satisfies, for any $n\in\oN$, the inequality \begin{equation} t^n w_a(t)\le C_n\, w_a(Ht),\quad t\ge 0. \label{4.8} \end{equation} \end{lemma} \begin{proof} It follows from the definition~\eqref{2.10} and from~\eqref{2.3} that \begin{equation} t^nw_a(t)=\sup_{k\in\oZ_+}\frac{t^{k+n}}{a_k}\le Ka_n\sup_{k\in\oZ_+}\frac{(Ht)^{k+n}}{a_{k+n}}\le Ka_n\, w_a(Ht). \notag \end{equation} Hence, \eqref{4.8} is satisfied with $C_n=Ka_n$. 
\end{proof} \begin{theorem}\label{T4.3} The space $C\bigl(\Sab\bigr)$ of convolutors for $\Sab$ is canonically isomorphic to $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B}_{(a)}\bigr)'$. \end{theorem} \begin{proof} Every $u\in C\bigl(\Sab\bigr)$ can be extended to a functional $\tilde u\in \deab$ in the same manner as given by~\eqref{4.1}, using $f_0\in \Sab$ with $\int f_0(\xi)d\xi=1$. In this case there are $A_0,B_0>0$ such that $\|f_0\|_{A_0,B_0}<\infty$ and there is a $B>0$ such that $\|h\|^A_B<\infty$ for every $A>0$. If $A\ge 2HA_0$, then the inequality~\eqref{4.3} holds. The unit ball of $S^{b,B+B_0}_{a,HA_0}$ is bounded in $\Sab$ and its image under the continuous map $f\to u\ast f$ is also bounded in $\Sab$. Since the inductive limit $\Sab$ is regular, this image is contained and bounded in some $S^{b,B_1}_{a,A_1}$, where $A_1$ and $B_1$ are independent of $A$. Hence there exists a constant $C>0$ such that \begin{equation} \|u\ast h_\xi\|_{A_1,B_1}\le C\|h\|^A_B\, w_a(2|\xi|/A). \label{4.9} \end{equation} In particular, for $A\ge 2HA_1$, we have \begin{equation} |(u\ast h_\xi)(\xi)|\le C\|h\|^A_B\, \frac{w_a(2|\xi|/A)}{w_a(|\xi|/A_1)}\le CK\|h\|^A_B\,\frac{1}{w_a(|\xi|/HA_1)}. \label{4.10} \end{equation} Therefore, the integral in~\eqref{4.1} is absolutely convergent and the functional $\tilde u$ is well defined on $\eab$. Since the right-hand side of~\eqref{4.10} contains $\|h\|_B^A$, this functional is continuous. We now show that $\tilde u$ coincides with $u$ on $\Sab$. If $h\in S^{b,B_1}_{a,A_1}$, then the inequality~\eqref{4.5} holds and, choosing $A'\ge \max(HA_1,A_0)$ and using~\eqref{2.12} and~\eqref{2.13}, we obtain \begin{multline} \|h(\cdot)f_0(\xi-\cdot)\|_{A',B_1+B_0}\le \|h\|_{A_1,B_1}\|f_0\|_{A_0,B_0}\sup_x\frac{w_a(|x|/A')}{w_a(|x|/A_1)w_a(|\xi-x|/A_0)}\\ \le K\|h\|_{A_1,B_1}\|f_0\|_{A_0,B_0}\frac{1}{w_a(|x|/A')w_a(|\xi-x|/A_0)} \le K\|h\|_{A_1,B_1}\|f_0\|_{A_0,B_0} \frac{1}{w_a(|\xi|/2A')}. 
\notag \end{multline} Hence, in this case, the integral in~\eqref{4.1} is absolutely convergent for any $u\in \dSab$ and the sequence~\eqref{4.6} of Riemann sums converges to $h$ in $\Sab$ by the same argument as for $\sab$ because $\Sab$ is also a Montel space. In a similar manner we find that $\Sab$ is dense in $\eab$. Indeed, if $h\in \mathcal E^{b,B}_{(a)}$, then~\eqref{4.7} holds for any $A>0$ and~\eqref{4.7*} holds for $A\ge A_0$. Therefore, in this case, the integral in~\eqref{4.1} is absolutely convergent for any $u$ in the dual of the Montel space $\mathcal E^{b,(B+B_0)+}_{(a)}$, and the sequence~\eqref{4.6} converges to $h$ in this space and, a fortiori, in $\eab$. It remains to show that the map $u\to\tilde u$ from $C\bigl(\Sab\bigr)$ to $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B}_{(a)}\bigr)'$ is continuous, i.e., that for any $B>0$ and every bounded set $G$ in $\mathcal E^{b,B}_{(a)}$, there are a bounded set $Q$ and a $0$-neighborhood $V$ in $\Sab$ such that $u\ast Q\subset V$ implies $\sup_{h\in G}|\langle\tilde u,h\rangle|\le 1$. If $A\ge 2HA_0$, then it follows from~\eqref{4.3} that \begin{equation} \sup_{h\in G}\|h_\xi\|_{HA_0,B+B_0}\le K \|f_0\|_{A_0,B_0}\sup_{h\in G}\|h\|^A_B\, w_a(2|\xi|/A). \label{4.11} \end{equation} Let $\mathrm w(\xi)$ be defined by \begin{equation} \mathrm w(\xi)=\inf_{A\ge 2HA_0}\sup_{h\in G}\|h\|^A_B\, w_a(2|\xi|/A). \notag \end{equation} This function is measurable, locally integrable, and bounded from below by a positive constant. It follows from~\eqref{4.11} that the set of functions $h_\xi(x)/\mathrm w(\xi)$, where $h$ runs over $G$ and $\xi$ runs over $\oR^d$, is bounded in $\Sab(\oR^d)$.
We take this set as $Q$ and note that \begin{equation} \left\{f\in \Sab(\oR^d)\colon \sup_{x\in \oR^d}\left|f(x)\mathrm w(x)(1+|x|^{d+1})\right|<\epsilon\right\} \qquad (\epsilon>0) \label{4.12} \end{equation} is a $0$-neighborhood in $\Sab(\oR^d)$ because by Lemma~\ref{L4.2} we have \begin{multline} \sup_x\left|f(x)\mathrm w(x)(1+|x|^{d+1})\right|\le\sup_{h\in G}\|h\|^A_B\sup_x\left|f(x)(1+|x|^{d+1})w_a(2|x|/A)\right|\le \\ \le \sup_{h\in G}\|h\|^A_B\sup_x\left|f(x)(1+C_{d+1}A^{d+1})w_a(2H|x|/A)\right|\le C'_{d,A}\sup_{h\in G}\|h\|^A_B\,\|f\|_{A/2H,B_1} \notag \end{multline} for any $A\ge2HA_0$ and $B_1>0$; hence the norm $\|f\|_{\mathrm w}=\sup_x|f(x)\mathrm w(x)(1+|x|^{d+1})|$ is weaker than the norm of any space $S_{a,A_1}^{b,B_1}$, $A_1, B_1>0$. Taking for $V$ the neighborhood~\eqref{4.12} with $\epsilon^{-1}=\int_{\oR^d} (1+|\xi|^{d+1})^{-1}d\xi$, we conclude that for each $h\in G$, the inclusion $u\ast Q\subset V$ implies \begin{equation} |\langle\tilde u,h\rangle|\le \int_{\oR^d}|(u\ast h_\xi)(\xi)|d\xi=\int_{\oR^d}\left|\left(u\ast \frac{h_\xi}{\mathrm w(\xi)}\right)\!(\xi)\,\mathrm w(\xi) \right| d\xi<\epsilon \int_{\oR^d} \frac{d\xi}{1+|\xi|^{d+1}}= 1. \notag \end{equation} The proof is somewhat simpler in the particular case $a_n\equiv 1$. Returning to the second line of~\eqref{4.2}, we see that if $A\ge2A_0$ and $|\xi|\le A/2$, then $w_1(|\xi-x|/A)=1$ on the support of $1/w_1(|x|/A_0)$. Hence, the inequality~\eqref{4.3} is replaced by $\|h_\xi\|_{A_0, B+B_0}\le \|h\|^A_B \|f_0\|_{A_0,B_0}$, where $|\xi|\le A/2$. Instead of~\eqref{4.9}, we obtain $\|u\ast h_\xi\|_{A_1,B_1}\le C\|h\|^A_B$ for all $\xi$ in this region. The function $u\ast h_\xi$ has compact support in this case and~\eqref{4.10} holds with $K=1$, $H=1$, the integration in~\eqref{4.1} is over a bounded domain, and $\tilde u$ is well defined and continuous on $\breve{\mathcal E}^{\{b\}}=\breve{\mathcal E}^{\{b\}}_{(1)}$.
An easy adaptation of the above arguments shows that $\tilde u=u$ on $S^{\{b\}}_{\{1\}}=\mathcal D^{\{b\}}$ and $\mathcal D^{\{b\}}$ is dense in $\breve{\mathcal E}^{\{b\}}$. Finally, let $G$ be a bounded set in $\breve{\mathcal E}^{b,B}$. Since $\|h\|^A_B$ increases monotonically with $A$, we have for any $\xi$ \begin{equation} \sup_{h\in G}\|h_\xi\|_{A_0,B+B_0}\le \|f_0\|_{A_0,B_0}\sup_{h\in G}\|h\|^{2A_0+ 2|\xi|}_B. \notag \end{equation} Setting $\mathrm w(\xi)=\sup_{h\in G}\|h\|^{2A_0+ 2|\xi|}_B$, we see that $Q=\{h_\xi(x)/\mathrm w(\xi): h\in G, \xi\in \oR^d\}$ is bounded in $\mathcal D^{\{b\}}(\oR^d)$. The set defined by~\eqref{4.12} is a $0$-neighborhood in this space because $S^{b, B_1}_{\{1\}, A_1}$ consists of functions supported in $\{x\in\oR^d \colon |x|\le A_1\}$ and is continuously embedded into the normed space with the norm $\|\cdot\|_{\mathrm w}$. Taking for $V$ this set, we conclude as before that $u\ast Q\subset V$ implies $\sup_{h\in G}|\langle\tilde u,h\rangle|\le1$. The proof is complete. \end{proof} \begin{corollary} \label{C4.4} The spaces $C\bigl(\Sab\bigr)$ and $C\bigl(\sab\bigr)$ are complete and semi-reflexive. \end{corollary} This directly follows from the heredity properties of projective limits (Sect.~IV.5.8 in~\cite{Sch}). The same conclusion can be made taking into account that $C\bigl(\Sab\bigr)$ and $C\bigl(\sab\bigr)$ are closed subspaces of $\mathcal L\bigl(\Sab\bigr)$ and $\mathcal L\bigl(\sab\bigr)$, respectively, and that $\mathcal L(E)$ is complete and nuclear (and therefore semi-reflexive) for any nuclear (FS) or (DFS)-space $E$ (Sect.~39.6 in~\cite{K1979} and Sect.~IV.9.7 in~\cite{Sch}). \begin{corollary} \label{C4.5} The space $\eab$ is canonically isomorphic to the strong dual of $C\bigl(\Sab\bigr)$ and $\eabp$ is canonically isomorphic to the strong dual of $C\bigl(\sab\bigr)$. 
\end{corollary} \begin{proof} The projective limit $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ is reduced because $\Sab$ is contained and dense in each of the spaces $\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$, $B>0$. Indeed, the map $ \mathcal E^{b,B+}_{(a)}\to \dSab \colon h\to \left(f\to\int h(x)f(x)dx\right)$ is the adjoint of the natural inclusion $\Sab\to\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ with respect to the dualities $\left\langle \Sab, \dSab\right\rangle$ and $\left\langle\bigl(\mathcal E^{b,B+}_{(a)}\bigl)', \mathcal E^{b,B+}_{(a)}\right\rangle$ and is injective because $\Sab$ has sufficiently many functions. Therefore $\Sab$ is weakly dense in $\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ and is also strongly dense because $\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ is a (DFS)-space. By Theorem~4.4 in Ch.~IV of~\cite{Sch}, the Mackey dual of $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ is identified with $\varinjlim_{B\to\infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)^{\prime\prime}$, i.e., with $\eab$, because the spaces $\mathcal E^{b,B+}_{(a)}$ are reflexive. Since $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ is semi-reflexive, the strong topology on its dual coincides with the Mackey topology (Theorem~5.5 in Ch.~IV of~\cite{Sch}). Hence the strong dual of $\varprojlim_{B\to \infty}\bigl(\mathcal E^{b,B+}_{(a)}\bigr)'$ is identified with $\eab$ and the adjoint of the map $u\to\tilde u$ in Theorem~\ref{T4.3} is an algebraic and topological isomorphism of $\eab$ onto $C^{\,\prime}\bigl(\Sab\bigr)$. In the same way we see that $\eabp$ is isomorphic to the strong dual of $C\bigl(\sab\bigr)$. \end{proof} The statement of Corollary~\ref{C4.5} is an analog of the well-known fact established by Grothendieck (Sect.~4.4 in Ch.~II of~\cite{Grot1955}) that the strong dual of the space of convolutors of the Schwartz space $S(\oR^d)$ is isomorphic to the space $\mathcal O_C(\oR^d)$ of very slowly increasing smooth functions. 
Under an additional condition on the defining sequences $a$ and $b$, it follows from Corollary~\ref{C4.5} combined with Theorem~1 in~\cite{S2019} that the spaces $\eab$ and $\eabp$ are invariant under an important class of ultradifferential operators. Following Komatsu~\cite{K1973}, we write $b_n\subset a_n$ if there exist constants $C$ and $L$ such that \begin{equation} b_n\le C L^n a_n\qquad \forall n\in \oZ_+. \label{4.13} \end{equation} \begin{corollary}\label{C4.6} Let $\mathcal Q=(\mathcal Q^{jk})$ be a $d\times d$ matrix with real entries and let $b_n\subset a_n$. If the space $\Sab(\oR^d)$ is nontrivial, then the operator $e^{i\mathcal Q^{jk}\partial_j\partial_k}$ is a homeomorphism of $\eab(\oR^d)$ onto itself, and if $\sab(\oR^d)$ is nontrivial, then $e^{i\mathcal Q^{jk}\partial_j\partial_k}$ is a homeomorphism of $\eabp(\oR^d)$ onto itself. \end{corollary} \begin{proof} The operator $e^{i\mathcal Q^{jk}\partial_j\partial_k}$ is defined via the inverse Fourier transform of $e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}$, as in the case with the Schwartz space $S(\oR^d)$ considered in Sect.~7.6 in~\cite{H1}. In view of Corollary~\ref{C4.5}, the statement of Corollary~\ref{C4.6} amounts to saying that the multiplication by $e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}$ is a self-homeomorphism of the multiplier spaces $M\bigl(\Sba\bigr)$ and $M\bigl(\sba\bigr)$. This in turn follows from Theorem~1 in~\cite{S2019} which states that under condition~\eqref{4.13} the function $e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}$ is a pointwise multiplier of $\Sba$ and $\sba$. \end{proof} For the case of spaces $S_\beta^\alpha$, $\beta\le\alpha$, defined by sequences of the form~\eqref{2.6}, a simple proof of the fact that $e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}$ belongs to $M(S_\beta^\alpha)$ was given in the course of proving Theorem~1 in~\cite{S2007}. 
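To make the sign convention in this definition explicit, we note (assuming the normalization of the Fourier transform in which $F[\partial_j u](\zeta)=i\zeta_j\widehat u(\zeta)$) that \begin{equation} e^{i\mathcal Q^{jk}\partial_j\partial_k}u= F^{-1}\bigl[e^{i\mathcal Q^{jk}(i\zeta_j)(i\zeta_k)}\,\widehat u\bigr]= F^{-1}\bigl[e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}\,\widehat u\bigr], \notag \end{equation} which is consistent with the proof of Corollary~\ref{C4.6}: after Fourier transformation, the action of $e^{i\mathcal Q^{jk}\partial_j\partial_k}$ on $\eab$ and $\eabp$ reduces to multiplication by $e^{-i\mathcal Q^{jk}\zeta_j\zeta_k}$ on the corresponding multiplier spaces.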
Similar invariance properties of the symbol spaces $\Gamma_s^\infty$ and $\Gamma_{0,s}^\infty$ (which coincide respectively with $\breve{\mathcal E}^{\{a\}}_{(a)}$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$ for $a_n=n^{sn}$, as noted in Sect.~\ref{S2}) were also proved by different methods in Theorem~4.1 of~\cite{CT} and in Proposition~4.4 of~\cite{CW}. \section{The Moyal multiplier algebras for the spaces of type $S$} \label{S5} Theorem~2 in~\cite{S2019} shows that if~\eqref{4.13} is satisfied, then $\Sab(\oR^{2d})$ and $\sab(\oR^{2d})$ are topological algebras under the twisted multiplication~\eqref{1.1}. (For $S^\beta_\alpha$, $\beta\le\alpha$, this has been proved in~\cite{S2007}.) The Fourier transform converts the Weyl-Moyal product into the twisted convolution \begin{equation} (\widehat f\ast_{\hbar} \widehat g)(\zeta)= \int_{\oR^{2d}}\widehat f(\zeta') \widehat g(\zeta-\zeta')e^{(i\hbar/2)\zeta\cdot J\zeta'}d\zeta' \label{5.1} \end{equation} (multiplied by $(2\pi)^{-d}$). Under the same condition~\eqref{4.13}, the spaces $\Sba=F\bigl[\Sab\bigr]$ and $\sba=F\bigl[\sab\bigr]$ are topological algebras with the twisted convolution as multiplication. We will consider them in parallel with $\bigl(\Sab,\star_\hbar\bigr)$ and $\bigl(\sab,\star_\hbar\bigr)$. The products $f\star_\hbar u$ and $u\star_\hbar f$ of a function $f\in \Sab$ with an element $u$ of the dual space $\dSab$ are defined by \begin{equation} \langle f\star_\hbar u,h\rangle\coloneq \langle u,h\star_\hbar f\rangle ,\qquad \langle u\star_\hbar f,h\rangle\coloneq \langle u,f\star_\hbar h\rangle\qquad \forall h\in \Sab \label{5.2} \end{equation} and analogously for the dual pair $\bigl\langle\sab, \dsab \bigr\rangle$. Since the right-hand sides in~\eqref{5.2} are linear and continuous in $h$, these products are well defined as elements of $\dSab$.
The formulas~\eqref{5.2} agree with the definition of the operation $\star_\hbar$ in $\Sab$ due to the identity \begin{equation} \int(f\star_\hbar g)(x)dx=\int f(x)g(x)dx, \label{5.3} \end{equation} which is equivalent to the obvious identity $(\widehat f\ast_\hbar\widehat g)(0)=(\widehat f\ast\widehat g)(0)$. Indeed, if $f,g,h\in \Sab$, then using~\eqref{5.3} and the associativity of the Weyl-Moyal product, we obtain \begin{equation} \langle f\star_\hbar g, h\rangle\equiv\int(f\star_\hbar g)(x) h(x)dx= \int(h\star_\hbar(f\star_\hbar g))(x) dx = \int((h\star_\hbar f)\star_\hbar g)(x) dx=\langle g,h\star_\hbar f\rangle. \notag \end{equation} Under condition~\eqref{4.13}, the twisted convolution product of $g\in \Sba$ and $v\in \dSba$ can also be defined by duality, namely: \begin{equation} \langle v\ast_\hbar g,h\rangle\coloneq \langle v,\check{g}\ast_\hbar h\rangle,\quad \langle g\ast_\hbar v,h\rangle\coloneq \langle v,h\ast_\hbar\check{g} \rangle \qquad \forall h\in \Sba, \notag \end{equation} where $\check{g}(\zeta)=g(-\zeta)$. Then we clearly have the relations \begin{equation} \widehat{u\star_\hbar f} = (2\pi)^{-d}\widehat{u}\ast_\hbar\widehat{f},\qquad \widehat{f\star_\hbar u} = (2\pi)^{-d}\widehat{f}\ast_\hbar\widehat{u}. \notag \end{equation} The spaces of left and right multipliers for the algebra $(\Sab,\star_\hbar)$ are defined as follows: \begin{gather} \mM_{\hbar,L}\bigl(\Sab\bigr)\coloneq \left\{u\in \dSab\colon u\star_\hbar f\in \Sab\quad \forall f\in \Sab\right\},\label{5.4} \\ \mM_{\hbar,R}\bigl(\Sab\bigr)\coloneq\left \{u\in \dSab\colon f\star_\hbar u \in \Sab\quad \forall f\in\Sab\right\}. \label{5.5} \end{gather} The definitions of $\mM_{\hbar,L}\bigl(\sab\bigr)$ and $\mM_{\hbar,R}\bigl(\sab\bigr)$ are similar.
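A simple illustration of the definitions~\eqref{5.4} and~\eqref{5.5}: the constant function $1$, regarded as an element of $\dSab$, is a two-sided multiplier. Indeed, by~\eqref{5.2} and~\eqref{5.3}, \begin{equation} \langle 1\star_\hbar f,h\rangle=\langle 1,f\star_\hbar h\rangle=\int (f\star_\hbar h)(x)\,dx=\int f(x)h(x)\,dx=\langle f,h\rangle\qquad \forall h\in \Sab, \notag \end{equation} so that $1\star_\hbar f=f$, and an analogous computation gives $f\star_\hbar 1=f$. Thus $1$ belongs to both spaces~\eqref{5.4} and~\eqref{5.5} and serves as the unit for the $\star_\hbar$-multiplication.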
The mappings $f\to u\star_\hbar f$ and $f\to f\star_\hbar u$ of $\Sab$ into itself and of $\sab$ into itself are continuous by the closed graph theorem, and the multiplier spaces are naturally endowed with the respective topologies induced by $\mathcal L\bigl(\Sab\bigr)$ and $\mathcal L\bigl(\sab\bigr)$. Theorem~3 in~\cite{S2019} establishes that under condition~\eqref{4.13}, the spaces ${\mM}_{\hbar,L}\bigl(\Sab\bigr)$, ${\mM}_{\hbar,R}\bigl(\Sab\bigr)$, ${\mM}_{\hbar,R}\bigl(\sab\bigr)$, and ${\mM}_{\hbar,L}\bigl(\sab\bigr)$ are unital algebras with separately continuous multiplication $\star_\hbar$. (For the case of spaces $S^\beta_\alpha$, $\beta\le\alpha$, this was proved in~\cite{S2011}.) The Fourier transformation maps ${\mM}_{\hbar,L}\bigl(\Sab\bigr)$ and ${\mM}_{\hbar,R}\bigl(\Sab\bigr)$ respectively onto the algebras of left and right twisted convolution multipliers \begin{gather} \mathcal C_{\hbar,L}\bigl(\Sba\bigr)= \left\{v\in \dSba\colon v\ast_\hbar g\in \Sba\quad \forall g\in \Sba\right\}, \label{5.6} \\ \mathcal C_{\hbar,R}\bigl(\Sba\bigr)=\left \{v\in \dSba\colon g\ast_\hbar v \in \Sba\quad \forall g\in \Sba\right\}. \label{5.7} \end{gather} Analogously, $\widehat{\mM}_{\hbar,L}\bigl(\sab\bigr)=\mathcal C_{\hbar,L}\bigl(\sba\bigr)$ and $\widehat{\mM}_{\hbar,R}\bigl(\sab\bigr)=\mathcal C_{\hbar,R}\bigl(\sba\bigr)$. \begin{lemma} \label{L5.1} Let $b_n\subset a_n$. The algebras $\bigl(\Sab,\star_\hbar\bigr)$, $\bigl(\sab,\star_\hbar\bigr)$, $\bigl(\Sba,\ast_\hbar\bigr)$, and $\bigl(\sba,\ast_\hbar\bigr)$ have sequential approximate identities. \end{lemma} \begin{proof} Since the algebra $\bigl(\Sab,\star_\hbar\bigr)$ is isomorphic, via the Fourier transform, to the algebra $\bigl(\Sba,\ast_\hbar\bigr)$ and $\bigl(\sab,\star_\hbar\bigr)$ is isomorphic to $\bigl(\sba,\ast_\hbar\bigr)$, it suffices to consider the case of twisted convolution.
Every nontrivial space $\Sba(\oR^{2d})$ contains a function $e_1(\zeta)$ such that $\int_{\oR^{2d}} e_1(\zeta)d\zeta=1$ and $e_1(\zeta)\ge 0$, because it is an algebra under pointwise multiplication. We claim that the sequence $e_n(\zeta)= n^{2d} e_1(n\zeta)$, $n\in\oN$, is an approximate identity for $\bigl(\Sba,\ast_\hbar\bigr)$, i.e., that for any $g\in \Sba$, the limit relations $e_n\ast_\hbar g\to g$ and $g\ast_\hbar e_n\to g$ hold in the topology of $\Sba$ as $n\to\infty$. Clearly, $(e_n\ast_\hbar g)(\zeta)\to g(\zeta)$ and $(g\ast_\hbar e_n)(\zeta)\to g(\zeta)$ at every $\zeta$ because $e_n$ is a delta-like sequence. We show that the sequence $e_n\ast_\hbar g$ is bounded in $\Sba$. Using~\eqref{2.4} and the inequality $t^k\le {A'}^k a_k\, w_a(t/A')$, valid for any $A'>0$, and also the inequality $w_a(t)\le Cw_b(Lt)$ following from~\eqref{4.13}, we obtain \begin{multline} |\partial^\alpha(e_n\ast_\hbar g)(\zeta)|\le \int_{\oR^{2d}}e_n(\zeta')\left|\partial^\alpha_\zeta \left(e^{(i\hbar/2)\zeta\cdot J\zeta'} g(\zeta-\zeta')\right)\right|d\zeta'\le \\ \le \|g\|_{A,B}\sum_{\gamma\le\alpha}{\alpha\choose\gamma} A^{|\alpha-\gamma|}a_{|\alpha-\gamma|} \int_{\oR^{2d}}\frac{e_n(\zeta')\,(\hbar|\zeta'|/2)^{|\gamma|}}{w_b(|\zeta-\zeta'|/B)}d\zeta' \le \\ \le C \|g\|_{A,B} (A+A'\hbar/2)^{|\alpha|} a_{|\alpha|} \int_{\oR^{2d}}\frac{e_n(\zeta')\,w_b(L|\zeta'|/A')}{w_b(|\zeta-\zeta'|/B)}d\zeta'. \label{5.8} \end{multline} The function $e_1$ belongs to $S^{a, A_1}_{b,B_1/H}$ with sufficiently large $A_1$ and $B_1$ and by~\eqref{2.11} we have \begin{equation} \frac{1}{w_b(|\zeta-\zeta'|/B)}\le\frac{w_b(|\zeta'|/(HB_1))}{w_b(|\zeta|/(B+HB_1))}. \label{5.9} \end{equation} Let $A'=LHB_1$. Then it follows from~\eqref{5.8}, \eqref{5.9} and~\eqref{2.13} that \begin{equation} |\partial^\alpha(e_n\ast_\hbar g)(\zeta)|\le CK \|g\|_{A,B} \frac{(A+LHB_1\hbar/2)^{|\alpha|} a_{|\alpha|} }{w_b(|\zeta|/(B+HB_1))}\int_{\oR^{2d}} e_n(\zeta')\,w_b(|\zeta'|/B_1)d\zeta'.
\label{5.10} \end{equation} Since $e_n(\zeta')\le C_n/ w_b(H|\zeta'|/B_1)$, the integral on the right-hand side is finite for any $n$ in view of~\eqref{2.13} and tends to $w_b(0)=1$ as $n\to\infty$. Therefore, the sequence $e_n\ast_\hbar g$ is contained and bounded in $S^{a, A_2}_{b,B_2}$, where $A_2=A+LHB_1\hbar/2$ and $B_2=B+HB_1$. This sequence has at least one limit point in the Montel space $\Sba$. The topology of $\Sba$ is finer than the topology of simple convergence, so $g$ is the only possible limit point, and hence $e_n\ast_\hbar g$ converges to $g$ in $\Sba$. Similarly $g\ast_\hbar e_n\to g$ in $\Sba$. The proof is simpler in the case where $b_n\equiv 1$. Then $e_1\in\mathcal D^{\{a\}}=S^{\{a\}}_{\{1\}}$ is supported in a compact set $\{x \colon|x|\le B_1\}$ and the integral in the second line of~\eqref{5.8} is clearly less than $(B_1\hbar/2)^{|\gamma|}a_{|\gamma|} \int_{|\zeta-\zeta'|\le B} e_n(\zeta')d\zeta'$, which immediately implies the boundedness of the sequence $e_n\ast_\hbar g$ in $\mathcal D^{\{a\}}$. In the case of spaces $\sba$, the norm $\|g\|_{A,B}$ is finite for any $A,B>0$, and $e_1\in S^{a, A_1}_{b,B_1/H}$ for arbitrarily small $A_1$ and $B_1$. Hence, the same estimate~\eqref{5.10} shows that the sequence $e_n\ast_\hbar g$ is bounded in $\sba$. The rest of the proof is the same as for $\Sba$, because $\sba$ is also a Montel space. \end{proof} The above proof generalizes and simplifies a proof given in~\cite{S2011} for $(S^\beta_\alpha,\star_\hbar)$, $\beta\le\alpha$. \begin{theorem}\label{T5.2} The algebra $\mM_{\hbar,L}\bigl(\Sab\bigr)$ is canonically identified with the closure in $\mathcal L\bigl(\Sab\bigr)$ of the set of all operators of the left $\star_\hbar$-multiplication by elements of $\Sab$. This closure consists of all $V\in\mathcal L\bigl(\Sab\bigr)$ such that \begin{equation} V(f\star_\hbar g)=V(f)\star_\hbar g\qquad \forall f,g\in \Sab.
\label{5.11} \end{equation} A similar statement holds for $\mM_{\hbar,R}\bigl(\Sab\bigr)$, with the replacement of the left $\star_\hbar$-multiplication by the right $\star_\hbar$-multiplication and with the condition $V(f\star_\hbar g)=f\star_\hbar V(g)$ instead of~\eqref{5.11}. Analogous statements are true for $\mM_{\hbar,L}\bigl(\sab\bigr)$ and $\mM_{\hbar,R}\bigl(\sab\bigr)$. \end{theorem} \begin{proof} Let $u\in \mM_{\hbar,L}\bigl(\Sab\bigr)$ and let $L_u$ denote the map $f\to u\star f$ from $\Sab$ to itself. (For brevity, we temporarily suppress the index $\hbar$ in the notation of the star product.) Using an approximate identity $e_n$ for $\bigl(\Sab,\star\bigr)$ and passing to the limit in $\langle u,e_n\star f\rangle=\langle u\star e_n,f\rangle$, we find that the map $u\to L_u$ of $\mM_{\hbar,L}\bigl(\Sab\bigr)$ to $\mathcal L\bigl(\Sab\bigr)$ is injective. It follows from the definition~\eqref{5.2} and from the associativity of the $\star$-multiplication in $\Sab$ that \begin{equation} u\star(f\star g)=(u\star f)\star g,\qquad (f\star g)\star u=f\star(g\star u). \label{5.12} \end{equation} In terms of $L_u$, the first of these relations takes the form $L_u(f\star g)=L_u(f)\star g$. On the other hand, to every $V\in \mathcal L\bigl(\Sab\bigr)$ there corresponds a unique $v\in \dSab$ such that \begin{equation} \langle v,f\rangle=\int V(f)\,dx. \notag \end{equation} If $V$ satisfies~\eqref{5.11}, then~\eqref{5.2} and~\eqref{5.3} give \begin{equation} \langle v\star f, g\rangle=\langle v,f\star g\rangle=\int V(f\star g)\,dx=\int (V(f)\star g)\,dx=\int V(f)\, g\,dx. \notag \end{equation} Hence, $v\star f=V(f)$, $v\in\mM_{\hbar,L}\bigl(\Sab\bigr)$, and $L_v=V$. The relation $V(e_n\star f)= V(e_n)\star f$ implies that the sequence of operators of the left $\star$-multiplication by $V(e_n)\in \Sab$ converges pointwise to $V$. Since $\Sab$ is barrelled, the convergence is uniform on every precompact subset of $\Sab$ by the Banach-Steinhaus theorem (Theorem~4.6 in Ch.~III of~\cite{Sch}).
Since, further, $\Sab$ is a Montel space, every bounded subset of $\Sab$ is precompact, and we conclude that the sequence in question converges to $V$ in the topology of $\mathcal L\bigl(\Sab\bigr)$. On the other hand, if $V\in \mathcal L\bigl(\Sab\bigr)$ is the limit of a net of operators of the left $\star$-multiplication by $h_\nu\in \Sab$, then \begin{equation} V(f\star g)=\lim_\nu h_\nu\star(f\star g)=\lim_\nu (h_\nu\star f)\star g= V(f)\star g \notag \end{equation} and hence $V$ satisfies~\eqref{5.11}. The case of right multipliers is treated similarly, using the second of relations~\eqref{5.12}. The same arguments apply to $\mM_{\hbar,L}\bigl(\sab\bigr)$ and $\mM_{\hbar,R}\bigl(\sab\bigr)$, which completes the proof. \end{proof} \begin{remark}\label{R5.3} Similar theorems hold for $\mathcal C_{\hbar,L}\bigl(\Sba\bigr)$, $\mathcal C_{\hbar,L}\bigl(\sba\bigr)$, $\mathcal C_{\hbar,R}\bigl(\Sba\bigr)$, and $\mathcal C_{\hbar,R}\bigl(\sba\bigr)$. These algebras can also be characterized in another way. Proposition~2 in~\cite{S2012-I} shows that $\mathcal C_{\hbar,L}\bigl(\Sba\bigr)$ and $\mathcal C_{\hbar,L}\bigl(\sba\bigr)$ can be identified with the respective sets of those operators in $\mathcal L\bigl(\Sba\bigr)$ and in $\mathcal L\bigl(\sba\bigr)$ that commute with the twisted translations $\tau_\xi\colon g(\zeta)\to e^{(i\hbar/2)\xi\cdot J\zeta}g(\zeta-\xi)$, $\xi\in \oR^{2d}$. Analogous statements are valid for $\mathcal C_{\hbar,R}\bigl(\Sba\bigr)$ and $\mathcal C_{\hbar,R}\bigl(\sba\bigr)$, but with $\bar\tau_\xi\colon g(\zeta)\to e^{-(i\hbar/2)\xi\cdot J\zeta}g(\zeta-\xi)$ in place of $\tau_\xi$. Theorem~\ref{T5.2} and the above characterizations hold true at $\hbar=0$, i.e., in the case of pointwise multiplication and ordinary convolution. \end{remark} In the sequel, we consider the spaces of two-sided multipliers \begin{gather} \hspace{-1.1mm}\mM_\hbar\bigl(\Sab\bigr)=\mM_{\hbar,L}\bigl(\Sab\bigr)\bigcap \mM_{\hbar,R}\bigl(\Sab\bigr),\quad\!\!\!
\mM_\hbar\bigl(\sab\bigr)=\mM_{\hbar, L}\bigl(\sab\bigr)\bigcap\mM_{\hbar, R}\bigl(\sab\bigr)\label{5.13} \\ {\mathcal C}_\hbar\bigl(\Sba\bigr)={\mathcal C}_{\hbar,L}\bigl(\Sba\bigr)\bigcap {\mathcal C}_{\hbar,R}\bigl(\Sba\bigr),\quad {\mathcal C}_\hbar\bigl(\sba\bigr)={\mathcal C}_{\hbar, L}\bigl(\sba\bigr)\bigcap{\mathcal C}_{\hbar, R}\bigl(\sba\bigr). \label{5.14} \end{gather} The space $\mM_\hbar\bigl(\Sab\bigr)$ is naturally endowed with the initial topology with respect to the inclusion maps $\mM_\hbar\bigl(\Sab\bigr)\to \mM_{\hbar,L}\bigl(\Sab\bigr)$ and $\mM_\hbar\bigl(\Sab\bigr)\to \mM_{\hbar,R}\bigl(\Sab\bigr)$. The spaces $\mM_\hbar\bigl(\sab\bigr)$, ${\mathcal C}_\hbar\bigl(\Sba\bigr)$, and ${\mathcal C}_\hbar\bigl(\sba\bigr)$ are topologized in the same manner. All these spaces are unital involutive algebras with separately continuous multiplication (for more detail, see~\cite{S2019} and also~\cite{S2011} for the case of spaces $S_\alpha^\beta$). We note that multiplication in $\mM_\hbar\bigl(\Sab\bigr)$ and in $\mM_\hbar\bigl(\sab\bigr)$ can be defined by either of the two formulas \begin{equation} \langle u\star v,f\rangle\coloneq\langle u, v\star f\rangle,\qquad \langle u\star v,f\rangle\coloneq\langle v,f\star u\rangle. \notag \end{equation} Indeed, replacing $f$ by $e_n\star f$ and using~\eqref{5.12} and then~\eqref{5.2}, we can write their right-hand sides as $\int (f\star u)(v\star e_n) dx$. Passing to the limit as $n\to\infty$ and using the continuity of the maps $f\to v\star f$ and $f\to f\star u$, we see that these definitions are equivalent. \section{Inclusion relations between the Moyal multiplier algebras and spaces of type $\mathscr E$} \label{S6} Along with the inclusions~\eqref{3.1}, we have the continuous inclusions \begin{equation} \Sba\hookrightarrow \Eba,\qquad \sba \hookrightarrow \Ebap, \label{6.1} \end{equation} where $\Eba$ and $\Ebap$ are defined by~\eqref{2.15}. 
(Recall that the upper index determines the smoothness of the space elements, and the lower index determines their behavior at infinity.) Lemmas~3 and 4 in~\cite{S2019-2} show that these inclusions are dense. Therefore, $\dEba$ and $\dEbap$ are naturally identified with the respective vector subspaces of $\dSba$ and $\dsba$. It follows from~\eqref{2.17} that $\dEba\subset\deba$ and $\dEbap\subset\debap$. The noncommutative deformation of convolution violates the inclusion relations~\eqref{3.5}, but Theorem~4 in~\cite{S2019-2} shows that under condition~\eqref{4.13}, the inclusions \begin{equation} \dEba\subset\mathcal C_\hbar\bigl(\Sba\bigr),\qquad \dEbap \subset \mathcal C_\hbar\bigl(\sba\bigr) \label{6.2} \end{equation} are valid. They are the starting point for deriving other inclusion relationships in this section. \begin{theorem}\label{T6.1} The inclusions~\eqref{6.2} are continuous. \end{theorem} \begin{proof} As in the case of spaces~\eqref{2.16}, it is useful to represent $\Eba$ and $\Ebap$ as limits of families of spaces with nice topological properties, namely: \begin{equation} \Eba=\varprojlim_{B\to\infty}\mathcal E^{\{a\}}_{b,B+},\qquad \Ebap=\varprojlim_{A\to0} \mathcal E^{a,A-}_{\{b\}}, \label{6.3} \end{equation} where \begin{equation} \mathcal E^{\{a\}}_{b,B+}\coloneq\varinjlim\limits_{A\to\infty,\epsilon\to0}\mathcal E_{b,B+\epsilon}^{a,A}, \qquad \mathcal E^{a,A-}_{\{b\}}\coloneq\varinjlim\limits_{B\to0,\epsilon\to0}\mathcal E^{a,A-\epsilon}_{b,B} . \notag \end{equation} By Lemma~2 in~\cite{S2019-2}, $\mathcal E^{\{a\}}_{b,B+}$ and $\mathcal E^{a,A-}_{\{b\}}$ are (DFS)-spaces. It follows that the projective limits~\eqref{6.3} are semi-reflexive. Furthermore, Lemmas~3 and 4 in~\cite{S2019-2} show that they are reduced. The duals $\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'$ and $\bigl(\mathcal E^{a,A-}_{\{b\}}\bigr)'$ are (FS)-spaces and therefore Mackey spaces. 
Using, as in the proof of Corollary~\ref{C4.5}, the duality between projective and inductive limits, we conclude that \begin{equation} \dEba= \varinjlim\limits_{B\to\infty}\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)',\qquad \dEbap= \varinjlim\limits_{A\to0} \bigl(\mathcal E^{a,A-}_{\{b\}}\bigr)', \notag \end{equation} because the semi-reflexivity of $\dEba$ and $\dEbap$ implies that the strong topology on $\dEba$ and $\dEbap$ coincides with the Mackey topology. It now suffices to show that the maps $\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'\to {\mathcal C}_\hbar\bigl(\Sba\bigr)$ and $\bigl(\mathcal E^{a,A-}_{\{b\}}\bigr)'\to {\mathcal C}_\hbar\bigl(\sba\bigr)$ are continuous for every $B>0$ and every $A>0$. We note that for any fixed $g\in \Sba$, the graphs of the maps \begin{equation} \bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'\to \Sba\colon v\to v\ast_\hbar g ,\qquad \bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'\to \Sba\colon v\to g\ast_\hbar v \label{6.4} \end{equation} are closed. Indeed, if $v_\nu$ is a net in $\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'$ such that $v_\nu\to v\in\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'$ and $v_\nu\ast_\hbar g\to f\in \Sba$, then for any $h\in \Sba$, we have \begin{equation} \int f(\zeta) h(\zeta)d\zeta= \lim_\nu \langle v_\nu\ast_\hbar g, h\rangle= \lim_\nu\langle v_\nu,\check{g}\ast_\hbar h\rangle=\langle v, \check{g}\ast_\hbar h\rangle=\langle v\ast_\hbar g, h\rangle, \notag \end{equation} hence $f=v\ast_\hbar g$. Consequently, the maps~\eqref{6.4} are continuous by the closed graph theorem. The rest of the proof is similar to that of Theorem~\ref{T5.2}. Let $v_n$ be a sequence in $\bigl(\mathcal E^{\{a\}}_{b,B+}\bigr)'$ converging to zero. Then $v_n\ast_\hbar g\to0$ and $g\ast_\hbar v_n\to0$ for any $g\in \Sba$ and the convergence is uniform on every precompact subset of $\Sba$ by the Banach-Steinhaus theorem. Since $\Sba$ is a Montel space, every bounded subset of $\Sba$ is precompact.
Hence, $v_n\to 0$ in the topology of ${\mathcal C}_\hbar\bigl(\Sba\bigr)$. Analogous arguments show that the second of inclusions~\eqref{6.2} is also continuous, which completes the proof. \end{proof} It is clear from the definitions~\eqref{2.14}--\eqref{2.16} that the Palamodov spaces $\Eab$ and $\eab$ are naturally embedded in $\dSab$, whereas $\mathcal E^{(b)}_{\{a\}}$ and $\breve{\mathcal E}^{(b)}_{\{a\}}$ are naturally embedded in $\dsab$. \begin{theorem} \label{T6.2} If $b_n\subset a_n$, then the following continuous embeddings hold: \begin{equation} \eab\hookrightarrow {\mathscr M}_\hbar\bigl(\Sab\bigr),\qquad \eabp\hookrightarrow {\mathscr M}_\hbar\bigl(\sab\bigr). \label{6.5} \end{equation} \end{theorem} \begin{proof} Theorem~2 in~\cite{S2019-2} shows that the functions of $\Eba$ are pointwise multipliers for $\Sba$, the functions of $\Ebap$ are pointwise multipliers for $\sba$, and the inclusion maps \begin{equation} \Eba\to M\bigl(\Sba\bigr),\qquad \Ebap\to M\bigl(\sba\bigr), \label{6.6} \end{equation} are continuous. These maps clearly have dense ranges (and are even surjective, but we will not prove this here), and their adjoints \begin{equation} M'\bigl(\Sba\bigr) \to \dEba,\qquad M'\bigl(\sba\bigr) \to \dEbap \label{6.7} \end{equation} are continuous and injective. The compositions of the canonical inclusions $\dEba\hookrightarrow\dSba$ and $\dEbap\hookrightarrow\dsba$ with their respective maps in~\eqref{6.7} are the adjoints of the canonical inclusions $\Sba\hookrightarrow M\bigl(\Sba\bigr)$ and $\sba\hookrightarrow M\bigl(\sba\bigr)$. If $b_n\subset a_n$, then~\eqref{6.7} in combination with Theorem~\ref{T6.1} gives \begin{equation} M'\bigl(\Sba\bigr) \hookrightarrow \mathcal C_\hbar\bigl(\Sba\bigr),\qquad M'\bigl(\sba\bigr) \hookrightarrow \mathcal C_\hbar\bigl(\sba\bigr).
\label{6.8} \end{equation} After Fourier transforming we obtain \begin{equation} C'\bigl(\Sab\bigr) \hookrightarrow \mathscr M_\hbar\bigl(\Sab\bigr) ,\qquad C'\bigl(\sab\bigr) \hookrightarrow \mathscr M_\hbar\bigl(\sab\bigr). \label{6.9} \end{equation} By Corollary~\ref{C4.5}, $\eab$ is isomorphic to $C'\bigl(\Sab\bigr)$ and this isomorphism is implemented by the adjoint of the map $\mathbf i\colon u\to \tilde u$ from Theorem~\ref{T4.3}. Let $\mathbf j$ denote the natural embedding of $\Sab$ into $C\bigl(\Sab\bigr)$. The composition $\mathbf j'\circ \mathbf i'$ of the adjoint maps is precisely the natural embedding of $\eab$ into $\dSab$ because $\langle (\mathbf j'\circ\mathbf i') (h), f\rangle=\langle h, \mathbf i(\mathbf j(f))\rangle$ for any $h\in \eab$ and $f\in \Sab$ and by the definition~\eqref{4.1} we have for $u= \mathbf j (f)$ \begin{equation} \langle \mathbf i(\mathbf j (f)), h\rangle=\int\!\int f(x)h(x)f_0(\xi-x)dxd\xi =\int f(x)h(x)dx. \notag \end{equation} Similarly, the second of embeddings~\eqref{6.9}, together with Corollary~\ref{C4.5}, implies the second of embeddings~\eqref{6.5}. \end{proof} \begin{remark}\label{R6.3} Unlike $\eab$, the space $\Eab$ is not contained in $\mathscr M_\hbar\bigl(\Sab\bigr)$ and $\Eabp$ is not contained in $\mathscr M_\hbar\bigl(\sab\bigr)$. In particular, the function $e^{(2i/\hbar)p\cdot q}$, where $(p,q)$ are symplectic coordinates on $\oR^{2d}$, belongs to $\mathcal E^{\{b\}}_{(a)}(\oR^{2d})$, but does not belong to $\mathscr M_\hbar\bigl(S_{\{a\}}^{\{b\}}(\oR^{2d})\bigr)$, see Proposition~6 in~\cite{S2012-II}. \end{remark} In the important case of the Fourier-invariant spaces of type $S$, we obtain the following additional result. \begin{corollary}\label{C6.4} If $\hbar\ne0$, then $\breve{\mathcal E}^{\{a\}}_{(a)}$ is contained in ${\mathcal C}_\hbar\bigl(S_{\{a\}}^{\{a\}}\bigr)$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$ is contained in ${\mathcal C}_\hbar\bigl(S_{(a)}^{(a)}\bigr)$ with continuous inclusions. 
\end{corollary} \begin{proof} Here we use the symplectic Fourier transforms defined by \begin{equation} (F_J f)(y)\coloneq (\pi\hbar)^{-d}\int\limits_{\oR^{2d}} f(x)\,e^{-(2i/\hbar)x\cdot Jy} dx,\quad (\bar F_J f)(y)\coloneq (\pi\hbar)^{-d}\int\limits_{\oR^{2d}} f(x)\,e^{(2i/\hbar)x\cdot Jy} dx. \notag \end{equation} Performing the integration over one of the variables $x'$ or $x''$ in~\eqref{1.1}, we obtain \begin{multline} (f\star_\hbar g)(x)= (\pi\hbar)^{-d}\int\limits_{\oR^{2d}}\! (F_Jf)(x'') g(x- x'')\,e^{(2i/\hbar)x\cdot Jx''} dx''= (\pi\hbar)^{-d}\big((F_J f)\ast_{4/\hbar} g\big)(x) \\ =(\pi\hbar)^{-d}\int\limits_{\oR^{2d}}\! f (x- x') (\bar F_J g)(x') \,e^{(2i/\hbar)x'\cdot Jx} dx'= (\pi\hbar)^{-d} \big( f\ast_{4/\hbar}(\bar F_J g)\big)(x). \label{6.10} \end{multline} The maps $v\to f\star v$ and $v\to v\star g$ of $S_{\{a\}}^{\{a\}\prime}$ into itself are the adjoints of the respective continuous maps $h\to h\star f$ and $h\to g\star h$ of $S_{\{a\}}^{\{a\}}$ into itself and are therefore continuous. The twisted convolution products in~\eqref{6.10} have similar continuity properties. Furthermore, $S_{\{a\}}^{\{a\}}$ is dense in $S_{\{a\}}^{\{a\}\prime}$. Hence, it follows from~\eqref{6.10} that \begin{equation} f\star_\hbar v= (\pi\hbar)^{-d}(F_J f)\ast_{4/\hbar} v,\quad v\star_\hbar g=(\pi\hbar)^{-d} v\ast_{4/\hbar} (\bar F_J g)\quad\forall f,g\in S_{\{a\}}^{\{a\}},\, v\in S_{\{a\}}^{\{a\}\prime}. \notag \end{equation} If $v\in \breve{\mathcal E}^{\{a\}}_{(a)}$, then $f\star_\hbar v\in S_{\{a\}}^{\{a\}}$ and $v\star_\hbar g\in S_{\{a\}}^{\{a\}}$ for any $\hbar$ by Theorem~\ref{T6.2}. Because $F_J$ and $\bar F_J$ are automorphisms of $S_{\{a\}}^{\{a\}}$, we conclude that $\breve{\mathcal E}^{\{a\}}_{(a)}\subset {\mathcal C}_\hbar\bigl(S_{\{a\}}^{\{a\}}\bigr)$ for $\hbar\ne0$. The reasoning used in proving Theorem~\ref{T6.1} shows that this inclusion is continuous. The proof for the case of $S_{(a)}^{(a)}$ is entirely analogous. 
\end{proof} Theorem~\ref{T6.2} can be extended to the multiplier algebras associated with the quantization map \begin{equation} f \longmapsto \Op_{S}(f)=(2\pi)^{-d}\int_{\oR^{2d}}\! \widehat f(\zeta) e^{(i\hbar/4)\zeta\cdot S\zeta} T_\zeta^\hbar d\zeta, \label{6.11} \end{equation} where $S$ is a real symmetric matrix and $T_\zeta^\hbar$ is a Weyl system of unitary operators satisfying \begin{equation} T^\hbar_\zeta T^\hbar_{\zeta'}= e^{-(i\hbar/2)\zeta\cdot J\zeta'}T^\hbar_{\zeta+\zeta'}. \label{6.12} \end{equation} This generalization covers, in particular, various operator orderings that differ from the Weyl (totally symmetric) ordering, which corresponds to $S=0$ and for which we use the notation $\Op(f)$. The quantization map~\eqref{6.11} implies the following composition law for symbols \begin{equation} (f\star_{\hbar, S} g)(x)=(\pi\hbar)^{-2d}\int_{\oR^{4d}} f\left(x-(J+ S)x'\right)g(x-x'')e^{(2i/\hbar)x'\cdot x''}dx'dx''. \label{6.13} \end{equation} The Fourier transform converts~\eqref{6.13} into the deformed convolution \begin{equation} (\widehat f\ast_{\hbar, S}\widehat g)(\zeta)= \int_{\oR^{2d}} \widehat f(\zeta')\widehat g(\zeta-\zeta')e^{-(i\hbar/2)\zeta'\cdot (J+ S)(\zeta-\zeta')}d\zeta' \label{6.14} \end{equation} (multiplied by $(2\pi)^{-d}$). It follows from the definition~\eqref{6.11} and relation~\eqref{6.12} that \begin{equation} \Op_{S}(f\star_{\hbar,S} g)=\Op_{S}(f)\Op_{S}(g), \notag \end{equation} as can be verified by using the symmetry of the matrix $S$ and the antisymmetry of $J$; see, e.g., Sect.~4 in~\cite{S2019} for more details. Let $j_S$ be the operator of multiplication by $e^{(i\hbar/4)\zeta\cdot S\zeta}$. It is easy to see that the deformed convolution~\eqref{6.14} and the twisted convolution~\eqref{5.1} are connected by the relation \begin{equation} j_S(g_1\ast_{\hbar, S} g_2)= j_S(g_1)\ast_\hbar j_S(g_2).
\notag \end{equation} As already noted above, Theorem~1 in~\cite{S2019} shows that under the condition $b_n\subset a_n$, the function $e^{(i\hbar/4)\zeta\cdot S\zeta}$ is a pointwise multiplier for $\Sba$ and for $\sba$. Hence these spaces are algebras under the deformed convolution~\eqref{6.14} and the map $g(\zeta)\to e^{(i\hbar/4)\zeta\cdot S\zeta} g(\zeta)$ is an algebraic and topological isomorphism of $\bigl(\Sba,\ast_{\hbar, S}\bigr)$ onto $\bigl(\Sba,\ast_\hbar\bigr)$ and of $\bigl(\sba,\ast_{\hbar, S}\bigr)$ onto $\bigl(\sba,\ast_\hbar\bigr)$. The spaces of multipliers with respect to the products $\ast_{\hbar, S}$ and $\star_{\hbar, S}$ are defined in complete analogy with~\eqref{5.4}--\eqref{5.7}, \eqref{5.13}, \eqref{5.14}, and we obtain another corollary of Theorem~\ref{T6.2}. \begin{corollary}\label{C6.5} Let $b_n\subset a_n$, let $S$ be a real symmetric matrix, and let ${\mM}^S_{\hbar}\bigl(\Sab\bigr)$ and ${\mM}^S_{\hbar}\bigl(\sab\bigr)$ be, respectively, the algebras of two-sided multipliers for $\bigl(\Sab, \star_{\hbar, S}\bigr)$ and $\bigl(\sab, \star_{\hbar, S}\bigr)$, where $\star_{\hbar, S}$ is defined by~\eqref{6.13}. Then we have the continuous embeddings \begin{equation} \eab\hookrightarrow {\mM}^S_\hbar\bigl(\Sab\bigr),\qquad \eabp\hookrightarrow {\mM}^S_\hbar\bigl(\sab\bigr). \label{6.15} \end{equation} \end{corollary} \begin{proof} Since $\mathcal C^S_\hbar\bigl(\Sba\bigr)$ and $\mathcal C^S_\hbar\bigl(\sba\bigr)$ are obtained from the algebras~\eqref{5.14} by multi\-plication by $e^{-(i\hbar/4)\zeta\cdot S\zeta}$ and the spaces $M'\bigl(\Sba\bigr)$ and $M'\bigl(\sba\bigr)$ are invariant under this operation, it follows from~\eqref{6.8} that \begin{equation} M'\bigl(\Sba\bigr) \hookrightarrow \mathcal C^S_\hbar\bigl(\Sba\bigr) ,\qquad M'\bigl(\sba\bigr) \hookrightarrow \mathcal C^S_\hbar\bigl(\sba\bigr). \notag \end{equation} After Fourier transforming and applying Corollary~\ref{C4.5}, we obtain~\eqref{6.15}. 
\end{proof} \section{Concluding remarks} \label{S7} The quantization map~\eqref{6.11} extends uniquely to a continuous bijection from $S_{\{a\}}^{\{a\}\prime}(\oR^{2d})$ onto the space $\mathcal L\bigl(S_{\{a\}}^{\{a\}}(\oR^d), S_{\{a\}}^{\{a\}\prime}(\oR^d)\bigr)$, as well as to a continuous bijection from $S_{(a)}^{(a)\prime}(\oR^{2d})$ onto $\mathcal L\bigl(S_{(a)}^{(a)}(\oR^d),S_{(a)}^{(a)\prime}(\oR^d)\bigr)$ (for a proof, see Theorem~2 in~\cite{S2012-II}). These extensions are analogous to the extension of the Weyl map to tempered distributions in~\cite{Fol,H3}. In this way $\Op_S(u)$ is well defined for any $u\in S_{\{a\}}^{\{a\}\prime}(\oR^{2d})$ as a continuous linear map of $S_{\{a\}}^{\{a\}}(\oR^d)$ into $S_{\{a\}}^{\{a\}\prime}(\oR^d)$ and coincides with the operator whose Weyl symbol is $F^{-1}\bigl[\hat u e^{-(i\hbar/4)\zeta\cdot S\zeta}\bigr]$. It follows directly from the definitions that the algebra ${\mM}^S_\hbar \bigl(S_{\{a\}}^{\{a\}}(\oR^{2d})\bigr)$ is transformed by the extended map~\eqref{6.11} into the same set of operators as ${\mM}_\hbar\bigl(S_{\{a\}}^{\{a\}}(\oR^{2d})\bigr)$ is by the Weyl map. In addition, Corollary~\ref{C4.6} implies that the image of $\breve{\mathcal E}^{\{a\}}_{(a)}(\oR^{2d})$ under the map $u\to \Op_S(u)$ is the same as its image under the Weyl map. Analogous statements are true regarding ${\mM}^S_\hbar\bigl(S_{(a)}^{(a)}(\oR^{2d})\bigr)$ and $\breve{\mathcal E}^{(a)}_{\{a\}}(\oR^{2d})$. Theorem~3 in~\cite{S2012-II} shows that the Weyl map transforms the algebra ${\mM}_{\hbar, L}\left(S^\alpha_\alpha(\oR^{2d})\right)$ of left Moyal multipliers for $S^\alpha_\alpha(\oR^{2d})$ into the algebra of operators mapping $S^\alpha_\alpha(\oR^{d})$ continuously into itself.
In~\cite{S2020}, this result is extended to the general case of spaces $S_{\{a\}}^{\{a\}}$ and $S_{(a)}^{(a)}$, and it implies, in particular, that the pseudodifferential operators whose Weyl symbols belong to $\breve{\mathcal E}^{\{a\}}_{(a)}$ are continuous on $S_{\{a\}}^{\{a\}}$ and the operators with Weyl symbols in $\breve{\mathcal E}^{(a)}_{\{a\}}$ are continuous on $S_{(a)}^{(a)}$. Pseudodifferential operators with symbols in the spaces $\Gamma^\infty_s$ and $\Gamma^\infty_{0,s}$, which coincide with $\breve{\mathcal E}^{\{a\}}_{(a)}$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$ for $a_n=n^{sn}$, were studied by Cappiello and Toft in~\cite{CT}. Their continuity properties are proved there by a different method based on using modulation spaces and the short time Fourier transform. Similar results on the continuity properties of pseudodifferential operators in the Gelfand-Shilov setting were also derived in another way by Prangoski~\cite{P}, but for slightly smaller symbol classes than $\breve{\mathcal E}^{\{a\}}_{(a)}$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$. It follows from the above that the continuity properties of operators obtained from $\breve{\mathcal E}^{\{a\}}_{(a)}$ and $\breve{\mathcal E}^{(a)}_{\{a\}}$ by applying the map~\eqref{6.11} with $S\ne 0$ are the same as those of operators obtained by applying the Weyl map and do not require a separate consideration. Besides the spaces~\eqref{2.15} and~\eqref{2.16}, Palamodov introduced in~\cite{P1962} two more classes of spaces, which in our notation are defined by $\mathcal E^{(b)}_{(a)}=\bigcap_{A\to\infty,B\to0}\mathcal E^{b,B}_{a,A}$ and $\mathcal E^{\{b\}}_{\{a\}}=\bigcup_{A\to0,B\to\infty}\mathcal E^{b,B}_{a,A}$. The dual of $\mathcal E^{(b)}_{(a)}$ is the space of convolutors for $S^{(b)}_{\{a\}}=\bigcap_{B\to0}\bigcup_{A\to\infty}S^{b,B}_{a,A}$ and the dual of $\mathcal E^{\{b\}}_{\{a\}}$ is the space of convolutors for $S^{\{b\}}_{(a)}=\bigcap_{A\to0}\bigcup_{B\to\infty}S^{b,B}_{a,A}$. 
We note that the symbol class denoted in~\cite{CT} by $\Gamma^\infty_{1,s}$ is, in our notation, $\mathcal E^{\{a\}}_{\{a\}}$ for $a_n=n^{sn}$. It is clear that $\mathcal E^{(b)}_{(a)}\subset \eab\bigcap\eabp$. Therefore, the pseudodifferential operators with symbols in $\mathcal E^{(a)}_{(a)}(\oR^{2d})$ are continuous from $S^{\{a\}}_{\{a\}}(\oR^d)$ to $S^{\{a\}}_{\{a\}}(\oR^d)$ and from $S^{(a)}_{(a)}(\oR^d)$ to $S^{(a)}_{(a)}(\oR^d)$. In the case of symbols in $\mathcal E^{\{a\}}_{\{a\}}(\oR^{2d})$, the situation is completely different because $S_{(a)}^{\{a\}}(\oR^{2d})$ is not closed under the Weyl-Moyal product. Since $\mathcal E^{\{a\}}_{\{a\}}\subset S^{(a)\prime}_{(a)}$, its corresponding operators are well defined as elements of $\mathcal L\bigl(S_{(a)}^{(a)}(\oR^d),S_{(a)}^{(a)\prime}(\oR^d)\bigr)$, but $\mathcal E^{\{a\}}_{\{a\}}(\oR^{2d})$ includes functions $u$ such that the image of $S_{(a)}^{(a)}(\oR^d)$ under $\Op(u)$ is not contained in $L^2(\oR^d)$.
\section{Acknowledgements} We would like to thank H. Yoon and M. Han for fruitful discussions of the ML process. This work was supported by the Ministry of Science through NRF-2018R1D1A1B07048139 (Hunpyo Lee), and the Ministry of Education, Science and Technology through NRF-2020R1I1A1A01071535 and POSTECH (Taegeun Song). RV acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through TRR 288 - 422213477 (project B05).
\section{Introduction} \label{sec:introduction} \IEEEPARstart{I}{mage} registration is the task of finding the correspondences across two or more images. It is often used to tackle problems in the fields of medical imaging, remote sensing, etc. For instance, when we want to analyze how the anatomy of a patient's body part changes over time, we need snapshots of it over time. Not only could the source camera change; the location and orientation of the camera, as well as the anatomy of the patient, are variables that could change over time. In such scenarios, a comprehensive analysis becomes difficult, and hence registration becomes a prerequisite before any further analysis can be done. In some cases, different cameras might be used to capture complementary information about the same organ at a given time. In such a case, the information from the different sources needs to be registered, and such a task is called multi-modality image registration. This is often done to provide a more holistic analysis of a subject. For example, MRI (Magnetic Resonance Imaging) scans could be conducted at 1.5T or 3T, with T (Tesla) specifying the strength of the magnet used in the MRI machine. The body's tissues, muscles, fats, etc. all react differently to differing MRI exposures, and this often helps to provide complementary information when multiple imaging sources are used. There are multiple approaches to image registration; they may be broadly classified into two families based on how the registration parameters are obtained: learning based and optimization based. For instance, Cheng et al. \cite{cheng2018deep} train a binary classifier to learn the correspondence of two image patches, and the classification output is transformed into a continuous probability value, which is then used as the similarity score. Sedghi et al. 
\cite{sedghi2018semi} have a similar approach, but rather than needing well-aligned training data, they propose a strategy to learn a deep similarity metric from roughly aligned training data. The benefit of such learning based approaches is that, once trained on a dataset, inference is quite fast. On the other hand, the drawback of learning based approaches is that they need large amounts of training data to achieve satisfactory results, and furthermore, they will not perform well on pairs which are drastically different from the training set. In this work, we utilize an optimization based approach, where the running time may be longer than for the learning based methods, but it is accurate and does not require any training data. Often the model used to parametrize the registration is fashioned in a hierarchical manner; i.e., first a global transformation using homography (or its subsets) is used to register the images as much as possible before applying more elaborate techniques such as deformable registration. Thus, the success of deformable registration methods is highly dependent on how successful the initial registration step was. Our approach is based on optimization to solve for this initial global homographic transform. Even though several software toolboxes exist for optimization-based image registration using the family of homography transformations, we believe that opportunities still exist for improvement. In this work, we point out that representing transformation matrices using a matrix exponential, especially a complex matrix exponential (CME), leads to faster convergence. CME enjoys a theoretical guarantee that repeated compositions of the matrix exponential are not required during optimization, unlike in the real case. Furthermore, using a matrix exponential, both the forward and the reverse transformations can be easily added to the registration objective function for a robust design. 
In this work, we also point out that a precise design of the transformation matrix is possible for multi-resolution image registration using a dynamical system modeled by a neural network. This dynamical system leads to an initial value ordinary differential equation (ODE) that can adapt a transformation matrix quite accurately to the multi-resolution image pyramids, which are significant for image registration. Our ODE-based framework leads to a more accurate image registration algorithm. Using the aforementioned two elements, ODE and CME, we present a novel multi-resolution image registration algorithm, ODECME, that can accommodate both 2D and 3D image registration, mono-modal and multi-modal \cite{MI_survey} cases, and any differentiable loss or objective function including MINE (mutual information neural estimation) \cite{belghazi2018mine}. Our implementation uses PyTorch \cite{NEURIPS2019_9015}, which provides GPU acceleration and automatic gradient computation. Experiments on four publicly available benchmark datasets demonstrate new state-of-the-art performance for ODECME. \section{Background} \subsection{Matrix Exponential for Image Registration} Optimization-based image registration retrieves a transformation matrix $H$ (e.g., homography, affine, rigid body, similarity, etc.) that warps a moving image $M$ to the template image $T$ by optimizing a cost function $D$: \begin{equation} \min_H D(T,Warp(M,H)). \label{eqn:opt} \end{equation} For a differentiable loss function $D$ and a differentiable $Warp$ program, gradient descent can minimize (\ref{eqn:opt}). Representing the transformation matrix $H$ by a matrix exponential \cite{schroter2010lie, Nan2020} offers several advantages; e.g., a rigid body transformation matrix can be implicitly represented without any explicit constraints on the elements of $H,$ making the optimization unconstrained. 
Using the matrix exponential, a transformation matrix $H$ is represented by the exponential map, or a number of compositions of such maps, from suitable matrix Lie algebras to the corresponding matrix Lie groups, such as \textit{SO(3)}, \textit{SE(2)}, etc. \cite{taylor1994minimization, trouve1998diffeomorphisms}. Wachinger \& Navab \cite{wachinger2012simultaneous} show that spatial transformations represented by matrix exponentials help because unconstrained optimization can be performed over 3D rigid transformations. Among more recent works, data representations in orientation scores as a function on the Lie group \textit{SE(2)} have been used for template matching \cite{bekkers2017template} with cross-correlation. As a concrete example, to represent the 2D affine transformations, the \textit{Aff(2)} group, the following six generators are used in the Lie algebra \cite{Nan2020}: \begin{equation*} \begin{split} & B_1= \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, B_2= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\\ \end{bmatrix}, B_3= \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, \\ & B_4= \begin{bmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, B_5= \begin{bmatrix} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0\\ \end{bmatrix}, B_6= \begin{bmatrix} 0 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 1\\ \end{bmatrix}. \end{split} \end{equation*} Using these six generators, an affine transformation matrix can be expressed as $Mexp(\sum_{i=1}^6v_i B_i)$, where $v=[v_1,v_2,...,v_6]$ is a parameter/coefficient vector. $Mexp$ is the matrix exponentiation operation that can be computed by the power series on matrix $B$ \cite{Hall2015}, \begin{equation} Mexp(B) = \sum_{n=0}^{\infty} \frac{B^n}{n!}, \label{eqn:mat_exp} \end{equation} which can be truncated after a few terms (e.g., 10) for an accurate enough representation of a transformation matrix \cite{Nan2020}. 
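As an illustration, the truncated series (\ref{eqn:mat_exp}) together with the six \textit{Aff(2)} generators above can be sketched in a few lines (a minimal NumPy sketch with our own helper names; our actual implementation uses PyTorch):

```python
import numpy as np

# The six Aff(2) generators B_1..B_6 listed above.
B = np.zeros((6, 3, 3))
B[0, 0, 2] = 1                      # B_1: x-translation
B[1, 1, 2] = 1                      # B_2: y-translation
B[2, 0, 1] = 1                      # B_3: shear
B[3, 1, 0] = 1                      # B_4: shear
B[4, 0, 0], B[4, 1, 1] = 1, -1      # B_5: anisotropic scaling
B[5, 1, 1], B[5, 2, 2] = -1, 1      # B_6

def mexp(A, n_terms=10):
    """Truncated power series Mexp(A) = sum_{n} A^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, n_terms):
        term = term @ A / n          # builds A^n / n! incrementally
        result = result + term
    return result

v = np.array([0.3, -0.1, 0.05, 0.0, 0.02, 0.01])   # example coefficients
H = mexp(np.tensordot(v, B, axes=1))               # affine transformation matrix
```

Because $B_1$ and $B_2$ are nilpotent, a pure-translation coefficient vector reproduces a translation matrix exactly even under truncation.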
A more sophisticated algorithm can also be applied for matrix exponential computation \cite{mat_exp}, as long as it is easily differentiable. Using the matrix exponential representation for a transformation matrix, $H=Mexp(\sum_i v_i B_i)$, the image registration optimization (\ref{eqn:opt}) takes the following form: \begin{equation} \begin{split} \min_{v_1,v_2,...} & D(T,Warp(M,Mexp(\sum_i v_i B_i))) + \\ & D(M,Warp(T,Mexp(-\sum_i v_i B_i))), \end{split} \label{eqn:opt_me} \end{equation} where we have also added a cost for registering template $T$ to moving image $M,$ making the optimization more robust. The symmetric objective function denotes a clear advantage of matrix exponential, where the inverse transform can be easily added to the differentiable cost function. Thus, we can apply gradient descent by automatic differentiation (i.e., chain rule) to adjust parameters $v_i$. For 3D data, we can have the \textit{SE(3)} and \textit{Sim(3)} groups. The \textit{SE(3)} group represents all 3D rigid transformations, i.e. it has six degrees of freedom, which are the three axes of rotation and three directions of translation. The six generators \cite{eade2013lie} are: \begin{equation*} \begin{split} & B_1= \begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}, B_2= \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}, \\ & B_3= \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ \end{bmatrix}, B_4= \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix},\\ & B_5= \begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0\\ -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}, B_6= \begin{bmatrix} 0 & -1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix}. \end{split} \end{equation*} The similarity group \textit{Sim(3)} adds another degree of scaling to 3D rigid transformations \textit{SE(3)}. 
Their generators are the same except for an additional one: \begin{equation*} B_7= \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -1\\ \end{bmatrix}. \end{equation*} \subsection{Multi-resolution Computation} Literature on scale-space \cite{Witkin, Lindeberg} has shown that objects and edges have intrinsic scales in an image. The Gaussian pyramid has become a standard, discrete method for capturing the continuous scale-space of an image. Image pyramid-based computations are routinely used for motion estimation \cite{szeliski2004image} and can drastically reduce computations by means of a hierarchical search beginning at the top of the image pyramid and ending at the bottom of the pyramid, i.e., the original resolution of the image. Optimization-based optical flow computation also adopts this multi-resolution technique \cite{Ray2011}. Image registration methods have also adopted multi-resolution image pyramids \cite{thevenaz1998pyramid, kruger1998image, alhichri2002multi} that have shown better convergence and accuracy for gradient-based optimizations. Image resolution and registration costs, such as mutual information, interact in complex ways. Irani and Anandan \cite{irani1998robust} have shown that the effectiveness of mutual information decreases as one moves towards coarser resolutions. Wu and Chung \cite{wu2004multimodal} have combined mutual information and the sum of absolute differences (SAD) for multi-modal and multi-resolution registration. Sun et al. \cite{sun2013simultaneous} have shown that instead of computing transformations at the coarsest level of the pyramid and propagating them towards the finer levels, one can use all the pyramid levels simultaneously for better convergence and accuracy. In our recent study \cite{Nan2020}, we have also noted that adding the registration cost simultaneously for all pyramid levels and optimizing the combined cost function is more beneficial. 
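A pyramid of the kind used below can be built by repeated smoothing and downsampling; as a minimal sketch (our own helper names, with plain $2\times2$ average pooling standing in for Gaussian smoothing followed by decimation):

```python
import numpy as np

def downsample(img, d=2):
    """Average-pool a 2D array by a factor d (stand-in for blur + decimate)."""
    h = (img.shape[0] // d) * d
    w = (img.shape[1] // d) * d
    return img[:h, :w].reshape(h // d, d, w // d, d).mean(axis=(1, 3))

def build_pyramid(img, L=6):
    """Return [M_1, ..., M_L], from the original resolution to the coarsest."""
    levels = [img]
    for _ in range(L - 1):
        levels.append(downsample(levels[-1]))
    return levels

pyramid = build_pyramid(np.random.rand(256, 256), L=6)
# spatial sizes per level: 256, 128, 64, 32, 16, 8
```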
The registration optimization problem (\ref{eqn:opt_me}) using the multi-resolution approach takes the following form: \begin{equation} \begin{split} \min_{v_1,v_2,...} \sum_{l=1}^L & \{D(T_l,Warp(M_l,Mexp(\sum_i v_i B_i))) + \\ & D(M_l,Warp(T_l,Mexp(-\sum_i v_i B_i)))\}, \end{split} \label{eqn:opt_me_mr} \end{equation} where $T_l$ and $M_l$ for $l=1,...,L,$ are two image pyramids, with $L$ being the coarsest/maximum level in the pyramid. $T_1=T$ and $M_1=M$ are the original template and moving images, respectively. This formulation assumes that the image interpolation $Warp$ and the transformation matrices, $Mexp(\sum_i v_i B_i)$ and $Mexp(-\sum_i v_i B_i)$, use the same range of pixel coordinates, such as the canonical range, $[-1,1]\times[-1,1]$ for all resolutions. \subsection{Image Registration Metrics} For gradient descent (or ascent)-based optimization, image registration requires a differentiable loss/cost/objective function. Mean squared error (MSE) \cite{MSE} and normalized cross correlation (NCC) \cite{NCC} are two widely used cost functions. While MSE is not suitable for multi-modal image registration, NCC can often serve as an objective here. More specialized measures for multi-modal image registration include mutual information (MI) \cite{JHMI} and normalized mutual information (NMI) \cite{NMI}. However, not all forms of MI are easily differentiable, and for multi-channel images, MI computation may not be trivial. Recently, a differentiable form of mutual information called mutual information neural estimation (MINE) \cite{belghazi2018mine} has proven quite successful for image registration \cite{Nan2020}. MINE between two images $P$ and $Q$ is defined as follows \cite{belghazi2018mine}: \begin{equation} \begin{split} MINE(P,Q)= & \frac{1}{N}\sum_{i}f_\theta(P_{I_i}, Q_{I_i}) - \\ & log(\frac{1}{N}\sum_{i}exp(f_\theta(P_{I_i}, Q_{I^{rp}_i}))), \end{split} \label{eqn:MINE} \end{equation} where $I$ denotes a randomly sampled set of $N$ pixel locations. 
The set $I^{rp}$ denotes a random permutation of the set $I.$ $I_i$ and $I^{rp}_i$ denote the $i^\text{th}$ element (i.e., pixel location) of the sets $I$ and $I^{rp},$ respectively. $f_\theta$ is a fully connected neural network with parameters $\theta$ \cite{Nan2020}. \section{Proposed Method} \subsection{ODE for Multi-resolution Image Registration} Image structures are slightly shifted across the levels of multi-resolution Gaussian image pyramids \cite{Witkin}. So, a transformation matrix suitable for a coarse resolution may need a slight correction when used for a finer resolution. To mitigate this issue, we model the matrix exponential parameters as a continuous function $v(s)$ of resolution $s.$ The change in $v(s)$ over resolution $s$ can be modeled by a neural network $g_\phi$ with parameters $\phi$: \begin{equation} \frac{dv(s)}{ds} = g_\phi(s,v(s)). \label{eq:ode} \end{equation} Using the Euler method \cite{Butcher2016}, the ordinary differential equation (ODE) (\ref{eq:ode}) can be solved for all resolution levels $1, 2, ..., L$: \begin{equation} \begin{split} v_L & = u\\ \text{for}~ l&=L-1,L-2,\cdots,1\\ v_l & = v_{l+1} + (s_l-s_{l+1})g_\phi(s_{l+1},v_{l+1}), \end{split} \label{eq:euler} \end{equation} where $v_l = [v_{l,1},v_{l,2},...]$ are the matrix exponential coefficients for resolution level $l$ and $s_l = d^{-l+1},~l=1,2,..,L$ denote the discrete resolutions in powers of the downscale factor $d$. 
$u=[u_1,u_2,...]$ is the initial value vector in the ODE and it is an optimizable parameter of the model along with the neural network parameters $\phi.$ We have also used 4-point Runge-Kutta method (RK4) \cite{Butcher2016} for the above recursion: \begin{equation} \begin{split} v_L & = u\\ \text{for}~ l&=L-1,L-2,\cdots,1\\ h & = s_l-s_{l+1}, \\ k_1 & = hg_\phi(s_{l+1},v_{l+1}),\\ k_2 & = h g_\phi(s_{l+1}+\tfrac{1}{3}h,v_{l+1}+\tfrac{1}{3}k_1),\\ k_3 & = h g_\phi(s_{l+1}+\tfrac{2}{3}h,v_{l+1}-\tfrac{1}{3}k_1+k_2),\\ k_4 & = h g_\phi(s_l,v_{l+1}+k_1-k_2+k_3),\\ v_l &= v_{l+1} + \tfrac{1}{8}(k_1+3k_2+3k_3+k_4). \end{split} \label{eq:RK4} \end{equation} Generating matrix exponential coefficients $v_l,~l=1,2,..,L$ by the ODE solution (\ref{eq:euler}) or (\ref{eq:RK4}), the optimization for image registration (\ref{eqn:opt_me_mr}) using mutual information (\ref{eqn:MINE}) now becomes: \begin{equation} \begin{split} \max_{\substack{u_1,u_2, \cdots \\ \phi, \theta}} \sum_{l=1}^L & \{MINE(T_l,Warp(M_l,Mexp(\sum_i v_{l,i} B_i)))+\\ & MINE(M_l,Warp(T_l,Mexp(-\sum_i v_{l,i} B_i)))\}. \end{split} \label{eqn:opt_me_mr2} \end{equation} The autograd feature of modern packages (e.g., PyTorch, Tensorflow) can easily work through the Euler or RK4 recursions for the optimization (\ref{eqn:opt_me_mr2}). Fig. \ref{fig:imag_ode} shows the adaptation of eight coefficients of complex matrix exponential over six resolution levels (0 being the original resolution). \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/coefficients.jpg} \caption{Adaptation of matrix exponential coefficients over six levels for a registered image pair from ANHIR dataset. 
Level 5 is the coarsest resolution in the pyramid.} \label{fig:imag_ode} \end{figure} \subsection{Complex Matrix Exponential} It is well known that the exponential of a real-valued matrix is not globally surjective, i.e., not all transformation matrices (affine or homography) can be obtained by the exponential of real-valued matrices \cite{Gallier2020}. One way to overcome this issue is to compose the matrix exponential a few times to compute the transformation matrix. In this work, we propose to use the complex matrix exponential as an alternative to the composition-based scheme, because the complex matrix exponential is globally surjective \cite{Gallier2020}. Thus, a complex matrix, $B^r+\sqrt{-1}B^i = \sum_i v_i B_i$, produced by complex parameters, $v_i = v_i^r+\sqrt{-1}v_i^i$, can use the matrix exponential series (\ref{eqn:mat_exp}) to create a complex transformation matrix, \begin{equation} H^r + \sqrt{-1} H^i = Mexp(B^r + \sqrt{-1} B^i). \label{eq:cme} \end{equation} Next, we choose to transform a point $(x,y)$ to another point $(x^\prime,y^\prime)$ using the following: \begin{equation} \begin{split} [x^r,y^r,z^r]^T = H^r [x,y,1]^T,~ & [x^i,y^i,z^i]^T = H^i [x,y,1]^T,\\ x^\prime = \frac{x^r z^r + x^i z^i}{(z^r)^2+(z^i)^2},~ & y^\prime = \frac{y^r z^r + y^i z^i}{(z^r)^2+(z^i)^2}. \end{split} \label{eq:complex_hom} \end{equation} Note that under our chosen transformation (\ref{eq:complex_hom}), straight lines are not guaranteed to remain straight. However, if $H^i=0$, transformation (\ref{eq:complex_hom}) degenerates to a linear transformation in homogeneous coordinates. Fig. \ref{fig:grids} shows four randomly generated grids using (\ref{eq:cme}) and (\ref{eq:complex_hom}). When the imaginary coefficients are zero (top-left panel, where $B^i=0$ and consequently $H^i=0$), the transformation acts as a homography, whereas the degree of non-linearity in the transformation increases as the magnitude of $B^i$ increases. 
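A numerical sketch of (\ref{eq:cme}) and (\ref{eq:complex_hom}) follows (NumPy; helper names are ours, and the truncated series stands in for a full matrix exponential routine):

```python
import numpy as np

def mexp(A, n_terms=15):
    """Truncated power series Mexp(A), valid for real or complex A."""
    out = np.eye(3, dtype=complex)
    term = np.eye(3, dtype=complex)
    for n in range(1, n_terms):
        term = term @ A / n
        out = out + term
    return out

def complex_warp(points, B_real, B_imag):
    """Map Nx2 points through H^r + i H^i = Mexp(B^r + i B^i)."""
    H = mexp(B_real + 1j * B_imag)
    Hr, Hi = H.real, H.imag
    P = np.concatenate([points, np.ones((len(points), 1))], axis=1).T
    xr, yr, zr = Hr @ P              # real part of the lifted coordinates
    xi, yi, zi = Hi @ P              # imaginary part
    denom = zr**2 + zi**2
    return np.stack([(xr * zr + xi * zi) / denom,
                     (yr * zr + yi * zi) / denom], axis=1)

pts = np.array([[0.0, 0.0], [0.5, -0.5]])
# With B_imag = 0 this reduces to an ordinary homography warp.
```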
Unlike a 2D Mobius transformation \cite{Kisil2012}, the proposed complex transformation does not guarantee the absence of self-intersections. However, note that the 2D Mobius transformation is more restrictive; for example, it cannot generate a perspective transformation. \begin{figure}[h] \centering \includegraphics[scale=0.2]{images/grids.jpg} \caption{Randomly generated grids by the complex matrix exponential (\ref{eq:cme}) and complex transformation (\ref{eq:complex_hom}). Elements of $B^r$ were generated by a zero-mean Gaussian with 0.1 standard deviation (SD) for all four panels. Elements of $B^i$ were generated by a zero-mean Gaussian with SD as follows: 0 for the top-left, 0.1 for the top-right, 0.2 for the bottom-left and 0.3 for the bottom-right panel.} \label{fig:grids} \end{figure} \subsection{ODECME Algorithm} Combining the aforementioned two elements, the ordinary differential equation (ODE) and the complex matrix exponential (CME), our proposed Algorithm \ref{alg:ODECME} (ODECME) first builds two image pyramids, one for the fixed and another for the moving image. It then computes the Euler recursion for the ODE-based computation of the CME coefficients. Alternatively, we have also used the RK4 recursion (\ref{eq:RK4}) in our experiments. Note also that ``Mexp'' may refer to the real or complex matrix exponential, depending on whether $u$ and $v_l$ are complex or real. Also, ``MINE'' can be replaced by any differentiable loss for image registration. $f_\theta$ (refer to (\ref{eqn:MINE})) is a fully connected neural network \cite{Nan2020}. $g_\phi$ is also a fully connected neural network, appearing in (\ref{eq:euler}) and (\ref{eq:RK4}). Taking advantage of the matrix exponential, we use a symmetric loss, which uses both the forward and the inverse transformation matrices. For any gradient computation, such as $\nabla_{\theta}MI$ or $\nabla_{\phi}MI$, we use the autograd (built-in optimizers) of PyTorch \cite{NEURIPS2019_9015}. 
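For concreteness, the Euler recursion (\ref{eq:euler}) that generates the per-level coefficients can be sketched as follows, with a tiny randomly initialized MLP standing in for $g_\phi$ (a NumPy illustration with our own variable names and sizes, not our PyTorch implementation):

```python
import numpy as np

# Tiny stand-in for the network g_phi: input [s, v_1..v_6], output dv/ds.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(100, 7))
b1 = np.zeros(100)
W2 = rng.normal(scale=0.01, size=(6, 100))
b2 = np.zeros(6)

def g_phi(s, v):
    x = np.concatenate(([s], v))
    h = np.maximum(W1 @ x + b1, 0.0)         # ReLU hidden layer
    return W2 @ h + b2

def coefficients(u, L=6, d=2):
    """v_L = u at the coarsest level, then Euler steps toward finer levels."""
    s = [d ** (-l + 1) for l in range(1, L + 1)]   # s_l = d^(-l+1)
    v = {L: np.asarray(u, dtype=float)}
    for l in range(L - 1, 0, -1):
        v[l] = v[l + 1] + (s[l - 1] - s[l]) * g_phi(s[l], v[l + 1])
    return v

v_levels = coefficients(u=np.zeros(6))       # one coefficient vector per level
```

During optimization, $u$, $\phi$, and $\theta$ would all be updated by gradient ascent on the summed MINE objective.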
Algorithm \ref{alg:ODECME} finally outputs the original-resolution transformation matrix and its inverse. \begin{algorithm}[h] \SetAlgoLined Build multiresolution image pyramids $\{T_l,M_l\}_{l=1}^{L}$ \; Set learning rates $\alpha$, $\beta$ and $\gamma$\; Use random initialization for $\theta$ and $\phi$ \; Initialize $u$ to the 0 vector \; \For {each iteration}{ $v_L = u$ \; $H_L = Mexp(\sum_i v_{L,i} B_i)$ \; $H_L^{-1} = Mexp(-\sum_i v_{L,i} B_i)$ \; \For {$l = [L-1,..,1]$}{ $v_l = v_{l+1} + (s_l-s_{l+1})g_\phi(s_{l+1},v_{l+1})$ \; $H_l = Mexp(\sum_i v_{l,i} B_i)$ \; $H_l^{-1} = Mexp(-\sum_i v_{l,i} B_i)$ \; } $MI = 0$ \; \For {$l = [1,L]$}{ $ MI \mathrel{+}= MINE(T_l, Warp(M_l, H_l))$ \; $ MI \mathrel{+}= MINE(M_l, Warp(T_l, H_l^{-1}))$ \; } Update parameter: $\theta \mathrel{+}= \alpha \nabla_{\theta} MI$ \; Update parameter: $\phi \mathrel{+}= \beta \nabla_{\phi} MI$ \; Update parameter: $u \mathrel{+}= \gamma \nabla_u MI$ \; } Compute final transformation matrices:\\ $v_L = u$ \; \For {$l = [L-1,..,1]$}{ $v_l = v_{l+1} + (s_l-s_{l+1})g_\phi(s_{l+1},v_{l+1})$ \; } $H_1 = Mexp(\sum_i v_{1,i} B_i)$ \; $H_1^{-1} = Mexp(-\sum_i v_{1,i} B_i)$\; \caption{ODECME} \label{alg:ODECME} \end{algorithm} \section{Datasets for Experiments} In order to evaluate our algorithm, we choose four datasets; two of them contain 2D images: FIRE \cite{hernandez2017fire} and ANHIR \cite{ANHIR}, and the other two contain 3D volumes: IXI \cite{IXI} and ADNI \cite{wyman2013standardization}. FIRE and IXI are used to perform mono-modal registration, while ANHIR and ADNI are used for multi-modal registration. \subsection{FIRE} The FIRE dataset consists of 134 retinal fundus image pairs. These pairs are classified into three categories depending on the purpose they were collected for: S, P (Mosaicing) and A. 
Of these, for Category P pairs having $<75\%$ overlap, the registration optimization diverges in many cases, and hence we leave out this subset of images in our experiments and use only Categories S and A. The FIRE dataset provides ground truth in the form of coordinates of 10 corresponding points between the fixed and the moving image. Also, while the images are square in shape, the retinal fundus is circular, and hence the gap between the edges of the fundus and the image border is quite large. So, we crop the central portion of the image to include only the fundus ($1941 \times 1941$ pixels). For evaluating registration accuracy, we compute the Euclidean distances between these corresponding points after registration and average them. The image coordinates are also scaled between 0 and 1 so that images of different sizes can be compared using the same benchmark. We call this measure the Normalized Average Euclidean Distance (NAED). Most competing methods do not support a homography based registration model, so an affine model was used to be consistent across all methods. \subsection{ANHIR} The ANHIR dataset provides high-resolution histopathological tissue images stained with different dyes. This provides a multi-modal challenge. The ground truth is provided in a format similar to the FIRE dataset. We use only the training set (230 pairs) provided in the database, since only these pairs have the ground truth available. The ANHIR dataset has some very large resolution images (up to $100k \times 200k$ pixels). Some of the competing registration frameworks were unable to process such large images, so we downscaled every image by a factor of 5 to make them available to every framework. Furthermore, each staining can have a different resolution; to remedy this, we rescale the image with the smaller aspect ratio to match the width of the paired image, and then its height is padded to match the other image as well. 
This preprocessing is consistent across all algorithms and allows us to maintain the aspect ratios of the individual images and have both images in a pair at the same resolution. We use NAED as the evaluation metric here as well and use an affine model for transformation. \subsection{IXI} The IXI dataset has about 600 MR images from healthy patients. It includes T1, T2, PD-weighted images, MRA images, and Diffusion-weighted images. Unfortunately, the IXI dataset does not come with any form of ground truth, so we resort to standard measures \cite{ghosal2017deep} for registration accuracy such as SSIM and PSNR. We choose 51 T1 volumes (at random) which have the same size and designate one as the Atlas (reference volume) and register the other 50 volumes against it. The SSIM and PSNR scores are computed after every registration with the Atlas and averaged and then reported for each algorithm. The transformation model used has 7 degrees of freedom: isotropic scaling with three axes of rigid transformation and three axes for rotation. \subsection{ADNI} The ADNI dataset provides 1.5T and 3T MRI scans of patients scanned over different periods of time. We chose one volume as the atlas (template volume) from the ADNI1:Screening 1.5T collection and another 50 volumes from the ADNI1:Baseline 3T collection. All volumes were normalized, and resized to match the reference volume ($160 \times 192 \times 192$) before feeding into any of the algorithms. Similar to the IXI dataset, no ground truth is available here for registration, so we report the averaged SSIM and PSNR metrics for the dataset after registration. The transformation model is the same as the one used for IXI. \begin{figure}[h!] \centering \includegraphics[scale=0.3]{images/FIRE_Comp2.jpg} \caption{Visual results for a pair of registered images from the FIRE dataset. Bottom row shows difference images after registration with the best three algorithms. 
Here ODECME refers to the RK4-Complex version.} \label{fig:fire_samples} \end{figure} \section{Experiments} Algorithm \ref{alg:ODECME} with the real matrix exponential and without multi-resolution adaptation by ODE has been published as DRMIME \cite{Nan2020}. We compare our proposed enhanced version, ODECME, with DRMIME and other competing methods for all four datasets. For all algorithms and datasets, we use $L=6$ for the maximum level of the Gaussian image pyramid. We use $\alpha=0.1, \beta=\gamma=0.01$ for the 2D datasets and $\alpha=0.01, \beta=\gamma=0.001$ for the 3D datasets in Algorithm \ref{alg:ODECME}. We use MINE as the objective function for all four datasets. Our implementation of the network $f_\theta$ for MINE uses a fully connected network whose input layer has twice the number of image channels, e.g., for a color image it is $3 \times 2 = 6$. There are two hidden layers with 100 neurons each, and the output layer produces a scalar. ReLU activation is used for all layers apart from the output layer, which has no activation. For the ODE, the input layer of $g_\phi$ consists of $7$ and $8$ neurons in the case of FIRE/ANHIR and IXI/ADNI, respectively. The reasoning is that one neuron accounts for the scale of the level in the Gaussian pyramid and the remaining neurons are for the matrix exponential coefficients ($6$ for the 2D and $7$ for the 3D datasets in our experiments). For complex coefficients, these input dimensions become $13$ and $17$, respectively. $g_\phi$ has a single hidden layer with ReLU activation and $100$ neurons, and the final output layer has as many neurons as there are matrix exponential coefficients. For all evaluations, we also conduct a paired t-test with DRMIME to investigate if the results are statistically significant (p-value $<$ 0.05). \subsection{Competing Methods} We evaluate our method against the following off-the-shelf registration algorithms from popular registration frameworks. 
For a fair comparison, we perform a random grid search to set the various hyperparameters of these toolboxes. A detailed description of these hyperparameters can be found in DRMIME \cite{Nan2020}. We compare ODECME with the following methods: \begin{enumerate} \item Mattes Mutual Information (MMI) \cite{mattes2001nonrigid, mattes2003pet, MMI} \item Joint Histogram Mutual Information (JHMI) \cite{thevenaz2000optimization, JHMI} \item Normalized Cross Correlation (NCC) \cite{NCC} \item Mean Square Error (MSE) \cite{MSE} \item AirLab Mutual Information (AMI) \cite{DBLP:journals/corr/abs-1806-09907} \item Normalized Mutual Information (NMI) \cite{studholme1999overlap, NMI}, and \item DRMIME \cite{Nan2020} \end{enumerate} The implementations of the above algorithms were used from these packages: \begin{itemize} \item SITK: MMI, JHMI, NCC, MSE \item AirLab: AMI \item SimpleElastix: NMI \end{itemize} \subsection{Accuracy Comparisons} Fig. \ref{fig:fire_samples} shows registration results for a randomly chosen image pair from the FIRE dataset. Table \ref{tab:fire_res} shows the NAED for all algorithms on the FIRE dataset. We observe that the Runge-Kutta ODE recursion with complex matrix exponential, ODE (RK4-Complex), performs significantly better than the competitors, including DRMIME. We also note that the Runge-Kutta version is more effective than the Euler version. The complex version did not have any significant accuracy advantage over the real version. Fig. \ref{fig:fire_res} presents box plots for ODE (RK4-Complex) and results from four other toolboxes. We notice that the number of outliers is lowest for ODECME, illustrating its robustness. 
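For reference, the statistic underlying the paired t-tests reported in the tables can be computed from per-pair NAED scores as follows (a NumPy sketch; the p-value then follows from the t distribution with $n-1$ degrees of freedom, e.g., via scipy.stats.ttest\_rel):

```python
import numpy as np

def paired_t_statistic(scores_a, scores_b):
    """t statistic of a paired t-test between two algorithms' per-pair scores."""
    d = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    # mean difference divided by the standard error of the differences
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```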
\begin{table}[t] \caption{NAED for FIRE dataset along with paired t-test significance values} \centering \begin{tabular}{||c|c|c||} \hline Algorithm & NAED (Mean $\pm$ STD) & p-value \\ \hline\hline ODE (RK4-Complex) & \textbf{0.00380} $\pm$ 0.012 & 0.0032 \\ \hline ODE (RK4-Real) & 0.00385 $\pm$ 0.014 & 0.0032 \\ \hline ODE (Euler-Complex) & 0.0047 $\pm$ 0.019 & 0.0822 \\ \hline ODE (Euler-Real) & 0.0049 $\pm$ 0.016 & 0.1053 \\ \hline DRMIME (Complex) & 0.00482 $\pm$ 0.031 & 0.1021 \\ \hline DRMIME (Real) & 0.00482 $\pm$ 0.026 & - \\ \hline NCC & 0.0194 $\pm$ 0.033 & 1.3e-04 \\ \hline MMI & 0.0198 $\pm$ 0.034 & 5.4e-05 \\ \hline NMI & 0.0228 $\pm$ 0.032 & 1.7e-08 \\ \hline JHMI & 0.0311 $\pm$ 0.046 & 4.5e-07 \\ \hline AMI & 0.0441 $\pm$ 0.028 & 1.4e-27 \\ \hline MSE & 0.0641 $\pm$ 0.094 & 3.5e-03 \\ [1ex] \hline \end{tabular} \label{tab:fire_res} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/FIRE_Stats.jpg} \caption{Box plot for NAED of the best 5 performing algorithms on FIRE} \label{fig:fire_res} \end{figure} \begin{figure*}[h] \centering \includegraphics[scale=0.4]{images/ANHIR_samples.jpg} \caption{Visual results for a pair of registered images from the ANHIR dataset. Bottom row shows difference images after registration with the best three algorithms. Here ODECME refers to the RK4-Complex version.} \label{fig:anhir_samples} \end{figure*} Fig. \ref{fig:anhir_samples} shows registration results for a sample image pair from the ANHIR dataset. Table \ref{tab:anhir_res} presents the NAED metrics, where once again we notice that ODE (RK4-Complex) produced the best accuracy. As with the FIRE dataset, the Runge-Kutta method produced better results than the Euler recursion, and, as before, we did not observe any clear accuracy advantage of complex coefficients over real ones. The box plots in Fig. \ref{fig:anhir_res} support the same conclusion, i.e., ODECME outperforms the other competing algorithms.
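All of the variants compared above (Euler/RK4, real/complex) build the spatial transformation as the matrix exponential of a linear combination of generator matrices weighted by the learned coefficients. A minimal NumPy sketch of this construction for the 2D case; the particular six-generator affine basis shown here is an assumption for illustration, and the complex variant simply allows the coefficients $v_i$ to be complex:

```python
import numpy as np

# Six generators of 2D affine maps in homogeneous coordinates (an assumed
# basis for illustration): translations, rotation, scaling, stretch, shear.
GENERATORS = [np.array(g, dtype=float) for g in [
    [[0, 0, 1], [0, 0, 0], [0, 0, 0]],    # x-translation
    [[0, 0, 0], [0, 0, 1], [0, 0, 0]],    # y-translation
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]],   # rotation
    [[1, 0, 0], [0, 1, 0], [0, 0, 0]],    # isotropic scaling
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],   # stretch
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],    # shear
]]

def expm(A, terms=30):
    """Truncated Taylor-series matrix exponential; adequate for the small
    3x3 matrices and small coefficient magnitudes involved here."""
    out = np.eye(A.shape[0], dtype=A.dtype)
    term = np.eye(A.shape[0], dtype=A.dtype)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def transformation(v):
    """Homogeneous transformation matrix from the coefficient vector v."""
    A = sum(c * g for c, g in zip(v, GENERATORS))
    return expm(A)
```

In our actual implementation the exponential is computed by an autograd-capable routine, so the coefficients can be optimized directly by gradient descent; the identity $\exp(A)\exp(-A) = I$ provides a quick sanity check of the construction.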
\begin{table}[h] \caption{NAED for ANHIR dataset along with paired t-test significance values} \centering \begin{tabular}{||c|c|c||} \hline Algorithm & NAED (Mean $\pm$ STD) & p-value \\ \hline\hline ODE (RK4-Complex) & \textbf{0.0344} $\pm$ 0.045 & 0.0441 \\ \hline ODE (RK4-Real) & 0.0348 $\pm$ 0.044 & 0.0642 \\ \hline ODE (Euler-Complex) & 0.0358 $\pm$ 0.075 & 1.0e-03 \\ \hline ODE (Euler-Real) & 0.0391 $\pm$ 0.035 & 1.5e-03 \\ \hline DRMIME (Complex) & 0.0373 $\pm$ 0.021 & 0.0619 \\ \hline DRMIME (Real) & 0.0373 $\pm$ 0.015 & - \\ \hline NCC & 0.0461 $\pm$ 0.084 & 7.0e-04 \\ \hline MMI & 0.0490 $\pm$ 0.082 & 6.2e-05 \\ \hline MSE & 0.0641 $\pm$ 0.094 & 5.5e-14 \\ \hline NMI & 0.0765 $\pm$ 0.090 & 3.0e-31 \\ \hline AMI & 0.0769 $\pm$ 0.090 & 3.7e-30 \\ \hline JHMI & 0.0827 $\pm$ 0.100 & 8.3e-21 \\ [1ex] \hline \end{tabular} \label{tab:anhir_res} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/ANHIR_Stats.jpg} \caption{Box plot for top 5 performing algorithms on ANHIR} \label{fig:anhir_res} \end{figure} Fig. \ref{fig:ixi_samples} shows sample results for the IXI dataset with the three top performing algorithms. We compute the SSIM and PSNR scores after registration and present them in Fig. \ref{fig:ixi_ssim} and \ref{fig:ixi_psnr}, respectively. Both figures show that ODE (RK4-Complex) is significantly better. We plot the SSIM and PSNR scores for the ADNI dataset in Figures \ref{fig:adni_ssim} and \ref{fig:adni_psnr}, respectively. For ODECME, the median scores are not only higher but also have the lowest spread. Fig. \ref{fig:ADNI_samples} shows two different slices with three views before and after registration with the ODECME and MSE algorithms; the latter, according to our experiments, is the next best algorithm for this dataset. \subsection{Effect of CME} While the NAED results are not statistically significant enough to conclude better accuracy for CME, it does speed up convergence.
For instance, with the real matrix exponential, it takes about 500 epochs to converge for the FIRE dataset, while for ANHIR it takes about 1500 epochs. With the complex matrix exponential, it takes only about 300 epochs for FIRE and about 1300 epochs for ANHIR. Fig. \ref{fig:compvsreal} shows the NAED convergence graphs (error bar plots) for 10 randomly selected pairs from FIRE for Algorithm \ref{alg:ODECME} using both real and complex matrix exponential coefficients. The plots show that convergence using CME is much faster. \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/IXI_sample_2.png} \caption{Difference images from the middle slice from a pair of volumes from the IXI dataset shown before and after registration with the three top performing algorithms. ODECME refers to the RK4-Complex version.} \label{fig:ixi_samples} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/ssim_ixi.jpg} \caption{Box plot for SSIM values for each algorithm on the IXI dataset after registration. ODECME refers to the RK4-Complex version.} \label{fig:ixi_ssim} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/psnr_ixi.jpg} \caption{Box plot for PSNR values for each algorithm on the IXI dataset after registration. ODECME refers to the RK4-Complex version.} \label{fig:ixi_psnr} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/adni_ssim.jpg} \caption{Box plot for SSIM values for each algorithm on the ADNI dataset after registration. ODECME refers to the RK4-Complex version.} \label{fig:adni_ssim} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.4]{images/adni_psnr.jpg} \caption{Box plot for PSNR values for each algorithm on the ADNI dataset after registration.
ODECME refers to the RK4-Complex version.} \label{fig:adni_psnr} \end{figure} \begin{figure*}[h] \centering \includegraphics[scale=0.40]{images/ADNI_samples.jpg} \caption{Same slices from the difference volume before and after registration with the two top performing algorithms.} \label{fig:ADNI_samples} \end{figure*} \begin{figure}[h] \centering \includegraphics[scale=0.3]{images/compvsreal.jpg} \caption{NAED averaged for 10 randomly selected pairs from FIRE, plotted over 500 epochs. Error bars represent the range of NAED values over 10 registrations.} \label{fig:compvsreal} \end{figure} \subsection{Effect of ODE} To demonstrate that Algorithm \ref{alg:ODECME} can fine-tune transformation matrices over the resolution levels, we compute the range of each complex coefficient after registering the FIRE dataset over six resolution levels. Table \ref{tab:ode_avg_std} shows the averages and standard deviations of these range values over the entire FIRE dataset. Note that the transformation matrices act over the canonical pixel value range of $[-1,1]\times[-1,1].$ Thus, the small variations of the matrix exponential coefficients still produce significant changes in the transformation matrices. Also note that the contribution of the imaginary coefficients is significant in shaping the transformation matrices. \begin{center} \begin{table}[h] \caption{The average range and standard deviation of real and imaginary coefficients after registering the FIRE dataset} \centering \begin{tabular}{||c|c|c||} \hline Coefficient & (Real) Mean Range $\pm$ SD & (Imag.)
Mean Range $\pm$ SD \\ \hline\hline $v_{0}$ & 0.0107 $\pm$ 0.0100 & 0.0665 $\pm$ 0.0605\\ \hline $v_{1}$ & 0.0119 $\pm$ 0.0074 & 0.0188 $\pm$ 0.0116\\ \hline $v_{2}$ & 0.0400 $\pm$ 0.0122 & 0.0637 $\pm$ 0.0371\\ \hline $v_{3}$ & 0.0233 $\pm$ 0.0177 & 0.0299 $\pm$ 0.0222\\ \hline $v_{4}$ & 0.0212 $\pm$ 0.0107 & 0.0654 $\pm$ 0.0407\\ \hline $v_{5}$ & 0.0135 $\pm$ 0.0052 & 0.0892 $\pm$ 0.0624\\ \hline \end{tabular} \label{tab:ode_avg_std} \end{table} \end{center} \subsection{Running Time} To measure the running times of all the algorithms, we chose the FIRE dataset and ran all algorithms for $1000$ iterations on a desktop computer with a single NVIDIA GeForce GTX 1080 Ti, an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz, and 32GB RAM. Table \ref{tab:perf_table1} shows the running times. We note that, except for ODECME, DRMIME, and AMI, the algorithms did not use GPU acceleration. Additionally, we ran ODECME (RK4) for only 50 epochs, which resulted in an average NAED better than most competitors at a similar running time. Note that our software is not optimized, unlike SimpleElastix (NMI) for example. \begin{center} \begin{table}[h] \caption{Time taken for 1000 epochs and resultant NAED (lower is better)} \centering \begin{tabular}{||c|c c||} \hline Algorithm & Time (seconds) & NAED \\ \hline\hline ODECME (RK4) (50 epochs) & 108 & 0.01921\\ \hline NMI & 60 & 0.02503\\ \hline AMI & 620 & 0.02942\\ \hline DRMIME & 1425 & 0.00368 \\ \hline ODECME (RK4) & 1601 & \textbf{0.00360}\\ \hline ODECME (Euler) & 1469 & 0.00452\\ \hline MMI & 2904 & 0.00598\\ \hline JHMI & 1859 & 0.00605\\ \hline NCC & 3804 & 0.00697\\ \hline MSE & 2847 & 0.02918\\ [1ex] \hline \end{tabular} \label{tab:perf_table1} \end{table} \end{center} \section{Conclusion and Future Work} Optimization-based image registration using homography as a transformation is classical.
However, recent toolboxes with autograd capability and strong GPU acceleration have created opportunities to improve these classical registration algorithms. In this work, we show that convergence of such algorithms can be accelerated by using the complex matrix exponential. Moreover, using an ordinary differential equation, we can further refine the accuracy of such algorithms for multi-resolution image registration, while remaining able to employ any differentiable objective function. Our algorithm yields state-of-the-art accuracy on benchmark 2D and 3D datasets. We plan to extend the ODE framework to deformable registration in future work. \noindent \textbf{Author Roles:} The first author wrote the software for this work, conducted the experiments, produced the results, and contributed to the writing. The other authors supervised the first author. The last author contributed to the conceptualization and the writing.
\section{Introduction} The present work proposes an extension of the Mixed Virtual Element meth\-od to meshes with elements having curved edges, for bi-dimensional elliptic problems in mixed form. The method makes it possible to handle domains with curved boundaries, domains with embedded curved interfaces, or even mesh elements having all curved edges. Mixed methods are well suited for the discretization of vector fields in $H(\text{div})$. Classes of mixed methods, in addition to the Mixed Virtual Element Methods (MVEM)~\cite{BeiraodaVeiga2014b} considered here, are the well known Raviart-Thomas (RT)~\cite{Raviart1977,Roberts1991,Arnold2005,Boffi2013} and Brezzi-Douglas-Marini (BDM)~\cite{Brezzi1985,Nedelec1986,Brezzi1987,Boffi2013} finite element schemes. Dealing with curved boundaries/interfaces for approximation degrees greater than one has some pitfalls. Indeed, as the polynomial accuracy increases, the geometrical error due to the approximation of curved boundaries or interfaces via piece-wise linear edges dominates the numerical error of the scheme, thus limiting the convergence rate. Curved edge discretizations were first investigated in the virtual element framework in~\cite{BeiraodaVeiga2019}, where an elliptic bi-dimensional problem in primal formulation is considered. The proposed approach is based on standard virtual elements (VEM) and is well suited for problems where the computational domain is characterized by fixed curved boundaries or interfaces. After this pioneering work, other strategies have been proposed to extend the VEM to curved edge elements. In~\cite{Bertoluzza2019}, for example, the authors keep the standard definition of VEM spaces and suggest suitably modified bilinear forms to take into account elements with curved boundaries. In~\cite{brezziCurvo} the virtual element space proposed by~\cite{BeiraodaVeiga2019} is modified to contain polynomials.
Such an extension is crucial to preserve convergence rates when the mesh element diameter decreases while the boundary curvature remains fixed. Other important classes of methods have been developed to handle curved edges: isogeometric analysis~\cite{Hughes2005,Bazilevs2006,Montardini2017}, non-affine isoparametric elements~\cite{Ciarlet1972,Zlamal1973,Lenoir1986} and also Mimetic Finite Differences~\cite{Brezzi:2006} or Hybrid High Order schemes~\cite{dipietro}. The main advantage of VEM based approaches for curved edge elements lies in the possibility of exactly reproducing the curved interface or domain boundary without introducing any geometrical approximation, provided a suitable parametric description of the curve is available. Furthermore, the use of mixed discretizations is particularly well suited for problems where local mass conservation is of paramount importance. These features, combined with the great flexibility of VEM, make the proposed approach particularly well suited for single- or multi-phase flow problems in heterogeneous porous media, with or without the presence of fractures, and for the analysis of absorbing materials, composite materials, or materials with inclusions of arbitrary shape. Indeed, these applications are characterized by complex domains with arbitrarily shaped interfaces, or multiple intersecting interfaces, and coefficients with strong variations. Examples of applications of the MVEM with rectilinear edge meshes can be found, e.g., in~\cite{BeiraoVeiga2016,Benedetto2016,Fumagalli2016a,Fumagalli2017,Benedetto2017,Fumagalli2017a,Dassi2019,hyperVEM,pichlerVEM,Fumagalli2020b}. Here the MVEM is extended to curved edge elements following the approach proposed in~\cite{BeiraodaVeiga2019}. Concerning the choice of the degrees of freedom, the proposed scheme can be seen as a generalization of RT elements of order $k\geq 0$ to curved edges. Moreover, the new scheme is an extension to the curved case of the classical virtual mixed spaces.
Indeed, when the domain has no curved boundaries or interfaces, the proposed virtual spaces boil down to the spaces defined in~\cite{BeiraodaVeiga2014b,BeiraoVeiga2016}, with a slightly different choice of the degrees of freedom that is particularly suited to curved elements. {The paper is organized as follows. In Section~\ref{sec:mathematical_model} we discuss some technical details, namely the notation, the mathematical model and the hypotheses on curved edges. In Section~\ref{sec:discrete_spaces} we present the mesh assumptions, introduce the discrete spaces with the associated sets of degrees of freedom, define the discrete bilinear forms and then present the discrete problem. In Section~\ref{sec:theory} we analyse the theoretical properties of the proposed method: we introduce the Fortin operator, establish the discrete inf-sup condition and provide the interpolation estimate for the curved MVEM. Then we prove the stability bounds for the associated discrete bilinear form. At the end of this section we recover the optimal order of convergence of the present method. In Section~\ref{sec:numExe} we provide some experiments to give numerical evidence of the behaviour of the proposed scheme. Finally, Section \ref{sec:conclusion} is devoted to conclusions.} \section{Notations and Preliminaries}\label{sec:mathematical_model} {Throughout the paper, we will follow the usual notation for Sobolev spaces and norms as in \cite{Adams:1975}. Hence, for a bounded domain $\omega$, the norms in the spaces $W^s_p(\omega)$ and $L^p(\omega)$ are denoted by $\|{\cdot}\|_{W^s_p(\omega)}$ and $\|{\cdot}\|_{L^p(\omega)}$ respectively. Norm and seminorm in $H^{s}(\omega)$ are denoted respectively by $\|{\cdot}\|_{s,\omega}$ and $|{\cdot}|_{s,\omega}$, while $(\cdot,\cdot)_{\omega}$ and $\|\cdot\|_{\omega}$ denote the $L^2$-inner product and the $L^2$-norm (the subscript $\omega$ may be omitted when $\omega$ is the whole computational domain $\Omega$).
Moreover, with the usual notation, the symbols $\nabla$, $\Delta$ denote the gradient and Laplacian for scalar functions, while $\diver$ denotes the divergence for vector fields. Furthermore, for a scalar function $\psi$ and a vector field $\boldsymbol{v} = (v_1, v_2)$ we set \[ \ROT \psi \vcentcolon= \left(\frac{\partial \psi}{\partial y}\,, - \frac{\partial \psi}{\partial x} \right)^\top \,, \qquad \rot \boldsymbol{v} \vcentcolon= \frac{\partial v_2}{\partial x} - \frac{\partial v_1}{\partial y} \,. \] Finally, we recall the following well-known functional spaces, which will be useful in the sequel: \begin{gather*} H(\diver, \omega) \vcentcolon= \{\bm{v} \in [L^2(\omega)]^2: \, \diver \bm{v} \in L^2(\omega)\} \,, \\ H(\rot, \omega) \vcentcolon= \{\bm{v} \in [L^2(\omega)]^2 :\, \rot \bm{v} \in L^2(\omega)\} \,. \end{gather*} } \subsection{Mathematical model} We consider a (curved) domain $\Omega\subset\mathbb{R}^2$ with Lipschitz continuous boundary and external unit normal $\boldsymbol{n}$. The boundary of $\Omega$, named $\partial \Omega$, is divided into two parts $\partial_e \Omega$ and $\partial_n \Omega$ such that $\overline{\partial \Omega} = \overline{\partial_e \Omega} \cup \overline{\partial_n \Omega}$ and $\mathring{\partial_e \Omega} \cap \mathring{\partial_n \Omega} = \emptyset$. For simplicity, we assume that $\mathring{\partial_n \Omega} \neq \emptyset$. For a positive definite tensor $\kappa$, a positive real number $\mu$, and a scalar source $f$, the following problem is set in $\Omega$: \begin{problem}[Model problem]\label{pb:darcy_model} Find $(\bm{q}, p)$ such that \begin{subequations} \begin{align}\label{eq:darcy_model_eq} \left \{ \begin{aligned} &\mu \bm{q} + \kappa \nabla p = \bm{0} \\ &\diver \bm{q} + f = 0 \end{aligned} \right .
\qquad \text{in } \Omega, \end{align} supplied with the following boundary conditions \begin{equation} \label{eq:darcy_model} \left \{ \begin{aligned} &p = \overline{p} & \qquad \text{on } \partial_n \Omega,\\ &\bm{q} \cdot \bm{n} = \overline{q} & \qquad \text{on } \partial_e \Omega. \end{aligned} \right . \end{equation} \end{subequations} \end{problem} This problem describes, for example, the pressure $p$ and the Darcy velocity $\bm{q}$ of a single phase fluid in a porous medium, characterized by a permeability tensor $\kappa$, a fluid dynamic viscosity $\mu$, and fluid sinks/sources $f$. { In the following we assume that $\overline{q}$ is null; otherwise a lifting technique should be considered. Before introducing the weak problem associated with Problem \ref{pb:darcy_model}, we fix the following notation \[ \boldsymbol{V} \vcentcolon= \left\{ \bm{v} \in H(\diver, \Omega) \quad \text{s.t.} \quad \bm{v} \cdot \bm{n} = 0 \text{ on } \partial_e \Omega \right\} \quad \text{and} \quad Q \vcentcolon= L^2(\Omega) \,, \] equipped with the natural inner products and induced norms. } The spaces $\boldsymbol{V}$ and $Q$, with these structures, are thus Sobolev spaces. In the previous definition of $\boldsymbol{V}$, the condition on the essential part of $\partial \Omega$ can be detailed as: \begin{gather*} \langle \bm{v} \cdot \bm{n}, w \rangle = 0 \quad \forall w \in H^{\frac{1}{2}}_{00}(\partial_e \Omega) \end{gather*} where $\langle\cdot, \cdot\rangle$ is the duality pairing between $H^{-\frac{1}{2}}(\partial_e \Omega)$ and $H^{\frac{1}{2}}_{00}(\partial_e \Omega)$. See \cite{Boffi2013} for more details. We now introduce the weak formulation of Problem \ref{pb:darcy_model}.
The procedure is rather standard and leads to the definition of the following forms \begin{align}\label{eq:continuous_forms} \begin{aligned} &a(\cdot, \cdot)\colon \boldsymbol{V} \times \boldsymbol{V} \to \mathbb{R} \qquad &&a(\bm{u}, \bm{v}) \vcentcolon= (\mu \kappa^{-1} \bm{u}, \bm{v})_\Omega \quad &\forall (\bm{u}, \bm{v}) \in \boldsymbol{V} \times \boldsymbol{V}\\ &b(\cdot, \cdot)\colon \boldsymbol{V} \times Q \to \mathbb{R} \qquad &&b(\bm{u}, v) \vcentcolon= -(\diver \bm{u}, v)_\Omega \quad &\forall(\bm{u}, v)\in \boldsymbol{V} \times Q. \end{aligned} \end{align} We have furthermore assumed that $\kappa \in [L^\infty(\Omega)]^{2\times2}$, $\mu \in L^\infty(\Omega)$ and that there exists $\mu_0 > 0$ such that $\mu \geq \mu_0$. Linear functionals associated with the given data are defined as \begin{align}\label{eq:linear_functionals} \begin{aligned} &G(\cdot)\colon \boldsymbol{V} \to \mathbb{R} \quad &&G(\bm{v})\vcentcolon= - (\overline{p}, \bm{v} \cdot \bm{n})_{\partial_n\Omega} \quad &\forall \bm{v} \in \boldsymbol{V}\\ &F(\cdot) \colon Q\to \mathbb{R} \quad &&F(v) \vcentcolon= (f, v)_\Omega \quad &\forall v \in Q, \end{aligned} \end{align} where the data have regularity $\overline{p} \in H^{\frac{1}{2}}_{00}(\partial_n\Omega)$ and $f \in L^2(\Omega)$. We can finally summarize the weak formulation of Problem \ref{pb:darcy_model} as the following. \begin{problem}[Weak problem]\label{pb:darcy_weak} Find the couple Darcy velocity and pressure $(\bm{q}, p)~\in~\boldsymbol{V}~\times~Q$ such that \begin{align} \left \{ \begin{aligned} & a(\bm{q}, \bm{v}) + b(\bm{v}, p) = G(\bm{v}) && \forall \bm{v} \in \boldsymbol{V} \\ & b(\bm{q}, v) = F(v) && \forall v \in Q. \end{aligned} \right. \end{align} \end{problem} \noindent The previous problem is well posed (see for instance \cite{Boffi2013}). { \subsection{Assumptions on the curved domains} Following the approach in \cite{BeiraodaVeiga2019}, we detail here the assumptions on the (curved) domain $\Omega$.
We consider a bounded Lipschitz domain $\Omega$ whose boundary $\partial \Omega$ is made up of a finite number of smooth curves $\{\Gamma_i\}_{i=1, \dots, N}$ that fit the splitting of the boundary into ``essential'' and ``natural'' parts, i.e., $$ \bigcup_{i=1}^{N_e} \Gamma_i = \partial_e \Omega\qquad\text{and}\qquad \bigcup_{i=N_e+1}^{N} \hspace{-0.5em}\Gamma_i = \partial_n \Omega. $$ We assume that: \begin{assumption}[Boundary regularity]\label{ass:regu} We assume that each curve $\Gamma_i$ of $\partial \Omega$ is sufficiently smooth; more precisely, we require that $\Gamma_i$ is of class $C^{m+1}$ with $m \geq 0$, i.e., there exists a given regular and invertible $C^{m+1}$-parametrization $\gamma_i \colon I_i \to \Gamma_i$ for $i=1, \dots, N$, where $I_i \vcentcolon= [a_i, b_i] \subset \mathbb{R}$ is a closed interval. \end{assumption} Since all the parts $\Gamma_i$ of $\partial \Omega$ will be treated in the same way, in the following we will drop the index $i$ from all the involved maps and parameters, in order to obtain a lighter notation. } \begin{remark}[Internal interfaces] It is important to note that the proposed approach is also valid for internal curved interfaces. However, to keep the presentation simple, we assume curved elements only on the boundary, since the extension is straightforward. The examples in Subsections \ref{sub:inside1} and \ref{sub:inside2} deal with internal interfaces. \end{remark} { \section{Mixed Virtual Elements on curved polygons}\label{sec:discrete_spaces} In this section, we define the virtual formulation of Problem \ref{pb:darcy_weak}. We first discuss the assumptions for the meshes on the curved domain $\Omega$, then we introduce the spaces for the vector and scalar fields with the associated sets of degrees of freedom. We discuss the computability of the $L^2$-projection onto the polynomial space and define the approximated linear form.
\subsection{Mesh assumptions} From now on, we will denote by $E$ a general polygon with $\ell_e$ edges $e$, any number of which may be curved. For each polygon $E$ and each edge $e$ of $E$ we denote by $|E|$, $h_E$, $\boldsymbol{x}_E=(x_E, y_E)$ the measure, diameter and centroid of $E$, respectively. By $h_e$, $\boldsymbol{x}_e$ we denote the length and midpoint of $e$, respectively. Furthermore, $\boldsymbol{n}_E^e$ denotes the unit outward normal vector to $e$ with respect to $E$, while $\boldsymbol{n}_E$ is a generic outward normal of $\partial E$. We call $\boldsymbol{n}^e$ a fixed unit vector which is normal to the edge $e$ and set $\sigma_{E,e} \vcentcolon= \boldsymbol{n}_E^e \cdot \boldsymbol{n}^e= \pm 1$ (notice that $\boldsymbol{n}^e$ does not depend on $E$). Let $\Omega_h$ be a decomposition of $\Omega$ into general polygons $E$, completed along $\partial \Omega$ by curved elements whose boundary contains an arc $\subset \partial \Omega$, where we define $h~\vcentcolon= ~\sup_{E \in \Omega_h} h_E$, see~\cite{BeiraodaVeiga2019}. We make two assumptions on the mesh elements: there exists a positive uniform constant $\rho$ such that \begin{assumption}[Star-shaped]\label{ass:star} Each element $E$ in $\Omega_h$ is star-shaped with respect to a ball $B_E$ of radius $ \geq\, \rho \, h_E$. \end{assumption} \begin{assumption}[Edges comparable size]\label{ass:mesh} For each element $E$ in $\Omega_h$ and for any (possibly curved) edge $e$ of $E$, it holds that $h_e \geq \rho \, h_E$. \end{assumption} We denote by $\mathcal{E}_h$ the set of all the mesh edges, divided into internal $\mathcal{E}_h^{\rm int}$ and external $\mathcal{E}_h^{\rm ext}$ edges; the latter is split into ``essential edges'' $\mathcal{E}_h^{\partial_e \Omega}$ and ``natural edges'' $\mathcal{E}_h^{\partial_n \Omega}$. For any $E \in \Omega_h$ we denote by $\mathcal{E}_h^{E}$ the set of the edges of $E$.
Finally, the total numbers of edges (excluding the ``essential edges'' $\mathcal{E}_h^{\partial_e \Omega}$) and of elements in the decomposition $\Omega_h$ are denoted by $L_e$ and $L_E$, respectively. With a slight abuse of notation, we define the following maps to deal with both straight and curved edges: \begin{itemize} \item for any curved edge $e \in \mathcal{E}_h$, we call $\gamma \colon \mathfrak{e} \subset I \to e$ the restriction of $\gamma \colon I \to \partial \Omega$ having image $e$, \item for any straight edge $e \in \mathcal{E}_h$ with endpoints $\boldsymbol{x}_{e_1}$ and $\boldsymbol{x}_{e_2}$, we denote by $\gamma \colon \mathfrak{e} \vcentcolon= [0, h_e] \to e$ the standard affine map $\gamma(t) = \frac{t}{h_e}(\boldsymbol{x}_{e_2} - \boldsymbol{x}_{e_1}) + \boldsymbol{x}_{e_1}$. \end{itemize} \begin{remark} \label{rm:length} We notice that, since the parametrization $\gamma \colon I \to \partial \Omega$ is fixed once and for all, under Assumption \ref{ass:regu} it follows that for any curved edge $e \in \mathcal{E}_h^{E}$ the length of the interval $\mathfrak{e}$ is comparable with the diameter $h_E$ of the element $E$, since $h_e = \int_{\mathfrak{e}} \|\gamma'(s)\| \, {\rm d}s$ and $\gamma$, $\gamma^{-1} \in W^{1, \infty}$ are fixed. Moreover, since $\gamma$ is fixed, when $h$ approaches zero the straight segment $e'$ whose endpoints are vertices of $e$ approaches the curved edge $e$. Therefore, by Assumption \ref{ass:mesh}, for sufficiently small $h$ the length $h_e$ of the curved edge $e$ is comparable with the diameter $h_E$. \end{remark} In the following, the symbol $\lesssim$ will denote a bound up to a generic positive constant, independent of the mesh size $h$, but which may depend on $\Omega$, on the ``polynomial'' order $k$, on the parametrization $\gamma$ in Assumption \ref{ass:regu} and on the shape constant $\rho$ in Assumptions \ref{ass:star} and \ref{ass:mesh}.
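For a straight edge, the affine map $\gamma$ defined above can be realized directly; a minimal sketch (the helper name is ours):

```python
import numpy as np

def straight_edge_map(x1, x2):
    """Return the edge length h_e and the affine parametrization
    gamma: [0, h_e] -> e, gamma(t) = (t / h_e) (x2 - x1) + x1,
    for the straight edge with endpoints x1 and x2."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    h_e = float(np.linalg.norm(x2 - x1))
    return h_e, lambda t: (t / h_e) * (x2 - x1) + x1
```

Note that this map has unit speed, $\|\gamma'\| = 1$, so $h_e = \int_{\mathfrak{e}} \|\gamma'(s)\| \, {\rm d}s$ holds trivially, consistently with Remark \ref{rm:length} for the curved case.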
} { \subsection{Polynomial and mapped polynomial spaces} Using standard VEM notations, for $n \in \mathbb{N}$, $s \in \mathbb{R}^+$, and for any $E\in \Omega_h$, let us introduce the spaces: \begin{itemize} \item $\mathbb{P}_n(E)$ the set of polynomials on $E$ of degree $\leq n$ (with $\mathbb{P}_{-1}(E)=\{ 0 \}$), \item $\mathbb{P}_n(\Omega_h) \vcentcolon= \{q \in L^2(\Omega_h): q_{|E} \in \mathbb{P}_n(E) \, \forall E \in \Omega_h\}$, \item $H^s(\Omega_h) \vcentcolon= \{v \in L^2(\Omega_h): v_{|E} \in H^s(E)\, \forall E \in \Omega_h\}$ equipped with the broken norm and seminorm \[ \|v\|^2_{s,\Omega_h} \vcentcolon= \sum_{E \in \Omega_h} \|v\|^2_{s,E}\,, \qquad |v|^2_{s,\Omega_h} \vcentcolon= \sum_{E \in \Omega_h} |v|^2_{s,E} \,, \] \end{itemize} and we define \[ \pi_n\vcentcolon= \dim(\mathbb{P}_n(E)) = \frac{(n+1)(n+2)}{2} \,. \] Notice that the following useful polynomial decomposition holds \cite{BeiraodaVeiga2014b,Dassi2019} \begin{equation} \label{eq:polydec} [\mathbb{P}_n(E)]^2 = \nabla \mathbb{P}_{n+1}(E) \oplus \boldsymbol{x}^{\perp} \mathbb{P}_{n-1}(E) \end{equation} where $\boldsymbol{x}^{\perp}\vcentcolon= (y, -x)^T$. \begin{remark} \label{rm:rot} Note that \eqref{eq:polydec} implies that the operator $\rot$ is an isomorphism from $\boldsymbol{x}^{\perp} \mathbb{P}_{n-1}(E)$ to the whole $\mathbb{P}_{n-1}(E)$, i.e., for any $q_{n-1} \in \mathbb{P}_{n-1}(E)$ there exists a unique $p_{n-1} \in \mathbb{P}_{n-1}(E)$ such that $q_{n-1} = \rot(\boldsymbol{x}^{\perp} p_{n-1})$. \end{remark} A natural basis associated with the space $\mathbb{P}_n(E)$ is the set of normalized monomials \[ \mathcal{M}_n(E) \vcentcolon= \left\{ \left( \frac{\boldsymbol{x} - \boldsymbol{x}_E}{h_E} \right)^{\bm{\beta}} \text{ with } \abs{\bm{\beta}} \leq n \right\} \] where $\bm{\beta}$ is a multi-index. Notice that $\|m\|_{L^{\infty}(E)} \leq 1$ for any $m \in \mathcal{M}_n(E)$. 
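The dimension bookkeeping behind \eqref{eq:polydec} can be checked directly: the gradient annihilates constants, so $\dim \nabla \mathbb{P}_{n+1}(E) = \pi_{n+1} - 1$, while multiplication by $\boldsymbol{x}^{\perp}$ is injective on $\mathbb{P}_{n-1}(E)$. A short sketch verifying that the two summands add up to $\dim [\mathbb{P}_n(E)]^2 = 2\pi_n$:

```python
def pi(n):
    """dim P_n(E) = (n + 1)(n + 2) / 2 in two dimensions, with pi_{-1} = 0."""
    return 0 if n < 0 else (n + 1) * (n + 2) // 2

def decomposition_dims(n):
    """Dimensions of the two summands in [P_n]^2 = grad P_{n+1} (+) x-perp P_{n-1}."""
    return pi(n + 1) - 1, pi(n - 1)

# The decomposition is dimensionally consistent for every degree n.
for n in range(20):
    grad_part, perp_part = decomposition_dims(n)
    assert grad_part + perp_part == 2 * pi(n)
```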
We extend the basis $\mathcal{M}_n(E)$ for vector valued polynomials $[\mathbb{P}_n(E)]^2$ defining \[ \boldsymbol{\Mk}_n(E) \vcentcolon= \left\{ (m_r, 0)^\top\,, \,\, (0, m_s)^\top\, \quad \text{with $m_r, m_s \in \mathcal{M}_n(E)$} \right\}. \] Let us now introduce the boundary space on the edge $e \in \mathcal{E}_h$. Following the same approach, for any interval $\mathfrak{e} \subset \mathbb{R}$ we denote by $\mathbb{P}_n(\mathfrak{e})$ the set of polynomials on $\mathfrak{e}$ of degree $\leq n$ with the associated basis of normalized polynomials \[ \mathcal{M}_n(\mathfrak{e}) \vcentcolon= \left\{ 1, \frac{x-x_{\mathfrak{e}}}{h_{\mathfrak{e}}},\left( \frac{x - x_{\mathfrak{e}}}{h_{\mathfrak{e}}}\right)^2, \ldots, \left( \frac{x - x_{\mathfrak{e}}}{h_{\mathfrak{e}}}\right)^n \right\}, \] again we notice that $\|m\|_{L^{\infty}(\mathfrak{e})} \leq 1$ for any $m \in \mathcal{M}_n(\mathfrak{e})$. For each edge $e \in \mathcal{E}_h$ we consider the following mapped polynomial and scaled monomial spaces \begin{eqnarray*} \widetilde{\mathbb{P}}_n(e) &\vcentcolon=& \{ \widetilde{q} = q \circ \gamma^{-1}: q \in \mathbb{P}_n(\mathfrak{e}) \}\quad\text{and}\quad\\ \widetilde{\mathcal{M}}_n(e) &\vcentcolon=& \{ \widetilde{m} = m \circ \gamma^{-1}: m \in \mathcal{M}_n(\mathfrak{e}) \}\,, \end{eqnarray*} i.e., $\widetilde{\mathbb{P}}_n(e)$ is made of all functions that are polynomials with respect to the parametrization $\gamma$. It is important to note that the following property holds: \begin{property}\label{prop:subset} For any edge $e \in \mathcal{E}_h^{E}$ we have $\mathbb{P}_n(E)|_e \subset \widetilde{\mathbb{P}}_n(e)$ if $e$ is straight, or $\mathbb{P}_0(E)|_e \subset \widetilde{\mathbb{P}}_n(e)$ and $\mathbb{P}_i(E)|_e \not\subset \widetilde{\mathbb{P}}_n(e)$, for $i > 0$, if $e$ is curved. The same considerations apply to $\widetilde{\mathcal{M}}_n$. 
\end{property} Finally, the local $L^2$-projection operator $\Pi^n_0 \colon [L^2(E)]^2 \to [\mathbb{P}_n(E)]^2$ is defined as follows: given $\boldsymbol{w} \in [L^2(E)]^2$ we have \begin{equation} \label{eq:projection_def} \int_E \Pi^n_0 \boldsymbol{w} \cdot \bm{m} \, {\rm d}E = \int_E \boldsymbol{w} \cdot \bm{m} \, {\rm d}E \quad\forall \bm{m} \in \boldsymbol{\Mk}_n(E). \end{equation} With a slight abuse of notation, we denote by $\Pi^n_0 \colon [L^2(\Omega)]^2 \to [\mathbb{P}_n(\Omega_h)]^2$ the projection onto the space of piecewise polynomials defined element-wise by $(\Pi^n_0 \boldsymbol{w})|_E \vcentcolon= \Pi^n_0 (\boldsymbol{w}|_E)$ for all $E\in \Omega_h$. Similarly, the $L^2$-edge projection operator $\widetilde{\Pi}^n_0 \colon L^2(e) \to \widetilde{\mathbb{P}}_n(e)$ is defined as follows: given $w \in L^2(e)$, \begin{equation} \label{eq:projection_edge} \int_e \widetilde{\Pi}^n_0 w \, \widetilde{m} \, {\rm d}e = \int_e w \, \widetilde{m} \, {\rm d}e \quad\forall \widetilde{m} \in \widetilde{\mathcal{M}}_n(e). \end{equation} } { \subsection{Vector space}\label{subsec:vector_spaces} Let $k \geq 0$ be the polynomial degree of accuracy of the method. We proceed in the standard virtual element fashion, i.e., we first define the virtual spaces element-wise and then glue them together globally.
We introduce the local virtual space on the curved element $E\in \Omega_h$: \begin{eqnarray} \boldsymbol{V}_k(E) \vcentcolon= \{\boldsymbol{v} \in H(\diver, E) \cap H(\rot, E)&:& \boldsymbol{v} \cdot\boldsymbol{n}^e \in \widetilde{\mathbb{P}}_k(e) \, \forall e \in \mathcal{E}_h^{E},\nonumber\\ \phantom{\boldsymbol{V}_k(E) \vcentcolon= \{\boldsymbol{v} \in H(\diver, E) \cap H(\rot, E)}&\phantom{:}& \diver \boldsymbol{v} \in \mathbb{P}_k(E),\, \rot \boldsymbol{v} \in \mathbb{P}_{k-1}(E) \}\,.\nonumber\\ \label{eqn:spaceCurved} \end{eqnarray} The definition above extends to more general, curved element geometries the ``straight'' mixed VEM space introduced in \cite{BeiraodaVeiga2014b,BeiraoVeiga2016}, which is the VEM counterpart of the Raviart--Thomas spaces. A function $\boldsymbol{v}$ belonging to the space $\boldsymbol{V}_k(E)$ is well defined (assuming the compatibility condition of the divergence theorem), but, in contrast with standard finite elements, it is not known a priori in the interior of $E$. We have the following choice for the degrees of freedom.
\begin{dof}[DoFs for $\boldsymbol{V}_k(E)$]\label{dof:vhk} The set of scaled degrees of freedom associated with the space $\boldsymbol{V}_k(E)$ is given, for all $\boldsymbol{w} \in \boldsymbol{V}_k(E)$, by the linear operators $\boldsymbol{D}$ split into three subsets: \begin{itemize} \item $\boldsymbol{D_1}$: the boundary moments \[ \boldsymbol{D_1}^{e,i}(\boldsymbol{w}) \vcentcolon= \frac{1}{h_e} \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \quad \forall e \in \mathcal{E}_h^{E}, \, \forall \widetilde{m}_i \in \widetilde{\mathcal{M}}_k(e), \, i=1, \dots, k+1; \] \item $\boldsymbol{D_2}$: the element moments of the divergence \[ \boldsymbol{D_2}^{j}(\boldsymbol{w}) \vcentcolon= \frac{h_E}{|E|} \int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E \quad \forall m_j \in \mathcal{M}_k(E)\setminus \mathcal{M}_0(E),\, j=2, \dots, \pi_k; \] \item $\boldsymbol{D_3}$: the element moments \[ \boldsymbol{D_3}^l(\boldsymbol{w}) \vcentcolon= \frac{1}{|E|} \int_E \boldsymbol{w} \cdot \boldsymbol{m}^{\perp} m_l \, {\rm d}E \quad \forall m_l \in \mathcal{M}_{k-1}(E),\, l=1, \dots, \pi_{k-1}, \] where $\boldsymbol{m}^{\perp}\vcentcolon= \left(\mathlarger{\frac{(y - y_E)}{h_E}},\,-\mathlarger{\frac{(x - x_E)}{h_E}}\right)$. \end{itemize} \end{dof} \noindent The dimension of $\boldsymbol{V}_k(E)$ is given by \begin{equation} \label{eq:diml} \dim(\boldsymbol{V}_k(E)) = \ell_e (k+1) + (\pi_{k} - 1) + \pi_{k-1} \,. \end{equation} \begin{remark} \label{rm:dofs1} The proof that the linear operators $\boldsymbol{D_1}$, $\boldsymbol{D_2}$ and $\boldsymbol{D_3}$ constitute a set of DoFs for $\boldsymbol{V}_k(E)$ follows the same lines as Lemma 3.1, Lemma 3.2 and Theorem 3.1 in \cite{BeiraodaVeiga2014b}.
\end{remark} \begin{remark} \label{rm:dofs2} The set of DoFs $\boldsymbol{D_2}$ used in the present work is different from the one suggested in \cite{BeiraodaVeiga2014b}, where, instead, the moments \[ \frac{h_E}{|E|} \int_E \boldsymbol{w} \cdot \nabla m_{k} \, {\rm d}E \quad \forall m_{k} \in \mathcal{M}_{k}(E)\setminus \mathcal{M}_0(E) \] are used. The choice proposed in the present work turns out to be particularly suited for curved elements, as explained in Remark \ref{rm:dofs3}. \end{remark} The global space is obtained by gluing together the local spaces: \begin{equation} \label{eq:VVG} \boldsymbol{V}_k(\Omega_h) \vcentcolon= \{ \boldsymbol{v} \in \boldsymbol{V}: \, \boldsymbol{v}|_E \in \boldsymbol{V}_k(E)\, \forall E \in \Omega_h \}. \end{equation} More specifically, we require that for any internal edge $e \in \mathcal{E}_h^{E} \cap \mathcal{E}_h^{E'}$ \[ \boldsymbol{v}|_E \cdot \boldsymbol{n}_e^E + \boldsymbol{v}|_{E^\prime} \cdot \boldsymbol{n}_e^{E^\prime} = 0 \quad \forall \boldsymbol{v} \in \boldsymbol{V}_k(\Omega_h), \] in accordance with the definition of the DoFs $\boldsymbol{D_1}$. The dimension of $\boldsymbol{V}_k(\Omega_h)$ is thus given by \begin{equation} \label{eq:dimg} \dim(\boldsymbol{V}_k(\Omega_h)) = L_e (k+1) + (\pi_{k} - 1)L_E + \pi_{k-1} L_E \,, \end{equation} where $L_e$ and $L_E$ are the number of edges and polygons in $\Omega_h$, respectively. \subsection{Scalar space}\label{subsec:scalar_spaces} The continuous space $Q$ is approximated by piecewise discontinuous polynomials. The space $Q_k(\Omega_h) \subset Q$ is a standard finite element space and its functions are easy to handle. Namely, for $k \geq 0$, we have \begin{gather*} Q_k(E)\vcentcolon= \{ v \in L^2(E):\, v \in \mathbb{P}_k(E) \}.
\end{gather*} For this space we consider the following DoFs. \begin{dof}[DoFs for $Q_k(E)$]\label{dof:qhk} The internal scaled moments are the DoFs for $Q_k(E)$, i.e., for any $v \in Q_k(E)$ we consider \begin{itemize} \item $\boldsymbol{D_Q}$: the element moments \[ \boldsymbol{D_Q}^r(v)\vcentcolon= \frac{1}{|E|} \int_E v \,m_r \, {\rm d}E \qquad \forall m_r \in \mathcal{M}_k(E), \, r=1, \dots, \pi_k. \] \end{itemize} \end{dof} \noindent We define the global discrete space as \begin{equation} \label{eq:qh} Q_k(\Omega_h)\vcentcolon= \{v \in Q:\, v|_E \in Q_k(E)\}. \end{equation} Notice that by construction we have $\diver (\boldsymbol{V}_k(\Omega_h)) = Q_k(\Omega_h)$. } { \subsection{Polynomial projector and discrete forms}\label{sec:discrete_forms} As for the straight virtual spaces, a function $\boldsymbol{w} \in \boldsymbol{V}_k(E)$ is not known in closed form; however, exploiting the DoF values of $\boldsymbol{w}$, we can compute some fundamental information. \paragraph{The polynomial $\boldsymbol{w} \cdot \boldsymbol{n}^e$ is computable} We start by noticing that the normal component $\boldsymbol{w} \cdot \boldsymbol{n}^e$ is explicitly known for all $e \in \mathcal{E}_h^{E}$. Indeed, being $\boldsymbol{w} \cdot\boldsymbol{n}^e \in \widetilde{\mathbb{P}}_k(e)$, there exist $c_1, \dots, c_{k+1} \in \mathbb{R}$ such that \begin{equation} \label{eq:wwb} \boldsymbol{w} \cdot \boldsymbol{n}^e \!=\! \sum_{\rho = 1}^{k+1} c_\rho \widetilde{m}_\rho \!=\! \sum_{\rho = 1}^{k+1} c_\rho {m}_\rho \circ \gamma^{-1} \quad \text{with }\widetilde{m}_\rho \in \widetilde{\mathcal{M}}_k(e)\text{ and }{m}_\rho \in \mathcal{M}_k(\mathfrak{e}).
\end{equation} In order to compute the coefficients $c_\rho$ we exploit the DoFs $\boldsymbol{D_1}$: \begin{gather*} \boldsymbol{D_1}^{e,i}(\boldsymbol{w}) = \frac{1}{h_e}\int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \, \widetilde{m}_i \, {\rm d}e = \sum_{\rho = 1}^{k+1}\frac{c_\rho}{h_e} \int_e \widetilde{m}_\rho \, \widetilde{m}_i \, {\rm d}e = \sum_{\rho = 1}^{k+1} \frac{c_\rho}{h_e} \int_{\mathfrak{e}} m_\rho \, m_i \, \|\gamma^\prime\| \, {\rm d}t \end{gather*} for $i=1, \dots, k+1$. Solving this linear system, it is possible to compute the coefficients $c_\rho$ and thus the explicit expression of $\boldsymbol{w} \cdot \boldsymbol{n}^e$ for any edge $e \in \mathcal{E}_h^{E}$. \paragraph{The polynomial $\diver \boldsymbol{w}$ is computable} In this framework we can explicitly compute $\diver \boldsymbol{w}$ via $\boldsymbol{D_1}$ and $\boldsymbol{D_2}$. Indeed, being $\diver \boldsymbol{w} \in \mathbb{P}_k(E)$, there exist $d_1, \dots, d_{\pi_k} \in \mathbb{R}$ such that \begin{equation} \label{eq:wwd} \diver \boldsymbol{w} = \sum_{\theta = 1}^{\pi_k} d_\theta m_\theta \quad \text{with ${m}_\theta \in \mathcal{M}_k(E)$,} \end{equation} then it follows that \[ \frac{h_E}{|E|}\int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E= \frac{h_E}{|E|} \sum_{\theta = 1}^{\pi_k} d_\theta \int_E m_\theta m_j \, {\rm d}E \quad \forall m_j \in \mathcal{M}_k(E),\, j=1, \dots, \pi_k.
\] As before, the right-hand side matrix is computable, whereas the left-hand side corresponds to the DoFs $\boldsymbol{D_2}^j(\boldsymbol{w})$ if $m_j \in \mathcal{M}_k(E) \setminus \mathcal{M}_0(E)$; for $j=1$ we exploit the boundary information: \[ \frac{h_E}{|E|}\int_E \diver \boldsymbol{w} \, {\rm d}E= \frac{h_E}{|E|}\int_{\partial E} \boldsymbol{w} \cdot \boldsymbol{n}_E \, {\rm d}e = \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \frac{h_E h_e}{|E|} \frac{1}{h_e} \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \, {\rm d}e \] which, recalling Property \ref{prop:subset}, is a linear combination of the DoFs $\boldsymbol{D_1}^{e,1}(\boldsymbol{w})$. \paragraph{The projection $\Pi^k_0$ is computable} The computations above allow us to evaluate the projection $\Pi^k_0 \boldsymbol{w}$ for all $\boldsymbol{w} \in \boldsymbol{V}_k(E)$. We first consider the following expansion on vector monomials \begin{gather*} \Pi_0^k \boldsymbol{w} = \sum_{\xi = 1}^{2\pi_k} w_\xi \bm{m}_\xi \qquad \text{with $\bm{m}_\xi \in \boldsymbol{\Mk}_k(E)$} \end{gather*} and then we use definition \eqref{eq:projection_def} to obtain \begin{gather*} \int_E \boldsymbol{w} \cdot \bm{m}_s \, {\rm d}E = \int_E \Pi^k_0 \boldsymbol{w} \cdot \bm{m}_s \, {\rm d}E = \sum_{\xi = 1}^{2\pi_k} w_\xi \int_E \bm{m}_\xi \cdot \bm{m}_s \, {\rm d}E \end{gather*} for all $\bm{m}_s \in \boldsymbol{\Mk}_k(E)$, $s=1, \dots, 2\pi_k$. Unfortunately, the first term involves the virtual function $\bm{w}$, which makes it not directly computable. To proceed, we can use the decomposition \eqref{eq:polydec} of $\bm{m}_s$, obtaining \[ \bm{m}_s = \nabla p_{k+1} + \sum_{l=1}^{\pi_{k-1}} g_l \boldsymbol{m}^{\perp} \, m_l \] for a suitable polynomial $p_{k+1} \in \mathbb{P}_{k+1}(E) \setminus \mathbb{P}_0(E)$ and suitable coefficients $g_1, \dots, g_{\pi_{k-1}} \in \mathbb{R}$.
Therefore, integrating by parts, \eqref{eq:wwb} and \eqref{eq:wwd} yield \[ \begin{aligned} &\int_E \bm{w} \cdot \bm{m}_s \, {\rm d}E = \int_E \bm{w} \cdot \nabla p_{k+1} \, {\rm d}E + \sum_{l=1}^{\pi_{k-1}} g_l \int_E \bm{w} \cdot \boldsymbol{m}^{\perp} \, m_l \, {\rm d}E \\ &= \int_{\partial E} \bm{w} \cdot \boldsymbol{n}_E p_{k+1} \, {\rm d}e - \int_E \diver \bm{w} \, p_{k+1} \, {\rm d}E + \sum_{l=1}^{\pi_{k-1}} g_l \int_E \bm{w} \cdot \boldsymbol{m}^{\perp} \, m_l \, {\rm d}E \\ &= \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \sum_{\rho=1}^{k+1} c_{\rho} \int_{e} \widetilde{m}_\rho p_{k+1} \, {\rm d}e - \sum_{\theta=1}^{\pi_k} d_\theta \int_E m_\theta p_{k+1} \, {\rm d}E + |E|\sum_{l=1}^{\pi_{k-1}} g_l \boldsymbol{D_3}^l(\boldsymbol{w}) \end{aligned} \] which is a computable expression. Following a standard procedure, we define the computable discrete local form $a_k^E(\cdot, \cdot) \colon {\protect\fakebold{{\mathbb{V}}}}_k(E) \times {\protect\fakebold{{\mathbb{V}}}}_k(E) \to \mathbb{R}$, with ${\protect\fakebold{{\mathbb{V}}}}_k(E)\vcentcolon= \boldsymbol{V}_k(E) + [\mathbb{P}_k(E)]^2$, given by \begin{equation} \label{eq:ahE} a_k^E(\boldsymbol{u}_h, \boldsymbol{v}_h) \vcentcolon= \int_E \mu \kappa^{-1} \Pi_0^k \boldsymbol{u}_h \cdot \Pi_0^k \boldsymbol{v}_h \, {\rm d}E + \nu(E) \mathcal{S}^E((I - \Pi_0^k)\boldsymbol{u}_h, (I - \Pi_0^k)\boldsymbol{v}_h) \end{equation} for all $\boldsymbol{u}_h,\boldsymbol{v}_h \in {\protect\fakebold{{\mathbb{V}}}}_k(E)$.
In the previous definition, the term $\nu(E) \in \mathbb{R}$ is a cell-wise approximation of the physical parameter $\mu \kappa^{-1}$ and the stabilization form $\mathcal{S}^E(\cdot, \cdot) \colon {\protect\fakebold{{\mathbb{V}}}}_k(E) \times {\protect\fakebold{{\mathbb{V}}}}_k(E) \to \mathbb{R}$ is defined by \[ \mathcal{S}^E(\boldsymbol{u}_h, \boldsymbol{v}_h) \vcentcolon= \abs{E} \sum_{s = 1}^{N_{\dofop}(E)} \boldsymbol{D}^s(\boldsymbol{u}_h) \boldsymbol{D}^s(\boldsymbol{v}_h) \] that is \begin{multline} \label{eq:St} \mathcal{S}^E(\boldsymbol{u}_h, \boldsymbol{v}_h) \vcentcolon= |E| \sum_{e \in \mathcal{E}_h^{E}} \sum_{i=1}^{k+1} \boldsymbol{D_1}^{e,i}(\boldsymbol{u}_h) \boldsymbol{D_1}^{e,i}(\boldsymbol{v}_h) + \\+ |E| \sum_{j=2}^{\pi_k} \boldsymbol{D_2}^{j}(\boldsymbol{u}_h) \boldsymbol{D_2}^{j}(\boldsymbol{v}_h) + |E| \sum_{l=1}^{\pi_{k-1}} \boldsymbol{D_3}^{l}(\boldsymbol{u}_h) \boldsymbol{D_3}^{l}(\boldsymbol{v}_h) \end{multline} for all $\boldsymbol{u}_h,\boldsymbol{v}_h \in {\protect\fakebold{{\mathbb{V}}}}_k(E)$, where $N_{\dofop}(E)$ is the total number of DoFs on $E$. Since the global form is the sum of the local counterparts, we obtain $a_k(\cdot, \cdot) \colon {\protect\fakebold{{\mathbb{V}}}}_k(\Omega_h)\times {\protect\fakebold{{\mathbb{V}}}}_k(\Omega_h) \to \mathbb{R}$ defined by \begin{equation} \label{eq:ah} a_k(\boldsymbol{u}_h, \boldsymbol{v}_h) \vcentcolon= \sum_{E \in \Omega_h} a_k^E(\boldsymbol{u}_h, \boldsymbol{v}_h) \quad \forall \boldsymbol{u}_h, \boldsymbol{v}_h \in {\protect\fakebold{{\mathbb{V}}}}_k(\Omega_h). \end{equation} \begin{remark}[On the space ${\protect\fakebold{{\mathbb{V}}}}_k$] In the definition of the local discrete form $a_k^E$ \eqref{eq:ahE}, we have considered the sum space ${\protect\fakebold{{\mathbb{V}}}}_k(E)$ for both of its entries. In fact, as reported in Property \ref{prop:subset}, the space $\boldsymbol{V}_k(E)$ may not contain all the polynomials up to degree $k$.
However, in order to have the optimal rate of convergence for the proposed scheme, we need to verify the continuity of $a_k^E$ on the sum space ${\protect\fakebold{{\mathbb{V}}}}_k(E)$ (cfr. Proposition \ref{pr:continuity}). \end{remark} \subsection{The discrete problem} \label{sub:dp} Referring to the discrete spaces \eqref{eq:VVG} and \eqref{eq:qh} and to the discrete form \eqref{eq:ah}, the virtual element approximation of the Darcy equation is given by the following problem. \begin{problem}[VEM problem]\label{pb:darcy_vem} Find the Darcy velocity and pressure pair $(\bm{q}_h, p_h) \in \boldsymbol{V}_k(\Omega_h) \times Q_k(\Omega_h)$ such that \begin{align} \left \{ \begin{aligned} & a_k(\bm{q}_h, \bm{v}_h) + b(\bm{v}_h, p_h) = G(\bm{v}_h) && \forall \bm{v}_h \in \boldsymbol{V}_k(\Omega_h) \\ & b(\bm{q}_h, v_h) = F(v_h) && \forall v_h \in Q_k(\Omega_h). \end{aligned} \right. \end{align} \end{problem} \noindent Notice that, since for any function $\boldsymbol{v}_h \in \boldsymbol{V}_k(\Omega_h)$ its divergence and its boundary values are explicitly known, we do not need to introduce any approximation of the form $b(\cdot, \cdot)$ and of the linear functional $G(\cdot)$. } { \section{Theoretical analysis} \label{sec:theory} In this section, we introduce an interpolation operator that allows us to show the inf-sup stability of the proposed scheme. Afterwards, the stability of the stabilization term is studied. \subsection{Interpolation and Inf-sup stability} \label{sub:int} We start by reviewing a classical approximation result for polynomials on star-shaped domains, see for instance \cite{brenner-scott:book}. \begin{lemma}[Bramble-Hilbert] \label{lm:bramble} Under Assumption \ref{ass:star}, let $0 \leq s \leq k+1$. Then, referring to \eqref{eq:projection_def}, for all $\boldsymbol{v} \in \boldsymbol{V} \cap H^s(\Omega_h)$ it holds \[ \|\boldsymbol{v} - \Pi^k_0 \boldsymbol{v}\|_{\Omega_h,0} \lesssim h^s \, |\boldsymbol{v}|_{\Omega_h,s} \,.
\] \end{lemma} Let us introduce the linear Fortin operator $\Pi^k_{\rm F} \colon [H^1(\Omega)]^2 \to \boldsymbol{V}_k(\Omega_h)$ defined through the DoFs $\boldsymbol{D_1}$, $\boldsymbol{D_2}$ and $\boldsymbol{D_3}$. For $\boldsymbol{w} \in [H^1(\Omega)]^2$ and for all $e \in \mathcal{E}_h$ and $E \in \Omega_h$, we require the following three conditions \begin{eqnarray} \label{eq:fortin1} \hspace{-1em}\int_e (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e = 0 &\forall&\hspace{-1em}\widetilde{m}_i \in \widetilde{\mathcal{M}}_k(e),\, i=1, \dots, k+1; \\ \label{eq:fortin2} \hspace{-1em}\int_E \diver (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \, m_j \, {\rm d}E = 0 &\forall&\hspace{-1em} m_j \in \mathcal{M}_k(E)\setminus \mathcal{M}_0(E),\, j=2, \dots, \pi_k; \\ \label{eq:fortin3} \hspace{-1em}\int_E (\boldsymbol{w} -\Pi^k_{\rm F} \boldsymbol{w}) \cdot \boldsymbol{m}^{\perp} m_l \, {\rm d}E = 0 &\forall&\hspace{-1em} m_l \in \mathcal{M}_{k-1}(E),\, l=1, \dots, \pi_{k-1}. \end{eqnarray} The definition above easily implies that the following diagram \begin{equation} \label{eq:diagram} \begin{split} [H^1(\Omega)]^2 \, &\xrightarrow[]{ \,\,\,\,\, \text{{$\diver$}} \,\,\,\,\, } \quad \, \, \, Q \quad \, \, \, \xrightarrow[]{\, \, \,\,\,\,\, 0 \,\,\,\,\, \, \,}\, 0 \\ \Pi^k_{\rm F} \bigg\downarrow \qquad & \quad \quad \qquad \Pi_0^k \bigg\downarrow \\ \boldsymbol{V}_k(\Omega_h) \, &\xrightarrow[]{ \,\,\,\,\, \text{{$\diver$}} \,\,\,\,\, }\, Q_k(\Omega_h) \, \, \xrightarrow[]{\, \, \,\,\,\,\, 0 \,\,\,\,\, \, \,}\, 0 \end{split} \end{equation} where $0$ denotes the map that associates with every function the number $0$, commutes. In particular, we have the following property: \begin{equation} \label{eq:divpf} \diver (\Pi^k_{\rm F} \boldsymbol{w}) = \Pi_0^k \diver \boldsymbol{w} \quad \forall \boldsymbol{w} \in [H^1(\Omega)]^2.
\end{equation} Indeed, since $\diver (\Pi^k_{\rm F} \boldsymbol{w}) \in Q_k(\Omega_h)$, by definition of $\Pi_0^k$ we need to verify that for all $E \in \Omega_h$ \[ \int_E \diver (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w})\,m_j \, {\rm d}E= 0 \quad \forall m_j \in \mathcal{M}_k(E),\, j=1, \dots, \pi_k. \] If $m_j \in \mathcal{M}_k(E) \setminus \mathcal{M}_0(E)$ this follows by \eqref{eq:fortin2}, whereas if $j=1$, by Property \ref{prop:subset} and \eqref{eq:fortin1} we have \[ \begin{aligned} \int_E \diver (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w})\, {\rm d}E &= \int_{\partial E} (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \cdot \boldsymbol{n}_E \, {\rm d}e \\ &=\sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \int_e (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \cdot \boldsymbol{n}^e \, {\rm d}e = 0. \end{aligned} \] \begin{remark} \label{rm:dofs3} Notice that property \eqref{eq:divpf} is strictly related to the DoFs $\boldsymbol{D_2}$ and the associated Fortin operator. With the choice of DoFs of Remark \ref{rm:dofs2}, adopted for the ``straight'' MVEM \cite{BeiraodaVeiga2014b}, and the associated Fortin operator, we have instead \begin{multline*} \int_E \diver (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w})\,m_k \, {\rm d}E = - \int_E (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \cdot \nabla m_k \, {\rm d}E + \\ + \sum_{e \in \mathcal{E}_h} \sigma_{E, e} \int_e (\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}) \cdot \boldsymbol{n}^e m_k\, {\rm d}e \,. \end{multline*} For a curved polygon $E$, the second term is no longer zero since, as observed in Property \ref{prop:subset}, the restriction of $m_k$ to a curved edge $e$ does not belong to $\widetilde{\mathbb{P}}_k(e)$. Therefore the choice of $\boldsymbol{D_2}$ is particularly suited for curved polygons.
\end{remark} \noindent As a consequence of the above arguments we have the following results. The first one deals with the approximation properties of the space \eqref{eq:VVG} and follows by combining \eqref{eq:divpf} and Lemma \ref{lm:bramble} with \cite{duran}; the second one is associated with the commutativity of the diagram \eqref{eq:diagram} and deals with the inf-sup stability of the method \cite{Boffi2013}. \begin{prop} \label{pr:interpolation} Let $\boldsymbol{w} \in \boldsymbol{V} \cap [H^{k+1}(\Omega_h)]^2$ with $\diver \boldsymbol{w} \in H^{k+1}(\Omega_h)$ and let $\Pi^k_{\rm F}$ be the linear Fortin operator. Then under Assumption \ref{ass:star} it holds \[ \begin{gathered} \|\boldsymbol{w} - \Pi^k_{\rm F} \boldsymbol{w}\|_{0, \Omega} \lesssim h^{k+1} \, |\boldsymbol{w}|_{k+1, \Omega_h}\,, \\ \|\diver \boldsymbol{w} - \diver \Pi^k_{\rm F} \boldsymbol{w}\|_{0, \Omega} \lesssim h^{k+1} \, |\diver \boldsymbol{w}|_{k+1, \Omega_h}\,. \end{gathered} \] \end{prop} \begin{prop} \label{pr:infsup} Under Assumption \ref{ass:star} there exists $\beta >0$ such that \[ \inf_{v \in Q_k(\Omega_h)} \sup_{\boldsymbol{w} \in \boldsymbol{V}_k(\Omega_h)} \frac{b(\boldsymbol{w}, v)}{\|v\|_{Q} \|\boldsymbol{w}\|_{\boldsymbol{V}}} \geq \beta \,. \] \end{prop} \subsection{Stability analysis} \label{sub:stab} The aim of this section is to prove stability bounds for the approximated bilinear form \eqref{eq:ahE} and, in particular, for the stabilization term $\mathcal{S}^E$. We want to prove that \begin{gather*} \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \gtrsim \|\boldsymbol{w}\|^2_{0,E} \quad \forall \boldsymbol{w} \in \boldsymbol{V}_k(E), \\ \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \lesssim \|\boldsymbol{w}\|^2_{0,E} \quad \forall \boldsymbol{w} \in {\protect\fakebold{{\mathbb{V}}}}_k(E). \end{gather*} We start with the following useful inverse estimates. \begin{lemma} \label{lm:inverse} Under Assumption \ref{ass:star}, let $n \in \mathbb{N}$ be a fixed integer.
Let $\boldsymbol{w} \in H(\diver, E)$ be such that $\diver \boldsymbol{w} \in \mathbb{P}_n(E)$; then \begin{equation} \label{eq:divinv} \|\diver \boldsymbol{w}\|_{0,E} \lesssim h_E^{-1} \, \|\boldsymbol{w}\|_{0,E} \,. \end{equation} Let $\boldsymbol{w} \in H(\rot, E)$ be such that $\rot \boldsymbol{w} \in \mathbb{P}_n(E)$; then \begin{equation} \label{eq:rotinv} \|\rot \boldsymbol{w}\|_{0,E} \lesssim h_E^{-1} \, \|\boldsymbol{w}\|_{0,E} \,. \end{equation} \end{lemma} \begin{proof} Under Assumption \ref{ass:star}, let $T_E \subset E$ be an equilateral triangle inscribed in the ball $B_E$. Then for any $p_n \in \mathbb{P}_n(E)$ it holds $\|p_n\|_{0,E} \lesssim \|p_n\|_{0, T_E}$. Let $b_3 \in \mathbb{P}_3(T_E)$ be the cubic bubble with $\|b_3\|_{L^{\infty}(T_E)}=1$. Then, applying a polynomial inverse estimate on $T_E$ and integrating by parts (the boundary term vanishes since $b_3 = 0$ on $\partial T_E$), we get \[ \begin{split} \|\diver \boldsymbol{w}\|_{0,E}^2 & \lesssim \|\diver \boldsymbol{w}\|_{0,T_E}^2\\[1.em] &\lesssim \int_{T_E} b_3 \diver \boldsymbol{w} \, \diver \boldsymbol{w} \, {\rm d}E = - \int_{T_E} \nabla(b_3 \diver \boldsymbol{w}) \cdot \boldsymbol{w} \, {\rm d}E \\[1.em] & \lesssim \|\nabla (b_3 \diver \boldsymbol{w})\|_{0,T_E} \|\boldsymbol{w}\|_{0,T_E} \lesssim h_E^{-1} \|b_3 \diver \boldsymbol{w}\|_{0,T_E} \|\boldsymbol{w}\|_{0,T_E} \\[1.em] & \lesssim h_E^{-1} \|\diver \boldsymbol{w}\|_{0,T_E} \|\boldsymbol{w}\|_{0,T_E} \lesssim h_E^{-1} \|\diver \boldsymbol{w}\|_{0,E} \|\boldsymbol{w}\|_{0,E} \,, \end{split} \] from which \eqref{eq:divinv} follows. The same argument applies to \eqref{eq:rotinv}. \end{proof} \begin{prop} \label{pr:continuity} Let $E \in \Omega_h$. Under Assumptions \ref{ass:regu}, \ref{ass:star} and~\ref{ass:mesh} the following holds \[ \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \lesssim \|\boldsymbol{w}\|_{0,E}^2 \quad \forall \boldsymbol{w} \in {\protect\fakebold{{\mathbb{V}}}}_k(E).
\] \end{prop} \begin{proof} By definition \eqref{eq:St}, for $\boldsymbol{w} \in {\protect\fakebold{{\mathbb{V}}}}_k(E)$ we need to prove that \begin{equation} \label{eq:st1} \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) = \sum_{e \in \mathcal{E}_h^{E}} \sum_{i=1}^{k+1} |E|\boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 + \sum_{j=2}^{\pi_k} |E|\boldsymbol{D_2}^{j}(\boldsymbol{w})^2 + \sum_{l=1}^{\pi_{k -1}} |E|\boldsymbol{D_3}^{l}(\boldsymbol{w})^2 \lesssim \|\boldsymbol{w}\|_{0,E}^2 \,. \end{equation} We start by analysing the first term on the left-hand side. Employing the $H(\diver)$ trace inequality \cite[Theorem 3.24]{monk:book} and Lemma \ref{lm:inverse}, it holds \[ h_E^{-1} \|\boldsymbol{w} \cdot \boldsymbol{n}^E\|_{0,\partial E}^2 \lesssim h_E^{-2} \|\boldsymbol{w} \|_{0,E}^2 +\|\diver \boldsymbol{w} \|_{0, E}^2 \lesssim h_E^{-2} \|\boldsymbol{w} \|_{0,E}^2 \qquad \forall \boldsymbol{w} \in {\protect\fakebold{{\mathbb{V}}}}_k(E). \] Then, since $\|m_i\|_{L^{\infty}(\mathfrak{e})}\leq 1$ and $h_{\mathfrak{e}} \lesssim h_E$ (cfr. Remark \ref{rm:length}), it follows that \begin{gather} \label{eq:st2} \begin{aligned} \sum_{e \in \mathcal{E}_h^{E}} \sum_{i=1}^{k+1} &|E|\boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 = \sum_{e \in \mathcal{E}_h^{E}} \sum_{i=1}^{k+1} \frac{|E|}{h_e^2} \left(\int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \right)^2 \\ &\lesssim \sum_{e \in \mathcal{E}_h^{E}} \sum_{i=1}^{k+1} \|\boldsymbol{w} \cdot \boldsymbol{n}^e\|_{0,e}^2 \|\widetilde{m}_i\|_{0,e}^2 \lesssim \sum_{e \in \mathcal{E}_h^{E}} \|\boldsymbol{w} \cdot \boldsymbol{n}^e\|_{0,e}^2 \sum_{i=1}^{k+1} \int_e \widetilde{m}_i^2 \, {\rm d}e \\ &\lesssim \sum_{e \in \mathcal{E}_h^{E}} \|\boldsymbol{w} \cdot \boldsymbol{n}^e\|_{0,e}^2 \sum_{i=1}^{k+1} \int_{\mathfrak{e}} m_i^2 \|\gamma'\| \, {\rm d}t \lesssim \sum_{e \in \mathcal{E}_h^{E}} \|\boldsymbol{w} \cdot \boldsymbol{n}^e\|_{0,e}^2 h_E \lesssim \|\boldsymbol{w}\|_{0,E}^2.
\end{aligned} \end{gather} For the second term of~\eqref{eq:st1}, we apply Lemma~\ref{lm:inverse} and, since $\|m_j\|_{L^{\infty}(E)}\leq~1$, we infer \begin{gather} \label{eq:st3} \begin{aligned} \sum_{j=2}^{\pi_k} |E|\boldsymbol{D_2}^{j}(\boldsymbol{w})^2 &= \sum_{j=2}^{\pi_k} |E| \left(\frac{h_E}{|E|} \int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E\right)^2 \\ &\lesssim \frac{h_E^2}{|E|} \sum_{j=2}^{\pi_k} \|\diver \boldsymbol{w}\|_{0,E}^2 \|m_j\|_{0,E}^2 \lesssim \sum_{j=2}^{\pi_k} h_E^2 \|\diver \boldsymbol{w}\|_{0,E}^2 \lesssim \|\boldsymbol{w}\|_{0,E}^2 \,. \end{aligned} \end{gather} Finally, for the last term in \eqref{eq:st1}, using again $$ \|\boldsymbol{m}^{\perp}\|_{L^{\infty}(E)}\leq 1 \qquad\text{and}\qquad \|m_l\|_{L^{\infty}(E)} \leq 1, $$ we get: \begin{gather} \label{eq:st4} \begin{aligned} \sum_{l=1}^{\pi_{k -1}} |E|\boldsymbol{D_3}^{l}(\boldsymbol{w})^2 &= \sum_{l=1}^{\pi_{k -1}} |E| \left(\frac{1}{|E|} \int_E \boldsymbol{w} \cdot \boldsymbol{m}^{\perp} m_l \, {\rm d}E\right)^2 \\ &\lesssim \frac{1}{|E|} \|\boldsymbol{w}\|_{0,E}^2 \|\boldsymbol{m}^{\perp} m_l\|_{0,E}^2 \lesssim \|\boldsymbol{w}\|_{0,E}^2 \,. \end{aligned} \end{gather} Collecting \eqref{eq:st2}, \eqref{eq:st3} and \eqref{eq:st4} in \eqref{eq:st1}, we obtain the thesis. \end{proof} The next step is to prove the coercivity of the bilinear form $\mathcal{S}^E$ with respect to the $L^2$-norm. We start by noting that any function $\boldsymbol{w} \in \boldsymbol{V}_k(E)$ can be decomposed as \begin{equation} \label{eq:w1w2} \boldsymbol{w} = \nabla \phi - \ROT \psi \end{equation} where $\phi$ and $\psi$ are defined by \begin{equation} \label{eq:w1w2a} \left \{ \begin{aligned} &\Delta \phi = \diver \boldsymbol{w} \, & \text{in $E$,} \\ & \nabla \phi\cdot \boldsymbol{n}^E = \boldsymbol{w} \cdot \boldsymbol{n}^E \, & \text{on $\partial E$,} \end{aligned} \right.
\qquad \,\, \text{and} \qquad \,\, \left \{ \begin{aligned} &\Delta \psi = \rot \boldsymbol{w} \, & \text{in $E$,} \\ & \psi = 0 \, & \text{on $\partial E$,} \end{aligned} \right. \end{equation} where we can assume that $\phi$ has zero average. Moreover, the decomposition is $L^2$-orthogonal, i.e. \begin{equation} \label{eq:pitagora} \|\boldsymbol{w}\|^2_{0, E} = \|\nabla \phi\|^2_{0, E} + \|\ROT \psi\|^2_{0, E} \, . \end{equation} \noindent Given a vector $\boldsymbol{g} \vcentcolon= (g_i)_{i=1}^N$, let $\|\boldsymbol{g}\|^2_{l^2} \vcentcolon= \sum_{i=1}^N g_i^2$ be its squared Euclidean norm. The following lemma for polynomials is easy to check. \begin{lemma} \label{lm:l2piccolo} Let $E \in \Omega_h$ and let $n \in \mathbb{N}$ be a fixed integer. Under Assumptions \ref{ass:regu}, \ref{ass:star} and~\ref{ass:mesh}, let $\boldsymbol{g} \vcentcolon= (g_r)_{r=1}^{\pi_n}$ be a vector of real numbers and $g \vcentcolon= \sum_{r=1}^{\pi_n} g_r \, m_r \in \mathbb{P}_n(E)$, where $m_r \in \mathcal{M}_n(E)$. Then we have the following norm equivalence \[ h_E^2 \, \|\boldsymbol{g}\|^2_{l^2} \lesssim \|g\|^2_{0, E} \lesssim h_E^2 \, \|\boldsymbol{g}\|^2_{l^2} \,. \] Moreover, let $\boldsymbol{g} \vcentcolon= (g_s)_{s=1}^{n+1}$ be a vector of real numbers and $\widetilde{g} \vcentcolon= \sum_{s=1}^{n+1} g_s \, \widetilde{m}_s \in \widetilde{\mathbb{P}}_n(e)$, where $\widetilde{m}_s \in \widetilde{\mathcal{M}}_n(e)$. Then we have the following norm equivalence \[ h_E \, \|\boldsymbol{g}\|^2_{l^2} \lesssim \|\widetilde{g}\|^2_{0, e} \lesssim h_E \, \|\boldsymbol{g}\|^2_{l^2} \,. \] \end{lemma} \begin{prop} \label{pr:coercivity} Let $E \in \Omega_h$. Under Assumptions \ref{ass:regu}, \ref{ass:star} and~\ref{ass:mesh} the following holds \[ \|\boldsymbol{w}\|_{0,E}^2 \lesssim \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \quad \forall \boldsymbol{w} \in \boldsymbol{V}_k(E).
\] \end{prop} \begin{proof} Let $\boldsymbol{w} \in \boldsymbol{V}_k(E)$, since the decomposition \eqref{eq:w1w2} is $L^2$-orthogonal we need to prove that \begin{equation} \label{eq:co0} \|\nabla \phi\|_{0,E}^2 \lesssim \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \qquad \text{and} \qquad \|\ROT \psi\|_{0,E}^2 \lesssim \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w}) \,. \end{equation} We start with the first bound in \eqref{eq:co0} and we infer \begin{gather} \label{eq:co1} \begin{gathered} \|\nabla \phi\|_{0,E}^2 = \int_E \boldsymbol{w} \cdot \nabla \phi \, {\rm d}E = - \int_E \diver \boldsymbol{w} \, \phi \, {\rm d}E + \sum_{E \in \mathcal{E}_h^{E}} \sigma_{E,e} \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \phi \, {\rm d}e \\ = - \int_E \diver \boldsymbol{w} \, \Pi_0^k \phi \, {\rm d}E + \sum_{E \in \mathcal{E}_h^{E}} \sigma_{E,e} \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{\Pi}_0^k\phi \, {\rm d}e \end{gathered} \end{gather} where in the last equation we use the fact that $\diver \boldsymbol{w} \in \mathbb{P}_k(E)$ and $\boldsymbol{w} \cdot \boldsymbol{n}^e \in \widetilde{\mathbb{P}}_k(e)$ and definitions \eqref{eq:projection_def} and \eqref{eq:projection_edge}, respectively. Let us set \[ \Pi^k_0 \phi = \sum_{j=1}^{\pi_k} c_j m_j \text{ with } m_j \in \mathcal{M}_k(E), \quad \widetilde{\Pi}_0^k \phi = \sum_{i=1}^{k+1} d_i \widetilde{m}_i \text{ with } \widetilde{m}_i \in \widetilde{\mathcal{M}}_k(e). 
\] Then from \eqref{eq:co1} we infer \begin{multline*} \|\nabla \phi\|_{0,E}^2 = -\sum_{j=2}^{\pi_k} c_j \int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E - c_1 \int_{\partial E} \boldsymbol{w} \cdot \boldsymbol{n}^E \, {\rm d}e + \\+ \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \sum_{i=1}^{k+1} d_i \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \end{multline*} where we used $m_1 = 1$, that is \begin{gather} \label{eq:co2} \|\nabla \phi\|_{0,E}^2 = -\sum_{j=2}^{\pi_k} c_j \int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E + \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \sum_{i=1}^{k+1} \hat{d}_i \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \end{gather} where $\hat{d}_1 = d_1 - c_1$ and $\hat{d}_i= d_i$ for $i>1$. Using Lemma \ref{lm:l2piccolo}, the continuity of $\Pi_0^k$ with respect to the $L^2$-norm and a scaled Poincar\'e inequality for the zero averaged function $\phi$, the bulk integral in \eqref{eq:co2} can be bounded as follows: \begin{gather} \label{eq:co3} \begin{aligned} -\sum_{j=2}^{\pi_k} c_j & \int_E \diver \boldsymbol{w} \, m_j \, {\rm d}E = - \sum_{j=2}^{\pi_k} c_j \, \frac{|E|}{h_E} \, \boldsymbol{D_2}^j(\boldsymbol{w}) \\ & \lesssim \biggl( \sum_{j=1}^{\pi_k} c_j^2 \biggr)^{1/2} \biggl( |E| \sum_{j=2}^{\pi_k} \boldsymbol{D_2}^{j}(\boldsymbol{w})^2 \biggr)^{1/2} \lesssim h_E^{-1} \, \|\Pi_0^k \phi\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2}\\ &\lesssim h_E^{-1} \, \|\phi\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \lesssim \|\nabla \phi\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \,.
\end{aligned} \end{gather} For the boundary integral in \eqref{eq:co2}, employing Lemma \ref{lm:l2piccolo}, we infer \[ \begin{aligned} \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} &\sum_{i=1}^{k+1} \hat{d}_i \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \lesssim \sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \sum_{i=1}^{k+1} \hat{d}_i h_e \boldsymbol{D_1}^{e,i}(\boldsymbol{w}) \\ &\lesssim \sum_{e \in \mathcal{E}_h^{E}} \biggl(\sum_{i=1}^{k+1} \hat{d}_i^2 \biggr)^{1/2} \biggl( |E| \sum_{i=1}^{k+1} \boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 \biggr)^{1/2} \\ &\lesssim \sum_{e \in \mathcal{E}_h^{E}} \biggl(c_1^2 + \sum_{i=1}^{k+1} d_i^2\biggr)^{1/2} \biggl( |E| \sum_{i=1}^{k+1} \boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 \biggr)^{1/2}\\ & \lesssim \sum_{e \in \mathcal{E}_h^{E}} \biggl(h_E^{-1}\| \Pi_0^k \phi\|_{0,E} + h_E^{-1/2} \|\widetilde{\Pi}_0^k \phi\|_{0,e} \biggr) \biggl( |E| \sum_{i=1}^{k+1} \boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 \biggr)^{1/2} \,. \end{aligned} \] Then, using the continuity of $\widetilde{\Pi}_0^k$ with respect to the $L^2$-norm and the $H^1$ trace inequality for the zero averaged function $\phi$, from the previous bound we get \begin{gather} \label{eq:co4} \begin{aligned} &\sum_{e \in \mathcal{E}_h^{E}} \sigma_{E,e} \sum_{i=1}^{k+1} \hat{d}_i \int_e \boldsymbol{w} \cdot \boldsymbol{n}^e \widetilde{m}_i \, {\rm d}e \\ &\lesssim \biggl( h_E^{-2}\| \Pi_0^k \phi\|^2_{0,E} + h_E^{-1} \sum_{e \in \mathcal{E}_h^{E}} \|\widetilde{\Pi}_0^k \phi\|_{0,e}^2 \biggr)^{1/2} \biggl( |E| \sum_{e \in \mathcal{E}_h^{E}}\sum_{i=1}^{k+1} \boldsymbol{D_1}^{e,i}(\boldsymbol{w})^2 \biggr)^{1/2} \\ & \lesssim \biggl( h_E^{-2}\| \phi\|^2_{0,E} + h_E^{-1} \sum_{e \in \mathcal{E}_h^{E}} \|\phi\|_{0,e}^2 \biggr)^{1/2} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \\ & \lesssim \biggl( h_E^{-2}\| \phi\|^2_{0,E} + h_E^{-1} \|\phi\|_{0,\partial E}^2 \biggr)^{1/2} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \lesssim \|\nabla \phi\|_{0,E} \,
\mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \,. \end{aligned} \end{gather} Collecting \eqref{eq:co4} and \eqref{eq:co3} in \eqref{eq:co2}, we obtain the first bound in \eqref{eq:co0}. Concerning the $\ROT$ part of $\boldsymbol{w}$ in decomposition \eqref{eq:w1w2}, recalling \eqref{eq:w1w2a}, we infer \begin{equation} \label{eq:co7} \begin{split} \|\ROT \psi\|_{0,E}^2 &= \int_E \ROT \psi \cdot \ROT \psi \, {\rm d}E = \int_E \Delta \psi \, \psi \, {\rm d}E = \int_E \rot \boldsymbol{w} \, \psi \, {\rm d}E \,. \end{split} \end{equation} Since $\rot \boldsymbol{w} = q_{k-1} \in \mathbb{P}_{k-1}(E)$ there exists $p_{k-1} \in \mathbb{P}_{k-1}(E)$ such that $\rot \boldsymbol{w} = \rot(\boldsymbol{x}^{\perp} p_{k-1})$ (cf. Remark \ref{rm:rot}). Moreover, since $\ROT \psi$ is orthogonal to gradients, by decomposition \eqref{eq:polydec} it holds $\boldsymbol{x}^{\perp} p_{k-1}= \Pi_0^k \ROT \psi$. Therefore from \eqref{eq:co7} we obtain \begin{gather} \label{eq:co8} \begin{aligned} \|\ROT \psi\|_{0,E}^2 &= \int_E \rot(\boldsymbol{x}^{\perp} p_{k-1})\, \psi \, {\rm d}E = \int_E \boldsymbol{x}^{\perp} p_{k-1}\cdot \ROT \psi \, {\rm d}E \\ &= \int_E \boldsymbol{x}^{\perp} p_{k-1} \cdot \boldsymbol{w} \, {\rm d}E - \int_E \boldsymbol{x}^{\perp} p_{k-1} \cdot \nabla \phi \, {\rm d}E \,. \end{aligned} \end{gather} Let us write $\boldsymbol{x}^{\perp} p_{k-1}$ in the monomial basis: there exist $g_l \in \mathbb{R}$, for $l=1, \ldots, \pi_{k-1}$, such that \[ \boldsymbol{x}^{\perp} p_{k-1} \vcentcolon= \sum_{l=1}^{\pi_{k-1}} g_l \boldsymbol{m}^{\perp} m_l \,, \] and let us analyse the two terms on the right-hand side of \eqref{eq:co8}. 
For the first one, using Lemma \ref{lm:l2piccolo}, we infer \begin{gather} \label{eq:co9} \begin{aligned} \int_E \boldsymbol{x}^{\perp} p_{k-1} \,\cdot \boldsymbol{w} \, {\rm d}E &= \sum_{l=1}^{\pi_{k-1}} g_l \int_E \boldsymbol{m}^{\perp} m_l \,\cdot \boldsymbol{w} \, {\rm d}E = \sum_{l=1}^{\pi_{k-1}} |E| g_l \boldsymbol{D_3}^l(\boldsymbol{w}) \\ & \lesssim h_E \biggl(\sum_{l=1}^{\pi_{k-1}} g_l^2 \biggr)^{1/2} \biggl( |E|\sum_{l=1}^{\pi_{k-1}} \boldsymbol{D_3}^l(\boldsymbol{w})^2\biggr)^{1/2} \\ & \lesssim \|\boldsymbol{x}^{\perp} p_{k-1}\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \lesssim \|\ROT \psi\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2}\,. \end{aligned} \end{gather} For the second term in \eqref{eq:co8}, using the first bound in \eqref{eq:co0}, we get \begin{gather} \label{eq:co10} \begin{aligned} \int_E \boldsymbol{x}^{\perp} p_{k-1} \cdot \nabla \phi \, {\rm d}E &\lesssim \|\boldsymbol{x}^{\perp} p_{k-1}\|_{0,E} \|\nabla \phi\|_{0,E} \lesssim \|\boldsymbol{x}^{\perp} p_{k-1}\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2} \\ &\lesssim \|\ROT \psi\|_{0,E} \, \mathcal{S}^E(\boldsymbol{w}, \boldsymbol{w})^{1/2}\,. \end{aligned} \end{gather} Collecting \eqref{eq:co9} and \eqref{eq:co10} in \eqref{eq:co8}, we obtain the second bound in \eqref{eq:co0}. The thesis now follows from \eqref{eq:pitagora}. \end{proof} As a direct consequence of Proposition \ref{pr:interpolation}, Proposition \ref{pr:infsup}, Proposition \ref{pr:continuity} and Proposition \ref{pr:coercivity} we have the following result \cite{Brezzi2014,BeiraoVeiga2016}. \begin{prop} \label{pr:final} Under Assumptions \ref{ass:regu}, \ref{ass:star} and \ref{ass:mesh}, the virtual element problem \eqref{pb:darcy_vem} has a unique solution $(\bm{q}_h, p_h) \in \boldsymbol{V}_k(\Omega_h) \times Q_k(\Omega_h)$. 
Moreover, let $(\bm{q}, p) \in \boldsymbol{V} \times Q$ be the solution of problem \eqref{pb:darcy_weak} and assume that $\bm{q} \in [H^{k+1}(\Omega_h)]^2$ with $\diver \bm{q} \in H^{k+1}(\Omega_h)$ and $p$, $f \in H^{k+1}(\Omega_h)$; then the following error estimates hold: \[ \begin{gathered} \|\bm{q} - \bm{q}_h\|_{\boldsymbol{V}} \lesssim h^{k+1}(|\bm{q}|_{k+1, \Omega_h} + |f|_{k+1, \Omega_h})\,, \\ \|p - p_h\|_{Q} \lesssim h^{k+1} (|\bm{q}|_{k+1, \Omega_h} + |p|_{k+1,\Omega_h})\,. \end{gathered} \] \end{prop} } \input{numExe} \section{Conclusions}\label{sec:conclusion} In this work we have performed a first analysis on the extension of the mixed virtual element method to grids whose elements might have curved edges, for elliptic problems in 2D. A theoretical analysis is proposed to show well-posedness of the discrete problem. A choice of the degrees of freedom particularly well suited for discretizations with curvilinear-edge elements is highlighted, and a numerical scheme is proposed that handles the geometry in a coherent and consistent way, thus exhibiting optimal error decay in accordance with the polynomial accuracy of the approximation. This is particularly relevant for real applications, where the geometrical error might dominate and limit the accuracy of the numerical solution. The numerical examples are in accordance with the theoretical findings and show optimal error decay for a domain with curved boundary and a domain with internal interfaces, in contrast with the standard mixed virtual element method, where the geometrical error jeopardizes the performance. Natural extensions of the current work are the introduction of the mixed virtual element method for three-dimensional problems with curved faces and for more general problems. \section*{Acknowledgments} The authors acknowledge financial support of INdAM-GNCS through project ``Bend VEM 3d'', 2020. Author S.S. 
also acknowledges the financial support of MIUR through project ``Dipartimenti di Eccellenza 2018-2022'' (Codice Unico di Progetto CUP E11G18000350001). \bibliographystyle{plain} \section{Numerical tests} \label{sec:numExe} In this section some numerical examples are provided to describe the behaviour of the method and give numerical evidence of the theoretical results derived in the previous sections. More specifically, we propose a comparison of the method with standard mixed virtual elements, in which the curved boundaries or interfaces of the domains are approximated by a straight-edge interpolant. For brevity we will label the present approach, which honours the domain geometry, as \texttt{withGeo}, and the standard approach as \texttt{noGeo}. We use the projection operators introduced in~\eqref{eq:projection_def} to define the following error indicators for both variables; for a given exact solution $(\bm{q}, p)$ of Problem~\ref{pb:darcy_model}, we compute: \begin{itemize} \item \textbf{velocity $L^2$ error:} $$ e_{\bm{q}}^2 \vcentcolon= {\sum_{E\in\Omega_h} \|\bm{q} - {\Pi}^k_0\bm{q}_h\|^2_E}\,, $$ \item \textbf{pressure $L^2$ error:} $$ e_p^2 \vcentcolon= {\sum_{E\in\Omega_h} \|p - p_h\|^2_E}\,. $$ \end{itemize} Moreover, to proceed with the convergence analysis, we define the mesh-size parameter \begin{equation*} h = \frac{1}{L_E}\sum_{E\in\Omega_h} h_E\,, \end{equation*} where $L_E$ is the number of elements of $\Omega_h$. For each test we build a sequence of four meshes with decreasing mesh-size parameter $h$ and the trend of each error indicator is computed and compared to the expected convergence trend, which, for sufficiently regular data, is $\Or{k+1}$ in accordance with Proposition \ref{pr:final}. \subsection{Curved boundary}\label{sub:bound} \paragraph{Problem description} In this subsection we consider Problem~\ref{pb:darcy_model} on the domain $\Omega$ shown in Figure~\ref{fig:domExe1}. 
Such domain is obtained from the unit square $(0,\,1)^2$ by deforming the top and the bottom edges to make them curvilinear, i.e., they are the graphs of the following cubic functions: $$ g_1(x) = \frac{1}{2}x^2(x-1)+1\qquad\text{and}\qquad g_2(x) = \frac{1}{2}x^2(x-1)\,. $$ We set the right hand side and the boundary conditions in such a way that the exact solution of Problem~\ref{pb:darcy_model} is the couple: \begin{equation*} \bm{q}(x,\,y) = \left(\begin{array}{r} \pi\,\cos(\pi\,x)\,\cos(\pi\,y)\\ -\pi\,\sin(\pi\,x)\,\sin(\pi\,y) \end{array}\right) \qquad\text{and}\qquad p(x,\,y) = \sin(\pi x)\,\cos(\pi y)\,. \end{equation*} In this first example we take $\mu=1$ and we consider a constant tensor $\kappa= \mathbb{I}$, where $\mathbb{I}$ is the identity matrix. \begin{figure}[!htb] \centering \includegraphics[width=0.20\textwidth]{fig/domExe1.png} \caption{Curved boundary: domain $\Omega$ considered in this example; curved boundaries are highlighted in red.} \label{fig:domExe1} \end{figure} \paragraph{Meshes} Computational meshes are obtained starting from polygonal meshes defined on the unit square $(0,1)^2$ and subsequently modified, following the idea proposed in~\cite{BeiraodaVeiga2019}. In the present case, {only} the $y$-component of a generic point $P$ is modified, i.e., the point $P(x_P,\,y_P)$ becomes $P'(x_P',\,y_P')$ where $$ { x_P' = x_P\qquad\text{and}\qquad y_P' = \begin{cases} y_P + g_2(x_P) (1 - 2y_P) &\text{if}\:y_P\leq 0.5\\[0.5em] 1 - y_P + g_1(x_P)\,(2y_P-1) &\text{if}\:y_P> 0.5 \end{cases}\,. } $$ {The curved part of the boundary is, moreover, exactly reproduced in the \texttt{withGeo} case.} As initial meshes we consider the following types of discretization of the unit square: \textit{i)} \texttt{quad}, a uniform mesh composed of squares; \textit{ii)} \texttt{hexR}, a mesh composed of hexagons; \textit{iii)} \texttt{hexD}, a mesh composed of distorted hexagons; \textit{iv)} \texttt{voro}, a centroidal Voronoi tessellation. 
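The vertex map above can be sketched as a small standalone function (plain Python, not taken from the authors' code). It reproduces the blending of the $y$-coordinate towards the curves $g_1$ and $g_2$: points with $y_P=0$ land on $g_2$, points with $y_P=1$ land on $g_1$, and the mid-line $y_P=0.5$ is left fixed.

```python
def g1(x):
    # top boundary curve: (1/2) x^2 (x - 1) + 1
    return 0.5 * x**2 * (x - 1) + 1.0

def g2(x):
    # bottom boundary curve: (1/2) x^2 (x - 1)
    return 0.5 * x**2 * (x - 1)

def deform(x, y):
    """Map a vertex of a mesh of the unit square onto the curved domain.

    The x-coordinate is unchanged; the y-coordinate is blended towards
    g2 (bottom half) or g1 (top half) exactly as in the displayed formula.
    """
    if y <= 0.5:
        return x, y + g2(x) * (1.0 - 2.0 * y)
    return x, 1.0 - y + g1(x) * (2.0 * y - 1.0)
```

Applying `deform` to every vertex of a straight-edged polygonal mesh of $(0,1)^2$ yields the meshes used in this test.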
The last two types of meshes have some interesting features which challenge the robustness of the virtual element approach: in particular $\texttt{hexD}$ meshes have distorted elements, whereas $\texttt{voro}$ meshes have tiny edges, see Figure~\ref{fig:meshGene}. \begin{figure}[!htb] \centering \begin{tabular}{cccc} \texttt{quad} &\texttt{hexR} & \texttt{hexD} &\texttt{voro} \\ \includegraphics[width=0.21\textwidth]{fig/quadExe1.png} & \includegraphics[width=0.21\textwidth]{fig/beeExe1.png} & \includegraphics[width=0.21\textwidth]{fig/hexaExe1.png} & \includegraphics[width=0.21\textwidth]{fig/voroExe1.png} \\ \end{tabular} \caption{Curved boundary: types of discretization used to proceed with the convergence analysis.} \label{fig:meshGene} \end{figure} \paragraph{Results} In Figures~\ref{fig:convExe1Quad},~\ref{fig:convExe1Bee},~\ref{fig:convExe1Hexa} and~\ref{fig:convExe1Voro}, we collect the results for the various types of meshes. The reported convergence lines of the \texttt{withGeo} and \texttt{noGeo} approaches coincide for polynomial degrees $k=0$ and 1. They have the expected convergence rates of $\Or{1}$ and $\Or{2}$, respectively. On the contrary, for polynomial degree $k>1$ the trend of both velocity and pressure $L^2$ errors differs between the two strategies. More specifically, the convergence trend of the \texttt{noGeo} case is bounded by the geometrical representation error to $\Or{2}$, as this error dominates the accuracy of the approximation with mixed virtual elements. On the contrary, the proposed approximation scheme \texttt{withGeo} behaves as expected for both velocity and pressure variables and for each approximation degree, showing the optimal convergence trend for the used polynomial degree. Such behaviour is in line with what was observed in~\cite{BeiraodaVeiga2019} for a Laplace problem. 
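A common way to read such convergence plots is to compute empirical orders between consecutive refinements, $p_i = \log(e_i/e_{i+1})/\log(h_i/h_{i+1})$, which should approach $k+1$ for an optimally convergent method. A minimal sketch (plain Python, with synthetic $O(h^3)$ data standing in for the actual errors of the tests):

```python
import math

def empirical_orders(hs, errors):
    """Empirical convergence orders between consecutive refinements:
    p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1}).

    For a method converging like O(h^{k+1}) the returned values
    approach k + 1 as the mesh is refined.
    """
    return [math.log(e0 / e1) / math.log(h0 / h1)
            for (h0, h1), (e0, e1) in zip(zip(hs, hs[1:]),
                                          zip(errors, errors[1:]))]

# synthetic data mimicking an O(h^3) trend (i.e. k = 2)
hs = [0.2, 0.1, 0.05, 0.025]
errs = [2.0e-3 * h**3 for h in hs]
```

Here `empirical_orders(hs, errs)` returns values close to 3, the slope one would read off the corresponding log-log plot.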
\begin{figure}[!htb] \centering \begin{tabular}{cc} \multicolumn{2}{c}{\texttt{quad}} \\ \includegraphics[width=0.49\textwidth]{fig/velQuad-eps-converted-to.pdf} & \includegraphics[width=0.49\textwidth]{fig/presQuad-eps-converted-to.pdf}\\ \end{tabular} \caption{Curved boundary: convergence lines for \texttt{quad} meshes for each VEM approximation degree.} \label{fig:convExe1Quad} \end{figure} \begin{figure}[!htb] \centering \begin{tabular}{cc} \multicolumn{2}{c}{\texttt{hexR}} \\ \includegraphics[width=0.49\textwidth]{fig/velBee-eps-converted-to.pdf} & \includegraphics[width=0.49\textwidth]{fig/presBee-eps-converted-to.pdf}\\ \end{tabular} \caption{Curved boundary: convergence lines for \texttt{hexR} meshes for each VEM approximation degree.} \label{fig:convExe1Bee} \end{figure} \begin{figure}[!htb] \centering \begin{tabular}{cc} \multicolumn{2}{c}{\texttt{hexD}} \\ \includegraphics[width=0.49\textwidth]{fig/velHexa-eps-converted-to.pdf} & \includegraphics[width=0.49\textwidth]{fig/presHexa-eps-converted-to.pdf}\\ \end{tabular} \caption{Curved boundary: convergence lines for \texttt{hexD} meshes for each VEM approximation degree.} \label{fig:convExe1Hexa} \end{figure} \begin{figure}[!htb] \centering \begin{tabular}{cc} \multicolumn{2}{c}{\texttt{voro}} \\ \includegraphics[width=0.49\textwidth]{fig/velVoro-eps-converted-to.pdf} & \includegraphics[width=0.49\textwidth]{fig/presVoro-eps-converted-to.pdf} \end{tabular} \caption{Curved boundary: convergence lines for \texttt{voro} meshes for each VEM approximation degree.} \label{fig:convExe1Voro} \end{figure} \subsection{Internal curved interface}\label{sub:inside1} \paragraph{Problem description} In this subsection we consider again Problem~\ref{pb:darcy_model}, now defined on a different domain from that of the previous example. 
The domain $\Omega$ is shown in Figure~\ref{fig:domExe2} and consists of the square $\Omega = (-1,\,1)^2$, with $\Omega=\overline{\Omega_1}\cup\overline{\Omega_2}$, being $\Omega_2$ a circular inclusion with radius $R=0.45$ and $\Omega_1:=\Omega\backslash\overline{\Omega_2}$ the surrounding region. Two different values of the tensor $\kappa= k \mathbb{I}$ are prescribed on the subdomains: $k_1 = 1$ and $k_2 = 0.1$ for the subdomains $\Omega_1$ and $\Omega_2$, respectively, while $\mu=1$ on each subdomain. We set the right hand side and the boundary conditions in such a way that the exact solution for the pressure is \begin{equation*} p_1(x,\,y) = k_2\cos\left(\sqrt{x^2+y^2}\right) + \cos(R)\,(1 - k_2) \end{equation*} and \begin{equation*} p_2(x,\,y) = \cos\left(\sqrt{x^2+y^2}\right)\,, \end{equation*} for the subdomains $\Omega_1$ and $\Omega_2$, respectively. Then, the exact solution for the velocity variable is given by $$ \bm{q}_i(x,\,y) = -k_i(x,\,y) \nabla p_i(x,\,y)\,,\qquad\text{for }i=1,2\,. $$ The pressure solution is chosen in such a way that we have $C^0$ continuity on $\partial \Omega_2$, and the normal component of the velocity field is continuous across $\partial \Omega_2$, i.e., $$ p_1= p_2\qquad\text{and}\qquad \bm{q}_1\cdot\bm{n}_\iota - \bm{q}_2\cdot \bm{n}_\iota = 0\qquad\text{on}\,\,\partial\Omega_2\,, $$ where $\bm{n}_\iota$ is the normal of $\partial \Omega_2$ pointing from $\Omega_1$ to $\Omega_2$. \begin{figure}[!htb] \centering \subfloat[Domain]{\includegraphics[width=0.325\textwidth]{fig/domExe2.png}\label{fig:domExe2}}% \hspace*{0.1\textwidth}% \subfloat[Mesh]{\includegraphics[width=0.3\textwidth]{fig/exe2MeshHigh.png}\label{fig:exe2Mesh}}% \caption{Internal curved interface. On the left, domain $\Omega$ considered in this example; curved boundaries are highlighted in red. On the right, the whole mesh with the internal curved boundary. 
We show zooms of the yellow and red regions in Figure~\ref{fig:exe2MeshZoom}.} \end{figure} \paragraph{Meshes} To generate the grid, we start again from a structured mesh composed of square elements of the whole domain $\Omega$, independently of the internal interface $\partial\Omega_2$, and \emph{then} we cut the mesh elements into sub-elements according to $\partial\Omega_2$. {The geometry of the internal interface is exactly reproduced in the proposed \texttt{withGeo} approach, whereas it is replaced by straight edges in the \texttt{noGeo} approach}. In both cases, thanks to the ability of virtual elements to deal with arbitrarily shaped elements, the mesh generation process is straightforward: we do not need to re-mesh elements crossed by the circle, but simply cut each intersected quadrilateral element into two new elements with one new (curved) edge, \emph{without} worrying about the resulting shape and size of the two cut elements. The flexibility in including interfaces in the mesh and the robustness with respect to element size and distortion are a huge advantage from the mesh generation point of view. In many applications a large number of possibly intersecting interfaces might be present in the computational domain, so that a robust and easy mesh generation process is of paramount importance. In such cases, the generation of good-quality triangular meshes constrained to the interfaces might be an extremely complex task which might result in overly refined regions of the mesh, only needed to honour the geometry of the interfaces, independently of the desired accuracy level. With a virtual element approach, interfaces can be easily superimposed on an existing regular mesh, as shown above, avoiding unnecessary refinement and consequently decreasing the degrees of freedom and the computational effort. 
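As a sanity check of the exact solution of this example, the following standalone snippet (plain Python, not part of the VEM code) evaluates the radially symmetric pressures and the radial normal flux $-k_i\,\partial_r p_i$ at the interface $r=R$, confirming that both the pressure and the normal component of the velocity match across $\partial\Omega_2$:

```python
import math

# data of the example: inclusion radius and coefficients (mu = 1)
R, k1, k2 = 0.45, 1.0, 0.1

def p1(r):
    # pressure in the outer region Omega_1 (radially symmetric)
    return k2 * math.cos(r) + math.cos(R) * (1.0 - k2)

def p2(r):
    # pressure in the inclusion Omega_2
    return math.cos(r)

def normal_flux(k, dp_dr):
    # q . n = -k dp/dr for a radial pressure field
    return -k * dp_dr

# derivatives: p1'(r) = -k2 sin r, p2'(r) = -sin r
flux1 = normal_flux(k1, -k2 * math.sin(R))
flux2 = normal_flux(k2, -math.sin(R))
```

At $r=R$ one finds $p_1(R)=p_2(R)=\cos R$ and both fluxes equal $k_1 k_2 \sin R$.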
\begin{figure}[!htb] \centering \includegraphics[width=0.3035\textwidth]{fig/exe2Det1.png}% \hspace*{0.1\textwidth}% \includegraphics[width=0.3035\textwidth]{fig/exe2Det2.png} \caption{Internal curved interface: zooms of the yellow and red regions of Figure~\ref{fig:exe2Mesh}, where we highlight tiny triangles with a curved edge.} \label{fig:exe2MeshZoom} \end{figure} In Figure~\ref{fig:exe2MeshZoom} we show a detail of some cut elements. Here we can better appreciate that elements crossed by the interface are simply split into two parts and there is no further subdivision. Moreover, we notice that such a meshing procedure might result in very tiny elements adjacent to large ones. In each mesh of the following convergence analysis there are many elements with these characteristics and we will see that the convergence trend of the method is not affected by them. As a final remark, we would like to underline another interesting property of the proposed approach. The proposed curved spaces are compatible with standard finite element discretizations. For instance, it is possible to simply glue a standard Raviart-Thomas element to an element with curved edges along a straight edge, thus exploiting the proposed virtual element spaces \emph{only} on the elements with curvilinear edges and a standard Raviart-Thomas discretization on elements with straight edges. As for the previous example, we build a sequence of four meshes with decreasing mesh size $h$ to proceed with the convergence analysis. \paragraph{Results} In Figure~\ref{fig:exe2Res} we show the convergence lines for the \texttt{withGeo} and \texttt{noGeo} approaches as $h$ is reduced, for values of $k$ ranging between $0$ and $4$. The behaviour of the error is similar to the one shown in the previous example. 
Indeed, in the \texttt{noGeo} case the convergence is optimal for polynomial accuracy values $k=0$ and $1$, while for $k>1$ the geometrical error dominates the VEM approximation error and the trend remains bounded by $\Or{2}$. On the contrary, when we consider the virtual element spaces for curvilinear edges, the optimal error decay $\Or{k+1}$ is obtained for both velocity and pressure $L^2$ errors, for the used polynomial accuracy $k$. {A pre-asymptotic behaviour is observed for the \texttt{withGeo} approach for values of $k=2,3$ and 4, which however terminates in the considered range of $h$ values for almost all cases.} \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{fig/exe2Vel-eps-converted-to.pdf}% \includegraphics[width=0.49\textwidth]{fig/exe2Pre-eps-converted-to.pdf} \caption{Internal curved interface: convergence lines for each VEM approximation degree.} \label{fig:exe2Res} \end{figure} \subsection{Double internal curved interfaces}\label{sub:inside2} \paragraph{Problem description} In this example we consider two internal boundaries which identify three regions, $\Omega_1$, $\Omega_2$ and $\Omega_3$, inside the square $(-1,\,1)^2$, see Figure~\ref{fig:domExe3}. Both internal boundaries are curved, i.e., $\Gamma_1$ and $\Gamma_2$ are the graphs of $$ g_1(x) = a\sin(\pi x) + b\qquad\text{and}\qquad g_2(x) = a\sin(\pi x) - b\,, $$ respectively. For this example we set $a=0.2$ and $b=0.31$. Then, we set the right hand side of Problem~\ref{pb:darcy_model} in such a way that the pressure solution is \begin{eqnarray*} p_1(x,\,y) &=& a\sin(\pi x)\,,\\ p_2(x,\,y) &=& a\,\sin\left\{\frac{\pi}{2b}[y-a \sin(\pi x)]\right\}\sin(\pi\,x)\,,\\ p_3(x,\,y) &=& -a \sin(\pi x)\,, \end{eqnarray*} and the velocity $\bm{q}_i(x,\,y) = - \nabla p_i(x,\,y)$ on each subdomain $\Omega_i$ for $i=1,2$ and $3$. 
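The pressures above patch together continuously across the two interfaces, since on $\Gamma_1$ the argument of the sine in $p_2$ equals $\pi/2$ and on $\Gamma_2$ it equals $-\pi/2$. A quick standalone check (plain Python, with $a$ and $b$ as in the text):

```python
import math

a, b = 0.2, 0.31

def p1(x, y):
    return a * math.sin(math.pi * x)

def p2(x, y):
    return a * math.sin(math.pi / (2 * b) * (y - a * math.sin(math.pi * x))) \
             * math.sin(math.pi * x)

def p3(x, y):
    return -a * math.sin(math.pi * x)

def gamma1(x):
    # upper interface: y = a sin(pi x) + b
    return a * math.sin(math.pi * x) + b

def gamma2(x):
    # lower interface: y = a sin(pi x) - b
    return a * math.sin(math.pi * x) - b
```

Evaluating along the interfaces gives $p_1=p_2$ on $\Gamma_1$ and $p_2=p_3$ on $\Gamma_2$ up to rounding.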
Both velocity and pressure functions are chosen in such a way that we have $C^0$ continuity for the pressure and for the normal component of the velocity on the curves $\Gamma_1$ and $\Gamma_2$, i.e., \begin{gather*} p_1= p_2\quad\text{and}\quad \bm{q}_1\cdot\bm{n}_1 - \bm{q}_2\cdot\bm{n}_1= 0\qquad\text{on}\,\,\Gamma_1\,,\\ p_2= p_3\quad\text{and}\quad \bm{q}_2\cdot\bm{n}_2 - \bm{q}_3\cdot\bm{n}_2 = 0\qquad\text{on}\,\,\Gamma_2\,, \end{gather*} where $\bm{n}_{1}$ is the normal of $\Gamma_1$ pointing from $\Omega_1$ to $\Omega_2$ and $\bm{n}_2$ is the normal of $\Gamma_2$ pointing from $\Omega_2$ to $\Omega_3$. \begin{figure}[!htb] \centering \subfloat[Domain]{\includegraphics[width=0.3\textwidth]{fig/domExe3.png}\label{fig:domExe3}}% \hspace*{0.1\textwidth}% \subfloat[Mesh]{\includegraphics[width=0.3\textwidth]{fig/exe3Mesh.png}\label{fig:exe3Mesh}}% \caption{Double internal curved interfaces. On the left, domain $\Omega$ considered in this example; curved boundaries are highlighted in red. On the right, the whole mesh with the internal curved boundaries. We show zooms of the yellow and red regions in Figure~\ref{fig:exe3MeshZoom}.} \end{figure} \paragraph{Meshes} To generate the meshes, we follow the same idea as in the example of Subsection~\ref{sub:inside1}. We build a background mesh composed of squares and then we insert the curved internal interfaces, as shown in Figure~\ref{fig:exe3Mesh}. This is done, as previously, independently of the background mesh, and thus the resulting meshes are composed of elements with arbitrary size and shape, see Figure~\ref{fig:exe3MeshZoom}. 
{Also in this case, mesh element edges lying on the curvilinear interfaces exactly match the interface for the \texttt{withGeo} approach, whereas they are approximated by straight edges in the \texttt{noGeo} case.} \begin{figure}[!htb] \centering \includegraphics[height=0.25\textwidth]{fig/exe3Det1.png}% \hspace*{0.05\textwidth}% \includegraphics[height=0.28\textwidth]{fig/exe3Det2.png} \caption{Double internal curved interfaces: zooms of the yellow and red regions of Figure~\ref{fig:exe3Mesh}, where we highlight tiny triangles with a curved edge.} \label{fig:exe3MeshZoom} \end{figure} \paragraph{Results} In Figure~\ref{fig:exe3Res} we show convergence lines for both the \texttt{withGeo} and \texttt{noGeo} approaches for values of $k=0,\ldots,4$. The behaviour of the error decay is again as expected: in the \texttt{noGeo} case the error decay follows the expected trend for the used polynomial accuracy only for $k\leq1$, remaining, for $k>1$, at $\Or{2}$ due to the prevailing effect of the geometrical error. On the contrary, since appropriate basis functions are included in the definition of the approximation space in the proposed \texttt{withGeo} approach, optimal error decay is observed for the used polynomial accuracy level. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{fig/exe3Vel-eps-converted-to.pdf}% \includegraphics[width=0.49\textwidth]{fig/exe3Pre-eps-converted-to.pdf} \caption{Double internal curved interfaces: convergence lines for each VEM approximation degree.} \label{fig:exe3Res} \end{figure}
\section*{Results} \subsubsection*{Diffraction data collection} Data was collected at the SPB/SFX (single particles, biomolecules and clusters/serial femtosecond crystallography) instrument~\cite{Mancuso:2019} of the European XFEL using \SI{6}{\kilo\electronvolt} photons focused into a $3\times$\SI{3}{\micro\meter\squared} spot, as measured by a \SI{20}{\micro\meter}-thick YAG screen in the focal plane. Individual X-ray pulses were generated with \SI{2.5}{\milli\joule} of energy on average (\num{2.6e12} photons). The pulses were delivered in 150-pulse trains with an intra-train repetition rate of \SI{1.1}{\mega\hertz} and trains arriving every \SI{0.1}{\second}, leading to a maximum data collection rate of 1500 frames/second. A detector built specifically for this burst mode operation, the AGIPD~\cite{Heinrich:2011}, was placed \SI{705}{\milli\meter} downstream of the interaction region to collect the diffraction patterns for each pulse individually up to a scattering angle of \SI{8.3}{\degree} at the center-edge of the detector (see Fig.~\ref{fig:setup}). \begin{figure} \centering \includegraphics[width=0.95\textwidth]{images/SPI_exp_geom_morgan_Ayyer_v5.pdf} \caption{Experimental setup. XFEL pulses were focused by a series of Kirkpatrick-Baez mirrors into a $3\times$\SI{3}{\micro\meter\squared} spot and scattered off particles in the aerosol stream to produce diffraction patterns on the AGIPD. The lower inset shows the timing structure of the XFEL pulses at the instrument while the top inset shows representative SEM images of the \texttt{cub42} and \texttt{oct30} samples; scale bars are \SI{100}{\nano\meter}. The low-resolution part of the detector used for the structural sorting is highlighted in green.} \label{fig:setup} \end{figure} Gold octahedra and cubes, each of two different sizes, were sequentially injected into the X-ray beam using an electrospray-ionisation aerodynamic-lens-stack sample delivery system (see Methods). 
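The quoted photon count and frame rate follow from simple arithmetic on the stated pulse parameters; a quick check (plain Python, using the CODATA elementary charge for the keV-to-joule conversion):

```python
# photon energy at 6 keV, in joules (e = 1.602176634e-19 C)
E_PHOTON_J = 6.0e3 * 1.602176634e-19

pulse_energy_J = 2.5e-3                       # 2.5 mJ per pulse on average
photons_per_pulse = pulse_energy_J / E_PHOTON_J   # ~2.6e12, as quoted

# 150 pulses per train, one train every 0.1 s
frames_per_second = 150 / 0.1                 # 1500 frames/second
```

Both values reproduce the numbers given in the text.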
The nominal sizes of the particles measured using scanning electron microscopy were 30 and \SI{40}{\nano\meter} for the octahedra and 42 and \SI{17}{\nano\meter} for the cubes. In the rest of the article these samples are described using the codes \texttt{oct30}, \texttt{oct40}, \texttt{cub42} and \texttt{cub17}, respectively. The octahedra and cubes were prepared using different protocols, generating different heterogeneity profiles as will be seen later. Diffraction patterns were observed in around 10~\% of the collected frames. This relatively high hit ratio compared to those achieved with biological particles in similar conditions was due to a combination of the relatively large X-ray focal spot size, high particle concentration and high mass and density of the larger gold nanoparticles, leading to lower speeds after acceleration by the gas flow in the aerodynamic lens stack~\cite{Roth:2018,Awel:2016,Hantke:2018}. Lower speeds lead to higher spatial densities, and thus higher hit ratios for the same particle beam size. Table~\ref{tab:data} shows the statistics of the number of frames collected for each sample as well as the various filtration steps after the analyses described below. When using the peak repetition rate of \SI{1.1}{\mega\hertz} and 150 pulses per train, diffraction patterns corresponding to the shapes of cubes and octahedra could be observed, but a high fraction of the diffraction patterns appeared to originate from spherical particles (see Table~\ref{tab:data} and third column of Fig.~\ref{fig:2dclass}). This was found to be caused by the melting of particles in the wings of the previous XFEL pulse in the train, as the particles approached the focus. To reduce this occurrence we therefore reduced the intra-train repetition rate from \SI{1.1}{\mega\hertz} to \SI{550}{\kilo\hertz}, providing only half the available pulses; further reduction of the repetition rate was tested but not found to be necessary. 
This reduced-rate mode was used to collect most of the data for the three larger samples (but not the \texttt{cub17} sample). \begin{table}[h] \begin{center} \begin{spacing}{1.9} \begin{threeparttable} \caption{\linespread{1}\selectfont Data collection statistics for the four nanocrystal samples. The sample names refer to their nominal shape (octahedron or cube) and edge length in \si{\nano\meter}.} \begin{tabular}{l l l l l} \textbf{Parameter} & \texttt{oct30} & \texttt{oct40} & \texttt{cub42} & \texttt{cub17} \\ \hline No. frames & \num{15805472} & \num{29309832} & \num{34197950} & \num{36966286} \\ No. hits & \num{2117732} & \num{2133041} & \num{2451068} & \num{3307723} \\ Hit ratio & 13.40\% & 7.28\% & 7.17\% & 8.95\% \\ Hits/hour & \num{376947} & \num{233553} & \num{228633} & \num{402954} \\ Hits/train\tnote{*} & 5.2/10.4/15.6 & 2.8/6.4/8.4 & 2.4/5.6/9.1 & NA/7.2/12.1 \\ No. `good' hits & \num{1430086} & \num{1249328} & \num{433259} & \num{564121} \\ Sphere fraction (\%)\tnote{*} & 3.4/4.0/19.2 & 2.7/7.2/33.5 & 2.4/10.4/29.1 & NA\tnote{$\dagger$} \\ Resolution (nm)\tnote{$\ddagger$} & 3.50 (2.10-4.54) & 5.32 (1.89-7.17) & 4.89 (1.98-6.56) & 2.11 (1.81-3.31) \\ \hline \end{tabular} \begin{tablenotes} \linespread{1}\small \item[*] The three numbers correspond to values for 0.28 MHz, 0.55 MHz and 1.1 MHz intra-train repetition rates respectively \item[$\dagger$] There was no clear sign of spherical particles for \texttt{cub17} sample \item[$\ddagger$] The first number is the azimuthal average resolution while numbers in parentheses show minimum and maximum values, respectively \end{tablenotes} \label{tab:data} \end{threeparttable} \end{spacing} \end{center} \end{table} \subsubsection*{Single hit selection by 2D classification} Frames with diffraction from particles were detected by setting a threshold on the number of pixels in the AGIPD detector that recorded at least one photon (see Methods). 
Unfortunately, not all the particles are of interest, even accounting for the heterogeneity. The extraneous patterns include those from spheres formed after melting, multi-particle aggregates and other possible contaminants. In previous work, either manual selection~\cite{Ekeberg:2015,Lundholm:2018} or manifold learning methods~\cite{Yoon:2011,Rose:2018,Reddy:2017} have been used to classify patterns and reject outliers. We adopt an alternative approach, similar to one commonly used in cryo-EM~\cite{Scheres:2005}, but implemented in diffraction space. Two-dimensional orientation determination into multiple models was performed in the detector plane using the EMC algorithm~\cite{Loh:2009,Loh:2010} implemented in \textit{Dragonfly}~\cite{Ayyer:2016}. The in-plane rotation angle ($\theta$) and relative incident fluence ($\phi$) of each diffraction pattern were determined collectively and multiple independent 2D intensity models were reconstructed. Each of these intensities represents an average of aligned copies of a subset of the patterns from the whole set. In addition to the EMC algorithm being highly noise-tolerant~\cite{Philipp:2012,Giewekemeyer:2019,Ayyer:2019}, one can also use it to examine the average models to understand what types of particles are in the dataset. In this experiment, 50 random white noise 2D intensity models were used as initial guesses to perform the classification for each sample, using, at this stage, only the low-resolution part of the detector highlighted in Fig.~\ref{fig:setup}. Some of the reconstructed intensities are shown in Figure~\ref{fig:2dclass}. The first two columns of the figure show representative examples of `good' models of each sample, chosen manually to be those with high contrast and strong streaks for further processing. The third column shows an average of diffraction from rounded particles (except in the \texttt{cub17} case where a dimer average is highlighted). 
These models were used to determine the sphere fraction shown in Table~\ref{tab:data}. Finally, the last column shows low-contrast models where a diverse set of particles were averaged. \begin{figure} \centering \begin{tabular}{ c >{\centering\arraybackslash}m{0.86\textwidth} } \rotatebox[origin=c]{90}{\texttt{oct30}} & \includegraphics[width=0.85\textwidth]{images/oct30_2dclasses.pdf} \\ \rotatebox[origin=c]{90}{\texttt{oct40}} & \includegraphics[width=0.85\textwidth]{images/oct40_2dclasses.pdf} \\ \rotatebox[origin=c]{90}{\texttt{cub42}} & \includegraphics[width=0.85\textwidth]{images/cub42_2dclasses.pdf} \\ \rotatebox[origin=c]{90}{\texttt{cub17}} & \includegraphics[width=0.85\textwidth]{images/cub17_2dclasses.pdf} \\ \end{tabular} \caption{Representative examples of reconstructed 2D models shown on a logarithmic scale, with each row representing a different sample. The numbers indicate how many patterns had that model as the most likely one. The first two columns show models selected for further processing. The third column shows diffraction from rounded/spherical particles, except in the \texttt{cub17} case where there were no spherical particles and the model shows diffraction from a dimer instead. The fourth column shows some of the low-contrast models generated by averaging patterns from a diverse set of particles. The resolution at the edge of the circle is \SI{3.3}{\nano\meter}.} \label{fig:2dclass} \end{figure} The 2D classification also enabled the analysis of size-heterogeneity from those models where the faces of the nanoparticles were parallel to the X-ray beam. In these cases, one observes strong streaks on the detector and the fringe spacing indicates the distance between these parallel faces. The size distributions of the samples inferred this way are shown in Fig.~\ref{fig:sizing}(a). The octahedral samples had a much broader size distribution than the cubic ones. 
While some of the breadth of the peaks is due to apparent size variations when the faces are not perfectly parallel to the beam, the much broader size distributions of the octahedra suggest that they were more heterogeneous. In addition, the octahedra were also noticeably asymmetric, as seen in Figs.~\ref{fig:sizing}(c) and (d). These histograms were made by identifying patterns which belonged to models with two strong streaks (e.g. top left model in Fig.~\ref{fig:2dclass}). Another run of 2D classification with just these two-streak patterns showed no variation in the angle between the streaks, but only in the fringe spacing. This is to be expected since the angle is fixed by the $\langle 111 \rangle$ growth direction, while the size is not restricted by symmetry. The equivalent figures for the cubic samples showed no asymmetry. Due to the low polydispersity of the cubes, they were used to determine the incident fluence distribution of the X-ray beam. Since the Fourier transform of a cube is the product of three orthogonal \textit{sinc} functions, the size fitting procedure also generated a predicted incident fluence. The distribution from \num{102480} patterns is shown in Fig.~\ref{fig:sizing}(b), yielding a maximum fluence of around \SI{60}{\micro\joule/\micro\meter\squared}, which leads to a lower-bound estimate of around \SI{540}{\micro\joule} in the focal spot from the measured spot size. The actual fluence was likely higher as the particles were not ideal cubes and the scattering efficiency is reduced at high fluences~\cite{Jonsson:2015,Ho:2020}. One can also see that most diffraction patterns were obtained with lower incident fluences, because the particles interacted with the outer regions of the X-ray focus.
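As an illustration of this sinc-based sizing, the intensity profile along a streak from a pair of parallel faces can be matched against $\mathrm{sinc}^2$ models of varying interfacial distance. The sketch below is a minimal, hypothetical version of such a fit; the function names and the noise-free synthetic profile are assumptions, not the actual analysis code:

```python
import numpy as np

def cube_streak_intensity(q, edge, scale=1.0):
    """Ideal-cube streak profile along a face normal: |F|^2 is a sinc^2.
    np.sinc(x) = sin(pi x)/(pi x), so the fringe zeros fall at q = n/edge."""
    return scale * np.sinc(q * edge) ** 2

def fit_edge_length(q, profile, trial_sizes):
    """Choose the edge length whose sinc^2 model maximises the Pearson
    correlation with the measured streak profile."""
    best_size, best_r = None, -1.0
    for size in trial_sizes:
        r = np.corrcoef(profile, cube_streak_intensity(q, size))[0, 1]
        if r > best_r:
            best_size, best_r = size, r
    return best_size, best_r

# Synthetic check: recover a 42 nm interfacial distance from a clean profile
q = np.linspace(0.01, 0.5, 200)                      # scattering vector (1/nm)
profile = cube_streak_intensity(q, 42.0, scale=3.0)  # arbitrary incident fluence
best_edge, best_r = fit_edge_length(q, profile, np.arange(10.0, 50.0, 0.1))
```

Because the Pearson coefficient is scale-invariant, the same fit also yields a relative amplitude from which an incident fluence can be predicted once a particle shape is assumed.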
\begin{figure} \centering \begin{tabular}{ c c } \includegraphics[width=0.45\textwidth]{images/size_distribution_filled} & \includegraphics[width=0.45\textwidth]{images/cub42_fluence_hist.pdf} \\ (a) & (b) \\ \includegraphics[width=0.45\textwidth]{images/oct30_sizehist2d.png} & \includegraphics[width=0.45\textwidth]{images/oct40_sizehist2d.png} \\ (c) & (d) \end{tabular} \caption{Size and incident fluence distributions from 2D classification. (a) Size distribution for the 4 samples. The sizes are represented by the distance between opposing parallel faces. The cubes have narrow distributions, while the octahedral distributions are broader. (b) Distribution of incident fluence on the particle calculated from the \texttt{cub42} sample assuming they are ideal cubes. (c-d) 2D histogram of size distributions from two-streak patterns for the \texttt{oct30} and \texttt{oct40} samples respectively. High density in the off-diagonal regions suggests the particles were asymmetric. The horizontal axis represents the brighter of the two streaks.} \label{fig:sizing} \end{figure} \subsubsection*{3D reconstruction with structural sorting} The fraction of good hits used for 3D structure reconstruction varied from 17~\% for the cube samples to around 60~\% for the octahedra (see Table~\ref{tab:data}). The 3D intensity distribution was obtained using these patterns before recovering the structures by performing phase retrieval using the difference map algorithm~\cite{Elser:2003,Ayyer:2019}. For computational efficiency, the 3D orientations were first determined using the low-resolution part of the detector where the highest resolution was \SI{3.3}{\nano\meter}. A refinement procedure similar to that developed for serial crystallography~\cite{Lan:2017} was used with the whole detector to get the full-resolution 3D intensities. In this procedure, only orientations in the neighbourhood of the most likely orientation of a given pattern from the low-resolution run were searched. 
The intensities recovered in this manner had noticeably lower contrast than the equivalent slices in the 2D models. From the size distributions seen in Fig.~\ref{fig:sizing}, this could be attributed to structural heterogeneity. To counter this, the patterns were probabilistically partitioned into five intensity volumes in a manner equivalent to the 2D classification procedure. However, the initial guesses were not random white noise, but rather isotropically stretched/scaled versions of the average models reconstructed above. Five models, with stretch factors ranging from 0.9 to 1.1, were used as these initial seeds. The rest of the reconstruction proceeded without any restraints between these models or any symmetry constraints. Once again, this structural sorting was performed at low resolution before refining the orientations of a subset of patterns from a single model to get full-resolution intensities. A comparison of orthogonal slices through the 3D intensity for the \texttt{oct30} sample is shown in Fig.~\ref{fig:3dintens}(a). The left column, showing the single-model reconstruction with 1.4 million patterns, has noticeably worse fringe contrast and background than the equivalent slices in the right column or in the first two columns of the 2D classification output shown in Fig.~\ref{fig:2dclass}. The homogeneous set had 0.53 million patterns selected using the multi-model EMC reconstruction. The visual improvement is accompanied by an increase in the likelihood of the model intensities outside the central speckle for the common patterns in both sets, as shown in Fig.~\ref{fig:3dintens}(b). The filled histogram shows the distribution of the per-pattern increase in likelihood, which we refer to as likelihood gain, while the two traces show the distributions for weak (relative scale $0.5 \pm 0.1$) and strong (relative scale $2.0 \pm 0.1$) patterns. The latter shows how brighter patterns are more selective towards an improved model.
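Schematically, the per-pattern likelihood gain (the log of the likelihood ratio between the sorted and single-model intensities) can be computed under a Poisson noise model. This is a simplified sketch under stated assumptions: the actual EMC likelihood also marginalises over orientations and fluence factors, which is omitted here, and the function names are hypothetical:

```python
import numpy as np

def log_likelihood(photons, model):
    """Poisson log-likelihood of a photon pattern given model intensities;
    the pattern-only log(k!) term is dropped since it cancels in the gain."""
    return float(np.sum(photons * np.log(model) - model))

def likelihood_gain(photons, model_single, model_sorted):
    """Per-pattern likelihood gain: positive when the sorted (homogeneous)
    model explains the pattern better than the single-model average."""
    return log_likelihood(photons, model_sorted) - log_likelihood(photons, model_single)
```

A histogram of this quantity over the shared patterns corresponds to the filled distribution in Fig.~\ref{fig:3dintens}(b).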
Figure~\ref{fig:3dintens}(c) shows the same information for the \texttt{oct40} sample, where the gain ratio is smaller, but still greater than 1. The 2D size distributions shown in Fig.~\ref{fig:sizing}(c) were re-calculated for each subset of patterns belonging to the five models and plotted in Fig.~\ref{fig:3dintens}(d), confirming the different sizes for each model, but also exhibiting a simpler structure than that of the full dataset. \begin{figure} \centering \begin{tabular}{ c c } \multirow{3}{*}[0.27\textwidth]{\includegraphics[width=0.65\textwidth]{images/oct30_intens_comp.pdf}} & \includegraphics[height=0.3\textwidth]{images/oct30_likegain.pdf} \\ & (b) \\ & \includegraphics[height=0.3\textwidth]{images/oct40_likegain.pdf} \\ (a) & (c) \\ \multicolumn{2}{c}{\includegraphics[width=0.95\textwidth]{images/oct30z_sizehist2d.png}} \\ \multicolumn{2}{c}{(d)} \end{tabular} \caption{Comparison of 3D intensity reconstructions for the octahedra before and after structural sorting. (a) Low-resolution logarithmic intensities of the \texttt{oct30} sample comparing the standard single-model reconstruction with one of the sorted models. The two rows represent slices normal to an edge and vertex of the octahedron respectively. (b) Likelihood gain distribution for the patterns which are shared with the sorted model shown in (a). The blue and red curves show distributions for weak and strong patterns, as identified by the relative fluence factor $\phi$, respectively. (c) The same gain plot for the \texttt{oct40} sample. (d) Two-streak size histograms (see Fig.~\ref{fig:sizing}(c)) for the \texttt{oct30} sample separated into the five reconstructed models.} \label{fig:3dintens} \end{figure} For the cubic particles, a single model 3D reconstruction was deemed sufficient, due to the relative monodispersity of the sample. The selection of `good' hits from the 2D classification was more stringent, including only high-contrast cube-like patterns. 
The incident fluence factors were estimated in the first few iterations, where the calculated probability distributions were broad, and then kept fixed (see Methods). The electron densities were reconstructed by performing 3D iterative phase retrieval on the full-resolution intensity volumes (see Methods for details and Supplementary Fig.~\ref{fig:intensa} for intensity slices). Figure~\ref{fig:phasing}(a) shows the reconstructed electron densities as isosurface plots. The contour levels were chosen where the gradient of the density was highest. The phase retrieval transfer function (PRTF) metric as a function of wavevector $q$ is shown in Fig.~\ref{fig:phasing}(b). This metric is a measure of the reproducibility of recovered phases when starting from 128 random models. The 3D PRTF distribution was smoothed using a Gaussian kernel with a width equal to one third of the fringe width. The shaded region around each line shows the range of values in each $q$ shell, highlighting the strong anisotropy of the metric due to the faceted nature of the objects. The intersection with the common $1/e$ threshold determining the resolution is shown in Table~\ref{tab:data}. The resolution normal to the flat faces is \SI{2}{\nano\meter} or better for all samples, while the resolution is relatively low far from any strong streaks in Fourier space. This angle-dependent resolution is a property of the diffractive-domain averaging before phase retrieval, but also due to the strongly faceted shape and lack of internal structure of these objects, neither of which is representative of biological objects. \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.45\textwidth]{images/phased_particles_scale.png} & \includegraphics[width=0.45\textwidth]{images/prtf_all_range.pdf} \\ (a) & (b) \\ \end{tabular} \caption{Phase retrieval. (a) Isosurface plots of electron densities recovered after phase retrieval (scale bar is \SI{40}{\nano\meter}).
The asymmetric structures of the octahedra are clearly evident (see Supplementary Movie S2). (b) Smoothed phase retrieval transfer function (PRTF) measuring reproducibility of phases as a function of $q$. The solid lines represent the azimuthal average PRTF conventionally used to determine the resolution of the structure. The shaded region around each line indicates the range of values at each $q$. The typical $1/e$ cutoff is shown in black.} \label{fig:phasing} \end{figure} \section*{Discussion} We have demonstrated an order-of-magnitude increase in data collection efficiency along with much higher imaging resolution than previously achieved for X-ray single particle diffractive imaging, setting a template for future SPI experiments at the European XFEL and elsewhere. We have also shown that with these large data sets, one can structurally sort the particles and average a narrow size and shape range to obtain higher resolution. A similar problem is expected to be faced when imaging biological particles, and the method developed here shows the way towards overcoming conformational variability in the Fourier domain. Although we benefited from the strong scattering cross section of gold compared to organic materials, with the commissioning of a sub-micron focus at the SPB/SFX instrument, we can expect comparable signal strengths from organic materials. Unfortunately, smaller X-ray foci would also mean lower hit ratios with the current sample delivery setup. Improvements could be made through optimised focussing for the targeted size distribution~\cite{Roth:2018} or cryogenic injection systems~\cite{Samanta:2020}, which additionally allow conformational selection~\cite{Chang:2015}. Another approach is to keep using the larger focus and conjugate the particles with gold nanoparticles to assist hit finding and orientation determination~\cite{Ayyer:2020}. The effective hit rate can also be increased by using more pulses from the European XFEL (max.
2700) than the AGIPD detector can save (max. 352) and vetoing in real time those frames which do not contain diffraction signal. The class of experiments exemplified here can also be applied to study rare events such as transient states in a spontaneous phase transition or high free-energy states. Since each image is collected serially, one can identify relevant subsets corresponding to interesting states without averaging over all patterns. In this work, we have taken the approach of treating the objects as general 3D contrast functions with no \textit{a priori} information. One can also envision a parameterised refinement approach which should enable a finer characterisation of the structural landscape of the ensemble. \section*{Methods} \subsubsection*{Sample preparation} The octahedral gold nanoparticles were synthesised using published protocols~\cite{Li:2008,Lu:2017} with poly(diallyl\-dimethyl\-ammonium) chloride (PDDA) polymer coating to avoid aggregation. The cubic particles were synthesised in water using the method described in Park~\emph{et~al.}~\cite{Park:2018} with cetyltrimethylammonium chloride (CTAC) as a stabilising agent. In order to obtain the requisite $10^{12} - 10^{13}$ particles/\si{\milli\liter} concentration to approach an average of one particle per electrospray droplet~\cite{Bielecki:2019}, all syntheses were concentrated from initial values of $10^9$~particles/\si{\milli\liter} for the cubes and $10^{11}$~particles/\si{\milli\liter} for the octahedra, and excess ligands were removed by centrifugation. Scanning and transmission electron microscopy images of the samples are shown in Supplementary Figs.~\ref{fig:sem_sample} and \ref{fig:tem_sample}.
\subsubsection*{Aerosol sample delivery} The samples were suspended in 10 mM ammonium acetate and aerosolized using an electrospray nebulizer (average flow rate \SI{200}{\nano\liter/\minute}) and neutralized before delivery into the X-ray interaction point using the aerodynamic lens stack~\cite{Bielecki:2019}. An electrospray differential mobility analysis (ES-DMA) setup was installed at the beamline and the particles generated by the electrospray could be diverted into the ES-DMA to characterise the size distribution and concentration of aerosolized particles. Particle size distribution measurements were carried out with an electrostatic classifier (TSI 3082) together with the DMA (TSI 3081). The DMA was connected to a condensation particle counter (CPC, TSI 3789). Representative size distributions are shown in Supplementary Fig.~\ref{fig:dma_sizing}. To diagnose the width and density of the particle stream in the X-ray interaction region after aerodynamic focusing, we employed a Rayleigh scattering diagnostic (see Supplementary Fig.~\ref{fig:rayleigh}) using a frequency-doubled Nd:YAG laser which was mirror-incoupled perpendicular to both the X-ray beam and particle stream~\cite{Awel:2016,Hantke:2018}. These two diagnostic tools helped in assessing both the quality of the samples and the transmission efficiency of the sample delivery system. \subsubsection*{Online monitoring} In order to help align the experiment and diagnose problems during data collection, the \textit{Hummingbird} software~\cite{Daurer:2016} was connected to the \textit{Karabo} bridge in the European XFEL DAQ system~\cite{Heisen:2013} to receive data with a delay of a few seconds. Since most of the photons in an SPI experiment are concentrated at low resolution, only the module of the AGIPD closest to the beam centre was used for online analysis.
The use of uncalibrated data and only a single module enabled a frame rate of up to \SI{800}{\hertz} using all 176 memory cells of each pixel of the AGIPD available in this experiment. The analyses conducted live included lit-pixel hit finding and hit-ratio determination (see the following Preliminary analysis section), sphere model size determination and the detection of the fraction of spherical particles by analysing the azimuthal variation in intensities. The latter was used to understand and fix the particle melting issue mentioned in the Results section. \subsubsection*{Preliminary analysis} The AGIPD detector was calibrated using offset constants for each cell in each pixel. Except for a few pixels near the beam centre, no pixels switched gain mode. After offset correction, the number of pixels containing at least 0.7 of a photon was calculated for each frame. In the absence of particles, the number of such pixels is normally distributed with a mean dependent on background from the beamline, carrier gas and detector false positives. A threshold of 3$\sigma$ over the mean number in each run was used to select frames with particle scattering. Over the entire experiment, the hit ratio fluctuated between 7\% and 15\%. For this and future analyses, memory cells in pixels with outlier dark offsets or dark noise were masked out, along with the double-wide pixels along ASIC edges. These hits were converted to photons by first subtracting the dark offsets, correcting for per-frame common-mode shifts by subtracting the median of each 64x64 pixel ASIC, and then subtracting a pixel-wise running median of the last 128 frames over all cells. The last step was important in removing artifacts due to the slow drift of dark offsets on a pixel level. These corrected detector values were then converted to integer photon counts by thresholding with a variable cutoff using the following procedure.
The probability distribution of detector ADUs (analog-to-digital units) at a pixel in the absence of photons is a Gaussian centered at 0 with a cell-dependent width. The 1-photon distribution is a shifted copy of the 0-photon distribution with a height which depends on the signal level (ignoring charge sharing for the large \SI{200}{\micro\meter} pixels). The optimal threshold was chosen to be the point at which these 0-photon and 1-photon distributions intersect for a signal level of $10^{-3}$ photons/pixel. If the 1-photon distribution is centered at $m_1$ ADUs and the standard deviation of the noise of a cell is $\sigma$ ADUs, the threshold was \[t = m_1 \left(0.5 - \frac{\sigma^2}{m_1^2} \log(10^{-3})\right)\] where $\log$ is the natural logarithm. This threshold minimises the total error rate (false positive plus false negative) due to detector noise at the chosen signal level, ignoring charge sharing effects which are small for the \SI{200}{\micro\meter} pixels of the AGIPD. For higher signals, at lower resolution, the error rate would be higher, but biased towards false negatives. See Supplementary Fig.~\ref{fig:detcorr} for the effects of the various corrections on the integrated detector image. For the current detector configuration and photon energy, the 1-photon peak was centered at 47 ADUs and the average threshold was 0.755 of a photon. \subsubsection*{Intensity reconstruction} Both two- and three-dimensional intensity volumes were reconstructed using the \textit{Dragonfly} package. Detector files were generated and refined manually starting from initial geometries from a previous serial crystallography experiment~\cite{Yefanov:2019}. The geometry refinement only involved adjusting the positions of the detector quadrants since the modules within a quadrant had not been moved between the experiments. Two detector files were produced, one for the inner eight 128x64 pixel detector ASICs, while a high-resolution version contained all 1024x1024 pixels.
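The variable photon-conversion threshold described in the Preliminary analysis section follows directly from this intersection formula. A minimal sketch; the noise width of about 9 ADU is an assumed value, chosen only to illustrate that it reproduces the quoted average threshold of 0.755 photon:

```python
import numpy as np

def photon_threshold(m1, sigma, signal_level=1e-3):
    """ADU threshold where the 0-photon Gaussian N(0, sigma) intersects the
    1-photon Gaussian N(m1, sigma) scaled by the expected signal level."""
    return m1 * (0.5 - (sigma / m1) ** 2 * np.log(signal_level))

# With the reported 1-photon peak at 47 ADU and an assumed noise width of
# 9 ADU, the threshold comes out at roughly 0.75 of a photon.
threshold_adu = photon_threshold(47.0, 9.0)
```

Because the $\log(10^{-3})$ term is negative, the threshold always sits above the naive midpoint $m_1/2$, biasing errors towards false negatives at higher signal levels, as noted in the text.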
The photon-converted data was saved in the sparse \textit{Dragonfly} \texttt{.emc} format with file sizes of around \SI{10}{\giga\byte} per sample. For the 3D reconstructions, the detector files specify the reciprocal space voxel coordinate of each pixel, which involves defining the radius of curvature of the Ewald sphere in voxels. The natural choice of the detector distance in pixel units (3525) produced too large an oversampling factor, with a fringe spacing of around 20 voxels for the largest \texttt{cub42} sample and even higher for the smaller samples. For the low-resolution detector file, the radius of curvature was set to be 2000 voxels, generating $253^3$ voxel volumes. For the full-resolution detector, it was 1500 voxels for all samples except \texttt{cub17}, where it was 1000 voxels. For all reconstructions, the deterministic annealing procedure~\cite{Ueda:1998,Ayyer:2016} was used to improve the convergence of the algorithm. The annealing parameter $\beta_d$ was initially set for each pattern $d$, based on the number of scattered photons using the following empirical formula: \[\beta_d = \exp(-1.156\; C_d^{0.15})\] where $C_d$ refers to the number of orientationally relevant photons in the pattern. This generates a lower value for brighter patterns, broadening their otherwise sharp probability distribution over orientations. The specific expression was tested in simulations to produce a relatively flat dependence of the mutual information $I(K,\Omega)$~\cite{Ayyer:2016} on the signal strength. The parameter was increased by a factor of 2 every 10 iterations for each pattern. In the 2D reconstructions, 180 angular samples were chosen in the range from 0 to $2\pi$ for each model. The low-resolution 3D reconstructions were performed with an orientational sampling level of 8, which corresponds to \num{25680} samples~\cite{Loh:2009}. The high-resolution refinement went up to a sampling level of 20 (\num{400200} samples).
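The empirical annealing schedule above can be sketched as follows. The helper names are assumptions (not part of \textit{Dragonfly}), and capping $\beta_d$ at 1 is an assumed endpoint of the schedule rather than something stated in the text:

```python
import numpy as np

def initial_beta(counts):
    """Empirical annealing parameter: brighter patterns (more orientationally
    relevant photons C_d) get a lower beta, broadening their otherwise sharp
    orientation probability distributions."""
    return np.exp(-1.156 * np.asarray(counts, dtype=float) ** 0.15)

def beta_at_iteration(counts, iteration, factor=2.0, period=10):
    """beta is raised by `factor` every `period` iterations; the cap at 1
    (full likelihood) is an assumption."""
    return np.minimum(initial_beta(counts) * factor ** (iteration // period), 1.0)
```

Raising $\beta_d$ to a power below one flattens the orientation posterior early on, which is what keeps the mutual information roughly independent of the signal strength.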
For the 3D multi-model reconstructions, the initial intensities were generated by isotropically stretching the single-model intensity volume using linear interpolation, and the initial fluence factors were taken from that run. For the cubic samples, the 3D reconstruction pipeline was modified. First, 40 iterations were performed with the initial $\beta_d$ parameters without an annealing schedule. For the larger \texttt{cub42} sample, only the pixels corresponding to the first 3 diffraction fringes (\SI{12.9}{\nano\meter} resolution) were used to determine the orientations and fluence factors. This was to avoid instabilities since most of the Fourier power at higher resolution is concentrated in the streaks normal to the faces and the angle between the streaks is large enough that interference between them poorly constrains the model. Once the low-resolution, rotationally blurred intensities were stable, the fluence factors were fixed and the annealing schedule was enabled. The rest of the reconstruction proceeded in a similar manner to the octahedra. Isosurface plots for the three larger particles using the low-resolution data are shown in Supplementary Movie 1. \subsubsection*{Size and incident fluence fitting} The 2D classification was first used to identify patterns where strong streaks were visible. These classes can be seen in the first column in Fig.~\ref{fig:2dclass}. A pair of parallel faces on a particle produces a \textit{sinc}-function dependence in the Fourier transform along the face normal. This was used to determine the interfacial distance from each pattern. The classification procedure not only allowed us to identify the patterns which have strong streaks, but was also used to determine the angles of these streaks since we knew by what angle the pattern had to be rotated to fit the model. For these selected patterns, the intensity distribution along the streak was calculated by integrating over a 21-pixel-wide strip along the streak.
The size was determined by cross-correlating the intensity distributions with \textit{sinc} functions for sizes from 10 to \SI{50}{\nano\meter} in \SI{0.1}{\nano\meter} increments. The size was chosen as the one which produced the maximum Pearson correlation coefficient, and only those streaks with a coefficient greater than 0.9 were included. Figures~\ref{fig:sizing}(c, d) were generated by only considering patterns with 2 strong streaks. For the \texttt{cub42} sample, the brightness of the two-streak patterns was used to determine the incident fluence by assuming that they were generated by perfect cubes. A procedure similar to the sphere-sizing done in previous works~\cite{Daurer:2017} was used to determine the incident fluence using the scattering cross-section for gold at \SI{6}{\kilo\electronvolt}. \subsubsection*{Phase retrieval} Before performing phase retrieval, the intensity volumes were processed in the following manner: background subtraction was performed using a rolling minimum filter with a window size of 1.5 times the width of a fringe, using the \texttt{ndimage.minimum\_filter} function in \textit{SciPy}~\cite{Virtanen:2020}. Since no fringe contrast was visible at the outer resolution edges (see Supplementary Figure~\ref{fig:intensa}), the data was truncated such that the corners of the cube were within the sphere. The full-period resolution of the cube at the center-edge was \SI{1.61}{\nano\meter} and there were $384^3$ voxels for the three larger samples and $256^3$ voxels for the \texttt{cub17} sample. A combination of the error reduction (ER) algorithm and the difference map (DM) algorithm~\cite{Elser:2003} was used to reconstruct the electron densities from the background-subtracted intensity distribution. For each phasing run, 400 iterations were performed: 100 ER, 200 DM and 100 ER. Within each iteration, a dynamic real-space support constraint was applied by sorting the electron density values and only keeping the top $N_\mathrm{supp}$ values.
The volume $N_\mathrm{supp}$ was chosen such that the histogram of densities inside the support had a small fraction of low values to ensure that the support was not too tight. A total of 128 phasing runs from random white noise initial guesses were performed for each sample, and the resulting densities were aligned and averaged to produce the final electron densities as well as the phase retrieval transfer function (PRTF). The PRTF strongly depends on the intensity at a voxel and thus exhibits an oscillatory behaviour along the fringes, which is not reflective of the quality of the structure at a given resolution since the lack of accurate phases near an interference minimum barely affects the real-space structure. Thus, the 3D PRTF distribution was smoothed with a Gaussian kernel with a width half that of a fringe~\cite{Lundholm:2018}. The 3D isosurface plots were rendered using \textit{Chimera}~\cite{Pettersen:2004} and the contour levels were determined by using the ``Surface Color'' feature, colouring the surface by the gradient of the density and choosing the level which had the highest density gradients. \subsection*{Acknowledgements} We thank Rick Millane for helpful discussions. We acknowledge European XFEL in Schenefeld, Germany, for provision of X-ray free-electron laser beamtime at Scientific Instrument SPB/SFX (Single Particles, Clusters, and Biomolecules and Serial Femtosecond Crystallography) and would like to thank the staff for their assistance. We thank the DESY NanoLab, CSSB cryoEM user facility and the XBI labs at EuXFEL for access to electron microscopy resources and their staff for their help. This work has been supported by the Clusters of Excellence `Center for Ultrafast Imaging' (CUI, EXC 1074, ID 194651731) and `Advanced Imaging of Matter' (AIM, EXC 2056, ID 390715994) of the Deutsche Forschungsgemeinschaft (DFG).
This work has also been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) through the Consolidator Grant COMOTION (ERC-614507-K{\"u}pper) and by the Helmholtz Gemeinschaft through the ``Impuls-und Vernetzungsfond''. J.C.H.S. and R.A.K. acknowledge support from the National Science Foundation BioXFEL award (STC-1231306). F.R.N.C.M. acknowledges support from the Swedish Research Council, R{\"o}ntgen-{\AA}ngstr{\"o}m Cluster and Carl Tryggers Foundation for Scientific Research. P.L.X. acknowledges a fellowship from the Joachim Herz Stiftung. P.L.X. and H.N.C. acknowledge support from the Human Frontiers Science Program (RGP0010/2017). \subsection*{Author contributions} The experiment was conceived by K.A. with the help of D.A.H., H.N.C., J.K{\"u}. and P.L.X.; P.L.X. prepared the samples with the help of H.L. and F.S.; J.B., K.G., R.B., H.K., R.L., T.S., M.S., P.V. and A.P.M. operated the SPB/SFX instrument at EuXFEL; J.B., A.K.S., R.A.K., M.S.H., A.D.E., J.L., L.W., N.R., S.A., D.A.H. and J.K{\"u}. set up and operated the sample delivery system; J.B., Y.K., R.L., T.S. and A.P.M. set up and operated the Rayleigh scattering diagnostics; B.J.D., F.R.N.C.M., and T.E. performed the online monitoring with help from K.A., T.W. and Z.S.; the data was analysed by K.A. with the help of Z.S., B.J.D., N.D.L., F.R.N.C.M., J.B., T.W., Y.Z., J.Ko., O.Y., A.B. and A.J.M.; K.A. wrote the manuscript with H.N.C., F.R.N.C.M., J.K{\"u}. and J.H.C.S. with contributions from all authors. \subsection*{Competing interests} The authors declare no competing interests.
\printbibliography \newpage \renewcommand{\figurename}{Supplementary Figure} \setcounter{figure}{0} \section*{Supplementary Information} \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.45\textwidth]{images/oct30_intensa.pdf} & \includegraphics[width=0.45\textwidth]{images/oct40_intensa.pdf} \\ (a) & (b) \\ \includegraphics[width=0.45\textwidth]{images/cub42_intensa.pdf} & \includegraphics[width=0.45\textwidth]{images/cub17_intensa.pdf} \\ (c) & (d) \\ \end{tabular} \caption{Full resolution intensity slices. Logarithmic intensity slices through intensity reconstructions using the full detector for the (a) \texttt{oct30} (b) \texttt{oct40} (c) \texttt{cub42} and (d) \texttt{cub17} datasets. The octahedral intensities were generated after structural sorting. The rings indicate the full-period resolution.} \label{fig:intensa} \end{figure} \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.45\textwidth]{images/oct30_sem2.png} & \includegraphics[width=0.45\textwidth]{images/oct40_sem2.png} \\ (a) & (b) \\ \includegraphics[width=0.45\textwidth]{images/cub42_sem2.png} & \includegraphics[width=0.45\textwidth]{images/cub17_sem2.png} \\ (c) & (d) \\ \end{tabular} \caption{Scanning electron microscopy images. (a) \texttt{oct30} (b) \texttt{oct40} (c) \texttt{cub42} (d) \texttt{cub17}. All scale bars are \SI{100}{\nano\meter}.} \label{fig:sem_sample} \end{figure} \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.45\textwidth]{images/oct30_tem2.png} & \includegraphics[width=0.45\textwidth]{images/oct40_tem2.png} \\ (a) & (b) \\ \includegraphics[width=0.45\textwidth]{images/cub42_tem2.png} & \includegraphics[width=0.45\textwidth]{images/cub17_tem2.png} \\ (c) & (d) \\ \end{tabular} \caption{Transmission electron microscopy images. (a) \texttt{oct30} (b) \texttt{oct40} (c) \texttt{cub42} (d) \texttt{cub17}. 
All scale bars are \SI{100}{\nano\meter}.} \label{fig:tem_sample} \end{figure} \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.45\textwidth]{images/dma_oct_fig.pdf} & \includegraphics[width=0.45\textwidth]{images/dma_cub_fig.pdf} \\ (a) & (b) \end{tabular} \caption{Differential mobility analysis. Representative size histograms for the (a) octahedral and (b) cubic samples obtained using the ES-DMA. The buffer peaks near \SI{10}{\nano\meter} are absent after passing through the aerodynamic lens to reach the interaction region and will, in any case, not be focused in the same region as the heavier AuNPs. Note that the ES-DMA measures the electrical mobility diameter which is dependent on both the size and shape of the particle~\cite{DeCarlo:2004}.} \label{fig:dma_sizing} \end{figure} \begin{figure} \centering \begin{tabular}{c} \includegraphics[width=\textwidth]{images/rayleigh.pdf} \\ \includegraphics[width=0.8\textwidth]{images/rayleigh_singles.pdf} \end{tabular} \caption{Rayleigh scattering diagnostics. \emph{Top}: Histogram of detected particle positions in the interaction region for the \texttt{cub42} sample. The histogram was obtained from 1000 trains with one image per train. The X-ray beam propagates along the Z-axis and the Y-axis is vertical in the laboratory frame. The finite length of the particle beam in the vertical direction is due to the $\sim$\SI{250}{\micro\meter} optical laser spot size. The particle positions were detected using the \texttt{peak\_local\_max} function of Scikit Image~\cite{vanderwalt:2014} after appropriate background correction. \emph{Bottom row}: Three representative single background-corrected images from the same run.} \label{fig:rayleigh} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{images/detcorr.pdf} \caption{Detector corrections. Integrated photon-converted detector patterns for all \num{44800} hits in a single run with progressively more detector corrections. 
First only standard dark offset and common mode corrections were applied before thresholding at 0.7 of a photon. Next, the running median offset value over the last 128 trains was subtracted from each pixel before thresholding. Finally, a variable threshold was used for each memory cell depending upon the standard deviation (sigma) of the cell. All images have the same colour scale, saturating at $10^{-3}$ photons/pixel/frame. The dark gray background shows panel gaps and masked pixels. The fourth plot shows the radial average intensity for the three images.} \label{fig:detcorr} \end{figure} \clearpage \subsection*{Supplementary Movie captions} [Movies can be found here: https://owncloud.gwdg.de/index.php/s/ybXaOva83PUE4dQ] \textbf{Supplementary Movie 1}: Intensity isosurfaces with varying levels for the three larger samples. \textbf{Supplementary Movie 2}: Rotating versions of electron density isosurfaces shown in Fig. 5(a). \end{document}
\section{Introduction} Infectious disease pandemics have had a major impact on the evolution of mankind and have played a critical role in the course of history. Over the ages, pandemics claimed countless victims, decimating entire nations and civilizations. As medicine and technology made remarkable progress in the last century, the means of fighting pandemics have become significantly more efficient. On the other hand, globalization, the development of commerce and the ease of travelling all over the world facilitate the transmission of a new disease much more than in the past. In 2019 a new pneumonia was associated with a new virus from the coronavirus family, now known as COVID-19. It spread very quickly around the world, as the next subsection shows. \subsection{Evolution of the disease}\label{s:maps} On the 31$^{st}$ of December 2019, the first cases of infection with an unknown virus causing symptoms similar to those of pneumonia were reported to the World Health Organization in China. Shortly afterwards the new virus was identified as a novel coronavirus, and public health organizations feared that the situation might degenerate into one similar to the SARS epidemic of 2003. The SARS outbreak was also caused by a newly identified coronavirus and resulted in 8000 infection cases, 774 deaths and significant financial losses.\\ Within less than 3 months the COVID-19 outbreak became a global pandemic, spreading across almost all countries of the world. As of the 29$^{th}$ of March 2020, there were 725980 confirmed COVID-19 cases, which caused 35186 deaths. Although the outbreak started in China, the virus has spread rapidly all over the globe, the most affected countries now being Italy, Spain, China, Iran, France, the United States of America and the United Kingdom. The fast-evolving spread of the new coronavirus, which has been officially declared a pandemic, is represented below.
The following charts show the countries in which at least 1000 cases of COVID-19 infection have been reported: \begin{figure}[H] \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{februarie.png} \caption{9$^{th}$ of February 2020.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{martie.png} \caption{8$^{th}$ of March 2020.} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{aprilie.png} \caption{29$^{th}$ of March 2020.} \end{subfigure} \end{figure} While at the beginning of February 2020 the virus was still affecting mainly China, it started to spread rapidly to other countries, causing infections especially in western European countries at the beginning of March 2020. In less than a month, by the end of March 2020, the outbreak was present on all continents, affecting most of the countries in the world, which led the World Health Organization to officially declare it a pandemic. \subsection{The main ideas of this paper} In the fight against COVID-19, quarantine was one of the main measures, at least while the hospitals were overwhelmed with patients and the propagation of the virus and its workings inside the body were not well understood. A basic tool in analyzing the spread of the virus is mathematical modeling. There is a growing body of mathematical models in use at the moment; for a small and by no means exhaustive sample see \cite{wakefield2019spatio,choisy2007mathematical, grassly2008mathematical, kucharski2020early,sardar2020assessment,schuttler2020covid,ferguson2020report,tsay2020modeling}. This has turned out to be a valuable tool for the assessment, prediction and control of infectious diseases such as the COVID-19 pandemic, which has significantly impacted almost all countries, with important social and economic implications worldwide.
The main purpose of this work is to develop a predictive model that can accurately assess the transmission dynamics of COVID-19. In this paper we use as a starting point the standard SIR model initiated in \cite{rossjohn} and later investigated in depth in \cite{kermack1991contributions1,kermack1991contributions2,kermack1933contributions3}. The basic SIR model assumes that the parameters are constant over time. As we prove in Proposition~\ref{p:1}, starting from given initial conditions and observing the solution on a single later day determines the parameters of the model uniquely. As a new tool here we use a neural network whose input is the solution of the SIR model on a certain number of days and whose output is the parameters of the model. Using the data on the infected and recovered in Romania, we use the neural network to obtain a first-round guess of the parameters from day to day. This is not very accurate, because there is a lot of uncertainty in the data. For instance we cannot accurately estimate the number of infected individuals, nor the number of recovered, particularly when there are so many asymptomatic cases and little testing, as happened at the beginning of the pandemic. By analyzing the data, we draw the conclusion that taking the parameters constant in time is not a valid assumption. The parameters of the model vary over time due to the measures implemented in the effort to mitigate the spread of the disease. Examining the data, we noticed that we can split the time frame into separate regimes, with transition periods between them, and consider the parameters constant on each regime, with the transition between regimes modeled by a logistic function.
The actual fit of the parameters is based on a grid search around the averages suggested by the neural network output described above. This is worked out by fitting the reported number of deaths against the solution of the differential equation which drives the dynamics of the number of deaths. The organization of the paper is as follows. In Section~\ref{s:1} we briefly introduce the SIR model and provide the main result that knowledge of the solution to the system at any given time determines the parameters in a unique way. In Section~\ref{s:2} we provide details on the construction of the neural network and the rough guess of the parameters based on the public data. In Section~\ref{s:3} we give a description of the SIRD model, in which we separate the deceased from the recovered, and discuss how one can arrive at a differential equation for the deaths alone. Next, in Section~\ref{s:4} we introduce the regimes idea and how we model it; here we also describe how we fit the parameters. In Section~\ref{s:5} we discuss the predictions based on this model. Finally, Section~\ref{s:6} contains the main conclusions. \section{The SIR Model}\label{s:1} The first attempts at developing a mathematical model of the spread of infectious diseases were made at the beginning of the twentieth century. One of the most important models that can describe infectious diseases is the SIR model. Among the first to develop SIR epidemic models were Bernoulli, Ross, Kermack and McKendrick, and Macdonald. The SIR model can be used in epidemiology to analyze, at a given time for a specific population, the interactions and dependencies between the number of individuals who are susceptible to an infectious disease, the number of people who are currently infected, and those who have already recovered or have died as a result of the infection.
This model can be used to describe diseases that can be contracted only once, meaning that a susceptible individual gets the disease by contracting an infectious agent and is afterwards removed (by death or recovery). It is assumed that an individual can be in one of the following three states: susceptible (S), infected (I) and removed/recovered (R). This can be represented in the following schema: \tikzstyle{int}=[draw, fill=blue!20, minimum size=2em] \tikzstyle{init} = [pin edge={to-,thin,black}] \begin{center} \begin{tikzpicture}[node distance=3 cm,auto,>=latex'] \node [int] (a) {Susceptible}; \node [int] (c) [right of=a] {Infected}; \node [int] (d) [right of=c] {Recovered}; \node [coordinate] (end) [right of=c, node distance=2cm]{}; \path[->] (a) edge node {$\beta$} (c); \draw[->] (c) edge node {$\gamma$} (end) ; \end{tikzpicture} \end{center} where: \begin{itemize} \item $\beta$ = infection rate; \item $\gamma$ = recovery rate. \end{itemize} We consider $N$ to be the total population in the affected area. We assume $N$ to be fixed, with no births or deaths by other causes, for a given period of $n$ days. Therefore, $N$ is the sum of the three categories previously defined: the number of susceptible people, the infected ones, and the removed ones: \[ N = \bar{S} + \bar{I} + \bar{R}. \] We then analyze the following SIR model: at time $t$, we consider $\bar{S}(t)$ as the number of susceptible individuals, $\bar{I}(t)$ as the number of infected individuals, and $\bar{R}(t)$ as the number of removed/recovered individuals.
The equations of the SIR model are the following: \begin{equation}\label{eq0} \begin{cases} \frac{d \bar{S}}{dt}=-\frac{\bar{\beta} \bar{S} \bar{I}}{N} \\ \frac{d\bar{I}}{dt}=\frac{\bar{\beta} \bar{S} \bar{I}}{N}-\gamma \bar{I} \\ \frac{d\bar{R}}{dt}=\gamma \bar{I} \end{cases} \end{equation} where: \begin{itemize} \item $\frac{d \bar{S}}{dt}$ is the rate of change of the number of individuals susceptible to the infection over time; \item $\frac{d\bar{I}}{dt}$ is the rate of change of the number of individuals infected over time; \item $\frac{d\bar{R}}{dt}$ is the rate of change of the number of individuals recovered over time. \end{itemize} Because there is no canonical choice of $N$, we transform the system \eqref{eq0} by dividing by $N$ and considering $S(t)=\bar{S}(t)/N$, $I(t)=\bar{I}(t)/N$ and $R(t)=\bar{R}(t)/N$. It is customary to choose $N=10^6$ for convenience, but this is just an arbitrary choice: analyses of smaller communities or cities involve fewer individuals, while $10^6$ is common because country populations are counted in multiples of $10^6$. With these notations \eqref{eq0} becomes \begin{equation}\label{eq1} \begin{cases} \frac{d S}{dt}=-\beta SI \\ \frac{d I}{dt}=\beta SI-\gamma I \\ \frac{d R}{dt}=\gamma I \end{cases} \end{equation} where $\beta=\bar{\beta}$ and $\gamma$ is the same as in \eqref{eq0}. Notice that now we actually have $S(t)+I(t)+R(t)=S_0+I_0+R_0=1$ for all $t\ge0$. Since we are interested in the inverse problem, namely determining the parameters $\beta,\gamma$ from observations, we state this as a formal mathematical result. \begin{proposition}\label{p:1} Referring to the system \eqref{eq1}, if we know $I_0,S_0$ and the values $I(t_1), S(t_1)$ for some $t_1>0$, these determine uniquely the parameters $\beta$ and $\gamma$ of the system. \end{proposition} Notice the main assumption here: the parameters $\beta, \gamma$ do not change in time.
\begin{proof} The first step is to notice that, since $\beta, \gamma$ are constant in time, \begin{equation*} \frac{I'(t)}{S'(t)}=-\frac{\beta S(t)I(t)-\gamma I(t)}{\beta S(t)I(t)}=-1+\frac{\gamma}{\beta S(t)}, \end{equation*} which in turn gives \begin{equation*} I'(t)=-S'(t)+\frac{\gamma}{\beta}\frac{S'(t)}{S(t)}, \end{equation*} and finally integrating this shows that (here we denote $\rho=\gamma/\beta$) \begin{equation*} I(t)+S(t)-\rho \log(S(t)) \text{ is constant in }t. \end{equation*} In particular, this means that \begin{equation}\label{e0:1} I(t)+S(t)-\rho\log(S(t))=I_0+S_0-\rho \log(S_0). \end{equation} Typically the initial value $S_0$ is close to $1$ and $I_0$ is relatively small. In particular, if we assume that the epidemic eventually ends, then in the limit $I(t)=0$ and thus $S(t)$ solves the equation \begin{equation} S-\rho \log(S)=I_0+S_0-\rho \log(S_0).\label{eq2} \end{equation} In particular, if we assume that $I(t_\infty)=0$ and $S(t)$ converges as $t\to t_\infty$, then we get in the limit that $S(t_\infty)$ solves \eqref{eq2}. One consequence of this argument is that for all times $0\le t\le t_{\infty}$ we have $S(t)-\rho\log(S(t))\le \alpha:= I_0+S_0-\rho \log(S_0)$. Another important consequence of this model is that if we take $S_0$ and $I_0$ fixed (obviously $R_0$ is then also determined) and, for a given time $t=t_1>0$, we know $S(t_1)$ and $I(t_1)$ (and therefore $R(t_1)$ as well), then we can determine uniquely the parameters $\beta$ and $\gamma$. Indeed, this is clearly seen from \eqref{e0:1}, which gives \[ \rho=\frac{I_0+S_0-I(t_1)-S(t_1)}{\log(S_0)-\log(S(t_1))}. \] On the other hand, substituting \eqref{e0:1} into the first line of \eqref{eq1}, we obtain \begin{equation}\label{e0:S} \frac{dS}{dt}=-\beta S(I_0+S_0-\rho\log(S_0) - S +\rho \log(S)). \end{equation} The problem is that we cannot integrate this explicitly to obtain an analytic expression for $S(t)$.
However, what we can still show is that knowing $I_0,S_0,I(t_1),S(t_1)$ determines the parameter $\beta$. As we already pointed out, we know how to determine $\rho=\gamma/\beta$, thus we can rewrite \eqref{e0:S} in the form \begin{equation}\label{e0:S2} \frac{S'}{S(I_0+S_0-\rho\log(S_0)-S+\rho\log(S))}=-\beta. \end{equation} Now, for $\alpha:=I_0+S_0-\rho\log(S_0)>0$ and $\rho>0$, we define, for $x$ such that $\alpha>x-\rho \log(x)$, \[ \Phi(x)=\int_x^1\frac{ds}{s(\alpha-s+\rho \log(s))} \] and notice that, integrating \eqref{e0:S2} using this function, we arrive at \[ \Phi(S(t_1))-\Phi(S(0))=\beta t_1, \] from which it is clear that $\beta$ is completely determined by $S(t_1),S_0,I_0$. Knowing $\beta$ and $\rho$, we can immediately solve for $\gamma=\rho\beta$, thus all parameters are determined. \end{proof} \section{The neural network and the parameters regime}\label{s:2} Our next goal is to estimate the parameters $\beta,\gamma$ of the SIR model. There are two basic ideas here. The first is to train a neural network on a typical inverse problem. The second is to use this neural network, combined with the data, to estimate the regimes of the parameters. In the real world the parameters do not stay constant; they change slightly, and we would like to capture part of this behavior. We combine the neural network with a day-by-day estimate to get an indication of the regimes for $\beta$ and $\gamma$. In fact, we try to detect the regimes where the parameters stay more or less constant. As we will see, the regime change is confirmed by the quarantine imposed as a measure to fight the virus. \subsection{The neural network} To deal with the parameter estimates, we do the following. First we discretize $\beta$ by considering 200 points equally spaced in the interval $[0.1, 1.5]$, and for $\gamma$ we consider 100 points equally spaced in the interval $[0.05, 0.67]$.
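As an illustration, the grid-and-simulate step can be sketched in Python. This is a minimal sketch, not the code used in the paper: the integrator (a hand-rolled RK4 with daily steps) and all function names are our own assumptions.

```python
import numpy as np

def sir_rhs(y, beta, gamma):
    """Right-hand side of the normalised SIR system S' = -b*S*I, I' = b*S*I - g*I, R' = g*I."""
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def simulate_sir(beta, gamma, I0=2e-6, days=50, dt=1.0):
    """Daily (S, I, R) trajectory from (1 - I0, I0, 0) via classical RK4 steps."""
    y = np.array([1.0 - I0, I0, 0.0])
    traj = [y]
    for _ in range(days):
        k1 = sir_rhs(y, beta, gamma)
        k2 = sir_rhs(y + 0.5 * dt * k1, beta, gamma)
        k3 = sir_rhs(y + 0.5 * dt * k2, beta, gamma)
        k4 = sir_rhs(y + dt * k3, beta, gamma)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y)
    return np.array(traj)  # shape (days + 1, 3)

# Training pairs: rows (t, S, I, R) -> targets (beta, gamma).
betas = np.linspace(0.1, 1.5, 200)
gammas = np.linspace(0.05, 0.67, 100)
X, Y = [], []
for b, g in [(betas[0], gammas[0]), (betas[-1], gammas[-1])]:  # the full 200 x 100 grid in practice
    traj = simulate_sir(b, g)
    for t, (S, I, R) in enumerate(traj):
        X.append((t, S, I, R))
        Y.append((b, g))
```

Each simulated trajectory contributes one labelled row per day, so the full grid yields $200\times100\times51$ training examples.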
These intervals were chosen based on a priori analysis and much experimentation. Next, we solve the system of differential equations for each pair $(\beta_i, \gamma_j)$, for 50 days, for a population of $10^6$ individuals, and we store the results in a dataset. We train a neural network on the resulting dataset so that the input is of the form \[ XTrain=(Day,\;\; \#Susceptible, \;\; \#Infected, \;\; \#Recovered) \] or, in the terminology of the previous paragraph, \[ XTrain=(t,S(t),I(t),R(t))\text{ where }t=0,1,\dots, 50, \] and the output is exactly the pair \[ YTrain=(\beta,\gamma) \] which generated the solution above. We fixed the initial conditions $S_0=1-I_0$, $I_0=2/N$ and $R_0=0$. We started with $2$ infected people because there were two initial individuals who had traveled outside the country to exposed regions and were the first spotted as the original spreaders. The neural network we used is of the following form: \begin{enumerate} \item Dense 64, activation ReLU, with input dimension=4, dropout =.2 \item Dense 128, activation ReLU, dropout=.2 \item Dense 256, activation ReLU, dropout=.2 \item Dense 512, activation ReLU, dropout=.2 \item output $(\beta, \gamma)$ with optimizer Adam and loss MSE. \end{enumerate} \subsection{The day by day fit of the parameters} Before we move on to the results of the day by day estimates, we point out that the result of Proposition~\ref{p:1} guarantees that the estimated parameters should be well-determined by the network, as each triple $(t,S(t),I(t))$ determines the parameters $(\beta,\gamma)$ in a unique way. Once the model was trained, we used it to predict the day by day $\beta$ and $\gamma$ for the COVID-19 numbers reported in Romania. What this means is that we try to predict a set of parameters such that for a given day $t$, what we observe is exactly the number of susceptible, infected and recovered reported on that day by the officials.
Therefore, we assume and try to predict a single set of parameters for the time period $[0,t]$, $t$ here being the corresponding day. The results are represented in the chart below. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{parameters.png} \caption{The day by day prediction of the neural network trained on SIR models.} \label{f:1} \end{figure} What this suggests is that we can identify three regimes. The first one is characterized by uncertainty, with a high infection spread and large variation of the two parameter values from day to day. This covers approximately the first 15 days. This may be due to the fact that, even though not many cases had been reported yet and the restrictions had not yet been imposed, people were starting to become aware of the severity of the situation. On the other hand, the last 25 days have a lower volatility for both parameters, which can be a consequence of the measures taken by the authorities. The intermediate regime can be considered a transition between the first one and the last one. This is roughly centered on the 30th day, with a period of $\pm$10 days of regime switching. We should also comment on the fact that the available data shows the number of individuals who have tested positive, but it is very likely that the real number of infected people is in fact much higher, as there are also asymptomatic individuals and people who are not being tested although they present the specific symptoms, so they are not part of the official reports. Another aspect that should be taken into consideration is that the long incubation period characteristic of this virus introduces a delay between the moment a person is infected and the moment that person tests positive. In order to reduce the effects of the above deficiencies, we consider another model below which accounts for the number of deceased as a separate category.
In this process we keep in mind all the points discussed until now, an important one being that the parameters are not constant over the whole period; they can be at most piecewise constant, reflecting in fact the regime imposed on the population. \section{The SIRD model}\label{s:3} In the sequel we propose a model in which we modify the SIR model in two different directions. The first is to consider an interaction between the recovered and the susceptible, and the second is to account for the deaths in this analysis. If we want to take into account the interaction between the recovered and the susceptible, we really need to separate the deceased from the recovered. At the end of the day the number of deaths is probably the most reliable number we can account for, as the number of infected people is largely unknown and the number of recovered is also largely unknown. Thus we now have four variables changing with time: $S(t)$, $I(t)$, $R(t)$ and $D(t)$, where $R(t)$ is the proportion of recovered and alive people while $D(t)$ is the proportion of deceased people. We set the interactions as follows \begin{equation}\label{sird} \begin{cases} \frac{d S}{dt}=-\beta S I\\ \frac{dI}{dt}=\beta S I -(\gamma_1+\gamma_2) I \\ \frac{dR}{dt}=\gamma_1 I\\ \frac{dD}{dt}=\gamma_2 I. \end{cases} \end{equation} Notice that in this setup the removed population bifurcates into the recovered, accounted for by $R$, and the deceased, accounted for by $D$. The point of this model is to see how many people die in the long run from this disease. Of course this is not complete, as there are other factors which should be taken into account, but we are going to use this simple model. Notice that taking $R+D$ together as the removed compartment brings us back to the classical SIR model. There are two points to this model: the first is that we separate the deceased from the recovered, which are mixed together in the classical SIR model; the second is that the dynamics can be reduced to a single equation for the most reliable quantity, the number of deaths.
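The SIRD system above can be integrated numerically in the same way as the SIR system. The sketch below is our own illustration (hand-rolled RK4 step, hypothetical parameter values), not the code used in the paper.

```python
import numpy as np

def sird_rhs(y, beta, gamma1, gamma2):
    """Right-hand side of the SIRD system: I's outflow splits into recovery and death."""
    S, I, R, D = y
    return np.array([
        -beta * S * I,
        beta * S * I - (gamma1 + gamma2) * I,
        gamma1 * I,
        gamma2 * I,
    ])

def simulate_sird(beta, gamma1, gamma2, I0=2e-6, days=160, dt=1.0):
    """Daily (S, I, R, D) trajectory from (1 - I0, I0, 0, 0) via RK4 steps."""
    y = np.array([1.0 - I0, I0, 0.0, 0.0])
    out = [y]
    for _ in range(days):
        k1 = sird_rhs(y, beta, gamma1, gamma2)
        k2 = sird_rhs(y + 0.5 * dt * k1, beta, gamma1, gamma2)
        k3 = sird_rhs(y + 0.5 * dt * k2, beta, gamma1, gamma2)
        k4 = sird_rhs(y + dt * k3, beta, gamma1, gamma2)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y)
    return np.array(out)  # shape (days + 1, 4)
```

Note that summing the $R$ and $D$ columns reproduces the removed compartment of the classical SIR model, and the columns satisfy $R(t)=(\gamma_1/\gamma_2)\,D(t)$ when both start from zero, as remarked above.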
We are going to manipulate these equations and reduce the computations to a single equation involving only one of these quantities, the most reliable one, namely $D(t)$. To do this we will write all the other quantities as functions of $D$ as follows: \begin{equation*} S=u(D), I=v(D), R=w(D). \end{equation*} The easiest to deal with is $R$, because from the last two equations we get \begin{equation*} \frac{dR}{dt}=\frac{\gamma_1}{\gamma_2}\frac{dD}{dt}, \end{equation*} from which we deduce that $R(t)=\frac{\gamma_1}{\gamma_2}(D(t)-D_0)+R_0$. Now we deal with the function $u$ from $S(t)=u(D(t))$. Dividing the first equation by the last we get \begin{equation*} u'(D)=-\frac{\beta}{\gamma_2}u(D), \end{equation*} which can be integrated and gives $S$ in terms of $D$ as \begin{equation*} S=S_0\exp\left( -\frac{\beta}{\gamma_2}(D-D_0) \right). \end{equation*} On the other hand this allows us to solve for $I=v(D)$. First we notice that \begin{equation*} \frac{dS}{dt}+\frac{dI}{dt}=-(\gamma_1+\gamma_2)I=-\frac{\gamma_1+\gamma_2}{\gamma_2}\frac{dD}{dt}, \end{equation*} from which we deduce that \begin{equation*} S+I+\frac{\gamma_1+\gamma_2}{\gamma_2}D=S_0+I_0+\frac{\gamma_1+\gamma_2}{\gamma_2}D_0. \end{equation*} Therefore we obtain (viewing everything as a function of $D$) \begin{equation}\label{e:D} \frac{dD}{dt}=\gamma_2 I_0-(\gamma_1+\gamma_2)(D-D_0)+\gamma_2 S_0\left[1-\exp\left( -\frac{\beta}{\gamma_2}(D-D_0) \right)\right]. \end{equation} This last deduction works when the parameters $\beta,\gamma_1,\gamma_2$ are all assumed constant in time. However, if they vary with time, the equation is a little different; the main equation now becomes \begin{equation}\label{e:D2} D'(t)=\gamma_2(t) I_0-\gamma_2(t)\int_0^t\left(\frac{\gamma_1(s)}{\gamma_2(s)}+1\right)D'(s)ds+\gamma_2(t) S_0\left[1-\exp\left( -\int_0^t\frac{\beta(s)}{\gamma_2(s)}D'(s)ds \right)\right]. \end{equation} The main idea from here is to use the data on the death cases to estimate the coefficients involved.
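For constant parameters, the closed equation for $D$ is a scalar ODE and can be integrated with any standard one-step scheme. A minimal sketch (our own illustration, using an Euler step with sub-daily increments and hypothetical parameter values):

```python
import math

def death_rate(D, beta, gamma1, gamma2, I0=2e-6, S0=None, D0=0.0):
    """Right-hand side of the closed death equation, for constant parameters:
    D' = g2*I0 - (g1+g2)*(D-D0) + g2*S0*(1 - exp(-(beta/g2)*(D-D0)))."""
    if S0 is None:
        S0 = 1.0 - I0
    return (gamma2 * I0
            - (gamma1 + gamma2) * (D - D0)
            + gamma2 * S0 * (1.0 - math.exp(-(beta / gamma2) * (D - D0))))

def integrate_deaths(beta, gamma1, gamma2, days=60, steps_per_day=24):
    """Euler integration of D' = death_rate(D); returns the daily values of D."""
    dt = 1.0 / steps_per_day
    D, out = 0.0, [0.0]
    for _ in range(days):
        for _ in range(steps_per_day):
            D += dt * death_rate(D, beta, gamma1, gamma2)
        out.append(D)
    return out
```

The returned daily curve is what gets compared against the reported death counts in the fitting step described later.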
As we already pointed out, the number of perished people is the most reliable data, since all the other data is very rough. For instance, the proportion of infected people is grossly underestimated, since there are probably many more infected people than the reported cases, which are usually tested in hospitals. It is also true that even the death numbers are probably overestimated, as many people perish due to pre-existing conditions which lead to complications, which in the end makes the task of deciding the cause of death much more difficult. Nevertheless, this is the most reliable data we can trust. From the technical standpoint, equation \eqref{e:D2} is not easy to handle, and we will instead use equation \eqref{e:D} together with a regime switch and a piecewise parameter fit. In other words, we fit the number of deceased on pieces where we assume that the parameters do not change. \section{Regime switch}\label{s:4} We have discussed the existence of different regimes in the spread of the disease, caused by the measures that were taken, which had a significant impact on the evolution of the infection rate. We choose an approach in which we have separate scenarios for the free-spread period and for the quarantine period, as well as a transition period between these two scenarios. First we define the sigmoid function $\sigma(x,c,s)=\frac{1}{1+e^{s(x-c)}}$, where $c$ models the turning point and $s$ models how swiftly the transition between the two regimes takes place.
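In code, the switch and a regime-blended parameter read as follows (a minimal sketch; `blend` and its default values are our own naming, with $c=30$ and $s=0.1$ taken from the fit described below):

```python
import math

def sigma(x, c, s):
    """Logistic switch: close to 1 well before the turning point c, close to 0 well after."""
    return 1.0 / (1.0 + math.exp(s * (x - c)))

def blend(t, p_regime1, p_regime2, c=30.0, s=0.1):
    """Parameter value moving from its first-regime value to its second-regime value around day c."""
    w = sigma(t, c, s)
    return p_regime1 * w + p_regime2 * (1.0 - w)
```

With $c=30$ and $s=0.1$, `blend` stays near the first-regime value during the first weeks and settles on the second-regime value roughly a month after day 30.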
Considering two different regimes and a transition period between them, the \textbf{SIRD model} becomes \begin{equation}\label{sird_regimes} \begin{cases} \frac{d S}{dt}=-(\beta_1 \cdot\sigma(t,c,s)+\beta_2 \cdot (1-\sigma(t,c,s)))\cdot S\cdot I\\ \frac{dI}{dt}=(\beta_1\cdot \sigma(t,c,s)+\beta_2\cdot (1-\sigma(t,c,s)))\cdot S\cdot I \\ \;\;\;\;\;\;-(\gamma_{11} \cdot \sigma(t,c,s)+\gamma_{21} \cdot (1-\sigma(t,c,s)))\cdot I -(\gamma_{12} \sigma(t,c,s)+\gamma_{22} (1-\sigma(t,c,s)))I\\ \frac{dR}{dt}=(\gamma_{11}\cdot \sigma(t,c,s)+\gamma_{21}\cdot(1-\sigma(t,c,s)))\cdot I\\ \frac{dD}{dt}=(\gamma_{12}\cdot \sigma(t,c,s)+\gamma_{22}\cdot(1-\sigma(t,c,s)))\cdot I. \end{cases} \end{equation} where \begin{itemize} \item $\beta_1,\; \beta_2 $ represent the infection rates in the first and the second regime, respectively; \item $\gamma_{11},\; \gamma_{21} $ represent the recovery rates in the first and the second regime, respectively; \item $\gamma_{12},\; \gamma_{22} $ represent the fatality rates in the first and the second regime, respectively. \end{itemize} With the same notations and the same logic we will consider $\beta, \; \gamma_1,\; \gamma_2$ in \eqref{e:D} as follows: \begin{itemize} \item $\beta=\beta_1 \cdot\sigma(t,c,s)+\beta_2 \cdot (1-\sigma(t,c,s))$ \item $\gamma_1=\gamma_{11} \cdot\sigma(t,c,s)+\gamma_{21} \cdot (1-\sigma(t,c,s))$ \item $\gamma_2=\gamma_{12} \cdot\sigma(t,c,s)+\gamma_{22} \cdot (1-\sigma(t,c,s))$ \end{itemize} We have previously shown that we can derive all the parameters from the number of dead people, as it is the most reliable quantity, using equation \eqref{e:D}.
In other words, we will use the above substitutions for $\beta, \; \gamma_1,\; \gamma_2$ in equation \eqref{e:D} and we aim at finding the solution of the following problem: \begin{equation}\label{e:min1} \begin{aligned} & \underset{\beta_1,\; \beta_2,\; \gamma_{11},\; \gamma_{12},\; \gamma_{21},\; \gamma_{22} }{\text{minimize}} & & \sum _{t=1}^{t=n} \bigg(D(t)-Data(t)\bigg)^2 \\ \end{aligned} \end{equation} It is important to mention that we fix some of the parameters. We take them as follows: \begin{itemize} \item $c=30$, based on the moment when the recommendations were made or the restrictions were imposed, and when people started to become aware of the severity of this virus; \item $s=0.1$, representing how swift the transition between the two regimes was; \item $n=60$: we solve the problem using the data from the first $60$ days; \item the optimization problem is solved by a grid search around the average values of $\beta,\gamma$ obtained from the neural network on each of the regimes outlined in the section above and plotted in Figure~\ref{f:1}. In particular, we use here the fact that $\gamma_{11}+\gamma_{12}=\gamma_1$ for the first regime, and similarly $\gamma_{21}+\gamma_{22}=\gamma_2$ for the second regime. We set $\gamma_{11}=p \gamma_{1}$ and $\gamma_{12}=(1-p)\gamma_{1}$, and thus we in fact search for $p$ on a certain grid for the best fit in \eqref{e:min1}. \end{itemize} We point out that the last two parameters depend on the country, as different countries have acted differently.
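The grid search itself can be sketched as follows. This is our own illustration: an Euler discretization of the two-regime system, with hypothetical grid values centered on the neural-network averages; the paper's actual grids and discretization may differ.

```python
import itertools
import numpy as np

def sigma(t, c=30.0, s=0.1):
    """Logistic switch between the two regimes, centred on day c."""
    return 1.0 / (1.0 + np.exp(s * (t - c)))

def simulate_deaths(beta1, beta2, g11, g12, g21, g22, days=60, I0=2e-6):
    """Euler integration of the two-regime SIRD system; returns the daily D(t)."""
    S, I, D = 1.0 - I0, I0, 0.0
    deaths = [D]
    for t in range(days):
        w = sigma(t)
        beta = beta1 * w + beta2 * (1.0 - w)
        g1 = g11 * w + g21 * (1.0 - w)   # blended recovery rate
        g2 = g12 * w + g22 * (1.0 - w)   # blended fatality rate
        dS = -beta * S * I
        dI = beta * S * I - (g1 + g2) * I
        dD = g2 * I
        S, I, D = S + dS, I + dI, D + dD
        deaths.append(D)
    return np.array(deaths)

def grid_search(data, beta1_grid, beta2_grid, g1, g2, p_grid):
    """Minimise the squared error of D(t) against the data, with g11 = p*g1, g12 = (1-p)*g1."""
    best, best_loss = None, np.inf
    for b1, b2, p in itertools.product(beta1_grid, beta2_grid, p_grid):
        D = simulate_deaths(b1, b2, p * g1, (1.0 - p) * g1,
                            p * g2, (1.0 - p) * g2, days=len(data) - 1)
        loss = float(np.sum((D - data) ** 2))
        if loss < best_loss:
            best, best_loss = (b1, b2, p), loss
    return best, best_loss
```

On synthetic data generated by the same simulator, the search recovers the generating parameters exactly when they lie on the grid.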
Solving the optimization problem under these conditions we obtain: \begin{figure}[H] \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pr_60_60.png} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pr_60_90.png} \end{subfigure} \captionsetup{justification=centering,margin=1.3cm} \caption{The parameters are as follows: $\beta_1=0.56105, \gamma_{11}=0.28961, \gamma_{12}=0.00591, \beta_2=0.240419,\gamma_{21}= 0.12958,\gamma_{22}=0.0001,c=30, s=0.1$. \\ The right figure gives the estimated prediction curve together with the $\pm 10\%$, $\pm 15\%$ and $\pm20\%$ estimates in the shaded areas. } \end{figure} We plot the results for two different timeframes: 60 days and 90 days. An important aspect to keep in mind is that the evolution of the disease cannot be predicted accurately over a long timeframe, as new restrictions or other measures, treatments and medication developed over time definitely impact the model. Considering this, the prediction of the evolution of an infectious disease should always incorporate into the model the new information that arises as time passes, and we should split the timeframe into separate regimes. \section{Predictions}\label{s:5} Starting on the 15$^{th}$ of May, the Romanian Government began to gradually relax the COVID-19 restrictions, and on the 15$^{th}$ of June the second round of relaxations was issued. During this time, various restrictions have been lifted and a new package of relaxation measures has been released every two weeks. In order to generate reliable predictions, we now have to take into consideration the effect of lifting the restrictions and adapt the model to this new regime. Therefore, we analyze how the relaxation measures have impacted the parameters of the model and we generate a new prediction. We expect the number of deaths to grow.
With the same logic that we used before, we consider these new parameters: \begin{itemize} \item $\beta_3$ represents the infection rate in the third regime; \item $\gamma_{31} $ represents the recovery rate in the third regime; \item $\gamma_{32} $ represents the fatality rate in the third regime. \end{itemize} Now, in order to make a new prediction, we will consider $\beta, \; \gamma_1,\; \gamma_2$ in \eqref{e:D} as follows: \begin{itemize} \item $\beta=[\beta_1 \cdot\sigma(t_1,c_1,s_1)+\beta_2 \cdot (1-\sigma(t_1,c_1,s_1))]\cdot \sigma(t_2,c_2,s_2)+\beta_3\cdot(1-\sigma(t_2,c_2,s_2)) $ \item $\gamma_1=[\gamma_{11} \cdot\sigma(t_1,c_1,s_1)+\gamma_{21}\cdot (1-\sigma(t_1,c_1,s_1))]\cdot \sigma(t_2,c_2,s_2)+\gamma_{31} \cdot (1-\sigma(t_2,c_2,s_2))$ \item $\gamma_2=[\gamma_{12} \cdot\sigma(t_1,c_1,s_1)+\gamma_{22}\cdot (1-\sigma(t_1,c_1,s_1))] \cdot\sigma(t_2,c_2,s_2)+\gamma_{32} \cdot (1-\sigma(t_2,c_2,s_2))$ \item $c_2=115$, corresponding to the 15$^{th}$ of June, when the last round of relaxations was put in place; in particular, the reopening of restaurants, bars, hotels and pools was regulated. With this we take $s_2=.22$, which corresponds to a switch of approximately 5 days from one regime to the other. \end{itemize} Solving the equation with the third regime included, we obtain the following prediction: \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{pr_with_rel160.png} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{pr_with_rel189.png} \end{subfigure} \caption{The parameters for each regime are as follows: $\beta_1=0.56105,\gamma_{11}=0.28961,\gamma_{12}=0.00591, \beta_2=0.240419,\gamma_{21}= 0.12958,\gamma_{22}=0.0001,\beta_3=0.4388, \gamma_{31}=0.1081,\gamma_{32}=0.00127$, $c_2=115$, $s_2=0.22$. \\ This is the prediction based on the three regimes. The confidence intervals shown in blue are based on $\epsilon=10\%, 15\%, 20\%$ adjustment done as follows.
With the parameters from above, we constructed a grid around the mean values of $\beta_3$ and $\gamma_3$, took the ranges $(1\pm \epsilon)\beta_3,(1\pm \epsilon)\gamma_3$, and split each into a mesh with $10$ values. We then evaluate the number of deaths for each such combination and arrange the results according to their difference from the number of deaths predicted with $\beta_3,\gamma_3$. We take the values for which the number of deaths at the end of the period is the most extreme, both underestimated and overestimated; these give our $(\beta_{3,\max/\min},\gamma_{3,\max/\min})$ and hence the boundary curves for each $\epsilon=10\%, 15\%, 20\%$. As a note, for this training we kept the ratio $\gamma_{31}/\gamma_{32}$ constant. \\ The left figure predicts the evolution until the 1st of August based on the training up to the 15th of July, while the right figure gives a prediction until the 1st of September. } \label{fig:4} \end{figure} In the new regime, the model predicts that in 30 days (with the fitting based on 141 days, i.e.\ up to July 15th) the number of deaths will reach \textbf{2318} on the 1$^{st}$ of August (the 160th day), \textbf{2478} by August 15th and \textbf{2600} by September 1$^{st}$, with the corresponding confidence intervals outlined above. It is worth comparing the previous scenario, in which we had only two regimes, with the new situation, in which we have three such regimes: \begin{figure}[H]\label{fig:5} \centering \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pr_without_rel160.png} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\linewidth]{pr_with_rel160.png} \end{subfigure} \caption{The figures of the regime without the adjustment of the parameters (left) and with the adjustment of the parameters (right). One can see that the prediction without the adjustment no longer fits the real data.
\\ The left figure gives the estimated prediction curve together with the $\pm 10\%$, $\pm 15\%$ and $\pm20\%$ estimates in the shaded areas. The right figure is the same as the left picture in Figure~\ref{fig:4}.} \end{figure} The two graphs above show that the partial lifting of the restrictions will have a negative impact on the number of deaths caused by COVID-19. In the initial approach the predicted number of deaths on the 1$^{st}$ of August would have been \textbf{1370}, while in the second approach, which includes the relaxation period, the predicted number of deaths increases significantly, to \textbf{2318}. We can easily notice that the first approach, having only one regime when the pandemic starts and one regime when restrictions are imposed, seems to underestimate the evolution of the epidemic. Considering the social and economic aspects and the behavior of the population and governments in the affected countries, we conclude that the approach which adds the relaxation regime fits the real situation better. We now use the same parameters for the \textbf{SIRD model}; Figure~\ref{fig:6} plots the evolution of all the main categories. \begin{figure}[H] \centering \begin{subfigure}[b]{0.5\linewidth} \includegraphics[width=0.8\linewidth]{sird189.png} \end{subfigure} \caption{The predicted evolution using the SIRD model with three different regimes. The estimates of the main categories for the period Feb 26 -- Sep 1$^{st}$, which amounts to 190 days. Probably the most interesting aspect is that by then 80\% of the population has already been exposed to the virus. We should note that part of the assumptions here is that some form of immunity is built up; reinfection is not taken into account.} \label{fig:6} \end{figure} \section{Conclusions}\label{s:6} We now summarize what we did here. The main idea is that, within the models we used, be it SIR or the more realistic SIRD, we split the problem according to various regimes.
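The three-regime SIRD evolution described above can be sketched end to end. A minimal sketch, assuming the standard SIRD system $\dot S=-\beta SI/N$, $\dot I=\beta SI/N-(\gamma_1+\gamma_2)I$, $\dot R=\gamma_1 I$, $\dot D=\gamma_2 I$ and the logistic form $\sigma(t,c,s)=1/(1+e^{s(t-c)})$; the population $N$, the seed $I_0$ and the step size are placeholders, so the run illustrates the mechanics rather than reproducing the fitted death counts:

```python
import math

def sigma(t, c, s):
    # Assumed logistic switch: ~1 well before day c, ~0 well after.
    return 1.0 / (1.0 + math.exp(s * (t - c)))

def blend3(t, x1, x2, x3, c1=30.0, s1=0.1, c2=115.0, s2=0.22):
    # Three-regime blend, mirroring the formulas used for beta, gamma_1, gamma_2.
    w1, w2 = sigma(t, c1, s1), sigma(t, c2, s2)
    return (x1 * w1 + x2 * (1.0 - w1)) * w2 + x3 * (1.0 - w2)

def sird(days, N=19_000_000, I0=1.0, dt=0.05):
    # Forward-Euler integration of the SIRD system; N, I0 and dt are placeholders.
    S, I, R, D = N - I0, I0, 0.0, 0.0
    t = 0.0
    while t < days:
        beta = blend3(t, 0.56105, 0.240419, 0.4388)   # infection rate
        g1 = blend3(t, 0.28961, 0.12958, 0.1081)      # recovery rate
        g2 = blend3(t, 0.00591, 0.0001, 0.00127)      # fatality rate
        flow = beta * S * I / N
        dS, dI, dR, dD = -flow, flow - (g1 + g2) * I, g1 * I, g2 * I
        S, I, R, D = S + dt * dS, I + dt * dI, R + dt * dR, D + dt * dD
        t += dt
    return S, I, R, D
```

By construction the increments sum to zero, so $S+I+R+D$ stays equal to $N$ up to floating-point rounding, which is a convenient sanity check on the integration.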
In this paper we take three regimes. The first is the regime before any measures were taken. The second regime is the one in which the quarantine was imposed on the population. The third regime we consider is the one following the relaxation. The transition from one regime to another is modeled with the help of a logistic function. The fit is done using the number of deaths and the SIRD model. The search for the parameters is done around the values of $\beta$ provided by the neural network constructed on the basis of the simpler SIR model. We believe that this methodology is a general one and can be extended to any country, provided that we have data, in particular some information about the switch times between the regimes. As a disclaimer, several assumptions were made here. One of them is that people build up immunity to this virus and reinfections are negligible. \bibliographystyle{amsalpha}
\section{Introduction} The search for boson and fermion-boson stars is a subject of current interest in modern Astrophysics \cite{jetzer,liddle1,liddle2,liddle3,urena}. Interest in the theme is intensified by the circumstances surrounding the search for purely bosonic stars. As is known, the Einstein-Klein-Gordon (EKG) equations have no static solutions for real scalar fields \cite{jetzer}. Also, although the complex EKG equations admit centrally symmetric solutions, these need to be time dependent in a stationary (harmonic) form. The classic references on fermion-boson stars \cite{liddle1,liddle2,liddle3,urena} investigated the existence and stability of such systems. The discussion in these works defined conditions for the existence of such objects in a general setting, but one in which the interactions between the scalar fields and matter were not yet considered. In the present work, we relax the assumption of no interaction between the scalar field and matter for the considered systems. This is done in order to inspect the possibility of solutions in which the scalar field becomes static, that is, time independent, when real scalar fields are considered. In order to simplify the discussion, we write the EKG equations for matter described by simple constitutive relations, including that of a photon gas. The interaction between the scalar field and the gas is introduced by assuming that the scalar field source is proportional to the matter energy density. First, matter close to the pressureless limit is considered. The initial condition for the energy density at the coordinate origin, for a centrally symmetric solution, was fixed to a given value. Further, the equations were solved by specifying a tentative scalar field value at the origin. These first-step solutions were then inspected for the behavior of the scalar field at large radial distances.
This behavior emerged in only two types: a) one in which the scalar field tends to become singular and positive at some radial distance, and b) another in which the field tends to become singular but negative at some radius value. We then noted that, if the field tends to positive (negative) values, reducing (increasing) its initial value at the origin reduced the tendency to positive (negative) values, while at the same time always increasing the radius at which the field becomes singular. Therefore, after properly selecting the initial values by iterating the above described process, a solution in which the scalar field attains a Yukawa-like behavior at infinity can be approached. The resulting energy and pressure of the matter became centered in a bounded spherical vicinity of the origin. As for the temporal $v$ and the radial $u$ diagonal components of the metric and its inverse, respectively, they both approach constants at large distances, thus reproducing the Minkowski space-time. At short distances the temporal metric component defines a gravitational potential which binds the matter to the origin, at which its minimum sits. These results indicate the existence of static solutions of the EKG equations in the presence of matter, when it interacts with the scalar field. The stability of the solutions is preliminarily investigated. For this purpose, a usual stability criterion for simpler stars constituted by fluids is verified to hold \cite{teukolsky}. Second, we also examined the solution associated with matter described by a photon-like gas. The procedure for obtaining the solution was identical. As before, after fixing the matter density and the scalar field at the origin, the previously described process to define a decreasing scalar field at large distances proceeded in close analogy. However, a different outcome arose in connection with the temporal component of the metric at large distances.
Surprisingly, it acquired a nearly linear radial dependence at large distances, in place of the expected constant behavior associated with the Minkowski space-time. This potential corresponds to an approximately constant gravitational force on any small body at large distances. Such behavior should be associated with the special massless character of the photon-like field, which might not allow the gravitational forces to fully confine the massless particles as they bind massive ones. The above results strongly motivate considering a photon-like field, associated with dark matter, to describe the velocity $vs$ radius curves of galaxies. In addition, although it does not correspond to the current mainstream point of view that real photons can act as dark matter \cite{peebles,rees}, we also examine this question, seeking unnoticed possibilities. In a preliminary study of these issues, we considered the determination of the parameters of the considered star in order to reproduce the experimentally observed velocity $vs$ radius curves. For this purpose the mass of the scalar field was fixed to a value defining a galaxy of typical size. The resulting rotation curve was of type C, among the three kinds A, B and C into which galaxy rotation curves are classified \cite{classif}. That is, after growing rapidly at small radial distances, the velocity curve continues to grow, but with a smaller slope. Then, the energy density of the photon-like particles at radial distances near the outside of the galaxy was evaluated and compared with the CMB energy density. The obtained energy density at such points turned out to be enormously higher than the CMB energy density. Therefore, this first check of the possibility that dark matter is constituted by real photons, rather than by dark-matter-like massless particles, gave a negative result. However, some causes for the discrepancy in the energy density can be conceived.
One of them is that the EKG equations solved here consider the photons as obeying a free dispersion relation, which leads to the employed equation of state: $\epsilon=3 \, p$. In this sense, it can be noted that photons under gravitational attraction can be imagined to acquire some of the properties of photons confined in a resonant cavity, for example a discrete spectrum, which could have effects similar to a mass for the photons. In order to discuss this possibility, a simple model is presented in which a Newtonian gravitational potential is able to create mass-like terms in the Maxwell equations. These terms explicitly make the photon wave decrease in the direction in which the potential increases. In a coming work, we expect to consider a classical statistical model for photons at each space-time point, whose statistics will be controlled by the local values of the metric. We expect this consideration to introduce an attractive action of the gravitational field upon the photon density. This effect could bind the photons, consequently stopping the growth of the potential and leading to an asymptotically Minkowski space. Upon this, it could be the case that the energy density of the modified galaxy solution tends to the CMB density far from the galaxy \cite{massive}. We also examined the bounds posed on real photons acting as dark matter by the fact that the photon matter should not be observable. In this sense, the Tolman theorem, expressing the temperature of black-body radiation subject to a gravitational field, becomes helpful. Let us assume, as above, that photons are able to be trapped in galaxies by the mentioned self-consistent effects. Then, the natural value of the radiation temperature in zones external to the galaxies is the Cosmic-Microwave-Background one, $T_{cmb}$.
Therefore, from the Tolman theorem it follows that no matter how high a temperature the photon radiation attains at some interior point of the structure, the radiation coming to the Earth from that point will always have a temperature of nearly 3 Kelvin. Thus, it could not be easily observed. This conclusion does not rule out the possibility of real photons playing the role of dark matter. A deeper investigation should however be pursued in order to check whether there is still some way in which the CMB could be in thermal equilibrium with a photon gas internal to the galaxy defining the halo. It should be recognized that the enormous difference between the energy densities allowing the photons to reproduce the rotation curves and the CMB energy density suggests that the answer should be negative. However, the mechanism discussed here has no clear obstruction to working if the dark matter is constituted by photon-like, but genuinely dark, massless particles. In section 2, the Lagrangian of the system is presented and the equations for the time independent spherically symmetric solutions are written. Section 3 presents the derivation of the static solution for nearly pressureless matter. Next, section 4 is devoted to describing the second solution, associated with the photon-like matter. Section 5 considers the preliminary inspection of the possibility that the second solution might describe the velocity $vs$ radius curves in galaxies. The conclusions summarize the content of the work and describe possible extensions. \section{The field equations} Let us consider the metric defined by the following squared interval and coordinates \begin{align} ds^{2} & =\mathit{v}(\rho){dx^{0}}^{2}-u(\rho)^{-1}d\rho^{2}-\rho ^{2}(\sin^{2}\theta\text{ }d\varphi^{2}+d\theta^{2}),\\ x^{0} & =c\text{ }t\text{, \ \ \ }x^{1}=\rho,\\ x^{2} & =\varphi,\text{ \ }x^{3}\equiv\theta, \end{align} where the CGS unit system is employed.
Therefore, the Einstein tensor $G_{\mu\nu}$ components in terms of the functions $u,v$ and the radial variable $\rho$ are evaluated in the form \begin{align} {G_{0}^{0}} & ={\frac{u^{\prime}}{\rho}}-{\frac{1-u}{\rho^{2}}},\\ G_{1}^{1} & ={\frac{u}{v}}{\frac{v^{\prime}}{\rho}}-{\frac{1-u}{\rho^{2}}},\\ {G_{2}^{2}}={G_{3}^{3}} & ={\frac{u}{2v}}{v^{\prime\prime}}+{\frac{uv^{\prime}}{4v}}(\frac{u^{\prime}}{u}-\frac{v^{\prime}}{v})\nonumber\\ & +{\frac{u}{2\rho}}(\frac{u^{\prime}}{u}+\frac{v^{\prime}}{v}). \end{align} The physical system interacting with gravity that will be considered is composed of a scalar field and a gas of matter. The scalar field will also be assumed to interact linearly with an external source associated with it. The action of the field will have the form \begin{equation} {S}_{mat-\phi}=\int L\sqrt{-g} \,\, d^{4}x, \end{equation} with a Lagrangian density given by \begin{equation} {L}={\frac{1}{2}}(g^{\alpha\beta}{\Phi}_{,\alpha}{\Phi}_{,\beta}+m^{2}{\Phi}^{2}+2\text{ }J(\rho)\text{ }\Phi), \label{denslag} \end{equation} in which ${\Phi}_{,\alpha}$ denotes the derivative of $\Phi$ with respect to $x^{\alpha}$. This Lagrangian determines an energy momentum tensor of the form \begin{equation} (T_{mat-\phi})_{\mu}^{\nu}=-\frac{\delta_{\mu}^{\nu}}{2}(g^{\alpha\beta}{\Phi}_{,\alpha}{\Phi}_{,\beta}+m^{2}{\Phi}^{2}+2\text{ }J(\rho)\text{ }\Phi)+g^{\alpha\nu}{\Phi}_{,\alpha}{\Phi}_{,\mu}, \end{equation} which afterwards can be added to the energy momentum tensor of the matter \cite{Weinberg}: \begin{equation} (T_{e,P})_{\mu}^{\nu}=P\,\delta_{\mu}^{\nu}+u^{\nu}u_{\mu}(P+e), \end{equation} to write the total energy momentum tensor as \begin{align} T_{\mu}^{\nu} & =-\frac{\delta_{\mu}^{\nu}}{2}(g^{\alpha\beta}{\Phi}_{,\alpha}{\Phi}_{,\beta}+m^{2}{\Phi}^{2}+2\text{ }J(\rho)\text{ }\Phi)\nonumber\\ & \text{ \ \ \ }+g^{\alpha\nu}{\Phi}_{,\alpha}{\Phi}_{,\mu}+P\text{ }\delta_{\mu}^{\nu}+u^{\nu}u_{\mu}(P+e).
\end{align} Since static field configurations are being sought, the four-velocity reduces in the local rest system to \begin{equation} u^{\mu}=(1,0,0,0). \end{equation} After these definitions, the Einstein equations can be written as follows \begin{equation} G_{\mu}^{\nu}=\kappa\,\,\hspace{0.1mm}T_{\mu}^{\nu}, \label{Einstein} \end{equation} in which both tensors are diagonal and the gravitational constant has the value \begin{equation} \kappa=8\pi\text{ }l_{P}^{2}, \end{equation} in terms of the Planck length $l_{P}=1.61\times10^{-33}$ cm. Their explicit forms are \begin{align} {\frac{u^{\prime}}{\rho}}-{\frac{1-u}{\rho^{2}}} & =-\kappa\text{ }[\frac{1}{2}(u\Phi_{,\rho}^{2}+m^{2}\Phi^{2}+2\text{ }J\text{ }\Phi)+e],\\ {\frac{u}{v}}{\frac{v^{\prime}}{\rho}}-{\frac{1-u}{\rho^{2}}} & =\kappa\text{ }[\frac{1}{2}(u\Phi_{,\rho}^{2}-m^{2}\Phi^{2}-2\text{ }J\text{ }\Phi)+P\,],\\ {\frac{\rho^{2}u}{2}}v^{\prime\prime}+\frac{\rho^{2}}{4}u\text{ }v^{\prime}(\frac{u^{\prime}}{u}-\frac{v^{\prime}}{v})+\frac{\rho}{2}u\text{ }v^{\prime} & =\kappa\text{ }[\frac{1}{2}(u\Phi^{\prime2}+m^{2}\Phi^{2}+2\text{ }J\text{ }\Phi)+P\,],\\ {\frac{\rho^{2}u}{2}}v^{\prime\prime}+\frac{\rho^{2}}{4}u\text{ }v^{\prime}(\frac{u^{\prime}}{u}-\frac{v^{\prime}}{v})+\frac{\rho}{2}u\text{ }v^{\prime} & =\kappa\text{ }[\frac{1}{2}(u\Phi^{\prime2}+m^{2}\Phi^{2}+2\text{ }J\text{ }\Phi)+P\,]. \end{align} These are four equations, the last two of which are identical. Thus, there are three independent Einstein equations in the problem. However, the third equation (and the equivalent fourth one) can be substituted by a simpler relation.
It comes from the Bianchi identities \cite{Weinberg}: \begin{equation} G_{\mu\text{ };\text{ }\nu}^{\nu}=0, \label{Bianchi} \end{equation} where the semicolon indicates the covariant derivative of the tensor $G_{\mu\text{ }}^{\nu}$. After assuming that the Einstein equations (\ref{Einstein}) are satisfied, the $G_{\mu}^{\nu}$ tensor in (\ref{Bianchi}) can be substituted by the energy momentum tensor $T_{\mu}^{\nu}$, leading to the relation \begin{equation} -\Phi\text{ }J^{\prime}+P^{\prime}+\frac{v^{\prime}}{2v}(P+e)=0. \end{equation} This is a dynamical equation for the energy, the pressure and the scalar field, substituting the two equivalent Einstein equations associated with the two angular directions. The last of the equations of motion of the system is the Klein-Gordon one for the scalar field. It can be obtained by imposing the vanishing of the functional derivative of the action $S_{mat-\phi}$ with respect to the scalar field \begin{align} \frac{\delta S_{mat-\phi}}{\delta\Phi(x)} & \equiv\frac{\partial}{\partial x^{\mu}}\frac{\partial L}{\partial\Phi_{,\mu}}-\frac{\partial L}{\partial\Phi}\nonumber\\ & \equiv\frac{1}{\sqrt{-g}}\frac{\partial}{\partial x^{\mu}}(\sqrt{-g}\,g^{\mu\nu}\Phi_{,\nu})-m^{2}\Phi-J\nonumber\\ & =0, \end{align} a relation that, after employing the temporal and radial Einstein equations in (\ref{Einstein}), can be rewritten in the form \begin{align} J(\rho)+m^{2}\Phi(\rho)-u(\rho)\text{ }\Phi^{\prime\prime}(\rho) & =\Phi^{\prime}(\rho)(\frac{u(\rho)+1}{\rho}-\rho\text{ }\kappa\text{ }(\frac{m^{2}\Phi(\rho)^{2}}{2}+\nonumber\label{escalar}\\ & J(\rho)\text{ }\Phi(\rho)+\frac{e(\rho)-P(\rho)}{2})) \end{align} Therefore, the three EKG equations relevant for the problem can be summarized as \begin{align} {\frac{u^{\prime}(\rho)}{\rho}}-{\frac{1-u(\rho)}{\rho^{2}}} & =-\text{ }\kappa\text{ }[\frac{1}{2}(u(\rho)\Phi^{\prime2}(\rho)+m^{2}\Phi(\rho)^{2}+2\text{ }J(\rho)\Phi(\rho))+e(\rho)],\\
{\frac{u(\rho)}{v(\rho)}}{\frac{v^{\prime}(\rho)}{\rho}}-{\frac{1-u(\rho)}{\rho^{2}}} & =\kappa\text{ }[\frac{1}{2}(u(\rho)\Phi^{\prime2}(\rho)-m^{2}\Phi(\rho)^{2}-2\text{ }J(\rho)\text{ }\Phi(\rho))+P(\rho)\,],\\ J(\rho)+m^{2}\Phi(\rho)-u(\rho)\text{ }\Phi^{\prime\prime}(\rho) & =\Phi^{\prime}(\rho)(\frac{u(\rho)+1}{\rho}-\rho\text{ }\kappa\text{ }(\frac{m^{2}\Phi(\rho)^{2}}{2}+J(\rho)\text{ }\Phi(\rho)+\frac{e(\rho)-P(\rho)}{2})). \end{align} In order to work with dimensionless forms of the equations, let us define the new radial variable, scalar field and parameters as follows \begin{align} r & =m\rho,\\ \phi(r) & =\sqrt{8\pi}l_{p}\Phi(\rho),\\ j(r) & =\frac{\sqrt{8\pi}l_{p}}{m^{2}}J(\rho),\\ \epsilon(r) & \equiv\frac{8\pi l_{p}^{2}}{m^{2}}e(\rho),\\ p(r) & =\frac{8\pi l_{p}^{2}}{m^{2}}P(\rho). \end{align} It should be noted that the new variable $r$ has dimension of $gr \times cm$. Therefore, the EKG equations to be worked with in the new variables are \begin{align} {\frac{u^{\prime}(r)}{r}}-{\frac{1-u(r)}{r^{2}}} & =-\frac{1}{2}(u(r)\phi^{\prime}(r)^{2}+\phi(r)^{2}+2j(r)\phi(r))-\epsilon(r),\label{eecuaadim1}\\ \frac{u(r)}{v(r)}\frac{v^{\prime}(r)}{r}-{\frac{1-u(r)}{r^{2}}} & =-\frac{1}{2}(-u(r)\phi^{\prime}(r)^{2}+\phi(r)^{2}+2j(r)\phi(r))+p(r),\label{eecuaadim2}\\ p^{\prime}(r)+(\epsilon(r)+p(r))\frac{v^{\prime}(r)}{2v(r)}-\phi(r)\text{ }j^{\prime}(r) & =0,\label{eecuaadim3}\\ j(r)+\phi(r)-u(r)\text{ }\phi^{\prime\prime}(r) & =\phi^{\prime}(r){\Large (}\frac{u(r)+1}{r}-r\text{ }(\frac{\phi(r)^{2}}{2}+j(r)\phi(r)+\frac{\epsilon(r)-p(r)}{2}){\Large )}. \label{eecuaadim4} \end{align} Note that, in order to simplify the notation, the same letters $u$ and $v$ have been used to indicate the metric components in the new variables. That is, we will write $u(r)=u(\rho)$ and $v(r)=v(\rho)$, in spite of the fact that the functional forms of the two quantities cannot be equal. This should not create confusion.
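The rescalings above can be written as a pair of helper maps; a minimal sketch, with all numerical inputs as placeholders in CGS units:

```python
import math

L_P = 1.61e-33  # Planck length in cm, as quoted in the text

def to_dimensionless(rho, Phi, e, P, m):
    # Map physical quantities to the dimensionless variables of the text.
    r = m * rho
    phi = math.sqrt(8 * math.pi) * L_P * Phi
    eps = 8 * math.pi * L_P**2 / m**2 * e
    p = 8 * math.pi * L_P**2 / m**2 * P
    return r, phi, eps, p

def to_physical(r, phi, eps, p, m):
    # Inverse map, useful for checking that the transformation round-trips.
    rho = r / m
    Phi = phi / (math.sqrt(8 * math.pi) * L_P)
    e = m**2 * eps / (8 * math.pi * L_P**2)
    P = m**2 * p / (8 * math.pi * L_P**2)
    return rho, Phi, e, P
```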
\subsection{The constitutive relations and the matter-field interaction} Let us now define the functional form of the constitutive relation for matter, as expressed in the new variables \begin{equation} \epsilon(r)=n\text{ }p(r), \end{equation} where the parameter $n$ defines the ratio between the energy density and the pressure. This simple form allows us to consider the case of nearly pressureless dust for large $n$ and the case of a photon gas for $n=3.$ As mentioned before, the discussion will consider the interaction between matter and the scalar field. It will be taken into account by assuming that the source of the scalar field $j(r)$ is proportional to the matter energy density $\epsilon(r)$: \begin{equation} j(r)=g\text{ }\epsilon(r). \end{equation} It should be stressed that we have treated the matter in a simpler approach than the more detailed one employed in references \cite{liddle1,liddle2}. This was done in order to concentrate the discussion on the relevant issue addressed in this paper: the role of the inclusion of the matter-scalar field interaction. \section{The static scalar field solution of the EKG equations including matter} Once the physical system has been defined, let us present in this section a particular kind of solution of the EKG equations for a real scalar field interacting with matter. The objective of the presentation is to show the existence of static configurations of a centrally symmetric star formed by scalar field and matter contents. The solution found tends to reproduce the Schwarzschild space-time, with a Yukawa-like scalar field decaying at large distances \cite{jetzer}. We start the search for it by fixing initial conditions for the radial evolution at the very small radial distance $\delta=10^{-6}$ \begin{align} u(\delta) & =1,\\ v(\delta) & =1,\\ \phi(\delta) & =\phi_{0}=0.65,\\ p(\delta) & =p_{0}=0.0595725.
\end{align} These initial conditions were fixed not exactly at the coordinate origin, but at a very small non-vanishing radial distance $\delta$, because the equations for $u(r)$ and $v(r)$ are singular at $r=0.$ This singularity also forces the value of $u$ to tend to one in the limit $r\to0$, assuming that the solution for this quantity is regular. To solve the equations we programmed the differential equations using the software Mathematica. The proportionality constant between the scalar field source and the energy density of the matter was chosen as \[ g=0.9. \] \begin{figure}[h] \par \begin{center} \hspace*{-0.5cm}\includegraphics[width=70mm]{uv.eps} \vspace{-1cm} \end{center} \caption{ The figure shows the evolution with the radial coordinate $r$ of the two fields $u(r)$ and $v(r)$. Note that at large radial distances the metric components coincide, which indicates that the metric tensor tends to the Minkowski one far away from the symmetry center. At the small distances at which the matter and field energy densities start to grow, $u$ and $v$ deviate from one another: $u$ tends to unity at the origin and $v$ decreases to a minimum at this point. Note that the trapping of the matter near the origin is compatible with the interpretation of $v$ as a gravitational potential, which attracts matter to the region in which its minimum appears. \label{uv}} \end{figure} The procedure for obtaining the solutions was as follows. First, the pressure was fixed to the specified value at the point $r=\delta.$ The required value $u(\delta)=1$ was imposed. We arbitrarily fixed $v(\delta)=1.$ The arbitrary character of the initial condition for $v(\delta)$ is associated with the fact that, for any solution of the general equations, the multiplication of the function $v(r)$ by an arbitrary constant is also a solution. Thus $v(\delta)=1$ can always be chosen.
The employed property follows directly from the fact that the EKG equations depend on $v(r)$ only through the ratio $\frac{v^{\prime}(r)}{v(r)}$. \begin{figure}[h] \begin{center} \hspace*{-0.7cm}\includegraphics[width=70mm]{phi.eps} \vspace{-1cm} \end{center} \caption{ The plot shows the radial behavior of the scalar field. In the large radial distance region, in which the equations become linear, the scalar field decays in correspondence with the Yukawa solution. \label{phi}} \end{figure} Afterwards, in a first step, the equations were solved by fixing an arbitrary value of the scalar field near the origin, $\phi(\delta)=\phi_{0}$. The possible results in this first stage were twofold: a first one in which the scalar field grew to a positive singular value at a given radial distance, and a second one in which the singular values turned out to be negative, also at a specific radial distance. Then, it was possible to note that decreasing (increasing) the value of $\phi_{0}$ reduced the positive (negative) singular values, while at the same time, in both cases, the radial distance at which the singularity appears increased. By repeating this process of adjustment of the value of $\phi_{0}$ iteratively, the singularity position was pushed each time to larger distances. After this, it became clear that the scalar field solution tended to reproduce the small-field Yukawa-like solution of the Klein-Gordon equation. Upon this, the matter and scalar field energy densities and pressures became concentrated in the vicinity of the origin. Figure \ref{uv} shows the values of the metric components. As can be noticed, at large distances from the symmetry center the metric tends to the flat Minkowski one. It should be remarked here that the solution for $v(r)$ was multiplied by a constant in order to enforce the equality between $u$ and $v$ at large distance, which reproduces the Minkowski space in the faraway region.
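The iterative adjustment just described is a shooting method: integrate outward, inspect the sign of the divergence, and bisect. A minimal flat-space toy of the same logic, assuming the dimensionless Klein-Gordon equation $\phi''+\frac{2}{r}\phi'=\phi$ (whose bounded branch is the Yukawa solution $e^{-r}/r$) and bisecting on the initial slope instead of on $\phi_0$; the start radius $\delta=0.5$, the matching radius and the bracket are illustrative assumptions:

```python
from scipy.integrate import solve_ivp

DELTA, R_FAR, PHI0 = 0.5, 20.0, 0.65

def rhs(r, y):
    # y = (phi, phi'); flat-space radial Klein-Gordon equation phi'' = phi - (2/r) phi'
    phi, dphi = y
    return [dphi, phi - 2.0 / r * dphi]

def phi_far(slope):
    # Integrate outward; a generic slope excites the growing e^{r}/r branch.
    sol = solve_ivp(rhs, (DELTA, R_FAR), [PHI0, slope], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Bisection: too-shallow slopes blow up positive, too-steep ones negative,
# mirroring the sign-of-the-singularity rule used in the text.
lo, hi = -10.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if phi_far(mid) > 0.0:
        hi = mid
    else:
        lo = mid
slope_star = 0.5 * (lo + hi)
# The exact decaying solution phi = A e^{-r}/r gives phi'(delta) = -phi0 (1 + 1/delta).
```

The same sign-driven bracketing applies to the full EKG system, with $\phi_0$ as the shooting parameter instead of the slope.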
At small distances, the value of $u$ starts deviating from the value of $v$. This occurs in the region in which the matter and scalar field densities are mainly concentrated. While $u$ tends to unity on approaching the symmetry point, the temporal component of the metric tends to a minimum value. This should be the case if this metric component takes the role of a gravitational potential attracting the matter to the zone in which its minimum value appears. The resulting solution for the scalar field is shown in figure \ref{phi}. Far from the center, the behavior is exponential, as it should be, because in Minkowski space the only real decaying radially symmetric solution of the Klein-Gordon (KG) equation is the Yukawa potential. \begin{figure}[h] \begin{center} \includegraphics[width=60mm]{et.eps} \hspace{1cm} \includegraphics[width=60mm]{pt.eps} \end{center} \caption{ The two plots show the radial behavior of the total energy density and the total pressure of the solution. The central role of the scalar field-matter interaction in allowing the solution to exist is compatible with the fact that the matter energy density closely overlaps with the scalar field one. \label{etpt}} \end{figure} As for the total energy and pressure, defined as \begin{align} \epsilon_{t}(r) & =\frac{1}{2}(u(r)\phi^{\prime}(r)^{2}+\phi(r)^{2}+2j(r)\phi(r))+\epsilon(r),\\ p_{t}(r) & =\frac{1}{2}(u(r)\phi^{\prime}(r)^{2}-\phi(r)^{2}-2j(r)\phi(r))+p(r), \end{align} they turn out to be concentrated in spatial regions similar in size to the one in which the scalar field energy is localized. Their radial dependences are shown in figure \ref{etpt}. This property is compatible with the determining role of the matter-field interaction in allowing the existence of the solution. \subsection{Stability analysis} The full stability analysis of the considered physical system, including a scalar field, should follow from the general linearized equations of the system.
However, this complete analysis requires an involved mathematical discussion which is beyond the scope of this work. Instead, we will consider a simpler preliminary analysis. It will be assumed that the inclusion of the scalar field in addition to matter still justifies the criterion that stability requires the total mass of the solution to grow when the initial condition for the density of matter at the origin is increased \cite{teukolsky}. This criterion can be easily checked for the solution obtained here, by considering the total mass formula \begin{align} M(\epsilon_{0}) & =\int dr\text{ }4\pi\text{ }r^{2}\text{ }\epsilon_{t}(r),\\ \epsilon_{t}(r) & =\frac{1}{2}(u(r)\phi^{\prime}(r)^{2}+\phi(r)^{2}+2j(r)\phi(r))+\epsilon(r),\nonumber \end{align} as a function of the initial condition for the matter energy density $\epsilon(0)=\epsilon_{0}=40$ $p_{0}$. Note that the initial value of the scalar field at the origin $\phi_{0}$ is in fact a function of $\epsilon_{0}$, whose value was found by imposing the Yukawa behavior of the scalar field at large radius. The new value of the total mass after increasing the matter energy density at the origin $\epsilon_{0}$ was then derived by solving the equations for two nearby values of $\epsilon_{0}$: the original one, $\epsilon_{0}=2.3829$, and the close one, $\epsilon_{0}^{1}=2.3929$. After modifying $\epsilon_{0}$, the new value of $\phi_{0}$ was determined by correspondingly modifying the old value in order to ensure the Yukawa dependence far away from the origin. The approximate evaluation of the derivative of the mass with respect to the central density of matter then resulted in \begin{align} \frac{dM(\epsilon_{0})}{d\epsilon_{0}} & \simeq\frac{M(\epsilon _{0}+0.01)-M(\epsilon_{0})}{0.01}\nonumber\\ & =0.03504>0. \end{align} Thus, the obtained solution satisfies this assumed stability criterion, which is valid for standard stars constituted only by fluids of matter.
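The mass derivative check above can be sketched as a quadrature plus a one-sided finite difference; a minimal sketch in which the exponential profile standing in for $\epsilon_t(r)$ is hypothetical (the real profile comes from the numerical EKG solution):

```python
import numpy as np

def total_mass(eps0, r_max=30.0, n_pts=3001):
    # M(eps0) = integral of 4 pi r^2 eps_t(r) dr, by the trapezoid rule.
    # The exponential profile below is a hypothetical stand-in for eps_t(r),
    # with its amplitude tracking the central density eps0.
    r = np.linspace(1e-6, r_max, n_pts)
    f = 4.0 * np.pi * r**2 * eps0 * np.exp(-r)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

eps0 = 2.3829
dM_deps0 = (total_mass(eps0 + 0.01) - total_mass(eps0)) / 0.01  # one-sided difference
```

A positive $dM/d\epsilon_0$ is the fluid-star stability signal used in the text; for the toy profile the integral is $8\pi\epsilon_0$, so the derivative is $8\pi$.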
As mentioned before, the inclusion of the scalar field implies more general linear equations for the oscillation modes of the system. However, it can be expected that the criterion considered here remains valid. It should be noted that the derived solution is valid for any value of the scalar field mass. Thus, its stability criterion works for the whole family of solutions with arbitrary values of this mass. The stability of the star configurations should, however, still be investigated by varying the initial condition for the energy density at the origin and the intensity of the coupling of the matter with the scalar field. This will allow determining whether there are bounds on the total mass of the obtained solutions. We expect to consider this question in coming works. In ending this section, it should be remarked that the static solution found was allowed to exist thanks to the assumed interaction between the scalar field and matter, defined by the source of the field being proportional to the matter density. It might also be helpful to note that in reference \cite{cb} the same type of interaction was employed to identify a static Universe in which dark energy interacts with matter. As happened here, the matter-scalar field interaction was central to the existence of the static solution. \section{The solution for photon-like matter} In this section we will consider the important special case in which the assumed matter constitutive relation is associated with a photon-like gas. That is, massless vector particles are assumed. They could be real photons or other dark-matter-analogous massless vector particles. For this kind of particle, the tracelessness of the energy momentum tensor implies \begin{equation} \epsilon(r)=3\, p(r).
\end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=70mm]{uvf.eps} \end{center} \caption{The plots of the metric functions $u$ and $v$ against the radial coordinate. They show how the gravitational potential, reflected by $v$, grows nearly linearly with the coordinate at large radii. \label{uv1}} \end{figure}\begin{figure}[b] \begin{center} \includegraphics[width=70mm]{force.eps} \end{center} \caption{The radial dependence of the derivative of the function $v(r)$. This quantity defines the gravitational force in the Newtonian limit. The behavior indicates that the force over small massive bodies tends to be nearly constant at large distances. \label{force}} \end{figure} Since the considered particles are massless, it is possible that the solution could show special characteristics, as will effectively be the case. The method of solving the EKG equations is identical to the one employed for the case of nearly pressureless matter in the previous section. The initial conditions for the fields were again defined at the small radial distance $\delta=10^{-6}$ in the form \begin{align} u(\delta) & =1,\\ v(\delta) & =1,\\ \phi(\delta) & =\phi_{0}=0.623048,\\ p(\delta) & =p_{0}=0.7943. \end{align} The interaction between the scalar field and the matter was fixed at the same value employed in the previous section, \[ g=0.9. \] The EKG equations (\ref{eecuaadim1})-(\ref{eecuaadim4}) were then also solved using the NDSolve routine of Mathematica. The general procedure for obtaining the solutions was exactly the same as the one followed in the previous section. The iterative process then led to a definite solution in which the scalar field also decreases at large radial values. However, the net result is radically different from the physical point of view, as will be described in what follows. Figure \ref{uv1} depicts the two metric components.
As can be observed, at large distances from the symmetry center the field $v(r)$, instead of approaching the constant Minkowski value, tends to a linear radial dependence. This linear dependence implies that the gravitational field intensity of the system does not decrease with distance. That is, the system is able to produce an attractive, nearly constant force on bodies situated far away from the symmetry center. The plot of this force versus radius is shown in figure \ref{force}. The appearance of this force means that gravitation is unable to trap the photon-like particles close to the centre. This effect is not unexpected, because the considered particles are massless and move at the velocity of light. This interpretation is clearly supported by the graph in figure \ref{radialmass}, which plots the total mass density up to a radial distance $r$, \begin{align} \rho_{t}(r) & =4\pi\text{ }r^{2}\epsilon_{t}(r),\\ \epsilon_{t}(r) & =\frac{1}{2}(u(r)\phi^{\prime}{(r)}^{2}+\phi (r)^{2}+2j(r)\phi(r))+\epsilon(r). \end{align} \begin{figure}[h] \begin{center} \includegraphics[width=70mm]{radialmattersen.eps} \end{center} \caption{The figure shows the mass density per unit radial distance, as a function of the radius, for the photon-like matter interacting with the scalar field. It shows that the photons are able to escape to regions relatively far away from the symmetry center. \label{radialmass}} \end{figure} The picture indicates that the photon-like energy at large distances does not decay, but slowly increases with the radius. It can be remarked that this photon-like energy is what effectively defines the effect, because the radial decay of the scalar field energy density makes the radial density of scalar field energy ($4 \pi r^{2}$ times this energy density) rapidly vanish with the radius. The radial behavior of the scalar field is shown in figure \ref{phi1}. Far from the center, the field again tends to rapidly decrease.
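The iterative tuning of $\phi_{0}$ that enforces this large-radius decay is a shooting problem. The following minimal Python sketch illustrates the procedure on a toy equation, $u''=u$, which shares the growing/decaying asymptotic mode structure; the actual EKG system (\ref{eecuaadim1})-(\ref{eecuaadim4}) is not reproduced here, so this is only a schematic stand-in, not the paper's computation:

```python
def shoot(s, R=10.0, h=0.01):
    """Integrate the toy equation u'' = u with u(0) = 1, u'(0) = s
    by classical RK4 and return u(R)."""
    u, up = 1.0, s
    f = lambda u, up: (up, u)  # (u', u'') for u'' = u
    for _ in range(int(R / h)):
        k1 = f(u, up)
        k2 = f(u + 0.5 * h * k1[0], up + 0.5 * h * k1[1])
        k3 = f(u + 0.5 * h * k2[0], up + 0.5 * h * k2[1])
        k4 = f(u + h * k3[0], up + h * k3[1])
        u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        up += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return u

# Bisect on the initial slope: the growing mode e^{+r} dominates u(R)
# unless s is tuned onto the decaying solution u = e^{-r}, i.e. s = -1.
lo, hi = -2.0, 0.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0.0:
        hi = mid
    else:
        lo = mid
s_star = 0.5 * (lo + hi)
```

In the paper's setting, the role of the tuned slope is played by $\phi_{0}$, and the target behavior at large radius is the Yukawa decay of the scalar field.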
\begin{figure}[h] \begin{center} \hspace*{-0.7cm}\includegraphics[width=70mm]{phif.eps} \vspace{-1cm} \end{center} \caption{The figure shows the evolution with the radial coordinate $r$ of the scalar field for the photon-like matter-boson star. \label{phi1}} \end{figure}As for the matter density, for this alternative solution there is a region close to the symmetry center showing the highest values of the density. This vicinity is close to the one in which the scalar field attains its highest values. The radial dependence of the energy density is shown in figure \ref{phien}. \begin{figure}[h] \begin{center} \hspace*{-0.7cm}\includegraphics[width=70mm]{etf.eps} \vspace{-1cm} \end{center} \caption{The plot shows the total energy density as a function of the radial distance for the photon-like matter interacting with a scalar field. Although the density decreases with the radius, the mass accumulation does not stop as the radius increases, as is clear from figure \ref{radialmass}. \label{phien}} \end{figure}The vanishing of the energy density at large radii can give the impression that the total mass of the structure is concentrated near the origin. However, the plot in figure \ref{radialmass} shows this not to be true. In the following section, it will be argued that the found solution suggests the possibility that photon-like matter can determine the velocity versus radius curves indicating the presence of dark-matter in the Universe. \section{Dark-matter and photon-like particles} In this section, the possibility of applying the photon-like solutions to explain the dark-matter properties will be preliminarily examined. For this purpose, let us assume that the considered solution furnishes the attraction required to define a rotational galaxy motion.
In this situation, the circular equation of motion (in the original CGS units) of a given star of mass $\delta m$, after assuming that its motion is defined by the non-relativistic Newton equation, will be \begin{equation} -\delta m\frac{d}{d\varrho}\Phi_{G}(\varrho)=-\delta m\frac{V(\varrho)^{2}}{\varrho}. \label{newt} \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=70mm]{knee.eps} \end{center} \caption{The velocity $V(\varrho)$ of a rotating star at radial distance $\varrho$, expressed as a function of the variable $r$. \label{knee}} \end{figure}That is, the gravitational force generated by the potential $\Phi_{G}$ produces the centripetal acceleration of the circular motion. Assuming the Newtonian approximation for the gravitational potential, it follows that \begin{equation} \Phi_{G}(\varrho)=\frac{c^{2}}{2}(v(\varrho)-1), \end{equation} which, after being substituted in (\ref{newt}), leads to the expression for the radial dependence of the velocity \begin{align} V(\varrho) & =\sqrt{\frac{m\text{ }c^{2}}{2}\varrho\frac{dv(r)}{dr}}=\sqrt{\frac{c^{2}\text{ }r\text{ }v^{\prime}(r)}{2}},\\ r & =m\text{ }\varrho, \end{align} where the derivative of $v$ with respect to the radius $\varrho$ measured in cm has been expressed in terms of the derivative with respect to the radial variable $r=m\,\varrho$. Thus, the velocity at some radial distance $\varrho$ is only a function of the $r$ variable. This dependence is illustrated for the solution being considered in figure \ref{knee}. It should now be recalled that we have found the solution for particular values of the energy density and scalar field at the origin of coordinates. Therefore, the energy of the obtained field configuration is fixed once its parameters $m$ and $\kappa$ have been specified. Therefore, the rotation curve for the particular solution being investigated is only a function of $m$ (assuming the gravitational constant $\kappa$ is given).
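In code, the velocity formula $V=\sqrt{c^{2}\,r\,v^{\prime}(r)/2}$ reads as follows; the slope value used below is a made-up illustrative number, not one taken from the numerical solution:

```python
import math

C_CM_S = 2.99792458e10  # speed of light, cm/s

def rotation_velocity_km_s(r_vprime):
    """V = sqrt(c^2 * r * v'(r) / 2) in km/s, where `r_vprime` is the
    dimensionless product r * v'(r) read off the numerical solution."""
    return math.sqrt(0.5 * C_CM_S ** 2 * r_vprime) / 1e5

# Hypothetical slope (not from the paper): r * v'(r) = 0.02 gives
# V = 0.1 c, i.e. roughly an order of magnitude below light speed.
v = rotation_velocity_km_s(0.02)
```

A nearly constant product $r\,v^{\prime}(r)$ at large radii is what produces the flat rotation curves discussed next.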
For illustrative purposes, let us select a value of $m$ (the mass of the scalar field) such that the radial position corresponding to the knee in the curve shown in figure \ref{knee}, at $r=r_{c}=2.9469$, corresponds to a radial distance of \begin{equation} \varrho_{c}=7.517\times10^{3}\text{ ly}. \label{size} \end{equation} This is a distance of the order of the sizes of galaxies. Then, the value of the scalar field mass turns out to be \begin{equation} m=\frac{r_{c}}{\varrho_{c}}=4.1484\times10^{-22}\text{ gr}. \end{equation} It should be noted that this boson mass is a few hundred times larger than the proton mass. However, we have only chosen this value to exemplify the theoretical possibility that solutions of the considered form could be associated to dark-matter. It is clear that many other possibilities for the matter being coupled with the massless field can be considered in substitution of the assumed scalar field. This type of field was relevant here in connection with the first part of the work, illustrating the existence of static boson stars under the coupling of this field with matter. \begin{figure}[t] \begin{center} \includegraphics[width=70mm]{RotaCurve.eps} \end{center} \caption{ The figure shows the rotation curve of a hypothetical galaxy generated by the photon-like matter in interaction with a scalar field, for the chosen parameters and the specified initial conditions for the matter at the origin. The curve is of type C among the three sorts (A, B and C) in the classification of galaxies according to the form of such curves \cite{classif}. Type B corresponds to almost constant velocities, and type A is associated to curves in which the velocity first increases rapidly and then starts diminishing almost linearly. The galaxy was defined by assuming that the value $r_{c}$ of the variable $r$ at the "knee" of the curve in figure \ref{knee} defines a radial distance through $\rho_{c}=\frac{r_{c}}{m}$.
\label{rotcurve}} \end{figure} Further, the velocity versus radius curve for the considered solution, assuming that the scalar field mass is given by $m=4.1484\times10^{-22}$ gr, is shown in figure \ref{rotcurve}. Note that the star velocities in km/s are smaller than the velocity of light, but not by much: only by about an order of magnitude. The considered solution defines a mass for the system which is larger than the total mass of galaxies of the same size (for example, Messier 33). Therefore, in conclusion, it can be argued that particular forms of the considered solution turn out to be able to furnish the total mass required for galaxies of the types observed in Nature. It may also be expected that similar effects can be obtained by coupling photon-like matter with the more usual fermion matter. \subsection{Photon-like dark-matter} It should be remarked that the currently most accepted view about the nature of dark matter is that it is constituted by massive cold particles moving non-relativistically \cite{peebles,rees}. However, the dark matter problem is so relevant that, in our view, it justifies exploring all sorts of ideas for clarifying it. Since the solutions discussed here indicate the possibility of generating the complementary attractive forces acting in galaxies, below we devote some attention to examining the possibility that photon-like particles, or real photons, could describe the dark matter properties. \subsection{A restriction for real photon matter} Let us first consider a condition required for the solution examined here (assuming that it corresponds to real photons) to describe the galaxy observations associated to dark-matter. This requirement is that the energy distribution of the photons should approach the value associated to the CMB at large distances from the centre of the galaxy. This is a concrete requirement that can be easily checked for the solution obtained here.
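Two of the quoted numbers entering this discussion can be re-derived in a few lines. This is only a consistency sketch: interpreting the quoted scalar-field mass numerically as the plain ratio $r_{c}/\varrho_{c}$ with $\varrho_{c}$ converted to cm is our assumption about the paper's unit conventions.

```python
SIGMA_CGS = 5.670374e-5    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
C_CM_S    = 2.99792458e10  # speed of light, cm/s
LY_CM     = 9.4607e17      # one light year, cm

# Scalar-field mass from the previous section, m = r_c / rho_c,
# with rho_c = 7.517e3 ly converted to cm (paper quotes 4.1484e-22).
m = 2.9469 / (7.517e3 * LY_CM)

# CMB energy density, eps = 4 * sigma * T^4 / c at T = 3 K
# (the CGS value used in this section is 6.1236e-13 erg/cm^3).
eps_cmb = 4.0 * SIGMA_CGS * 3.0 ** 4 / C_CM_S
```

Both results agree with the quoted values to well under one percent.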
For example, the CMB energy density in CGS units has the formula and value \begin{align} \epsilon_{CMB}^{cgs} & =\frac{4}{c}\sigma T^{4}\nonumber\\ & =6.1236\times10^{-13}\text{ \ }\frac{\text{erg}}{\text{cm}^{3}},\\ T & =3\text{ K}, \end{align} where $\sigma$ is the Stefan-Boltzmann constant. Expressed in the special units defined in this work, it takes the value \begin{align} \epsilon_{CMB} & =\frac{8\pi l_{P}^{2}}{m^{2}}\epsilon_{CMB}^{cgs}\nonumber\\ & =2.9417\times10^{-34}. \end{align} This result should be compared with the energy density of the considered solution at the galaxy size (\ref{size}). This size, expressed in the $r$ spatial units used here, takes the value \begin{align*} r & =m\text{ }\rho_{c}\\ & =19.6016, \end{align*} where $m$ is the mass of the scalar field determining a solution with the required total mass for generating a galaxy of size similar to the Messier 33 one. Therefore, the energy density at distances from the symmetry center of the order of the radius of the galaxy takes the value \begin{equation} \epsilon(19.6016)=1.071\times10^{-3}\gg\epsilon_{CMB}=2.9417\times10^{-34}. \end{equation} This result indicates that the photon-like particle energy density of the solution investigated here is enormously larger than the CMB energy density. Therefore, it allows one to conclude that real photons are not able to describe dark-matter in the galaxy, if the solution considered is taken as it is. However, there are still possibilities for overcoming the above obstacle. It is clear that the potential continuously rising up to infinity, which is generated in the considered solution, is not observed in Nature. Therefore, some effect should restrict this unlimited increase. One possibility is that the photon dynamics could show a subtlety making the photon density decrease much more rapidly outside the galaxy.
This effect might be related to the fact that the photon dynamics at any internal point of the galaxy is being assumed to be the one valid for free photons: $\epsilon=3\,p$. But the photons in the galaxy are subject to the "confining" gravitational potential growing with the radial coordinate. Thus, the free photons can be expected to be perturbed by the gravitational attraction. Let us consider in the following subsection a simplified model to argue that such a confining effect can in fact tend to trap the photons in some circumstances. \subsection{Mechanisms for gravitationally bounding photons} Let us consider the Maxwell equations in the presence of a special static metric depending only on one Cartesian coordinate \cite{massive}, \begin{align} (\overrightarrow{\nabla}^{2}-\frac{\partial^{2}}{\partial (x^{0})^{2}})\overrightarrow{E}(x) & =\overrightarrow{\nabla}(\overrightarrow{\nabla }h(x)\cdot\overrightarrow{E}(x))-\frac{\partial}{\partial x^{0}}(\frac{\partial }{\partial x^{0}}h(x)\overrightarrow{E}(x))-\nonumber\\ & \overrightarrow{\nabla}(h(x))\times(\overrightarrow{\nabla}\times \overrightarrow{E}(x)),\\ \overrightarrow{\nabla}\times\overrightarrow{E}(x) & =-\frac{\partial }{\partial x^{0}}\overrightarrow{B}(x), \end{align} where $h(x)$ is a function of the determinant of the metric $g_{\mu\nu}$, which in this subsection will be assumed to be \begin{equation} g_{\mu\nu}(x)=\left( \begin{array}{cccc} g_{00}(x^{3}) & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{array} \right) \end{equation} in the space of the coordinates $x=(x^{0},x^{1},x^{2},x^{3})$. That is, the metric component $g_{00}(x^{3})$, which plays the role of the gravitational potential in the Newtonian approximation, will be assumed to vary with the coordinate $x^{3}$. The function $h$ is given by \begin{align} h(x) & =\log(\sqrt{-g(x^{3})})\nonumber\\ & =\log(\sqrt{-g_{00}(x^{3})}),\\ \frac{\partial}{\partial x^{3}}h(x) &
=\frac{1}{2}\frac{1}{g_{00}(x^{3})}\frac{d}{dx^{3}}g_{00}(x^{3}). \end{align} Let us assume now that the metric has the form \begin{align} g_{00}(x^{3}) & =-(1+\frac{2\Phi(x^{3})}{c^{2}})\nonumber\\ & =-(1-2h\text{ }x^{3}), \end{align} and that the gravitational potential $\Phi(x^{3})$ satisfies the Newtonian approximation $\frac{\Phi(x^{3})}{c^{2}}\ll 1$ in a large region of the coordinate $x^{3}$. It corresponds to a Newtonian potential attracting the masses towards the negative $x^{3}$ axis. For this potential, the gradient of $h$ takes the constant form \begin{equation} \overrightarrow{\nabla}(h(x))=(0,0,-h). \end{equation} After the above definitions, the Maxwell equations reduce to \begin{equation} (\overrightarrow{\nabla}^{2}-\frac{\partial^{2}}{\partial (x^{0})^{2}})\overrightarrow{E}(x)=-\overrightarrow{\nabla}(h(x))\times (\overrightarrow{\nabla}\times\overrightarrow{E}(x)), \end{equation} where we have also used that we will search for waves defined by fields $\overrightarrow{E}$ and $\overrightarrow{B}$ orthogonal to the $x^{3}$ axis and to each other, which implies \begin{equation} \overrightarrow{\nabla}h(x)\cdot\overrightarrow{E}(x)=0. \end{equation} The constant form of the gradient of $h(x)$ allows the equation for the electric field to be simplified as follows: \begin{equation} (\overrightarrow{\nabla}^{2}-\frac{\partial^{2}}{\partial (x^{0})^{2}})\overrightarrow{E}(x)=-h\frac{\partial}{\partial x^{3}}\overrightarrow{E}(x). \end{equation} This system of equations can be solved by Fourier transforming in the variables $x^{0}$ and $x^{3}$. Then, writing the fields as \begin{equation} \overrightarrow{E}(x)=(E,0,0)\exp(-\epsilon\text{ }x^{0}i+k^{3}x^{3}i) \end{equation} reduces the Maxwell equations to \begin{equation} (-(k^{3})^{2}+\epsilon^{2}+i\text{ }h\text{ }k^{3})\overrightarrow{E}(x)=0.
\end{equation} Then, the waves should satisfy the dispersion relation \begin{equation} k^{3}=\frac{h\text{ }i}{2}+\sqrt{\epsilon^{2}-\frac{h^{2}}{4}}. \end{equation} The explicit form of the solution for the electric field takes the form \begin{align} \overrightarrow{E}(x) & =(E,0,0)\exp(-\epsilon\text{ }x^{0}i+\sqrt {\epsilon^{2}-\frac{h^{2}}{4}}x^{3}i)\times\nonumber\\ & \exp(-\frac{h}{2}x^{3}). \end{align} This expression shows that the presence of the gravitational field introduces a damping of the wave in the direction of its propagation, defined by the positive $x^{3}$ axis. The damping is proportional to the intensity of the gravitational potential $h$. That is, the gravitational field affects the dynamics of the photons, tending to trap them in the region of lower potential values. This effect strongly suggests that the gravitational field can play a role in determining that the solutions found for photon-like matter, after being corrected for the photon dynamics, can show metrics tending to the Minkowski one for large values of the radius. If this happens, it becomes an open question again whether such corrected solutions could or could not show photon energy densities able to reduce at large radii to the value associated to the CMB. This was the main obstacle posed before for the photons to behave as dark-matter. In this sense, we would like to formulate a task that could help in discussing the above issue. It consists in formulating a classical statistical description for massless particles subject to the local metric at each interior point of a gravitational field region. Such an analysis can correct for the neglected influence of the gravitational field over the free photon dispersion relation leading to $\epsilon=3\,p$. In the next subsection we will also remark on an effect that can support that photons can produce the effects of the dark-matter.
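The dispersion relation can be verified by direct substitution back into $-(k^{3})^{2}+\epsilon^{2}+i\,h\,k^{3}=0$; a quick numerical check (the sample values of $\epsilon$ and $h$ are arbitrary):

```python
import cmath

def k3(eps, h):
    """Wave number from the dispersion relation
    k^3 = i h / 2 + sqrt(eps^2 - h^2 / 4)."""
    return 0.5j * h + cmath.sqrt(eps ** 2 - 0.25 * h ** 2)

# Substitute back into -(k^3)^2 + eps^2 + i h k^3 = 0 at a sample point.
eps, h = 2.0, 0.7
k = k3(eps, h)
residual = -(k ** 2) + eps ** 2 + 1j * h * k

# For eps > h/2 the imaginary part of k^3 is exactly h/2, which is the
# damping rate of the factor exp(-h x^3 / 2).
damping = k.imag
```

The residual vanishes to machine precision, and the imaginary part $h/2$ reproduces the damping factor $\exp(-\frac{h}{2}x^{3})$ in the field solution above.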
\subsection{The Tolman theorem} If the obstacles noted above can be surmounted, one important concept can play a role in the discussion of the currently mostly rejected possibility that photons can constitute the dark-matter. It is the so-called Tolman theorem \cite{tolman}, which states that the temperature of radiation filling, in thermal equilibrium, a spatial region in which a static gravitational potential is acting scales with the value of the temporal component of the metric $g_{00}$ as \begin{equation} T(x)=\frac{T_{\infty}}{g_{00}(\overrightarrow{x})}, \end{equation} where $x=(c\,t,\overrightarrow{x})$ are the space-time coordinates of a point and $T_{\infty}$ is the temperature in the assumed faraway Minkowski-like spatial regions. Note that in this subsection we have returned to the positive-signature metric for $g_{00}$. This effect has to do with the possibility that real photon matter could be observed from the Earth, if playing the examined role of dark-matter. It is true that the considered solution retains its interest if the massless particles generating it are some sort of genuinely dark-matter particles having a photon-like massless vector nature. However, such solutions could be even more motivating if the usual photon could be responsible for creating the dark-matter effects, after surpassing the limitations already posed. For the situation under consideration, the natural setting for the temperature in the regions far from the galaxy is the CMB radiation temperature of nearly 3 K. Therefore, the radiation temperature at interior points of the galaxy can take values very much higher than 3 K as the gravitational potential decreases near the center. However, the radiation arriving at the Earth from an arbitrary interior point of the galaxy will always have a frequency spectrum in the microwave region.
This is because the light should "climb" the gravitational potential barrier, which reduces its frequency spectrum down again to the CMB one. Thus, since the CMB associated frequencies are very far from the observable light spectrum, in this first instance, the real photons might play the role of dark matter. However, it should be recalled that in order to perform this function, it is required to explain how the photon gas at the interior of the structure can come into equilibrium with the CMB at the exterior regions of galaxies. At first sight, this possibility seems to be difficult, due to the drastic lack of balance with the CMB radiation which was evaluated before for the solution presented here. \section*{Summary} The role of the interaction between a real scalar field and matter in the solutions of the EKG equations is investigated \cite{jetzer,liddle1,liddle2, urena}. Stars showing a static scalar field are argued to exist thanks to the interaction between the scalar field and matter. The field, matter and density distributions are evaluated as smooth functions of the radial distance to the symmetry center. The solutions satisfy a standard stability criterion which is obeyed by simpler stars constituted only by fluids \cite{teukolsky}. However, to conclude their stable character, a closer investigation should be done of the more complex linearized equations for the oscillation modes, which include a scalar field in addition to a fluid. The case of photon-like matter is also examined. Surprisingly, in this case there exist star-like solutions for which the radial gravitational potential at large distances grows linearly. This effect is related to the massless character of the particles being considered. The emerging behavior leads one to suspect that it could be helpful for understanding the origin of the dark-matter effects. In a preliminary examination of this question, we determine the predictions of the solutions for the rotation curves of a galaxy.
The parameters of the solution are chosen to define a galaxy of size similar to the observed ones. The form of the velocity curve obtained corresponds to type C of galaxies according to a standard classification \cite{classif}. We begin to explore the possibility that correcting the free photon constitutive relation $\epsilon=3\,p$ for the effects of the gravitational field may stop the linear growth of the gravitational potential obtained here. A simple model is solved to inspect the action of the Newtonian gravitational potential over a plane wave propagating against it. The solution supports that the inclusion of the metric in the classical photon statistics can produce a bounding effect over the photons. A model is then formulated to consider the corrected statistics. The idea is to derive the constitutive relations for photons moving at the light velocity, but in the space-time dependent metric. This problem is expected to be considered elsewhere. It is also discussed whether a gas of real photons can play the role of dark-matter, or whether it should be described by photon-like real dark-matter particles. For this purpose, the energy density of the photon-like constitutive matter in the regions outside the modeled galaxy was evaluated. The result was very much larger than the corresponding CMB value. This evaluation does not support the identification with photons of the photon-like matter determining the rotation curves, if the solution is taken as it is. However, the determination of a corrected solution, after improving the photon-like dynamics as proposed before, could still allow this criterion to be satisfied. An observation based on the Tolman theorem was also presented. This theorem defines the temperature of the photon radiation assumed to determine the solution considered here as the CMB temperature divided by the temporal component of the metric.
One important property of the theorem gives partial support to photons as creating the dark matter potential. It is the fact that, no matter how high the temperature of the photons is at an interior point of the galaxy, the light coming to the Earth from this point will always arrive with a $3\,\mathrm{K}$ frequency spectrum, coinciding with that of the CMB radiation coming from any other direction. There remains an important issue to be further addressed in connection with the found solutions: the question of their stability. For this purpose, in a coming study, the spectrum of the linearized equations of motion for small radial perturbations will be investigated. \section*{Acknowledgments} The authors very much thank the support received from the Office of External Activities of ICTP (OEA), through the Network on \textit{Quantum Mechanics, Particles and Fields} (Net-09).
\section{Introduction} The symmetrized poly-Bernoulli numbers were introduced by Kaneko-Sakurai-Tsumura \cite{KST} in order to generalize the duality formula of poly-Bernoulli numbers. The poly-Bernoulli polynomials $B_n^{(k)}(x)$ of index $k\in \Z$ are defined by the generating function \begin{align*} \sum_{n=0}^{\infty}B_n^{(k)}(x)\frac{t^n}{n!}=e^{-xt}\frac{\li_k(1-e^{-t})}{1-e^{-t}}, \end{align*} where $\li_k(z)$ is the polylogarithm function, \begin{align*} \li_k(z)=\sum_{m=1}^\infty \frac{z^m}{m^k}\quad (|z|<1). \end{align*} The two types of poly-Bernoulli numbers, $B_n^{(k)}$ and $C_{n}^{(k)}$ \cite{AIK,K97,K10}, are special values of the poly-Bernoulli polynomials at $x=0$ and $x=1$: \begin{align*} B_n^{(k)}(0)=B_n^{(k)}\quad\mbox{and}\quad B_n^{(k)}(1)=C_n^{(k)}. \end{align*} For negative index $k$ these number sequences are integers (A099594 and A136126 \cite{OEIS}) and have several interesting combinatorial interpretations \cite{BH15, BH17, B08}. Both $B_n^{(-k)}$ and $C_n^{(-k)}$ are symmetric number arrays. These properties are special cases of a more general identity on poly-Bernoulli polynomials, which holds for any non-negative integers $n$, $k$ and $m$: \begin{align*} \sum_{j=0}^{m}\st{m}{j} B_{n}^{(-k-j)}(m)=\sum_{j=0}^{m}\st{m}{j} B_{k}^{(-n-j)}(m), \end{align*} where $\st{n}{k}$ is the (unsigned) Stirling number of the first kind, which counts the number of permutations of $[n]=\{1,2,\ldots, n\}$ with $k$ disjoint cycles. Kaneko-Sakurai-Tsumura \cite{KST} defined this expression as the \textit{symmetrized poly-Bernoulli numbers}: \begin{align*} \scB_n^{(-k)}(m):=\sum_{j=0}^{m}\st{m}{j} B_{n}^{(-k-j)}(m). \end{align*} Note that \begin{align*} \scB_n^{(-k)}(0)=B_n^{(-k)}\quad \mbox{and}\quad \scB_n^{(-k)}(1)=C_n^{(-k-1)}. \end{align*} The authors of \cite{KST} suggested the combinatorial investigation of these number sequences. The first result in this direction is due to the second author.
Matsusaka \cite{M20} showed that the alternating diagonal sums of symmetrized poly-Bernoulli numbers coincide with certain values of the Dumont-Foata polynomials/Gandhi polynomials: \begin{align}\label{Gandhi} \sum_{j=0}^{n} (-1)^j \scB_{n-j}^{(-j)}(k)=k!(-1)^{n/2}G_n(k), \end{align} where $G_n(z)$ denotes the Gandhi polynomials satisfying \begin{align*} G_{n+2}(z)=z(z+1)G_n(z+1)-z^2G_n(z) \end{align*} with $G_0(z)=1$ and $G_1(z)=0$. Special cases of the theorem \cite{M20} are \begin{align*} \sum_{j=0}^{n} (-1)^j B_{n-j}^{(-j)} = \begin{cases} 1, &\text{if } n = 0,\\ 0, &\text{if } n > 0, \end{cases} \end{align*} which was proven analytically in \cite{AK99} and combinatorially in \cite{BH15}, and \begin{align*} \sum_{j=0}^{n} (-1)^j C_{n-j}^{(-j-1)} = -G_{n+2}, \end{align*} where $G_n := (2 - 2^{n+1}) B_n^{(1)} (1)$ are the Genocchi numbers $0,1,-1,0,1,0,-3,0,17,0,-155,\ldots$ (A001469 \cite{OEIS}). This last identity was proven by analytical methods in \cite{KST}, but providing a combinatorial explanation is still open and seems to be a difficult problem. The paper is organized as follows. In the first three sections after the introduction we introduce three combinatorial models for the normalized symmetrized poly-Bernoulli numbers. In Section 5 we prove some recurrence relations. In the last section we formulate a conjecture and pose some open questions. \section{Barred Callan sequences} In this section we present a model for the \textit{normalized symmetrized poly-Bernoulli numbers} $\widehat\scB_n^k(m)$. We are interested in the combinatorics of symmetrized poly-Bernoulli numbers with negative $k$ indices, since these numbers are positive integers. To keep the notation simpler, we define, for non-negative integers $n$, $k$ and $m$, \[ \widehat{\scB}_n^k(m):=\frac{1}{m!}\scB_n^{(-k)}(m) \in \Z. \] In A099594 \cite{OEIS}, Callan gave a combinatorial interpretation of the poly-Bernoulli numbers in terms of a certain type of permutations.
Namely, $B_n^{(-k)}$ is the number of permutations of $[n+k]=\{1,\ldots, n+k \}$ such that all substrings of elements $\leq n$ and all substrings of elements $>n$ are in increasing order. Such permutations are called \emph{Callan permutations} in the literature \cite{BH15,BH17}. Essentially the same objects are the \emph{Callan sequences} that we define as follows. Consider the set $N=\{\piros{1},\ldots, \piros{n}\}\cup\{\piros{*}\}$ (referred to as red elements) and $K=\{\kek{1},\ldots, \kek{k}\}\cup\{\kek{*}\}$ (referred to as blue elements). Let $R_1,\ldots, R_r,R^*$ be a partition of the set $N$ into $r+1$ non-empty blocks ($0 \leq r \leq n$) and $B_1,\ldots,B_r,B^*$ a partition of the set $K$ into $r+1$ non-empty blocks. The blocks containing $\kek{*}$ and $\piros{*}$ are denoted by $B^*$ and $R^*$, respectively. We call $B^*$ and $R^*$ \emph{extra blocks}, and the other blocks \emph{ordinary blocks}. We call a pair $(B_i;R_i)$ of a blue and a red block, for some $i$, a \emph{Callan pair}. A Callan sequence is a linear arrangement of Callan pairs augmented by the extra pair \[(B_1;R_1)(B_2;R_2)\cdots(B_r;R_r)\cup(B^*;R^*).\] It is easy to check that this definition is equivalent to the one given by Callan in \cite{OEIS}. Given a Callan sequence, write the elements of the blocks in increasing order, record the blocks in the given order, and if there are elements in $R^*$ besides $\piros{*}$, move these red elements to the front of the sequence and the elements in $B^*$ to the end of the sequence. Delete $\kek{*}$ and $\piros{*}$, and shift the blue elements by $n$, $\kek{i}\rightarrow i+n$.
\begin{example}[All Callan sequences with $n = 2$ and $k=2$] \begin{align*} &(\kek{1},\kek{2},\kek{*};\piros{1},\piros{2};\piros{*}) & &(\kek{1},\kek{2};\piros{1},\piros{2})(\kek{*};\piros{*}), & &(\kek{1};\piros{1},\piros{2})(\kek{2},\kek{*};\piros{*}), & &(\kek{2};\piros{1},\piros{2})(\kek{1},\kek{*};\piros{*}), & &(\kek{1},\kek{2};\piros{1})(\kek{*};\piros{2},\piros{*}),\\ &(\kek{1},\kek{2};\piros{2})(\kek{*};\piros{1},\piros{*}), & &(\kek{1};\piros{1})(\kek{2},\kek{*};\piros{2},\piros{*}), & &(\kek{2};\piros{1})(\kek{1},\kek{*};\piros{2},\piros{*}), & &(\kek{1};\piros{2})(\kek{2},\kek{*};\piros{1},\piros{*}), & &(\kek{2};\piros{2})(\kek{1},\kek{*};\piros{1},\piros{*}),\\ &(\kek{1};\piros{1})(\kek{2};\piros{2})(\kek{*};\piros{*}), & &(\kek{1};\piros{2})(\kek{2};\piros{1})(\kek{*};\piros{*}), & &(\kek{2};\piros{1})(\kek{1};\piros{2})(\kek{*};\piros{*}), & &(\kek{2};\piros{2})(\kek{1};\piros{1})(\kek{*};\piros{*}). \end{align*} We list the corresponding Callan permutations in the same order as above \begin{align*} &\piros{12} \kek{34}, & &\kek{34}\piros{12}, & &\kek{3}\piros{12}\kek{4}, & &\kek{4}\piros{12}\kek{3}, & &\piros{2}\kek{34}\piros{1},\\ &\piros{1}\kek{34}\piros{2}, & &\piros{2}\kek{3}\piros{1}\kek{4}, & &\piros{2}\kek{4}\piros{1}\kek{3}, & &\piros{1}\kek{3}\piros{2}\kek{4}, & &\piros{1}\kek{4}\piros{2}\kek{3},\\ &\kek{3}\piros{1}\kek{4}\piros{2}, & &\kek{3}\piros{2}\kek{4}\piros{1}, & &\kek{4}\piros{1}\kek{3}\piros{2}, & &\kek{4}\piros{2}\kek{3}\piros{1}. \end{align*} \end{example} \begin{definition} For integers $n,k > 0$ and $m \geq 0$, the $m$-barred Callan sequence of size $n \times k$ is the Callan sequence with $m$ bars inserted between (before and after) the ordinary pairs. We let $\mathcal{C}_n^k(m)$ denote the number of all $m$-barred Callan sequences of size $n \times k$. 
\end{definition} \begin{example} [All $2$-barred Callan sequences with $n = 3$ and $k = 1$] \begin{align*} &||(\kek{1}, \kek{*}; \piros{1}, \piros{2}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{1}, \piros{2}, \piros{3}) (\kek{*}; \piros{*}), & &||(\kek{1}; \piros{1}, \piros{2})(\kek{*}; \piros{3}, \piros{*}), & &||(\kek{1}; \piros{1},\piros{3}) (\kek{*}; \piros{2}, \piros{*}),\\ &||(\kek{1}; \piros{2}, \piros{3}) (\kek{*}; \piros{1}, \piros{*}), & &||(\kek{1}; \piros{1}) (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{2}) (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &||(\kek{1}; \piros{3}) (\kek{*}; \piros{1}, \piros{2}, \piros{*}),\\ &|(\kek{1}; \piros{1}, \piros{2}, \piros{3}) | (\kek{*}; \piros{*}), & &|(\kek{1}; \piros{1}, \piros{2}) | (\kek{*}; \piros{3}, \piros{*}), & &|(\kek{1}; \piros{1},\piros{3}) | (\kek{*}; \piros{2}, \piros{*}), & &|(\kek{1}; \piros{2}, \piros{3}) | (\kek{*}; \piros{1}, \piros{*}),\\ &|(\kek{1}; \piros{1}) | (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &|(\kek{1}; \piros{2}) | (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &|(\kek{1}; \piros{3}) | (\kek{*}; \piros{1}, \piros{2}, \piros{*}),\\ &(\kek{1}; \piros{1}, \piros{2}, \piros{3}) || (\kek{*}; \piros{*}), & &(\kek{1}; \piros{1}, \piros{2}) || (\kek{*}; \piros{3}, \piros{*}), & &(\kek{1}; \piros{1},\piros{3}) || (\kek{*}; \piros{2}, \piros{*}), & &(\kek{1}; \piros{2}, \piros{3}) || (\kek{*}; \piros{1}, \piros{*}),\\ &(\kek{1}; \piros{1}) || (\kek{*}; \piros{2}, \piros{3}, \piros{*}), & &(\kek{1}; \piros{2}) || (\kek{*}; \piros{1}, \piros{3}, \piros{*}), & &(\kek{1}; \piros{3}) || (\kek{*}; \piros{1}, \piros{2}, \piros{*}). \end{align*} \end{example} \begin{remark} An $m$-barred Callan sequence can in fact be viewed as a pair $(P, BP)$, where $P$ is a preferential arrangement of a subset of $\{1,2,\ldots,n\}$ and $BP$ is a barred preferential arrangement of a subset of $\{1,2,\ldots, k\}$.
Barred preferential arrangements were introduced in \cite{AUP} and were used, for instance in \cite{NBCC}, for the combinatorial analysis of generalizations of geometric polynomials. \end{remark} \begin{theorem} The number $\mathcal{C}_n^k(m)$ of $m$-barred Callan sequences of size $n \times k$ is given by the normalized symmetrized poly-Bernoulli number $\widehat{\scB}_n^k(m)$. \end{theorem} \begin{proof} Let $r$ be the number of ordinary pairs. Partition the elements of $N$ into $r+1$ blocks in $\sts{n+1}{r+1}$ ways, and similarly $K$ into $r+1$ blocks in $\sts{k+1}{r+1}$ ways. ($\sts{n}{k}$ denotes the Stirling number of the second kind, counting the number of partitions of an $n$-element set into $k$ non-empty blocks.) Order the red ordinary blocks and the blue ordinary blocks in $r!$ ways each, and choose the positions of the $m$ bars from the $r+1$ gaps determined by the ordinary pairs (note that repetition is allowed) in $\binom{r+1+m-1}{m}=\binom{r+m}{m}$ ways. Summing over $r$, we have \begin{align}\label{Cal-exp} \mathcal{C}_n^k(m)=\sum_{r=0}^{\min(n,k)}\binom{r+m}{m}(r!)^2\sts{n+1}{r+1}\sts{k+1}{r+1}. \end{align} Comparing this expression \eqref{Cal-exp} with the closed formula derived in \cite[(2.9)]{KST} for the symmetrized poly-Bernoulli numbers proves the theorem. \end{proof} It obviously follows from the definition that \[ \mathcal{C}_n^k(m) =\mathcal{C}_k^n(m). \] \begin{corollary} A \emph{labeled} $m$-barred Callan sequence is an $m$-barred Callan sequence in which the bars are labeled. The number of labeled $m$-barred Callan sequences of size $n \times k$ is given by $\scB_n^{(-k)}(m)$. Clearly, $\scB_n^{(-k)}(m)=\scB_k^{(-n)}(m)$. \end{corollary} Via the right-hand side of \eqref{Cal-exp} we extend the definition of $\mathcal{C}_n^k(m)$ to $n= 0$ or $k=0$; namely, $\mathcal{C}_n^0 (m) = \mathcal{C}_0^k (m) := 1$.
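As a sanity check (our addition, not part of the original text), the closed formula \eqref{Cal-exp} can be evaluated directly; the helper names below are our own:

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def barred_callan(n, k, m):
    """Number C_n^k(m) of m-barred Callan sequences of size n x k, via (Cal-exp)."""
    return sum(comb(r + m, m) * factorial(r) ** 2
               * stirling2(n + 1, r + 1) * stirling2(k + 1, r + 1)
               for r in range(min(n, k) + 1))

print(barred_callan(2, 2, 0))  # 14 Callan sequences, as in the first example
print(barred_callan(3, 1, 2))  # 22 two-barred sequences, as in the second example
```

At $m = 0$ the same function returns the poly-Bernoulli numbers $B_n^{(-k)}$, and the symmetry $\mathcal{C}_n^k(m)=\mathcal{C}_k^n(m)$ is visible numerically as well.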
\begin{theorem} For integers $n \geq 0$ and $k > 0$, the number $\mathcal{C}_n^k(m)$ satisfies the recurrence relation \[ \mathcal{C}_n^k(m) = \mathcal{C}_n^{k-1} (m) + \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j+1}^{k-1} (m) + m \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j}^{k-1} (m). \] \end{theorem} \begin{proof} We count $m$-barred Callan sequences of size $n \times k$ according to the following cases. We let $|_\ell$ denote $\ell$ consecutive bars. \begin{itemize} \item[(0)] $|_m (\kek{1}, \kek{2}, \dots, \kek{k}, \kek{*}; \piros{1}, \piros{2}, \dots, \piros{n}, \piros{*})$. \item[$(1)$] $(\kek{1}, \kek{B}; \piros{R})$ is the first ordinary Callan pair with $\kek{B} \neq \emptyset$. \item[$(2)_\ell$] $|_\ell (\kek{1}; \piros{R})$ is the first ordinary Callan pair. \item[$(3)_\ell$] $(\kek{B'}; \piros{R})|_\ell (\kek{1}, \kek{B}; \piros{R'})$ for some $(\kek{B'}; \piros{R})$ and $\kek{B} \neq \emptyset$. \item[$(4)_0$] $(\kek{B'}; \piros{R}) |_0 (\kek{1}; \piros{R'})$ for some $(\kek{B'}; \piros{R})$. \item[$(4)_\ell$] $(\kek{B'}; \piros{R'}) |_\ell (\kek{1}; \piros{R})$ for some $\ell > 0$ and $(\kek{B'}; \piros{R'})$. \end{itemize} The cases (0) and $(1)$ are in bijection with $m$-barred Callan sequences of size $n \times (k-1)$: simply delete $\kek{1}$. Hence the number of such sequences is $\mathcal{C}_n^{k-1} (m)$. Next, we consider the cases $(2)_0$, $(3)_\ell$, and $(4)_0$. In these cases, we delete $\kek{1}$ and $\piros{R}$, and insert the additional number $\piros{0}$ as follows. We assume that $\piros{R}$ contains $j$ elements ($1 \leq j \leq n$). \begin{itemize} \item[$(2)_0$] Insert $\piros{0}$ into the extra red block. \[ |_0 (\kek{1}; \piros{R}) |_{\ell'} (\kek{B'}; \piros{R'}) \cdots (\kek{B''}, \kek{*}; \piros{R''}, \piros{*}) \leftrightarrow |_{\ell'} (\kek{B'}; \piros{R'}) \cdots (\kek{B''}, \kek{*}; \piros{0}, \piros{R''}, \piros{*}). \] This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that $\piros{0}$ is in the extra pair.
\item[$(3)_\ell$] Replace $\piros{R}$ with $\piros{0}$. \[ (\kek{B'}; \piros{R}) |_\ell (\kek{1}, \kek{B}; \piros{R'}) \leftrightarrow (\kek{B'}; \piros{0}) |_\ell (\kek{B}; \piros{R'}). \] This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that $\piros{0}$ is alone in an ordinary pair. \item[$(4)_0$] Replace $\piros{R}$ with $\piros{0}$, and merge with $\piros{R'}$. \[ (\kek{B'}; \piros{R}) |_0 (\kek{1}; \piros{R'}) \leftrightarrow (\kek{B'}; \piros{0}, \piros{R'}). \] This gives $m$-barred Callan sequences of size $(n-j+1) \times (k-1)$ such that the block containing $\piros{0}$ also includes other red elements. \end{itemize} Clearly, the number of ways to create $\piros{R}$ with $j$ elements is ${n \choose j}$. Thus, the number of sequences in the cases $(2)_0$, $(3)_\ell$ ($0 \leq \ell \leq m$), and $(4)_0$ is \[ \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j+1}^{k-1}(m). \] Finally, consider the remaining cases $(2)_\ell$ and $(4)_\ell$ with $1 \leq \ell \leq m$. If we delete the pair $(\kek{1}; \piros{R})$, we obtain $m$-barred Callan sequences of size $(n-j) \times (k-1)$. However, we obtain each such sequence $m$ times, since $(\kek{1}; \piros{R})$ could have stood directly after any of the $m$ bars. Conversely, take an $m$-barred Callan sequence of size $(n-j) \times (k-1)$ and insert the pair $(\kek{1}; \piros{R})$ directly after any of the $m$ bars. Thus, these cases contribute \[ m \sum_{j=1}^n {n \choose j} \mathcal{C}_{n-j}^{k-1} (m). \] This concludes the proof. \end{proof} We give another type of recursion. Let $\widehat{\scB}_n^k(m;r)$ denote the number of $m$-barred Callan sequences of size $n \times k$ with $r$ ordinary pairs. Then we have the following recursion. \begin{theorem} For positive integers $n,k >0$ and $m \geq 0$, it holds that \begin{align*} \widehat{\scB}_n^{k}(m)=\sum_{j=1}^{n}\binom{n}{j}\sum_{r=0}^{\min{(n-j,k-1)}}(m+r+1)\widehat{\scB}_{n-j}^{k-1}(m;r)+ \sum_{r=0}^{\min{(n,k-1)}}(r+1)\widehat{\scB}_n^{k-1}(m; r).
\end{align*} \end{theorem} \begin{proof} Consider an $m$-barred Callan sequence. There are two cases: either $\kek{k}$ is the only blue element of an ordinary pair, or not, i.e., it lies in an ordinary pair together with other blue elements or in the extra pair. If it is alone in an ordinary pair, let $j$ be the number of red elements in this pair; since the pair is ordinary, $j$ is at least $1$. Choose the red elements of this Callan pair in $\binom{n}{j}$ ways. This new pair can be inserted into the arrangement of ordinary pairs and bars formed by an $m$-barred Callan sequence of size $(n-j) \times (k-1)$ with $r$ ordinary pairs, i.e., in $m+r+1$ ways. This gives the first part of our sum. On the other hand, if we insert $\kek{k}$ into any blue block that already contains a blue element, or into the extra blue block, this can be done in $r+1$ ways, which gives the second part of the sum. \end{proof} \section{Weighted barred Callan sequences} In this section we present a combinatorial interpretation that allows us to replace the parameter $m$, which in our previous model counted the bars inserted between the Callan pairs, by an arbitrary variable. To this end, we first introduce a weight on permutations. Let $\pi$ be a permutation $\pi=\pi_1\pi_2\ldots\pi_n \in \mathfrak{S}_n$. Consider the maximal sequence $\pi_{i_0}>\pi_{i_1}>\pi_{i_2}>\cdots>\pi_{i_r}$, where $\pi_{i_0}=\pi_1$ and, for all $j$, $\pi_{i_{j+1}}$ is the first element to the right of $\pi_{i_j}$ that is smaller than $\pi_{i_j}$. Let $w(\pi)=r$, i.e., the length of this maximal sequence reduced by $1$. In other words, scanning the elements of the permutation from left to right, mark the first element, and mark an element if it is smaller than the previously marked element. Then $w(\pi)$ is the number of marked elements reduced by one. For instance, for $\pi=\piros{8}\piros{6}9\piros{5}7\piros{2}34\piros{1}$ we have $w(\pi)=4$. Let $x^{\overline{n}}=x(x+1)(x+2)\cdots(x+n-1)$ denote the rising factorial. We have the following lemma.
\begin{lemma} \[ \sum_{\pi \in \mathfrak{S}_n} x^{w(\pi)} = (x+1)^{\overline{n-1}}. \] \end{lemma} \begin{proof} The left-hand side obeys the same recurrence as the right-hand side. The initial value is $\sum_{\pi \in \mathfrak{S}_{1}} x^{w(\pi)} = 1$. When we insert the element $n$ into a permutation $\pi\in \mathfrak{S}_{n-1}$, the weight increases by one if we place $n$ in front as the starting element, and is preserved otherwise. Hence, \[ \sum_{\pi\in \mathfrak{S}_{n}} x^{w(\pi)} = (x+n-1) \sum_{\pi \in \mathfrak{S}_{n-1}} x^{w(\pi)}. \] \end{proof} \begin{example} \begin{align*} &\textcolor{red}{1}234, \textcolor{red}{1}243, \textcolor{red}{1}324, \textcolor{red}{1}342, \textcolor{red}{1}423, \textcolor{red}{1}432, \textcolor{red}{21}34, \textcolor{red}{21}43, \textcolor{red}{2}3\textcolor{red}{1}4, \textcolor{red}{2}34\textcolor{red}{1}, \textcolor{red}{2}4\textcolor{red}{1}3, \textcolor{red}{2}43\textcolor{red}{1},\\ &\textcolor{red}{31}24, \textcolor{red}{31}42, \textcolor{red}{321}4, \textcolor{red}{32}4\textcolor{red}{1}, \textcolor{red}{3}4\textcolor{red}{1}2, \textcolor{red}{3}4\textcolor{red}{21}, \textcolor{red}{41}23, \textcolor{red}{41}32, \textcolor{red}{421}3, \textcolor{red}{42}3\textcolor{red}{1}, \textcolor{red}{431}2, \textcolor{red}{4321} \end{align*} We have \[ \sum_{\pi \in \mathfrak{S}_4} x^{w(\pi)} = x^3 + 6x^2 + 11x + 6 = (x+1)(x+2)(x+3). \] \end{example} We now define a weight on $1$-barred Callan sequences (from now on simply \emph{barred Callan sequences}) using the above weight on permutations. Let $\mathcal{B}_n^k$ denote the set of barred Callan sequences of size $n \times k$, and let $\alpha\in \mathcal{B}_n^k$. The natural order of the blocks in a partition $\sigma=B_1/B_2/\ldots/B_n$ is given by their least elements. For instance, the blocks of the partition $\{1,3,9\}/\{2,4,7\}/\{5,6\}/\{8\}$ are listed in the natural order.
We now consider the set of blue blocks of the barred Callan sequence with this natural order, and add the bar $|$ to this set as its smallest element. The weight $w(\alpha)$ is the weight of the permutation formed by the blue blocks and the bar in the barred Callan sequence $\alpha$. \begin{example} [All $1$-barred Callan sequences with $n = 2$ and $k = 2$ with indication of their weight] \begin{align*} &\underline{|}(\kek{1}, \kek{2}, \kek{*}; \piros{1}, \piros{2}, \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{1}, \piros{2})(\kek{*};\piros{*}), & &\underline{|}(\kek{1}; \piros{1}, \piros{2})(\kek{2}, \kek{*}; \piros{*}), & &\underline{|}(\kek{2}; \piros{1}, \piros{2})(\kek{1}, \kek{*}; \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{1}) (\kek{*}; \piros{2}, \piros{*}),\\ &\underline{|}(\kek{1}; \piros{1})(\kek{2},\kek{*}; \piros{2}, \piros{*}), & &\underline{|}(\kek{2}; \piros{1})(\kek{1}, \kek{*}; \piros{2}, \piros{*}), & &\underline{|}(\kek{1}, \kek{2}; \piros{2}) (\kek{*}; \piros{1}, \piros{*}), & &\underline{|}(\kek{1}; \piros{2}) (\kek{2}, \kek{*}; \piros{1}, \piros{*}), & &\underline{|}(\kek{2}; \piros{2})(\kek{1}, \kek{*}; \piros{1}, \piros{*}),\\ &\underline{|}(\kek{1}; \piros{1})(\kek{2}; \piros{2})(\kek{*}; \piros{*}), & &\underline{|}(\kek{2}; \piros{1})(\kek{1}; \piros{2})(\kek{*}; \piros{*}), & &\underline{|}(\kek{1};\piros{2})(\kek{2}; \piros{1})(\kek{*}; \piros{*}), & &\underline{|}(\kek{2};\piros{2})(\kek{1}; \piros{1}) (\kek{*}; \piros{*}), & &(\underline{\kek{1}}, \kek{2}; \piros{1}, \piros{2})\underline{|} (\kek{*};\piros{*}),\\ &(\underline{\kek{1}}; \piros{1}, \piros{2}) \underline{|} (\kek{2}, \kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}, \piros{2}) \underline{|} (\kek{1}, \kek{*}; \piros{*}), & &(\underline{\kek{1}}, \kek{2}; \piros{1}) \underline{|} (\kek{*}; \piros{2}, \piros{*}), & &(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{2},\kek{*}; \piros{2}, \piros{*}), & &(\underline{\kek{2}}; \piros{1}) \underline{|} (\kek{1}, \kek{*}; \piros{2}, \piros{*}),\\ &(\underline{\kek{1}}, \kek{2}; \piros{2}) \underline{|} (\kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{1}}; \piros{2}) \underline{|} (\kek{2}, \kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{2}}; \piros{2}) \underline{|} (\kek{1}, \kek{*}; \piros{1}, \piros{*}), & &(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{2}; \piros{2})(\kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}) \underline{|} (\kek{1}; \piros{2})(\kek{*}; \piros{*}),\\ &(\underline{\kek{1}};\piros{2}) \underline{|} (\kek{2}; \piros{1})(\kek{*}; \piros{*}), & &(\underline{\kek{2}};\piros{2}) \underline{|} (\kek{1}; \piros{1}) (\kek{*}; \piros{*}), & &(\underline{\kek{1}}; \piros{1}) (\kek{2}; \piros{2}) \underline{|} (\kek{*}; \piros{*}), & &(\underline{\kek{2}}; \piros{1}) (\underline{\kek{1}}; \piros{2}) \underline{|} (\kek{*}; \piros{*}), & &(\underline{\kek{1}};\piros{2}) (\kek{2}; \piros{1}) \underline{|} (\kek{*}; \piros{*}),\\ &(\underline{\kek{2}};\piros{2})(\underline{\kek{1}}; \piros{1}) \underline{|} (\kek{*}; \piros{*}). \end{align*} \end{example} \begin{definition} We define the \emph{Callan polynomial} for any positive integers $n$ and $k$ as \[C_n^k(x)=\sum_{\alpha\in \mathcal{B}_n^k}x^{w(\alpha)}.\] \end{definition} By the above example, we see that $C_2^2(x) = 2x^2 + 15x + 14$. \begin{proposition}\label{C-exp} The polynomials $C_n^k(x)$ are given by \[C_n^k(x)=\sum_{j=0}^{\min(n,k)}j!(x+1)^{\overline{j}}\sts{n+1}{j+1}\sts{k+1}{j+1}.\] \end{proposition} \begin{proof} This is straightforward from the definition of barred Callan sequences and the definition of the weight: with $j$ ordinary pairs, the red ordinary blocks can be ordered in $j!$ ways, while the blue blocks together with the bar form a permutation of $j+1$ items, which contributes $(x+1)^{\overline{j}}$ by the lemma above. \end{proof} Next, we derive the recursion by appropriately modifying the proof from the previous section. We define $C_n^0(x)=C_0^k(x)=1$. \begin{theorem}\label{Callan-poly} For any integers $n \geq 0$ and $k > 0$, we have \begin{align}\label{recperm} C_n^{k}(x)=C_{n}^{k-1}(x)+\sum_{j=1}^n\binom{n}{j}C_{n-j+1}^{k-1} (x) +x\sum_{j=1}^{n}\binom{n}{j}C_{n-j}^{k-1} (x).
\end{align} \end{theorem} \begin{proof} We split the set $\mathcal{B}_n^k$ into disjoint subsets as follows. Let $A$ denote the set of $\alpha\in \mathcal{B}_n^k$ such that $\kek{k}$ is in the extra pair with $\kek{*}$. Let $B$ denote the set of $\beta\in \mathcal{B}_{n}^k$ such that $\kek{k}$ is alone in the first Callan pair and there is no bar before it. Let $C$ denote the set of $\gamma\in \mathcal{B}_n^k$ such that $\kek{k}$ is in an ordinary block and, moreover, if it is alone in the first Callan pair, then the bar is before it. If $\kek{k}$ is in the extra blue block $B^*$, we simply take a barred Callan sequence with $k-1$ blue elements and $n$ red elements and insert $\kek{k}$ into the extra block. The extra block does not affect the weight. Thus, we have \[\sum_{\alpha\in A}x^{w(\alpha)}=C_{n}^{k-1}(x).\] We obtain a Callan sequence $\beta\in B$ by choosing in $\binom{n}{j}$ ways $j$ red elements for the first Callan pair $(\kek{k}; \piros{R_1})$, and constructing from the remaining $n-j$ red elements and $k-1$ blue elements a barred Callan sequence; the pair $(\kek{k}; \piros{R_1})$ is then simply glued to the front of the sequence. The weight increases by one, since the block $(\kek{k}; \piros{R_1})$ is the greatest among the blocks. Hence, we have \[\sum_{\beta\in B}x^{w(\beta)}=x\sum_{j=1}^{n}\binom{n}{j}C_{n-j}^{k-1}(x).\] We split the set $C$ into further disjoint subsets as follows. $C_1$ consists of the Callan sequences in which $\kek{k}$ is alone in its ordinary block and the bar is directly before it. $C_2$ consists of the Callan sequences in which $\kek{k}$ is alone, no bar is before it, and it is not in the first Callan pair. Finally, $C_3$ consists of the Callan sequences in which $\kek{k}$ is not alone in its blue block. Clearly, $C=C_1\dot{\cup}C_2\dot{\cup}C_3$. Choose again $j$ red elements in $\binom{n}{j}$ ways for $\kek{k}$ to create a block $(\kek{k};\widehat{\piros{R}})$.
Construct a barred Callan sequence with red element set $([n] \backslash \widehat{\piros{R}} )\cup \{\piros{0}\}$ and blue element set $[k-1]$. We have three cases. If $\piros{0}$ is in the extra block, delete $\piros{0}$ and insert $(\kek{k};\widehat{\piros{R}})$ directly after the bar. In this case we obtain the set $C_1$. The weight does not change, since $\kek{k}$ is ``greater'' than $|$. If $\piros{0}$ is in an ordinary block and there is no other red element in its block, merge $(\kek{k};\widehat{\piros{R}})$ into this Callan pair by $(\kek{B}; \piros{0}) \to (\kek{B}, \kek{k}; \widehat{\piros{R}})$, and do not change the position of the bar. This case gives the set $C_3$. The weight does not change, since the blue block so obtained contains elements smaller than $\kek{k}$, and the order of the blocks is determined by their least elements. If $\piros{0}$ is in an ordinary pair, say $(\kek{B}; \piros{0}, \piros{R})$, and this pair contains other red elements as well, then delete $\piros{0}$ and insert $(\kek{k};\widehat{\piros{R}})$ after this Callan pair, that is, $(\kek{B}; \piros{0}, \piros{R}) \to (\kek{B}; \piros{R}) (\kek{k}; \widehat{\piros{R}})$. If the bar was directly after the pair $(\kek{B}; \piros{0}, \piros{R})$, then delete it from there and place it after $(\kek{k};\widehat{\piros{R}})$. This case gives the set $C_2$. The weight does not change, since there is a block with smaller value (with respect to the order of the blocks) to the left of the block $(\kek{k};\widehat{\piros{R}})$; hence, $(\kek{k};\widehat{\piros{R}})$ does not affect the weight anymore. We have \[\sum_{\gamma\in C}x^{w(\gamma)}=\sum_{j=1}^n\binom{n}{j}C_{n-j+1}^{k-1}(x),\] which concludes the proof. \end{proof} \begin{corollary} For any integers $n, k \geq 0$ and $m \geq 0$, we have \[ C_n^k(m) = \mathcal{C}_n^k(m) = \widehat{\scB}_n^k (m).
\] \end{corollary} \section{Weighted alternative tableaux of rectangular shape} \label{s5} In this section we introduce a weight on alternative tableaux of rectangular shape and show that the polynomials so obtained are identical to the Callan polynomials; hence, the numbers of such tableaux are the normalized symmetrized poly-Bernoulli numbers. Alternative tableaux were introduced by Viennot \cite{Viennot}. The literature on alternative tableaux and related topics is extremely rich. For instance, a combinatorial interpretation of the generalized Dumont-Foata polynomial in terms of alternative tableaux was given in \cite{Josuat}. \begin{definition} \cite[Definition 1.2]{N11} An \textit{alternative tableau} of rectangular shape of size $n \times k$ is a rectangle with a partial filling of the cells with left arrows $\leftarrow$ and down arrows $\downarrow$, such that all cells pointed at by an arrow are empty. We let $\calT_n^k$ denote the set of all alternative tableaux of rectangular shape of size $n \times k$. \end{definition} \begin{example}\label{Ex-T} In Figure \ref{alttabl} we give an example of an alternative tableau of size $5 \times 6$; its weight is defined below. \begin{figure}[H] \centering \includegraphics[width=40mm]{image1} \caption{An alternative tableau of size $5 \times 6$} \label{alttabl} \end{figure} \end{example} We introduce a weight on alternative tableaux as follows. For each $\lambda \in \calT_n^k$, \begin{itemize} \item[1.] Consider the first (from the top) block of consecutive rows that contain left arrows $\leftarrow$. \item[2.] Count the number of left arrows $\leftarrow$ such that all $\leftarrow$ in the upper rows are located further to the right. \end{itemize} We let $w(\lambda)$ denote the number of such left arrows. For instance, the alternative tableau in Figure \ref{alttabl} has weight $w(\lambda) = 3$. In Figure \ref{altweight} we list all $31$ elements of $\calT_2^2$ with their weights.
\begin{figure}[H] \centering \includegraphics[width=160mm]{image3} \caption{Alternative tableaux of size $2\times 2$ with their weights}\label{altweight} \end{figure} We define the polynomial $T_n^k(x)$ by \[ T_n^k(x) := \sum_{\lambda \in \calT_n^k} x^{w(\lambda)}. \] From the above example, $T_2^2(x) = 2x^2 + 15x + 14$, which coincides with the Callan polynomial $C_2^2(x)$. In general, the following holds. \begin{theorem} We define $T_n^0(x) = T_0^k (x) = 1$. For any integers $n, k \geq 0$, the polynomial $T_n^k(x)$ coincides with the Callan polynomial $C_n^k(x)$. \end{theorem} \begin{proof} For each $\lambda \in \calT_n^k$, we let $R = R(\lambda)$ denote the rightmost column of $\lambda$. We split the set $\calT_n^k$ into disjoint subsets as follows. Let $A$ denote the set of $\lambda \in \calT_n^k$ such that $R$ contains no $\leftarrow$. Let $B$ denote the set of $\lambda \in \calT_n^k$ such that the top-right box is empty and $R$ contains at least one $\leftarrow$. Let $C$ denote the set of $\lambda \in \calT_n^k$ such that the top-right box contains $\leftarrow$. If $\lambda \in A$, then $R$ is empty or contains a single $\downarrow$. The remaining rectangle $\lambda^- := \lambda \backslash R$ defines a sub-rectangle in $\calT_n^{k-1}$, and we see that $w(\lambda) = w(\lambda^-)$. The number of patterns of $R$ is $n+1$ (empty or one $\downarrow$). Thus, we get \[ \sum_{\lambda \in A} x^{w(\lambda)} = (n+1) T_n^{k-1}(x). \] If $\lambda \in B$, then $R$ contains $j$ $\leftarrow$'s ($1 \leq j \leq n-1$). For each $j$, the number of patterns of $R$ is ${n \choose j+1}$ ($j$ $\leftarrow$'s and zero or one $\downarrow$). In the rectangle $\lambda \backslash R$, $j$ rows are deleted, and the remaining rows define a sub-rectangle $\lambda^- \in \calT_{n-j}^{k-1}$. In this case the weight satisfies $w(\lambda^-) = w(\lambda)$, and hence \[ \sum_{\lambda \in B} x^{w(\lambda)} = \sum_{j=1}^{n-1} {n \choose j+1} T_{n-j}^{k-1} (x) = \sum_{j=1}^{n-1} {n \choose j-1} T_j^{k-1} (x).
\] Finally, if $\lambda \in C$, then $R$ contains $(j+1)$ $\leftarrow$'s ($0 \leq j \leq n-1$). For each $j$, the number of patterns of $R$ is ${n \choose j+1}$. In the rectangle $\lambda \backslash R$, $(j+1)$ rows are deleted, and the remaining rows define a sub-rectangle $\lambda^- \in \calT_{n-j-1}^{k-1}$. In this case, the $\leftarrow$ in the corner affects the weight of $\lambda$, thus $w(\lambda^-) = w(\lambda) - 1$. Hence, \[ \sum_{\lambda \in C} x^{w(\lambda)} = x \sum_{j=0}^{n-1} {n \choose j+1} T_{n-j-1}^{k-1} (x) = x \sum_{j=0}^{n-1} {n \choose j} T_j^{k-1} (x). \] Therefore, we have \begin{align}\label{T-rec} T_n^k(x) = (n+1) T_n^{k-1}(x) + \sum_{j=1}^{n-1} {n \choose j-1} T_j^{k-1} (x) + x \sum_{j=0}^{n-1} {n \choose j} T_j^{k-1} (x), \end{align} which is equivalent to the recursion formula for the Callan polynomial in Theorem \ref{Callan-poly}. \end{proof} \begin{corollary} For any integers $n, k \geq 0$ and $m \geq 0$, we have \[ T_n^k(m) = \widehat{\scB}_n^k(m). \] \end{corollary} \section{Applications} First, we present a generalization of Ohno-Sasaki's result on poly-Bernoulli numbers \cite[Theorem 1]{OS20} (see also \cite{OS20+}): \begin{align}\label{OS-eq} \sum_{0 \leq i \leq \ell \leq m} (-1)^{i} \st{m+2}{i+1} B_{n+\ell}^{(-k)} = 0 \qquad (n \geq 0, m \geq k > 0). \end{align} The theorem below gives a new type of recurrence relation for the (normalized) symmetrized poly-Bernoulli numbers $\widehat{\scB}_n^k(m)$ with the index $k$ fixed (see also a related question in \cite[Remark 14.5]{AIK}). \begin{theorem}\label{OS} For any $n \geq 0, m > k \geq 0$, we have \[ \sum_{\ell = 0}^m (-1)^\ell \st{m+1}{\ell+1} C_{n+\ell}^k (x) = 0. \] \end{theorem} \begin{proof} By Proposition \ref{C-exp}, the left-hand side equals \begin{align*} \sum_{j=0}^k j! (x+1)^{\overline{j}} \sts{k+1}{j+1} \sum_{\ell = 0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1}.
\end{align*} It thus suffices to show the identity \begin{align}\label{iden} \sum_{\ell = 0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1} = 0 \qquad \text{for } j < m, \end{align} since $j \leq k < m$ by assumption. We prove Equation \eqref{iden} by induction on $n$. Let $\delta_{i,j}$ denote the Kronecker delta, defined by $\delta_{i,j} = 1$ if $i = j$ and $\delta_{i,j} = 0$ otherwise. For $n = 0$, by \cite[Proposition 2.6 (5.2)]{AIK}, we have \[ \sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{\ell+1}{j+1} = (-1)^j \delta_{j, m}, \] which equals $0$ if $j <m$. For positive $n$, by the recurrence relation of the Stirling numbers of the second kind, \[ \sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \sts{n+\ell+1}{j+1} = \sum_{\ell=0}^\infty (-1)^\ell \st{m+1}{\ell+1} \left(\sts{n+\ell}{j} + (j+1) \sts{n+\ell}{j+1} \right), \] which also equals $0$ by the induction hypothesis. \end{proof} For example, since $C_n^k(0) = \widehat{\scB}_n^k(0) = B_n^{(-k)}$, we get \begin{align}\label{Ber-rec} \sum_{\ell = 0}^m (-1)^\ell \st{m+1}{\ell+1} B_{n+\ell}^{(-k)} = 0 \qquad (n \geq 0, m > k \geq 0). \end{align} Our formula looks simpler than Ohno-Sasaki's formula (\ref{OS-eq}). Here we show the relation between the two results. Let $\mathrm{OS}(n)$ be the left-hand side of (\ref{OS-eq}). By a direct calculation, \begin{align*} \mathrm{OS}(n) - \mathrm{OS}(n+1) &= \sum_{0 \leq i \leq \ell \leq m} (-1)^{i} \st{m+2}{i+1} B_{n+\ell}^{(-k)} + \sum_{1 \leq i \leq \ell \leq m+1} (-1)^i \st{m+2}{i} B_{n+\ell}^{(-k)}\\ &= \sum_{\ell = 0}^m (-1)^\ell \st{m+2}{\ell+1} B_{n+\ell}^{(-k)} + \sum_{i = 1}^{m+1} (-1)^i \st{m+2}{i} B_{n +m+1}^{(-k)}. \end{align*} Since \begin{align*} \sum_{j=0}^n \st{n+1}{j+1} x^j = (x+1)^{\overline{n}} \end{align*} and $\st{n}{n} = 1$ hold, the last sum becomes $(-1)^{m+1} B_{n+m+1}^{(-k)}$, because $\sum_{i=1}^{m+1} (-1)^i \st{m+2}{i} = (-1)^{m+1}$.
Hence, \[ \mathrm{OS}(n) - \mathrm{OS}(n+1) = \sum_{\ell=0}^{m+1} (-1)^\ell \st{m+2}{\ell+1} B_{n+\ell}^{(-k)}, \] which coincides with the left-hand side of (\ref{Ber-rec}) with $m$ replaced by $m+1$. This shows that equation (\ref{OS-eq}) implies (\ref{Ber-rec}). Next, we give another recurrence formula. \begin{theorem}\label{diag-sum} For any integers $n, k \geq 0$, we have \[ \sum_{\ell=0}^n \st{n+1}{\ell+1} C_\ell^k (x) = n! \sum_{j=0}^{\min(n,k)} (x+1)^{\overline{j}} \sts{k+1}{j+1} {n+1 \choose j+1}. \] \end{theorem} \begin{proof} Using Proposition \ref{C-exp} again, the left-hand side becomes \[ \sum_{j=0}^\infty j! (x+1)^{\overline{j}} \sts{k+1}{j+1} \sum_{\ell=0}^n \st{n+1}{\ell+1} \sts{\ell+1}{j+1}. \] The inner sum is an expression for the Lah numbers, which satisfy \[ \sum_{\ell=0}^n \st{n+1}{\ell+1} \sts{\ell+1}{j+1} = {n \choose j} \frac{(n+1)!}{(j+1)!}. \] Both sides of this identity count the ways to partition $\{1,2,\ldots,n+1\}$ into $j+1$ linear arrangements (lists). To obtain such a set of lists, first split the $n+1$ elements into $\ell+1$ cycles, then partition the $\ell+1$ cycles into $j+1$ blocks; the product of the cycles determines the list in each block. On the other hand, take a permutation of $[n+1]$ and place bars to split it into $j+1$ pieces (from the $n$ places between the elements we choose $j$ for the bars, in $\binom{n}{j}$ ways). Since the order of the lists is irrelevant, we divide by the number of permutations of the lists, $(j+1)!$. The theorem follows. \end{proof} To apply the theorem in the special cases $x = 0$ and $x=1$, we recall the following identity. \begin{lemma}\label{Faul} For any integers $n > 0, k \geq 0$, we have \begin{align}\label{Seki} \sum_{j=0}^\infty j! \sts{k+1}{j+1} {n \choose j+1} = \sum_{i=1}^{n} i^k =:S_k(n).
\end{align} \end{lemma} \begin{proof} Let $s_k(n)$ denote the left-hand side of (\ref{Seki}), and consider the generating function \[ \sum_{k=0}^\infty s_k(n) \frac{t^k}{k!} = \sum_{j=0}^\infty j! {n \choose j+1} \sum_{k=0}^\infty \sts{k+1}{j+1} \frac{t^k}{k!} = e^t \sum_{j=0}^\infty {n \choose j+1} (e^t-1)^j. \] The last equality follows from the fact \cite[Proposition 2.6, (7)]{AIK} \[ \sum_{k=0}^\infty \sts{k+1}{j+1} \frac{t^k}{k!} = \frac{e^t (e^t-1)^j}{j!}. \] This implies that \[ \sum_{k=0}^\infty (s_k(n+1) - s_k(n)) \frac{t^k}{k!} = e^t \sum_{j=0}^n {n \choose j} (e^t-1)^j = e^{(n+1)t} = \sum_{k=0}^\infty (n+1)^k \frac{t^k}{k!}, \] that is, $s_k(1) = 1$ and $s_k(n+1) = s_k(n) +(n+1)^k$. Hence $s_k(n) = S_k(n)$. We can also prove the equation \[ s_k(n+1) - s_k(n) = \sum_{j=0}^\infty j! \sts{k+1}{j+1} {n \choose j} = (n+1)^k \] combinatorially. The term $(n+1)^k$ counts the words $w_1 w_2 \cdots w_k$ of length $k$ over an alphabet of $n+1$ distinct letters $\{0, 1, \dots, n\}$. Such a word can also be obtained as follows: add a special position $w_0 := 0$ and partition the $k+1$ positions of the word into $j+1$ subsets so that the entries on the positions of each subset coincide. The subset containing the position of $w_0$ receives the letter $0$; the letters of the remaining $j$ subsets can be chosen in $j! {n \choose j}$ ways. \end{proof} \begin{corollary} At $x = 0$, \begin{align}\label{B-Seki} \sum_{\ell=0}^n \st{n+1}{\ell+1} B_\ell^{(-k)} = n! S_k(n+1). \end{align} At $x = 1$, we also get \begin{align}\label{diag-sum-C} \sum_{\ell=0}^n \st{n+1}{\ell+1} \widehat{\scB}_\ell^k (1) = n! (n+1)^{k+1}. \end{align} \end{corollary} \begin{proof} The first equation (\ref{B-Seki}) immediately follows from Theorem \ref{diag-sum} and Lemma \ref{Faul}. For the second equation (\ref{diag-sum-C}), by Theorem \ref{diag-sum} again, we have \[ \sum_{\ell=0}^n \st{n+1}{\ell+1} \widehat{\scB}_\ell^k (1) = n! \sum_{j=0}^{\min(n,k)} (j+1)! \sts{k+1}{j+1} {n+1 \choose j+1} = n! (n+1)^{k+1}.
\] The last equality is obtained combinatorially by counting the term $(n+1)^{k+1}$ in a similar way as in the proof of Lemma \ref{Faul}; in this argument, we do not need the special position $w_0$. \end{proof} We also give a direct combinatorial proof of the identity (\ref{diag-sum-C}). Both sides of the equation count the number of permutations of $[n+(k+1)]$ such that all substrings of consecutive elements greater than $n$ are in increasing order. Such a permutation can be decoded as a pair $(\pi, w)$, where $\pi \in \mathfrak{S}_n$ is a permutation of $[n]$ and $w=w_1\ldots w_k w_{k+1}$ is a word of length $k+1$ over the alphabet $\{0,1,\ldots, n\}$. Let $\sigma$ be a permutation with the above property. Then the subsequence of the elements $\{1,2,\ldots, n\}$ is $\pi$, while $w_i$ is the number of elements to the left of $i+n$ that are smaller than or equal to $n$. Clearly, the number of such pairs is $n!(n+1)^{k+1}$. For instance, for $n=7$ and $k=6$ the permutation $\sigma={\bf 11}-6-{\bf 8-10}-3-1-{\bf 13-14}-7-5-4-2-{\bf 9-12}$ is decoded as the pair $(\pi;w)=(6-3-1-7-5-4-2;1710733)$. On the other hand, we obtain such a permutation $\sigma$ from Callan sequences as follows. A \emph{$C$-Callan permutation} is a Callan permutation starting with an element greater than $n$. Equivalently, a \emph{$C$-Callan sequence} is a ($0$-barred) Callan sequence of size $n \times k$ with an extra red block $R^* = \{\piros{*}\}$. It can be shown that $C$-Callan permutations are in bijection with $1$-barred Callan sequences, and hence, their number is $B_n^{(-k)} (1) = \widehat{\scB}_n^{k-1}(1)$. Take a $C$-Callan sequence with red elements $\{\piros{1}, \dots, \piros{\ell}, \piros{*}\}$ and blue elements $\{\kek{1}, \dots, \kek{k}, \kek{k+1}, \kek{*}\}$. By the definition of a $C$-Callan sequence, it ends with $(\kek{B}\cup\{ \kek{*}\}; \piros{*})$.
Construct a permutation of $\{0, 1, \dots, n\}$ with $\ell+1$ cycles $c_0, c_1, \dots, c_\ell$ in $\st{n+1}{\ell+1}$ ways. Let $c_i$ denote the $i$th cycle in the natural order of the cycles determined by their smallest elements. For instance, $c_0$ denotes the cycle that contains $0$. In the $C$-Callan sequence, replace $\kek{*} \piros{*}$ by $c_0$ and each red element $\piros{i}$ with $i > 0$ by the cycle $c_i$, and take the product of the cycles in each red block. Finally, delete $0$ and shift the blue elements by $n$, $\kek{i} \to i+n$. The permutation obtained in this way is $\sigma$. For instance, the $C$-Callan sequence $(\kek{4}; \piros{4}) (\kek{1}, \kek{3}; \piros{1}) (\kek{6}, \kek{7}; \piros{2}, \piros{3}) (\kek{2}, \kek{5}, \kek{*}; \piros{*})$ and the cycles $c_0 = (0), c_1 = (1, 3), c_2 = (2, 7), c_3 = (4,5), c_4 = (6)$ with $n = 7, k = 6, \ell = 4$ correspond to the above $\sigma$ by \begin{align*} (\kek{4}; \piros{4}) (\kek{1}, \kek{3}; \piros{1}) (\kek{6}, \kek{7}; \piros{2}, \piros{3}) &(\kek{2}, \kek{5}, \kek{*}; \piros{*}) \to \kek{4} (6) \kek{13} (1,3) \kek{67} (2,7)(4,5) \kek{25} (0)\\ &\to \kek{4}-6-\kek{1}-\kek{3}-3-1-\kek{6}-\kek{7}-7-5-4-2-\kek{2}-\kek{5}-0\\ &\to {\bf 11}-6-{\bf 8-10}-3-1-{\bf 13-14}-7-5-4-2-{\bf 9-12}. \end{align*} \section{Further problems} In Section \ref{s5}, we define the weight $w_{\leftarrow}(\lambda) := w(\lambda)$ on alternative tableaux by using left arrows $\leftarrow$. We let $w_\downarrow (\lambda)$ denote another weight on alternative tableaux corresponding to down arrows similarly. More precisely, for each $\lambda \in \calT_n^k$, the weight $w_\downarrow(\lambda)$ is defined as follows. \begin{itemize} \item[1.] Consider the first (from the right) consecutive columns that contain down arrows $\downarrow$. \item[2.] Count the number of down arrows $\downarrow$ such that all $\downarrow$ in the right-hand columns are located in the upper rows. 
\end{itemize} Figure \ref{altweight2} shows the list of all elements in $\calT_2^2$ with the weight $w_\downarrow(\lambda)$. \begin{figure}[H] \centering \includegraphics[width=160mm]{image4} \caption{Alternative tableaux of size $2 \times 2$ with their weights $w_\downarrow(\lambda)$}\label{altweight2} \end{figure} We define the two-variable polynomial \[ T_n^k(x,y) := \sum_{\lambda \in \calT_n^k} x^{w_\leftarrow(\lambda)} y^{w_\downarrow(\lambda)}. \] From the above example, $T_2^2(x,y) = x^2 y + xy^2 + x^2 + 7xy + y^2 + 7x + 7y + 6$. By simple observations, we also see that \[ T_n^1(x,y) = T_1^n (x,y) = (2^{n-1} -1)xy + 2^{n-1} x + 2^{n-1} y + 2^{n-1}. \] \begin{conjecture} We put \begin{align*} t_n^0(x,y) &= t_0^k(x,y) = 1,\\ t_n^1(x,y) &= t_1^n(x,y) = (2^{n-1}-1)xy + 2^{n-1}x + 2^{n-1}y + 2^{n-1} \end{align*} as initial values. The polynomials defined by \begin{align*} t_n^k(x,y) &:= \sum_{j=0}^n {n+1 \choose j} t_j^{k-1} (x,y) + (x-1) \sum_{j=0}^{n-1} {n \choose j} t_j^{k-1} (x,y)\\ & \qquad + (y-1) \sum_{j=0}^{n-1} {n \choose j} t_j^{k-1} (x,y) + (x-1)(y-1) \sum_{j=0}^{n-1} {n-1 \choose j} t_j^{k-1} (x,y) \end{align*} coincide with $T_n^k(x,y)$. \end{conjecture} We checked the coincidence for $(n,k) = (2,2), (3,2)$, and $(2,3)$ by hand. By comparing the recurrence formula at $x =1$ or $y=1$ with that in (\ref{T-rec}), we easily see that $t_n^k(x,1) = t_n^k(1,x) = T_n^k(x)$. Another direction is to consider the polynomial at other values, for instance at negative integers. By applying Theorem \ref{OS} for $n=-1$ formally, we get \[ ``C_k^{-1} (x) = -\frac{1}{m!} \sum_{\ell=1}^m (-1)^\ell \st{m+1}{\ell+1} C_k^{\ell-1} (x)". \] Here we used the symmetric property $C_n^k(x) = C_k^n(x)$. Recalling the condition $m > k \geq 0$ on $m$, and specializing by $m = k+1$, we tentatively define $C_k^{-1}(x)$ by \[ C_k^{-1}(x) := \frac{1}{(k+1)!} \sum_{\ell=0}^k (-1)^\ell \st{k+2}{\ell+2} C_k^\ell (x). 
\] \begin{proposition} For any integer $k \geq 0$, we have \[ C_k^{-1} (x) = -\frac{S_k(-x)}{x}, \] where $S_k(x)$ is the Seki-Bernoulli polynomial \cite[Section 1.2]{AIK} defined by \[ S_k(x) := \frac{1}{k+1} \sum_{j=0}^k {k+1 \choose j} B_j x^{k+1-j} \] with the classical Bernoulli number $B_k = B_k^{(1)} (0)$. \end{proposition} \begin{proof} Let $s_k(x) := x C_k^{-1}(-x)$. By Proposition \ref{C-exp}, \[ s_k(x) = x C_k^{-1} (-x) = -\frac{1}{(k+1)!} \sum_{j=0}^k j! (-x)^{\overline{j+1}} \sts{k+1}{j+1} \sum_{\ell=0}^\infty (-1)^\ell \st{k+2}{\ell+2} \sts{\ell+1}{j+1}. \] Since the inner sum over $\ell$ equals $(-1)^j (k+1)!/(j+1)!$ for $0 \leq j \leq k$ (we prove this in Lemma \ref{last-lem} below), we have \begin{align*} s_k(x) = \sum_{j=0}^k (-1)^{j+1} (-x)^{\overline{j+1}} \sts{k+1}{j+1} \frac{1}{j+1}. \end{align*} For any positive integer $n > 0$, \[ s_k(n) = \sum_{j=0}^k j! {n \choose j+1} \sts{k+1}{j+1}. \] By Lemma \ref{Faul}, this equals $S_k(n)$. Since $s_k(x)$ and $S_k(x)$ are polynomials that agree at every positive integer, they must coincide, that is, $s_k(x) = S_k(x)$, which concludes the proof. \end{proof} \begin{lemma}\label{last-lem} For integers $k \geq j \geq 0$, we have \[ \sum_{\ell=j}^k (-1)^{\ell+j} \st{k+2}{\ell+2} \sts{\ell+1}{j+1} = \frac{(k+1)!}{(j+1)!}. \] \end{lemma} \begin{proof} Consider the generating function of the left-hand side with respect to $k$. By \cite[Proposition 2.6 (7) and (9)]{AIK}, \begin{align*} \sum_{k=j}^\infty \sum_{\ell=j}^k (-1)^{\ell+j} \st{k+2}{\ell+2} \sts{\ell+1}{j+1} \frac{t^{k+1}}{(k+1)!} &= \sum_{\ell=j}^\infty (-1)^{\ell+j}\sts{\ell+1}{j+1} \sum_{k=\ell}^\infty \st{k+2}{\ell+2} \frac{t^{k+1}}{(k+1)!}\\ &= \frac{(-1)^{j+1}}{1-t} \sum_{\ell=j}^\infty \sts{\ell+1}{j+1} \frac{(\log(1-t))^{\ell+1}}{(\ell+1)!}\\ &= \frac{1}{(j+1)!} \frac{t^{j+1}}{1-t} = \sum_{k=j}^\infty \frac{(k+1)!}{(j+1)!} \frac{t^{k+1}}{(k+1)!}. \end{align*} Comparing the coefficients of $t^{k+1}/(k+1)!$ concludes the proof. 
\end{proof} One natural question is whether there exists a suitable generalization of the Callan polynomial $C_n^k(x)$ or the symmetrized poly-Bernoulli numbers $\widehat{\scB}_n^k(m)$ for negative integers $k$ and $m$. It would be interesting to investigate the polynomials that arise from the weight function on alternative tableaux of other special shapes or on arbitrary shapes. In this paper we did not provide bijections between our models. It would be interesting to find simple bijections, especially between alternative tableaux and the Callan sequences. There should also exist combinatorial proofs of Theorem \ref{OS} and related results. \section*{Acknowledgements} We would like to thank Yasuo Ohno and Yoshitaka Sasaki for sending us their preprint and for helpful comments. Further, we thank Sithembele Nkonkobe for helpful conversations. The second author was supported by JSPS KAKENHI Grant Number 20K14292.
\section{Background} We consider the standard Markov Decision Process (MDP) setting~\cite{puterman1994markov}, in which the environment is specified by a tuple $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \mu_0, \gamma \rangle$, consisting of a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward distribution function $\mathcal{R}$, a transition probability function $\mathcal{T}$, an initial state distribution $\mu_0$, and a discount $0\le \gamma<1$. A policy $\pi$ interacts with the environment iteratively, starting with an initial state $s_0 \sim \mu_0$. For simplicity, we restrict our presentation to the infinite-horizon setting, although all results apply in the finite-horizon setting as well. In this work, we largely focus on estimation of the \emph{value} of a given target policy $\pi$, defined as the expected accumulated reward of $\pi$ in $\mathcal{M}$, averaged over time via $\gamma$-discounting: \begin{equation} \avgstep(\pi) := (1-\gamma)\cdot\mathbb{E}\left[\left.\sum_{t=0}^\infty \gamma^t\cdot r_t ~\right|~ s_0\sim\mu_0,a_t\sim\pi(s_t),r_t\sim\mathcal{R}(s_t,a_t),s_{t+1}\sim\mathcal{T}(s_t,a_t)\right]. \end{equation} We consider the \emph{off-policy} setting, in which we do not have explicit knowledge of $\mathcal{R},\mathcal{T},\mu_0$. Rather, we only have access to a finite empirical dataset of experience samples from these distributions. More concretely, we have a dataset $\mathcal{D}_n:=\{(s_0^{(j)}, s^{(j)}, a^{(j)}, r^{(j)}, s^{\prime(j)})\}_{j=1}^n$ consisting of $n$ tuples $(s_0, s, a, r, s')$ independently sampled via \begin{equation} \label{eq:def-d} s_0\sim\mu_0~;~~~ (s, a)\sim d^\mathcal{D}~;~~~ r\sim \mathcal{R}(s,a)~;~~~ s'\sim\mathcal{T}(s,a), \end{equation} where $d^\mathcal{D}$ is some unknown distribution over state-action pairs. 
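To make the discounted value $\avgstep(\pi)$ concrete, the following sketch estimates it by truncated rollouts on a hypothetical two-state deterministic MDP. The MDP, policy, and all names here are ours, chosen only to illustrate the definition; they do not appear in the paper.

```python
GAMMA = 0.9

def step(s, a):
    """Toy deterministic dynamics: action 0 stays, action 1 switches state.

    The reward is 1.0 whenever the next state is state 1."""
    s_next = s if a == 0 else 1 - s
    r = 1.0 if s_next == 1 else 0.0
    return r, s_next

def policy(s):
    return 1  # illustrative target policy: always switch

def rho(n_episodes=1, horizon=500):
    """Estimate (1 - gamma) * E[sum_t gamma^t r_t] via truncated rollouts."""
    total = 0.0
    for _ in range(n_episodes):
        s = 0  # mu_0 is a point mass on state 0 in this toy example
        ret = 0.0
        for t in range(horizon):
            r, s = step(s, policy(s))
            ret += GAMMA ** t * r
        total += ret
    return (1 - GAMMA) * total / n_episodes
```

Because this toy MDP alternates between the two states, the rewards are $1, 0, 1, 0, \dots$ and the discounted value works out to $1/(1+\gamma)$, which the truncated rollout recovers up to $\gamma^{\mathrm{horizon}}$ error.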
We will abuse notation at times and use $d^\mathcal{D}(s_0,s,a,r,s')$ to denote the joint distribution on tuples and $d^\mathcal{D}(s_0),d^\mathcal{D}(r,s'|s,a)$ the appropriately marginalized and conditioned distributions. The finite dataset $\mathcal{D}_n$ induces its own empirical distribution over tuples, which we denote \begin{equation} d^{\Dnset}:= \frac{1}{n}\sum_{j=1}^n \delta_{(s_0^{(j)}, s^{(j)}, a^{(j)}, r^{(j)}, s^{\prime(j)})}, \end{equation} where $\delta_\chi$ is the Dirac delta distribution centered at $\chi$. The empirical distribution over tuples $d^{\Dnset}$ in turn determines an empirical initial state distribution ${\mu}_0^{\Dnset}(s_0):=d^{\Dnset}(s_0)$, an empirical reward distribution function ${\Reward}_{\Dnset}(r|s,a):=d^{\Dnset}(r|s,a)$, and an empirical transition probability function ${\Transition}_{\Dnset}(s'|s,a):=d^{\Dnset}(s'|s,a)$. To appropriately define ${\Reward}_{\Dnset},{\Transition}_{\Dnset}$ when $d^{\Dnset}$ has poor coverage of the state or action space, we define ${\Reward}_{\Dnset}(r|s, a):= \Reward_{\mathrm{prior}}(r|s,a)$, ${\Transition}_{\Dnset}(s'|s,a):= \Transition_{\mathrm{prior}}(s'|s,a)$ for all $s,a$ such that $d^{\Dnset}(s, a)=0$, for some fixed \emph{prior} distribution functions $\Reward_{\mathrm{prior}}, \Transition_{\mathrm{prior}}$. The \emph{direct method} (DM) uses the empirically observed ${\mu}_0^{\Dnset},{\Reward}_{\Dnset},{\Transition}_{\Dnset}$ to estimate $\avgstep(\pi)$ as \begin{equation*} \hspace{-2mm} \avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n) := (1-\gamma)\cdot\mathbb{E}\left[\left.\sum_{t=0}^\infty \gamma^t\cdot r_t ~\right|~ s_0\sim{\mu}_0^{\Dnset},a_t\sim\pi(s_t),r_t\sim{\Reward}_{\Dnset}(s_t,a_t),s_{t+1}\sim{\Transition}_{\Dnset}(s_t,a_t)\right]. 
\end{equation*} The direct method may be implemented explicitly through a \emph{model-based} (MB) procedure, where ${\mu}_0^{\Dnset},{\Reward}_{\Dnset},{\Transition}_{\Dnset}$ are either determined analytically or approximated by parametric models via maximum likelihood. Then, $\avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)$ is approximated by Monte Carlo trajectories of $\pi$ rolled out using these models. Alternatively, DM can also be implemented in a model-free fashion via \emph{$Q$-evaluation} (QE). In this approach, a $Q$-value function $Q:\mathcal{S}\times\mathcal{A}\to\mathbb{R}$ is iteratively learned via the Bellman backup procedure, \vspace{-3mm} \begin{equation} \label{eq:qe-backup} Q^{(i+1)}(s,a) \leftarrow \mathbb{E}_{d^{\Dnset}(r,s'|s,a),a'\sim\pi(s')}\left[r + \gamma Q^{(i)}(s',a')\right]. \end{equation} Ignoring issues of function approximation, this procedure converges to a fixed point $\widehat{Q}^\pi=\lim_{i\to\infty} Q^{(i)}$, which is the $Q$-value function of $\pi$ under the empirical MDP.\footnote{When $d^{\Dnset}$ has poor coverage, the fixed point $\hat{Q}^\pi$ depends on the initial $Q$-values $Q^{(0)}$. The fixed point $\hat{Q}^\pi$ is still the $Q$-value function of $\pi$ under the empirical MDP, where the prior reward and transition functions $\Reward_{\mathrm{prior}},\Transition_{\mathrm{prior}}$ are \emph{implicitly} defined by the initialization of $Q^{(0)}$.} Once this fixed point is determined, the value of $\pi$ may be approximated as $\avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)= (1-\gamma)\cdot\mathbb{E}_{d^{\Dnset}(s_0),a_0\sim\pi(s_0)}[\widehat{Q}^\pi(s_0,a_0)]$. When the iterative procedure in~\eqref{eq:qe-backup} is performed via a regression over parameterized $Q$, this procedure is known as \emph{fitted $Q$-evaluation} (FQE). The reader may look to~\cite{voloshin2019empirical} for a review of a variety of instantiations of the direct method. 
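In the tabular case, the $Q$-evaluation backup above reduces to a fixed-point iteration over arrays. The sketch below is a minimal illustration, assuming hypothetical empirical quantities `R_hat` (mean rewards), `T_hat` (transition probabilities), and a policy matrix `pi`; none of these names come from the paper.

```python
import numpy as np

def q_evaluation(R_hat, T_hat, pi, gamma=0.9, n_iters=500):
    """Iterate the Bellman backup Q <- R_hat + gamma * T_hat V to its
    fixed point, where V(s') = E_{a' ~ pi(s')}[Q(s', a')].

    R_hat: (S, A) empirical mean rewards; T_hat: (S, A, S) empirical
    transition probabilities; pi: (S, A) target-policy probabilities.
    """
    Q = np.zeros_like(R_hat, dtype=float)
    for _ in range(n_iters):
        V = (pi * Q).sum(axis=1)       # expected next-state value under pi
        Q = R_hat + gamma * T_hat @ V  # one application of the backup
    return Q

def rho_dm(Q, mu0, pi, gamma=0.9):
    """DM value estimate: (1 - gamma) * E_{s0 ~ mu0, a0 ~ pi}[Q(s0, a0)]."""
    return (1 - gamma) * (mu0 @ (pi * Q).sum(axis=1))
```

Since the backup is a $\gamma$-contraction, a few hundred iterations suffice in small tabular problems; FQE replaces the exact array update with a regression step.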
Although DM via either MB or QE is straightforward, it generally yields \emph{biased} estimates of $\avgstep(\pi)$: \begin{equation} \avgstep(\pi) \ne \mathbb{E}_{\mathcal{D}_n}[\avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)]. \end{equation} Still, unbiased estimates are not strictly necessary in practical risk-sensitive applications, where accurate confidence intervals matter more than the bias of any single point estimate. In the statistics literature, Efron's bootstrap (Algorithm~\ref{alg:bootstrap}) is widely used to provide asymptotically accurate confidence intervals, even when point estimates of the statistic are biased, and doing the same for DM methods has been proposed in the past~\cite{hanna2017bootstrapping}. However, Efron's bootstrap is not always guaranteed to yield accurate confidence intervals~\cite{putter2012resampling,abadie2008failure}. In this paper, we will investigate conditions under which Efron's bootstrap applied to DM is guaranteed to yield accurate confidence intervals, and suggest mechanisms to improve the validity of the confidence intervals when these conditions do not hold. Before getting into our main contributions, we list a few useful assumptions. For ease of exposition, we state these assumptions and our theoretical results with respect to \emph{countable} sets $\mathcal{S}$ and $\mathcal{A}$; this allows us to avoid technical details from measure theory. \begin{assumption}[Bounded rewards] \label{assumption:bounded_rewards} The rewards of the MDP are bounded by some finite constant $R_{\mathrm{max}}$: $\nbr{r}_\infty\le R_{\mathrm{max}}$. 
\end{assumption} For the next assumption, we make use of the discounted on-policy distribution $d^\pi$, which measures the likelihood of the policy $\pi$ encountering state-action pair $(s,a)$ when interacting with $\mathcal{M}$~\cite{nachum2020reinforcement}: \begin{equation} \label{eq:dpi} d^\pi(s,a) := (1-\gamma)\cdot\sum_{t=0}^\infty \gamma^t\cdot \Pr[s_t=s,a_t=a~|~\mu_0,\pi,\mathcal{R},\mathcal{T}]. \end{equation} \begin{assumption}[Sufficient data coverage] \label{assumption:bounded_ratios} There exists $\epsilon>0$ such that for any $(s,a)$, $d^\pi(s,a)>0$ implies $d^\mathcal{D}(s,a)>\epsilon$. \end{assumption} As we will discuss later, Assumption~\ref{assumption:bounded_ratios} is very strong and often not satisfied in practice (e.g., in infinite state or action spaces). \begin{algorithm} \caption{Efron's non-parametric, bias-corrected bootstrap~\cite{efron1987better}.} \begin{algorithmic} \label{alg:bootstrap} \STATE {\bf Inputs}: A functional $F$, a desired confidence $1-\alpha$, a finite sample dataset $\mathcal{D}_n:=\{(s_0^{(j)}, s^{(j)}, a^{(j)}, r^{(j)}, s^{\prime(j)})\}_{j=1}^n$, number of bootstraps $b$ to use for percentile calculation. \vspace{2mm} \STATE \emph{\#\# Note: $F$ is a function from distributions over $(s_0,s,a,r,s')$ to $\mathbb{R}$. When applied to a finite dataset $\widetilde{\mathcal{D}}$, it is understood to be applied to the empirical distribution $d^{\Dtilde}$ determined by $\widetilde{\mathcal{D}}$.} \vspace{2mm} \STATE Compute empirical estimate $\hat{y}:= F(\mathcal{D}_n)$. \STATE Create $b$ bootstrapped datasets $\{\mathcal{D}_n^{(k)}\}_{k=1}^b$, each of $n$ elements sampled uniformly from $\mathcal{D}_n$. \STATE Compute bootstrapped estimates $\hat{y}_1:= F(\mathcal{D}_n^{(1)}),\dots,\hat{y}_b:= F(\mathcal{D}_n^{(b)})$. \STATE Compute $\alpha/2$ and $1-\alpha/2$ quantiles $z_{\alpha/2},z_{1-\alpha/2}$ of $\{\hat{y}_k - \hat{y}\}_{k=1}^b$. 
\vspace{2mm} \STATE {\bf Return} $C:= [\hat{y} - z_{1-\alpha/2}, \hat{y} - z_{\alpha/2}]$. \end{algorithmic} \end{algorithm} \section{Conclusion} We have investigated the validity of Efron's bootstrap for computing confidence intervals with respect to the direct method (DM) for off-policy evaluation. Our theoretical results show that Efron's bootstrap is valid given that specific conditions -- sufficient data size and sufficient coverage -- are satisfied. While these conditions are often not satisfied in practice, there are a number of heuristic mechanisms that can be employed to mitigate their effects, although at a cost of overly conservative or biased intervals. Still, empirically we find that these mechanisms can be used to yield impressive performance for OPE in challenging environments. In the future, we hope to use the ideas and techniques presented here and apply them to policy optimization problems, where safety is also a key concern. \section{Experiments} We evaluate our methods first in a discrete tabular domain, where we investigate how well the coverage of the estimated bootstrap intervals matches the intended coverage and show how reward noise can assist in low-data regimes. Sufficient coverage is not much of an issue in finite domains,\footnote{In finite domains, Assumption~\ref{assumption:bounded_ratios} reduces to $d^\pi(s,a)>0\Rightarrow d^\mathcal{D}(s,a)>0$.} and so we continue to a more difficult set of continuous control tasks from OpenAI Gym~\cite{brockman2016openai}, where we evaluate the use of appropriately regularized function approximators in conjunction with bootstrapping and noisy rewards. \subsection{Tabular Tasks} We use Frozen Lake as a discrete domain for tabular experiments. In this environment, the agent navigates in a discrete world from a start state to a goal state. The environment dynamics are stochastic and some actions lead to episode terminations. We use $\gamma=0.999$. 
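The interval construction of Algorithm~\ref{alg:bootstrap} used throughout these experiments can be sketched as follows; this is a minimal percentile version with the same recentering step, with function and variable names of our choosing.

```python
import numpy as np

def bootstrap_interval(data, F, alpha=0.05, b=1000, seed=0):
    """Algorithm 1 in miniature: resample the dataset with replacement,
    recenter the bootstrap estimates, and invert the quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    y_hat = F(data)
    # b bootstrapped datasets, each of n elements sampled uniformly from data
    deltas = np.array(
        [F(data[rng.integers(0, n, size=n)]) - y_hat for _ in range(b)]
    )
    z_lo, z_hi = np.quantile(deltas, [alpha / 2, 1 - alpha / 2])
    return y_hat - z_hi, y_hat - z_lo  # C := [y - z_{1-a/2}, y - z_{a/2}]
```

In the experiments the functional $F$ is the DM estimate $\avgstep_{\mathrm{DM}}(\pi|\cdot)$; the sketch works for any scalar statistic of the dataset.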
We use a target policy that is near-optimal in this domain. We collect an experience dataset using a behavior policy derived from the target policy by injecting $\epsilon$-greedy noise with $\epsilon=0.2$ (this reduces the value of the policy $\avgstep(\pi)$ from about $0.0007$ to about $0.0002$). For this task, DM policy evaluation via either MB or QE can be solved exactly with tabular methods, so we plot a single variant labelled DM. We present empirical results in Figure~\ref{fig:lake}. We plot the results of using Efron's bootstrap with DM to construct confidence intervals with confidence $1-\alpha$ across a number of dataset sizes. The results here show empirical coverage of the estimated confidence intervals, as measured over 200 randomly sampled datasets (each dataset is then resampled repeatedly for computing bootstrap estimates). We find that DM with bootstrapping is able to achieve near-correct empirical coverage as the dataset size grows. As suggested by the theory, the bootstrap typically underestimates the desired coverage, and this is severe in low-data regimes (fewer than $50$ episodes). \begin{figure}[h] \begin{center} \setlength{\tabcolsep}{0pt} \renewcommand{\arraystretch}{0.7} \begin{tabular}{ccc} \includegraphics[width=0.3\columnwidth]{figs/tabular/lake50b.png} & \includegraphics[width=0.3\columnwidth]{figs/tabular/lake75b.png} & \includegraphics[width=0.3\columnwidth]{figs/tabular/lake90b.png} \end{tabular} \end{center} \vspace{-5mm} \caption{Results on Frozen Lake across different confidences $1-\alpha$. Each plot shows the proportion of times the estimated confidence interval covers the true value of the policy, as measured over 200 separate trials.} \label{fig:lake} \end{figure} We show the results of using noisy rewards to combat this low-data issue. We perturb the rewards with $R_{\mathrm{noise}}=0.25\cdot\sqrt{\mathrm{Var}_{\mathcal{D}_n}[r]}$. 
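The reward perturbation can be sketched as below. Only the scale $R_{\mathrm{noise}}=0.25\cdot\sqrt{\mathrm{Var}_{\mathcal{D}_n}[r]}$ comes from the text; the zero-mean Gaussian form of the injected noise is our assumption for illustration.

```python
import numpy as np

def perturb_rewards(rewards, scale=0.25, seed=0):
    """Add zero-mean noise with standard deviation
    R_noise = scale * sqrt(Var[r]) to every reward in the dataset.

    Gaussian noise is an illustrative assumption, not specified in the
    text; only the scale relative to the empirical reward variance is."""
    rng = np.random.default_rng(seed)
    rewards = np.asarray(rewards, dtype=float)
    r_noise = scale * np.sqrt(np.var(rewards))
    return rewards + rng.normal(0.0, r_noise, size=rewards.shape)
```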
The resulting difference in performance in the low-data regime is striking; DM with noisy bootstrap is able to yield near-optimal coverage, although, as expected, it typically slightly overestimates the desired coverage. As a point of comparison, we plot a number of other high-confidence policy evaluation methods: IS with bootstrapping, IS with empirical Bernstein's, IS with Student's $t$, IS with Hoeffding's, and doubly robust (DR) IS with bootstrapping (see~\cite{Thomas15HCPE,Thomas15HCPI,hanna2017bootstrapping}). We find that all of these previous methods mostly either severely underestimate or severely overestimate the desired coverage. Our proposed noisy rewards could potentially benefit some of these baselines as well (e.g., DR bootstrap); this is a promising avenue for future work. \begin{figure}[h] \begin{center} \includegraphics[scale=0.35]{figs/mujoco/DM-Reacher-v2.pdf} \includegraphics[scale=0.35]{figs/mujoco/DM-HalfCheetah-v2.pdf} \includegraphics[scale=0.35]{figs/mujoco/DM-Hopper-v2.pdf} \end{center} \vspace{-5mm} \caption{Policy evaluation on continuous domains. For all methods, we plot estimated 95\% confidence intervals. For lower and upper bounds we plot a median and $25$th and $75$th percentiles (black vertical lines) over 5 seeds. We also plot values for the target policy value $\rho(\pi)$ and the behavior policy value $\rho(\mathcal{D})$. For FQE with noisy bootstrapping, the noise scale corresponds to a coefficient applied to the standard deviation of observed rewards in the dataset. Some of the variants (FQE without weight decay, IS bootstrap) at times produce intervals which are wholly outside the plotted range. } \label{fig:mujoco} \end{figure} \subsection{Continuous Control Tasks} We now evaluate the use of bootstrapping on continuous control tasks from OpenAI gym~\cite{brockman2016openai}. Due to high computational demands, we focus on Reacher, HalfCheetah, and Hopper. 
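In these continuous tasks, FQE amounts to repeatedly regressing Bellman targets under a regularized function class. The sketch below uses linear function approximation with L2 weight decay purely for illustration; the feature map `phi` and all names are hypothetical, and the paper's actual implementation uses neural networks.

```python
import numpy as np

def fitted_q_evaluation(phi, dataset, pi_action, gamma=0.99,
                        weight_decay=1e-5, n_iters=200):
    """FQE sketch: alternate (i) forming regression targets
    r + gamma * Q(s', pi(s')) from the current weights and (ii) solving
    the L2-regularized least-squares problem for the new weights.

    phi(s, a) -> feature vector; dataset: list of (s, a, r, s') tuples;
    pi_action(s) -> the target policy's action at s.  All names here are
    illustrative assumptions, not the paper's implementation.
    """
    X = np.array([phi(s, a) for (s, a, r, s1) in dataset])
    w = np.zeros(X.shape[1])
    reg = weight_decay * np.eye(X.shape[1])  # weight decay as ridge penalty
    for _ in range(n_iters):
        y = np.array([r + gamma * phi(s1, pi_action(s1)) @ w
                      for (s, a, r, s1) in dataset])
        w = np.linalg.solve(X.T @ X + reg, X.T @ y)
    return w
```

The ridge term plays the role of the weight decay discussed below: it keeps the regression well-posed even when parts of the state-action space are poorly covered by the dataset.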
We follow a protocol similar to \cite{nachum2019dualdice}. First, we generate a near-optimal policy by training SAC~\cite{haarnoja2018soft}. The target policy $\pi$ is set to be this near-optimal policy with fixed variance $\sigma^2=0.01$. The datasets $\mathcal{D}_n$ are sampled via a sub-optimal policy derived from the near-optimal policy with its variance replaced by a fixed quantity $\sigma^2$ ($\sigma=0.5$ or $\sigma=0.75$ depending on the task). We train all networks for one million steps using stochastic gradient descent via the Adam optimizer~\cite{kingma2014adam} with learning rate $3\cdot10^{-4}$ and a minibatch size of $256$. As a form of regularization to combat issues with sufficient coverage, we apply weight decay (L2 regularization) equal to $10^{-5}$ to all methods, unless otherwise specified. We present the computed intervals of FQE and MB in Figure~\ref{fig:mujoco}. Focusing first on the effect of reward noise, we look at the ablation presented by the three FQE variants in these plots (see the appendix for an ablation over MB variants). In extreme low-data regimes (10 trajectories), the variance of vanilla FQE intervals is large and coverage of the true value suffers (especially for Reacher). With increased reward noise scales, coverage of the true value improves, but at the cost of a wider interval at times. Next, we consider the issue of sufficient coverage. By default, we apply L2 regularization to FQE. In Figure~\ref{fig:mujoco} we present a variant without L2 regularization. We find the absence of this regularization to have a detrimental effect on performance. At times, the intervals computed by unregularized FQE are so inaccurate that they are outside the range of the plot. We find that the regularized version of FQE exhibits more stable performance. We found regularization of MB to also be crucial. The MB method plotted here uses L2 regularization on the weights and clips states and rewards generated during model-based rollouts. 
Although not plotted, we found that without these regularizations, the MB bootstrap intervals diverge. In some instances, we can see the consequences of these strong regularizations in terms of biased intervals that do not cover the true value, such as in HalfCheetah. Overall, we conclude that DM approaches when using bootstrapping and our proposed mechanisms can yield strong performance in these difficult domains. Between FQE and MB, FQE appears to be better suited for these domains, although both methods show substantial improvement over existing approaches (IS with bootstrapping).\footnote{DR with bootstrap produces even worse intervals, and so we do not plot it.} \section{Introduction} Providing accurate and trustworthy estimates of a policy's long term value in a decision-making environment is an important problem in reinforcement learning (RL). Typically, due to cost or safety constraints, one must perform this estimation without actually running the policy in the live environment. Instead, one must predict the value of the policy using only a limited set of experience of some other logging (or behavior) policies acting in the sequential environment. This problem is generally referred to as \emph{off-policy evaluation} (OPE)~\cite{precup2000eligibility}. The OPE problem is especially relevant to many practical domains, such as health~\cite{murphy2001marginal,liao2019off}, education~\cite{mandel2014offline}, and recommendation systems~\cite{Swaminathan17OP}, where accurate evaluation of a new policy is critical to maximize safety and minimize risks associated with deployment of a new policy~\cite{thomas2015safe}. Perhaps the most straightforward approach to OPE is to use the given finite dataset of experience to determine the environment's empirically observed initialization, transition, and reward probabilities, and then to evaluate the expected value of the target policy in this \emph{empirical} environment. 
This straightforward approach is known as the \emph{direct method} (DM)~\cite{dudik2011doubly,voloshin2019empirical}. In addition to encompassing \emph{model-based} (MB) methods~\cite{Thomas16DE,hanna2017bootstrapping}, this general paradigm is also implicitly implemented by \emph{$Q$-evaluation} (QE), or its parametric counterpart \emph{fitted $Q$-evaluation} (FQE)~\cite{paine2020hyperparameter,voloshin2019empirical,bradtke1996linear}. Indeed, the mathematical equivalence of QE and MB, even under certain function approximation schemes, has been recently demonstrated~\cite{duan2020minimax}. Although the DM paradigm is a straightforward and intuitive approach, it is traditionally seen as undesirable because it yields \emph{biased} estimates. That is, the estimates returned by QE or MB over multiple experiments on randomly sampled finite datasets \emph{are not} centered around the true value of the target policy. This fact has led much of the OPE literature to focus on a variety of importance sampling (IS) based approaches~\cite{precup2000eligibility,li2011unbiased,jiang2015doubly,liu2018breaking,nachum2019dualdice}, for which unbiased estimates are feasible. However, the ability to provide unbiased estimates is not necessary in many practical applications. Rather, in many practical scenarios where \emph{safety} is a key concern~\cite{thomas2015safe}, the ability to provide unbiased estimates is less relevant than the need for \emph{high-confidence} and accurate lower or upper bounds on the true value of the target policy. Efron's bootstrap~\cite{efron1987better} is a well-known method in statistics for deriving confidence intervals from biased estimates, and so it may be a promising technique for use in conjunction with DM~\cite{hanna2017bootstrapping}. 
Still, while bootstrapping is a simple approach widely used in statistics, it is not always guaranteed to yield accurate confidence intervals~\cite{putter2012resampling, abadie2008failure}, and in the case of MB or QE, where the OPE estimate is a complex function of the input data, it is not immediately clear whether Efron's bootstrap would be valid. In this paper, we investigate the validity of Efron's bootstrap applied to DM. We derive theoretical guarantees that show that, if certain conditions are satisfied, Efron's bootstrap applied to DM yields asymptotically accurate confidence intervals. The conditions we identify -- namely, sufficient sample size and sufficient coverage of the underlying experience data distribution -- may not hold in many practical scenarios. Therefore, we use insights from our derivations to suggest mechanisms -- noisy rewards and regularization -- for mitigating the effect of these in practice. We present empirical results in tabular settings that show the validity of our theory and the benefit of our heuristic mechanisms. Extending our methods to more complex environments with function approximation, we present state-of-the-art results, showing that MB and QE with Efron's bootstrap can yield accurate and useful confidence intervals on challenging continuous control benchmarks. \section*{Broader Impact} Our work focuses on the practically relevant problem of off-policy evaluation. Interestingly, our work reveals the potential issues with applying a well-known technique -- Efron's bootstrap -- without considering its validity. Our work shows that Efron's bootstrap may often not be valid. Although we propose mechanisms to remedy this, our solutions are not fool-proof. In a practical setting, where many of our assumptions may not hold, one must take special care when applying our method to mitigate risks of failure. 
\begin{ack} Thanks to Jonathan Tompson, Andy Zeng, Branislav Kveton, and others at Google Research for contributing helpful thoughts and discussions. \end{ack} \section{Investigating the Validity of Efron's Bootstrap} We begin by presenting a theoretical result showing the validity of using Efron's bootstrap based on estimates $\avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)$ prescribed by the direct method. \begin{theorem}[Correctness of DM with bootstrapping] \label{theorem:qe} Under Assumptions~\ref{assumption:bounded_rewards},\ref{assumption:bounded_ratios}, the use of Algorithm~\ref{alg:bootstrap} with $F(d^{\Dnset}):= \avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)$ yields confidence intervals $C(d^{\Dnset})$ which are asymptotically correct, in the sense that \begin{equation} \Pr[\rho(\pi)\in C(d^{\Dnset})] = 1 - \alpha - O_p(n^{-1/2}), \end{equation} where $O_p$ is used to denote \emph{order in probability}. Additionally, the one-sided confidence intervals are asymptotically correct at rate $O_p(n^{-1/2})$. These asymptotic rates may be improved by using more sophisticated bootstrapping methods in place of Algorithm~\ref{alg:bootstrap}, such as BCa or ABC~\cite{diciccio1996bootstrap}. \end{theorem} \begin{proof} (Sketch) First, it is clear by the definition of $d^\mathcal{D}$ in~\eqref{eq:def-d} and Assumption~\ref{assumption:bounded_ratios} that $F(d^\mathcal{D})=\avgstep(\pi)$. Thus it is left to show that bootstrap yields correct intervals around $F(d^\mathcal{D})$. Sufficient conditions for correctness of Efron's bias-corrected bootstrap are known, and they are given by smoothness (specifically, Hadamard differentiability\footnote{See the appendix for a definition of Hadamard differentiability.}) of the functional $F$ evaluated in a neighborhood (i.e., a sufficiently small $L_\infty$ ball) around the true distribution $d^\mathcal{D}$~\cite{wasserman2006all,politis2012subsampling,hall2013bootstrap}. 
In the appendix, we show that under the assumption of bounded rewards (Assumption~\ref{assumption:bounded_rewards}) the derivative $F'(d^{\Dtilde})$ for general distribution $d^{\Dtilde}$ satisfies \begin{equation} \label{eq:f-deriv1} ||F'(d^{\Dtilde})||_\infty = O\left(||\visitpi_{\Dtilde}/d^{\Dtilde}||_\infty\right), \end{equation} where $\visitpi_{\Dtilde}$ is the discounted on-policy distribution of $\pi$ under $\mu_0^{\Dtilde},\Reward_{\Dtilde},\Transition_{\Dtilde}$. When $d^{\Dtilde}=d^\mathcal{D}$, we have $||F'(d^\mathcal{D})||_\infty = O\left(||d^\pi/d^\mathcal{D}||_\infty\right)$. In the appendix, we show that $||\visitpi_{\Dtilde}/d^{\Dtilde}||_\infty$ is bounded within a sufficiently small neighborhood of $d^\mathcal{D}$, given sufficient coverage of $d^\mathcal{D}$ (Assumption~\ref{assumption:bounded_ratios}), and this completes the proof. \end{proof} \vspace{-3mm} Although necessary conditions for the validity of Efron's bootstrap are not known in general, Hadamard differentiability is the key property typically used to prove validity. Our derivations make it clear that Assumption~\ref{assumption:bounded_ratios} is necessary to ensure Hadamard differentiability of $F$; otherwise, a small change in $d^\mathcal{D}$ may take $d^\pi$ out of the support of $d^\mathcal{D}$, causing divergence in the derivative~\eqref{eq:f-deriv1}. In contrast, a weaker variant of this assumption, $||d^\pi/d^\mathcal{D}||_\infty=W_{\mathrm{max}}<\infty$, which appears in previous OPE literature~\cite{nachum2019dualdice}, is not sufficiently strong to guarantee differentiability in the neighborhood of $d^\mathcal{D}$. We encapsulate this in the following theorem. \begin{theorem}[Necessity of Assumption~\ref{assumption:bounded_ratios}] \label{theorem:bounded_ratios} Suppose Assumption~\ref{assumption:bounded_rewards} holds and define functional $F(d^{\Dnset}):= \avgstep_{\mathrm{DM}}(\pi|\mathcal{D}_n)$. 
There exists $d^\mathcal{D}$ with uniformly bounded ratios $||d^\pi/d^\mathcal{D}||_\infty=W_{\mathrm{max}}<\infty$ such that $F$ is not Hadamard differentiable within any neighborhood of $d^\mathcal{D}$. \end{theorem} \begin{proof} See the appendix. \end{proof} Theorem~\ref{theorem:bounded_ratios} is somewhat disappointing, as Assumption~\ref{assumption:bounded_ratios} is strong and often not satisfied in practice; in continuous state or action settings, it is almost never satisfied. In addition to the need for Assumption~\ref{assumption:bounded_ratios}, the other major shortcoming of Theorem~\ref{theorem:qe} is that it only guarantees correct intervals \emph{asymptotically}. For any finite $n$, the confidence intervals yielded by Efron's bootstrap will generally exhibit \emph{under-coverage}, and in practice this can lead to overconfident intervals. Indeed, in the extreme case of $n=1$, there will be no variation in the bootstrapped estimates of $\avgstep$, leading to confidence intervals $C(d^{\Dnset})$ that are single points. In the following subsections, we elaborate on our suggested mechanisms for appropriately compensating for these two main theoretical shortcomings of Efron's bootstrap applied to DM. \subsection{Regularizations for Insufficient Coverage} To better understand the need for sufficient coverage, we can look at a simple scenario illustrated in Figure~\ref{fig:trajectories}. If the data distribution includes $s_2$ but does not cover the action $\pi(s_2)$ chosen by the policy, then the estimates ${\Reward}_{\Dnset}(s_2,\pi(s_2)),{\Transition}_{\Dnset}(s_2,\pi(s_2))$ will be set to the priors $\Reward_{\mathrm{prior}}(s_2,\pi(s_2)),\Transition_{\mathrm{prior}}(s_2,\pi(s_2))$. However, if the data distribution includes the state-action pair $(s_2, \pi(s_2))$ with even a tiny probability, the estimates ${\Reward}_{\Dnset}(s_2,\pi(s_2)),{\Transition}_{\Dnset}(s_2,\pi(s_2))$ are changed to their empirical estimates.
In general, this change is not smooth (i.e., not Hadamard differentiable) with respect to the underlying data distribution, and this leads to issues with the validity of Efron's bootstrap applied to $\avgstep_{\mathrm{DM}}$. \begin{wrapfigure}{R}{0.3\textwidth} \begin{minipage}{0.3\textwidth} \begin{center} \vspace{-5mm} \includegraphics[width=0.99\columnwidth]{figs/trajectories.png} \vspace{-5mm} \end{center} \caption{Trajectories of the policy may diverge from trajectories in the data.} \vspace{-15mm} \label{fig:trajectories} \end{minipage} \end{wrapfigure} It is thus clear that to ensure validity of Efron's bootstrap, we require estimates ${\Reward}_{\Dnset},{\Transition}_{\Dnset}$ that are smoother around $d^{\Dnset}(s,a)\approx 0$. For example, smoother empirical reward and transition functions may be found by defining biased reward and transitions in terms of some fixed $\kappa>0$, \begin{align} {\Reward}^\kappa_{\Dnset}(r|s,a) &:= \frac{d^{\Dnset}(s, a, r) + \kappa\cdot\Reward_{\mathrm{prior}}(r|s,a)}{d^{\Dnset}(s, a) + \kappa} \\ {\Transition}^\kappa_{\Dnset}(s'|s,a) &:= \frac{d^{\Dnset}(s, a, s') + \kappa\cdot\Transition_{\mathrm{prior}}(s'|s,a)}{d^{\Dnset}(s, a) + \kappa}. \end{align} These biased functions would yield a \emph{regularized} DM estimate: \begin{equation} \label{eq:biased-dm} \avgstephat^\kappa(\pi|\mathcal{D}_n) := (1-\gamma)\cdot \mathbb{E}\left[\left.\sum_{t=0}^\infty \gamma^t\cdot r_t~\right|~ {\mu}_0^{\Dnset},\pi,{\Reward}^\kappa_{\Dnset},{\Transition}^\kappa_{\Dnset}\right]. \end{equation} This estimator is provably amenable to statistical bootstrapping regardless of data coverage, although at the cost of providing intervals for a \emph{biased} estimate of $\avgstep(\pi)$, as stated by the following theorem. 
\begin{theorem}[Correctness of regularized DM with bootstrapping] \label{theorem:regularized} Under Assumption~\ref{assumption:bounded_rewards}, the use of Algorithm~\ref{alg:bootstrap} with $F(d^{\Dnset}):= \avgstephat^\kappa(\pi|\mathcal{D}_n)$ yields confidence intervals $C(d^{\Dnset})$ which are asymptotically correct, in the sense that \begin{equation} \Pr[\avgstephat^\kappa(\pi|d^\mathcal{D})\in C(d^{\Dnset})] = 1 - \alpha - O_p(n^{-1/2}). \end{equation} As for Theorem~\ref{theorem:qe}, the one-sided intervals converge at a rate $O_p(n^{-1/2})$ and these rates may be improved by using more sophisticated bootstrapping methods. \end{theorem} \begin{proof} See the appendix. \end{proof} For succinctness, we have expressed Theorem~\ref{theorem:regularized} in terms of the specific ${\Reward}^\kappa_{\Dnset},{\Transition}^\kappa_{\Dnset}$ defined above. In general, the guarantees of the theorem hold for any suitably smooth ${\Reward}^\kappa_{\Dnset},{\Transition}^\kappa_{\Dnset}$, i.e., reward and transition functions that are locally differentiable around $d^\mathcal{D}$; see the appendix for details. This more general result is promising for function approximation settings. In such settings, when using model-based evaluation or fitted $Q$-evaluation, it is straightforward to smooth out the estimated reward and transition functions via a number of standard regularizations. For example, in our experiments with neural network function approximators, we utilize standard weight decay, which acts as a regularization towards prior reward and transition functions implicitly defined by the network structure. \subsection{Noisy Rewards} Even with sufficient coverage or appropriate regularization, the computed confidence intervals will generally be over-confident and under-cover the true value, especially in low-data regimes. 
This is due to the fact that for finite $n$, the empirical variance of the functional $F$ over the bootstrapped datasets is in general an underestimate of the true variance. To incorporate additional variance, we propose to augment the dataset $\mathcal{D}_n$ via perturbations applied to observed rewards, \begin{multline} \widetilde{\mathcal{D}}_n \leftarrow \mathcal{D}_n~~\cup~~ \{(s_0, s, a, r + R_{\mathrm{noise}}, s')~|~(s_0, s, a, r, s')\in\mathcal{D}_n\} \\ \cup~~ \{(s_0, s, a, r - R_{\mathrm{noise}}, s')~|~(s_0, s, a, r, s')\in\mathcal{D}_n\}. \end{multline} Note that the variance of the empirical dataset is increased to $\mathrm{Var}_{\widetilde{\mathcal{D}}_n}[r] = \frac{2}{3}R_{\mathrm{noise}}^2 + \mathrm{Var}_{\mathcal{D}_n}[r]$. Given the augmented dataset $\widetilde{\mathcal{D}}_n$, one may perform Algorithm~\ref{alg:bootstrap} as-is, sampling $b$ bootstrapped datasets, each of $n$ elements. This same technique of augmenting a dataset with noisy rewards has been used in the bandit literature as a way to perform better exploration~\cite{kveton2019perturbed,kveton2018garbage}. As in this previous literature, a large enough $R_{\mathrm{noise}} \ge \sqrt{\frac{3}{2}}\cdot(1-\gamma)^{-1}\cdot R_{\mathrm{max}}$ would be sufficient to compensate for the inherent under-coverage in bootstrapping, although in practice a much smaller $R_{\mathrm{noise}}$ can still yield good coverage. With noisy rewards, we are able to compensate for the under-coverage of Theorems~\ref{theorem:qe} and~\ref{theorem:regularized}. However, this generally comes at the cost of over-coverage. In practice, the parameter $R_{\mathrm{noise}}$ provides a way to trade off between safety in small-data regimes and looseness of the confidence intervals. In our experiments, we found that setting $R_{\mathrm{noise}}=0.25\cdot\sqrt{\mathrm{Var}_{\mathcal{D}_n}[r]}$ provides a reasonable trade-off for our considered environments.
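Taken together, the resampling of Algorithm~\ref{alg:bootstrap} and the noisy-reward augmentation are straightforward to combine. The following Python sketch is purely illustrative and not our implementation: \texttt{estimator} stands in for any plug-in functional $F$ (e.g., $\avgstep_{\mathrm{DM}}$), and all helper names are hypothetical.

```python
import numpy as np

def augment_noisy_rewards(data, r_noise):
    """Augment (s0, s, a, r, s') tuples with r + r_noise and r - r_noise copies.
    This raises the empirical reward variance by (2/3) * r_noise**2."""
    plus = [(s0, s, a, r + r_noise, s1) for (s0, s, a, r, s1) in data]
    minus = [(s0, s, a, r - r_noise, s1) for (s0, s, a, r, s1) in data]
    return data + plus + minus

def percentile_bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, seed=0):
    """Efron's percentile bootstrap: resample n tuples with replacement,
    re-apply the plug-in estimator, and take empirical quantiles."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = [estimator([data[i] for i in rng.integers(0, n, size=n)])
             for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

Passing the augmented dataset to the bootstrap routine widens the resulting interval, trading looseness for safety in small-data regimes.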
\section{Proofs} \subsection{Hadamard Differentiability} We provide a definition of Hadamard differentiability, which is a key property for showing validity of Efron's bootstrap. The following is paraphrased from~\cite{wasserman2006all}. \begin{definition} Suppose $F$ is a functional mapping distributions over $\mathcal{P}:= \mathcal{S}\times \mathcal{S}\times\mathcal{A}\times\mathbb{R}\times\mathcal{S}$ (i.e., distributions of tuples $(s_0,s,a,r,s')$) to $\mathbb{R}$. Denote $\mathcal{P}_L$ as the linear space generated by $\mathcal{P}$. The functional $F$ is said to be \textbf{Hadamard differentiable} at $d^{\Dtilde}\in\mathcal{P}$ if there exists a linear functional $L_{\widetilde{\mathcal{D}}}$ on $\mathcal{P}_L$ such that for any $\epsilon_n \to 0$ and $P, P_1,P_2,P_3,\dots \in \mathcal{P}_L$ with $\|P_n - P\|_\infty \to 0$ and $d^{\Dtilde} + \epsilon_n P_n \in \mathcal{P}$, \begin{equation} \lim_{n\to\infty} \left| \frac{F(d^{\Dtilde} +\epsilon_n P_n) - F(d^{\Dtilde})}{\epsilon_n} - L_{\widetilde{\mathcal{D}}}(P) \right| = 0. \end{equation} \end{definition} \subsection{Proof of Theorem~\ref{theorem:qe}} \label{sec:proof1} As in the main text, we use $\mu_0^{\Dtilde},\Reward_{\Dtilde},\Transition_{\Dtilde}$ to denote the initial state, conditional reward, and conditional transition distributions observed in $d^{\Dtilde}$. Furthermore, let $\overline{\Reward}_{\Dtilde} = \mathbb{E}_{\Reward_{\Dtilde}}[r]$. For ease of notation, we will use matrix notation and assume finite state and action spaces (an extension to Hilbert spaces with linear operators is straightforward). The functional $F$ may be expressed as, \begin{equation} \label{eq:def-f} F(d^{\Dtilde}) := (1-\gamma)\cdot \overline{\Reward}_{\Dtilde}^{T} (I - \gamma \Pi \Transition_{\Dtilde})^{-1} \Pi\mu_0^{\Dtilde}, \end{equation} where we use $\Pi$ to denote the matrix mapping distributions over states to distributions over state-actions, with actions sampled according to $\pi$.
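As a concrete sanity check of this matrix expression, the following Python sketch (a toy two-state MDP with randomly generated quantities; purely illustrative and not part of the proof) evaluates $F$ both by solving the linear system and by truncating the underlying power series:

```python
import numpy as np

# Toy tabular MDP; all numerical values here are illustrative.
gamma = 0.9
n_s, n_a = 2, 2
rng = np.random.default_rng(0)

# T[s', (s,a)]: column-stochastic transition matrix from state-actions to states.
T = rng.random((n_s, n_s * n_a))
T /= T.sum(axis=0, keepdims=True)

# pi(a|s) and the induced matrix Pi[(s,a), s] = pi(a|s).
pi = rng.random((n_s, n_a))
pi /= pi.sum(axis=1, keepdims=True)
Pi = np.zeros((n_s * n_a, n_s))
for s in range(n_s):
    for a in range(n_a):
        Pi[s * n_a + a, s] = pi[s, a]

mu0 = np.array([1.0, 0.0])   # initial state distribution
R = rng.random(n_s * n_a)    # mean rewards

# F(d) = (1-gamma) * R^T (I - gamma Pi T)^{-1} Pi mu0, via a linear solve.
A = np.eye(n_s * n_a) - gamma * Pi @ T
F = (1 - gamma) * R @ np.linalg.solve(A, Pi @ mu0)

# Discounted on-policy distribution d^pi; it should be a probability vector.
d_pi = (1 - gamma) * np.linalg.solve(A, Pi @ mu0)

# Cross-check F against the truncated power series sum_t gamma^t R^T (Pi T)^t Pi mu0.
d = Pi @ mu0
F_series = 0.0
for t in range(1000):
    F_series += (1 - gamma) * gamma ** t * R @ d
    d = Pi @ (T @ d)
```

The identity $F = \overline{\Reward}^{T} d^\pi$ also falls out of the same computation, matching the role of $d^\pi$ in the derivative bounds above.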
Note that, assuming $d^\pi(s,a)>0\Rightarrow d^\mathcal{D}(s,a)>0$, the components of this expression for $F$ at $d^{\Dtilde}=d^\mathcal{D}$ yield the $Q^\pi$-values and on-policy distribution $d^\pi$. Specifically, \begin{align} \label{eq:visitpi-matrix} d^\pi &= (1-\gamma)(I - \gamma \Pi \mathcal{T})^{-1} \Pi\mu_0, \\ \label{eq:qpi-matrix} Q^\pi &= \overline{\mathcal{R}}^{T} (I - \gamma \Pi \mathcal{T})^{-1}. \end{align} For general $d^{\Dtilde}$, these expressions will yield $\visitpi_{\Dtilde}$, the on-policy distribution in the empirical MDP, and $Q^\pi_{\Dtilde}$, the $Q^\pi$ values in the empirical MDP, respectively. As mentioned in the proof sketch, the validity of Theorem~\ref{theorem:qe} rests on the Hadamard differentiability of $F(d^{\Dtilde})$ for all $d^{\Dtilde}$ in a neighborhood around $d^\mathcal{D}$. In addition to local Hadamard differentiability, one must also have that the derivative linear functional $L_{\mathcal{D}}$ satisfies \begin{equation} 0 < \mathbb{E}_{(s_0,s,a,r,s')\sim d^\mathcal{D}}\left[L_{\mathcal{D}}(\delta_{(s_0,s,a,r,s')}-d^\mathcal{D})^2\right] < \infty. \end{equation} See Theorems 3.19 and 3.21 in~\cite{wasserman2006all} for more information. In the text below, we will show that $F$ is indeed Hadamard differentiable with derivative satisfying \begin{equation} L_{\mathcal{D}}(\delta_{(s_0,s,a,r,s')}-d^\mathcal{D}) = O(d^\pi(s,a) / d^\mathcal{D}(s,a)). \label{eq:linear-func} \end{equation} The result~\eqref{eq:linear-func} in conjunction with Assumption~\ref{assumption:bounded_ratios} will immediately make it clear that $\mathbb{E}_{(s_0,s,a,r,s')\sim d^\mathcal{D}}\left[L_{\mathcal{D}}(\delta_{(s_0,s,a,r,s')}-d^\mathcal{D})^2\right] < \infty$. Moreover, the linear nature of the functional $F$ with respect to $\Reward_{\Dtilde}$ makes it clear that $0<\mathbb{E}_{(s_0,s,a,r,s')\sim d^\mathcal{D}}\left[L_{\mathcal{D}}(\delta_{(s_0,s,a,r,s')}-d^\mathcal{D})^2\right]$, thus showing the validity of the bootstrap.
We now continue to characterize the linear functional $L_{\mathcal{D}}$. We will first derive, via standard Frechet differentiation, the derivatives of $F(d^{\Dtilde})$ with respect to $\overline{\Reward}_{\Dtilde}$, $\mu_0^{\Dtilde}$, and $\Transition_{\Dtilde}$, for $d^{\Dtilde}$ that satisfy Assumption~\ref{assumption:bounded_rewards}. We will later use these results in conjunction with Assumption~\ref{assumption:bounded_ratios} to show the Hadamard differentiability of $F$ with respect to $d^{\Dtilde}$ in a ball around $d^\mathcal{D}$. \begin{itemize} \item $\overline{\Reward}_{\Dtilde}$: It is clear from~\eqref{eq:visitpi-matrix} that $\partial F / \partial \overline{\Reward}_{\Dtilde} = \visitpi_{\Dtilde}$. \item $\mu_0^{\Dtilde}$: It is clear from~\eqref{eq:qpi-matrix} that $\partial F / \partial \mu_0^{\Dtilde} = (1-\gamma)Q^\pi_{\Dtilde} \Pi$. \item $\Transition_{\Dtilde}$: This derivation is not as trivial as the previous two. Still, it may be approached in a straightforward manner by utilizing the policy gradient theorem~\cite{sutton2000policy}. Although the policy gradient theorem is typically used to derive gradients of $F$ with respect to $\Pi$, we may apply it here, interpreting $\Transition_{\Dtilde}$ as the stationary ``policy'' whose gradient we wish to calculate (``transitions'' are now between state-action pairs and the ``actions'' are choices of next states). Specifically, we may re-write~\eqref{eq:def-f} as \begin{equation} K + (1-\gamma)\cdot(\overline{\Reward}_{\Dtilde}^T\Pi) (I - \gamma \Transition_{\Dtilde}\Pi)^{-1} \Transition_{\Dtilde} (\Pi\mu_0^{\Dtilde}), \end{equation} where $K$ is constant with respect to $\Transition_{\Dtilde}$. This way, we deduce that $\frac{\partial F}{\partial \Transition_{\Dtilde}(s'|s,a)} = \visitpi_{\Dtilde}(s,a) \cdot \mathbb{E}_{a'\sim\pi(s')}[Q^\pi_{\Dtilde}(s',a')]$ for all $s,a,s'$. 
\end{itemize} With these three partial derivatives calculated, we may continue to show differentiability of $F$ in a neighborhood around $d^\mathcal{D}$. Without loss of generality, we assume that $d^\pi$ has full support; if not, we may simply ignore all tuples outside of the support, since they do not affect $\avgstep(\pi)$ or $\avgstep_{\mathrm{DM}}(\pi)$ (note that by Assumption~\ref{assumption:bounded_ratios} this means $d^\mathcal{D}$ also has full support). Now we continue to characterize the derivative of $F$. Denote the derivative of $F$ by $F'$, where $F'(d^{\Dtilde})$ is defined to be the linear functional satisfying, \begin{equation} \label{eq:directional-deriv} \hspace{-2mm} \left\langle F'(d^{\Dtilde}), \delta_{(s_0^*,s^*,a^*,r^*,s^{\prime*})} - d^{\Dtilde} \right\rangle = \lim_{t\to0} \frac{1}{t}\left(F((1-t)\cdot d^{\Dtilde} + t\cdot\delta_{(s_0^*,s^*,a^*,r^*,s^{\prime*})}) - F(d^{\Dtilde})\right), \hspace{-2mm} \end{equation} for all tuples $(s_0^*,s^*,a^*,r^*,s^{\prime*})$. We analyze the behavior of these directional limits. We again split our analysis into three parts: \begin{itemize} \item Influence of $r^*$. The influence of $r^*$ is in the empirical average reward function at $s^*,a^*$: $\overline{\Reward}_{\Dtilde}(s^*,a^*)$. At a change of $t$, this value is updated to \begin{equation} \label{eq:rr-deriv} \frac{(1-t)d^{\Dtilde}(s^*,a^*) \overline{\Reward}_{\Dtilde}(s^*,a^*)+t r^*}{(1-t)d^{\Dtilde}(s^*,a^*) + t}. \end{equation} The derivative of this expression at $t=0$ is $\frac{-\overline{\Reward}_{\Dtilde}(s^*,a^*) + r^*}{d^{\Dtilde}(s^*,a^*)}$. Combined with the partial derivative computed earlier, we find the total influence on $F$ is $\frac{\visitpi_{\Dtilde}(s^*,a^*)}{d^{\Dtilde}(s^*,a^*)} (-\overline{\Reward}_{\Dtilde}(s^*,a^*) + r^*)\cdot t$ as $t\to0$. \item Influence of $s_0^*$. The influence of $s_0^*$ is in the empirical initial state distribution $\mu_0^{\Dtilde}$, which is updated to $(1-t)\mu_0^{\Dtilde} + t\delta_{s_0^*}$.
To deduce the influence on $F$, we combine with the partial derivative computed earlier, and find the change in $F$ to be $\left(-\avgstep_{\mathrm{DM}}(\pi|d^{\Dtilde}) + (1-\gamma)\mathbb{E}_{a_0\sim\pi(s_0^*)}[Q^\pi_{\Dtilde}(s_0^*,a_0)]\right)\cdot t$. \item Influence of $s^{\prime*}$. As for the reward, the influence here is in the empirical transition probabilities $\Transition_{\Dtilde}(s' | s^*,a^*)$, which is updated to \begin{equation} \label{eq:tt-deriv} \frac{(1-t)d^{\Dtilde}(s^*,a^*)\Transition_{\Dtilde}(s'|s^*,a^*) + t\delta_{s^{\prime*}}(s')}{(1-t)d^{\Dtilde}(s^*,a^*) + t}. \end{equation} The derivative of this expression at $t=0$ is $\frac{-\Transition_{\Dtilde}(s'|s^*,a^*) + \delta_{s^{\prime*}}(s')}{d^{\Dtilde}(s^*,a^*)}$. Combining this with the known partials of $F$ with respect to $\Transition_{\Dtilde}$, we find that the total influence on $F$ is $\frac{\visitpi_{\Dtilde}(s^*,a^*)}{d^{\Dtilde}(s^*,a^*)} \left(-\mathbb{E}_{s'\sim\Transition_{\Dtilde}(s^*,a^*),a'\sim\pi(s')}[Q^\pi_{\Dtilde}(s',a')] + \mathbb{E}_{a'\sim\pi(s^{\prime*})}[Q^\pi_{\Dtilde}(s^{\prime*},a')]\right)\cdot t$ as $t\to0$. \end{itemize} We may see that each of these influences on $F$ is linear in $t$. By Assumption~\ref{assumption:bounded_rewards}, $r^*$ is uniformly bounded, as are $\overline{\Reward}_{\Dtilde}$ and $Q^\pi_{\Dtilde}$. Thus, in conjunction with the Riesz representation theorem, we deduce that the derivative $F'$ satisfies \begin{equation} \label{eq:f-deriv} ||F'(d^{\Dtilde})||_\infty = O\left( \left|\left|\frac{\visitpi_{\Dtilde}}{d^{\Dtilde}}\right|\right|_\infty\right). \end{equation} Now consider an arbitrary distribution $d^{\Etilde}$ and the directional limit \begin{equation} \lim_{t\to0}\frac{1}{t}\left( F((1-t)\cdot d^{\Dtilde} + t\cdot d^{\Etilde}) - F(d^{\Dtilde})\right).
\end{equation} Analogous to the derivations above, we may find, \begin{itemize} \item The empirical average reward $\overline{\Reward}_{\Dtilde}(s, a)$ at a change of $t$ is updated to \begin{equation} \label{eq:rr-deriv2} \frac{(1-t)d^{\Dtilde}(s,a) \overline{\Reward}_{\Dtilde}(s,a)+td^{\Etilde}(s,a)\overline{\Reward}_{\Etilde}(s,a)}{(1-t)d^{\Dtilde}(s,a) + td^{\Etilde}(s,a)}. \end{equation} \item The empirical initial state distribution at a change of $t$ is updated to \begin{equation} \label{eq:ii-deriv2} (1-t)\mu_0^{\Dtilde} + t\mu_0^{\Etilde}. \end{equation} \item The empirical transition probabilities $\Transition_{\Dtilde}(s'|s,a)$ at a change of $t$ are updated to \begin{equation} \label{eq:tt-deriv2} \frac{(1-t)d^{\Dtilde}(s,a)\Transition_{\Dtilde}(s'|s,a) + td^{\Etilde}(s,a)\Transition_{\Etilde}(s'|s,a)}{(1-t)d^{\Dtilde}(s,a) + td^{\Etilde}(s,a)}. \end{equation} \end{itemize} By considering the limits of~\eqref{eq:rr-deriv2},~\eqref{eq:ii-deriv2},~\eqref{eq:tt-deriv2} as $t\to0$, it is clear that \begin{equation} \label{eq:directional-deriv2} \left\langle F'(d^{\Dtilde}), d^{\Etilde} - d^{\Dtilde} \right\rangle = \lim_{t\to0} \frac{1}{t}\left(F((1-t)\cdot d^{\Dtilde} + t\cdot d^{\Etilde}) - F(d^{\Dtilde})\right). \end{equation} To show Hadamard differentiability, we invoke Assumption~\ref{assumption:bounded_ratios}, which implies that there exists a sufficiently small $\zeta=\epsilon/2$ such that the $L_\infty$ ball centered at $d^\mathcal{D}$ with radius $\zeta$ has uniformly bounded $||d^\pi/d^{\Dtilde}||_\infty$. Since the support of $\visitpi_{\Dtilde}$ is contained within the support of $d^\pi$, this means that the same ball has uniformly bounded $||\visitpi_{\Dtilde}/d^{\Dtilde}||_\infty$. Moreover, it is clear that within this ball $d^{\Dtilde}>\epsilon/2$ uniformly, and so the directional derivatives of~\eqref{eq:rr-deriv2},~\eqref{eq:ii-deriv2}, and~\eqref{eq:tt-deriv2} converge uniformly with $t\cdot\|d^{\Etilde}\|_\infty$.
Thus, there exists a sufficiently small ball around $d^\mathcal{D}$ within which $F$ is Hadamard differentiable. This completes our proof. \subsection{Proof of Theorem~\ref{theorem:bounded_ratios}} First, a brief sketch: if Assumption~\ref{assumption:bounded_ratios} does not hold, then within any $L_\infty$ ball around $d^\mathcal{D}$ one may find a distribution that fails to cover part of the support of $d^\pi$, and this causes discontinuities in $F$. Now more concretely: consider an MDP with state space $\{s_{\mathrm{start}}, s_{\mathrm{term}}, s_1,s_2,\dots\}$. The MDP's initial state distribution is $\mu_0 := \delta_{s_{\mathrm{start}}}$. The MDP has a single action $a$ and the transition function is defined as, \begin{align} \mathcal{T}(s_n|s_{\mathrm{start}},a) & = \frac{6}{\pi^2 n^2}, \\ \mathcal{T}(s_{\mathrm{start}}|s_{\mathrm{start}},a) & = 0, \\ \mathcal{T}(s_{\mathrm{term}}|s_{\mathrm{start}},a) & = 0, \\ \mathcal{T}(s_n, a) &= \delta_{s_{\mathrm{term}}}, \\ \mathcal{T}(s_{\mathrm{term}}, a) &= \delta_{s_{\mathrm{term}}}. \end{align} The reward function is defined as \begin{align} \mathcal{R}(s_{\mathrm{start}},a) &= \delta_0, \\ \mathcal{R}(s_n,a) &= \delta_0, \\ \mathcal{R}(s_{\mathrm{term}},a) &= \delta_1. \end{align} Define prior reward and transition functions \begin{align} \Transition_{\mathrm{prior}}(s, a) &:= \delta_{s_{\mathrm{term}}}, \\ \Reward_{\mathrm{prior}}(s, a) &:= \delta_1. \end{align} Let $\pi$ be the policy on this MDP (there exists only one) and consider $\gamma=0.5$. Thus we have \begin{equation} \avgstep(\pi) = \frac{1}{4}, \end{equation} \begin{equation} d^\pi(s_n, a) = \frac{3}{2\pi^2 n^2}. \end{equation} Let $d^\mathcal{D}$ be defined as $d^\mathcal{D} := d^\pi$. It is clear that $d^\mathcal{D}$ satisfies $\|d^\pi / d^\mathcal{D}\|_\infty = 1 < \infty$ but that Assumption~\ref{assumption:bounded_ratios} does not hold. Now consider any $L_\infty$ ball around $d^\mathcal{D}$.
Suppose this ball has radius $\zeta>0$ and let $N$ be such that $\frac{3}{2\pi^2 N^2} < \zeta$. We may define the distribution \begin{equation} d^{\Dtilde} := d^\mathcal{D} - \frac{3}{2\pi^2 N^2} \delta_{(s_{\mathrm{start}}, s_N, a, 0, s_{\mathrm{term}})} + \frac{3}{2\pi^2 N^2} \delta_{(s_{\mathrm{start}}, s_{\mathrm{term}}, a, 1, s_{\mathrm{term}})}. \end{equation} It is clear that $d^{\Dtilde}$ is within the $L_\infty$ ball and $d^{\Dtilde}(s_N, a) = 0$. Thus, $\Reward_{\Dtilde}(s_N,a)=\Reward_{\mathrm{prior}}(s_N,a)$ and so \begin{equation} F(d^{\Dtilde}) = \frac{1}{4} + \frac{3}{2\pi^2 N^2}. \end{equation} Now we define \begin{equation} P := \delta_{(s_{\mathrm{start}}, s_1, a, 0, s_{\mathrm{term}})} - d^{\Dtilde}, \end{equation} \begin{equation} \epsilon_n := \frac{1}{n}. \end{equation} It is clear that a change $d^{\Dtilde}\to d^{\Dtilde} + \epsilon_n\cdot P$ would not change the empirical reward or transition functions, and so we have, \begin{equation} \lim_{n\to\infty} \frac{1}{\epsilon_n}(F(d^{\Dtilde} + \epsilon_n\cdot P) - F(d^{\Dtilde})) = 0. \end{equation} We may also consider a sequence $\{P_n\}_{n=1}^\infty$ defined as \begin{equation} P_n:= \frac{1}{n}\cdot\delta_{(s_{\mathrm{start}}, s_N, a, 0, s_{\mathrm{term}})} + \left(1-\frac{1}{n}\right)\delta_{(s_{\mathrm{start}}, s_1, a, 0, s_{\mathrm{term}})} - d^{\Dtilde}. \end{equation} Clearly $\lim_{n\to\infty} P_n = P$. However, $P_n$ changes the empirical reward distribution at $(s_N, a)$ (the observed reward $0$ replaces the prior reward $1$), and this causes \begin{equation} \lim_{n\to\infty} \frac{1}{\epsilon_n} (F(d^{\Dtilde} + \epsilon_n\cdot P_n) - F(d^{\Dtilde})) = -\lim_{n\to\infty} \frac{1}{\epsilon_n}\cdot\frac{3}{2\pi^2 N^2} = -\infty. \end{equation} Thus, $F$ is not Hadamard differentiable at $d^{\Dtilde}$.
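The constants in this construction are easy to verify numerically; the following standalone Python sketch (variable names are ours, not from the paper) checks the value $\avgstep(\pi)=1/4$, the weights $d^\pi(s_n,a)$, and the size of the jump in $F$:

```python
import numpy as np

gamma = 0.5

# Reward along the only trajectory: 0 at t=0 (s_start), 0 at t=1 (some s_n),
# and 1 at every t >= 2 (absorbing s_term), regardless of which s_n is visited.
rho = (1 - gamma) * sum(gamma ** t for t in range(2, 200))   # ~ gamma**2 = 1/4

# Discounted on-policy weight of (s_n, a): reached at t=1 w.p. 6/(pi^2 n^2).
def d_pi(n):
    return (1 - gamma) * gamma * 6 / (np.pi ** 2 * n ** 2)

# Size of the jump in F when (s_N, a) leaves the support of the data and the
# prior reward delta_1 replaces the true reward 0 there:
N = 10
jump = d_pi(N) * 1.0
```

The jump $3/(2\pi^2 N^2)$ is fixed while $\epsilon_n\to 0$, which is exactly what breaks Hadamard differentiability above.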
\subsection{Proof of Theorem~\ref{theorem:regularized}} We prove a more useful generalization of this theorem, stated below: \paragraph{Generalized Theorem~\ref{theorem:regularized}} Suppose ${\Reward}^\kappa_{\Dtilde},{\Transition}^\kappa_{\Dtilde}$ are reward and transition probability functions defined with respect to general distributions $d^{\Dtilde}$ and that these functions are differentiable with respect to $d^{\Dtilde}$ in a neighborhood around $d^\mathcal{D}$ with uniformly bounded derivatives. Under Assumption~\ref{assumption:bounded_rewards}, the use of Algorithm~\ref{alg:bootstrap} with $F(d^{\Dnset}):= \avgstephat^\kappa(\pi|\mathcal{D}_n)$ yields confidence intervals $C(d^{\Dnset})$ which are asymptotically correct, in the sense that \begin{equation} \Pr[\avgstephat^\kappa(\pi|d^\mathcal{D})\in C(d^{\Dnset})] = 1 - \alpha - O_p(n^{-1/2}). \end{equation} As for Theorem~\ref{theorem:qe}, the one-sided intervals converge at a rate $O_p(n^{-1/2})$, and these rates may be improved by using more sophisticated bootstrapping methods. \paragraph{Proof} The proof is straightforward given the derivations in Section~\ref{sec:proof1}. Specifically, analogous to Section~\ref{sec:proof1}, one may readily show that \begin{align} \partial F / \partial \overline{\mathcal{R}}_{\widetilde{\mathcal{D}}}^\kappa &= \visitpi_{\Dtilde} \\ \frac{\partial F}{\partial \Transition_{\Dtilde}^\kappa(s'|s,a)} &= \visitpi_{\Dtilde}(s,a) \cdot \mathbb{E}_{a'\sim\pi(s')}[Q^\pi_{\Dtilde}(s',a')]. \end{align} Using the chain rule with the assumed differentiability of ${\Reward}^\kappa_{\Dtilde},{\Transition}^\kappa_{\Dtilde}$ then immediately shows that $F'$ is well-defined and thus $F$ is appropriately differentiable around $d^\mathcal{D}$. \newpage \subsection{Additional Experiments} In this section, we provide additional experiments for model-based policy evaluation.
In particular, we demonstrate that the scale of the noise has a similar effect for MB policy evaluation as for Fitted Q-Evaluation (see Figure \ref{app:mujoco}). \begin{figure}[h] \begin{center} \includegraphics[scale=0.375]{figs/mujoco/appendix/DM-Reacher-v2-mb.pdf} \includegraphics[scale=0.375]{figs/mujoco/appendix/DM-HalfCheetah-v2-mb.pdf} \includegraphics[scale=0.375]{figs/mujoco/appendix/DM-Hopper-v2-mb.pdf} \end{center} \caption{Additional results for MB. We plot the confidence intervals for different values of the noise scale.} \label{app:mujoco} \end{figure} \subsection{Experimental Details} For ease of reproducibility, we provide details of our experimental setup. For all methods we normalize the states and rewards to have a mean of $0$ and a standard deviation of $1$. We normalize the terminating rewards accordingly. For all neural networks we use orthogonal initialization. \paragraph{Fitted Q-Evaluation} We use a 2-layer MLP with 256 hidden units and perform standard TD-$0$ policy evaluation. In order to compute the target value for FQE, we use target networks that are updated using Polyak averaging with $\tau=0.005$ as in \cite{lillicrap2015continuous}. For the results, we plot predictions from the target network. \paragraph{Model-based policy evaluation} We perform model-based policy evaluation as described in \cite{hanna2016high}. We found that to make the algorithm stable in the low-data regime, it is crucial to apply L2 regularization and to clip states and rewards generated by the models to the limits observed in the training data. The forward model predicts the offset from the current state, $f_\theta(s, a) \mapsto s' - s$, and is trained by minimizing the mean squared error $\cfrac{1}{N}\sum_{i=1}^N(f_\theta(s_i, a_i) + (s_i - s'_i))^2$, while for rewards we train a model that regresses rewards directly, minimizing $\cfrac{1}{N}\sum_{i=1}^N(g_\theta(s_i, a_i) - r_i)^2$. We also train a model that predicts the termination condition via binary classification.
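The models in our experiments are neural networks trained as described above; purely as an illustration of the offset-prediction objective, the following framework-free sketch fits a linear forward model on synthetic transitions (all shapes and hyperparameters are illustrative, not our experimental settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic transitions: s' = s + offset(s, a) + small noise.
n, ds, da = 512, 3, 2
S = rng.normal(size=(n, ds))
A = rng.normal(size=(n, da))
W_true = rng.normal(size=(ds + da, ds))
X = np.hstack([S, A])
S_next = S + X @ W_true + 0.01 * rng.normal(size=(n, ds))

# Linear forward model f(s, a) = [s, a] W, trained on the offset s' - s
# by gradient descent on the mean-squared-error objective from the text.
W = np.zeros((ds + da, ds))
lr = 0.05
for _ in range(500):
    pred = X @ W                              # predicted offset
    grad = X.T @ (pred - (S_next - S)) / n    # gradient of the squared error
    W -= lr * grad

mse = np.mean((X @ W - (S_next - S)) ** 2)
```

Predicting the offset rather than the next state directly keeps the regression target small and centered, which is the usual motivation for this parameterization.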
\section{Related Work} Our paper focuses on producing confidence bounds for off-policy evaluation and therefore follows a long line of work on \emph{high-confidence policy evaluation} (HCOPE)~\cite{Thomas15HCPE}. Many of the existing methods for HCOPE focus on importance sampling (IS) based estimators, in which the rewards of a trajectory are re-weighted according to an inverse propensity ratio to yield an unbiased estimate of $\avgstep(\pi)$~\cite{precup2000eligibility}. Given a dataset with several trajectories, one may derive several unbiased estimates and then use concentration inequalities to derive high-confidence lower and upper bounds on the true average~\cite{Thomas15HCPE}. Since these concentration inequalities typically require unbiased estimates, they are not applicable to the direct method. % In terms of statistical bootstrapping, there have been several instances of its use for off-policy evaluation. Specifically,~\cite{Thomas15HCPI} combined statistical bootstrapping with IS to derive OPE confidence intervals. Unlike for DM, the validity of Efron's bootstrap with IS is straightforward, since the functional $F$ in this case is the standard mean. We are aware of one previous instance in which statistical bootstrapping was used for high-confidence policy evaluation with DM; specifically,~\cite{hanna2017bootstrapping} proposes to use Efron's bootstrap in conjunction with model-based learning, similar to the present work. However, the validity of using Efron's bootstrap is not addressed in this previous work. The theoretical investigation we presented is a key contribution of our paper. Notably, we found that the use of Efron's bootstrap directly is misguided without the use of strong assumptions, or alternatively, as we suggest, the use of mechanisms like regularization and noisy rewards. Furthermore, our experimental work presents strong results on continuous control benchmarks, while previous work mostly focuses on tabular domains. 
Outside of the narrow scope of HCOPE, the ideas behind Efron's bootstrap have inspired a number of existing RL algorithms. Specifically, statistical bootstrapping has been proposed as a mechanism for exploration; e.g., bootstrapped DQN~\cite{osband2016deep,osband2017deep}. However, in practice, the type of bootstrapping performed in these algorithms is far from that prescribed by Efron's bootstrap. Usually, an ensemble of models is learned over the whole dataset, without any re-sampling or bias correction, and thus the theory behind bootstrap does not readily apply. Although this simple paradigm has achieved impressive results on hard exploration environments~\cite{nachum2019does}, in our initial experiments for off-policy evaluation we found the naive ensembling approach to yield poor confidence intervals. In the bandits literature, ideas from statistical bootstrapping have also been investigated as an exploration mechanism~\cite{kveton2018garbage,hao2019bootstrapping}. While we have focused on policy evaluation, extending the insights and derivations of the present paper to propose better algorithms for exploratory policy learning (or, conversely, safe policy learning) is an interesting avenue for future work.
\section{Introduction} Thanks to the advancement of distributed computing techniques, numerous large-scale systems have been developed to provide diverse services benefiting people in daily life. The architecture of these distributed systems is complex, consisting of a set of heterogeneous subsystems. In particular, with the advancement of distributed ledger technology, nested architectures have become ubiquitous in decentralized systems \cite{androulaki_hyperledger_2018} \cite{ding_dagbase_2020}. Moreover, these systems collaborate with each other through complex protocols \cite{ding_derepo_2020} \cite{ding_bloccess_2020} based on peer communications and cross-layer communications to satisfy their requirements. However, for such complex distributed systems, ensuring correctness and desired system properties is a challenge. Reasoning about distributed systems in a sound logic plays an essential role in proving correctness and properties. As an extension of Hoare logic, separation logic (SL) introduces the separating conjunction to provide modular reasoning about systems. SL was established in \cite{ohearn_logic_1999} \cite{ishtiaq_bi_2001} and further developed by John C. Reynolds \cite{reynolds_separation_2002}. The intention of SL is to reason about resources in general and to verify the correctness of memory usage, specifically of random access memory (RAM), by merging the logical model and the engineering model, which is of high value in program verification. To make SL more expressive, concurrent separation logic (CSL) was advanced by Peter W. O’Hearn \cite{ohearn_resources_2007} to reason about concurrent programs, which makes it possible to formalize thread-level or process-level parallelism. CSL has been mechanized by recent research \cite{jung_iris_2015} \cite{krebbers_essence_2017}, \cite{bizjak_iron_2019}, demonstrating its effectiveness and powerful expressiveness in the formalization and verification of parallel and distributed systems.
Nevertheless, the standard CSL does not support modularity well, which is reflected in three aspects. \begin{itemize} \item The standard CSL only has spatial modularity, which makes it difficult to tackle temporal problems. \item The standard CSL focuses on low-level formalization such as memory management and lacks modular components such as communication. \item The standard CSL provides limited support for modular nested formalization, which is needed to specify and verify nested systems consisting of multiple parallel layers. \end{itemize} In this paper, we focus on a methodology for formalizing systems at different abstraction levels with great modularity. We propose an extended concurrent separation logic (ECSL) that enhances modularity with the support of a temporal extension, a communication extension, an environment extension, and a nest extension. Our principles for extending CSL with modularity are twofold. On the one hand, we need to ensure unitarity, that is, to unify the specification and verification of systems at different abstraction levels. On the other hand, we need compatibility, so that ECSL permits the interpretation of CSL and its typical variants. Our logic ECSL makes the following main contributions: \begin{enumerate} \item ECSL has spatiotemporal modularity that facilitates both spatial and temporal reasoning with the support of the temporal extension. Furthermore, it can embed a temporal logic to formulate temporal properties. \item ECSL is capable of formalizing systems at different abstraction levels, especially complex systems with a nested architecture and a set of communication protocols, with the support of the communication, environment, and nest extensions. In particular, the environment extension enables specifications to perceive environment factors. \item ECSL follows the unitarity and compatibility principles.
It can be implemented in a specification language to formalize systems developed in an expressive programming language. \end{enumerate} \section{Related Work} Since the establishment of CSL, modular reasoning in CSL has been a highlight for simplifying the formalization of concurrent programs. O’Hearn addressed the importance of local reasoning in his early work \cite{ohearn_resources_2007}. Parallel mergesort was formalized in CSL with the \textit{Parallel Composition Rule} to present the elegance of independent reasoning. Beyond the restrictive reasoning about disjoint concurrency, the author introduced the concept of resource invariants to enable CSL to formalize inter-process interactions. A pointer-transferring buffer example was used to prove the effectiveness. Furthermore, Smallfoot \cite{berdine_smallfoot_2005} demonstrated the feasibility of mechanizing modular reasoning about concurrent programs, with several detailed examples mechanized in Smallfoot. Since then, CSL has been applied to verify a wide range of applications \cite{chen_using_2015}. Many variants of CSL have been proposed to enhance its capability with support for modular reasoning. In \cite{bell_concurrent_2010}, a CSL supporting shared channel endpoints was proposed for pipelined parallelization. The authors made use of modular reasoning to formalize asynchronous channels. Although it extended the standard CSL with communication support, it failed to solve the temporal modularity issue and the high-level formalization issue. Temporal modularity was tackled in \cite{sergey_programming_2017} by proposing DISEL, a framework for the compositional verification of distributed systems with their clients. This work extends CSL with the capability of formalizing distributed protocols. However, the formalization of systems consisting of multiple parallel layers remains a challenge.
Recently, the Iris project developed a higher-order CSL \cite{jung_iris_2015,jung_higher-order_2016,krebbers_essence_2017}, which greatly extended CSL and enhanced modularity in the aspects of both temporal formalization and high-level formalization. In particular, Actris \cite{hinrichsen_actris_2019} and Aneris \cite{krogh-jespersen_aneris_2020} were proposed for reasoning about message passing and node-local resources in distributed systems. However, Aneris still struggles with the formalization of nested parallel architectures. \section{Concurrent Separation Logic} In this section, we introduce some critical concepts of CSL as preliminaries for our logic. As an extension of SL, CSL acts as a concurrent program logic for proving the correctness properties of concurrent programs. It still uses the Hoare triple style as the form for proving specifications: $\{ P \} ~\alpha~ \{ P' \}$, where $P$ and $P'$ denote the pre-condition and post-condition respectively, while $\alpha$ denotes the action that changes the state of the program. We consider the general CSL introduced in \cite{vafeiadis_concurrent_2011}. A program state is defined as a tuple $\langle S,H \rangle$, where $S$ denotes the stack and $H$ denotes the heap. The specification language is structured in Definition~\ref{def:spec_lang_csl}. \begin{definition}[Syntax of the Specification Language of CSL] \label{def:spec_lang_csl} Let $\bar{E}$ and $\bar{B}$ denote arithmetic and Boolean expressions respectively. The structure of the assertions $\check{P}$ and $\check{P}'$ is defined as follows: $\check{P}, \check{P}' ::= \textbf{emp} ~|~ \bar{B} ~|~ \bar{E}_1 \mapsto \bar{E}_2 ~|~ \check{P} \land \check{P}' ~|~ \check{P} \lor \check{P}' ~|~ \neg \check{P} ~|~ \check{P} \implies \check{P}' ~|~ \check{P} * \check{P}' ~|~ \check{P} \mathrel{-\mkern-6mu*} \check{P}'$ \end{definition} The separating conjunction $*$ and the separating implication $\mathrel{-\mkern-6mu*}$ are two critical operators with special semantics.
We define a modeling relation $(s, h) \models \check{P}$, with $s \in S, h \in H$, meaning that the program state $(s, h)$ satisfies the assertion $\check{P}$. \begin{definition}[Semantics of Assertions] \label{def:sem_CSL} The semantics of assertions in CSL is given as follows: $(s,h) \models \textbf{emp} \iff \textit{Dom}(h) = \emptyset$ $(s,h) \models \bar{E}_1 \mapsto \bar{E}_2 \iff \textit{Dom}(h) = \{ \llbracket \bar{E}_1 \rrbracket_s \} \land h(\llbracket \bar{E}_1 \rrbracket_s) = \llbracket \bar{E}_2 \rrbracket_s $ $(s,h) \models \check{P} * \check{P}' \iff \exists h_1,h_2: h=h_1 \uplus h_2 \land (s,h_1) \models \check{P} \land (s,h_2) \models \check{P}'$ $(s,h) \models \check{P} \mathrel{-\mkern-6mu*} \check{P}' \iff \forall h_1: (\widetilde{h \uplus h_1}) \land (s,h_1) \models \check{P} \implies (s,h \uplus h_1) \models \check{P}'$ $\llbracket \bar{E} \rrbracket_s \triangleq s(\bar{E})$ $\widetilde{h} \triangleq h \text{ is defined}$ \end{definition} In CSL, an important proof rule is the \textit{Parallel Composition Rule}, given as follows: \begin{align*} \infer{\{ \circledast_{i=0}^n P_i \} ~\alpha_0 \parallel ... \parallel \alpha_n ~ \{ \circledast_{i=0}^n P_i' \} } {\{ P_0 \}~\alpha_0~\{ P_0' \}~...~\{ P_n \}~\alpha_n~\{ P_n' \} } \\ \text{(Parallel Composition Rule)} \end{align*} Here, $\circledast_{i=0}^n$ denotes the consecutive separating conjunction from index $0$ to $n$. This rule is the key to formalizing disjoint concurrency with the support of completely local reasoning about the processes in a parallel program. Furthermore, CSL gives the \textit{Critical Region Rule} to reason about inter-process interaction. \begin{align*} \infer{\{ P \} \text{ with } r \text{ when } B \text{ do } \alpha~\{ P' \} } {\{ (P * \textit{RI}_r) \land B \}~\alpha~\{ P' * \textit{RI}_r \} } \\ \text{(Critical Region Rule)} \end{align*} Here, $\textit{RI}_r$ denotes the resource invariant and $B$ is the guard.
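The heap-splitting semantics of the separating conjunction above can be made concrete with a small executable sketch (ours, not part of the paper): heaps are modeled as finite dictionaries, stores as variable-to-value maps, and $\check{P} * \check{P}'$ is checked by brute force over all disjoint splits of the heap.

```python
# Illustrative sketch (ours, not from the paper): heaps as finite dicts,
# stores as variable -> value maps, brute-force checking of Definition 2.
from itertools import chain, combinations

def models_emp(heap):
    # (s,h) |= emp  iff  Dom(h) is empty
    return len(heap) == 0

def models_points_to(store, heap, e1, e2):
    # (s,h) |= E1 |-> E2  iff  Dom(h) = {[[E1]]_s} and h([[E1]]_s) = [[E2]]_s
    loc, val = store[e1], store[e2]
    return set(heap) == {loc} and heap[loc] == val

def models_sep_conj(store, heap, p, q):
    # (s,h) |= P * Q  iff  h = h1 (+) h2 for disjoint h1, h2 with
    # (s,h1) |= P and (s,h2) |= Q; enumerate all splits of the heap.
    locs = list(heap)
    for part in chain.from_iterable(combinations(locs, k)
                                    for k in range(len(locs) + 1)):
        h1 = {l: heap[l] for l in part}
        h2 = {l: heap[l] for l in locs if l not in part}
        if p(store, h1) and q(store, h2):
            return True
    return False
```

For example, with store $s = \{x \mapsto 1, a \mapsto 10, y \mapsto 2, b \mapsto 20\}$ and heap $h = \{1 \mapsto 10, 2 \mapsto 20\}$, the assertion $x \mapsto a * y \mapsto b$ holds, while $x \mapsto a * x \mapsto a$ does not, since the single cell at location $1$ cannot be split.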
The resource $r$ provides mutual exclusion for different interactions with critical regions in a program. We can see that CSL supports modular reasoning by introducing the separation operators. However, the general CSL focuses only on spatial formalization with low-level modular reasoning about memory. \section{Extended Concurrent Separation Logic} \subsection{Program Model} To illustrate our logic, we first define a program model in Definition~\ref{def:program}. \begin{definition} \label{def:program} A program over a set $V$ of typed variables is defined as $\mathfrak{P} \triangleq (L, A, \mathcal{E}, \hookrightarrow, L_0, g_0)$, where $L$ is a set of code locations and $A$ is a set of actions. $\mathcal{E}$ denotes the effect function $A \times \llbracket V \rrbracket \mapsto \llbracket V \rrbracket$. The notation ${\hookrightarrow} \subseteq L \times \| V \| \times A \times L$ represents the conditional transition relation. $L_0 \subseteq L$ and $g_0 \in \| V \|$ denote a set of initial locations and the initial condition respectively. $\llbracket V \rrbracket$ denotes the set of variable evaluations, which includes memory locations $\mathcal{L}$. $\| V \|$ denotes the set of Boolean conditions over $V$. \end{definition} For convenience, we use the notation $l \xhookrightarrow{g:\alpha} l'$ as shorthand for $(l,g,\alpha,l') \in {\hookrightarrow}$, where $l \in L$ and $\alpha \in A$, meaning that the program $\mathfrak{P}$ goes from location $l$ to $l'$ when the current variable evaluation satisfies $\eta \models g$. Therefore, we can specify $l \xhookrightarrow{g:\alpha} l'$ in CSL as $\{ g \} \alpha \{ g' \}$ where $\mathcal{E}(\alpha,\eta) \models g'$. We call a program consisting of a set of programs a system, denoted $\mathfrak{W}$. \subsection{Temporal Extension} We introduce temporal representation and reasoning to extend CSL into two-dimensional reasoning, with temporal memory as a composition of program states.
To illustrate the temporal memory, we first define \textit{action occurrence} in Definition~\ref{def:action_occurence}. \begin{definition}[Action Occurrence] \label{def:action_occurence} An action occurrence is a partial function $A \rightharpoonup \acute{A}$, where $\acute{A}$ denotes the set of occurred actions. $a \in \acute{A}$ is a tuple $\langle I, O, A, I_p \rangle$, where $I \subseteq \mathbb{N}^{+}$ is the set of indices of occurred actions, and $O \subseteq \mathbb{N}$ denotes the set of indices of action executors, i.e., the programs executing action $a$. $I_p \subseteq \mathbb{N}$ denotes the set of indices of occurred actions that happened before action $a$. If $a$ is the first occurred action, then $a.I_p = \{ 0 \}$. \end{definition} Now, we formally define the structure of the program state in ECSL. \begin{definition}[Program State] \label{def:program_state} A program state $\omega \in \Omega$ is a tuple $\langle S, H, T \rangle$, where $S$ denotes the stack, $H$ denotes the heap, and $T$ denotes the temporal memory. Formally, we define $S$, $H$, and $T$ as follows: \begin{align*} S \triangleq V \mapsto \llbracket V \rrbracket \\ H \triangleq \mathcal{L} \rightharpoonup_\text{fin} \llbracket V \rrbracket \\ T \triangleq I \mapsto \acute{A} \\ \Omega \triangleq S \times H \times T \end{align*} \end{definition} We discuss the temporal memory further by defining paths and traces. First, we define a relation between actions. \begin{definition}[Action Relation] The action ordering relation $\triangleleft$ is a partial order, defined as: $(a,a') \in \triangleleft \iff a \in \textit{Pre}(a')$, where $a,a' \in \acute{A}$ and $\textit{Pre}(a)$ denotes the set of predecessor actions of $a$. \end{definition} We use the notation $a \triangleleft a'$ as shorthand for $(a,a') \in \triangleleft$. Intuitively, $a \triangleleft a'$ means that action $a$ happens before action $a'$.
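As an illustrative sketch (the encoding is ours), occurred actions can be represented directly as the tuples $\langle I, O, A, I_p \rangle$ of the definition above, with the happens-before relation $\triangleleft$ recovered from the predecessor indices $I_p$:

```python
# Sketch: occurred actions as tuples <I, O, A, I_p>; field names follow
# the Action Occurrence definition, the Python encoding is ours.
from collections import namedtuple

Occurred = namedtuple('Occurred', ['I', 'O', 'A', 'I_p'])

def happens_before(a, b):
    # a <| b holds when one of a's indices appears among b's
    # predecessor indices I_p.
    return bool(a.I & b.I_p)
```

For instance, a send occurrence with index 1 precedes a receive occurrence whose $I_p$ contains 1, but not vice versa.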
\begin{definition}[Action Path] \label{def:action_path} A finite action path $\hat{\varrho}$ is a finite action sequence $a_1 a_2 ... a_n$ such that $\forall i \in [1,n): (a_i,a_{i+1}) \in \triangleleft$, where $n \geq 1$. An infinite action path $\varrho$ of $\mathfrak{P}$ is an infinite action sequence $a_1 a_2 a_3 ...$ such that $\forall i \in [1,+\infty):(a_i,a_{i+1}) \in \triangleleft$. \end{definition} \begin{definition}[Maximal and Initial] \label{def:max_init} An action path is maximal if and only if it is either finite and terminating or infinite. An action path is initial if and only if $l_0 \xhookrightarrow{g_0:\alpha} l$, where $l_0 \in L_0$ is an initial location, $g_0$ is the initial condition, and $\alpha = a_1.A$ is the action of the first occurred action. \end{definition} Now, we can define temporal memory in Definition~\ref{def:temporal_memory}, using Definition~\ref{def:action_path} and Definition~\ref{def:max_init}. \begin{definition}[Temporal Memory] \label{def:temporal_memory} A temporal memory $t \in T$ is an initial and maximal action path. \end{definition} We consider a program $\mathfrak{P}$ in $\mathfrak{W}$. For $a \in t$, if $a.O$ maps to $\mathfrak{P}$, we call $a$ a block in the temporal memory associated with $\mathfrak{P}$. All blocks associated with $\mathfrak{P}$ compose a subset $t_n$ of the temporal memory, which is the native temporal memory of $\mathfrak{P}$. We have $t = t_f \uplus t_n$, where $t_f$ is the foreign temporal memory of $\mathfrak{P}$. To connect the semantics with the temporal properties verified in ECSL, we introduce the action trace defined in Definition~\ref{def:action_trace}. \begin{definition}[Action Trace] \label{def:action_trace} The action trace of the finite action path $\hat{\varrho}=a_1 a_2 ... a_n$ is defined as $\mathcal{T}(\hat{\varrho})=\mathcal{P}(a_1)\mathcal{P}(a_2)...\mathcal{P}(a_n)$. The action trace $\mathcal{T}(\varrho)$ of the infinite action path $\varrho$ is defined in the same way.
$\mathcal{P}$ is a function that relates a set of propositions to an occurred action $a$, defined as $\mathcal{P} \triangleq \acute{A} \mapsto 2^\textit{Prop}$, where $\textit{Prop}$ is a set of propositions. \end{definition} Additionally, we use $\mathcal{T}(\mathfrak{P})$ to denote the set of all possible traces of program $\mathfrak{P}$. The set of all possible traces of the system is denoted $\mathcal{T}(\mathfrak{W})$. \begin{theorem} \label{the:trace_satisfication} Let \textit{Prop} be a set of propositions over a finite action trace $\mathcal{T}(\hat{\varrho})$ and let $\Phi$ be a propositional logic formula over \textit{Prop}; then $\infer{\mathcal{T}(\hat{\varrho}) \models \Phi} {\forall a' \in \hat{\varrho}: (\forall a \in \hat{\varrho}: a \triangleleft a' \land \mathcal{P}(a) \models \Phi \implies \mathcal{P}(a') \models \Phi)}$ \end{theorem} \begin{proof} Consider a finite action trace $\mathcal{T}(\hat{\varrho})$ of the finite action path $\hat{\varrho}=a_1a_2...a_n$. We take an action $a_i \in \hat{\varrho}$ to construct a fragment $\hat{\varrho}'=a_1...a_i$ of the path such that $\forall a \in \hat{\varrho}': a \triangleleft a_i \land \mathcal{P}(a) \models \Phi$. If this implies that $\mathcal{P}(a_i) \models \Phi$, we have $\forall a \in \hat{\varrho}': \mathcal{P}(a) \models \Phi \iff \mathcal{T}(\hat{\varrho}') \models \Phi$. Therefore, we can take all $a' \in \hat{\varrho}$ to construct all possible fragments of the action path. If $\forall \hat{\varrho}' \subseteq \hat{\varrho}:(\forall a \in \hat{\varrho}': a \triangleleft a' \land \mathcal{P}(a) \models \Phi \implies \mathcal{P}(a') \models \Phi)$, then $\forall a \in \hat{\varrho}: \mathcal{P}(a) \models \Phi \iff \mathcal{T}(\hat{\varrho}) \models \Phi$.
\end{proof} Intuitively, Theorem~\ref{the:trace_satisfication} shows that, for any occurred action $a'$ in $\mathcal{T}$, if the fact that all actions happening before $a'$ satisfy $\Phi$ implies that $a'$ also satisfies $\Phi$, then the trace $\mathcal{T}$ satisfies $\Phi$. \subsection{Communication Extension} In ECSL, we consider the formalization of communication as an elementary component, facilitating the high-level formalization of complex systems in which communications among programs are indispensable. A channel is an abstraction of the transmission medium that conveys information signals. A communication is the action of passing or receiving messages through a channel. In ECSL, we reason about channels and communications as the basis. \begin{definition}[Channel] A channel $c \in C$ is a buffer with a capacity $\textit{Cap}(c) \in \mathbb{N} \cup \{ \infty \}$ and a domain $\textit{Dom}(c)$. \end{definition} Let $c!m$ denote sending signal $m$ via channel $c$, and let $c?v$ denote receiving a signal from channel $c$ and assigning it to variable $v$. \begin{definition}[Communication] A communication $\pi \in \Pi$ is an action where $\Pi = \{ c!m, c?v \}$, $c \in C, m \in \textit{Dom}(c), v \in V \text{ with } \textit{Dom}(v) \supseteq \textit{Dom}(c)$. \end{definition} \begin{definition}[Complete] A communication $c!m$ is complete if and only if there exists a communication $c?v$ such that $(c!m, c?v) \in \triangleleft$. A communication $c?v$ is complete if and only if there exists a communication $c!m$ such that $(c!m, c?v) \in \triangleleft$. \end{definition} It is thus natural to regard channel $c$ as a buffer: a communication $c!m$ produces signal $m$ into the buffer, whereas a communication $c?v$ consumes a signal from the buffer while assigning it to variable $v$. With the definitions of channels and communications, we extend our program model into the channel program model.
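The buffer view of channels just described can be illustrated operationally with a minimal sketch of ours (using Python's thread-safe bounded queue; it is not part of the logic): $c!m$ blocks while the buffer is full and $c?v$ blocks while it is empty, so a complete communication transfers the signal intact.

```python
# Sketch: a finite asynchronous channel as a bounded blocking queue.
# c!m enqueues when not full; c?v dequeues when not empty.
import queue
import threading

def run_parallel_send_receive(message, capacity=1):
    c = queue.Queue(maxsize=capacity)   # channel c with Cap(c) = capacity
    received = {}

    def sender():                       # c!m
        c.put(message)                  # blocks while the buffer is full

    def receiver():                     # c?v
        received['v'] = c.get()         # blocks while the buffer is empty

    threads = [threading.Thread(target=sender),
               threading.Thread(target=receiver)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received['v']                # [[v]] = [[m]] after completion
```

Running the two communications in parallel always yields the sent signal, regardless of which thread is scheduled first.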
\begin{definition}[Channel Program] A channel program over $(V, C)$ is defined in the same manner as Definition~\ref{def:program}: $\mathfrak{C} \triangleq (L, A, \mathcal{E}, \hookrightarrow, L_0, g_0)$. The only difference is that $\hookrightarrow \subseteq L \times \| V \| \times (A \cup \Pi) \times L$, where $V$ is a set of typed variables and $C$ is a set of channels. \end{definition} Hence, conditional transitions are extended with the communication action set $\Pi$, which yields the conditional transitions $l \xhookrightarrow{g:c!m} l'$ and $l \xhookrightarrow{g:c?v} l'$ respectively. \begin{theorem} \label{thm:comm_spec} Let $c \in C$ be a finite asynchronous channel. If a complete communication $c!m$ and $c?v$ is correct, then the specifications below are satisfied. \begin{enumerate} \item $\{ m \mapsto - \}~c!m~\{ \top \}$ \item $\{ v \mapsto - \}~c?v~\{ \llbracket v \rrbracket = \llbracket m \rrbracket \}$ \end{enumerate} \end{theorem} \begin{proof} For the finite asynchronous channel $c$, we consider $c$ to be a finite queue with $\textit{Cap}(c) \in \mathbb{N}^{+}$. Recall the basic queue operations: $\textit{Enqueue}$ (appending an element at the rear of the queue), $\textit{Dequeue}$ (returning and removing the element at the head of the queue), and $\textit{Peek}$ (returning the element at the head of the queue without removing it). We specify $c!m$ and $c?v$ respectively as follows. $c!m \triangleq \text{with } c \text{ when } \neg \textit{full}$ do $\textit{Enqueue}(c, m)$ $c?v \triangleq \text{with } c \text{ when } \neg \textit{empty}$ do $v \gets \textit{Dequeue}(c)$ where $\textit{full} \triangleq \textit{Len}(c) = \textit{Cap}(c)$ and $\textit{empty} \triangleq \textit{Len}(c) = 0$.
We introduce the resource invariant $\textit{RI}_{c}$ as follows: $\textit{RI}_c \triangleq \neg \textit{full} \lor (\neg \textit{empty} \land \textit{Peek}(c) = \llbracket m \rrbracket)$ We have the following channel proof rules derived from the \textit{Critical Region Rule}. \begin{align*} \infer{\{ P \} ~c!m~ \{ P' \} } {\{ (P * \textit{RI}_c) \land \neg \textit{full} \}~\textit{Enqueue}(c,m)~\{ P' * \textit{RI}_c \} } \\ \infer{\{ P \} ~c?v~ \{ P' \} } {\{ (P * \textit{RI}_c) \land \neg \textit{empty} \}~v \gets \textit{Dequeue}(c)~\{ P' * \textit{RI}_c \} } \label{eq:cpr} \end{align*} Now, we prove the first specification $\{ m \mapsto - \}~c!m~\{ \top \}$. \begin{align*} \{ (\textit{RI}_c * m \mapsto -) \land \neg \textit{full} \} \\ \{ \neg \textit{full} * m \mapsto - \} \\ \textit{Enqueue}(c, \llbracket m \rrbracket) \\ \{ \neg \textit{empty} \land \textit{Peek}(c) = \llbracket m \rrbracket \} \\ \{ \textit{RI}_c \} \\ \{ \textit{RI}_c \land \top \} \end{align*} The proof of $\{ v \mapsto - \}~c?v~\{ \llbracket v \rrbracket = \llbracket m \rrbracket \}$ is given below. \begin{align*} \{ (\textit{RI}_c * v \mapsto -) \land \neg \textit{empty} \} \\ \{ (\neg \textit{empty} \land \textit{Peek}(c) = \llbracket m \rrbracket) * v \mapsto - \} \\ v \gets \textit{Dequeue}(c) \\ \{ \llbracket v \rrbracket = \llbracket m \rrbracket \land \neg \textit{full} \} \\ \{ \neg \textit{full} * \llbracket v \rrbracket = \llbracket m \rrbracket \} \\ \{ \textit{RI}_c * \llbracket v \rrbracket = \llbracket m \rrbracket \} \end{align*} This completes our proof of the specifications, given that the complete communication is correct. \end{proof} \begin{corollary} \label{cor:com_spec} Let $c \in C$ be a finite asynchronous channel. If a complete communication $c!m$ and $c?v$ is correct, then $\{ m \mapsto - \}~c!m \parallel c?v~\{ \llbracket v \rrbracket = \llbracket m\rrbracket \}$ is satisfied.
\end{corollary} \begin{proof} With the \textit{Parallel Composition Rule}, we only need to prove the separate programs locally, in the same way as the proof of Theorem~\ref{thm:comm_spec}, and then combine these local proofs with the \textit{Critical Region Rule} and the \textit{Parallel Composition Rule}. The proof outline is given in Table~\ref{tab:com_proof_spec2}. \begin{table} \caption{Proof outline of $\{ m \mapsto - \}~c!m \parallel c?v~\{ \llbracket v \rrbracket = \llbracket m \rrbracket \}$.} \label{tab:com_proof_spec2} \centering \begin{tabular}{ccc} \multicolumn{3}{c}{$\{ m \mapsto - \}$} \\ \multicolumn{3}{c}{$\{ m \mapsto - * \top \}$} \\ $\{ m \mapsto - \}$ & & $\{ \top \}$ \\ $c!m$ & $\parallel$ & $c?v$ \\ $\{ \top \}$ & & $\{ \llbracket v \rrbracket = \llbracket m \rrbracket \}$ \\ \multicolumn{3}{c}{$\{ \top * \llbracket v \rrbracket = \llbracket m \rrbracket \}$} \\ \multicolumn{3}{c}{$\{ \llbracket v \rrbracket = \llbracket m \rrbracket \}$} \end{tabular} \end{table} \end{proof} \subsection{Environment Extension} We extend CSL with a representation of the environment to reason about the environment factors of a program, especially in a parallel system. Environment factors can be factorized into foreign factors and native factors. \begin{definition}[Judgement Form] \label{def:judgement_form} We define the form of a judgement of ECSL as $J \vdash \{ \Gamma, \gamma \land P \}~\alpha~\{ \Gamma, \gamma' \land P' \}$, where $J$ denotes the judgement \cite{reddy_syntactic_2012}. $\Gamma$ specifies the foreign conditions, while $\gamma$ and $\gamma'$ specify the native pre-conditions and post-conditions. $P$ and $P'$ are assertions. $\alpha \in A$ is the action that changes the state of programs. The syntax of $\Gamma$ and $\gamma$ is defined in the manner of a temporal logic. \end{definition} Intuitively, the foreign environment is a set of conditions from other processes, while the native environment is a set of conditions from the local process.
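As a toy operational reading of this judgement form (our illustration; ECSL itself is a proof system, not an interpreter), the foreign condition can be thought of as gating the action, while the native pre- and post-conditions are checked locally around it:

```python
# Sketch: a toy model of {Gamma, gamma ∧ P} alpha {Gamma, gamma' ∧ P'}.
# The action runs only when the foreign condition holds; the native
# pre-/post-conditions are asserted around it. All names are ours.

def run_judgement(state, foreign_cond, native_pre, action, native_post):
    if not foreign_cond(state):
        return state, False        # environment not ready: alpha is blocked
    assert native_pre(state)       # gamma ∧ P holds before alpha
    state = action(state)
    assert native_post(state)      # gamma' ∧ P' holds after alpha
    return state, True
```

For example, an action setting a local variable may be permitted only after a peer process has signalled completion, with the native conditions constraining the local state before and after.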
There is a special kind of assertion called a \textit{pure} assertion. \begin{definition}[Pure Assertion] \label{def:pure} In program $\mathfrak{P}$, an assertion $P$ is pure if and only if $(s, h, t) \models P \implies \forall t' \in T: (s, h, t') \models P$. \end{definition} In intuitive terms, an assertion $P$ is pure if and only if the validity of $P$ is independent of the environment factors. If both the pre-condition and the post-condition contain only pure assertions, the specification reduces to CSL. We introduce new rules for the environment extension. For brevity, we use $\Upsilon$ to denote the conjunction of $\gamma$ and $P$. The big star notation $\circledast_{i=0}^n$ denotes the consecutive separating conjunction from index $0$ to $n$. \begin{align*} \infer{\{ \Gamma, \Upsilon \} \text{ when } \Gamma \text{ do } \alpha~\{ \Gamma, \Upsilon' \} } {\{ \Gamma, \Upsilon \}~\alpha~\{ \Gamma, \Upsilon' \} } \\ \text{(Foreign Environment Rule)} \end{align*} \begin{align*} \infer{\{ \circledast_{i=0}^n \Upsilon_i \}~\alpha_0 \parallel ... \parallel \alpha_n~\{ \circledast_{i=0}^n \Upsilon_i' \}} {\{ \Gamma_0, \Upsilon_0 \}~\alpha_0~\{ \Gamma_0, \Upsilon_0' \}~...~\{ \Gamma_n, \Upsilon_n \}~\alpha_n~\{ \Gamma_n, \Upsilon_n' \}} \\ \text{(Environment Composition Rule)} \end{align*} In the \textit{Environment Composition Rule}, the inference eliminates the foreign environments naturally if we regard a parallel system as the highest level of specification, which means that all separated programs must mutually satisfy the foreign environments of the others. \begin{example} \label{eg:env} We consider a system $\mathfrak{W}$ containing two parallel channel programs $\mathfrak{C}_0$ and $\mathfrak{C}_1$, specified in Table~\ref{tab:pcs_spec}, where the left channel program is $\mathfrak{C}_0$ and the right one is $\mathfrak{C}_1$. The specification is well formulated with the support of the communication encapsulation and Theorem~\ref{thm:comm_spec}.
\begin{table*} \caption{Specification of $\mathfrak{W}$ in Example~\ref{eg:env}.} \label{tab:pcs_spec} \centering \begin{tabular}{ccc} & $\{ \top, m_0, m_1 \mapsto -, - * v_0, v_1 \mapsto -, - \}$ & \\ $c!m_0$ & $\parallel$ & $c?v_0$ \\ $c!m_1$ & & $c?v_1$ \\ \multicolumn{3}{c}{$\{ \top, (c!m_0 \triangleleft c!m_1 \land \top) * (c?v_0 \triangleleft c?v_1 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket v_1 \rrbracket = \llbracket m_1 \rrbracket) \}$} \end{tabular} \end{table*} The sending channel program $\mathfrak{C}_0$ sends signals $m_0$ and $m_1$. The receiving channel program $\mathfrak{C}_1$ receives signal $m_0$ and assigns it to local variable $v_0$ after $\mathfrak{C}_0$ completes the sending action $c!m_0$, and it receives signal $m_1$ in the same way. We can verify $\mathfrak{W}$ by verifying $\mathfrak{C}_0$ and $\mathfrak{C}_1$ separately according to the \textit{Environment Composition Rule}. We formalize each channel program from both the foreign perspective and the native perspective with the support of environment factors. For $\mathfrak{C}_0$, which sends signals through channel $c$, we can specify and verify it as follows, omitting detailed pointer management: \begin{align*} \{ \top, \top \land m_0, m_1 \mapsto -, - \} \\ c!m_0 \\ \{ \top, c!m_0 \land m_1 \mapsto - \} \\ c!m_1 \\ \{ \top, c!m_0 \triangleleft c!m_1 \land \top \} \end{align*} It is noteworthy that, by Definition~\ref{def:pure}, we can reduce the formalization of $\mathfrak{C}_0$ to a CSL form by omitting the environment factor, because the assertion is pure. We use action paths as shorthand for atomic propositions in a temporal logic.
For $\mathfrak{C}_1$, which receives signals from channel $c$ and assigns them to different variables, we can formalize it as follows: \begin{align*} \{ \top, v_0, v_1 \mapsto -, - \} \\ \{ c!m_0, v_0, v_1 \mapsto -, - \} \\ c?v_0 \\ \{ c!m_0, c?v_0 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \} \\ \{ c!m_0 \triangleleft c!m_1, c?v_0 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \} \\ c?v_1 \\ \{ c!m_0 \triangleleft c!m_1, c?v_0 \triangleleft c?v_1 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket v_1 \rrbracket = \llbracket m_1 \rrbracket \} \end{align*} Now, we give the proof outline in Table~\ref{tab:env_proof}, using the \textit{Environment Composition Rule}. \begin{table*} \caption{Proof outline of the specification of $\mathfrak{W}$ in Example~\ref{eg:env}.} \label{tab:env_proof} \centering \begin{tabular}{lcl} & $\{ \top, m_0, m_1 \mapsto -, - * v_0, v_1 \mapsto -, - \}$ & \\ $\{ \top, \top \land m_0, m_1 \mapsto -, - \}$ & & $\{ c!m_0, v_0, v_1 \mapsto -, - \}$ \\ $c!m_0$ & $\parallel$ & $c?v_0$ \\ $\{ \top, c!m_0 \land m_1 \mapsto - \}$ & & $\{ c!m_0, c?v_0 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \}$ \\ $\{ \top, c!m_0 \land m_1 \mapsto - \}$ & & $\{ c!m_0 \triangleleft c!m_1, c?v_0 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \}$ \\ $c!m_1$ & $\parallel$ & $c?v_1$ \\ $\{ \top, c!m_0 \triangleleft c!m_1 \land \top \}$ & & $\{ c!m_0 \triangleleft c!m_1, c?v_0 \triangleleft c?v_1 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket v_1 \rrbracket = \llbracket m_1 \rrbracket \}$ \\ \multicolumn{3}{c}{$\{ \top, (c!m_0 \triangleleft c!m_1 \land \top) * (c?v_0 \triangleleft c?v_1 \land \llbracket v_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket v_1 \rrbracket = \llbracket m_1 \rrbracket) \}$} \end{tabular} \end{table*} \end{example} From Example~\ref{eg:env}, we can see that each program in a system can be proved locally and then
combined with the \textit{Environment Composition Rule}, as long as their native environments mutually satisfy the foreign environments. In other words, the system can be proved correct if and only if all foreign environments of the programs in the system can be eliminated. By extending the environment representation, we equip CSL with the capability to reason about assertions together with environment factors, including the foreign environment and the native environment, which enhances modularity. Furthermore, we can formulate temporal properties of systems explicitly. \subsection{Nest Extension} To formalize complex systems with a nested architecture, we introduce the nest extension to enhance the capability of formalizing such systems from a low abstraction level to a high abstraction level. Let $\mathfrak{N}$ denote a set of systems at the same level, with $N_0, \dots, N_n \in \mathfrak{N}$. $@$ is the ownership relation between actions and systems. $(\alpha, N) \in @$ denotes that action $\alpha$ happens at system $N$, meaning that system $N$ has the ownership of action $\alpha$. We use the notation $\alpha @ N$ as shorthand for $(\alpha,N) \in @$. $\parallel_i$ is the notation for nest parallel, where $i$ denotes the level ID, to distinguish it from program parallel $\parallel$. We introduce new rules for the nest extension. \begin{align*} \infer{\{ \hat{\Gamma}, \circledast_{i=0}^n \Upsilon_i \}~\alpha_0@N \parallel ... \parallel \alpha_n@N~\{ \hat{\Gamma}, \circledast_{i=0}^n \Upsilon_i' \}} {\{ \Gamma_0, \Upsilon_0 \}~\alpha_0@N~\{ \Gamma_0, \Upsilon_0' \}~...~\{ \Gamma_n, \Upsilon_n \}~\alpha_n@N~\{ \Gamma_n, \Upsilon_n' \}} \\ \text{(Nest Environment Composition Rule)} \end{align*} \begin{align*} \infer{ \{ \circledast_{i=0}^n \Upsilon_i \}~\alpha_0@N_0 \parallel_i ...
\parallel_i \alpha_n@N_n~\{ \circledast_{i=0}^n \Upsilon_i' \} } { \{ \Gamma_0, \Upsilon_0 \}~\alpha_0@N_0~\{ \Gamma_0, \Upsilon_0' \}...\{ \Gamma_n, \Upsilon_n \}~\alpha_n@N_n~\{ \Gamma_n, \Upsilon_n' \}} \\ \text{(Nest Composition Rule)} \end{align*} The most distinguishing feature of the \textit{Nest Environment Composition Rule} is the immutability of the foreign environment. For a subsystem on $N$, the foreign environment includes the temporal properties of other subsystems on $N$ and the temporal properties of other systems at the same level. While specifying a parallel subsystem on system $N$, the temporal properties of other systems can be used as environment factors. For instance, to perform action $c!m_1$, the sending program on $N$ needs to satisfy the condition that another program on $N$ has received a signal $m_0$ from $N'$ through channel $c$ and assigned $m_0$ to variable $v$, which can be specified as $\{ c!m_0@N' \triangleleft c?v@N, m_1 \mapsto - \}~c!m_1~\{ \top, c!m_1 \}$. In this manner, we distinguish communications at different levels. Furthermore, our \textit{Nest Composition Rule} makes it possible to construct a nested system with multiple abstraction levels. \begin{example} \label{eg:nest} We consider a small network with two systems $N_0$ and $N_1$ communicating with each other. Each system has two parallel channel programs, one for sending signals and another for receiving signals. Assume a simple communication protocol: $N_0$ sends a message $m_0$ to $N_1$ via channel $c_0$. When $N_1$ receives message $m_0$ from $N_0$, $N_1$ sends message $n_0$ to $N_0$ through channel $c_1$. When $N_0$ receives message $n_0$ from channel $c_1$, $N_0$ sends message $m_1$ back to $N_1$ through channel $c_0$, and $N_1$ in turn sends message $n_1$ through channel $c_1$. When $N_0$ confirms that it has received $n_1$ and finished the assignment, the sending channel program of $N_0$ terminates normally. We annotate the interactions in the network in Table~\ref{tab:nest_annotate}.
\begin{table*} \caption{Annotation of the network in Example~\ref{eg:nest}.} \label{tab:nest_annotate} \centering \begin{tabular}{ccccccc} $c_0!m_0 @ N_0$ & & $c_1?x_0 @ N_0$ & & $c_1!n_0 @ N_1$ & & $c_0?y_0 @ N_1$ \\ & $\parallel$ & & $\parallel_0$ & & $\parallel$ & \\ $c_0!m_1 @ N_0$ & & $c_1?x_1 @ N_0$ & & $c_1!n_1 @ N_1$ & & $c_0?y_1 @ N_1$ \end{tabular} \end{table*} With the support of the nest parallel extension, we can specify systems separately and combine local reasoning to complete the specification and proof. We specify system $N_0$ as follows: \begin{align*} \{ \top, m_0, m_1 \mapsto -, - * x_0, x_1 \mapsto -, - \} \\ c_0!m_0@N_0;c_0!m_1@N_0 \parallel c_1?x_0@N_0;c_1?x_1@N_0 \\ \{ c_1!n_0@N_1 \triangleleft c_1!n_1@N_1, (c_0!m_0 \triangleleft c_0!m_1) * \\ (c_1?x_0 \triangleleft c_1?x_1 \land \llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land \llbracket x_1 \rrbracket = \llbracket n_1 \rrbracket) \} \end{align*} We give the proof outline for the specification of $N_0$ in Table~\ref{tab:n_0_proof}. For brevity, we omit the pointer management and merge some pre-conditions and post-conditions that are trivial to prove with the CSL inference rules.
\begin{table*} \caption{Proof outline of the specification of $N_0$ in Example~\ref{eg:nest}.} \label{tab:n_0_proof} \centering \begin{tabular}{ccc} & $\{ \top, m_0, m_1 \mapsto -, - * $ & \\ & $x_0, x_1 \mapsto -, - \}$ & \\ $\{ \top, m_0, m_1 \mapsto -, - \}$ & & $\{ c_0!m_0@N_0 \triangleleft c_1!n_0@N_1, x_0, x_1 \mapsto -, - \}$ \\ $c_0!m_0@N_0$ & $\parallel$ & $c_1?x_0@N_0$ \\ $\{ c_1?x_0@N_0, c_0!m_0@N_0 \land m_1 \mapsto - \}$ & & $\{ c_0!m_1@N_0 \triangleleft c_1!n_1@N_1, \llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land x_1 \mapsto - \}$ \\ $c_0!m_1@N_0$ & $\parallel$ & $c_1?x_1@N_0$ \\ $\{ c_1?x_1@N_0, c_0!m_0@N_0 \triangleleft c_0!m_1@N_0 \}$ & & $\{ c_1!n_0@N_1 \triangleleft c_1!n_1@N_1, c_1?x_0@N_0 \triangleleft c_1?x_1@N_0 \land $ \\ & & $\llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land \llbracket x_1 \rrbracket = \llbracket n_1 \rrbracket \}$ \\ \multicolumn{3}{c}{$\{ c_1!n_0@N_1 \triangleleft c_1!n_1@N_1, (c_0!m_0@N_0 \triangleleft c_0!m_1@N_0) * (c_1?x_0@N_0 \triangleleft c_1?x_1@N_0 \land \llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land \llbracket x_1 \rrbracket = \llbracket n_1 \rrbracket) \}$} \end{tabular} \end{table*} We also give the specification of system $N_1$ as follows: \begin{align*} \{ \top, y_0, y_1 \mapsto -,- * n_0, n_1 \mapsto -, - \} \\ c_0?y_0@N_1;c_0?y_1@N_1 \parallel c_1!n_0@N_1;c_1!n_1@N_1 \\ \{ c_0!m_0@N_0 \triangleleft c_0!m_1@N_0, (c_1!n_0 \triangleleft c_1!n_1) * \\ (c_0?y_0 \triangleleft c_0?y_1 \land \llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket y_1 \rrbracket = \llbracket m_1 \rrbracket) \} \end{align*} The proof outline, analogous to that of system $N_0$, is given in Table~\ref{tab:n_1_proof}.
\begin{table*} \caption{Proof outline of the specification of $N_1$ in Example~\ref{eg:nest}.} \label{tab:n_1_proof} \centering \begin{tabular}{ccc} & $\{ \top, y_0, y_1 \mapsto -,- * $ & \\ & $n_0, n_1 \mapsto -, - \}$ & \\ $\{ c_0!m_0@N_0, y_0, y_1 \mapsto -, - \}$ & & $\{ c_0?y_0@N_1, n_0, n_1 \mapsto -, - \}$ \\ $c_0?y_0@N_1$ & $\parallel$ & $c_1!n_0@N_1$ \\ $\{ c_1!n_0@N_1 \triangleleft c_0!m_1@N_0, \llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land y_1 \mapsto - \}$ & & $\{ c_0?y_1@N_1, c_1!n_0@N_1 \land n_1 \mapsto - \}$ \\ $c_0?y_1@N_1$ & $\parallel$ & $c_1!n_1@N_1$ \\ $\{ c_0!m_0@N_0 \triangleleft c_0!m_1@N_0, c_0?y_0 \triangleleft c_0?y_1 \land $ & & $\{ \top, c_1!n_0@N_1 \triangleleft c_1!n_1@N_1 \}$ \\ $\llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket y_1 \rrbracket = \llbracket m_1 \rrbracket \}$ & & \\ \multicolumn{3}{c}{$\{ c_0!m_0@N_0 \triangleleft c_0!m_1@N_0, (c_1!n_0@N_1 \triangleleft c_1!n_1@N_1) * (c_0?y_0@N_1 \triangleleft c_0?y_1@N_1 \land \llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket y_1 \rrbracket = \llbracket m_1 \rrbracket) \}$} \end{tabular} \end{table*} Let $P(N_i)'$ denote the post-condition of system $N_i$. In this network, we have $P(N_0)'$ and $P(N_1)'$ as follows: $P(N_0)' = \{ c_1!n_0@N_1 \triangleleft c_1!n_1@N_1, (c_0!m_0@N_0 \triangleleft c_0!m_1@N_0) * (c_1?x_0@N_0 \triangleleft c_1?x_1@N_0 \land \llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land \llbracket x_1 \rrbracket = \llbracket n_1 \rrbracket) \}$ $P(N_1)' = \{ c_0!m_0@N_0 \triangleleft c_0!m_1@N_0, (c_1!n_0@N_1 \triangleleft c_1!n_1@N_1) * (c_0?y_0@N_1 \triangleleft c_0?y_1@N_1 \land \llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket y_1 \rrbracket = \llbracket m_1 \rrbracket) \}$. Now, we specify the network in Table~\ref{tab:network_spec} where $\mathfrak{C}@N_i$ denotes programs running on $N_i$. 
The proof follows directly by combining the system proofs with the \textit{Nest Composition Rule}. \begin{table*} \caption{Specification of the network in Example~\ref{eg:nest}.} \label{tab:network_spec} \centering \begin{tabular}{ccc} & $\{ \top, m_0, m_1 \mapsto -, - * x_0, x_1 \mapsto -, - *$ & \\ & $n_0, n_1 \mapsto -, - * y_0, y_1 \mapsto -, - \}$ &\\ $\{ \top, m_0, m_1 \mapsto -, - *$ & & $\{ \top, n_0, n_1 \mapsto -, - *$ \\ $x_0, x_1 \mapsto -, - \}$ & & $y_0, y_1 \mapsto -, - \}$ \\ $\mathfrak{C}@N_0$ & $\parallel_0$ & $\mathfrak{C}@N_1$ \\ $P(N_0)'$ & & $P(N_1)'$ \\ \multicolumn{3}{c}{$\{\top, (c_0!m_0 \triangleleft c_0!m_1) * (c_1?x_0 \triangleleft c_1?x_1 \land \llbracket x_0 \rrbracket = \llbracket n_0 \rrbracket \land \llbracket x_1 \rrbracket = \llbracket n_1 \rrbracket) *$} \\ \multicolumn{3}{c}{$(c_1!n_0 \triangleleft c_1!n_1) * (c_0?y_0 \triangleleft c_0?y_1 \land \llbracket y_0 \rrbracket = \llbracket m_0 \rrbracket \land \llbracket y_1 \rrbracket = \llbracket m_1 \rrbracket) \}$} \end{tabular} \end{table*} \end{example} \section{Language} The full expressiveness of the ECSL can be exploited when specifying and verifying a complex system that has a nested structure and cross-layer communications. First, we define a programming language with built-in support for communication actions and nested parallel construction. We then define a specification language to formalize the systems constructed by the programming language. \subsection{Programming Language} We define a programming language expressive enough to construct systems at different abstraction levels.
\begin{definition}[Syntax of Programming Language] The syntax of the programming language is defined as follows: $\bar{E} ::= x ~|~ n ~|~ \bar{E}+\bar{E} ~|~ \bar{E}-\bar{E} ~|~ ...$ $\bar{B} ::= \top ~|~ \bot ~|~ \bar{B} \land \bar{B} ~|~ \bar{E}=\bar{E} ~|~ ...$ $\bar{C} ::= \textbf{skip} ~|~ x \gets \bar{E} ~|~ x \gets [\bar{E}] ~|~ [\bar{E}] \gets \bar{E} ~|~ ...$ $\bar{A} ::= \bar{C} ~|~ \textbf{send}(\bar{E}, \textit{ch}) ~|~ x \gets \textbf{receive}(\textit{ch})$ $\bar{S} ::= \bar{A} ~|~ \bar{S_1};\bar{S_2} ~|~ \textbf{if } \bar{B} \textbf{ then } \bar{A_1} \textbf{ else } \bar{A_2} ~|~ \textbf{while } \bar{B} \textbf{ do } \bar{A}$ $\bar{P} ::= \bar{S} ~|~ \bar{P_1} \parallel \bar{P_2} ~|~ \bar{P_1} \parallel_i \bar{P_2}$ \end{definition} We omit some usual arithmetic expressions in $\bar{E}$ and Boolean expressions in $\bar{B}$. Commands $\bar{C}$ include elementary actions such as the empty command, assignment commands and memory management commands, while actions $\bar{A}$ encapsulate communication commands, including \textbf{send} and \textbf{receive}. Statements $\bar{S}$ define the basic program structure. Parallel structures $\bar{P}$ describe the construction of nested systems. \subsection{Specification Language} To specify systems developed with the programming language above, we define a specification language. \begin{definition}[Syntax of Specification Language of ECSL] The assertion component of the specification language is structured as in Definition~\ref{def:spec_lang_csl}. Let $\Phi$ denote a grammar in a temporal logic. The new syntax is defined as follows: $\check{E} ::= \Phi$ $\check{Q} ::= \check{E}_f, \check{E}_n \land \check{P}$ \end{definition} \begin{example} We take linear temporal logic (LTL) as an example.
For the syntax of LTL over the set \textit{Prop} of propositions with $Q \in \textit{Prop}$, $\Gamma$ and $\gamma$ of program $\mathfrak{P}$ in system $\mathfrak{W}$ are formed as LTL formulae according to the following grammar: $\Phi ::= \top ~|~ Q ~|~ \neg \Phi ~|~ \Phi_0 \land \Phi_1 ~|~ \bigcirc \Phi ~|~ \Phi_0 \sqcup \Phi_1$. The unary prefix operator $\bigcirc$ and the binary infix operator $\sqcup$ are temporal modalities. If $\Phi$ holds in the next time step, $\bigcirc \Phi$ holds at the current moment. $\Phi_0 \sqcup \Phi_1$ holds at the current moment if there is a future moment $i$ for which $\Phi_1$ holds and $\Phi_0$ holds at all moments until moment $i$. \end{example} With Definition~\ref{def:program_state}, we can give the operational semantics of this specification language. \begin{definition}[Semantics of Specifications] The semantics of specifications comprises the semantics of assertions and of the environment factor. The semantics of assertions is given in Definition~\ref{def:sem_CSL}, with the only difference that the temporal memory is a new component of the program state according to Definition~\ref{def:program_state}. The semantics of the environment factor is defined as follows, with Theorem~\ref{the:trace_satisfication}: $(s, h, t) \models (\check{E}_f, \check{E}_n \land \check{P}) \iff \exists h_0, h_1, t_f, t_n: h = h_0 \uplus h_1 \land t = t_f \uplus t_n \land (s, h) \models \check{P} \land \mathcal{T}(t_f) \models \check{E}_f \land \mathcal{T}(t_n) \models \check{E}_n$ \end{definition} \section{Discussion} We design ECSL following unitarity and compatibility, which makes ECSL capable of specifying and verifying systems at different abstraction levels and of interpreting the standard CSL and typical variants such as \cite{bell_concurrent_2010,sergey_programming_2017}. We also expand the capability of the CSL to handle nested architectures, in contrast to the work in \cite{krogh-jespersen_aneris_2020}.
The most distinguishing feature of ECSL compared to existing work is its focus on tackling typical modularity issues in a unified manner. Our general idea for proving the soundness of ECSL is to formulate a new semantics of judgements following \cite{vafeiadis_concurrent_2011}, based on an auxiliary predicate $\textit{safe}_n(\alpha, s, h, t, J, P')$, where $\alpha$ denotes the action executing with stack $s$ and heap $h$, recorded in temporal memory $t$, while the judgement form is defined in Definition~\ref{def:judgement_form}. The soundness can be formulated as follows: $J \vdash \{ \Gamma, \gamma \land P \}~\alpha~\{ \Gamma, \gamma' \land P' \} \implies J \models \{ \Gamma, \gamma \land P \}~\alpha~\{ \Gamma, \gamma' \land P' \}$. We can prove soundness by showing that each proof rule of ECSL remains a sound implication after replacing all $\vdash$ by $\models$, though the proof of the environment rules is challenging. Although we do not address the representation of temporal properties in our examples, we can indeed represent them in temporal logic. For instance, the foreign environment of the pre-condition can be represented as $\Box(c!m \longrightarrow \Diamond c?v)$, meaning that whenever signal $m$ is sent through channel $c$, $v$ will eventually be assigned the value of signal $m$ received from channel $c$.
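To make the finite-trace reading of these modalities concrete, here is a minimal LTL evaluator sketch in Python. It is purely illustrative and not part of the ECSL mechanization; the trace, the proposition names (\texttt{send\_m}, \texttt{recv\_v}), and the tuple encoding of formulae are our own inventions. The derived operators $\Box$ and $\Diamond$ are encoded via until, matching the $\Box(c!m \longrightarrow \Diamond c?v)$ pattern above.

```python
# Finite-trace LTL evaluator sketch.  Illustrative only; the formula
# encoding and proposition names are invented for this example.

def holds(phi, trace, i=0):
    """Evaluate formula phi at position i of trace (a list of sets of atoms)."""
    kind = phi[0]
    if kind == "top":
        return True
    if kind == "atom":
        return phi[1] in trace[i]
    if kind == "not":
        return not holds(phi[1], trace, i)
    if kind == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if kind == "next":          # O(phi): phi holds one step later
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if kind == "until":         # phi U psi
        return any(holds(phi[2], trace, j)
                   and all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    if kind == "eventually":    # diamond(phi) = top U phi
        return holds(("until", ("top",), phi[1]), trace, i)
    if kind == "always":        # box(phi) = not diamond(not phi)
        return not holds(("eventually", ("not", phi[1])), trace, i)
    raise ValueError(f"unknown formula: {phi!r}")

# box(send_m -> diamond(recv_v)), with the implication written as
# not(send_m and not diamond(recv_v)):
spec = ("always", ("not", ("and", ("atom", "send_m"),
                           ("not", ("eventually", ("atom", "recv_v"))))))
trace = [{"send_m"}, set(), {"recv_v"}, {"send_m"}, {"recv_v"}]
print(holds(spec, trace))  # True: every send is eventually followed by a receive
```

On a trace whose final send is never answered, e.g. \texttt{[\{"send\_m"\}]}, the same specification evaluates to \texttt{False}.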
Furthermore, ECSL exhibits unitarity and compatibility, which we realize in a specification language for formalizing systems at different abstraction levels. Our next research direction is to apply a mechanized ECSL to the formalization of practical systems, especially decentralized systems such as consensus algorithms, smart contracts, and blockchain designs. \bibliographystyle{splncs04}
\section{Introduction} We are concerned with numerical schemes for evolution equations that arise as gradient flow (steepest descent) for an energy $E:H \rightarrow \mathbb{R}$, where $H$ is a Hilbert space with inner product $\langle\cdot,\cdot\rangle$: \begin{equation} \label{eq:de} u'=-\nabla_H E(u). \end{equation} Additionally, we will study gradient flows with a solution dependent inner product: \begin{equation} \label{eq:Lde} u'=-\mathcal{L}(u)\nabla_H E(u) \end{equation} where $\mathcal{L}(u)$ is a positive definite operator that depends on $u$. Equations \cref{eq:de} and \cref{eq:Lde} may represent (scalar or vectorial) ordinary or partial differential equations. One property of \cref{eq:de} and \cref{eq:Lde} is energy dissipation, $\frac{d}{dt} E(u) \leq 0$. Indeed, taking $\mathcal{L}(u)$ to be the identity in the case of \cref{eq:de}, \[ \frac{d}{dt} E(u)=\langle \nabla_H E(u),u' \rangle = -\langle \nabla_H E(u),\mathcal{L}(u) \nabla_H E(u) \rangle \leq 0. \] In \cite{alexander2019variational} the authors focused on unconditionally stable numerical methods to solve \cref{eq:de}. In this paper, our focus is on semi-implicit methods that come with a rigorously established energy stability property inherited from the above expression of dissipation. Specifically, let \begin{equation} \label{eq:additive} E(u) = E_1(u)+E_2(u) \end{equation} where in our numerical implementation we will handle $E_1$ implicitly and $E_2$ explicitly. We allow any choice for $E_1$ and $E_2$ as long as \cref{eq:additive} is satisfied. Our numerical methods will guarantee that, when the time step is less than a constant depending only on $E_2$, the following numerical dissipation property holds: \begin{equation} \label{eq:energymin} E(u_{n+1})\leq E(u_n) \end{equation} where $u_n$ denotes the approximation to the solution at the $n$-th time step. A basic semi-implicit scheme for the abstract equation \cref{eq:de}, with time step size $k>0$, reads \begin{equation} \label{eq:si} \frac{u_{n+1} - u_n}{k} = -\nabla_H E_1(u_{n+1})-\nabla_H E_2(u_{n}).
\end{equation} Let $L_2(u,u_n)$ be the linearization of $E_2$ around $u_n$, so that \begin{equation*} L_2(u,u_n)=E_2(u_n)+\langle\nabla_H E_2(u_n),u-u_n\rangle. \end{equation*} Then \cref{eq:si} is the Euler-Lagrange equation for the optimization problem \begin{equation} \label{eq:minmov} u_{n+1}=\argmin_u E_1(u)+L_2(u,u_n)+\frac{1}{2k}\left\|u-u_n\right\|^2 \end{equation} where $\| \cdot \|^2 = \langle \cdot , \cdot \rangle$. Define \begin{equation} \label{eq:lmda} \Lambda=\max\{0,\max_{u,\left\|v\right\|=1}D^2E_2(u)\big(v ,v\big)\} \end{equation} where by $D^2E_2(u)\big(v ,w\big)$ we mean $\left. \frac{d^2}{d\epsilon_2d\epsilon_1}E_2(u+\epsilon_1 v+\epsilon_2 w) \right|_{\epsilon_1=\epsilon_2=0}.$ Then we have \begin{equation} \label{eq:bound} E_2(u)\leq L_2(u,p)+\frac{\Lambda}{2}\left\|u-p\right\|^2 \end{equation} for any $u$ and $p$. It follows that when $k \leq \frac{1}{\Lambda}$ \begin{align*} E(u_{n+1})&=E_1(u_{n+1})+E_2(u_{n+1})\leq E_1(u_{n+1})+L_2(u_{n+1},u_n)+\frac{1}{2k}\left\|u_{n+1}-u_n\right\|^2\\&\leq E_1(u_{n})+L_2(u_{n},u_n)+\frac{1}{2k}\left\|u_n-u_n\right\|^2=E_1(u_{n})+E_2(u_{n})=E(u_{n}) \end{align*} so that scheme \cref{eq:si} is stable under this condition on the time step $k$, provided that optimization problem \cref{eq:minmov} can be solved. Our new class of methods makes no assumption (e.g. convexity, concavity) on the components $E_1$ and $E_2$ of the energy that are treated implicitly and explicitly, respectively. The methods achieve high-order (at least up to third-order) accuracy. Our stability results are conditional, but revert to unconditional stability when $E_1$ and $E_2$ have appropriate convexity properties, and contain as a special case previous unconditional stability results for high order, convexity splitting type schemes.
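To illustrate the scheme and its stability condition, the following Python sketch runs \cref{eq:si} on a toy one-dimensional energy $E(u)=\tfrac{1}{2}u^2+\cos(u)$ with the splitting $E_1(u)=\tfrac{1}{2}u^2$ (implicit) and $E_2(u)=\cos(u)$ (explicit). The energy and splitting are our own example, chosen so that the implicit step has a closed form; since $E_2''(u)=-\cos(u)\leq 1$ we have $\Lambda=1$, so dissipation is guaranteed for $k \leq 1/\Lambda = 1$.

```python
import math

# Toy example of the basic semi-implicit scheme: E(u) = u^2/2 + cos(u),
# split as E_1(u) = u^2/2 (implicit) and E_2(u) = cos(u) (explicit).
# Here Lambda = 1, so energy dissipation should hold for k <= 1.
# This energy/splitting is our own illustration, not taken from the paper.

def E(u):
    return 0.5 * u**2 + math.cos(u)

def semi_implicit_step(u, k):
    # (u_next - u)/k = -E_1'(u_next) - E_2'(u), with E_1'(v) = v and
    # E_2'(v) = -sin(v); the quadratic implicit part solves in closed form:
    return (u + k * math.sin(u)) / (1.0 + k)

k = 0.5          # time step within the threshold k <= 1/Lambda
u = 2.0          # initial condition
energies = [E(u)]
for _ in range(40):
    u = semi_implicit_step(u, k)
    energies.append(E(u))

# The discrete energy is monotonically non-increasing, as guaranteed above.
assert all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
```

Because the implicit part is quadratic, each step is the exact minimizer of \cref{eq:minmov}, which is what the dissipation argument requires.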
In particular, a previous paper on higher order ARK IMEX energy stable schemes \cite{shin2017unconditionally} studies, in the spirit of convexity splitting \cite{eyre1998unconditionally}, formulations that break up the energy into convex and concave parts, and treat the convex part implicitly and the concave part explicitly. We know of no other work on ARK IMEX methods that considers energy stability. Another novelty of the present paper is extending these high order, stable, implicit and semi-implicit methods for general gradient flows to solve \cref{eq:Lde}, the case where the inner product is solution dependent. There are certainly existing stable methods for \cref{eq:Lde} on a case by case basis, for example for the Cahn-Hilliard equation with degenerate mobility \cite{chen2019positivity,han2015second} and the porous medium equation \cite{del2018robust, duan2019numerical, westdickenberg2010variational}. To our knowledge, this is the first time energy stable, high order schemes for general gradient flows with solution dependent inner products have been developed. \noindent The rest of the paper is organized as follows: \begin{itemize} \item \Cref{sec:gf} presents the conditions for energy stability and constructs schemes that satisfy them. \item In \cref{sec:ex}, we state the consistency equations for the ARK IMEX schemes for solving gradient flows \cref{eq:de} and give 2nd and 3rd order examples. \item \Cref{sec:SDIP} gives 2nd and 3rd order methods for solving gradient flows with solution dependent inner product \cref{eq:Lde} and provides consistency calculations. \item In \cref{sec:nr}, we present numerical convergence studies for several well-known partial differential equations that are gradient flows, including with respect to Wasserstein metrics. \end{itemize} The code for \cref{sec:nr} is publicly available, and can be found at \url{https://github.com/AZaitzeff/SIgradflow}.
\section{Stability of Our New Schemes} \label{sec:gf} In this section, we formulate a wide class of numerical schemes that are energy stable by construction. These schemes are Implicit-Explicit Additive Runge-Kutta (ARK IMEX) schemes, but we write them in variational form in order to prove energy stability more easily. The variational formulation of our $M$-stage ARK IMEX scheme is: \begin{enumerate} \item Set $U_0 = u_n$. \item For $m=1,\ldots,M$: \begin{equation} \label{eq:ms} U_m=\argmin_u \bigg( E_1(u)+\sum^{m-1}_{i=0} \theta_{m,i} L_2(u,U_i)+\sum^{m-1}_{i=0}\frac{\gamma_{m,i}}{2k}\left\|u-U_i\right\|^2 \bigg) \end{equation} where \begin{equation} \label{eq:L2} L_2(u,p)=E_2(p)+\langle\nabla_H E_2(p),u-p\rangle. \end{equation} \item Set $u_{n+1}=U_M$. \end{enumerate} \medskip \noindent The schemes for approximating a gradient flow with respect to a {\it solution dependent inner product} will be a series of embedded ARK IMEX methods. The inner product will be fixed for each ARK IMEX step, allowing the stability results of this section to apply. Now we establish quite broad conditions on the coefficients $\gamma_{m,i}$ and $\theta_{m,i}$ that ensure conditional energy dissipation \cref{eq:energymin}. Before we state and prove the conditions in generality, consider the following two-stage special case of scheme \cref{eq:ms}: \small \begin{align} \label{eq:twostage1} U_1=\argmin_u\bigg( E_1(u) &+L_2(u,u_n)+\frac{\gamma_{1,0}}{2k}\left\|u-u_n\right\|^2\bigg)\\ \label{eq:twostage2} \begin{split} u_{n+1}=\argmin_u \bigg(E_1(u) &+\theta_{2,1} L_2(u,U_1)+\theta_{2,0} L_2(u,u_n)\\&+\frac{\gamma_{2,0}}{2k}\left\|u-u_n\right\|^2+\frac{\gamma_{2,1}}{2k}\left\|u-U_1\right\|^2\bigg) \end{split} \end{align} \normalsize Let $\Lambda=\max\{0,\max_{x,\left\|v\right\|=1} D^2E_2(x)\big(v ,v\big)\}$. Note that this implies \begin{equation} \label{eq:twostage3} E_2(u)\leq L_2(u,p)+\frac{\Lambda}{2}\left\|u-p\right\|^2 \end{equation} for any $u$ and $p$.
Also note that $L_2(u,u)=E_2(u)$. Impose the conditions \small \begin{align} \label{eq:twostage4} \begin{split} &\gamma_{1,0}-k\Lambda-\frac{(\gamma_{2,0}-k\Lambda\theta_{2,0})^2}{(\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1})} \geq 0 \mbox{, }\\ &\gamma_{2,1}+\gamma_{2,0}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1} > 0 \mbox{, }\\ &\theta_{2,1}+\theta_{2,0}=1 \mbox{ and }\\ &\theta_{2,1},\theta_{2,0}\geq 0 \end{split} \end{align} \normalsize on the parameters. Set $\mu = \frac{\gamma_{2,0}-k\Lambda\theta_{2,0}}{\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1}}$. First note that \eqref{eq:twostage2} is equivalent to \small \begin{multline} \label{eq:twostage5} u_{n+1} = \argmin_u E_1(u)+\theta_{2,1} L_2(u,U_1)+\theta_{2,0} L_2(u,u_n)+\frac{\theta_{2,0}\Lambda}{2} \left\| u - u_n \right\|^2+ \\ \frac{\theta_{2,1}\Lambda}{2}\left\| u - U_1 \right\|^2 + \frac{\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1}}{2k} \left\| u - \big( \mu u_n + (1-\mu) U_1 \big) \right\|^2. \end{multline} \normalsize This can be seen by expanding the norm squared and comparing the quadratic and linear terms in $u$. With these tools in hand we can prove energy dissipation: \small \begin{align*} &E(u_{n+1})\\=&E_1(u_{n+1})+E_2(u_{n+1})\\ \leq& E_1(u_{n+1})+ \theta_{2,1} [L_2(u_{n+1},U_1) +\frac{\Lambda}{2}\left\|u_{n+1}-U_1\right\|^2 ]\\&+ \theta_{2,0} [L_2(u_{n+1},u_n) +\frac{\Lambda}{2}\left\|u_{n+1}-u_n\right\|^2] && \text{(by \eqref{eq:twostage3})}\\ \leq& E_1(u_{n+1})+ \theta_{2,1} [L_2(u_{n+1},U_1) +\frac{\Lambda}{2}\left\|u_{n+1}-U_1\right\|^2 ]\\&+ \theta_{2,0} [L_2(u_{n+1},u_n) +\frac{\Lambda}{2}\|u_{n+1}-u_n\|^2]\\&+\frac{\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1}}{2k}\left\| u_{n+1} - \big( \mu u_n + (1-\mu) U_1 \big) \right\|^2.
&& \text{(by \eqref{eq:twostage4})}\\ \leq& E_1(U_1) + \theta_{2,1} E_2(U_1)+ \theta_{2,0} [L_2(U_1,u_n) +\frac{\Lambda}{2}\|U_1-u_n\|^2]\\&+\frac{\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1}}{2k} \left\|U_1 - \big( \mu u_n + (1-\mu) U_1 \big) \right\|^2 && \text{(by \eqref{eq:twostage5})}\\ \leq& E_1(U_1) + \theta_{2,1}[ L_2(U_1,u_n) +\frac{\Lambda}{2}\|U_1-u_n\|^2]\\&+ \theta_{2,0} [L_2(U_1,u_n) +\frac{\Lambda}{2}\|U_1-u_n\|^2]\\&+\frac{(\gamma_{2,0}-k\Lambda\theta_{2,0})^2}{(\gamma_{2,0}+\gamma_{2,1}-k\Lambda\theta_{2,0}-k\Lambda\theta_{2,1})2k} \left\|U_1 - u_n \right\|^2\\ \leq& E_1(U_1) + [L_2(U_1,u_n) +\frac{\Lambda}{2}\|U_1-u_n\|^2]+ \frac{\gamma_{1,0}-k\Lambda}{2k} \left\| U_1 - u_n \right\|^2 && \text{(by \eqref{eq:twostage3})}\\ \leq& E(u_n). && \text{(by \eqref{eq:twostage1})} \end{align*} \normalsize The first two conditions of \cref{eq:twostage4} require $k$ to be below a certain threshold. Hence the dissipation of \cref{eq:twostage1} \& \cref{eq:twostage2} is conditional, unless $E_2$ happens to be concave, in which case these two conditions are satisfied for all $k>0$. \\ We will now extend this discussion to the general $M$-stage case of scheme \cref{eq:ms}: \begin{theorem} \label{claim:ms} Fix a time step $k$. Define $\Lambda=\max\{0,\max_{x,\left\|v\right\|=1} D^2E_2(x)\big(v ,v\big)\}$ and the following auxiliary quantities in terms of the coefficients $\gamma_{m,i}$ and $\theta_{m,i}$ of scheme \cref{eq:ms}: \begin{align} \label{eq:tildegamma} &\tilde{\gamma}_{m,i}=\gamma_{m,i}-k\Lambda\theta_{m,i}-\sum_{j=m+1}^M\tilde{\gamma}_{j,i}\frac{\tilde{S}_{j,m}}{\tilde{S}_{j,j}}\\ \label{eq:S} &\tilde{S}_{j,m}=\sum_{i=0}^{m-1} \tilde{\gamma}_{j,i} \end{align} If $\tilde{S}_{m,m}>0$ for $m=1,\ldots,M$, $\theta_{m-1,i}\geq\theta_{m,i}\geq 0$ and $\sum_{i=0}^{m-1} \theta_{m,i}=1$, then scheme \cref{eq:ms} satisfies the energy stability condition \cref{eq:energymin}: For every $n=0,1,2,\ldots$ we have $E(u_{n+1}) \leq E(u_n)$.
\end{theorem} As we will see in \cref{sec:ex}, the conditions on the parameters $\gamma_{m,i}$ and $\theta_{m,i}$ of scheme \cref{eq:ms} imposed in \cref{claim:ms} are loose enough to enable meeting consistency conditions to high order. We will establish \cref{claim:ms} with the help of a couple of lemmas: \begin{lemma} Let the auxiliary quantities $\tilde{S}_{j,m}$ and $\tilde{\gamma}_{m,i}$ be defined as in \cref{claim:ms}. We have \label{lem:equiv} \small \begin{align*} &\argmin_u E_1(u)+\sum^{m-1}_{i=0} \theta_{m,i} L_2(u,U_i)+\sum^{m-1}_{i=0}\frac{\gamma_{m,i}}{2k}\left\|u-U_i\right\|^2\\= &\argmin_u E_1(u)+\sum^{m-1}_{i=0} \theta_{m,i} [L_2(u,U_i)+\frac{\Lambda}{2}\left\|u-U_i\right\|^2]+\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|u-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2 \end{align*} \end{lemma} \normalsize \begin{proof} As in the two-stage case, the proof consists of expanding the norm squared terms and showing that all the quadratic and linear terms in $u$ are equal. First, the expansion of $\sum^{m-1}_{i=0}\frac{\gamma_{m,i}}{2k}\|u-U_i\|^2$ is \small \begin{align} \label{eq:firstexp} \frac{\|u\|^2}{2k}\sum^{m-1}_{i=0}\gamma_{m,i} - \frac{1}{k}\langle u,\sum^{m-1}_{i=0}\gamma_{m,i}U_i\rangle+\text{terms that do not depend on $u$.} \end{align} \normalsize Next, we will establish two identities to help us expand \small\[\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \|u-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\|^2.\]\normalsize First, rearranging \cref{eq:tildegamma} gives \small \begin{equation} \label{eq:gammaind} \gamma_{m,i}-k\Lambda\theta_{m,i}=\sum_{j=m}^M\tilde{\gamma}_{j,i}\frac{\tilde{S}_{j,m}}{\tilde{S}_{j,j}}.
\end{equation} \normalsize Next, an identity for $\tilde{S}_{m,m}$: \small \begin{align*} \tilde{S}_{m,m}&=\sum_{i=0}^{m-1} \tilde{\gamma}_{m,i}=\sum_{i=0}^{m-1}\bigg[ \gamma_{m,i}-k\Lambda \theta_{m,i}-\sum_{j=m+1}^M\tilde{\gamma}_{j,i}\frac{\tilde{S}_{j,m}}{\tilde{S}_{j,j}}\bigg]\\&= \sum_{i=0}^{m-1} \bigg[\gamma_{m,i}-k\Lambda \theta_{m,i}\bigg]-\sum_{j=m+1}^M\bigg[\sum_{i=0}^{m-1}\tilde{\gamma}_{j,i}\bigg]\frac{\tilde{S}_{j,m}}{\tilde{S}_{j,j}}\\&=\sum_{i=0}^{m-1}\bigg[ \gamma_{m,i}-k\Lambda \theta_{m,i}\bigg]-\sum_{j=m+1}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}}. \end{align*} \normalsize We use this identity to establish the following: \small \begin{equation} \label{eq:idenitysum} \sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}}=\tilde{S}_{m,m}+\sum_{j=m+1}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}}=\sum_{i=0}^{m-1}\bigg[ \gamma_{m,i}-k\Lambda \theta_{m,i}\bigg] \end{equation} \normalsize Now we can calculate the expansion: \small \begin{align*} &\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \|u-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\|^2+\sum^{m-1}_{i=0} \frac{\theta_{m,i}\Lambda}{2}\left\|u-U_i\right\|^2\\ =&\frac{\|u\|^2}{2k} \sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} -\frac{1}{k}\langle u,\sum_{i=0}^{m-1} \sum_{j=m}^M \tilde{\gamma}_{j,i} \frac{\tilde{S}_{j,m}}{\tilde{S}_{j,j}} U_i \rangle+\frac{\Lambda}{2}\|u\|^2\sum^{m-1}_{i=0} \theta_{m,i} -\Lambda \sum^{m-1}_{i=0} \langle u,\theta_{m,i}U_i\rangle\\&+\text{terms that do not depend on $u$}\\ =&\frac{\|u\|^2}{2k}\sum^{m-1}_{i=0}\gamma_{m,i} - \frac{1}{k}\langle u,\sum^{m-1}_{i=0}\gamma_{m,i}U_i\rangle+\text{terms that do not depend on $u$.} \end{align*} \normalsize The last equality follows from \cref{eq:gammaind} and \cref{eq:idenitysum}. Since this expansion matches \cref{eq:firstexp} up to terms that do not depend on $u$, the proof is complete.
\end{proof} \begin{lemma} \label{lem:step} Let $\Lambda$ and the auxiliary quantities $\tilde{S}_{j,m}$, $\tilde{\gamma}_{m,i}$ be given as in \cref{claim:ms}. Additionally, let $\tilde{S}_{m,m}>0$ for $m=1,\ldots,M$. Then \small \begin{align*} &E_1(U_m)+\sum^{m-1}_{i=0} \theta_{m,i} [L_2(U_m,U_i)+\frac{\Lambda}{2}\left\|U_m-U_i\right\|^2]+\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_m-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2\\\leq & E_1(U_{m-1})+\sum^{m-2}_{i=0} \theta_{m-1,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\&+\frac{1}{2k}\sum_{j=m-1}^M \frac{\tilde{S}_{j,m-1}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m-1}} U_i\right\|^2 \end{align*} \normalsize \end{lemma} \begin{proof} By \cref{eq:ms} \& \cref{lem:equiv}, \small \[ U_m=\argmin_u E_1(u)+\sum^{m-1}_{i=0} \theta_{m,i} [L_2(u,U_i)+\frac{\Lambda}{2}\left\|u-U_i\right\|^2]+\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \|u-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\|^2. \] \normalsize Since $U_m$ is the minimizer of the above optimization problem, \small \begin{align} &E_1(U_m)+\sum^{m-1}_{i=0} \theta_{m,i} [L_2(U_m,U_i)+\frac{\Lambda}{2}\left\|U_m-U_i\right\|^2]\nonumber\\&+\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_m-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2 \nonumber\\ \label{eq:lem2step1} \begin{split} \leq&E_1(U_{m-1})+\theta_{m,m-1}E_2(U_{m-1})+\sum^{m-2}_{i=0} \theta_{m,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\&+\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2 \end{split} \end{align} \normalsize We give two inequalities to aid us in the proof.
First, using the definition of the auxiliary variables, we can state an identity that will simplify \cref{eq:lem2step1}. For $m>1$ and $j\geq m$ \small \begin{align} \label{eq:lem2step12} \begin{split} &\frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2=\frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}\bigg(1-\frac{\tilde{\gamma}_{j,m-1}}{\tilde{S}_{j,m}}\bigg)-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2\\ =&\frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}\bigg(\frac{\tilde{S}_{j,m-1}}{\tilde{S}_{j,m}}\bigg)-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2=\frac{\tilde{S}_{j,m-1}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m-1}} U_i\right\|^2. \end{split} \end{align} \normalsize Now since $\tilde{S}_{m-1,m-1}>0$, \small \begin{equation} \label{eq:lem2step3} \frac{\tilde{S}_{m-1,m-1}^2}{\tilde{S}_{m-1,m-1}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{m-1,i}}{\tilde{S}_{m-1,m-1}} U_i\right\|^2\geq 0. \end{equation} \normalsize Using \cref{eq:lem2step12} and \cref{eq:lem2step3} we have \small \begin{multline} \label{eq:lem2step2} \frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-1} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m}} U_i\right\|^2=\frac{1}{2k}\sum_{j=m}^M \frac{\tilde{S}_{j,m-1}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m-1}} U_i\right\|^2\\\leq\frac{1}{2k}\sum_{j=m-1}^M \frac{\tilde{S}_{j,m-1}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m-1}} U_i\right\|^2.
\end{multline} \normalsize Next, since $\sum_{i=0}^{m-1}\theta_{m,i}=1$ for all $m$, we have the equality \begin{equation} \label{eq:thetaeq} \theta_{m,m-1}=1-\sum^{m-2}_{i=0} \theta_{m,i}=\sum^{m-2}_{i=0} \theta_{m-1,i}-\sum^{m-2}_{i=0}\theta_{m,i} \end{equation} Using \cref{eq:twostage3} and \cref{eq:thetaeq}, we have our second inequality: \small \begin{align} \label{eq:lem2stepE} \begin{split} &\theta_{m,m-1}E_2(U_{m-1})+\sum^{m-2}_{i=0} \theta_{m,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\ =&\sum^{m-2}_{i=0} (\theta_{m-1,i}-\theta_{m,i})E_2(U_{m-1}) +\sum^{m-2}_{i=0} \theta_{m,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\ \leq&\sum^{m-2}_{i=0} (\theta_{m-1,i}-\theta_{m,i}) [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\&+\sum^{m-2}_{i=0} \theta_{m,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\ =&\sum^{m-2}_{i=0} \theta_{m-1,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2] \end{split} \end{align} \normalsize Using inequalities \cref{eq:lem2step2} and \cref{eq:lem2stepE}, we have that the right-hand side of \cref{eq:lem2step1} is less than or equal to \small \begin{multline*} E_1(U_{m-1})+\sum^{m-2}_{i=0} \theta_{m-1,i} [L_2(U_{m-1},U_i)+\frac{\Lambda}{2}\left\|U_{m-1}-U_i\right\|^2]\\+\frac{1}{2k}\sum_{j=m-1}^M \frac{\tilde{S}_{j,m-1}^2}{\tilde{S}_{j,j}} \left\|U_{m-1}-\sum_{i=0}^{m-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,m-1}} U_i\right\|^2 \end{multline*} \normalsize concluding the proof. \end{proof} \begin{proof}[Proof of \cref{claim:ms}] The main idea of the proof is to use \cref{lem:step} repeatedly to relate $E(u_{n+1})$ to $E(u_n)$.
First, by \cref{eq:twostage3} and our assumption that $\tilde{S}_{M,M}>0$, \small \begin{align*} E(u_{n+1})&=E_1(U_M)+E_2(U_M)\\ &\leq E_1(U_M)+\sum^{M-1}_{i=0} \theta_{M,i} [L_2(U_M,U_i)+\frac{\Lambda}{2}\left\|U_M-U_i\right\|^2]\\&+\frac{1}{2k} \frac{\tilde{S}_{M,M}^2}{\tilde{S}_{M,M}} \|U_M-\sum_{i=0}^{M-1} \frac{\tilde{\gamma}_{M,i}}{\tilde{S}_{M,M}} U_i\|^2. \end{align*} \normalsize By using \cref{lem:step} repeatedly, we have \small \begin{align*} &E_1(U_M)+\sum^{M-1}_{i=0} \theta_{M,i} [L_2(U_M,U_i)+\frac{\Lambda}{2}\left\|U_M-U_i\right\|^2]+\frac{1}{2k} \frac{\tilde{S}_{M,M}^2}{\tilde{S}_{M,M}} \|U_M-\sum_{i=0}^{M-1} \frac{\tilde{\gamma}_{M,i}}{\tilde{S}_{M,M}} U_i\|^2\\ \leq& E_1(U_{M-1})+\sum^{M-2}_{i=0} \theta_{M-1,i} [L_2(U_{M-1},U_i)+\frac{\Lambda}{2}\left\|U_{M-1}-U_i\right\|^2]\\&+\frac{1}{2k}\sum_{j=M-1}^M \frac{\tilde{S}_{j,M-1}^2}{\tilde{S}_{j,j}} \left\|U_{M-1}-\sum_{i=0}^{M-2} \frac{\tilde{\gamma}_{j,i}}{\tilde{S}_{j,M-1}} U_i\right\|^2\\ & \vdots \\\leq& E_1(U_1)+ L_2(U_1,U_0)+\frac{\Lambda}{2}\left\|U_1-U_0\right\|^2+\frac{1}{2k}\sum_{j=1}^M \frac{\tilde{S}_{j,1}^2}{\tilde{S}_{j,j}} \|U_1- \frac{\tilde{\gamma}_{j,0}}{\tilde{S}_{j,1}} U_0\|^2. \end{align*} \normalsize By \cref{eq:ms} and \cref{lem:equiv}, \small \[U_1=\argmin_u E_1(u)+L_2(u,U_0)+\frac{\Lambda}{2}\left\|u-U_0\right\|^2+\frac{1}{2k}\sum_{j=1}^M \frac{\tilde{S}_{j,1}^2}{\tilde{S}_{j,j}} \|u- \frac{\tilde{\gamma}_{j,0}}{\tilde{S}_{j,1}} U_0\|^2\] \normalsize so \small \begin{align*} &E_1(U_1)+L_2(U_1,U_0)+\frac{\Lambda}{2}\left\|U_1-U_0\right\|^2+\frac{1}{2k}\sum_{j=1}^M \frac{\tilde{S}_{j,1}^2}{\tilde{S}_{j,j}} \|U_1- \frac{\tilde{\gamma}_{j,0}}{\tilde{S}_{j,1}} U_0\|^2\\ \leq& E_1(U_0)+E_2(U_0)+\frac{\Lambda}{2}\left\|U_0-U_0\right\|^2+\frac{1}{2k}\sum_{j=1}^M \frac{\tilde{S}_{j,1}^2}{\tilde{S}_{j,j}} \|U_0-U_0\|^2\\=&E(u_n) \end{align*} \normalsize completing the proof of the theorem.
\end{proof} \begin{remark} In the above proof, we assume that $E_2(u)$ is twice differentiable. This assumption can be dropped if we replace $L_2(u,p)$ with another approximation $A_2(u,p)$ that has the properties $A_2(u,u)=E_2(u)$ and, for some choice of $\Lambda$, $E_2(u) \leq A_2(u,p)+\frac{\Lambda}{2}\left\|u-p\right\|^2$ for all $u$ and $p$. \end{remark} \section{Examples of the New Schemes for Gradient Flows} \label{sec:ex} In this section, we give examples of {\it high order} semi-implicit schemes for gradient flows, for any desired choice of implicit and explicit terms $E_1$ and $E_2$, that are energy stable under the conditions of \cref{claim:ms}. First, we give the conditions on $\gamma_{m,i}$ and $\theta_{m,i}$ in scheme \cref{eq:ms} to ensure high order consistency with the abstract evolution law \cref{eq:de}. Recall that $U_0=u_n$. From \cref{eq:ms}, each stage $U_m$ satisfies the Euler-Lagrange equation: \begin{equation} \label{eq:ms2} \bigg[\sum^{m-1}_{i=0}\gamma_{m,i}\bigg]U_{m}+k\nabla_HE_1(U_m)=-\sum^{m-1}_{i=0} k\theta_{m,i}\nabla_HE_2(U_i)+\sum^{m-1}_{i=0}\gamma_{m,i}U_i. \end{equation} \cref{eq:ms2} is equivalent to the form more often seen for ARK IMEX methods: \small \begin{equation} \label{eq:rk} U_m=U_0-k\sum^{m}_{i=1} \alpha_{m,i} \nabla_H E_1(U_i)-k\sum^{m-1}_{i=1} \tilde{\alpha}_{m,i} \nabla_H E_2(U_i) \end{equation} \normalsize where $\alpha_{m,i}$ and $\tilde{\alpha}_{m,i}$ depend on $\gamma_{m,i}$ and $\theta_{m,i}$. The consistency equations for ARK IMEX methods have been previously worked out \cite{kennedy2003additive,Pareschi2005,shin2017unconditionally,zharovsky2015class}.
As such, we will state without proof the conditions required to achieve various orders of accuracy in terms of $\gamma$ and $\theta$: \iffalse We begin with the exact solution starting from $u(t_0)$: \small \[ \begin{cases} u_t=-\nabla E(u) & t>t_0 \\ u(t_0)=U_0 \end{cases} \] \normalsize the Taylor expansion of the $u(k+t_0)$ around $t_0$ is \small \begin{align} \begin{split} \label{eq:true} u(k+t_0)=&u(t_0)+ku_t(t_0)+\frac{1}{2}k^2u_{tt}(t_0)+\frac{1}{6}k^3u_{ttt}(t_0)+\text{h.o.t.}\\=&U_0-k DE(U_0)+\frac{1}{2}k^2D^2E(U_0)DE(U_0)\\&-\frac{1}{6}k^3\big[D^2E(U_0)\left(D^2E(U_0)\left(DE(U_0)\right)\right)+D^3E(U_0)\big(DE(U_0),DE(U_0)\big)\big]+\text{h.o.t.}\\ =&U_0-k DE(U_0)+\frac{1}{2}k^2D^2E_1(U_0)DE(U_0)+\frac{1}{2}k^2D^2E_2(U_0)DE(U_0)\\&-\frac{1}{6}k^3\big[D^2E_1(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)+D^2E_1(U_0)\left(D^2E_2(U_0)\left(DE(U_0)\right)\right)\\&+D^2E_2(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)+D^2E_2(U_0)\left(D^2E_2(U_0)\left(DE(U_0)\right)\right)\\&+D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big)+D^3E_2(U_0)\big(DE(U_0),DE(U_0)\big)\big]+\text{h.o.t.} \end{split} \end{align} \normalsize \fi \begin{claim} \label{claim:cons} Let $U_i$ be given in \cref{eq:ms}. 
The Taylor expansion of $U_i$ at each stage has the form: \small \begin{align} \label{eq:indte} \begin{split} U_i&=U_0-\beta_{1,i}k DE(U_0)+k^2\big[\beta_{2,i}D^2E_1(U_0)DE(U_0)+\beta_{3,i}D^2E_2(U_0)DE(U_0)\big]\\&-k^3\big[\beta_{4,i}D^2E_1(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)+\beta_{5,i}D^2E_1(U_0)\left(D^2E_2(U_0)\left(DE(U_0)\right)\right)\\&+\beta_{6,i}D^2E_2(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)+\beta_{7,i}D^2E_2(U_0)\left(D^2E_2(U_0)\left(DE(U_0)\right)\right)\\&+\beta_{8,i}D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big)+\beta_{9,i}D^3E_2(U_0)\big(DE(U_0),DE(U_0)\big)\big]+\text{h.o.t.} \end{split} \end{align} \normalsize where for $l\in\{1,2,3,\ldots\}$, $D^l E(u) : H^l \to \mathbb{R}$ denotes the multilinear form given by \small \[ D^l E(u)\big(v_1,\ldots,v_l\big) = \left. \frac{\partial^l}{\partial s_1 \cdots \partial s_l} E(u+s_1 v_1 + s_2 v_2 + \cdots + s_l v_l) \right|_{s_1 = s_2 = \cdots = s_l = 0}\] \normalsize so that the linear functional $D^lE(u)\big(v_1,v_2,\ldots,v_{l-1},\cdot \big) : H \to \mathbb{R}$ may be identified with an element of $H$, and so on.
The coefficients of \cref{eq:indte} obey the following recursive relations: \begin{align} \begin{split} \label{eq:rec} &\beta_{1,0}=\beta_{2,0}=\ldots=\beta_{9,0}=0\\ &\beta_{1,m}=\frac{1}{S_m}\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i} \bigg]\\ &\beta_{2,m}=\frac{1}{S_m}\bigg[\beta_{1,m}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{2,i} \bigg]\\ &\beta_{3,m}=\frac{1}{S_m}\bigg[\sum_{i=0}^{m-1} \theta_{m,i} \beta_{1,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{3,i} \bigg]\\ &\beta_{4,m}=\frac{1}{S_m}\bigg[\beta_{2,m}+\sum^{m-1}_{i=1}\gamma_{m,i} \beta_{4,i}\bigg] \\ &\beta_{5,m}=\frac{1}{S_m}\bigg[\beta_{3,m}+\sum^{m-1}_{i=1}\gamma_{m,i} \beta_{5,i}\bigg] \\ &\beta_{6,m}=\frac{1}{S_m}\bigg[\sum_{i=0}^{m-1} \theta_{m,i} \beta_{2,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{6,i} \bigg]\\ &\beta_{7,m}=\frac{1}{S_m}\bigg[\sum_{i=0}^{m-1} \theta_{m,i} \beta_{3,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{7,i} \bigg]\\ &\beta_{8,m}=\frac{1}{S_m}\bigg[\frac{\beta_{1,m}^2}{2}+\sum^{m-1}_{i=1}\gamma_{m,i} \beta_{8,i}\bigg] \\ &\beta_{9,m}=\frac{1}{S_m}\bigg[\frac{1}{2}\sum_{i=0}^{m-1} \theta_{m,i} \beta_{1,i}^2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{9,i} \bigg] \end{split} \end{align} with $S_m=\sum^{m-1}_{i=0}\gamma_{m,i}$. 
Furthermore, the following conditions for $u_{n+1}=U_M$ in scheme \cref{eq:ms} are necessary and sufficient for various orders of accuracy: \begin{alignat}{5} \label{eq:cons} &\text{\underline{First Order:}}&\quad & \text{\underline{Second Order:}} &\quad &\text{\underline{Third Order:}} \nonumber \\ &\beta_{1,M}=1& &\beta_{1,M}=1& &\beta_{1,M}=1 \nonumber\\ & & &\beta_{2,M}=1/2 & &\beta_{2,M}=1/2\\ & & &\beta_{3,M}=1/2& &\beta_{3,M}=1/2 \nonumber\\ & & & & &\beta_{4,M}=\beta_{5,M}=\ldots=\beta_{9,M}=1/6 \nonumber \end{alignat} \end{claim} \iffalse \begin{proof} We will now show by induction that the aforementioned consistency formulas, \cref{eq:indte} and \cref{eq:rec}, hold.\\ \textbf{Stage one:} \begin{equation} \label{eq:one} \gamma_{1,0}U_{1}+kDE_1(U_1)=\gamma_{1,0}U_0-kDE_2(U_0) \end{equation} We first will Taylor expand $DE_1(U_1)$ around $U_0$ in \cref{eq:one}: \begin{multline*} \gamma_{1,0}U_{1}+kDE_1(U_0)+kD^2E_1(U_0)(U_1-U_0)\\+\frac{1}{2}kD^3E_1(U_0)(U_1-U_0,U_1-U_0)+\text{h.o.t.}=\gamma_{1,0}U_0-kDE_2(U_0). \end{multline*} Note that combine $DE_1(U_0)+DE_2(U_0)=DE(U_0)$. Now we plug in an ansatz for the expansion on $U_1$ around $U_0$, $U_{1}=U_0+A_1k+A_2k^2+A_3k^3+\text{h.o.t.}$, and solve for $A_1$, $A_2$ and $A_3$: \begin{multline*} \gamma_{1,0}(A_1k+A_2k^2+A_3k^3)+kDE(U_0)+k^2D^2E_1(U_0)\big(A_1\big)\\+k^3D^2E_1(U_0)\big(A_2\big)+\frac{1}{2}k^3D^3E_1(U_0)\big(A_1,A_1\big)+\text{h.o.t.}=0. \end{multline*} Matching terms of the same order we get \begin{align*} A_1&=-\frac{1}{\gamma_{1,0}}DE(U_0)\\ A_2&=-\frac{1}{\gamma_{1,0}}D^2E_1(U_0)\big(A_1\big)=\frac{1}{\gamma_{1,0}^2}D^2E_1(U_0)\big(DE(U_0)\big)\\ A_3&=-\frac{1}{\gamma_{1,0}}D^2E_1(U_0)\big(A_2\big)-\frac{1}{2}\frac{1}{\gamma_{1,0}}D^3E_1(U_0)\big(A_1,A_1\big)\\ &=-\frac{1}{\gamma_{1,0}^3}D^2E_1(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)-\frac{1}{2}\frac{1}{\gamma_{1,0}^3}D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big). 
\end{align*} Noting that $S_1=\gamma_{1,0}$ completes stage one.\\ \textbf{Stage m:} \begin{equation} \label{eq:method} \bigg[\sum^{m-1}_{i=0}\gamma_{m,i}\bigg]U_{m}+kDE_1(U_m)=\sum^{m-1}_{i=0}\gamma_{m,i}U_i-k\sum^{m-1}_{i=0}\theta_{m,i}DE_2(U_i). \end{equation} and assume \cref{eq:indte} and \cref{eq:rec} up to $m-1$. First we are going to solve for $U_m-U_0$ in \cref{eq:method}: \small \begin{align} \label{eq:stepone} U_{m}-U_0=&-\frac{k}{S_m}DE_1(U_m)-\frac{k}{S_m}\sum^{m-1}_{i=0}\theta_{m,i}DE_2(U_i)+\frac{1}{S_m}\sum^{m-1}_{i=0}\gamma_{m,i}U_i-U_0. \end{align} \normalsize Now Taylor expand $DE_2(U_i)$ for $i=1,\ldots,m-1$, and $DE_1(U_m)$ around $U_0$ in \cref{eq:stepone}: \small \begin{align*} \\U_{m}-&U_0=-\frac{k}{S_m}\bigg[DE_1(U_0)+D^2E_1(U_0)(U_m-U_0)+\frac{1}{2}D^3E_1(U_0)\big(U_m-U_0,U_m-U_0\big)\bigg]\\&-\frac{k}{S_m}\sum_{i=0}^{m-1}\theta_{m,i}\bigg[DE_2(U_0)+D^2E_2(U_0)(U_i-U_0)+\frac{1}{2}D^3E_2(U_0)\big(U_i-U_0,U_i-U_0\big)\bigg]\\&+\frac{1}{S_m}\sum^{m-1}_{i=0}\gamma_{m,i}U_i-U_0+\text{h.o.t.} \end{align*} \normalsize Simplify $DE_1(U_0)+\sum_{i=0}^{m-1}\theta_{m,i}DE_2(U_0)=DE(U_0)$. 
Then plug in the ansatz $U_0+kA_1+k^2A_2+k^3A_3+\text{h.o.t.}$ for $U_{m}$ and equation \cref{eq:indte} for $U_i$, and retaining up to terms of third order, we have that \footnotesize \begin{align} \label{eq:withansatz} \begin{split} kA_1&+k^2A_2+k^3A_3=\\&-\frac{k}{S_m}\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i}\bigg]DE(U_0)+\frac{k^2}{S_m}\bigg[D^2E_1(U_0)\big(-A_1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{2,i}DE(U_0)\big)\\&+\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{3,i}\bigg)D^2E_2(U_0)DE(U_0) \bigg]\\&-\frac{k^3}{S_m}\bigg[D^2E_1(U_0)\big(A_2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{4,i}D^2E_1(U_0)DE(U_0)+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{5,i}D^2E_2(U_0)DE(U_0)\big)\\ &+\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{2,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{6,i}\bigg)D^2E_2(U_0)\big(D^2E_1(U_0)\big(DE(U_0)\big)\big)\\&+\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{3,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{7,i}\bigg)D^2E_2(U_0)\big(D^2E_2(U_0)\big(DE(U_0)\big)\big) \\&+\frac{1}{2}D^3E_1(U_0)\big(A_1,A_1\big)+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{8,i}D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big)\\&+\bigg(\frac{1}{2}\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}^2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{9,i}\bigg)D^3E_2(U_0)\big(DE(U_0),DE(U_0)\big)\bigg]+\text{h.o.t.} \end{split} \end{align} \normalsize Solving for $A_1$, $A_2$, $A_3$ by matching terms of the same order in \cref{eq:withansatz}, we arrive at: \footnotesize \begin{align*} \label{eq:solveansatz} \begin{split} A_1=&-\frac{1}{S_m}\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i}\bigg]DE(U_0)\\ A_2=&\frac{1}{S_m}D^2E_1(U_0)\big(-A_1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{2,i}DE(U_0)\big)\\&+\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{3,i}\bigg)D^2E_2(U_0)DE(U_0)\\=&\bigg(\frac{1}{S_m^2} 
\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i}\bigg]+\frac{1}{S_m}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{2,i}\bigg)D^2E_1(U_0)\big(DE(U_0)\big)\\&+\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{3,i}\bigg)D^2E_2(U_0)DE(U_0)\\ A_3=&-\frac{1}{S_m}D^2E(U_0)\big(A_2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{4,i}D^2E(U_0)DE(U_0)+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{5,i}D^2E_2(U_0)DE(U_0)\big)\\ &-\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{2,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{6,i}\bigg)D^2E_2(U_0)\big(D^2E_1(U_0)\big(DE(U_0)\big)\big)\\&-\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{3,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{7,i}\bigg)D^2E_2(U_0)\big(D^2E_2(U_0)\big(DE(U_0)\big)\big)\\ &-\frac{1}{2}\frac{1}{S_m}D^3E_1(U_0)\big(A_1,A_1\big)-\frac{1}{S_m}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{8,i}D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big)\\&-\bigg(\frac{1}{2}\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}^2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{9,i}\bigg)D^3E_2(U_0)\big(DE(U_0),DE(U_0)\big) \\=&-\bigg(\frac{1}{S_m^3}\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i} \bigg]+\frac{1}{S_m^2}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{2,i}\\&+\frac{1}{S_m}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{4,i}\bigg)D^2E_1(U_0)\left(D^2E_1(U_0)\left(DE(U_0)\right)\right)\\&-\bigg(\frac{1}{S_m^2}\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}+\frac{1}{S_m^2}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{3,i}\\&+\frac{1}{S_m}\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{5,i}\bigg)D^2E_1(U_0)\left(D^2E_2(U_0)\left(DE(U_0)\right)\right)\\&-\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{2,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{6,i}\bigg)D^2E_2(U_0)\big(D^2E_1(U_0)\big(DE(U_0)\big)\big)\\&-\frac{1}{S_m}\bigg(\sum^{m-1}_{i=1}\theta_{m,i}\beta_{3,i}+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{7,i}\bigg)D^2E_2(U_0)\big(D^2E_2(U_0)\big(DE(U_0)\big)\big)\\&-\bigg(\frac{1}{2}\frac{1}{S_m^3}\bigg[1+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{1,i} \bigg]^2+\frac{1}{S_m}\sum^{m-1}_{i=1}\gamma_{m,i} 
\beta_{8,i}\bigg)D^3E_1(U_0)\big(DE(U_0),DE(U_0)\big)\\&-\frac{1}{S_m}\bigg(\frac{1}{2}\sum^{m-1}_{i=1}\theta_{m,i}\beta_{1,i}^2+\sum^{m-1}_{i=1}\gamma_{m,i}\beta_{9,i}\bigg)D^3E_2(U_0)\big(DE(U_0),DE(U_0)\big) \end{split} \end{align*} \normalsize completing the induction step. \\ Matching the consistency equations, \cref{eq:indte} and \cref{eq:rec}, at $u_{n+1}=U_M$ with the one step error \cref{eq:true} gives the conditions on $u_{n+1}$ for various orders of accuracy \cref{eq:cons}, completing the proof. \end{proof} \fi Now we give a second order and a third order example of method \cref{eq:ms}. The examples we give are by no means unique. We begin with a five step method that is second order accurate: \iffalse \begin{align} \label{eq:2ndordergamma} \begin{split} &\theta=\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ \frac{1}{115} & \frac{114}{115} & 0 & 0 & 0 \\ \frac{1}{115} & \frac{113}{114} & \frac{1}{13110} & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \frac{3545}{3546} & \frac{1}{3546} \\ \end{array} \right)\\ &\gamma \approx \left( \begin{array}{ccccc} \frac{610}{69} & 0 & 0 & 0 & 0 \\ -\frac{160}{173} & \frac{595}{111} & 0 & 0 & 0 \\ -\frac{311}{70} & \frac{441}{73} & \frac{189}{199} & 0 & 0 \\ -\frac{217}{66} & \frac{112}{19} & -\frac{27}{77} & \frac{5}{29} & 0 \\ -\frac{74}{19} & -\frac{57}{170} & 4.96359 & -1.72242 & 7.68372 \\ \end{array} \right) \end{split} \end{align} \fi \begin{align} \label{eq:2ndordergamma} \begin{split} &\theta \approx \left( \begin{array}{ccccc} 1. & 0 & 0 & 0 & 0 \\ 0.009 & 0.991 & 0 & 0 & 0 \\ 0.009 & 0.991 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1. & 0 \\ 0 & 0 & 0 & 1. & 0 \\ \end{array} \right)\\ &\gamma \approx \left(\begin{array}{ccccc} 8.841 & 0 & 0 & 0 & 0\\ -0.925 & 5.360 & 0 & 0 & 0\\ -4.443 & 6.041 & 0.950 & 0 & 0\\ -3.288 & 5.895 & -0.351 & 0.172 & 0\\ -3.895 & -0.335 & 4.964 & -1.722 & 7.684 \end{array}\right) \end{split} \end{align} which is stable for $k\Lambda \leq 3/872$.
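The recursion \cref{eq:rec} translates directly into code. The following sketch (our own illustration; the helper name \texttt{order\_coefficients} is ours, and the coefficient rows are transcribed from the rounded display in \cref{eq:2ndordergamma}, so the order conditions hold only up to that rounding) evaluates the $\beta$ coefficients and checks the second order conditions of \cref{claim:cons}:

```python
# Row m-1 holds theta_{m,0..m-1} and gamma_{m,0..m-1} of eq:2ndordergamma,
# transcribed from the rounded display above (machine precision in the repo).
THETA = [[1.0],
         [0.009, 0.991],
         [0.009, 0.991, 0.0],
         [0.0, 0.0, 0.0, 1.0],
         [0.0, 0.0, 0.0, 1.0, 0.0]]
GAMMA = [[8.841],
         [-0.925, 5.360],
         [-4.443, 6.041, 0.950],
         [-3.288, 5.895, -0.351, 0.172],
         [-3.895, -0.335, 4.964, -1.722, 7.684]]

def order_coefficients(gamma, theta):
    """Evaluate the recursion eq:rec: beta[k][m] for k = 1..9, beta[k][0] = 0."""
    M = len(gamma)
    beta = [[0.0] * (M + 1) for _ in range(10)]
    for m in range(1, M + 1):
        g, t = gamma[m - 1], theta[m - 1]
        S = sum(g)  # S_m = sum_{i=0}^{m-1} gamma_{m,i}
        gs = lambda k: sum(g[i] * beta[k][i] for i in range(1, m))
        ts = lambda k: sum(t[i] * beta[k][i] for i in range(m))
        beta[1][m] = (1 + gs(1)) / S
        beta[2][m] = (beta[1][m] + gs(2)) / S
        beta[3][m] = (ts(1) + gs(3)) / S
        beta[4][m] = (beta[2][m] + gs(4)) / S
        beta[5][m] = (beta[3][m] + gs(5)) / S
        beta[6][m] = (ts(2) + gs(6)) / S
        beta[7][m] = (ts(3) + gs(7)) / S
        beta[8][m] = (beta[1][m] ** 2 / 2 + gs(8)) / S
        beta[9][m] = (sum(t[i] * beta[1][i] ** 2 for i in range(m)) / 2 + gs(9)) / S
    return beta

beta = order_coefficients(GAMMA, THETA)
M = len(GAMMA)
# Second order conditions of claim:cons, up to coefficient rounding:
print(beta[1][M], beta[2][M], beta[3][M])  # approximately 1, 1/2, 1/2
```

Running the same check with the machine-precision coefficients from the repository should drive the residuals down to round-off.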
Next we have a thirteen step method that is third order accurate: \tiny \begin{align} \label{eq:3rdordergamma} \begin{split} &\theta\approx \left( \begin{array}{ccccccccccccc} 1. & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.049 & 0.951 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.024 & 0.075 & 0.901 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.017 & 0.042 & 0.113 & 0.829 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.012 & 0.029 & 0.071 & 0.386 & 0.501 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.01 & 0.023 & 0.06 & 0.366 & 0.457 & 0.085 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.007 & 0.018 & 0.05 & 0.351 & 0.437 & 0.06 & 0.076 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.003 & 0.005 & 0.006 & 0.008 & 0.009 & 0.011 & 0.028 & 0.929 & 0 & 0 & 0 & 0 & 0 \\ 0.002 & 0.002 & 0.002 & 0.002 & 0.003 & 0.004 & 0.009 & 0.029 & 0.948 & 0 & 0 & 0 & 0 \\ 0 & 0.001 & 0.001 & 0.001 & 0.001 & 0.002 & 0.004 & 0.007 & 0.011 & 0.971 & 0 & 0 & 0 \\ 0 & 0 & 0.001 & 0.001 & 0.001 & 0.001 & 0.003 & 0.005 & 0.008 & 0.912 & 0.069 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.001 & 0.002 & 0.003 & 0.005 & 0.107 & 0.025 & 0.857 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.001 & 0.001 & 0.002 & 0.013 & 0.007 & 0.018 & 0.958 \\ \end{array} \right)\\ &\gamma \approx \left( \begin{array}{ccccccccccccc} 11. & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2.1 & 15.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1.4 & 1.6 & 17. & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.2 & 1.6 & -2.4 & 18.1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.3 & -8.5 & 3. & 9.6 & 7.8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1.4 & -5.9 & -0.1 & 2. & 8. & 4.1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -4. 
& -0.5 & -0.4 & -1.8 & 5.1 & 6.8 & 0.9 & 0 & 0 & 0 & 0 & 0 & 0 \\ -9.2 & 4.8 & 2.7 & -3.2 & 2.5 & 6.2 & 2.5 & 4.6 & 0 & 0 & 0 & 0 & 0 \\ -1.7 & -3.6 & -0.1 & 1.3 & 5.7 & 3.4 & -0.8 & -0.8 & 0.4 & 0 & 0 & 0 & 0 \\ -2.7 & -3.5 & 0.6 & 1.4 & 6.1 & 3.5 & -0.7 & -0.2 & -0.4 & 0.5 & 0 & 0 & 0 \\ 5.9 & -4.8 & -5.1 & -3.1 & 3.4 & 6.6 & -0.7 & -5.2 & 4.9 & -0.8 & 8.2 & 0 & 0 \\ 7.1 & 0.9 & -3.1 & -2.7 & -5.8 & -1.9 & 0.6 & -3.4 & 4.3 & -1.3 & 9.2 & 9.1 & 0 \\ 3.8 & 1.9 & 2.7 & 2.1 & -7.5 & -10.6 & -1.2 & 2. & 0.7 & -0.2 & -0.2 & 9.5 & 12.8 \\ \end{array} \right) \end{split} \end{align} \normalsize and is stable if $k\Lambda \leq 18/28567$. The coefficients to machine precision as well as code to verify \cref{claim:ms} and \cref{claim:cons} can be found at \url{https://github.com/AZaitzeff/SIgradflow}. In the following section, we consider methods for \cref{eq:Lde}, when the inner product changes with the solution. \section{Schemes for Solving Gradient Flows with Solution Dependent Inner Product} \label{sec:SDIP} Now we move on to the problem of simulating flow \cref{eq:Lde}, \begin{equation*} u'=-\mathcal{L}(u)\nabla_H E(u). \end{equation*} We consider the case where $\mathcal{L}(u)$ is strictly positive definite. Our approach will be as follows: \begin{enumerate} \item Generate a $u_*$ from $u_n$. \item Construct $\mathcal{L}(u_*)$. \item Use the algorithm \cref{eq:ms} with norm $\left\|\cdot\right\|^2_{\mathcal{L}^{-1}(u_*)}= \langle \cdot ,\mathcal{L}^{-1}(u_*) \cdot \rangle$ to generate $u_{n+1}$. \end{enumerate} One advantage to constructing $\mathcal{L}(u_*)$ and then using it in \cref{eq:ms} is that \cref{claim:ms} immediately gives conditional energy stability for coefficients such as \cref{eq:2ndordergamma} or \cref{eq:3rdordergamma}. Thus, we only need to consider what choice of $u_*$ will give our algorithm the desired level of accuracy. 
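The three steps above can be sketched generically (the helper names are hypothetical; \texttt{multistage} stands in for scheme \cref{eq:ms} run with the frozen mobility):

```python
def solution_dependent_step(u_n, k, make_u_star, mobility, multistage):
    """One outer step for u' = -L(u) grad E(u):
    1. generate u_* from u_n, 2. construct L(u_*),
    3. run the multistage scheme (eq:ms) in the norm induced by L^{-1}(u_*)."""
    u_star = make_u_star(u_n, k)
    L_star = mobility(u_star)
    return multistage(u_n, k, L_star)

# Toy scalar check: L(u) = 1, E = E_1 = u^2/2, E_2 = 0, u_* = u_n (a first
# order choice), and a one-stage semi-implicit Euler inner solver.
euler = lambda u, k, L: u / (1.0 + k * L)
print(solution_dependent_step(1.0, 0.5, lambda u, k: u, lambda u: 1.0, euler))  # u0/(1+k) = 2/3
```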
Now at every step we are solving \small \begin{equation} \label{eq:Lms2} \bigg[\sum^{m-1}_{i=0}\gamma_{m,i}\bigg]U_{m}+k\mathcal{L}(u_*)\nabla_HE_1(U_m)=-k\mathcal{L}(u_*)\sum^{m-1}_{i=0} \theta_{m,i}\nabla_HE_2(U_i)+\sum^{m-1}_{i=0}\gamma_{m,i}U_i. \end{equation} \normalsize \iffalse Consider if we use a multi-stage algorithm that gave us the desired level of accuracy, such as the examples on \cref{eq:Lms2} \small \begin{align} \begin{split} \label{eq:Lfinal} u_{n+1}&=u_n-k\mathcal{L}(u_*) DE(u_n)+\frac{1}{2}k^2\mathcal{L}(u_*)D^2E(u_n)(\mathcal{L}(u_*)DE(u_n))\\&-\frac{1}{6}k^3 \mathcal{L}(u_*)D^2E(u_n)\left(\mathcal{L}(u_*)D^2E(u_n)\left(\mathcal{L}(u_*)DE(u_n)\right)\right)\\&-\frac{1}{6}k^3\mathcal{L}(u_*)D^3E(u_n)\big(\mathcal{L}(u_*)DE(u_n),\mathcal{L}(u_*)DE(u_n)\big)+O(k^4) \end{split} \end{align} \normalsize \fi We will set up the consistency equations for \cref{eq:Lde}. Let $u_n=u(t_0)$. For convenience, denote $\mathcal{L}(u_n)$ as $\mathcal{L}_n$ and $E(u_n)$ as $E_n$. We begin with the exact solution starting from $u(t_0)$: \small \[ \begin{cases} u_t=-\mathcal{L}(u)\nabla E(u) & t>t_0 \\ u(t_0)=U_0 \end{cases} \] \normalsize By Taylor expanding around $t_0$ we find \begin{equation} \label{eq:truewmu} u(k+t_0)=u(t_0)+ku_t(t_0)+\frac{1}{2}k^2u_{tt}(t_0)+\frac{1}{6}k^3u_{ttt}(t_0) \end{equation} where the higher derivatives in time are found using \cref{eq:Lde}: \small \begin{align*} u_t(t_0)=&- \mathcal{L}_nDE_n\\ u_{tt}(t_0)=& D\mathcal{L}_n(\mathcal{L}_nDE_n) DE_n+\mathcal{L}_n D^2E_n(\mathcal{L}_nDE_n)\\ u_{ttt}(t_0)=&- D\mathcal{L}_n(D\mathcal{L}_n(\mathcal{L}_nDE_n)DE_n) D E_n - D^2\mathcal{L}_n(\mathcal{L}_nDE_n,\mathcal{L}_nDE_n) DE_n\\ &- D\mathcal{L}_n(\mathcal{L}_n(D^2E_n(\mathcal{L}_nDE_n))) D E_n-2D\mathcal{L}_n(\mathcal{L}_nDE_n)D^2E_n (\mathcal{L}_nDE_n)\\&-\mathcal{L}_nD^2E_n(D\mathcal{L}_n(\mathcal{L}_nDE_n)) DE_n-\mathcal{L}_nD^2E_n(\mathcal{L}_nD^2E_n(\mathcal{L}_nDE_n))\\&-\mathcal{L}_n 
D^3E_n\big(\mathcal{L}_nDE_n,\mathcal{L}_nDE_n\big) \end{align*} \normalsize where for $l\in\{1,2,3,\ldots\}$, $D^l L(u) : H^l \to H$ denotes the multilinear form given by \small \[ D^l \mathcal{L}(u)\big(v_1,\ldots,v_l\big) = \left. \frac{\partial^l}{\partial s_1 \cdots \partial s_l} \mathcal{L}(u+s_1 v_1 + s_2 v_2 + \cdots + s_l v_l) \right|_{s_1 = s_2 = \cdots = s_l = 0}\] \normalsize so that $D^l\mathcal{L}(u)\big(v_1,v_2,\ldots,v_{l}\big)$ is a linear operator from $H$ to $H$. \iffalse Suppose that we generate $u_*$ in such a way that has Taylor expansions given as. \begin{multline} \label{ustar} u_*=u_n-k\beta^*_1\mathcal{L}_nD E(u_n)+k^2\beta^*_2 D \mathcal{L}_n (\mathcal{L}_n DE(u_n))DE(u_n)\\+k^2\beta^*_3 \mathcal{L}_n D^2E(u_n)(\mathcal{L}_nDE(u_n))+O(k^3) \end{multline} expanding $\mathcal{L}_n$ around $u_n$ in \cref{eq:Lfinal} we have \small \begin{align} \label{eq:fullexpansion} \begin{split} u_{n+1}=&u_n-k\mathcal{L}_n DE(u_n)+k^2 \beta^*_1D\mathcal{L}_n(\mathcal{L}_n DE(u_n))\\&+\frac{k^2}{2}\mathcal{L}_nD^2E(u_n)(\mathcal{L}_nDE(u_n))\\&-k^3 \beta^*_2 \mathcal{L}_n(D\mathcal{L}_nDE(u_n))(D\mathcal{L}_nDE(u_n))DE(u_n)\\&-k^3(\sum_i \alpha_i \beta^*_{3,i})\mathcal{L}_n\mathcal{L}_n(D\mathcal{L}_nD^2E(u_n)\big(DE(u_n)\big))DE(u_n)\\&-\frac{k^3}{2}(\sum_i \alpha_i (\beta^*_{1,i})^2)\mathcal{L}_n\mathcal{L}_nD^2\mathcal{L}_n\big(DE(u_n),DE(u_n)\big)DE(u_n)\\&-k^3(\sum_i \alpha_i \beta^*_{1,i})\mathcal{L}_n\mathcal{L}_n(D\mathcal{L}_nDE(u_n))D^2E(u_n)DE(u_n)\\&-\frac{k^3}{6}\mathcal{L}_n\mathcal{L}_n\mathcal{L}_nD^2E(u_n)\left(D^2E(u_n)\left(DE(u_n)\right)\right)\\&-\frac{k^3}{6}\mathcal{L}_n\mathcal{L}_n\mathcal{L}_nD^3E(u_n)\big(DE(u_n),DE(u_n)\big)+\text{h.o.t.} \end{split} \end{align} \normalsize Then we have the following conditions: \small \begin{alignat}{2} \label{eq:Lcons} & \text{\underline{Second Order:}} &\quad &\text{\underline{Third Order:}} \nonumber \\ &\sum_i \alpha_i \beta^*_{1,i}=1/2& &\sum_i \alpha_i \beta^*_{1,i}=1/2 \nonumber\\ & & 
&\sum_i \alpha_i \beta^*_{2,i}=1/6\\ & & &\sum_i \alpha_i \beta^*_{3,i}=1/6 \nonumber\\ & & &\sum_i \alpha_i (\beta^*_{1,i})^2=1/3 \nonumber \end{alignat} \normalsize \fi In the next two subsections, we provide second and third order examples and accompanying consistency calculations. Both of these examples also have the property that $E(u_*) \leq E(u_n)$. \subsection{Second Order Method} \begin{algorithm}[H] \caption{A second order method for solving gradient flows with solution dependent inner product} \label{alg:m2ndorder} Fix a time step size $k>0$. Set $u_n = u_0$. To obtain $u_{n+1}$ from $u_n$, carry out the following steps: \begin{enumerate} \item Find $u_*$ by solving $u_*+\frac{1}{2}k\mathcal{L}_n\nabla E_1(u_*)=u_n-\frac{1}{2}k\mathcal{L}_n\nabla E_2(u_n)$ \item Find $u_{n+1}$ using \cref{eq:Lms2} with coefficients \cref{eq:2ndordergamma} and $u_*$ in $\mathcal{L}(u_*)$ given by Step 1 of this algorithm. \end{enumerate} \end{algorithm} Our second order algorithm is laid out in \cref{alg:m2ndorder}. Now we will prove that it is indeed second order. First, the expansion of $u_*$ is \small \begin{align} \begin{split} \label{eq:1s2o} u_*&=u_n-\frac{1}{2}k\mathcal{L}_n DE_n+O(k^2) \end{split} \end{align} \normalsize We use \cref{eq:rec} to get an expansion of $u_{n+1}$: \small \begin{equation} \label{eq:Lsecondorder} u_{n+1}=u_n-k\mathcal{L}(u_*) DE_n+\frac{1}{2}k^2\mathcal{L}(u_*)D^2E_n(\mathcal{L}(u_*)DE_n)+O(k^3) \end{equation} \normalsize Now, expand $u_*$ around $u_n$ in \cref{eq:Lsecondorder}: \begin{align*} u_{n+1}=&u_n-k\mathcal{L}_n DE_n-kD\mathcal{L}_n(u_*-u_n) DE_n\\&+\frac{1}{2}k^2\mathcal{L}_nD^2E_n(\mathcal{L}_nDE_n)+O(k^3)\\ =&u_n-k\mathcal{L}_n DE_n+\frac{1}{2}k^2D\mathcal{L}_n(\mathcal{L}_n DE_n) DE_n\\&+\frac{1}{2}k^2\mathcal{L}_nD^2E_n(\mathcal{L}_n DE_n)+O(k^3) \end{align*} The Taylor expansion of $u_{n+1}$ matches \cref{eq:truewmu} to second order. 
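To make the two steps concrete, here is a scalar sketch of \cref{alg:m2ndorder}; the mobility $\mathcal{L}(u)=1+u^2$, the energy $E(u)=-\cos u$ with the splitting $E_1=u^2$, $E_2=-\cos u-u^2$, and the step size are our own toy choices, not taken from the paper's experiments. Since $E_2''\leq -1<0$ we may take $\Lambda=0$, so energy decay is expected at every step (up to the rounding of the printed coefficients of \cref{eq:2ndordergamma}):

```python
import math

# Rounded coefficients from eq:2ndordergamma (machine precision in the repo).
THETA = [[1.0],
         [0.009, 0.991],
         [0.009, 0.991, 0.0],
         [0.0, 0.0, 0.0, 1.0],
         [0.0, 0.0, 0.0, 1.0, 0.0]]
GAMMA = [[8.841],
         [-0.925, 5.360],
         [-4.443, 6.041, 0.950],
         [-3.288, 5.895, -0.351, 0.172],
         [-3.895, -0.335, 4.964, -1.722, 7.684]]

L = lambda u: 1.0 + u * u              # toy mobility, strictly positive
dE2 = lambda u: math.sin(u) - 2.0 * u  # E_2' for E_2 = -cos(u) - u^2
energy = lambda u: -math.cos(u)        # E = E_1 + E_2 = -cos(u)

def step(u_n, k):
    # Step 1: u_* + (k/2) L_n E_1'(u_*) = u_n - (k/2) L_n E_2'(u_n);
    # linear in u_* because E_1'(u) = 2u.
    Ln = L(u_n)
    u_star = (u_n - 0.5 * k * Ln * dE2(u_n)) / (1.0 + k * Ln)
    # Step 2: the five stages of eq:Lms2 with the mobility frozen at u_*.
    Ls = L(u_star)
    U = [u_n]
    for g, t in zip(GAMMA, THETA):
        rhs = sum(gi * Ui for gi, Ui in zip(g, U))
        rhs -= k * Ls * sum(ti * dE2(Ui) for ti, Ui in zip(t, U))
        U.append(rhs / (sum(g) + 2.0 * k * Ls))  # (S_m + 2k L_*) U_m = rhs
    return U[-1]

u, k = math.pi / 2, 0.05
energies = [energy(u)]
for _ in range(20):  # integrate to t = 1
    u = step(u, k)
    energies.append(energy(u))
print(round(u, 3), all(b <= a + 1e-8 for a, b in zip(energies, energies[1:])))
```

The monotonicity flag checks the guarantee $E(u_{n+1})\leq E(u_n)$ of \cref{claim:ms} along the whole trajectory.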
\subsection{Third Order Method} \begin{algorithm}[H] \caption{A third order method for solving gradient flows with solution dependent inner product} \label{alg:m3rdorder} Fix a time step size $k>0$ and set $u_n = u_0$. For convenience, we will denote $D^2\mathcal{L}(u_*)\big(\mathcal{L}(u_*)\nabla E(u_*),\mathcal{L}(u_*)\nabla E(u_*)\big)$ as $D^2\mathcal{L}(u_*)$. Additionally, let $MS \Big(\tilde{k},\mathcal{L}(u_*),\tilde{u},\gamma,\theta\Big)$ denote $U_M$ obtained from the multistage algorithm \small \[ \bigg[\sum^{m-1}_{i=0}\gamma_{m,i}\bigg]U_{m}+\tilde{k}\mathcal{L}(u_*)\nabla_HE_1(U_m)=-\tilde{k}\mathcal{L}(u_*)\sum^{m-1}_{i=0} \theta_{m,i}\nabla_HE_2(U_i)+\sum^{m-1}_{i=0}\gamma_{m,i}U_i. \] \normalsize with $U_0 = \tilde{u}$. To obtain $u_{n+1}$ from $u_n$, carry out the following steps: \begin{enumerate} \item Let $\gamma$ and $\theta$ be given by \cref{eq:si1c}. Set $u_{*_1}=MS\Big(\frac{1}{6}k,\mathcal{L}(u_n),u_n,\gamma,\theta\Big).$ \item Let $\gamma$ and $\theta$ be given by \cref{eq:3rdordergamma}. Set \small\[\bar{u}=MS\bigg(\frac{1}{2}k,\mathcal{L}(u_{*_1})-\frac{1}{72}k^2D^2\mathcal{L}(u_{*_1}),u_n,\gamma,\theta\bigg).\]\normalsize \item Let $\gamma=\theta=\left(1\right)$. Set $u_{*_{2,1}}=MS\Big(\frac{2}{5}k,\mathcal{L}(u_n),u_n,\gamma,\theta\Big)$. \item Let $\gamma$ and $\theta$ be given by \cref{eq:si1125c}. Set $u_{*_{2,2}}=MS\Big(\frac{5}{6}k,\mathcal{L}(u_{*_{2,1}}),u_n,\gamma,\theta\Big).$ \item Let $\gamma$ and $\theta$ be given by \cref{eq:3rdordergamma}. Then \small\[u_{n+1}=MS\bigg(\frac{1}{2}k,\mathcal{L}(u_{*_{2,2}})-\frac{1}{72}k^2D^2\mathcal{L}(u_{*_{2,2}}),\bar{u},\gamma,\theta\bigg).\]\normalsize \end{enumerate} \end{algorithm} Now we present our third order algorithm for solving \cref{eq:Lde}. It requires the use of two new sets of coefficients, \small \begin{align} \label{eq:si1c} \begin{split} &\theta \approx\left( \begin{array}{ccc} 1. & 0 & 0. 
\\ -0.667 & 0.333 & 0 \\ 0 & 0 & 1.000\\ \end{array} \right)\\ &\gamma \approx\left( \begin{array}{ccc} 1.833 & 0 & 0. \\ 0.556 & 0.667 & 0 \\ 1.030 & -0.026 & 0.159 \\ \end{array} \right) \end{split} \end{align} \normalsize and \small \begin{align} \label{eq:si1125c} \begin{split} &\theta \approx\left( \begin{array}{ccccccc} 1. & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.708 & 0.292 & 0 & 0 & 0 & 0 & 0 \\ 0.013 & 0.018 & 0.969 & 0 & 0 & 0 & 0 \\ 0.008 & 0.012 & 0.867 & 0.113 & 0 & 0 & 0 \\ 0.006 & 0.009 & 0.206 & 0.056 & 0.724 & 0 & 0 \\ 0 & 0.005 & 0.05 & 0.025 & 0.053 & 0.867 & 0 \\ 0 & 0 & 0.015 & 0.009 & 0.015& 0.04 & 0.920 \\ \end{array} \right)\\ &\gamma \approx \left( \begin{array}{ccccccc} 7.727 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.594 & 2.241 & 0 & 0 & 0 & 0 & 0 \\ 3.056 & -0.455 & 0.636 & 0 & 0 & 0 & 0 \\ -1.571 & 5.091 & -1.063 & 2.786 & 0 & 0 & 0 \\ -3.714 & 3.1 & -1.267 & 1.545 & 9.655 & 0 & 0 \\ -6.923 & 5.1 & -2.056 & 3.471 & 4.571 & 4.033 & 0 \\ -2.467 & -2.1 & 0.009 & -0.182 & 0.660 & 7.224 & 9.428 \\ \end{array} \right), \end{split} \end{align} \normalsize to achieve particular Taylor expansions as we explain later in the section. The values of \cref{eq:si1c} and \cref{eq:si1125c} to machine precision can be found at \url{https://github.com/AZaitzeff/SIgradflow}. \cref{alg:m3rdorder} details our third order version for solving gradient flows with solution dependent inner product. The method adds another condition for stability to hold, namely: \begin{equation} \label{eq:opreq} \mathcal{L}(u)-\frac{1}{72}k^2D^2\mathcal{L}(u)(w,w) \end{equation} needs to be positive definite for all $u$ and $w$. Now we will prove that \cref{alg:m3rdorder} produces a third order approximation. 
By applying \cref{eq:rec}, the coefficients \cref{eq:si1c} give the following expansion for $u_{*_1}$: \small \begin{align} \begin{split} \label{eq:si1} u_{*_1}&=u_n-\frac{1}{6}k\mathcal{L}_n DE_n+\frac{1}{36}k^2\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)+O(k^3) \end{split} \end{align} \normalsize Now we can expand $\bar{u}$ by using \cref{eq:rec} and expanding $u_{*_1}$ around $u_n$: \small \begin{align} \begin{split} \label{eq:3o1h} \bar{u}&=u_n-\frac{1}{2}k\mathcal{L}(u_{*_1}) DE_n+\frac{1}{8}k^2\mathcal{L}(u_{*_1})D^2E_n(\mathcal{L}(u_{*_1})DE_n)\\&-\frac{1}{48}k^3 \mathcal{L}(u_{*_1})D^2E_n\left(\mathcal{L}(u_{*_1})D^2E_n\left(\mathcal{L}(u_{*_1})DE_n\right)\right)\\&-\frac{1}{48}k^3\mathcal{L}(u_{*_1})D^3E_n\big(\mathcal{L}(u_{*_1})DE_n,\mathcal{L}(u_{*_1})DE_n\big)\\&+\frac{1}{144}k^3D^2\mathcal{L}(u_{*_1})\big(\mathcal{L}(u_{*_1})DE(u_{*_1}),\mathcal{L}(u_{*_1})DE(u_{*_1})\big)DE_n+O(k^4)\\ &=u_n-\frac{1}{2}k\mathcal{L}_n DE_n+\frac{1}{12}k^2D\mathcal{L}_n(\mathcal{L}_nDE_n) DE_n+\frac{1}{8}k^2\mathcal{L}_nD^2E_n(\mathcal{L}_nDE_n)\\&-\frac{1}{72}k^3D\mathcal{L}_n(\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)) DE_n\\&-\frac{1}{48}k^3\mathcal{L}_nD^2E_n(D\mathcal{L}_n(\mathcal{L}_nDE_n)DE_n)\\&-\frac{1}{48}k^3D\mathcal{L}_n(\mathcal{L}_nDE_n) D^2E_n(\mathcal{L}_n DE_n)\\&-\frac{1}{48}k^3 \mathcal{L}_n D^2E_n\left(\mathcal{L}_n D^2E_n\left(\mathcal{L}_n DE_n\right)\right)\\&-\frac{1}{48}k^3\mathcal{L}_n D^3E_n\big(\mathcal{L}_n DE_n,\mathcal{L}_n DE_n\big)+O(k^4) \end{split} \end{align} \normalsize Now we will apply the same steps to derive the expansions of $u_{*_{2,1}}$ \small \begin{align} \begin{split} \label{eq:1s} u_{*_{2,1}}&=u_n-\frac{2}{5}k\mathcal{L}_n DE_n+O(k^2) \end{split} \end{align} \normalsize and $u_{*_{2,2}}$ \small \begin{align} \begin{split} \label{eq:si1125} u_{*_{2,2}}&=u_n-\frac{5}{6}k\mathcal{L}(u_{*_{2,1}}) DE_n+\frac{11}{36}k^2\mathcal{L}(u_{*_{2,1}}) D^2E_n(\mathcal{L}(u_{*_{2,1}}) DE_n)+O(k^3)\\ &=u_n-\frac{5}{6}k\mathcal{L}_n
DE_n\\&+\frac{1}{3}k^2D\mathcal{L}_n(\mathcal{L}_n DE_n) DE_n+\frac{11}{36}k^2\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)+O(k^3)\\ \end{split} \end{align} \normalsize Finally, we can find the expansion of $u_{n+1}$. We will first apply \cref{eq:rec} around $\bar{u}$: \small \begin{align*} u_{n+1}&=\bar{u}-\frac{1}{2}k\mathcal{L}(u_{*_{2,2}}) DE(\bar{u})+\frac{1}{8}k^2\mathcal{L}(u_{*_{2,2}})D^2E(\bar{u})(\mathcal{L}(u_{*_{2,2}})DE(\bar{u}))\\&-\frac{1}{48}k^3 \mathcal{L}(u_{*_{2,2}})D^2E(\bar{u})\left(\mathcal{L}(u_{*_{2,2}})D^2E(\bar{u})\left(\mathcal{L}(u_{*_{2,2}})DE(\bar{u})\right)\right)\\&-\frac{1}{48}k^3\mathcal{L}(u_{*_{2,2}})D^3E(\bar{u})\big(\mathcal{L}(u_{*_{2,2}})DE(\bar{u}),\mathcal{L}(u_{*_{2,2}})DE(\bar{u})\big)\\&+\frac{1}{144}k^3D^2\mathcal{L}(u_{*_{2,2}})\big(\mathcal{L}(u_{*_{2,2}})DE(u_{*_{2,2}}),\mathcal{L}(u_{*_{2,2}})DE(u_{*_{2,2}})\big)DE(\bar{u})+O(k^4) \end{align*} then expand $u_{*_{2,2}}$: \begin{align*} u_{n+1}&=\bar{u}-\frac{1}{2}k\mathcal{L}_n DE(\bar{u})+\frac{5}{12}k^2D\mathcal{L}_n(\mathcal{L}_n DE_n) DE(\bar{u})+\frac{1}{8}k^2\mathcal{L}_n D^2E(\bar{u})(\mathcal{L}_n DE(\bar{u}))\\&-\frac{1}{6}k^3D\mathcal{L}_n(D\mathcal{L}_n(\mathcal{L}_n DE_n) DE_n) DE(\bar{u})\\&-\frac{1}{6}k^3D^2\mathcal{L}_n(\mathcal{L}_n DE_n,\mathcal{L}_n DE_n) DE(\bar{u})\\&-\frac{11}{72}k^3D\mathcal{L}_n(\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)) DE(\bar{u})\\& -\frac{5}{48}k^3 D\mathcal{L}_n(\mathcal{L}_n DE(\bar{u})) D^2E(\bar{u})(\mathcal{L}_n DE(\bar{u}))\\&-\frac{5}{48}k^3\mathcal{L}_n D^2E(\bar{u})(D\mathcal{L}_n(\mathcal{L}_n DE(\bar{u})) DE(\bar{u}))\\&-\frac{1}{48}k^3 \mathcal{L}_n D^2E(\bar{u})\left(\mathcal{L}_n D^2E(\bar{u})\left(\mathcal{L}_n DE(\bar{u})\right)\right)\\&-\frac{1}{48}k^3\mathcal{L}_nD^3E(\bar{u})\big(\mathcal{L}_n DE(\bar{u}),\mathcal{L}_n DE(\bar{u})\big)+O(k^4) \end{align*} then expand $\bar{u}$ around $u_n$: \begin{align*} u_{n+1}&=u_n-k\mathcal{L}_n DE_n+\frac{1}{2}k^2D\mathcal{L}_n(\mathcal{L}_n DE_n)
DE_n+\frac{1}{2}k^2\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)\\&-\frac{1}{6}k^3D\mathcal{L}_n(D\mathcal{L}_n(\mathcal{L}_n DE_n) DE_n) DE_n\\&-\frac{1}{6}k^3D^2\mathcal{L}_n(\mathcal{L}_n DE_n,\mathcal{L}_n DE_n) DE_n\\&-\frac{1}{6}k^3D\mathcal{L}_n(\mathcal{L}_n D^2E_n(\mathcal{L}_n DE_n)) DE_n\\& -\frac{1}{3}k^3 D\mathcal{L}_n(\mathcal{L}_n DE_n) D^2E_n(\mathcal{L}_n DE_n)\\&-\frac{1}{6}k^3\mathcal{L}_n D^2E_n(D\mathcal{L}_n(\mathcal{L}_n DE_n) DE_n)\\&-\frac{1}{6}k^3 \mathcal{L}_n D^2E_n\left(\mathcal{L}_n D^2E_n\left(\mathcal{L}_n DE_n\right)\right)\\&-\frac{1}{6}k^3\mathcal{L}_nD^3E_n\big(\mathcal{L}_n DE_n,\mathcal{L}_n DE_n\big)+O(k^4)\\ \end{align*} \normalsize The Taylor expansion of $u_{n+1}$ matches \cref{eq:truewmu} to third order. As long as \cref{eq:opreq} holds, \[E(u_{n+1})\leq E(\bar{u})\leq E(u_n)\] by \cref{claim:ms}. \begin{remark} \label{remark:fullyimp} In \cref{alg:m2ndorder} and \cref{alg:m3rdorder} we can instead handle $E(u)$ fully implicitly as we do in \cite{alexander2019variational}. We need to substitute higher order implicit methods for the corresponding semi-implicit methods. We give these fully implicit versions in \cref{sec:append}. \end{remark} \iffalse \section{Auxiliary Energies} Frequently we would like to add an auxiliary energy that depends on $u_*$ to the implicit side and subtract it from the explicit side in order to decrease lambda in \cref{eq:lmda}, allowing us to solve a constant coefficient linear equation at every step. Even though the auxiliary energy depends on $u_*$, we achieve the desired energy stability property \cref{eq:energymin}. For example, say we are given $E(u)=E_1(u)+E_2(u)$ where we handle $E_1(u)$ implicitly and $E_2(u)$ explicitly. We have auxiliary energy $\bar{E}(u,u_*)$. Define $\bar{E}_1(u,u_*)=E_1(u)+\bar{E}(u,u_*)$ and $\bar{E}_2(u,u_*)=E_2(u)-\bar{E}(u,u_*)$. Then $E(u)=\bar{E}_1(u,u_*)+\bar{E}_2(u,u_*)$.
For coefficients satisfying \cref{claim:ms} we have \[E(u_{n+1})=\bar{E}_1(u_{n+1},u_*)+\bar{E}_2(u_{n+1},u_*)\leq\bar{E}_1(u_{n},u_*)+\bar{E}_2(u_{n},u_*)=E(u_{n}) \] \fi \section{Numerical Examples} \label{sec:nr} In this section, we will apply the second and third order accurate conditionally stable schemes to a variety of gradient flows, some with fixed inner product and some with solution dependent inner product. Careful numerical convergence studies are presented in each case to verify the anticipated convergence rates of previous sections. \subsection{Gradient Flows with Fixed Inner Product} \iffalse \subsection{Ordinary Differential Equations} Our first test is on the ODE $u' = -\sin(u)$ with the corresponding energy $E(u)=\cos(u)$. With initial condition $u(0)=\pi/2$, the exact solution is $u_*(t)=2\arctan(\exp(-t))$. We let $E_1(u)=u^2$ and $E_2(u)=\cos(u)-u^2$ so that $E_2$ is concave ($\Lambda=0$). The errors for the two new schemes are tabulated in \cref{tab:c5} and \cref{tab:c18}, and once again bear out the anticipated convergence rates. The anticipated order or convergence is clearly observed for both schemes. 
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Number of& & & & & & &\\ time steps&$2^{4}$&$2^{5}$&$2^{6}$&$2^{7}$&$2^{8}$&$2^9$&$2^{10}$\\ \hline GE(t=2)& 1.3e-04&7.7e-04&2.0e-04&5.2e-05&1.3e-05&3.3e-05&8.2e-06\\ \hline Order&-&1.9&1.9&2.0&2.0&2.0&2.0\\ \hline \end{tabular} \caption{\footnotesize The new second order accurate, unconditionally stable, three-stage scheme \eqref{eq:ms}, \eqref{eq:msintermediate} \& \eqref{eq:2ndordergamma} on the ODE $u'=-\sin(u)$ with energy $E(u) = \cos(u)$.} \label{tab:c5} \end{center} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Number of& & & & & & &\\time steps&$2^{4}$&$2^{5}$&$2^{6}$&$2^{7}$&$2^{8}$&$2^9$&$2^{10}$\\ \hline GE(t=2)&3.3e-05&4.5e-06&5.9e-07&7.5e-08&9.5e-09&1.2e-9&1.5e-10\\ \hline Order&-&2.9&2.9&3.0&3.0&3.0&3.0\\ \hline \end{tabular} \caption{\footnotesize The new third order accurate, unconditionally stable, six-stage scheme \eqref{eq:ms}, \eqref{eq:msintermediate} \& \eqref{eq:3rdordergamma} on the ODE $u'=-\sin(u)$ with energy $E(u) = \cos(u)$.} \label{tab:c18} \end{center} \end{table} \subsection{Partial Differential Equations} \fi \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^5$&$2^{6}$&$2^{7}$&$2^{8}$&$2^{9}$\\ \hline $L^2$ error (2nd order)&5.28e-05&1.16e-05& 2.71e-06&6.58e-07&1.62e-07\\ \hline Order&-&2.19&2.09&2.04 &2.02 \\ \hline $L^2$ error (3rd order)&1.11e-06&7.44e-07& 1.62e-07&2.51e-08&3.36e-09\\ \hline Order&-&0.57&2.21&2.68 &2.90 \\ \hline \end{tabular} \caption{\footnotesize The new second and third order accurate, unconditionally stable schemes (see \cref{remark:fullyimp}) for the porous medium equation gradient flow.} \label{tab:PMEsi} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{W1.png} \includegraphics[width=.45\textwidth]{W2.png} \caption{\footnotesize The double well potentials used in the Allen-Cahn \cref{eq:ac} and
Cahn-Hilliard \cref{eq:ch} equations: One with unequal depth wells and the other with equal depth wells.} \label{fig:W} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=.5\textwidth]{travelwave.png} % \end{center} \caption{\footnotesize The initial condition (black) and the solution at final time (gray) in the numerical convergence study on the 1D Allen-Cahn equation \cref{eq:ac} with a potential that has unequal depth wells.} \label{fig:1dac} \end{figure} We start with the Allen-Cahn equation \begin{equation} \label{eq:ac} u_t = \Delta u - W'(u) \end{equation} where $W:\mathbb{R}\to\mathbb{R}$ is a double-well potential. This corresponds to gradient flow for the energy \begin{equation} \label{eq:acE} E(u) = \int \frac{1}{2} \|\nabla u\|^2 + W(u) \, dx \end{equation} with respect to the $L^2$ inner product. First, we consider equation \cref{eq:ac} in one space dimension, with the potential $W(u) = 8u-16u^2-\frac{8}{3}u^3+8u^4$. This is a double well potential with unequal depth wells; see \cref{fig:W}. In this case, equation \cref{eq:ac} is well-known to possess traveling wave solutions on $x\in\mathbb{R}$, see \cref{fig:1dac}. We choose the initial condition $u(x,0) = \tanh(4x + 20)$; the exact solution is then $u_*(x,t) = \tanh(4x + 20 - 8t)$. The computational domain is $x\in[-10,10]$, discretized into a uniform grid of $8193$ points. We approximate the solution on $\mathbb{R}$ by using the Dirichlet boundary conditions $u(\pm 10,t) = \pm 1$: the domain size is large enough that the mismatch in boundary conditions does not substantially contribute to the error in the approximate solution over the time interval $t\in[0,5]$. We use $E_1(u)=\int \frac{1}{2}|\nabla u|^2 dx$ and $E_2(u)=\int W(u) dx$. \Cref{tab:twms5} tabulates the error in the computed solution at time $T=5$ for our two new schemes.
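This traveling wave makes for a convenient self-test. The sketch below is not one of the new multi-stage schemes of this paper; it is a plain forward-Euler finite-difference discretization with hypothetical grid parameters and a shorter time horizon, included only to make the test problem concrete:

```python
import numpy as np

def Wp(u):
    """W'(u) for the unequal-depth double well
    W(u) = 8u - 16u^2 - (8/3)u^3 + 8u^4."""
    return 8 - 32*u - 8*u**2 + 32*u**3

def traveling_wave_error(T=0.5, nx=801, dt=2.5e-4):
    """Integrate u_t = u_xx - W'(u) on [-10, 10] with forward Euler,
    Dirichlet data u(-10) = -1, u(10) = 1, and initial condition
    tanh(4x + 20); return the max-norm error against the exact
    traveling wave u_*(x, t) = tanh(4x + 20 - 8t)."""
    x = np.linspace(-10.0, 10.0, nx)
    dx = x[1] - x[0]
    u = np.tanh(4*x + 20)
    for _ in range(int(round(T / dt))):
        unew = u.copy()
        unew[1:-1] += dt * ((u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
                            - Wp(u[1:-1]))
        u = unew  # endpoints stay at approximately -1 and 1 (W'(+-1) = 0)
    return float(np.max(np.abs(u - np.tanh(4*x + 20 - 8*T))))
```

Since both $\pm 1$ are zeros of $W'$, the boundary values are stationary and only the moving front contributes to the error; halving `dt` lets one observe the first-order-in-time behavior of this crude scheme, in contrast to the rates reported in \cref{tab:twms5}.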
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^9$&$2^{10}$&$2^{11}$&$2^{12}$&$2^{13}$\\ \hline $L^2$ error (2nd order) &2.08e-01&5.96e-02&1.61e-02& 4.22e-03&1.08e-03\\ \hline Order&-&1.81&1.89&1.94 &1.97 \\ \hline $L^2$ error (3rd order) &2.06e-03&3.26e-04& 4.68e-05&6.32e-06&8.33e-07\\ \hline Order&-&2.66&2.80&2.89 &2.92 \\ \hline \end{tabular} \caption{\footnotesize The new second \cref{eq:2ndordergamma} and third \cref{eq:3rdordergamma} order accurate, conditionally stable schemes \cref{eq:ms} on the one-dimensional Allen-Cahn equation \cref{eq:ac} with a traveling wave solution.} \label{tab:twms5} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{initial2d.png} \includegraphics[width=.45\textwidth]{end2d.png} \end{center} \caption{\footnotesize Initial condition and the solution at final time for the 2D Allen-Cahn equation with a potential that has equal depth wells.} \label{fig:2dac} \end{figure} Next, we consider the Allen-Cahn equation \cref{eq:ac} in two space dimensions, with the potential $W(u) = u^2(1-u)^2$ that has equal depth wells; see \cref{fig:W}. We take the initial condition $u(x,y,0)=\frac{1}{1+\exp[-(7.5-\sqrt{x^2+y^2})]}$ on the domain $(x,y)\in [-10,10]^2$, and impose periodic boundary conditions. Once again we use $E_1(u)=\int \frac{1}{2}\|\nabla u\|^2 dx$ and $E_2(u)=\int W(u) dx$. As a proxy for the exact solution of the equation with this initial data, we compute a very highly accurate numerical approximation $u_*(x,y,t)$ via the following second order accurate in time, semi-implicit, multi-step scheme~\cite{chen1998applications} on an extremely fine spatial grid and take very small time steps: \begin{equation*} \frac{3}{2}u^{n+1}-2u^{n}+\frac{1}{2}u^{n-1}=k\Delta u^{n+1}-k(2W'(u^{n})-W'(u^{n-1})). \end{equation*} \Cref{tab:acms62d} shows the errors and convergence rates for the approximate solutions computed by our new multi-stage schemes.
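The quoted reference scheme is straightforward to implement when the implicit Laplacian is inverted spectrally. The sketch below is our own encoding (periodic square domain, FFT-based inverse), not code from the paper; for spatially uniform data the Laplacian vanishes and the scheme degenerates to a second-order multistep integrator for $u'=-W'(u)$, which gives an easy correctness check:

```python
import numpy as np

def Wp(u):
    """W'(u) for the equal-depth double well W(u) = u^2 (1 - u)^2."""
    return 2*u*(1 - u)*(1 - 2*u)

def bdf2_allen_cahn(u0, u1, k, nsteps, L=20.0):
    """Semi-implicit multi-step scheme
        (3/2) u^{n+1} - 2 u^n + (1/2) u^{n-1}
            = k*Lap(u^{n+1}) - k*(2 W'(u^n) - W'(u^{n-1}))
    for u_t = Lap(u) - W'(u) on a periodic square of side L, with the
    implicit Laplacian solved in Fourier space.  u0, u1 are the first
    two time levels; returns the solution after nsteps further steps."""
    n = u0.shape[0]
    freq = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    lap = -(freq[:, None]**2 + freq[None, :]**2)  # Fourier symbol of Lap
    um, u = u0, u1
    for _ in range(nsteps):
        rhs = (2*np.fft.fft2(u) - 0.5*np.fft.fft2(um)
               - k*np.fft.fft2(2*Wp(u) - Wp(um)))
        um, u = u, np.fft.ifft2(rhs / (1.5 - k*lap)).real
    return u
```

The single linear solve per step is a diagonal division in Fourier space, which is what makes this proxy cheap enough to run on an extremely fine grid.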
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^8$&$2^9$&$2^{10}$&$2^{11}$&$2^{12}$\\ \hline $L^2$ error (2nd order)& 3.62e-05&9.07e-06&2.27e-06& 5.68e-07 &1.41e-07\\ \hline Order&-&2.00&2.00&2.00&2.00 \\ \hline $L^2$ error (3rd order)&2.35e-05&3.18e-06& 4.15e-07&5.29e-08&6.24e-09\\ \hline Order&-&2.88&2.94&2.97 &3.08 \\ \hline \end{tabular} \caption{\footnotesize The new second \cref{eq:2ndordergamma} and third \cref{eq:3rdordergamma} order accurate, conditionally stable schemes \cref{eq:ms} on the two-dimensional Allen-Cahn equation \cref{eq:ac} with a potential that has equal depth wells.} \label{tab:acms62d} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=.45\textwidth]{initial2dch.png} \includegraphics[width=.45\textwidth]{end2dch.png} \end{center} \caption{\footnotesize Initial condition and the solution at final time for the 2D Cahn-Hilliard equation with a potential that has equal depth wells.} \label{fig:2dch} \end{figure} For our next example, we consider the Cahn-Hilliard equation \begin{equation} \label{eq:ch} u_t = -\Delta \big( \Delta u - W'(u) \big) \end{equation} where we take $W$ to be the double well potential $W(u) = u^2(1-u)^2$ with equal depth wells and impose periodic boundary conditions. This flow is also gradient descent for energy \cref{eq:acE}, but with respect to the $H^{-1}$ inner product: \begin{equation*} \langle u \,, v \, \rangle = \int u \Delta^{-1} v \, dx. \end{equation*} Starting from the initial condition $u(x,y,0)=\frac{1}{1+\exp[-(5-\sqrt{x^2+y^2})]}$, we computed a proxy for the ``exact'' solution once again using the second order accurate, semi-implicit multi-step scheme from \cite{chen1998applications}: \begin{equation*} \frac{3}{2}u^{n+1}-2u^{n}+\frac{1}{2}u^{n-1}=-k\Delta[\Delta u^{n+1}-(2W'(u^{n})-W'(u^{n-1}))] \end{equation*} where the spatial and temporal resolutions were taken to be high to ensure the errors are small.
\Cref{tab:chms32d} shows the errors and convergence rates for the approximate solutions computed by our new multi-stage schemes. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^{7}$&$2^{8}$&$2^{9}$&$2^{10}$&$2^{11}$\\ \hline $L^2$ error (2nd order)& 6.20e-04 & 1.92e-04 & 5.59e-05& 1.55e-05&4.09e-06\\ \hline Order&-&1.69&1.78&1.85&1.92 \\ \hline $L^2$ error (3rd order) &6.45e-06&1.35e-06& 2.51e-07&4.15e-08&7.20e-09\\ \hline Order&-&2.25&2.43&2.60&2.53 \\ \hline \end{tabular} \caption{\footnotesize The new second \cref{eq:2ndordergamma} and third \cref{eq:3rdordergamma} order accurate, conditionally stable schemes \cref{eq:ms} on the two-dimensional Cahn-Hilliard equation \cref{eq:ch} with a potential that has equal depth wells.} \label{tab:chms32d} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=.5\textwidth]{PMEsoln.png} % \end{center} \caption{\footnotesize The initial condition (black) and the solution at final time (gray) in the numerical convergence study on the PME.} \label{fig:1dpme} \end{figure} As a final example we consider the porous medium equation: \begin{equation} \label{eq:pme53} u_t=\Delta u^{5/3} \end{equation} Under the $H^{-1}$ inner product, \cref{eq:pme53} is gradient flow for the energy \[ E(u)=\frac{3}{8} \int u^{8/3} dx. \] Our initial data is \begin{equation} \label{eq:pmeinit} u(x,0)=\frac{3}{2\sqrt{2\pi}}\exp\bigg(-\frac{9x^2}{8}\bigg) \end{equation} in $x\in [-3,3]$ with derivative zero Neumann boundary conditions. We run the simulation for $T=1$. See \cref{fig:1dpme} for our initial and final curves. We generate the ``true'' solution using the L-stable (but not energy stable) TR-BDF2 method with a high spatial and temporal resolution. See \cref{tab:pme1} for results.
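For completeness, we record the standard computation behind this gradient-flow structure: since $E(u)=\frac{3}{8}\int u^{8/3}\,dx$, the $L^2$ variational derivative is
\[
\frac{\delta E}{\delta u}=\frac{3}{8}\cdot\frac{8}{3}\,u^{5/3}=u^{5/3},
\]
and gradient flow with respect to the $H^{-1}$ inner product takes the form
\[
u_t=\Delta\,\frac{\delta E}{\delta u}=\Delta u^{5/3},
\]
which is exactly \cref{eq:pme53}.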
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^{12}$&$2^{13}$&$2^{14}$&$2^{15}$&$2^{16}$\\ \hline $L^2$ error (2nd order)& 1.91e-06 & 6.85e-07 & 2.26e-07& 6.90e-08&1.97e-08\\ \hline Order&-&1.48&1.60&1.71&1.81 \\ \hline $L^2$ error (3rd order) &2.21e-07&5.69e-08& 1.26e-08&2.41e-09&4.13e-10\\ \hline Order&-&1.96&2.18&2.38&2.54 \\ \hline \end{tabular} \caption{\footnotesize The new second \cref{eq:2ndordergamma} and third \cref{eq:3rdordergamma} order accurate, conditionally stable schemes \cref{eq:ms} on the porous medium equation.} \label{tab:pme1} \end{center} \end{table} \subsection{Gradient Flow For Solution Dependent Inner Product} The first example we present in this section is the heat equation, $u_t=\Delta u$, but with a different energy. Under the Wasserstein metric (denoted as $W_2$), the heat equation is a gradient flow for the negative entropy \cite{jordan1998variational}: \begin{equation} \label{SIeq:entropy} E(u)=\int u\log(u) dx. \end{equation} However, the minimization \[ \argmin_u E(u)+\frac{1}{2k}W_2^2(u,u_n) \] is a difficult optimization problem. On the other hand, we can approximate the Wasserstein metric, $W_2(u,v)$, with \begin{equation} \label{eq:WIP} \langle u-v,\mathcal{L}(u)^{-1}(u-v) \rangle_{L^2}\text{ where }\mathcal{L}(u)=-\nabla \cdot u \nabla \end{equation} when $u$ and $v$ are near each other. Indeed \[-\mathcal{L}(u)\nabla_{L^2} E(u)=\nabla \cdot u \nabla(\log(u)+1)=\Delta u.\] Thus, we can alternatively think of the heat equation as minimizing movements on the negative entropy with respect to the solution dependent inner product \cref{eq:WIP} and therefore use \cref{alg:m2ndorder} and \cref{alg:m3rdorder} to evolve the heat equation while decreasing the negative entropy \cref{SIeq:entropy} at every step. We use the exact solution $u(x,t)=\cos(\pi x)\exp(-t\pi^2)+2$ as our test with domain $x\in [0,1]$ using derivative zero Neumann boundary conditions.
Our initial data is $u(x,0)$ and we run the simulation to final time $T=\frac{1}{10}$. We use $E_1(u)=\frac{1}{2} \int u^2 dx$ and $E_2(u)=\int u\log(u) dx-\frac{1}{2}\int u^2 dx$ in \cref{eq:ms} so that at every step we solve a linear system of equations. See \cref{tab:heat} for results. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^3$&$2^{4}$&$2^{5}$&$2^{6}$&$2^{7}$\\ \hline $L^2$ error (2nd order)&1.06e-03&3.11e-04&8.58e-05& 2.27e-05&5.85e-06\\ \hline Order&-&1.77&1.86&1.92&1.96 \\ \hline $L^2$ error (3rd order)&1.00e-05&1.57e-06& 2.20e-07&3.04e-08&4.29e-09\\ \hline Order&-&2.69&2.82&2.87 &2.83 \\ \hline \end{tabular} \caption{\footnotesize The new second (\cref{alg:m2ndorder}) and third (\cref{alg:m3rdorder}) order accurate, conditionally stable schemes for gradient flows with solution dependent inner product on the heat equation with Wasserstein metric.} \label{tab:heat} \end{center} \end{table} The next example is the porous medium equation in one dimension. The energy is \[ E(u)=\frac{3}{2}\int u^{5/3} dx \] under the Wasserstein metric. As with the heat equation, we can again replace the Wasserstein metric with \cref{eq:WIP}. We will let $E_1(u)=E(u)$ and $E_2(u)=0$. We use the same test as in the $H^{-1}$ gradient flow porous medium equation (see \cref{eq:pmeinit} and accompanying explanation). We present the results of the porous medium equation test with movement limiter \cref{eq:WIP} in \cref{tab:pme2}.
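The entropy-decrease property in the heat-equation test above is easy to probe numerically even with a much cruder method. The sketch below (plain backward Euler on a hypothetical coarse grid, not the multi-stage schemes of \cref{alg:m2ndorder} or \cref{alg:m3rdorder}) checks both the error against the exact solution and the monotone decay of $\int u\log(u)\,dx$:

```python
import numpy as np

def heat_entropy_test(T=0.1, nx=101, dt=1e-3):
    """Backward Euler for u_t = u_xx on [0, 1] with zero-derivative
    Neumann boundary conditions, started from u(x, 0) = cos(pi x) + 2.
    Returns (max error vs the exact solution, list of entropy values)."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.cos(np.pi*x) + 2.0
    # Second-difference matrix with ghost-point (reflection) Neumann BC.
    A = (np.diag(-2.0*np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
         + np.diag(np.ones(nx - 1), -1))
    A[0, 1] = 2.0
    A[-1, -2] = 2.0
    M = np.eye(nx) - dt*A/dx**2
    w = np.full(nx, dx)                  # trapezoid quadrature weights
    w[0] = w[-1] = dx/2
    entropies = [float(np.sum(w*u*np.log(u)))]
    for _ in range(int(round(T/dt))):
        u = np.linalg.solve(M, u)        # (I - dt A/dx^2) u^{n+1} = u^n
        entropies.append(float(np.sum(w*u*np.log(u))))
    exact = np.cos(np.pi*x)*np.exp(-np.pi**2*T) + 2.0
    return float(np.max(np.abs(u - exact))), entropies
```

In this run the discrete entropy values decrease at every step, consistent with the stability property the schemes above are designed to guarantee, while the first-order error of backward Euler is visibly larger than the rates reported in \cref{tab:heat}.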
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^{4}$&$2^5$&$2^{6}$&$2^{7}$&$2^{8}$\\ \hline $L^2$ error (2nd order)& 2.69e-04& 6.10e-05& 1.49e-05&3.71e-06&9.25e-07 \\ \hline Order&-&2.14&2.04&2.01 &2.00 \\ \hline $L^2$ error (3rd order)&2.88e-05& 3.17e-06 & 3.72e-07 & 4.53e-08& 5.50e-09\\ \hline Order&-&3.18&3.09&3.04 &3.04 \\ \hline \end{tabular} \caption{\footnotesize The new second and third order accurate, unconditionally stable schemes (see \cref{remark:fullyimp}) for gradient flows with solution dependent inner product on the porous medium equation with the linearized Wasserstein metric.} \label{tab:pme2} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=.6\textwidth]{1dchwm.png} \end{center} \caption{\footnotesize Initial condition (black) and the solution at final time (gray) for the 1D Cahn-Hilliard example with variable mobility and forcing term.} \label{fig:1dchwm} \end{figure} For our final example, we consider the Cahn-Hilliard equation with variable mobility and a forcing term: \small \begin{equation} \label{eq:chwm} u_t = -\nabla \cdot \mu(u) \nabla \big( \epsilon^2 \Delta u - W'(u) -F(x)\big) \end{equation} \normalsize where we take $W$ to be the double well potential $W(u) = (1-u^2)^2$ with equal depth wells, the forcing term to be $F(x)=\tanh\big(\frac{\cos(2\pi x)}{10\epsilon}\big)$ and the mobility to be $\mu(u) = (1-\epsilon)(1-u^2)^2+\epsilon$ to avoid degeneracy in the PDE. This flow is gradient descent for energy \small \begin{equation*} E(u) = \int \frac{\epsilon^2}{2} \|\nabla u\|^2 + W(u)+uF(x) \, dx, \end{equation*} \normalsize with respect to the solution dependent inner product \small \begin{equation} \label{eq:CHWMIP} \langle u-v,\mathcal{L}(u)^{-1}(u-v) \rangle_{L^2}\text{ where }\mathcal{L}(u)=-\nabla \cdot \mu(u) \nabla.
\end{equation} \normalsize For our example, we take $\epsilon=\frac{1}{20}$, start from the initial condition \small \[u(x,0)=\tanh\bigg(\frac{\cos(2\pi x)}{10\epsilon}\bigg)\] \normalsize on the domain $x \in [-\frac{1}{2},\frac{1}{2}]$, and impose periodic boundary conditions. We run the PDE until time $T=\frac{1}{8}$. We computed a proxy for the ``exact'' solution using the following second order BDF/AB scheme: \small \begin{multline*} 3u^{n+1}+2k\epsilon^2\Delta^2 u^{n+1}=4u^n-u^{n-1}+4\big(k\epsilon^2\Delta^2 u^{n}-k\nabla\cdot \mu(u^n)\nabla[\epsilon^2\Delta u^{n}-W'(u^{n})]\big)\\-2\big(k\epsilon^2\Delta^2 u^{n-1}-k\nabla\cdot \mu(u^{n-1})\nabla[\epsilon^2\Delta u^{n-1}-W'(u^{n-1})]\big) \end{multline*} \normalsize where the spatial and temporal resolutions were taken to be high to ensure the errors are negligible. See \cref{fig:1dchwm} for plots of the initial condition and the solution at the final time. \Cref{tab:chmswm} shows the errors and convergence rates for the approximate solutions computed by our new multi-stage schemes. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Number of& & & & &\\time steps&$2^{7}$&$2^{8}$&$2^{9}$&$2^{10}$&$2^{11}$\\ \hline $L^2$ error (2nd order)& 4.10e-04 & 1.29e-04& 3.84e-05 &1.09e-05 & 2.96e-06 \\ \hline Order&-&1.67&1.75&1.82&1.88 \\ \hline $L^2$ error (3rd order) &1.79e-05& 3.68e-06& 6.72e-07 & 1.06e-07 & 1.42e-08\\ \hline Order&-&2.28 & 2.46& 2.66 & 2.90 \\ \hline \end{tabular} \caption{\footnotesize The new second \cref{alg:m2ndorder} and third \cref{alg:m3rdorder} order accurate, conditionally stable schemes for gradient flows with solution dependent inner product on the one-dimensional Cahn-Hilliard equation with variable mobility and forcing term \cref{eq:chwm}.} \label{tab:chmswm} \end{center} \end{table} \section{Conclusion} We presented a new class of implicit-explicit additive Runge-Kutta schemes for gradient flows that are high order and conditionally stable.
Additionally, we developed new high order stable schemes for gradient flows on solution dependent inner products. Both of these methods allow us to painlessly increase the order of accuracy of existing schemes for gradient flows without sacrificing stability. We provided many numerical examples of gradient flows, including those that have solution dependent inner product, and have shown that the methods achieve their advertised accuracy. However, in this paper, we have not developed a systematic approach to coming up with conditionally stable methods of a certain order. In fact, there may exist 2nd and 3rd order methods of fewer stages than given here. Additionally, whether these schemes can be used to achieve arbitrarily high (i.e. $\geq 4$) order in time is unknown. We leave these questions to future work. \iffalse \section{Appendix:} \label{sec:append} \begin{align*} &\theta=\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ \frac{1}{115} & \frac{114}{115} & 0 & 0 & 0 \\ \frac{1}{115} & \frac{113}{114} & \frac{1}{13110} & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \frac{3545}{3546} & \frac{1}{3546} \\ \end{array} \right) \end{align*} \begin{align*} &\gamma=\left( \begin{array}{ccccc} \frac{610}{69} & 0 & 0 & 0 & 0 \\ -\frac{160}{173} & \frac{595}{111} & 0 & 0 & 0 \\ -\frac{311}{70} & \frac{441}{73} & \frac{189}{199} & 0 & 0 \\ -\frac{217}{66} & \frac{112}{19} & -\frac{27}{77} & \frac{5}{29} & 0 \\ -\frac{74}{19} & -\frac{57}{170} &\gamma_{5,2}&\gamma_{5,3} &\gamma_{5,4} \\ \end{array} \right) \end{align*} \begin{align*} &\gamma_{5,2}= \frac{27599202697564967426193528799389088322431660960894226697353773}{55603292 46722053475268349219652442474861405377150945121271770}\\ &\gamma_{5,3}=-\frac{6574154392783282010412843524961302252246503083889334075630691}{38168230 55742094630057499533958025814684868909684511201318550}\\ &\gamma_{5,4}=\frac{171203297785745115976269476294176405081593050042682866831}{2228131216820 6942200121216898248585524748402139413623674} \end{align*} 
\fi
\section{Introduction} Phylogenetics is the study of the evolutionary history of biological species. Traditionally such a history is represented by a phylogenetic tree. However, hybridization and horizontal gene transfer, both so-called \emph{reticulation} events, can lead to multiple seemingly conflicting trees representing the evolution of different parts of the genome \cite{mallet_how_2016, soucy_horizontal_2015}. Directed acyclic networks can be used to combine these trees into a more complete representation of the history \cite{bapteste_networks_2013}. Reticulations are represented by vertices with in-degree greater than one. Therefore, an important problem is how to construct such a network based on a set of input trees that are known to represent the evolutionary history for different parts of the genome. The network should display all of these input trees. In general there are many solutions to this problem, but in accordance with the parsimony principle we are especially interested in the most simple solutions to the problem. These are the solutions with a minimal number of reticulations. Finding a network for which the number of reticulations, also called the \emph{hybridization number}, is minimal now becomes an optimization problem. This problem is NP-complete, even for only two binary input trees \cite{bordewich_computing_2007-2}. The problem is fixed parameter tractable for an arbitrary set of non-binary input trees if either the number of trees or the out-degree in the trees is bounded by a constant \cite{van_iersel_kernelizations_2016-1}. For a set of two binary input trees an FPT algorithm with a reasonable running time exists \cite{bordewich_computing_2007}. For more than two input trees theoretical FPT algorithms and practical heuristic algorithms exist, but no FPT algorithm with a reasonable running time is known. That is why we are interested in slightly modifying the problem to make it easier to solve. 
One way to do this is by restricting the solution space to the class of tree-child networks, in which each non-leaf vertex has at least one outgoing arc that does not enter a reticulation \cite{cardona_comparison_2007}. The minimum hybridization number over all tree-child networks that display the input trees is called the \emph{tree-child hybridization number}. These networks can be characterized by so-called cherry picking sequences \cite{linz_attaching_2019}. This characterization can be used to create a fixed parameter tractable algorithm for this restricted version of the problem for any number of binary input trees with time complexity $O((8k)^k\cdot poly(n, m))$ where $k$ is the tree-child hybridization number, $n$ is the number of leaves and $m$ is the number of input trees \cite{van_iersel_practical_2019}. The solution space can be reduced even further \cite{humphries_cherry_2013}, leading to the problem of finding the \emph{temporal hybridization number}. The extra constraints enforce that each species can be placed at a certain point in time such that evolution events take a positive amount of time and that reticulation events can only happen between species that live at the same time. For the problem of computing the temporal hybridization number a cherry picking characterization exists too, and it can be used to develop a fixed parameter tractable algorithm for problems with two binary input trees with time complexity $O((7k)^k\cdot poly(n, m))$ where $k$ is the temporal hybridization number, $n$ is the number of leaves and $m$ is the number of input trees \cite{humphries_cherry_2013}. In this paper\xspace we introduce a faster algorithm for solving this problem in $O(5^k \cdot n \cdot m)$ time using the cherry picking characterization. Moreover, this algorithm works for any number of binary input trees. A disadvantage of the temporal restrictions is that in some cases no solution satisfying the restrictions exists.
In fact, determining whether such a solution exists is an NP-hard problem \cite{humphries_complexity_2013,docker_deciding_2019}. Because of this, our algorithm will not find a solution network for all problem instances. However, we show that it is possible to find a network with a minimum number of non-temporal arcs, thereby finding a network that is `as temporal as possible'. For that reason we additionally introduce an algorithm that also works for non-temporal instances. This algorithm is a combination of the algorithm for tree-child networks and the one for temporal networks introduced here. In practical data sets, the trees for parts of the genome are often non-binary. This can be either due to simultaneous divergence events or, more commonly, due to uncertainty in the order of divergence events \cite{linz_hybridization_2009}. This means that many real-world datasets contain non-binary trees, so it is very useful to have algorithms that allow for non-binary input trees. While the general hybridization number problem is known to be FPT when either the number of trees or the out-degree of the trees is bounded by a constant \cite{van_iersel_kernelizations_2016-1}, an FPT algorithm with a reasonable running time ($O(6^kk! \cdot poly(n))$) is only known for an input of two trees \cite{piovesan_simple_2013}. For the temporal hybridization number problem, however, no such algorithm was known until recently. In this paper\xspace the first FPT algorithm for constructing optimal temporal networks based on two non-binary input trees with running time \leo{$O(6^kk!\cdot k\cdot n^2)$} is introduced. \leo{We implemented and tested all new algorithms~\cite{sjb_implementation}.} The structure of the paper is as follows. First we introduce some common theory and notation in \cref{sec:preliminaries}. In \cref{sec:algorithm} we present a new algorithm for the temporal hybridization number of binary trees, prove its correctness and analyse the running time.
In \cref{sec:non_temporal} we combine the algorithm from \cref{sec:algorithm} with the algorithm from \cite{van_iersel_practical_2019} to obtain an algorithm for constructing tree-child networks with a minimum number of non-temporal arcs. In \cref{sec:non_binary_trees} we present the algorithm for the temporal hybridization number for two non-binary trees. In \cref{sec:implementation} we conduct an experimental analysis of the \leo{algorithms}. \section{Preliminaries} \label{sec:preliminaries} \subsection{Trees} A \emph{rooted binary phylogenetic $X$-tree} $\mathcal{T}$ is a rooted binary tree for which the leaf set is \leo{equal to} $X$ with $|X|=n$. Because we will mostly use rooted binary phylogenetic trees in this paper\xspace, we will just refer to them as \emph{trees}. Trees that are not necessarily binary are only mentioned in \cref{sec:non_binary_trees}, and there we will explicitly call them non-binary trees. Each of the leaves of a tree \leo{is an element} of $X$. We will also refer to the set of \leo{leaves} in $\mathcal{T}$ as $\mathcal{L}(\mathcal{T})$. For a tree $\mathcal{T}$ and a set of leaves $A$, the notation $\mathcal{T}\setminus A$ refers to the tree obtained by removing all leaves \leo{that are in} $A$ from $\mathcal{T}$ and repeatedly contracting all vertices with both in- and out-degree one. Observe that $\left(\mathcal{T}\setminus \{x\}\right)\setminus \{y\} = \mathcal{T}\setminus \{x,y\} = \left(\mathcal{T}\setminus \{y\}\right)\setminus \{x\}$. We will often use $T$ to refer to a set of $m$ trees $\mathcal{T}_1,\ldots, \mathcal{T}_m$. We will write $T\setminus A$ for $\{\mathcal{T}_1\setminus A,\ldots, \mathcal{T}_m\setminus A \}$ and $\mathcal{L}(T)=\cup_{i=1}^m\mathcal{L}(\mathcal{T}_i)$. \subsection{Temporal networks} A \emph{network} \leo{on~$X$} is a rooted acyclic directed graph satisfying: \begin{enumerate} \item The root $\rho$ has in-degree $0$ and an out-degree not equal to $1$.
\item The \emph{leaves} are the nodes with out-degree zero. \leo{The set of leaves is~$X$.} \item The remaining vertices are \emph{tree vertices} or \emph{hybridization vertices}: \begin{enumerate} \item A tree vertex has in-degree $1$ and out-degree at least $2$. \item A hybridization vertex (also called \emph{reticulation}) has out-degree $1$ and in-degree at least $2$. \end{enumerate} \end{enumerate} We will call the arcs ending in a hybridization vertex \emph{hybridization arcs}. All other arcs are \emph{tree arcs}. A network is a \emph{tree-child} network if every tree vertex has at least one outgoing tree arc. We say that a network $\mathcal{N}$ on $X$ displays a set of trees $T$ on $X'$ with $X'\subseteq X$ if every tree in $T$ can be obtained from $\mathcal{N}$ by removing edges and vertices and contracting vertices with both in-degree $1$ and out-degree $1$. For a set of leaves $A$ we define $\mathcal{N}\setminus A$ to be the network obtained from $\mathcal{N}$ by removing all leaves in $A$ and afterwards removing all nodes with out-degree zero and contracting all nodes with both in- and out-degree one. \begin{figure} \centering \begin{subfigure}[b]{.25\textwidth} \includegraphics{build/figures/example_1_a} \caption{ \label{subfig:first_tree}} \end{subfigure} \begin{subfigure}[b]{.25\textwidth} \includegraphics{build/figures/example_1_b} \caption{ \label{subfig:second_tree} } \end{subfigure} \begin{subfigure}[b]{.45\textwidth} \includegraphics{build/figures/network} \caption{\label{subfig:network}} \end{subfigure} \caption{The binary trees in (a) and (b) are both displayed by the network in (c). } \label{fig:simple_example} \end{figure} For a tree-child network $\mathcal{N}$, the \emph{hybridization number} $r(\mathcal{N})$ is defined as \begin{align*} r(\mathcal{N})=\sum_{v\neq \rho}(d^-(v)-1) \end{align*} where $d^-(v)$ is the in-degree of a vertex $v$ and $\rho$ is the root of $\mathcal{N}$.
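Concretely, with a network stored as a list of directed arcs, the quantity above is a one-liner (a sketch with a made-up toy network; the encoding and vertex names are ours, not from the paper):

```python
from collections import Counter

def hybridization_number(arcs):
    """r(N) = sum over non-root vertices v of (indeg(v) - 1).
    `arcs` is a list of directed arcs (u, v); the root is the unique
    vertex of in-degree 0 and so contributes nothing to the sum."""
    indeg = Counter(v for _, v in arcs)
    return sum(d - 1 for d in indeg.values())

# Toy network: one reticulation h with the two parents a and b.
toy = [("rho", "a"), ("rho", "b"),
       ("a", "h"), ("b", "h"),
       ("a", "x"), ("b", "y"), ("h", "z")]
```

Here `hybridization_number(toy)` returns `1`: only the reticulation `h` has in-degree above one.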
A tree-child network $\mathcal{N}$ with set of vertices $V$ is \emph{temporal} if there exists a map $t:V\to \mathbb{R}^+$, called a temporal labelling, such that for all $u,v\in V$ we have $t(u)=t(v)$ when $(u,v)$ is a hybridization arc and $t(u)<t(v)$ when $(u,v)$ is a tree arc. In \cref{fig:temporal_and_non_temporal_network} both a temporal and a non-temporal network are shown. \begin{figure} \begin{subfigure}[b]{.45\textwidth} \includegraphics[scale=.9]{build/figures/network_temporal} \caption{ A temporal labeling is shown in the network above, asserting that the network is temporal. \label{subfig:network_temporal} } \end{subfigure} \hfill \begin{subfigure}[b]{.45\textwidth} \includegraphics[scale=.9]{build/figures/network_non_temporal} \caption{ No temporal labeling exists for this network. Therefore the network is not temporal. \label{subfig:network_non_temporal}} \end{subfigure} \caption{\label{fig:temporal_and_non_temporal_network}} \end{figure} For a set of trees $T$ we define the minimum temporal-hybridization number as \begin{align*} h_t(T)=\min \{r(\mathcal{N}):\mathcal{N}\text{ is a temporal network that displays }T \} \end{align*} This definition leads to the following decision problem. \problem{Temporal hybridization}{A set of trees $T$ and an integer $k$}{Is $h_t(T)\leq k$?} Note that there are sets of trees such that no temporal network exists that displays them. In \cref{fig:non_temporal_trees} an example is given. For such a set $T$ we have $h_t(T)=\infty$. \begin{figure} \centering \includegraphics{build/figures/non_temporal_trees} \caption{ No temporal network that displays these trees exists. \label{fig:non_temporal_trees} } \end{figure} \subsection{Cherry picking sequences} Temporal networks can now be characterized by so-called cherry-picking sequences \cite{humphries_cherry_2013}. A \emph{cherry} is a set of children of a tree vertex that only has leaves as children. So for binary trees a cherry is a pair of leaves. 
We will write $(a,b)\in \mathcal{T}$ if $\{a,b\}$ is a cherry of $\mathcal{T}$ and $(a,b)\in T$ if there is a $\mathcal{T}\in T$ with $(a,b)\in \mathcal{T}$. First we introduce some notation to make it easier to speak about cherries. \begin{defn} \label{def:ht} For a set of binary trees $T$ on the same taxa, define $H(T)$ to be the set of leaves that are in a cherry in every tree. \end{defn} If two leaves are in a cherry together we call them \emph{neighbors}. We also introduce notation to speak about the neighbors of a given leaf: \begin{defn} Define $N_\mathcal{T}(x) =\{y\in\mathcal{X}: (y,x)\in \mathcal{T} \}$. For a set of trees $T$ define $N_T(x)=\cup_{\mathcal{T}\in T}N_\mathcal{T}(x)$. \end{defn} \begin{defn} For a set of binary trees $T$ containing a leaf $x$ define $w_T(x)=|N_T(x)| -1$. We will also call this the \emph{weight} of $x$ in $T$. \label{def:weight} \end{defn} Using these definitions, we can now define cherry picking sequences. \begin{defn} A sequence of leaves $s=(s_1,s_2,\ldots, s_n)$ is a \emph{cherry picking sequence} (CPS) for a set of binary trees $T$ on the same set of taxa if it contains all leaves of $T$ exactly once and if for all $i\in [n-1]$ we have $s_i \in H(T\setminus \{s_1, \ldots, s_{i-1} \})$. The weight $w_T(s)$ of the sequence is defined as $w_T(s)=\sum_{i=1}^{n-1}w_{T\setminus \{s_1, \ldots, s_{i-1} \}} (s_{i})$. \label{def:cps} \end{defn} \begin{exmp} \label{exmp:simple_example} For the two trees in \cref{fig:simple_example}, $\leo{(\boldsymbol{b},e,\boldsymbol{c},d,a)}$ is a minimum weight cherry-picking sequence of weight $2$. Leaves $b$ and $c$ (indicated in bold) have weight $1$ and the rest of the leaves have weight $0$ in the sequence. \end{exmp} For a cherry picking sequence $s$ with $s_i=x$ we say that $x$ is \emph{picked} in $s$ at index $i$. 
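The definitions above translate directly into code. The following toy sketch (nested-tuple binary trees; the example trees in the tests are hypothetical, not those of the figure above) computes $N_T(x)$, $w_T(x)$, the weight of a candidate cherry picking sequence, and, by brute force over all picking orders, the minimum CPS weight:

```python
# Toy sketch of N_T(x), w_T(x) and CPS weights for nested-tuple binary
# trees. The representation and examples are hypothetical, not the paper's.
from itertools import permutations

def cherries(t):
    """Unordered cherries {a, b} of a nested-tuple binary tree."""
    if not isinstance(t, tuple):
        return set()
    l, r = t
    out = cherries(l) | cherries(r)
    if not isinstance(l, tuple) and not isinstance(r, tuple):
        out.add(frozenset((l, r)))
    return out

def remove_leaf(t, x):
    """T minus {x}: delete leaf x and suppress its former parent."""
    if not isinstance(t, tuple):
        return t
    l, r = t
    if l == x:
        return remove_leaf(r, x)
    if r == x:
        return remove_leaf(l, x)
    return (remove_leaf(l, x), remove_leaf(r, x))

def weight(trees, x):
    """w_T(x) = |N_T(x)| - 1."""
    n = set()
    for t in trees:
        for c in cherries(t):
            if x in c:
                n |= c - {x}
    return len(n) - 1

def cps_weight(trees, seq):
    """Total weight of seq if it is a valid CPS, else None."""
    total = 0
    for x in seq[:-1]:
        if not all(any(x in c for c in cherries(t)) for t in trees):
            return None                 # x is not in a cherry in every tree
        total += weight(trees, x)
        trees = [remove_leaf(t, x) for t in trees]
    return total

def min_cps_weight(trees, leaves):
    """Brute-force h_t(T): try every picking order (tiny inputs only)."""
    ws = [w for p in permutations(leaves)
          if (w := cps_weight(trees, p)) is not None]
    return min(ws) if ws else None
```

On the two toy trees $((a,b),(c,d))$ and $((a,c),(b,d))$, every valid sequence costs at least $1$ for each of its first two picks, and the brute-force minimum is $2$.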
\begin{thm}[{\cite[Theorem 1, Theorem 2]{humphries_cherry_2013}}] \label{lem:exists_cherry_sequence} Let $T$ be a set of trees on $\mathcal{X}$. There exists a temporal network $\mathcal{N}$ that displays $T$ with $r(\mathcal{N})=k$ if and only if there exists a cherry-picking sequence $s$ for $T$ with $w_T(s)=k$. \end{thm} This has been proven in \cite[Theorem 1, Theorem 2]{humphries_cherry_2013}. The proof works by constructing a cherry picking sequence from a temporal network and vice versa. Here, we only repeat the construction to aid the reader, and refer to \cite{humphries_cherry_2013} for the proof of correctness. The construction of a cherry picking sequence $s$ from a temporal network $\mathcal{N}$ with temporal labelling $t$ works in the following way: For $i=1$ choose $s_i$ to be a leaf $x$ of $\mathcal{N}$ such that $t(p_x)$ is maximal, where $p_x$ is the parent of $x$ in $\mathcal{N}$. Then increase $i$ by one and again choose $s_i$ to be a leaf $x$ of $\mathcal{N}\setminus \{s_1, \ldots, s_{i-1} \}$ that maximizes $t(p_x)$, where $p_x$ is the parent of $x$ in $\mathcal{N}\setminus \{s_1, \ldots, s_{i-1} \}$. In \cite[Theorem 1, Theorem 2]{humphries_cherry_2013} it is shown that $s$ is then a cherry picking sequence with $w_T(s)=r(\mathcal{N})$. The construction of a temporal network $\mathcal{N}$ from a cherry picking sequence $s$ is somewhat more technical: for a cherry picking sequence $s_1,\ldots, s_n$, define $\mathcal{N}_{n}$ to be the tree consisting of only a root and \leo{leaf~$s_n$}. Now obtain $\mathcal{N}_{i}$ from $\mathcal{N}_{i+1}$ by adding a node $s_i$ and a new node $p_{s_i}$, adding the edge $(p_{s_i},s_i)$, subdividing $(p_x,x)$ with a node $q_x$ for every $x\in N_{T\setminus \{s_1, \ldots, s_{i-1} \}}(s_i)$, adding an edge $(q_x,p_{s_i})$, and finally suppressing all nodes with in- and out-degree one. Then $\mathcal{N}=\mathcal{N}_1$ displays $T$ and $r(\mathcal{N})=w_T(s)$. 
The theorem implies that the weight of a minimum weight CPS is equal to the temporal hybridization number of the trees. Because finding an optimal temporal network for a set of trees is an NP-hard problem \cite{humphries_complexity_2013}, this implies that finding a minimum weight CPS is an NP-hard problem. \begin{defn} We call two sets of trees $T$ and $T'$ \emph{equivalent} if a bijection from $\mathcal{L}(T)$ to $\mathcal{L}(T')$ exists that transforms $T$ into $T'$. We call them equivalent because they have the same structure and consequently the same (temporal) hybridization \leo{number}; the biological interpretation, however, can be different. We will write this as $T\simeq T'$. \label{defn:equivalent} \end{defn} \section{Algorithm for constructing temporal networks from binary trees} \label{sec:algorithm} \label{section:foundation_of_v3} Finding a cherry picking sequence comes down to deciding in which order to pick the leaves. Our algorithm relies on the observation that this order does not always matter. Intuitively, the order of two leaves in a cherry picking sequence only matters if they appear in a cherry together somewhere during the execution of the sequence. Therefore the algorithm keeps track of the pairs of leaves for which the order of picking matters. We will make this more precise in the remainder of this section. The algorithm works by branching on the choice of which element of a pair to pick first. These choices are stored in a so-called constraint set. Each call to the algorithm branches into subcalls with more constraints added to the constraint set. As soon as it is known that a certain leaf has to be picked before all of its neighbors and is in a cherry in all of the trees, the leaf can be picked. \begin{defn} Let $C\subseteq \mathcal{L}(T) \times \mathcal{L}(T)$. We call $C$ a \emph{constraint set} on $T$ if every pair $(a,b)\in C$ is a cherry in $T$. 
A cherry picking sequence $s=(s_1,\ldots, s_k)$ of $T$ \emph{satisfies} $C$ if for all $(a,b)\in C$, we have $s_i=a$ and $(a,b)\in T'$ and $w_{T'}(a)>0$ with $T'=T\setminus \{s_1,\ldots , s_{i-1} \}$ for some $i$. \end{defn} Intuitively, a cherry picking sequence satisfies a constraint set if for every pair $(a,b)$ in the set, $a$ is picked with positive weight and $(a,b)$ is a cherry just before picking $a$. This implies that $a$ occurs in the cherry picking sequence before $b$. \markj{We now prove a series of results about what sets of constraints are valid, which will then be used to guide our algorithm.} \begin{figure} \centering \includegraphics[scale = 0.2]{build/figures/CherryAdjacencyExampleOneConstraint.eps} \caption{ \markj{ An example showing the neighbour relation for the trees in Figure~\ref{fig:simple_example}, together with a constraint $(b,d)$. Two elements $x,y \in X$ are depicted as adjacent if $x \in N_T(y)$, i.e.\ if $x$ and $y$ appear in a cherry together. An arc from $x$ to $y$ indicates the presence of a constraint $(x,y)$. } \label{fig:cherry_adjacency_example} } \end{figure} \begin{obs}\label{obs:ConstraintThreeCases} Let $s$ be a cherry picking sequence for $T$, let $x$ be a leaf with $w_T(x) > 0$, and let $a,b\in N_T(x)$ be distinct. Then $s$ satisfies one of the following constraint sets: \\$\{(a,x)\}, \{(b,x)\}, \{(x,a),(x,b)\}$. \label{lem:branch_in_three} \end{obs} \begin{proof} Let $i$ be the lowest index such that $s_i \in\{x,a,b\}$. If $s_i=x$, then $(x,a)\in T\setminus \{s_1,\ldots, s_{i-1} \}$ and $(x,b)\in T\setminus \{s_1,\ldots, s_{i-1} \}$, so $s$ satisfies $\leo{\{(x,a),(x,b)\}}$. If $s_i=a$, then there is a $\mathcal{T}\in T\setminus \{s_1,\ldots, s_{i-1} \}$ with $(x,b)\in \mathcal{T}$, so $(a,x)\notin \mathcal{T}$, which implies that $w_{T\setminus \{s_1,\ldots, s_{i-1} \}}(s_i)>0$, so $s$ satisfies $\{(a,x)\}$. Similarly if $s_i=b$ then $s$ satisfies $\{(b,x)\}$. 
\end{proof} \begin{exmp}\label{ex:ConstraintThreeCases} The trees in \cref{subfig:first_tree} and \cref{subfig:second_tree} contain the cherries $(a,b)$ and $(d,b)$. So by \cref{lem:branch_in_three} every cherry picking sequence for these trees satisfies one of the constraint sets $\{(a,b)\}, \{(d,b)\}, \{(b,a),(b,d)\}$. For example, $\leo{({\bf b},d,{\bf c},e,a)}$ is a cherry picking sequence of weight $2$ for these trees. This sequence satisfies the constraint set $\{(b,a),(b,d)\}$. \markj{See Figure~\ref{fig:constraint_three_cases}.} \end{exmp} \begin{figure} \centering \hfill \begin{subfigure}{.3\textwidth} \includegraphics[scale=.3]{build/figures/CherryAdjacencyExampleOneInA.eps} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \includegraphics[scale=.3]{build/figures/CherryAdjacencyExampleOneInD.eps} \end{subfigure} \hfill \begin{subfigure}{.3\textwidth} \includegraphics[scale=.3]{build/figures/CherryAdjacencyExampleTwoOut.eps} \end{subfigure} \hfill \caption{ \markj{ Illustration of Example~\ref{ex:ConstraintThreeCases}, showing the possible constraint sets on $a,b,d$ implied by Observation~\ref{obs:ConstraintThreeCases}. } \label{fig:constraint_three_cases} } \end{figure} This observation implies that the problem can be reduced to three subproblems, corresponding to either appending $\{(a,x)\}$, $\{(b,x)\}$ or $\{(x,a),(x,b)\}$ to $C$. As we will see, this is used by the algorithm. It is possible to implement an algorithm using only this rule, but the running time of the algorithm can be improved by using a second rule that branches into only two subproblems when it is applicable. The rule relies on the following observation. Note that we will write $\pi_i(C)$ for the set obtained by projecting every element of $C$ to the $i$'th coordinate. \begin{obs}\label{obs:ConstraintTwoCases} If $C$ is satisfied by $s$ then for all $x\in \pi_1(C)$ and $y\in N_T(x)$ we have that either $C\cup \{(y,x)\}$ or $C\cup \{ (x,y)\}$ is also satisfied by $s$. 
\label{lem:branch_in_two} \end{obs} \begin{proof} If $x\in \pi_1(C)$ then $C$ contains a pair $(x, a)$. If $a=y$ it is trivial that $s$ satisfies $C\cup \{ (x,y)\}=C$. Otherwise \cref{lem:branch_in_three} implies that $s$ satisfies one of the constraint sets $\{(a,x)\}, \{(y,x)\}, \{(x,a),(x,y)\}$. Because $s$ satisfies $C$ and $(x,a)\in C$, $s$ satisfies $\{(x,a)\}$ and hence cannot satisfy $\{(a,x)\}$. So $s$ will satisfy either $\{(y,x)\}$ or $\{(x,a),(x,y)\}$. \end{proof} Using this observation we can let the algorithm branch into two paths by either adding $(x,y)$ or $(y,x)$ to the constraint set $C$ if $x\in\pi_1(C)$. \begin{exmp}\label{ex:ConstraintTwoCases} \leo{Consider again the situation in Example~\ref{ex:ConstraintThreeCases}. Suppose we guess that the solution satisfies the constraint set $\{(d,b)\}$. Then we have~$d\in\pi_1(C)$. Hence, we are in the situation of Observation~\ref{obs:ConstraintTwoCases} and we can conclude that either $(d,e)$ or $(e,d)$ can be added to the constraint set~$C$. See Figure~\ref{fig:constraint_two_cases}.} \end{exmp} \begin{figure} \centering \begin{subfigure}{.4\textwidth} \includegraphics[scale=.65]{build/figures/CherryAdjacencyExampleInCase.eps} \end{subfigure} \begin{subfigure}{.4\textwidth} \includegraphics[scale=.65]{build/figures/CherryAdjacencyExampleOutCase.eps} \end{subfigure} \caption{ \markj{ Illustration of Example~\ref{ex:ConstraintTwoCases}, showing the two possible constraints on $d$ and $e$ implied by Observation~\ref{obs:ConstraintTwoCases}, in the case that there already exists a constraint $\leo{(d,b)} \in C$ and thus $d \in \pi_1(C)$. } \label{fig:constraint_two_cases} } \end{figure} We define $G(T,C)$ to be the set of cherries for which there is no constraint in $C$, so $G(T,C)=\{(x,y):(x,y) \in T \land (x,y), (y,x) \notin C \}$. Observe that $(x,y)\in G(T,C)$ is equivalent to $(y,x) \in G(T,C)$. \markj{Before proving the next result about constraints, we need the following lemma. 
This} states that if we have a set of trees, a leaf that is in a cherry in all of the trees, and a corresponding cherry picking sequence, then the following holds: either the occurrence of this leaf can be moved to the front of the sequence without affecting the weight of the sequence, or a neighbor of this leaf occurs earlier in the sequence. \begin{lem} \label{lem:twoconditions} Let $(s_1,s_2,\ldots)$ be a cherry picking sequence for a set of trees $T$ that satisfies constraint set $C$. Let $x\in H(T)$. Then at least one of the following statements is true: \begin{enumerate} [ {(}1{)} ] \item $\exists i: s_i=x $ and $s'=(s_i,s_1,\ldots ,s_{i-1},s_{i+1},\ldots)$ is a cherry picking sequence for $T$ satisfying $C$ and $w(s)=w(s')$. \item If $s_i=x$ then $\exists j: s_j \in N_T(x)$ such that $j<i$. \end{enumerate} \end{lem} \begin{proof} Let $r$ be the smallest number such that $s_r\in N_T(x) \cup \{x\}$. In case $s_r\neq x$ it follows directly that condition (2) holds for $j=r$. For $s_r=x$ we will prove that condition (1) holds with $i=r$. \markj{The key idea is that, because $s_i$ is not in a cherry with any of $s_1,\ldots, s_{i-1}$, removing $s_i$ first will not have any effect on the cherries involving $s_1,\ldots, s_{i-1}$.} \markj{More formally,} take an arbitrary tree $\mathcal{T}\in T$. Now take arbitrary $j,k$ with $s'_j=s_k$. We claim that for an arbitrary $z$ we have \markj{$(s'_j,z) \in \mathcal{T}\setminus\{s'_1,\ldots, s'_{j-1}\}$} if and only if \markj{$(s_k,z) \in \mathcal{T}\setminus\{s_1,\ldots, s_{k-1}\}$}. For $s'_j=s'_1=s_i=s_k$ this is true because none of the elements $s_1,\ldots, s_{i-1}$ are in $N_T(s_i)$, so for each $z$ we have $(s'_1,z)\in \mathcal{T}$ if and only if \markj{$(s_i,z)\in \mathcal{T}\setminus\{s_1,\ldots,s_{i-1}\}$}. For $j$ with $j<i$ we have $s'_{j+1}=s_j$. 
Because $s_i\notin N_T(s_j)$ we have that $(s_j,z) \in \mathcal{T}\setminus \{s'_1,\ldots, s'_j\}=\mathcal{T}\setminus\{s_1,\ldots, s_{j-1},s_i\}$ if and only if $(s_j,z) \in \mathcal{T}\setminus \{s_1,\ldots, s_{j-1}\}$. For $k>i$ we have $j=k$ and also $\mathcal{T}\setminus \{s'_1, \ldots, s'_{j-1}\}=\mathcal{T}\setminus \{s_1, \ldots, s_{j-1}\}$ because $\{s_1, \ldots, s_{j-1}\}=\{s'_1, \ldots, s'_{j-1}\}$. It directly follows that \markj{$(s'_j,z)\in \mathcal{T}\setminus\{s'_1, \ldots, s'_{j-1}\}$} if and only if \markj{$(s_j,z)\in \mathcal{T}\setminus\{s_1, \ldots, s_{j-1}\}$.} Now because we know that for each $k$ we have \markj{$s_k\in H(T\setminus\{s_1,\ldots, s_{k-1}\})$} and $s_k=s'_j$ is in exactly the same cherries in \markj{$T\setminus\{s_1,\ldots, s_{k-1}\}$} as in \markj{$T\setminus\{s'_1,\ldots, s'_{j-1}\}$}, we know that \markj{$s'_j\in H(T\setminus\{s'_1,\ldots, s'_{j-1}\})$}, that \markj{$w_{T\setminus\{s'_1,\ldots, s'_{j-1}\}}(s'_j)=w_{T\setminus\{s_1,\ldots, s_{k-1}\}}(s_k)$} and that $s'$ satisfies $C$. This implies that $s'$ is a CPS with $w_T(s)=w_T(s')$. \end{proof} As soon as we know that a leaf in $H(T)$ has to be picked before all its neighbors, we can pick it, as stated by the following lemma. \begin{lem} Suppose $x\in H(T)$ and constraint set $C$ is satisfied by cherry picking sequence $s$ of $T$, with $\{(x,n):n\in N_T(x) \}\subseteq C$. Then there is a cherry picking sequence $s'$ with $s'_1 = x$ and $w(s')=w(s)$. \label{lem:remove_immediately} \end{lem} \begin{proof} This follows from \cref{lem:twoconditions}: statement (2) cannot be true, because for every $j$ with $s_j\in N_T(x)$ we have $(x, s_j)\in C$ and therefore $i < j$ where $s_i=x$. So statement (1) has to hold, which yields a sequence $s'$ with $w(s)=w(s')$ and $s'_1=x$. \end{proof} The following lemma shows that we can also safely remove all leaves that are in a cherry with the same leaf in every tree. 
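The satisfaction relation used throughout these lemmas can be checked mechanically. A self-contained toy sketch (nested-tuple binary trees; the representation and example trees are hypothetical, not the paper's code):

```python
# Sketch: does a cherry picking sequence satisfy a constraint set C?
# For each (a, b) in C, at the moment a is picked, (a, b) must still be a
# cherry in some tree and a must have positive weight.

def cherries(t):
    if not isinstance(t, tuple):
        return set()
    l, r = t
    out = cherries(l) | cherries(r)
    if not isinstance(l, tuple) and not isinstance(r, tuple):
        out.add(frozenset((l, r)))
    return out

def remove_leaf(t, x):
    if not isinstance(t, tuple):
        return t
    l, r = t
    if l == x:
        return remove_leaf(r, x)
    if r == x:
        return remove_leaf(l, x)
    return (remove_leaf(l, x), remove_leaf(r, x))

def weight(trees, x):
    n = set()
    for t in trees:
        for c in cherries(t):
            if x in c:
                n |= c - {x}
    return len(n) - 1

def satisfies(trees, seq, C):
    for (a, b) in C:
        cur, ok = list(trees), False
        for x in seq:
            if x == a:
                ok = (any(frozenset((a, b)) in cherries(t) for t in cur)
                      and weight(cur, a) > 0)
                break
            cur = [remove_leaf(t, x) for t in cur]
        if not ok:
            return False
    return True
```

On the toy trees $((a,b),(c,d))$ and $((a,c),(b,d))$, the sequence $(a,d,b,c)$ satisfies $\{(a,b)\}$ and $\{(d,b)\}$, but not $\{(b,c)\}$, since $b$ is picked with weight $0$.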
\begin{lem} Let $s$ be a cherry picking sequence for $T$ satisfying constraint set $C$ with $x\notin \pi_1(C)$ and $x\notin \pi_2(C)$. If $x\in H(T)$ and $w_T(x)=0$, then there is a cherry picking sequence $s'$ with $s'_1 = x$ and $w(s')=w(s)$ satisfying $C$. \label{lem:remove_trivial} \end{lem} \begin{proof} Because $w_T(x)=0$ we have $N_{T}(x)=\{y\}$. Then from \cref{lem:twoconditions} it follows that a sequence $s'$ exists such that either $s''=(x)|s'$ or $s''=(y)|s'$ is a cherry picking sequence for $T$ and $w_T(s'')=w(s)$ and $s''$ satisfies $C$. However, because the positions of $x$ and $y$ in the trees are equivalent (i.e.\ swapping $x$ and $y$ does not change $T$), both are true. \end{proof} \markj{We are almost ready to describe our algorithm. There is one final piece to introduce first: the measure $P(C)$. This is a measure on a set of constraints $C$, which will be used to provide a termination condition for our algorithm. We show below that $P(C)$ provides a lower bound on the weight of any cherry picking sequence satisfying $C$, and so if during any recursive call to the algorithm $P(C)$ is greater than the desired weight, we may stop that call.} \label{section:the_algorithm} \begin{defn} Let $\psi = \frac{\log(2)}{\log(5)} \markj{\simeq 0.4307}$. Let $P(C) = \psi \cdot |C| + (1-2\psi) |\pi_1(C)|$. \end{defn} \begin{lem} If cherry picking sequence $s$ for $T$ satisfies $C$, then $w_T(s)\geq P(C)$. \label{lem:bound_w_pc} \end{lem} \begin{proof} For $x=s_i$ with $i<n$ we prove that for $C_x:=\{(a,b):(a,b)\in C \land a=x \}$ we have $w_{T\setminus \{s_1, \ldots, s_{i-1} \}}(x)\geq P(C_x)$. If $|C_x|=0$, then $P(C_x)=0$ and the inequality is trivial. If $|C_x|=1$, then there is some $(x,b)\in C$, which implies that $w_{T\setminus \{s_1, \ldots, s_{i-1} \}}(x)>0$, so $w_{T\setminus \{s_1, \ldots, s_{i-1} \}}(x)\geq |\pi_1(C_x)|=1\geq P(C_x)$. 
Otherwise if $|C_x|\geq 2$, then \markj{$w_{T\setminus \{s_1, \ldots, s_{i-1} \}}(x)=|N_{T\setminus \{s_1, \ldots, s_{i-1} \}}(x)|-1\geq |C_x|-1= \psi \cdot |C_x|-1 + (1-\psi)|C_x| \geq \psi \cdot |C_x|-1 + 2(1-\psi)= \psi \cdot |C_x|+(1-2\psi)=P(C_x)$.} Now the result follows because $w_T(s)=\sum_{i=1}^{n-1}w_{T\setminus \{s_1,\ldots, s_{i-1}\}}(s_i)\geq \sum_{i=1}^{n-1}P(C_{s_i})=P(C)$. \end{proof} \markj{We now present our algorithm, which we split into two parts. The main algorithm is \texttt{CherryPicking}, a recursive algorithm which takes as input parameters a set of trees $T$, a desired weight $k$ and a set of constraints $C$, and returns a cherry picking sequence for $T$ of weight at most $k$ satisfying $C$, if one exists.} \markj{ The second part is the procedure \texttt{Pick}. In this} procedure, zero-weight leaves and leaves whose constraints to all of their neighbors are contained in the constraint set are greedily removed from the trees. \begin{algorithm} \caption{} \label{alg:better_temporal} \begin{algorithmic}[1] \Procedure{CherryPicking}{$T,k, C$} \If {$k-P(C) < 0$} \label{line:if_smaller_pc} \State \Return $\emptyset$ \label{line:first_return} \EndIf \State $T',k', C',p\gets $\Call{Pick}{$T,k,C$} \label{line:callpick} \If{$|\mathcal{L}(T')| =1 $} \State\Return $\{p\}$ \label{line:emptysequence} \ElsIf{$\pi_1(C')\nsubseteq \mathcal{L}(T') $} \State \Return $\emptyset$ \label{line:second_return} \ElsIf{$k'-P(C')\leq 0$} \label{line:if_smaller_pc_2} \State \Return $\emptyset$ \label{line:third_return} \EndIf \\ \State $R\gets \emptyset$ \If{$\exists (x, y) \in G(T',C'): w_{T'}(x)>0 \land x\in \pi_1(C')$ } \label{line:if-statement} \State $R\gets R \cup $ \Call{CherryPicking}{$T'$,$k'$,$C'\cup \{ (x,y ) \}$} \State $R\gets R \cup $ \Call{CherryPicking}{$T'$,$k'$,$C'\cup \{ (y,x ) \}$} \ElsIf{$\exists (x,a) \in G(T',C'): w_{T'}(x)>0 \land x\notin \pi_2(C')$ } \label{line:else-if-statement} \State \text{Choose} $b\neq a$ such that $(x,b)\in G(T',C')$ \label{line:two_elements} \State $R\gets R \cup $ 
\Call{CherryPicking}{$T'$,$k'$,$C'\cup \{ (a,x ) \}$} \State $R\gets R \cup $ \Call{CherryPicking}{$T'$,$k'$,$C'\cup \{ (b,x ) \}$} \State $R\gets R \cup $ \Call{CherryPicking}{$T'$,$k'$,$C'\cup \{ (x,a ), (x,b) \}$} \EndIf \State\Return $\{p|r: r \in R\}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{} \begin{algorithmic}[1] \Procedure{Pick}{$T', k', C'$} \State $(T^{(0)},k_0,C_0)\gets (T',k',C')$ \State $p^{(0)}\gets ()$ \State $i\gets 1$ \While{$\exists x_i\in H(T^{(i-1)}): w_{T^{(i-1)}}(x_i)=0 \lor \{(x_i,n) : n \in N_{T^{(i-1)}}(x_i)\} \subseteq C_{i-1} $} \State $p^{(i)}\gets p^{(i-1)}|(x_i)$ \State $k_{i}\gets k_{i-1}-w_{T^{(i-1)}}(x_i)$ \State $T^{(i)}\gets T^{(i-1)}\setminus \{x_i\}$ \State $C_{i}\gets \{(a,b)\in C_{i-1}: a\neq x_i\}$ \State $i\gets i+1$ \EndWhile \State \Return $T^{(i-1)},k_{i-1},C_{i-1},p^{(i-1)}$ \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Proof of correctness} In this section a proof of correctness will be given. First, some properties of the auxiliary procedure \texttt{Pick} are proven. \begin{obs} Suppose \texttt{Pick}$(T',k', C')$ returns $(T,k,C,p)$. \begin{enumerate} \item There are no $x\in H(T)$ with $w_T(x)=0$. \item There are no $x\in H(T)$ with $\{(x,n) : n \in N_{T}(x)\}\subseteq C$. \end{enumerate} \end{obs} \begin{lem}[Correctness of \texttt{Pick}] Suppose \texttt{Pick}$(T',k', C')$ returns $(T,k,C,p)$. \begin{enumerate} \item If a cherry picking sequence $s'$ of weight at most $k'$ for $T'$ that satisfies $C'$ exists, then a cherry picking sequence $s$ of weight at most $k$ for $T$ that satisfies $C$ exists. \item If $s$ is a cherry picking sequence of weight at most $k$ for $T$ that satisfies $C$ then $p|s$ is a cherry picking sequence for $T'$ of weight at most $k'$ and satisfying $C'$. \end{enumerate} \label{lem:correctness_pick} \end{lem} \begin{proof} We will prove the first claim for $(T,k,C,p)=(T^{(i)}, k_i, C_i, p^{(i)})$ for all $i$ defined in \texttt{Pick}. 
We will prove this by induction on $i$. For $i=0$ the claim is trivial because $(T^{(0)},k_0,C_0)=(T',k',C')$. Now assume the claim is true for $i=i'$ and write $x=x_{i'+1}$. There are two cases to consider: \begin{itemize} \item If we have $\{(x,n):n\in N_{T^{(i')}}(x) \}\subseteq C_{i'}$, we know from \cref{lem:remove_immediately} that if a cherry picking sequence $s$ for $T^{(i')}$ satisfying $C_{i'}$ exists, then also a cherry picking sequence $(x)|s'$ for $T^{(i')}$ that satisfies $C_{i'}$ exists, with $w((x)|s')=w(s)$. Note that this implies that $s'$ is a cherry picking sequence for $T^{(i'+1)}=T^{(i')}\setminus \{x\}$, that $C_{i'+1}=\{c\in C_{i'}: x \notin \{c_1, c_2\}\}$ is satisfied by $s'$, and that $w(s')=w(s)-w_{T^{(i')}}(x)\leq k_{i'}-w_{T^{(i')}}(x)=k_{i'+1}$. So this proves the statement for $i=i'+1$. \item Otherwise we have $w_{T^{(i')}}(x)=0$, $x\notin \pi_1 (C_{i'})$ and $x\notin \pi_2(C_{i'})$. Then the statement for $i=i'+1$ follows directly from \cref{lem:remove_trivial}. \end{itemize} Let $j$ be the maximal value such that $x_j$ is defined in a given invocation of \texttt{Pick}. We will prove the second claim for $(T,k,C,p)=(T^{(i)}, k_{i}, C_{i}, p^{(i)})$ for all $i=0,\ldots, j$ with induction on $i$. For $i=0$ this is trivial. Now assume the claim is true for $i=i'$ and assume $s$ is a cherry picking sequence for $T^{(i'+1)}$ of weight at most $k_{i'+1}$ that satisfies $C_{i'+1}$. Then $x_{i'+1}$ is defined and $x_{i'+1}\in H(T^{(i')})$, so $s'=(x_{i'+1})|s$ is a cherry picking sequence for $T^{(i')}$. Because $w_{T^{(i')}}(x_{i'+1})=k_{i'}-k_{i'+1}$, $s'$ will have weight at most $k_{i'}$. We can write $C_{i'}=C_x\cup C_{-x}$ where $C_x = \{(a,b)\in C_{i'}: a=x_{i'+1} \}$ and $C_{-x}=C_{i'}\setminus C_x$. Note that $s$ satisfies $C_{i'+1}=C_{-x}$, so $s'=(x_{i'+1})|s$ also satisfies $C_{-x}$. Because for every $(a,b)\in C_x$, also $(a,b)\in T^{(i')}$, $s'$ also satisfies $C_{x}$, so $s'$ satisfies $C_{i'}$. 
Now it follows from the induction hypothesis that $p^{(i'+1)}|s=p^{(i')}|s'$ is a cherry picking sequence for $T'$ of weight at most $k'$ and satisfying $C'$. \end{proof} Note that on \cref{line:two_elements} \markj{of \cref{alg:better_temporal}} an element $b\neq a$ with $(x,b) \in G(T', C')$ is chosen. The following lemma states that such an element does indeed exist. \begin{lem} When the algorithm executes \cref{line:two_elements} there exists an element $b\neq a$ with $(x,b) \in G(T', C')$. \label{lem:exists_uncovered_pair} \end{lem} \begin{proof} Because $w_{T'}(x)>0$, there is a $b\neq a$ such that $b\in N_{T'}(x)$. Because $x \notin \pi_2(C')$ we have $(b, x)\notin C'$. If $(x,b)\in C'$ then $x\in \pi_1(C')$, but then $x$ satisfies the if-statement on \cref{line:if-statement} and the algorithm would not have reached this line. Therefore $(x,b)\notin C'$ and so $(x,b)\in G(T',C')$. \end{proof} The proof of correctness of \cref{alg:better_temporal} will be given in two parts. First, in \cref{lem:procedure_returns_sequence} we show that for any feasible problem instance the algorithm will return a sequence. Second, in \cref{lem:returned_sequences_satisfy_demands} we show that every sequence that the algorithm returns is a valid cherry picking sequence for the problem instance. \begin{lem} When a cherry picking sequence of weight at most $k$ that satisfies $C$ exists, \texttt{CherryPicking}$(T,k, C)$ from \Cref{alg:better_temporal} returns a non-empty set. \label{lem:procedure_returns_sequence} \end{lem} \begin{proof} Let $W(k,u)$ be the claim that if a cherry picking sequence $s$ of weight at most $k$ exists that satisfies constraint set $C$ with $n^2-|C|\leq u$, then \leo{calling} \texttt{CherryPicking}$(T,k, C)$ will return a non-empty set. We will prove this claim with induction on $k$ and $n^2-|C|$. For the base case $k=0$: if a cherry picking sequence of weight $0$ exists that satisfies $C$, then $C$ must be empty and all trees must be equal, so after the call to \texttt{Pick} we have $|\mathcal{L}(T')|=1$. 
In this case a sequence is returned on \cref{line:emptysequence}. Note that we can never have a constraint set $C$ with $|C|>n^2$ because $C\subseteq \mathcal{L}(T)^2$. Therefore $W(k,-1)$ is true for all $k$. Now suppose $W(k, n^2-|C|)$ is true for all cases where $0\leq k< k_b$ and all cases where $k=k_b$ and $n^2-|C| \leq u$. We consider the case where a cherry picking sequence $s$ of weight at most $k=k_b+1$ exists for $T$ that satisfies $C$ and $n^2-|C|\leq u+1$. \Cref{lem:bound_w_pc} implies that $k-P(C)\geq 0$, so the condition of the if-statement on \cref{line:if_smaller_pc} will not be satisfied. From \cref{lem:correctness_pick} it follows that a CPS $s'$ of weight at most $k'$ exists for $T'$ that satisfies $C'$. From the way \texttt{Pick} works it follows that either $k'<k$ or $n^2-|C'|= n^2-|C|$. If $|\mathcal{L}(T')|=1$ then a non-empty set is returned on \cref{line:emptysequence} and we have proven $W(k_b+1, u+1)$ to be true for this case. Because $s'$ satisfies $C'$, we know that $\pi_1(C') \subseteq \mathcal{L}(T')$. We know there is a $y\in N_{T'}(s'_1)$ with $(s'_1,y)\notin C'$, because otherwise $s'_1$ would be picked by \texttt{Pick}. Also $s'$ satisfies $C'\cup \{(s'_1,y)\}$, which implies that $k'\geq P(C'\cup \{(s'_1,y)\})>P(C')$, so the condition of the if-statement on \cref{line:if_smaller_pc_2} will not be satisfied. Note that we have $(s'_1, y) \in G(T',C')$, $w_{T'}(s'_1)>0$ and $s'_1\notin \pi_2(C')$. This implies that either the body of the if-statement on \cref{line:if-statement} or the body of the else-if-statement on \cref{line:else-if-statement} will be executed. Suppose the former is true. By \cref{lem:branch_in_two} we know that $s'$ satisfies $C'\cup \{(x,y)\}$ or $C'\cup \{(y,x)\}$. Because $(x,y)\in G(T',C')$ we know $|C'\cup \{(x,y)\}|=|C'\cup \{(y,x)\}|=|C'|+1$ and therefore $n^2-|C'\cup \{(x,y)\}|=n^2-|C'\cup \{(y,x)\}| \leq u$. 
So by our induction hypothesis we know that at least one of the two subcalls will return a sequence, so the main call to the function will also return a sequence. If instead the body of the else-if-statement on \cref{line:else-if-statement} is executed, we know by \cref{lem:branch_in_three} that at least one of the constraint sets $C'_1=C'\cup \{(a,x)\}$, $C'_2=C'\cup \{(b,x)\}$ and $C'_3=C'\cup \{(x,a),(x,b)\}$ is satisfied by $s'$. Note that $|C'_3|\geq |C'_2|=|C'_1|\geq |C'| +1$, so $n^2-|C'_3|\leq n^2-|C'_2|=n^2-|C'_1|\leq u$. By the induction hypothesis it now follows that at least one of the three subcalls will return a sequence, so the main call to the function will also return a sequence. So for both cases we have proven $W(k_b+1, u+1)$ to be true. \end{proof} \begin{lem} Every element in the set returned by \texttt{CherryPicking}$(T,k, C)$ from \Cref{alg:better_temporal} is a cherry picking sequence for $T$ of weight at most $k$ that satisfies $C$. \label{lem:returned_sequences_satisfy_demands} \end{lem} \begin{proof} Consider a certain call to \texttt{CherryPicking}$(T,k, C)$. Assume that the lemma holds for all subcalls to \texttt{CherryPicking}. We claim that during the execution every element that is in $R$ is a cherry picking sequence for $T'$ of weight at most $k'$ that satisfies $C'$. This is true because $R$ starts as the empty set, so the claim holds initially. At each point in the function where sequences are added to $R$, these sequences are elements returned by \texttt{CherryPicking}($T',k', C''$) with $C'\subseteq C''$. By our assumption we know that all of these elements are cherry picking sequences for $T'$ of weight at most $k'$ and satisfy $C''$. The latter implies that every element also satisfies $C'$ because $C'\subseteq C''$. 
The procedure now returns $\{p|r:r\in R \}$ and from \cref{lem:correctness_pick} it follows that all elements of this set are cherry picking sequences for $T$ of weight at most $k$ and satisfying $C$. \end{proof} \subsection{Runtime analysis} \markj{The key idea behind our runtime analysis is that at each recursive call in \cref{alg:better_temporal}, the measure $k-P(C)$ is decreased by a certain amount, and this leads to a bound on the number of times \cref{alg:better_temporal} is called. It is straightforward to get a bound of $O(9^k)$. Indeed, it can be shown that for $k<|C|/2$ no feasible solution exists, and so the algorithm could stop whenever $2k - |C| < 0$. One call to the algorithm results in at most $3$ subcalls, and in each subcall $|C|$ increases by at least one. Then the total number of subcalls to \cref{alg:better_temporal} would be bounded by $O(3^{2k}) = O(9^k)$. By more careful analysis, and using the lower bound of $P(C)$ on the weight of a sequence satisfying $C$, we are able to improve this bound to $O(5^k)$. } We will now state some lemmas that are needed for the runtime analysis of the algorithm. \markj{We first show that the measure $k - P(C)$ will never increase at any point in the algorithm. The only time this may happen is during \texttt{Pick}, as the values of $k$ and $C$ are not otherwise changed, except at the point of a recursive call where constraints are added to $C$ (which cannot decrease $P(C)$). Thus we first show that \texttt{Pick} cannot cause $k - P(C)$ to increase.} \begin{lem} \label{lem:pick_decrease_k} Let $(T',k',C',p)=\texttt{Pick}(T,k,C)$. Then $k'-P(C')\leq k-P(C)$. \end{lem} \begin{proof} We will prove by induction that for the variables $k_i$ and $C_i$ defined in the function body, we have $k_i-P(C_i) \leq k-P(C)$ for all $i$, from which the result follows. Note that for $i=0$ this is trivial. Now suppose the inequality holds for $i$. 
Then we also have \begin{align*} k_{i+1} - P(C_{i+1}) & \leq (k_{i} - w_{T^{(i)}}(x_{i+1})) - (P(C_i)- (w_{T^{(i)}}(x_{i+1})+1) \cdot \psi - (1-2\psi)) \\ &= k_i - P(C_i) - (w_{T^{(i)}}(x_{i+1}) - 1) (1-\psi) \\ &\leq k_i - P(C_i) \\ &\leq k - P(C) \end{align*} \end{proof} \FloatBarrier \markj{The next lemma will be used later to show that a recursive call to \texttt{CherryPicking} always decreases $k-P(C)$ by a certain amount.} \begin{lem} For $a$ and $b$ on \cref{line:two_elements} \markj{of \cref{alg:better_temporal}} it holds that $a\notin \pi_1(C')$ \markj{and $b\notin \pi_1(C')$.} \label{lem:algline_a_not_before} \end{lem} \begin{proof} Suppose $a\in \pi_1(C')$. Then $(a,z)\in C'$ for some $z\in N_{T'}(a)$. If $w_{T'}(a)>0$ then $a$ satisfies the conditions in the if-statement on \cref{line:if-statement}, so \cref{line:two_elements} would not be executed. If $w_{T'}(a)=0$ then we must have $|N_{T'}(a)| = 1$, so $N_{T'}(a) = \{x\}$, which implies that $z=x$. But $(a,x)\notin C'$ because $(x,a)\in G(T',C')$, which contradicts that $(a,z)\in C'$. So $a\notin \pi_1(C')$. Because of symmetry, the same argument holds for $b$. \end{proof} \FloatBarrier \markj{We now give the main runtime proof.} \begin{lem} \texttt{CherryPicking} from \Cref{alg:better_temporal} has a time complexity of $O(5^k \cdot knm)$. \label{lem:running_time_better_temporal} \end{lem} \begin{proof} \sander{Let $n$ be the number of leaves and $m$ the number of trees. The non-recursive part of \texttt{CherryPicking}($T$,$k$,$C$) can be implemented to run in $O(n\cdot m)$ time by constructing $H(T^{(i)})$ from $H(T^{(i-1)})$ in each step.} Let $f(n,m)$ be an upper bound for its computation time with $f(n,m)=O(n\cdot m)$. Let the runtime of \texttt{CherryPicking}($T$,$k$,$C$) be $t(n, k, C)$. We will prove by induction on $k-P(C)$ that \begin{align*} \markj{t(n, k, C) \leq 5^{k-P(C)+1} (k-P(C)+1)f(n,m)} \text{. 
} \end{align*} For $-1\leq k-P(C)\leq 0$ the claim follows from the fact that the function will return on either \cref{line:first_return} or \cref{line:third_return} and therefore will not do any recursive calls. Now assume the claim holds for $-1\leq k-P(C)\leq w$. Now consider an instance with $k-P(C)\leq w+\psi $. Note that $k'-P(C')\leq k-P(C)$ (\cref{lem:pick_decrease_k}). If the function \texttt{CherryPicking} does any recursive calls then it either executes the body of the if-clause on \cref{line:if-statement}, or the body of the else-if clause on \cref{line:else-if-statement}. If the former is true then the function does $2$ recursive calls. Each recursive call \leo{to the function} \texttt{CherryPicking}($T'$, $k'$, $C''$) is done with a constraint set $C''$ for which $|C''|=|C'|+1$. Therefore for both subproblems $P(C'')\geq P(C') + \psi$ and also $k'-P(C'')\leq k'-P(C') - \psi \leq k-P(C) - \psi \leq w$. By our induction hypothesis the running time of each of the subcalls is now bounded by \markj{${5^{k'-P(C'')+1}(k'-P(C'')+1)f(n,m)}$.} So therefore the total running time of this call is bounded by \markj{ \begin{align*} & 2\cdot 5^{k'-P(C'') + 1}(k'-P(C'') + 1)f(n,m) + f(n,m)\\ &\leq 2\cdot 5^{k-P(C)-\psi +1}(k-P(C)-\psi +1)f(n,m) +f(n,m) \\&=5^\psi 5^{k-P(C)-\psi +1}(k-P(C)-\psi +1)f(n,m)+ f(n,m) \\& = 5^{k-P(C) +1}(k-P(C)-\psi +1) f(n,m)+ f(n,m) \\ &\sander{\leq} 5^{k-P(C) +1}(k-P(C)+1)f(n,m) - 5\psi f(n,m) + f(n,m)\\ &\leq 5^{k-P(C) +1}(k-P(C)+1) f(n,m)\text{. } \end{align*}} So in this case we have proven the claim for $-1\leq k-P(C)\leq w+\psi $. If instead the body of the else-if statement on \cref{line:else-if-statement} is executed then 3 recursive subcalls are made. Consider the first subcall $\texttt{CherryPicking}(T',k',C'')$. We have $C''=C'\cup \{(a,x)\}$. Because \markj{$(x,a)\in G(T',C')$ we have $(a,x)\notin C'$.} Therefore $|C''|=|C'|+1$. 
By \cref{lem:algline_a_not_before} we know that $a\notin \pi_1(C')$, but we have $a\in \pi_1(C'')$, so $|\pi_1(C'')|=|\pi_1(C')|+1$. Therefore $P(C'')=P(C')+1-\psi$, so $k'-P(C'')=k'-P(C')-1+\psi \markj{\leq k - P(C) -1 + \psi }< k-P(C)-\psi \leq w$. By our induction hypothesis we now know that the running time of this subcall is bounded by \markj{ \begin{align*} 5^{k'-P(C'')+1} (k'-P(C'')+1)f(n,m) \leq 5^{k-P(C)+\psi} (k-P(C)+\psi)f(n,m) \text{. } \end{align*}} Note that by symmetry the same holds for the second subcall. For the third subcall $\texttt{CherryPicking}(T',k',C'')$, \markj{because $(x,a),(x,b) \in G(T',C')$ we have $|C''| = |C'|+2$, and because $x \notin \pi_1(C')$ we have $|\pi_1(C'')| = |\pi_1(C')| + 1$. So} we know that $P(C'')=\markj{P(C') + 2\psi + (1-2\psi) =} P(C')+1$ \markj{and $k'-P(C'') +1 \leq k-P(C)$.} \markj{Therefore the} running time is bounded by \markj{\begin{align*} 5^{k-P(C)}(k-P(C))f(n,m)\text{. } \end{align*}} So the total running time of this call is bounded by \markj{ \begin{align*} & \hphantom{= =} 2\cdot 5^{k-P(C)+\psi} (k-P(C)+\psi)f(n,m)+ 5^{k-P(C)}(k-P(C))f(n,m)+f(n,m) \\ & = 2\cdot 5^\psi \cdot 5^{k-P(C)} (k-P(C)+\psi)f(n,m)+ 5^{k-P(C)}(k-P(C))f(n,m)+f(n,m) \\ & = 4 \cdot 5^{k-P(C)} (k-P(C)+\psi)f(n,m)+ 5^{k-P(C)}(k-P(C))f(n,m)+f(n,m) \\ & = 5 \cdot 5^{k-P(C)} (k-P(C))f(n,m)+ 4\cdot\psi\cdot5^{k-P(C)}f(n,m)+f(n,m) \\ & \leq 5 \cdot 5^{k-P(C)} (k-P(C))f(n,m)+ 5\cdot5^{k-P(C)}f(n,m) \\ & = 5\cdot 5^{k-P(C)} (k-P(C)+1)f(n,m)\\ & = 5^{k-P(C)+1} (k-P(C)+1)f(n,m) \end{align*}} So also for this case we have proven the claim for $k-P(C)\leq w+\psi $. \end{proof} \begin{thm} \texttt{CherryPicking}$(T,k, C)$ from \Cref{alg:better_temporal} returns a cherry picking sequence of weight at most $k$ that satisfies $C$ if and only if such a sequence exists. The algorithm terminates in $O(5^k\cdot \text{poly}(n,m))$ time. 
\end{thm} \begin{proof} This follows directly from \cref{lem:returned_sequences_satisfy_demands}, \cref{lem:procedure_returns_sequence} and \cref{lem:running_time_better_temporal}. \end{proof} \section{Constructing non-temporal tree-child networks from binary trees} \label{sec:non_temporal} For every set of trees there exists a tree-child network that displays the trees. However, there are sets of trees for which no temporal network displaying the trees exists, so we cannot always find such a network. As shown in \cref{fig:temporal_tree_child_difference}, approximately 5 percent of the instances used in \cite{van_iersel_practical_2019} do not admit a temporal solution. \begin{figure} \includegraphics[scale=.8]{build/plots/temporal_vs_tree_child_number_difference.pdf} \caption{The difference between the tree-child reticulation number and the temporal reticulation number on the dataset generated in \cite{van_iersel_practical_2019}. If no temporal network exists, the instance is shown under `Not temporal'. Instances for which it could not be decided if they were temporal within 10 minutes ($2.6\%$ of the instances) are excluded. \label{fig:temporal_tree_child_difference} } \end{figure} In this section we introduce theory that makes it possible to quantify how close a network is to being temporal. We can then pose the problem of finding the `most' temporal network that displays a set of trees. \begin{defn} For a tree-child network with vertices $V$ we call a function $t: V\to \mathbb{R}^+$ a semi-temporal labeling if: \begin{enumerate} \item For every tree arc $(u,v)$ we have $t(u)<t(v)$. \item For every hybridization vertex $v$ we have $t(v)=\min \{t(u): (u,v)\in E\}$. \end{enumerate} \end{defn} Note that every tree-child network has a semi-temporal labeling. \begin{defn} For a tree-child network $\mathcal{N}$ with a semi-temporal labeling $t$, define $d(\mathcal{N}, t)$ to be the number of hybridization arcs $(u,v)$ with $t(u)\neq t(v)$. We call these arcs non-temporal arcs. 
\end{defn} \begin{defn} For a tree-child network $\mathcal{N}$ define \begin{align*} d(\mathcal{N})=\min \{d(\mathcal{N}, t): t \text{ is a semi-temporal labeling of }\mathcal{N} \} \end{align*} Call this number the \emph{temporal distance} of $\mathcal{N}$. Note that this number is finite for every network, because a semi-temporal labeling always exists. \end{defn} The temporal distance is a way to quantify how close a network is to being temporal. The networks with temporal distance zero are exactly the temporal networks. We can now state a more general version of the decision problem. \problem{Semi-temporal hybridization}{A set of~$m$ trees $T$ with~$n$ leaves and integers $k,p$.}{Does there exist a tree-child network $\mathcal{N}$ with $r(\mathcal{N})\leq k$ and $d(\mathcal{N})\leq p$?} There are other, possibly more biologically meaningful, ways to define such a temporal distance. The reason for defining the temporal distance in this particular way is that an algorithm for solving the corresponding decision problem exists. For further research it could be interesting to explore whether other definitions of temporal distance are more useful and whether the corresponding decision problems could be solved using similar techniques. Van Iersel et al.~\cite{van_iersel_practical_2019} presented an algorithm to solve the following decision problem in $O((8k)^k\cdot \text{poly}(m,n))$ time. \problem{Tree-child hybridization}{A set of~$m$ trees $T$ with~$n$ leaves and integer $k$.}{Does there exist a tree-child network $\mathcal{N}$ with $r(\mathcal{N})\leq k$?} Notice that for $p=k$ \textsc{Semi-temporal hybridization} is equivalent to \textsc{Tree-child hybridization} and for $p=0$ it is equivalent to \textsc{Temporal hybridization}. The algorithm for \textsc{Tree-child hybridization} uses a characterization by Linz and Semple \cite{linz_attaching_2019} in terms of \emph{tree-child sequences}, which we describe in the next section. 
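The two labeling conditions and the count $d(\mathcal{N},t)$ are straightforward to check for a given labeling. The following Python sketch is purely illustrative and not part of the algorithms in this paper; the arc-list representation, the function names, and the example network are our own assumptions. It verifies a candidate semi-temporal labeling and counts its non-temporal hybridization arcs:

```python
# Illustrative sketch (not from the paper): checking the two conditions of a
# semi-temporal labeling and computing d(N, t). A network is given by its arc
# list and the set of hybridization vertices; tree arcs are exactly the arcs
# whose head is not a hybridization vertex.

def is_semi_temporal(arcs, hybrid, t):
    """Check conditions 1 and 2 of a semi-temporal labeling t: V -> R+."""
    for (u, v) in arcs:
        # condition 1: labels strictly increase along tree arcs
        if v not in hybrid and not t[u] < t[v]:
            return False
    for v in hybrid:
        # condition 2: t(v) equals the minimum label of v's in-neighbours
        if t[v] != min(t[u] for (u, w) in arcs if w == v):
            return False
    return True

def non_temporal_arcs(arcs, hybrid, t):
    """d(N, t): the number of hybridization arcs (u, v) with t(u) != t(v)."""
    return sum(1 for (u, v) in arcs if v in hybrid and t[u] != t[v])

# Example: one reticulation h with parents u and v. Under the labeling below,
# (v, h) is the only non-temporal arc, so d(N, t) = 1.
arcs = [('r', 'u'), ('r', 'v'), ('u', 'h'), ('v', 'h'),
        ('u', 'x'), ('v', 'y'), ('h', 'z')]
hybrid = {'h'}
t = {'r': 0, 'u': 1, 'v': 2, 'h': 1, 'x': 2, 'y': 3, 'z': 2}
```

Computing the temporal distance $d(\mathcal{N})$ itself would additionally require minimising this count over all semi-temporal labelings, which the sketch does not attempt.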
We describe a new algorithm that can be used to decide \textsc{Semi-temporal hybridization}. This algorithm is a combination of the algorithms for \textsc{Tree-child hybridization} and \textsc{Temporal hybridization}. \subsection{Tree-child sequences} First we define the \emph{generalized cherry picking sequence} (generalized CPS), which is simply called a cherry picking sequence in \cite{van_iersel_practical_2019}. We call it a generalized cherry picking sequence because it is a generalization of the cherry picking sequence we defined in \cref{def:cps}. \begin{defn} A \emph{partial generalized CPS} on $X$ is a sequence \begin{align*} s=((x_1,y_1),\ldots, (x_r,y_r),(x_{r+1},-),\ldots ,(x_{t},-)) \end{align*} with $\{x_1, x_2, \ldots, x_t,y_1,\ldots, y_r \}\subseteq X$. A generalized CPS is \emph{full} if $t>r$ and $\{x_1,\ldots, x_t\}=X$. \end{defn} For a tree $\mathcal{T}$ on $X'\subseteq X$ the sequence $s$ defines a sequence of trees $(\mathcal{T}^{(0)}, \ldots, \mathcal{T}^{(r)})$ as follows: \begin{itemize} \item $\mathcal{T}^{(0)}=\mathcal{T}$. \item If $(x_j,y_j)\in \mathcal{T}^{(j-1)}$, then $\mathcal{T}^{(j)}=\mathcal{T}^{(j-1)} \setminus \{x_j\}$. Otherwise $\mathcal{T}^{(j)}=\mathcal{T}^{(j-1)}$. \end{itemize} We will refer to $\mathcal{T}^{(r)}$ as $\mathcal{T}(s)$, the tree obtained by applying sequence $s$ to $\mathcal{T}$. \iffalse Call a sequence \emph{non-redundant} for $T$ if $T^{(i-1)}=T^{(i)}$ for all $1\leq i\leq r$. \fi A full generalized CPS on $X$ is a \emph{generalized CPS} for a set $T$ of trees if for each $\mathcal{T}\in T$ the tree $\mathcal{T}(s)$ contains just one leaf and that leaf is in $\{x_{r+1},\ldots ,x_{t} \}$. The \emph{weight} of a sequence $s$ for a set of trees on $X$ is defined as $w_T(s)=|s|-|X|$. A generalized CPS is a \emph{tree-child} sequence if $|s|\leq r+1$ and $y_j\neq x_i$ for all $1\leq i<j\leq |s|$. If for such a \emph{tree-child} sequence $|s|=r$, then $s$ is also called a \emph{tree-child} sequence prefix. 
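The reduction rule defining $\mathcal{T}^{(j)}$ above is easy to state operationally. The following Python sketch is our own illustration (the child-to-parent map representation and the sentinel entry above the root are assumptions, not notation from the paper); it applies a generalized CPS to a single binary tree:

```python
# Illustrative sketch: applying a generalized CPS to one binary tree.
# A tree is stored as a child -> parent map; the root has a parent entry
# pointing to a sentinel 'rho' so that a tree reduced to a single leaf stays
# representable. Leaves are the vertices that never occur as a parent.

def is_cherry(parent, x, y):
    """(x, y) is a cherry iff x and y are distinct leaves sharing a parent."""
    leaves = set(parent) - set(parent.values())
    return x != y and x in leaves and y in leaves and parent[x] == parent[y]

def apply_sequence(parent, s):
    """Return T(s): if (x_j, y_j) is a cherry of T^(j-1), delete x_j and
    suppress its former parent; otherwise the tree is unchanged.
    Elements (x, None) model the trailing (x, -) entries, which reduce nothing."""
    parent = dict(parent)  # work on a copy
    for (x, y) in s:
        if y is not None and is_cherry(parent, x, y):
            p = parent.pop(x)        # delete leaf x
            if p in parent:          # suppress p: attach y to p's parent
                parent[y] = parent.pop(p)
            else:                    # p had no entry (no sentinel); y is root
                del parent[y]
    return parent

# Example: the tree ((a,b),c); applying ((a,b),(b,c)) leaves the single leaf c.
tree = {'a': 'p', 'b': 'p', 'p': 'r', 'c': 'r', 'r': 'rho'}
```

With the sentinel convention, a tree reduced to one leaf remains that leaf attached to the sentinel, matching the requirement that $\mathcal{T}(s)$ contains exactly one leaf.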
It has been proven that a tree-child network $\mathcal{N}$ displaying a set of trees $T$ with $r(\mathcal{N})=k$ exists if and only if a tree-child sequence $s$ for $T$ with $w_T(s)=k$ exists. The network can be efficiently computed from the corresponding sequence. The algorithm presented by Van Iersel et al.\ works by searching for such a sequence. We will show that it is possible to combine their algorithm with our algorithm for \textsc{Temporal hybridization} (\cref{alg:better_temporal}). This yields an algorithm that decides \textsc{Semi-temporal hybridization} in $O(5^{k}(8k)^p\cdot k\cdot n\cdot m)$ time. \begin{defn} Let $s=((x_1,y_1),\ldots,(x_t,-))$ be a full generalized CPS. An element $(x_i,y_i)$ is a \emph{non-temporal} element when there are $j,k\in [t]$ with $i<j<k\leq t$ and $x_j\neq x_i$ and $x_k=x_i$. \end{defn} \begin{defn} For a sequence $s$ we define $d(s)$ to be the number of non-temporal elements in $s$. \end{defn} \newcommand{\lemSemiTemporalTequenceToNetworkText}{Let $s$ be a full tree-child sequence for $T$. Then there exists a network $\mathcal{N}$ with semi-temporal labeling $t$ such that $r(\mathcal{N})\leq w_T(s)$ and $d(\mathcal{N},t)\leq d(s)$.} \newtheorem*{lemSemiTemporalTequenceToNetwork}{Lemma~\ref{lem:SemiTemporalTequenceToNetwork}} \begin{lem} \label{lem:SemiTemporalTequenceToNetwork} \lemSemiTemporalTequenceToNetworkText \end{lem} \markj{The full proof of \cref{lem:SemiTemporalTequenceToNetwork} is given in the appendix. We construct a tree-child network $\mathcal{N}$ from $s$ in a similar way to \cite[Proof of Theorem 2.2]{linz_attaching_2019}, working backwards through the sequence. At each stage when a pair $(x,y)$ is processed, we adjust the network to ensure there is an arc from the parent of $y$ to the parent of $x$. Our contribution is to also maintain a semi-temporal labeling $t$ on $\mathcal{N}$. This can be done in such a way that for each pair $(x,y)$, at most one new non-temporal arc is created, and only if $(x,y)$ is a non-temporal element of $s$. 
This ensures that $d(\mathcal{N},t)\leq d(s)$.} \newcommand{\lemSemiTemporalNetworkToSequenceText}{ For a tree-child network $\mathcal{N}$ there exists a full tree-child sequence $s$ with $d(s)\leq d(\mathcal{N})$ and $w_T(s)\leq r(\mathcal{N})$. } \newtheorem*{lemSemiTemporalNetworkToSequence}{Lemma~\ref{lem:SemiTemporalNetworkToSequence}} \begin{lem} \label{lem:SemiTemporalNetworkToSequence} \lemSemiTemporalNetworkToSequenceText \end{lem} \markj{ The full proof of \cref{lem:SemiTemporalNetworkToSequence} is given in the appendix. We construct the sequence in a similar way to \cite[Lemma 3.4]{linz_attaching_2019}. The key idea is that at any point the network will contain some pair of leaves $x,y$ that either form a \emph{cherry} (where $x$ and $y$ share a parent) or a \emph{reticulated cherry} (where the parent of $x$ is a reticulation, with an incoming edge from the parent of $y$). We process such a pair by appending $(x,y)$ to $s$, deleting an edge from $\mathcal{N}$, and simplifying the resulting network. By being careful about the order in which we process reticulated cherries, we can ensure that we only add a non-temporal element to $s$ when we delete a non-temporal arc from $\mathcal{N}$. This ensures that $d(s) \leq d(\mathcal{N},t)$.} \begin{obs} \label{obs:tc:either_ab_or_ba} A tree-child sequence $s$ cannot contain both $(a,b)$ and $(b,a)$. \end{obs} \begin{obs} \label{lem:tree_child_subsequence} If a tree-child sequence $s$ has a subsequence $s'$ that is a generalized cherry picking sequence for $T$, then $s$ is also a generalized cherry picking sequence for $T$. \end{obs} \begin{lem} \label{lem:tc:minus_still_seq} If $s=((x_1,y_1),\ldots,(x_{r+1},-))$ is a generalized CPS for $\mathcal{T}$ and there is a $z$ such that $y_i\neq z$ for all $i$, then $(\mathcal{T}\setminus \{z\})(s)=\mathcal{T}(s)$ and therefore $s$ is also a generalized CPS for $\mathcal{T}\setminus \{z\}$. \end{lem} \begin{proof} Suppose this is not true. 
Because $\mathcal{T}(s)$ is a tree with only one leaf $x_{r+1}$, this implies that $\mathcal{L}((\mathcal{T}\setminus \{z\})(s))\not \subseteq \mathcal{L}(\mathcal{T}(s))$. Let $i$ be the smallest index for which \leo{we have that} $\mathcal{L}((\mathcal{T}\setminus \{z\})((x_1,y_1),\ldots,(x_i,y_i))) \not\subseteq \mathcal{L}(\mathcal{T}((x_1,y_1),\ldots,(x_i,y_i))\setminus \{z\})$. \begin{sloppypar} This implies that $x_i \in \mathcal{L}((\mathcal{T}\setminus \{z\})((x_1,y_1),\ldots,(x_i,y_i)))$ but $x_i\notin \mathcal{L}(\mathcal{T}((x_1,y_1),\ldots,(x_i,y_i))\setminus \{z\})$, so $(x_i,y_i)\notin (\mathcal{T}\setminus \{z\})((x_1,y_1),\ldots,(x_{i-1},y_{i-1}))$, but $(x_i,y_i)\in \mathcal{T}((x_1,y_1),\ldots,(x_{i-1},y_{i-1}))\setminus\{z\}$. Let $p$ be the lowest vertex that is an ancestor of both $x_i$ and $y_i$ in the tree $(\mathcal{T}\setminus \{z\})((x_1,y_1),\ldots,(x_{i-1},y_{i-1}))$. Because $x_i$ and $y_i$ do not form a cherry in this tree, there is another leaf $q$ that is reachable from $p$. Because $q\in \mathcal{L}(\mathcal{T}((x_1,y_1),\ldots,(x_{i-1},y_{i-1}))\setminus \{z\})$, $q$ is also reachable from the lowest common ancestor $p'$ of $x_i$ and $y_i$ in $\mathcal{T}((x_1,y_1),\ldots,(x_{i-1},y_{i-1}))\setminus \{z\}$, contradicting the fact that $(x_i,y_i)$ is a cherry in this tree. \end{sloppypar} \end{proof} \subsection{Constraint sets} The new algorithm also uses constraint sets. However, because the algorithm searches for a generalized cherry picking sequence, we need to define what it means for such a sequence to satisfy a constraint set. \begin{defn} A generalized cherry picking sequence $s=((x_1,y_1),\ldots, (x_k,y_k))$ satisfies constraint set $C$ if for every $(a,b)\in C$ there is an $i$ with $(x_i,y_i)=(a,b)$ and there is some $j\neq i$ with $x_j=a$. \end{defn} In \cref{def:ht} the function $H(T)$ was defined for sets of binary trees with the same leaves. After applying a tree-child sequence not all trees will necessarily have the same leaves. 
Because of this, we generalize the definition of $H(T)$ to sets of binary trees with possibly different leaf sets. \begin{defn} For a set of binary trees $T$ define $H(T) =\{x\in\mathcal{L}(T): \forall \mathcal{T} \in T \text{ if } x\in \mathcal{T} \text{ then }x\text{ is in a cherry in }\mathcal{T} \}$. \end{defn} \begin{lem} If $s=((x_1,y_1),\ldots,(x_{r+1},-))$ is a tree-child sequence for $T$ and $(a,b)\in T$, then there is an $i$ such that $(x_i,y_i)=(a,b)$ or $(x_i,y_i)=(b,a)$. \label{lem:tc:leastonepicked} \end{lem} \begin{proof} Let $\mathcal{T}\in T$ be a tree containing the cherry $(a,b)$. Because $s$ fully reduces $\mathcal{T}$, $\mathcal{T}(s)$ consists of only the leaf $x_{r+1}$. So $a$ or $b$ has to be removed from $\mathcal{T}$ by applying $s$. Without loss of generality we can assume $a$ is removed first. This can only happen if there is an $i$ with $(x_i, y_i)=(a,b)$. \end{proof} Now we prove that if there are two cherries $(a,z)$ and $(b,z)$ in $T$, then we can branch on three possible additions to the constraint set, just like we did for cherry picking sequences. \begin{lem} Let $s$ be a tree-child sequence for $T$ and $a,b\in N_T(z)$ with $a\neq b$. Then $s$ satisfies one of the following constraint sets: \\$\{(a,z)\}, \{(b,z)\}, \{(z,a),(z,b)\}$. \label{lem:tc:branch_in_three} \end{lem} \begin{proof} From \cref{lem:tc:leastonepicked} it follows that either $(a,z)$ or $(z,a)$ is in $s$ and that either $(b,z)$ or $(z,b)$ is in $s$. Now let $s_i=(x_i,y_i)$ be the element of these that appears first in $s$. Now we have three cases: \begin{enumerate} \item If $x_i=a$, then $s_i=(a,z)$. Let $\mathcal{T} \in T$ be the tree in which $(b,z)$ is a cherry. Now $(b,z)\in \mathcal{T}(s_1,\ldots, s_i)$. Because $(s_{i+1},\ldots, s_{r+1})$ is a tree-child sequence for $\mathcal{T}(s_1,\ldots, s_i)$, this implies that there is some $j>i$ with $x_j=a$. Consequently $\{(a,z)\}$ is satisfied by $s$. \item If $x_i=b$, then the same argument as in (1) can be applied to show that $\{(b,z)\}$ is satisfied by $s$. 
\item If $x_i=z$, then we either have $y_i=a$ or $y_i=b$. Without loss of generality we can assume $y_i=a$. We still have $(b,z)\in T(s_1,\ldots, s_i)$, which implies that there is some $j>i$ with $(x_j,y_j)=(b,z)$ or $(x_j,y_j)=(z,b)$. Because $j>i$ and $s$ is tree-child, we know that $y_j\neq z$. So $(x_j,y_j)=(z,b)$, and consequently $\{(z,a),(z,b)\}$ is satisfied by $s$. \end{enumerate} \end{proof} We also prove that if $a\in \pi_1(C)$ and $(a,b)\in T$, then we only need to do two recursive calls. \begin{lem} \label{lem:tc:branch_in_two} Let $s$ be a tree-child sequence for $T$ that satisfies constraint set $C$ and $a,b\in N_T(z)$ with $(z,b)\in C$. Then $s$ satisfies one of the following constraint sets: \\$\{(a,z)\}, \{(z,a)\}$. \end{lem} \begin{proof} From \cref{lem:tc:branch_in_three} it follows that $s$ satisfies one of the constraint sets $\{(a,z)\}$, $\{(b,z)\}$ and $\{(z,a),(z,b)\}$. However, because $s$ satisfies $C$ and $(z,b)\in C$, from \cref{obs:tc:either_ab_or_ba} it follows that $(b,z)$ does not appear in $s$. Therefore $s$ has to satisfy either $\{(a,z)\}$ or $\{(z,a),(z,b)\}$. If $s$ satisfies $\{(z,a),(z,b)\}$, then it also satisfies $\{(z,a)\}$. \end{proof} \begin{lem} If a tree-child sequence $s=((x_1,y_1),\ldots,(x_r,y_r),(x_{r+1},-))$ for $T$ satisfies constraint set $C$, then $w_T(s)\geq P(C)$. \label{lem:bound_w_pc_nontemp} \end{lem} \begin{proof} For $z\in \mathcal{L}(T)\setminus \{x_{r+1}\}$, let $C_z:=\{(a,b):(a,b)\in C \land a=z \}$ and let $S_z:=\{(x_i,y_i):i\leq r \land x_i=z \}$. We show that $|S_z|-1\geq P(C_z)$. If $|C_z|=0$, then $P(C_z)=0$ and the inequality is trivial. If $|C_z|=1$, then from the definition of constraint sets it follows that $|S_z|\geq 2$, so $|S_z|-1\geq 1\geq P(C_z)$. Otherwise, if $|C_z|\geq 2$, then because $C_z\subseteq S_z$, $|S_z|-1\geq |C_z|-1= \psi \cdot |C_z|-1 + (1-\psi)|C_z| \geq \psi \cdot |C_z|-1 + 2(1-\psi)=\psi \cdot |C_z|+(1-2\psi)=P(C_z)$. 
Now the result follows because $w_T(s)=|s|-|\mathcal{L}(T)|=\sum_{z\in \mathcal{L}(T)\setminus \{x_{r+1}\}}(|S_z|-1)\geq \sum_{z\in \mathcal{L}(T)\setminus \{x_{r+1}\}}P(C_z)=P(C)$. \end{proof} Next we prove that if a leaf $z$ is in $H(T)$ and appears in $s$ with all of its neighbors, then we can move all elements containing $z$ to the start of the sequence. \begin{lem} \label{lem:tc:remove_immediately} If $s=((x_1,y_1),\ldots,(x_{r+1},-))$ is a tree-child sequence for $T$, $z\in H(T)$, and $I$ is a set of indices such that $\{y_i: i\in I \}=N_T(z)$ and $x_i=z$ for all $i\in I$, then the sequence $s'$ obtained by first adding the elements from $s$ with an index in $I$ and then adding the elements $(x,y)$ of $s$ for which $x\neq z$ is a tree-child sequence for $T$, and $d(s')\leq d(s)$. \end{lem} \begin{proof} We can write $s'=((x'_1,y'_1),\ldots,(x_{r+1},-))=s^a|s^b$ where $s^a$ consists of the elements $\{s_i:i\in I\}$ and $s^b$ is $s$ with the elements at indices in $I$ removed. First we prove that $s'$ is a tree-child sequence. Suppose that $s'$ is not a tree-child sequence. Then there are $i,j$ with $i<j$ such that $x'_i = y'_j$. Note that we cannot have $y'_j=z$, because of how we constructed $s'$. This implies that the elements at both indices $i$ and $j$ belong to $s^b$, implying that $s^b$ is not tree-child. But because $s^b$ is a subsequence of $s$ this implies that $s$ is not tree-child, which contradicts the conditions from the lemma. So $s'$ is tree-child. We now prove that $s'$ fully reduces $T$. Because $T(s^a)=T\setminus \{z\}$, it follows from \cref{lem:tc:minus_still_seq} that $s^a|s$ is a generalized CPS for $T$. Because $z\notin \mathcal{L}(T(s^a))$, we have $T(s^a|s)=T(s^a|s^b)$. So $s'$ is a generalized CPS for $T$. Finally, since for every non-temporal element in $s'$ the corresponding element in $s$ is also non-temporal, we conclude that $d(s')\leq d(s)$. 
\end{proof} \subsection{Trivial cherries} We will call a pair $(a,b)$ a \emph{trivial cherry} if there is a $\mathcal{T}\in T$ with $a\in \mathcal{L}(\mathcal{T})$ and for every tree $\mathcal{T}\in T$ that contains $a$, we have $(a,b)\in \mathcal{T}$. They are called trivial cherries because they can be picked without limiting the possibilities for the rest of the sequence, as stated in the following lemma. \begin{lem} \label{lem:tc:trivial_cherries} If $s=((x_1,y_1),\ldots,(x_{r+1},-))$ is a tree-child sequence for $T$ of minimum length and $(a,b)$ is a trivial cherry in $T$, then there is an $i$ such that $(x_i,y_i)=(a,b)$ or $(x_i,y_i)=(b,a)$. Also, there exists a tree-child sequence $s'$ for $T$ with $|s|=|s'|$, $d(s')=d(s)$ and $s'_1=(a,b)$. \end{lem} \begin{proof} This follows from \cref{lem:tc:remove_immediately}. \end{proof} \begin{algorithm} \caption{} \label{alg:better_temporal2} \begin{algorithmic}[1] \Procedure{SemiTemporalCherryPicking}{$T,k, k^\star, p, C$} \If {$k-P(C) < 0$} \label{line2:first_if} \State \Return $\emptyset$ \label{line2:first_return} \EndIf \State $T',k', C',f\gets $\Call{Pick}{$T,k,C$} \label{line2:callpick} \If{$|\mathcal{L}(T')| =1 $} \State\Return $\{f\}$ \label{line2:emptysequence} \ElsIf{$k'-P(C')\leq 0 \lor \pi_1(C')\nsubseteq \mathcal{L}(T') $}\label{line2:third_if} \State \Return $\emptyset$ \label{line2:second_return} \EndIf \\ \State $R\gets \emptyset$ \If{$\exists (x, y) \in T': w_{T'}(x)>0 \land x\in \pi_1(C')$ } \label{line2:if-statement} \State $R\gets R \cup $ \Call{SemiTemporalCherryPicking}{$T'$, $k'$, $k^\star$, $p$, $C'\cup \{ (x,y ) \}$} \State $R\gets R \cup $ \Call{SemiTemporalCherryPicking}{$T'$, $k'$, $k^\star$, $p$, $C'\cup \{ (y,x ) \}$} \ElsIf{$\exists (x,a) \in G(T',C'): w_{T'}(x)>0 \land x\notin \pi_2(C')$ } \label{line2:else-if-statement} \State \text{Choose} $b\neq a$ such that $(x,b)\in G(T',C')$ \label {line2:two_elements} \State $R\gets R \cup $ \Call{SemiTemporalCherryPicking}{$T'$, $k'$, $k^\star$, 
$p$, $C'\cup \{ (a,x ) \}$} \State $R\gets R \cup $ \Call{SemiTemporalCherryPicking}{$T'$, $k'$, $k^\star$, $p$, $C'\cup \{ (b,x ) \}$} \State $R\gets R \cup $ \Call{SemiTemporalCherryPicking}{$T'$, $k'$, $k^\star$, $p$, $C'\cup \{ (x,a ), (x,b) \}$} \ElsIf{$p> 0$} \label{alg:line:alternative_algorithm} \State $P\gets \{(x,y) \in T': y\in \mathcal{T}'\ \forall \mathcal{T}'\in T' \land x\notin \pi_2(C) \}$ \If {$|P|>8k^\star$} \State\Return $\emptyset$ \EndIf \For{$(x,y)\in P$} \State $C''\gets C\setminus \{(x,y)\}$ \If{$|\{(x,z): (x,z)\in C \}|=1$ } \State $C''\gets C''\setminus \{(x,z): (x,z)\in C \}$ \EndIf \State $R\gets R\ \cup\ \{(x,y)|r:r\in $ \Call{SemiTemporalCherryPicking}{$T'((x,y))$, $k'-1$, $k^\star$, $p-1$, $C''$}$\}$\label{line:non_temporal_return} \EndFor \EndIf \State\Return $\{f|r: r \in R\}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \begin{algorithmic} \Procedure{Pick}{$T', k', C'$} \State $(T^{(1)},k_1,C_1)\gets (T',k',C')$ \State $p^{(1)}\gets ()$ \State $i\gets 1$ \While{$\exists x_i\in H(T^{(i)}): \left(w_{T^{(i)}}(x_i)=0\lor \{(x_i,n) : n \in N_{T^{(i)}}(x_i) \} \subseteq C_i \right) \land (\forall y\in N_{T^{(i)}}(x_i)\ \forall \mathcal{T}\in T^{(i)}:y\in \mathcal{T})$} \State $(n_1,\ldots, n_t)\gets N_{T^{(i)}}(x_i)$ \State $p^{(i+1)}\gets p^{(i)}|((x_i, n_1),\ldots, (x_i,n_t))$ \State $k_{i+1}\gets k_{i}-w_{T^{(i)}}(x_i)$ \State $T^{(i+1)}\gets T^{(i)}\setminus \{x_i\}$ \State $C_{i+1}\gets \{c\in C_{i}: x_i \notin \{c_1, c_2\}\}$ \State $i\gets i+1$ \EndWhile \State \Return $T^{(i)},k_{i},C_{i},p^{(i)}$ \EndProcedure \end{algorithmic} \label{alg:non_temporal_pick} \caption{} \end{algorithm} \begin{lem}[Correctness of \texttt{Pick}] Suppose \texttt{Pick}$(T',k', C')$ in \cref{alg:non_temporal_pick} returns $(T,k,C,p)$. Then a tree-child sequence $s$ of weight at most $k$ for $T$ that satisfies $C$ exists if and only if a tree-child sequence $s'$ of weight at most $k'$ for $T'$ that satisfies $C'$ exists. 
In this case $p|s$ is a tree-child sequence for $T'$ of weight at most $k'$ and satisfying $C'$. \label{lem:tc:correctness_pick} \end{lem} The proof for this lemma is the same as for \cref{lem:correctness_pick}, but uses \cref{lem:tc:remove_immediately} instead of \cref{lem:remove_immediately}. The following lemma was proven in \cite[Lemma 11]{van_iersel_practical_2019}. \begin{lem} Let $s^a|s^b$ be a tree-child sequence for $T$ with weight $k$. If $T(s^a)$ contains no trivial cherries, then the number of unique cherries in $T(s^a)$ is at most $4k$. \label{lem:tc:unique_cherries} \end{lem} \begin{lem} If $((x_1,y_1),\ldots,(x_r,y_r),(x_{r+1},-),\ldots,(x_{t},-))$ is a full tree-child sequence of minimal length for $T$ satisfying $C$ and $H(T)\setminus \pi_2(C)=\emptyset$, then $(x_1,y_1)$ is a non-temporal element. \label{lem:first_non_temporal} \end{lem} \begin{proof} First observe that $x_1\notin \pi_2(C)$ because the sequence satisfies $C$. Suppose $(x_1,y_1)$ is a temporal element. This implies that there is an $i$ such that for all $j<i$ we have $x_j=x_1$ and $x_k\neq x_1$ for all $k\geq i$. This implies that for every $\mathcal{T} \in T$ there is a $j<i$ such that $x_1$ is not in $\mathcal{T}((x_j,y_j))$. Consequently $(x_j,y_j)$ is a cherry in $\mathcal{T}$. Because this holds for every tree $\mathcal{T} \in T$ we must have $x_1\in H(T)\setminus \pi_2(C)$, contradicting the assumption that $H(T)\setminus \pi_2(C)=\emptyset$. \end{proof} \subsection{The algorithm} \markj{ We now present our algorithm for \textsc{Semi-temporal hybridization}. As with \textsc{Tree-child hybridization}, we split the algorithm into two parts: \texttt{SemiTemporalCherryPicking} (\cref{alg:better_temporal2}) is the main recursive procedure, and \texttt{Pick} (\cref{alg:non_temporal_pick}) is the auxiliary procedure.} \markj{The key idea is that we try to follow the procedure for temporal sequences as much as possible. 
\cref{alg:better_temporal2} only differs from \cref{alg:better_temporal} in the case where neither of the recursion conditions of \cref{alg:better_temporal} applies, but there are still cherries to be processed. In this case, we can show that there are no trivial cherries, and hence \cref{lem:tc:unique_cherries} applies. Then we may assume there are at most $4k^\star$ unique cherries, where $k^\star$ is the original value of $k$ that we started with. In this case, we branch on adding $(x,y)$ or $(y,x)$ to the sequence, for any $x$ and $y$ that form a cherry. Any such pair will necessarily be a non-temporal element, and so we decrease $p$ by $1$ in this case. A full proof of the following lemma is given in the appendix.} \FloatBarrier \newcommand{\lemTCProcedureReturnsSequenceText}{ Let $s^\star$ be a tree-child sequence prefix, $T^\star$ a set of trees with the same leaves and define $T:=T^\star(s^\star)$. Suppose $k,p\in \mathbf{N}$ and $C\subseteq \mathcal{L}(T)^2$. If a generalized cherry picking sequence $s$ exists that satisfies $C$ and such that $s^\star|s$ is a tree-child sequence for $T^\star$ with $w_{T^\star}(s^\star|s)\leq k^\star$ and $d(s)\leq p$, then \texttt{SemiTemporalCherryPicking}$(T, k, k^\star, p, C)$ from \Cref{alg:better_temporal2} returns a non-empty set. } \newtheorem*{lemTCProcedureReturnsSequence}{Lemma~\ref{lem:tc:procedure_returns_sequence}} \begin{lem} \label{lem:tc:procedure_returns_sequence} \lemTCProcedureReturnsSequenceText \end{lem} \begin{lem} \label{lem:tc:returned_sequences_satisfy_demands} Let $s^\star$ be a tree-child sequence prefix, $T^\star$ a set of trees with the same leaves and define $T:=T^\star(s^\star)$. Suppose $k,p\in \mathbf{N}$ and $C\subseteq \mathcal{L}(T)^2$. If \leo{$S$ is returned by} a call to \texttt{SemiTemporalCherryPicking}$(T, k, k^\star, p, C)$, then for every $s\in S$, the sequence $s'=s^\star | s$ is a tree-child sequence for $T^\star$ with $d(s)\leq p$ and $w(s)\leq k$. 
\end{lem} The proof of this lemma is similar to the proof of \cref{lem:procedure_returns_sequence}, using \cref{lem:tc:correctness_pick}. \begin{lem} \Cref{alg:better_temporal2} has a running time of $O(5^k\cdot (8k)^p\cdot k \cdot n\cdot m)$. \label{lem:tc:running_time_better_temporal} \end{lem} \begin{proof} This can be proven by combining the proofs from \cref{lem:running_time_better_temporal} and \cite[Lemma 11]{van_iersel_practical_2019}. \end{proof} \begin{thm} \texttt{SemiTemporalCherryPicking}$(T, k, k, p, \emptyset)$ from \Cref{alg:better_temporal2} returns a tree-child sequence for $T$ of weight at most $k$ with at most $p$ non-temporal elements if and only if such a sequence exists. The algorithm terminates in $O(5^k\cdot (8k)^p\cdot k \cdot n\cdot m)$ time. \end{thm} \begin{proof} This follows directly from \cref{lem:tc:returned_sequences_satisfy_demands}, \cref{lem:tc:procedure_returns_sequence} and \cref{lem:tc:running_time_better_temporal}. \end{proof} \section{Constructing temporal networks from two non-binary trees} \label{sec:non_binary_trees} The algorithms described in the previous sections only work when all input trees are binary. In this section we introduce the first algorithm for computing the minimum temporal hybridization number of a set of two non-binary input trees. The algorithm is based on \cite{piovesan_simple_2013} and has time complexity $O(6^kk!\cdot k \cdot n^2)$. We say that a binary tree $\mathcal{T}'$ is a \emph{refinement} of a non-binary tree $\mathcal{T}$ when $\mathcal{T}$ can be obtained from $\mathcal{T}'$ by contracting some of the edges. Now we say that a network $\mathcal{N}$ \emph{displays} a non-binary tree $\mathcal{T}$ if there exists a binary refinement $\mathcal{T}'$ of $\mathcal{T}$ such that $\mathcal{N}$ displays $\mathcal{T}'$. Now the hybridization number $h_t(T)$ can be defined for a set of non-binary trees $T$ like in the binary case. 
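The refinement relation can be tested via clusters (leaf sets of pendant subtrees, which are formally introduced later in this section): for trees on the same leaf set, a binary tree $\mathcal{T}'$ refines $\mathcal{T}$ precisely when every cluster of $\mathcal{T}$ is also a cluster of $\mathcal{T}'$. The following Python sketch is our own illustration under an assumed child-list representation; it is not part of the algorithm of this section:

```python
# Illustrative sketch: testing whether a binary tree refines a non-binary tree
# via clusters. Trees are vertex -> list-of-children maps; leaves have no entry.

from itertools import chain

def clusters(children, root):
    """All clusters of the tree: the leaf set below every vertex."""
    out = set()
    def leaves_below(v):
        cs = children.get(v, [])
        cluster = frozenset([v]) if not cs else \
            frozenset(chain.from_iterable(leaves_below(c) for c in cs))
        out.add(cluster)
        return cluster
    leaves_below(root)
    return out

def is_refinement(bin_children, bin_root, nb_children, nb_root):
    """T' refines T iff T is obtainable from T' by edge contractions, which
    (for trees on the same leaf set) holds iff Cl(T) is a subset of Cl(T')."""
    return clusters(nb_children, nb_root) <= clusters(bin_children, bin_root)

# Example: every binary resolution of the star on {a, b, c} refines the star.
star = {'r': ['a', 'b', 'c']}
resolved = {'r': ['p', 'c'], 'p': ['a', 'b']}
```

Here `clusters` computes the set of leaf sets below each vertex, matching the definition of $Cl(\mathcal{T})$ given later.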
\begin{defn} A set $S\subseteq N_T(x)$ is a \emph{neighbor cover} for $x$ in $T$ if $S\cap N_\mathcal{T}(x) \neq \emptyset$ for all $\mathcal{T}\in T$. \end{defn} \begin{defn} For a set of non-binary trees $T$, define $w_T(x)$ as the minimum size of a neighbor cover of $x$ in $T$ minus one. \end{defn} Note that computing the minimum size of a neighbor cover is an NP-hard problem itself. However, if $|T|$ is constant, the problem can be solved in polynomial time. Note that for binary trees this definition is equivalent to the definition given in \cref{def:weight}. Next, \cref{def:ht} is generalized to non-binary trees. \begin{defn} For a set of non-binary trees $T$ on the same taxa define $H(T) =\{x\in\mathcal{L}(T): \forall \mathcal{T} \in T \text{ }N_\mathcal{T}(x)\neq \emptyset \}$. \end{defn} The non-binary analogue of \cref{def:cps} is given by the following definition. \begin{defn} For a set of non-binary trees $T$ with $n=|\mathcal{L}(T)|$, let $s=(s_1,\ldots, s_{n-1})$ be a sequence of leaves. Let $T_0=T$ and $T_i=T_{i-1}\setminus \{s_i\}$. The sequence $s$ is a \emph{cherry picking sequence} if for all $i$, $s_i\in H(T_{i-1})$. Define the \emph{weight} of the sequence as $w_T(s)=\sum_{i=1}^{n-1} w_{T_{i-1}}(s_i)$. \end{defn} \begin{lem} A temporal network $\mathcal{N}$ that displays a set of non-binary trees $T$ with reticulation number $r(\mathcal{N})=k$ exists if and only if a cherry picking sequence of weight at most $k$ exists. \end{lem} \begin{proof} Note that this is a generalization of \cref{lem:exists_cherry_sequence} to the case of non-binary input trees and the proof is essentially the same. A cherry picking sequence with weight $k$ can be constructed from a temporal network with reticulation number $k$ in the same way as in the proof of \cref{lem:exists_cherry_sequence}. 
The construction of a temporal network $\mathcal{N}$ from a cherry picking sequence $s$ is also very similar to the binary case: for a cherry picking sequence $s_1,\ldots, s_t$, define $\mathcal{N}_{t+1}$ to be the network consisting only of a root, the only leaf of $T\setminus \{s_1, \ldots, s_t \}$ and an edge between the two. For each $i$ let $S_i$ be a minimum neighbor cover of $s_i$ in $T\setminus \{s_{1}, \ldots, s_{i-1} \}$. Now obtain $\mathcal{N}_{i}$ from $\mathcal{N}_{i+1}$ by adding a node $s_i$, subdividing the edge $(p_x,x)$ with a node $q_x$ for every $x\in S_i$, adding an edge $(q_x,s_i)$, and finally suppressing all nodes with in- and out-degree one. It can be shown that $r(\mathcal{N})=w_T(s)$. \end{proof} \begin{lem} If $s$ is a cherry picking sequence for $T$ and $x\in H(T)$ satisfies $w_T(x)=0$, then there is a cherry picking sequence $s'$ for $T$ with $w_T(s')=w_T(s)$ and $s'_1=x$. \label{lem:nonbin:remove_trivial} \end{lem} \begin{proof} We have $N_T(x)=\{y\}$. Now let $z$ be the element of $\{x,y\}$ that appears first in $s$, with $s_i=z$. Then $s'=(s_i,s_1,\ldots, s_{i-1},s_{i+1},\ldots)$ is a cherry picking sequence for $T$ with $w_T(s')=w_T(s)$. If $z=x$, then this proves the lemma. Otherwise, we note that by swapping $x$ and $y$ in $T$, the trees stay the same. So we can also swap $x$ and $y$ in $s'$ without affecting the weight. Now $s'_1=x$, which proves the lemma. \end{proof} The algorithm relies on some theory from \cite{piovesan_simple_2013}, which we will introduce first. For a vertex $v$ of $\mathcal{T}$, we say that all vertices reachable from $v$ form a \emph{pendant subtree}. For a pendant subtree $S$ we define $\mathcal{L}(S)$ as the set of leaves of $S$. Now we define \begin{align*} Cl(\mathcal{T}) = \{\mathcal{L}(S): S \text{ is a pendant subtree of } \mathcal{T} \}\text{. } \end{align*} We call this the set of \emph{clusters} of $\mathcal{T}$. Then we define $Cl(T)=\bigcup_{\mathcal{T} \in T}Cl(\mathcal{T})$. Call a cluster $C$ with $|C|=1$ \emph{trivial}.
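The weight $w_T(x)$ from the definitions above (the minimum size of a neighbor cover, minus one) can be computed by brute force when the number of trees is constant; a minimal sketch (names hypothetical):

```python
from itertools import combinations

def neighbor_weight(neighbor_sets):
    """w_T(x): minimum size of a set S hitting N_T(x) in every tree, minus one.
    `neighbor_sets` is the list [N_T(x) for each tree T]; each set must be
    non-empty. Brute force over subsets of the union of the neighbor sets,
    which is fine for a constant number of trees."""
    universe = set().union(*neighbor_sets)
    for size in range(1, len(universe) + 1):
        for S in combinations(universe, size):
            if all(set(S) & N for N in neighbor_sets):
                return size - 1
    raise ValueError("no cover exists")  # unreachable for non-empty neighbor sets
```

For two trees the weight is $0$ when the two neighbor sets intersect and $1$ otherwise, matching the binary definition.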
Now we call a nontrivial cluster $C\in Cl(T)$ a \emph{minimal} cluster if there is no nontrivial $C'\in Cl(T)$ with $C'\subsetneq C$. In a cherry picking sequence $s$ we say that at index $i$ the cherry $(s_i,y)$ is \emph{reduced} if there is a $\mathcal{T}\in T$ such that $N_{\mathcal{T}\setminus \{s_1,\ldots, s_{i-1}\}}(s_i)=\{y\}$. \begin{lem} Let $T$ be a set of trees with $|T|=2$ such that $T$ contains no leaf $x$ with $w_T(x)=0$. Let $s$ be a cherry picking sequence for $T$. Then there is a minimal cluster $C$ in $T$ and a cherry picking sequence $s'=(s'_1,\ldots)$ for $T$ with $s'_i\in C$ for $i =1,\ldots, |C| -1$ and $w_T(s')\leq w_T(s)$. \label{lem:can_pick_min_cluster} \end{lem} \begin{proof} Let $p$ be the first index at which a cherry is reduced in $s$. Let $(a,b)$ be one of the cherries that is reduced at index $p$. Now there will be a cherry in $T$ that contains both $a$ and $b$. Let $C$ be one of the minimal clusters that is contained in this cherry. Let $x$ be the element of $C$ that occurs last in $s$. Now let $c_1,\ldots, c_t$ be the elements of $C\setminus \{x\}$, ordered by their index in $s$. We claim that for any permutation $\sigma$ of $[t]$, the sequence $s'=(c_{\sigma(1)},\ldots,c_{\sigma(t)})|(s\setminus (C\setminus \{x\}))$ is a cherry picking sequence for $T$ with $w_T(s')\leq w_T(s)$. Let $i$ be the index of the last element of $C\setminus \{x\}$ in $s$. Suppose that $s'$ is not a cherry picking sequence for $T$. Let $j$ be the smallest index for which $s'_j\notin H(T\setminus \{s'_1, \ldots, s'_{j-1} \})$. Let $\mathcal{T} \in T$ be such that $s'_j$ is not in a cherry in $\mathcal{T}\setminus \{s'_1, \ldots, s'_{j-1}\}$. Choose $k$ such that $s_k=s'_j$. Now there are three cases: \begin{itemize} \item Suppose $j> i$. Then $k=j$ and $\{s_1,\ldots, s_{k-1} \}=\{s'_1,\ldots, s'_{j-1} \}$. This implies that $s'_j\in H(T\setminus \{s'_1,\ldots, s'_{j-1}\})$, which contradicts our assumption. \item Otherwise, suppose $s'_j\in \{c_1,\ldots, c_t\}$. Then $j\leq t$.
Now $s_k$ has to be in a cherry in $\mathcal{T}\setminus \{s_1,\ldots, s_{k-1}\}$. Because no cherries are reduced before index $i$ in $s$, this means that $s'_j$ is in a cherry in $\mathcal{T}$. Because no cherries are reduced in $s'$ before index $t$, this implies that the same cherry is still in $\mathcal{T}\setminus \{s'_1,\ldots, s'_{j-1} \}$, which contradicts our assumption. \item Otherwise, we must have $j\leq i$. Because no cherries are reduced before index $i$ in $s$, this means that $s'_j$ is in a cherry $Q$ in $\mathcal{T}$. If this cherry contains a leaf $y$ with $s'_w=y$ for some $w>j$, then $s'_j$ is still in a cherry in $\mathcal{T}\setminus \{s'_1,\ldots, s'_{j-1} \}$, contradicting our assumption, so this cannot be true. However, that implies that the neighbors of $s_k$ in $\mathcal{T} \setminus \{s_1, \ldots, s_{k-1} \}$ are all elements of $\{c_1, \ldots, c_t \}$. Let $v$ be the second largest index such that $c_v$ is one of these neighbors. Let $q$ be the index of $c_v$ in $s$. Now cherry $Q$ will be reduced by $s$ at index $\max (q, j)< i$, which contradicts the fact that $C$ is contained in a cherry of $T$ that is reduced first by $s$. \end{itemize} Now, to prove that $w_T(s')\leq w_T(s)$, we will prove that for $s_j=s'_k$ we have \begin{align*} w_{T\setminus \{s_1, \ldots, s_{j-1} \}}(s_j)\geq w_{T\setminus \{s'_1, \ldots, s'_{k-1} \}}(s'_k)\text{. } \end{align*} Note that for $j\geq i$ this is trivial, so assume $j<i$. If $s_j\in C\setminus \{x\}$, then $w_{T\setminus \{s_1, \ldots, s_{j-1} \}}(s_j)\geq w_{T}(s_j)$ because no cherries are reduced before index $i$, which implies that no new elements are added to cherries before index $i$. For the same reason we must have $s_j\in H(T)$. Because there is no $x\in H(T)$ with $w_T(x)=0$, we must have $w_{T}(s_j)=1$. So $w_{T\setminus \{s'_1, \ldots, s'_{k-1} \}}(s'_k)\leq w_{T\setminus \{s_1, \ldots, s_{j-1} \}}(s_j)=1$.
\end{proof} \subsection{Bounding the number of minimal clusters} By \cref{lem:can_pick_min_cluster}, in the construction of a cherry picking sequence we can restrict ourselves to only appending elements from minimal clusters. We use the following theory from \cite{piovesan_simple_2013} to bound the number of minimal clusters. \begin{defn} Define the relation $x\xrightarrow[]{T} y$ for leaves $x$ and $y$ of $T$ if every nontrivial cluster $C\in Cl(T)$ that contains $x$ also contains $y$. \end{defn} \begin{obs}[{\cite[Observation 2]{piovesan_simple_2013}}] The relation $\xrightarrow[]{T}$ defines a partial ordering on $\mathcal{L}(T)$. \end{obs} Now call $x\in \mathcal{L}(T)$ a \emph{terminal} if there is no $y\neq x$ with $x\xrightarrow[]{T} y$. We will first show that every minimal cluster contains a terminal. Then a bound on the number of terminals gives a bound on the number of minimal clusters. \begin{lem} Every minimal cluster contains a terminal. \end{lem} \begin{proof} Let $C$ be a minimal cluster of $T$. Let $x$ be an element of $C$ that is maximal in $C$ with respect to the partial ordering `$\xrightarrow[]{T}$' (where $x\xrightarrow[]{T}y$ means that $y$ is `greater than or equal to' $x$). Now suppose that $x$ is not a terminal. Then there is a $y\neq x$ such that $x\xrightarrow[]{T}y$. However, then $y\in C$, since $C$ is a nontrivial cluster containing $x$; this contradicts the fact that $x$ is a maximal element of $C$ with respect to `$\xrightarrow[]{T}$'. Because this is a contradiction, $x$ has to be a terminal. \end{proof} \begin{lem} \label{lem:max3kterminals} Let $T$ be a set of trees with $h_t(T)\geq1$ containing no zero-weight leaves. Let $\mathcal{N}$ be a network that displays $T$. Then $T$ contains at most $2r(\mathcal{N})$ terminals that are not directly below a reticulation node. \end{lem} \begin{proof} We reformulate the proof of \cite[Lemma 3]{piovesan_simple_2013}.
We use the fact that for each terminal $x$ one of the following conditions holds: the parent $p_x$ of $x$ in $\mathcal{N}$ is a reticulation (condition 1), or a reticulation is reachable by a directed tree-path from the parent $p_x$ of $x$ (condition 2). This is always true because if neither of the conditions holds, then another leaf $y$ is reachable from $p_x$ by a directed tree-path, implying that $x\xrightarrow[]{T}y$, which contradicts the fact that $x$ is a terminal. Let $R$ be the set of reticulation nodes in $\mathcal{N}$ and let $W$ be the set of terminals in $T$ that are not directly beneath a reticulation. We describe a mapping $F:W\to R$ such that each reticulation $r$ has at most $d^-(r)$ preimages. Note that for each $x\in W$ condition 2 holds. For these elements let $F(x)=y$, where $y$ is a reticulation reachable from $p_x$ by a tree-path. Note that for two distinct terminals $x\neq x'$ in $W$, the tree-paths from $p_x$ and $p_{x'}$ to $y$ cannot enter $y$ through the same incoming edge: otherwise one of $p_x$ and $p_{x'}$ would be reachable from the other by a tree-path, implying $x\xrightarrow[]{T}x'$ or $x'\xrightarrow[]{T}x$, which contradicts the fact that $x$ and $x'$ are terminals. It follows that each reticulation $r$ has at most $d^-(r)$ preimages under $F$: at most one per incoming edge. Then for the set of terminals $W$ we have $|W| \leq \sum_{r\in R} d^-(r) = \sum_{r\in R} (1+(d^-(r)-1)) = |R| + r(\mathcal{N}) \leq 2r(\mathcal{N})$. \end{proof} \begin{lem} \label{lem:max2kterminals} Let $T$ be a set of nonbinary trees such that $h_t(T)\geq1$. Then any set $S$ of terminals in $T$ with $|S|\geq 2h_t(T)+1$ contains at least one element $x\in H(T)$ for which there is a cherry picking sequence $s$ for $T$ with $w_T(s)=h_t(T)$ and $s_1=x$. \end{lem} \begin{proof} Let $\mathcal{N}$ be a temporal network that displays $T$ such that $r(\mathcal{N})=h_t(T)$, with corresponding cherry picking sequence $s$. From \cref{lem:max3kterminals} it follows that at most $2r(\mathcal{N})$ terminals exist in $T$ that are not directly below a reticulation. Since $|S|\geq 2h_t(T)+1=2r(\mathcal{N})+1$, there is an $x\in S$ that is directly below a reticulation.
Now let $T'$ be the set of all binary trees displayed by $\mathcal{N}$. Note that $s$ is a cherry picking sequence for $T'$. Let $i$ be such that $s_i=x$. Because $x$ is directly below a reticulation in $\mathcal{N}$, we have $s_j\notin N_{T'}(x)$ for all $j<i$, which implies by \cref{lem:twoconditions} that $s'=(s_i,s_1,\ldots, s_{i-1},s_{ i+1},\ldots)$ is a cherry picking sequence for $T'$ with $w_{T'}(s')=w_{T'}(s)=r(\mathcal{N})=h_t(T)$. Now $w_{T}(s')\leq w_{T'}(s')=h_t(T)$, so $w_{T}(s')=h_t(T)$. \end{proof} \begin{algorithm} \caption{\label{alg:non_binary_new}} \begin{algorithmic}[1] \Procedure{CherryPicking}{$T,k$} \State $s\gets ()$ \State $R\gets \emptyset$ \While{$\exists x\in H(T): w_T(x)=0$} \State $T\gets T\setminus \{x\}$ \label{line:remove_trivial} \State $s\gets s|(x)$ \EndWhile\\ \If{$|\mathcal{L}(T)|=1$} \State\Return $\{s\}$ \ElsIf{$k=0$} \State \Return $\emptyset$ \EndIf\\ \State $S\gets $ set of terminals in $T$ \If{$|S|>2k$} \State $S'\gets $ subset of $S$ of size $2k+1$ \For{$x\in S'\cap H(T)$} \State $R\gets R \cup \{ (x)\,|\,s' : s' \in $ \Call{CherryPicking}{$T\setminus \{x \}, k-1$}$\}$ \EndFor \Else \For{$q\in S$} \State $D\gets$ set of minimal clusters that contain $q$ \If{$\exists y,z: D=\{\{q,y\},\{q,z\}\}$\label{line:if_split_3} } \For{$x\in \{q,y,z\} \cap H(T)$} \State $R\gets R \cup \{ (x)\,|\,s' : s' \in $ \Call{CherryPicking}{$T\setminus \{x\}, k-1$}$\}$ \EndFor \Else \For{$C\in D$} \For{$x\in C: C\setminus \{x\} \subseteq H(T)$} \State $(c_1,\ldots, c_t)\gets C\setminus \{x\}$ \State $R\gets R \cup \{ (c_1,\ldots, c_t)\,|\,s' : s' \in $ \Call{CherryPicking}{$T\setminus \{c_1, \ldots, c_t \}, k-t$}$\}$ \EndFor \EndFor \EndIf \EndFor \EndIf \State\Return $\{s|s':s'\in R\}$ \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Run-time analysis} \begin{lem} The running time of $\texttt{CherryPicking}(T,k)$ from \cref{alg:non_binary_new} is $O(6^kk! \cdot k\cdot n^2)$ if $T$ is a set consisting of two nonbinary trees.
\end{lem} \begin{proof} Let $f(n)$ be an upper bound for the running time of the non-recursive part of the function. We claim that the maximum running time $t(n,k)$ for running the algorithm on trees with $n$ leaves and parameter $k$ is bounded by $6^{k}k!\,kf(n)$. For $k=0$ it is clear that this claim holds. Now we will prove that it holds for any call, by assuming that the bound holds for all subcalls. If $|S|>2k$, then the algorithm branches into $2k+1$ subcalls. The total running time can then be bounded by \begin{align*} (2k+1)t(n,k-1) + f(n) & \leq (2k+1)6^{k-1}(k-1)!\,(k-1)f(n)+ f(n) \\&\leq 6^{k}k!\,kf(n)\text{.} \end{align*} If the condition of the if-statement on \cref{line:if_split_3} is true, then for that $q$ the function makes $3$ subcalls with $k$ reduced by one. So the recursive part of the total running time for this $q$ is bounded by \begin{align*} 3t(n,k-1)\leq 3\cdot 6^{k-1}(k-1)!\,(k-1)f(n) = 3^{k}2^{k-1}(k-1)!\,(k-1)f(n) \text{.} \end{align*} If the condition on \cref{line:if_split_3} does not hold, then there is at most one $d\in D$ with $|d|\leq 2$; moreover, for two trees, $q$ lies in at most one minimal cluster per tree, so $|D|\leq 2$. Using this information we can bound the total running time of the subcalls that are done for $q$ in the else clause by \begin{align} & \sum_{d\in D}|d|\,t(n,k-|d|+1)\leq \sum_{d\in D}|d|\,6^{k-|d|+1}(k-|d|+1)!\,(k-|d|+1)f(n)\nonumber \\ & \leq (k-1)!(k-1)f(n) \sum_{d\in D}|d|\,6^{k-|d|+1}\nonumber \\ & \leq (k-1)!(k-1)f(n)(2\cdot 6^{k-1} + 3\cdot 6^{k-2})\label{eq:decreasing_function} \\ & \leq (k-1)!(k-1)f(n)\,2^{k-1}(9\cdot 3^{k-2})\nonumber \\ & =(k-1)!(k-1)f(n)\,2^{k-1}3^{k} \text{.}\nonumber \end{align} Note that \cref{eq:decreasing_function} follows from the fact that $x\mapsto x\, 6^{k-x+1}$ is a decreasing function for $x\in [1,\infty)$, together with $|D|\leq 2$ and the fact that at most one $d\in D$ has $|d|=2$. So for each $q$ the running time of the subcalls is bounded by $(k-1)!\,(k-1)f(n)\,2^{k-1}3^{k}$ in both cases. Now the total running time is bounded by \begin{align*} & f(n) + (k-1)!(k-1)f(n)\,2^{k-1}3^{k}\,|S| \\ & \leq f(n) + (k-1)!(k-1)f(n)\,2^{k-1}3^{k}\cdot 2k \\ & = f(n) + k!(k-1)f(n)\,6^{k} \\ & \leq 6^{k}k!\,kf(n)\text{.} \end{align*} Because the non-recursive part of the function can be implemented to run in $O(n^2)$ time, the total running time of the function is $O(6^{k}k! \cdot k\cdot n^2)$. \end{proof} \begin{lem} Let $T$ be a set of non-binary trees. If $h_t(T)\leq k$, then \texttt{CherryPicking}$(T,k)$ from \cref{alg:non_binary_new} returns a cherry picking sequence for $T$ of weight at most $k$. \end{lem} \begin{proof} First we prove by induction on $k$ that if $h_t(T)\leq k$ then a sequence is returned. For $k=0$ this is true because if $h_t(T)=0$ then, as long as $|\mathcal{L}(T)|>1$, we have $|H(T)|>0$ and all elements of $H(T)$ have weight zero, so they are removed on \cref{line:remove_trivial}. After that $|\mathcal{L}(T)|=1$, so an empty sequence is returned, which proves that the claim is true for $k=0$. Now assume that the claim holds for all $k<k'$ and assume that $h_t(T)\leq k'$. We will prove that a sequence is returned by \texttt{CherryPicking}$(T,k')$ in this case. After removing an element $x$ with weight zero on \cref{line:remove_trivial} we still have $h_t(T)\leq k'$ (\cref{lem:nonbin:remove_trivial}). If $|\mathcal{L}(T)|=1$, an empty sequence is returned. If this is not the case, then $0<h_t(T)\leq k'$, so the else-if clause (the case $k=0$) is not executed. If $|S|>2k'$, then from \cref{lem:max2kterminals} it follows that for $S'\subseteq S$ with $|S'|=2k'+1$ there is at least one $x\in S'\cap H(T)$ such that $h_t(T\setminus \{x\})\leq k'-1$. From the induction hypothesis it then follows that \texttt{CherryPicking}$(T\setminus \{x\},k'-1)$ returns at least one sequence, which implies that $R$ is not empty. A similar argument, using \cref{lem:can_pick_min_cluster}, applies when $|S|\leq 2k'$. Because of that, the main call will return at least one sequence, which proves that the claim holds for $k=k'$. The only thing left to prove is that every returned sequence is a cherry picking sequence for $T$.
This follows from the fact that only elements of $H(T)$ are appended to $s$ and that $R$ consists of cherry picking sequences for $T\setminus \{s_1,\ldots,s_t \}$. \end{proof} \section{Experimental results} \label{sec:implementation} \FloatBarrier We developed implementations of \cref{alg:better_temporal}, \cref{alg:non_binary_new} and \cref{alg:better_temporal2}, which are freely available~\cite{sjb_implementation}. To analyse the performance of the algorithms we made use of the dataset generated in \cite{van_iersel_practical_2019} for experiments with an algorithm for constructing tree-child networks with a minimum hybridization number. \subsection{\cref{alg:better_temporal}} In \cref{fig:time_k_plot} the running time of \cref{alg:better_temporal} on the dataset from \cite{van_iersel_practical_2019} is shown. The results are consistent with the bound on the running time that was proven in \cref{sec:algorithm}. Also, the algorithm is able to compute solutions for relatively high values of $k$, indicating that the algorithm performs well in practice. \begin{figure}\centering \includegraphics[scale=.8]{build/plots/time_k_plot} \caption{The running time of \cref{alg:better_temporal} on problem instances, shown relative to the corresponding temporal hybridization number. A timeout of 10 minutes was used. Instances for which the algorithm timed out are shown in red at the value of $k$ where they timed out. On the log scale the exponential relation is clearly visible. However, fitting an exponential function to the data yields an $O(2.5^k)$ function for temporal hybridization number $k$, while the worst-case bound that we proved is $O(5^k)$. \label{fig:time_k_plot} } \end{figure} The authors of \cite{van_iersel_practical_2019} also provide an implementation of their algorithm for tree-child networks. The implementation contains several optimizations to improve the running time. One of them is an operation called cluster reduction \cite{linz_cluster_2011}.
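The empirical growth rate quoted in the caption of \cref{fig:time_k_plot} can be estimated with an ordinary least-squares fit in log space; a minimal sketch (assuming NumPy is available) on synthetic timings, since the real measurements come from the experiments above:

```python
import numpy as np

# Hypothetical (k, running-time) pairs following t = c * b^k exactly;
# the experimental data behave roughly like this with b ~ 2.5.
k = np.array([4, 6, 8, 10, 12], dtype=float)
t = 0.01 * 2.5 ** k

# Fitting log t = log c + k log b with a least-squares line recovers the base b.
slope, intercept = np.polyfit(k, np.log(t), 1)
base = float(np.exp(slope))  # estimated growth base of the running time
```

On real, noisy measurements the recovered base is only an estimate, which is why the caption phrases the fitted growth as $O(2.5^k)$ rather than an exact law.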
The implementation is also multi-threaded. In \cref{fig:runtime_comparison} we provide a comparison of the running times of the tree-child algorithm with \cref{alg:better_temporal}. In this comparison we let both implementations use a single thread, because our implementation of the algorithm for computing the temporal hybridization number does not support multithreading. The implementation could, however, be modified to solve different subproblems in different threads, which would probably also result in a significant speed-up. In \cref{fig:runtime_comparison} we see that the difference in time complexity between the $O((8k)^k)$ algorithm and the $O(5^k)$ algorithm is also observable in practice. \begin{figure}\centering \includegraphics[scale=.8]{build/plots/temporal_vs_treechild_runtime.pdf} \caption{Difference between the running time of \cref{alg:better_temporal} and the algorithm for tree-child networks from \cite{van_iersel_practical_2019}.} \label{fig:runtime_comparison} \end{figure} \subsection{\cref{alg:non_binary_new}} We used the software from \cite{van_iersel_practical_2019} to generate random binary problem instances and afterwards randomly contracted edges in the trees to obtain non-binary problem instances. We used this dataset to test the running time of \cref{alg:non_binary_new}. The results are shown in \cref{fig:time_k_plot_nonbinary}. We see that the algorithm is usable in practice and has a reasonable running time. \begin{figure}\centering \includegraphics[scale=.8]{build/plots/time_k_plot_nonbinary/t25.pdf} \caption{Running time of \cref{alg:non_binary_new} on a generated set of instances consisting of trees with average out-degree $2.5$, relative to the temporal hybridization number. A timeout of 10 minutes was used.
} \label{fig:time_k_plot_nonbinary} \end{figure} \begin{figure}\centering \includegraphics[scale=.8]{build/plots/semi_temporal_vs_treechild_runtime.pdf} \caption{Difference between the running time of \cref{alg:better_temporal2} and the algorithm for constructing tree-child networks from \cite{van_iersel_practical_2019} on all non-temporal instances in the dataset from \cite{van_iersel_practical_2019}.} \label{fig:semi_temporal_vs_treechild_runtime} \end{figure} \FloatBarrier \subsection{\cref{alg:better_temporal2}} \Cref{alg:better_temporal2} was tested on all non-temporal instances in the dataset from \cite{van_iersel_practical_2019}. In \cref{fig:semi_temporal_vs_treechild_runtime} the running time of \cref{alg:better_temporal2} is compared to that of the algorithm from \cite{van_iersel_practical_2019}. The data show that the algorithm from \cite{van_iersel_practical_2019} is often faster than \cref{alg:better_temporal2}. However, there are also instances for which \cref{alg:better_temporal2} is much faster. Hence, in practice it can be worthwhile to run this algorithm on instances that cannot be solved by the algorithm from \cite{van_iersel_practical_2019} in a reasonable time. It should also be noted that we only tested the algorithms on a relatively small dataset. \section{Discussion} \label{sec:further_research} \Cref{alg:better_temporal}, the algorithm for constructing minimum temporal hybridization networks, has a significantly better running time than the algorithms that were known before. The results from the implementation show that the algorithm also works well in practice. However, this implementation could still be improved, for example by making use of parallelization. While we also present an algorithm that finds optimal temporal networks for nonbinary trees, the running time of this algorithm is significantly worse and, moreover, it only works for pairs of trees.
An open question is whether this could be improved to a running time of $O(c^k\cdot \mathrm{poly}(n))$ for some $c\in \mathbb{R}$, perhaps using techniques similar to our approach for binary trees. Another important open problem is whether Temporal Hybridization is FPT for a set of more than two non-binary input trees. In \cref{sec:non_temporal} a metric is provided to quantify how close a hybridization network is to being temporal. However, other, possibly more biologically meaningful, metrics could also be used for this purpose. An open problem is whether an FPT algorithm exists that solves the decision problem associated with these metrics. \bibliographystyle{plain}
\section{Introduction} Let $(K,v)$ be a valued field, $\bar{K}$ an algebraic closure of $K$ and $\bar{v}$ an extension of $v$ to $\bar{K}$. Let $X$ be a transcendental element over $K$. With the aim of giving a characterization of residual transcendental extensions of $v$ to $K(X)$ \cite{AZ}, V. Alexandru and A. Zaharescu introduce the notion of ``a minimal pair of definition''. They prove that describing all such extensions is equivalent to describing all the minimal pairs $(a,\delta)\in\bar{K}\times\Gamma_{\bar{v}}$ (see Definition \ref{minimalpairofdefinition} below), where $\Gamma_{\bar{v}}$ is the value group of $\bar{v}$.\\ In \cite{APZ}, V. Alexandru, N. Popescu and A. Zaharescu investigate which pairs $(a,\delta)\in\bar{K}\times \Gamma_{\bar{v}}$ are minimal pairs. Given an extension $w$ of $v$ to $K(X)$, the authors consider the common extensions of $\bar{v}$ and $w$ to $\bar{K}(X)$; they prove that there exists an integer, denoted by $[K:w]$, depending only on $v$ and $w$, such that the number of common extensions $\bar{w}$ is less than or equal to $[K:w]$ (see \cite{APZ}, Corollary 2.3 and its proof).\\ Another way of understanding the extensions $w$ of $v$ to $K(X)$ is via the theory of key polynomials, introduced by S. Mac Lane in the case of discrete valuations of rank $1$ (see \cite{ML1} and \cite{ML2}) and generalized by M. Vaqui\'e to the case of arbitrary valuations. One important difference with the case of discrete rank $1$ valuations is the presence of limit key polynomials. Another notion of key polynomials was introduced by F. H. Herrera, M. A. Olalla and M. Spivakovsky (see \cite{HOS} or \cite{HMOS}). Yet another notion of key polynomials was introduced in \cite{DMS} and \cite{J1}. Comparisons between these notions are given in \cite{DMS} and \cite{Ma}.
For more information about the theory of key polynomials and its applications, see \cite{Na2}, \cite{CMT}, \cite{Kas}, \cite{Na1}, \cite{San} and \cite{San2}.\\ It seems that the notions of key polynomials and minimal pairs are closely related. For recent studies on the relation between the two notions, see \cite{J2} and \cite{V3}. From Theorem 1.1 and Proposition 3.1 of \cite{J2}, one can deduce that given a residue-transcendental valuation $w$, there exists a polynomial $Q$ such that $w=w_Q$ and every common extension $\bar{w}$ of $\bar{v}$ and $w$ can be described by a pair of definition $(a,\delta)$, where $a$ is a root of $Q$ and $\delta=\epsilon(Q)$ (see Section \ref{keypolynomials} for the definitions of $w_Q$ and $\epsilon(Q)$). One can also deduce that such a $Q$ must be the last key polynomial in a complete sequence of key polynomials for $w$ (see Section \ref{keypolynomials} for the definition of a complete sequence of key polynomials).\\ In this paper we study the relation between key polynomials and minimal pairs. We also describe the classification given in Section 3 of \cite{Ku} (``value-transcendental'', ``residue-transcendental'', ``valuation-transcendental'' and ``valuation-algebraic'', see Definition \ref{valuation-} below) of all the possible extensions of $v$ from $K$ to $K(X)$ in terms of a complete sequence of key polynomials. Finally, we prove that the extension $w$ is residue-transcendental or value-transcendental if and only if the complete sequence of key polynomials for $w$ has a last element $Q$. In this case every common extension $\bar{w}$ of $\bar{v}$ and $w$ can be described by a pair of definition $(a,\delta)$ where $a$ is a root of $Q$.
Moreover, we prove that in the case when the sequence of key polynomials does not admit a limit key polynomial, any root $a$ of $Q$ can be used to define a minimal pair $(a,\delta)$ that defines a common extension $\bar{w}$ of $\bar{v}$ and $w$.\\ In Section \ref{basics} we recall some basic facts about extensions of valuations. In Section \ref{minimalpairs} we give some properties relating common extensions and minimal pairs. Although the results of this section can also be found in \cite{AZ} and \cite{APZ}, we reproduce them here in order to make the paper as self-contained as possible.\\ In Section \ref{keypolynomials} we recall some basic results on key polynomials. We use the construction of a complete set of key polynomials given in \cite{J1} and summarize briefly the main results of \cite{J1} used in the sequel.\\ In Section \ref{kp_mp} we clarify the relation between the notion of key polynomials and the notion of minimal pair. We also describe the classification of the extension $w$ in terms of the complete sequence of key polynomials of $w$. This is accomplished in Corollaries \ref{vt_kp} and \ref{rt_kpresult} and Theorem \ref{valuation_trans}. We also deduce that the number of common extensions $\bar{w}$ of $\bar{v}$ and $w$ is less than or equal to the number of roots of the last key polynomial of the sequence (see Corollary \ref{val_trans_numext}).\\ Finally, in Section \ref{common_ext} we study the relation between the roots of two consecutive key polynomials. We show that in the case when the sequence does not contain limit key polynomials, every root $a$ of the last key polynomial in the sequence can be used to construct a common extension $\bar{w}$ of $\bar{v}$ and $w$ (see Corollary \ref{allroots}). \section{Basics and Notation}\label{basics} Throughout this paper, we fix a valued field $(K,v)$, an algebraic closure $\bar{K}$ of $K$ and an extension $\bar{v}$ of $v$ to $\bar{K}$. 
We also fix a variable $X$ and an extension $w$ of $v$ to $K(X)$.\\ By Proposition 2.1 in \cite{APZ}, there exists a common extension of $\bar{v}$ and $w$ to $\bar{K}(X)$, that is, a valuation $\bar{w}$ on $\bar{K}(X)$ that is equal to $\bar{v}$ on $\bar{K}$ and to $w$ on $K(X)$.\\ Let $\bar{w}$ be a common extension of $w$ and $\bar{v}$ to $\bar{K}(X)$.\\ We denote by $\Gamma_{v},\ \Gamma_{\bar{v}},\ \Gamma_{w}$ and $\Gamma_{\bar{w}}$ the respective value groups of $v,\ \bar{v},\ w$ and $\bar{w}$. We have natural embeddings $\Gamma_{v}\subseteq\Gamma_{\bar{v}}\subseteq\Gamma_{\bar{w}}$ and $\Gamma_{v}\subseteq\Gamma_{w}\subseteq\Gamma_{\bar{w}}$.\\ We denote by $k_{v},\ k_{\bar{v}},\ k_{w}$ and $k_{\bar{w}}$ the respective residue fields of $v,\ \bar{v},\ w$ and $\bar{w}$. We have natural extensions $k_{v}\subseteq k_{\bar{v}}\subseteq k_{\bar{w}}$, and $k_{v}\subseteq k_{w}\subseteq k_{\bar{w}}$.\\ Recall the following definitions from \cite{Ku}: \begin{definition}\label{valuation-} The extension $w$ of $v$ to $K(X)$ is said to be {\bf valuation-algebraic} if $\frac{\Gamma_w}{\Gamma_v}$ is a torsion group and $k_w$ is algebraic over $k_v$. The extension $w$ is said to be {\bf value-transcendental} if $\frac{\Gamma_w}{\Gamma_v}$ has rational rank 1 and $k_w$ is algebraic over $k_v$. The extension $w$ is said to be {\bf residue-transcendental} if $k_w$ has transcendence degree 1 over $k_v$ and $\frac{\Gamma_w}{\Gamma_v}$ is a torsion group. We will combine the value-transcendental case and the residue-transcendental case by saying that $w$ is {\bf valuation-transcendental} if either $\frac{\Gamma_w}{\Gamma_v}$ has rational rank 1 or $k_w$ has transcendence degree 1 over $k_v$. \end{definition} For a polynomial $f(X)\in K[X]$, we define the set $\mathcal{R}(f)$ by $$ \mathcal{R}(f):=\left\{a\in\bar{K}\ /\ f(a)=0\right\}. 
$$ For an element $y$ we denote by $y^*$ its image in the corresponding residue field.\\ We note that \begin{equation} \Gamma_{\bar{v}}=\Gamma_{v}\otimes_{\mathbb{Z}}\mathbb{Q}\label{eq:gammavbar} \end{equation} and that $k_{\bar{v}}$ is an algebraic closure of $k_{v}$.\\ We define the set $M_{\bar{w}}:=\{\bar{w}(X-a)\ /\ a\in \bar{K}\}$. \\ Let $a\in\bar{K}$ and let $\delta$ be an element in an ordered group containing $\Gamma_{\bar{v}}$. We define the valuation $w_{(a,\delta)}$ in the following manner:\\ For a polynomial $f(X)\in \bar{K}[X]$, write the Taylor expansion of $f$: $$ f(X)=a_n(X-a)^{n}+\dots+a_1(X-a)+a_0. $$ Put $w_{(a,\delta)}(f(X))=\inf_{0\leq j\leq n}\{\bar{v}(a_j)+j\delta\}$.\\ We define the set $S_{(a,\delta)}(f):=\{j\in\{0,\dots,n\}\ /\ \bar{v}(a_j)+j\delta=w_{(a,\delta)}(f(X))\}$.\\ We note that if we fix $a\in \bar{K}$ and $\delta=\bar{w}(X-a)$, then for all $f\in \bar{K}[X]$ we have \begin{equation} \bar{w}(f(X))\geq w_{(a,\delta)}(f(X)).\label{eq:barw>wadelta} \end{equation} If the inequality is strict then $\#S_{(a,\delta)}(f)>1$. \begin{remark} In the literature, minimal pairs are defined for residue-transcendental valuations. In our case, we give the same definition, but more generally, for valuation-transcendental valuations. \end{remark} \begin{definition} For a valuation $\mu$ of $\bar{K}(X)$, we say that the pair $(a,\delta)$ is {\bf a pair of definition for the valuation} $\mu$ if $\mu=w_{(a,\delta)}$. \end{definition} \begin{lemma}\label{pairofdefinition} Let $(a,\delta)$ be a pair of definition for a valuation $\mu$. Let $b\in \bar{K}$, and let $\delta'$ be an element of an ordered group containing $\Gamma_{\bar{v}}$ as an ordered subgroup. Then $(b,\delta')$ is a pair of definition for $\mu$ if and only if $\delta'=\delta$ and $\bar{v}(a-b)\geq \delta$. \end{lemma} \begin{proof} Suppose first that $\delta'=\delta$ and $\bar{v}(a-b)\geq \delta$.
It is sufficient to prove that for all $c\in\bar{K}$, we have \begin{equation*} \mu(X-c)=\inf\{\delta,\ \bar{v}(b-c)\}. \end{equation*} Take an element $c\in\bar{K}$. We have \begin{equation*} \mu(X-c)=\inf\{\delta,\ \bar{v}(a-c)\}. \end{equation*} Therefore we have to prove that \begin{equation}\label{pairofdefinition_eq} \inf\{\delta,\ \bar{v}(b-c)\}=\inf\{\delta,\ \bar{v}(a-c)\}. \end{equation} We will use the equality $a-c=a-b+b-c$ and the fact that $\bar{v}(a-b)\geq \delta$.\\ If $\delta>\bar{v}(b-c)$ then $\bar{v}(a-b)>\bar{v}(b-c)=\bar{v}(a-c)$, and (\ref{pairofdefinition_eq}) is proved.\\ Otherwise, if $\delta\leq\bar{v}(b-c)$ then $\bar{v}(a-c)\geq \inf\{\bar{v}(a-b),\ \bar{v}(b-c)\}\geq \delta$, and again (\ref{pairofdefinition_eq}) is proved.\\ Conversely, suppose that $(b,\delta')$ is a pair of definition for $\mu$. We have: \begin{equation*} \delta=\mu(X-a)=w_{(b,\delta')}(X-a)=\inf\{\delta',\ \bar{v}(a-b)\}, \end{equation*} hence $\delta\leq\bar{v}(a-b)$ (this proves the second statement), and $\delta\leq\delta'$. On the other hand, we have \begin{equation*} \delta'=\mu(X-b)=w_{(a,\delta)}(X-b)=\inf\{\delta,\ \bar{v}(a-b)\}, \end{equation*} hence $\delta'\leq\delta$ and we get the desired equality. \end{proof} Let $\mu=w_{(a,\delta)}$ for a certain pair $(a,\delta)$ as above. We define the degree of $\mu$ by $$ \mathcal{D}(\mu):=\min\{[K(b):K]\ /\ b\in \bar{K},\ \mu=w_{(b,\delta)}\}. $$ \begin{definition}\label{minimalpairofdefinition} A pair of definition $(a,\delta)$ for $\mu$ is said to be {\bf minimal} if $[K(a):K]=\mathcal{D}(\mu)$. We also say in this case that $(a,\delta)$ is a minimal pair of definition for $\mu$. \end{definition} \section{Minimal Pairs}\label{minimalpairs} In this section, we give some properties of common extensions and minimal pairs.\\ We keep the notation of the previous section.\\ The following result is in \cite{AZ}, Proposition 1.1.
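The valuation $w_{(a,\delta)}$ and Lemma \ref{pairofdefinition} can be illustrated numerically, taking for $\bar{v}$ the $2$-adic valuation restricted to $\mathbb{Q}$; a minimal sketch (all names hypothetical), computing the Taylor coefficients at $a$ by repeated division by $X-a$:

```python
from fractions import Fraction

def v2(x):
    """The 2-adic valuation on Q, with v2(0) = +infinity."""
    if x == 0:
        return float('inf')
    num, den = Fraction(x).numerator, Fraction(x).denominator
    n = 0
    while num % 2 == 0:
        num //= 2
        n += 1
    while den % 2 == 0:
        den //= 2
        n -= 1
    return Fraction(n)

def taylor_coefficients(coeffs, a):
    """Coefficients a_j of f(X) = sum a_j (X - a)^j, where `coeffs` lists the
    coefficients of f with the constant term first. Uses repeated synthetic
    division by (X - a); each remainder is the next Taylor coefficient."""
    c = [Fraction(x) for x in coeffs]
    a = Fraction(a)
    out = []
    while c:
        partial = []
        acc = Fraction(0)
        for cj in reversed(c):       # Horner: final acc is the remainder f(a)
            partial.append(acc)
            acc = acc * a + cj
        out.append(acc)
        c = list(reversed(partial[1:]))  # quotient, constant term first
    return out

def w_pair(coeffs, a, delta):
    """w_{(a,delta)}(f) = min_j ( v2(a_j) + j * delta )."""
    delta = Fraction(delta)
    return min(v2(aj) + j * delta
               for j, aj in enumerate(taylor_coefficients(coeffs, a)))
```

For $f(X)=X^2-2$ and $\delta=\tfrac{1}{2}$, the pairs $(0,\tfrac{1}{2})$ and $(2,\tfrac{1}{2})$ assign $f$ the same value, consistent with Lemma \ref{pairofdefinition}, since $v_2(0-2)=1\geq\tfrac{1}{2}$.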
\begin{lemma}\label{rt_lem} Suppose that $\Gamma_{\bar{v}}=\Gamma_{\bar{w}}$ and let $a\in\bar{K}$. The following conditions are equivalent: \begin{enumerate}[(a)] \item $\bar{w}(X-a)=\max(M_{\bar{w}})$. \item for each $b\in\bar{K}$ with $\bar{w}(X-a)=\bar{v}(b)$, the element $\left(\frac{X-a}{b}\right)^*$ is transcendental over $k_{\bar{v}}$. \end{enumerate} \end{lemma} \begin{proof} Suppose (a) is satisfied. Let $b\in\bar{K}$ be such that $\bar{w}(X-a)=\bar{v}(b)$. Put $t=\left(\frac{X-a}{b}\right)^*$. If $t$ is algebraic over $k_{\bar{v}}$ then $t\in k_{\bar{v}}$, since $k_{\bar{v}}$ is algebraically closed. Choose $c\in \bar{K}$ so that $c^*=t$. Now we have $\bar{w}\left(\frac{X-a}{b}-c\right)>0$, that is, $\bar{w}(X-a-cb)>\bar{w}(X-a)$, which is a contradiction.\\ Suppose (b) is satisfied and suppose that there exists $c\in \bar{K}$ such that $\bar{w}(X-c)>\bar{w}(X-a)$. Then $\bar{w}(X-a+a-c)>\bar{w}(X-a)=\bar{v}(a-c)$. Therefore we have $\bar{w}\left(\frac{X-a}{c-a}-1\right)>0$ and hence $\left(\frac{X-a}{c-a}\right)^*=1$ in $k_{\bar{w}}$, which contradicts (b). \end{proof} \begin{lemma}\label{vt_lem} Suppose that $\Gamma_{\bar{v}}\subsetneqq\Gamma_{\bar{w}}$. Then $\max(M_{\bar{w}})$ exists and $\max(M_{\bar{w}})\notin \Gamma_{\bar{v}}$. Moreover, $\max(M_{\bar{w}})$ is the unique element in $M_{\bar{w}}$ that does not belong to $\Gamma_{\bar{v}}$. \end{lemma} \begin{proof} Let $f\in \bar{K}[X]$ be such that $\bar{w}(f(X))\notin \Gamma_{\bar{v}}$. Write $f(X)=c\prod\limits_{i=1}^{n}(X-c_i)$. There exists $i$, $1\leq i\leq n$, such that $\bar{w}(X-c_i)\notin \Gamma_{\bar{v}}$.\\ Let $a=c_i$ for such an $i$ and let $\delta=\bar{w}(X-a)$.\\ Let $b\in \bar{K}$. If $\bar{w}(X-b)>\delta$, we can write $\bar{w}(X-a+a-b)>\delta$, so that $\bar{v}(a-b)=\bar{w}(X-a)=\delta\in\Gamma_{\bar{v}}$, which is impossible.\\ If $\bar{w}(X-b)< \delta$, then $\bar{w}(X-b)=\bar{v}(a-b)\in \Gamma_{\bar{v}}$.
\end{proof} It is easy to see that $w$ is a residue-transcendental extension of $v$ if and only if $\bar{w}$ is a residue-transcendental extension of $\bar{v}$. Similarly, $w$ is a value-transcendental extension of $v$ if and only if $\bar{w}$ is a value-transcendental extension of $\bar{v}$.\\ Using this fact together with Lemma \ref{rt_lem} and Lemma \ref{vt_lem} we deduce \begin{proposition}\label{classification} \begin{enumerate}[(a)] \item $w$ is a residue-transcendental extension of $v$ if and only if $\Gamma_{\bar{v}}=\Gamma_{\bar{w}}$ and $M_{\bar{w}}$ has a maximal element. \item $w$ is a value-transcendental extension of $v$ if and only if $\Gamma_{\bar{v}}\subsetneqq\Gamma_{\bar{w}}$. In this case, again, $M_{\bar{w}}$ has a maximal element. \item $w$ is a valuation-algebraic extension of $v$ if and only if $M_{\bar{w}}$ does not have a maximal element. \end{enumerate} \end{proposition} \begin{proposition}\label{max_ele} The set $M_{\bar{w}}$ has a maximal element $\delta$ if and only if $\bar{w}=w_{(a,\delta)}$ for some $(a,\delta)\in\bar{K}\times \Gamma_{\bar{w}}$ with $\bar{w}(X-a)=\delta$. \end{proposition} \begin{proof} Suppose that $M_{\bar{w}}$ has a maximal element $\delta$, and choose $a\in\bar{K}$ such that $\bar{w}(X-a)=\delta$.\\ Suppose first that $\delta\notin\Gamma_{\bar{v}}$. Take an $f(X)\in \bar{K}[X]$. Write the Taylor expansion $$ f(X)=a_n(X-a)^{n}+\dots+a_0. $$ For $0\leq \ell_1<\ell_2\leq n$ we must have $\bar{v}(a_{\ell_1})+\ell_{1}\delta\neq \bar{v}(a_{\ell_2})+\ell_{2}\delta$, since $\delta\notin\Gamma_{\bar{v}}$ and $\Gamma_{\bar{v}}$ is divisible. Hence the terms of smallest value in the Taylor expansion cannot cancel, and $\bar{w}(f(X))=w_{(a,\delta)}(f(X))$.\\ Now suppose that $\delta\in\Gamma_{\bar{v}}$. Then we are in the case when $w$ is a residue-transcendental extension of $v$. We always have the inequality (\ref{eq:barw>wadelta}), so we only need to rule out the {\it strict} inequality in (\ref{eq:barw>wadelta}).
Suppose there exists a polynomial $f(X)\in \bar{K}[X]$, such that \begin{equation} \bar{w}(f(X))>w_{(a,\delta)}(f(X)).\label{eq:strictinequality} \end{equation} Choose a monic polynomial $f$ of minimal degree satisfying the strict inequality (\ref{eq:strictinequality}). Write the Taylor expansion $f(X)=a_n(X-a)^{n}+a_{n-1}(X-a)^{n-1}+\dots+a_0$ with $a_n=1$.\\ Write $f=\sum\limits_{j\in S_{(a,\delta)}(f)}a_j(X-a)^j+\sum\limits_{j\notin S_{(a,\delta)}(f)}a_j(X-a)^j$. We have\\ \begin{align*} \bar{w}\left(\sum\limits_{j\in S_{(a,\delta)}(f)}a_j(X-a)^j\right)&\geq \inf\left\{\bar{w}(f),\bar{w}\left(\sum\limits_{j\notin S_{(a,\delta)}(f)}a_j(X-a)^j\right)\right\}\\ >& w_{(a,\delta)}(f)=w_{(a,\delta)}\left(\sum\limits_{j\in S_{(a,\delta)}(f)}a_j(X-a)^j\right). \end{align*} Thus replacing $f(X)$ by $\sum\limits_{j\in S_{(a,\delta)}(f)}a_j(X-a)^j$ does not affect the strict inequality (\ref{eq:strictinequality}). Hence we may assume that for all $j$, $0\leq j\leq n$, if $a_j\neq 0$ then $j\in S_{(a,\delta)}(f)$. We will make this assumption from now on. In particular, we have $w_{(a,\delta)}(f)=n\delta$. Now by Lemma \ref{rt_lem} there exists $b$ such that $t=\left(\frac{X-a}{b}\right)^*$ is transcendental over $k_{\bar{v}}$.\\ Using the fact that $\delta=\bar{w}(X-a)=\bar{v}(b)$, we see that $w_{(a,\delta)}\left(\frac{f}{b^n}\right)=n\delta-n\delta=0$. Hence for each $j\in\{0,\dots,n\}$ we have $\bar{w}\left(\frac{a_j}{b^{n-j}}\frac{(X-a)^j}{b^j}\right)=0$, therefore $\bar{v}\left(\frac{a_j}{b^{n-j}}\right)=0$. We also have $\bar{w}\left(\frac{f}{b^n}\right)>w_{(a,\delta)}\left(\frac{f}{b^n}\right)=0$.\\ Consider the image of $\frac{f}{b^n}$ in $k_{\bar{w}}$. We have $\sum\limits_{j=0}^n\frac{a_j}{b^{n-j}}t^j=0$ with the coefficient of $t^n$ equal to $1$. This contradicts the fact that $t$ is transcendental over $k_{\bar{v}}$.\\ Conversely, suppose that $\bar{w}=w_{(a,\delta)}$ for $(a,\delta)\in \bar{K}\times \Gamma_{\bar{w}}$, with $\bar{w}(X-a)=\delta$. 
Then for all $b\in \bar{K}$ we have $X-b=X-a+a-b$, hence $\bar{w}(X-b)=\inf \{\delta,\ \bar{v}(a-b)\}$. Therefore $\bar{w}(X-b)\leq \bar{w}(X-a)$. \end{proof} \begin{lemma}\label{minimal_pair} Let $(a,\delta)$ be a pair of definition of $\bar{w}$. For each polynomial $f\in K[X]$ of degree $\deg f<\mathcal{D}(\bar{w})$, we have $w(f(X))=\bar{v}(f(a))$. \end{lemma} \begin{proof} Take a polynomial $f\in K[X]$ of degree $r<\mathcal{D}(\bar{w})$. Let $b_1,\dots,b_r$ be the roots of $f$ (the $b_i$ need not be distinct).\\ For each $t$, $1\leq t\leq r$, we have $\bar{v}(a-b_t)<\delta$. Indeed, if there existed $t$ such that $\bar{v}(a-b_t)\ge\delta$, by Lemma \ref{pairofdefinition} $(b_t,\delta)$ would be a pair of definition of $\bar{w}$, with $[K(b_t):K]<\mathcal{D}(\bar{w})$. This contradicts the definition of $\mathcal{D}(\bar{w})$.\\ Therefore, for each $t$, $1\leq t\leq r$, we have $\bar{w}(X-b_t)=\inf\{\delta, \bar{v}(a-b_t)\}=\bar{v}(a-b_t)$.\\ Now write $f(X)=c\prod\limits_{t=1}^r (X-b_t)$.\\ We have $w(f(X))=\bar{w}(f(X))=\bar{v}(c)+\sum\limits_{t=1}^r \bar{w}(X-b_t)=\bar{v}(c)+\sum\limits_{t=1}^r \bar{v}(a-b_t)=\bar{v}(f(a))$. \end{proof} From Proposition \ref{classification} and Proposition \ref{max_ele} we deduce the following Corollary.\\ \begin{corollary} The following conditions are equivalent: \begin{enumerate}[(1)] \item $w$ is a valuation-transcendental extension of $v$ to $K(X)$; \item for every algebraic closure $\bar{K}$ of $K$, every extension $\bar{v}$ of $v$ to $\bar{K}$ and every common extension $\bar{w}$ of $w$ and $\bar{v}$ to $\bar{K}(X)$, there exists $(a,\delta)\in\bar{K}\times\Gamma_{\bar{w}}$ such that $\bar{w}=w_{(a,\delta)}$; \item there exist an algebraic closure $\bar{K}$ of $K$, an extension $\bar{v}$ of $v$ to $\bar{K}$ and a common extension $\bar{w}$ of $w$ and $\bar{v}$ to $\bar{K}(X)$ such that $\bar{w}=w_{(a,\delta)}$ for a certain $(a,\delta)\in\bar{K}\times \Gamma_{\bar{w}}$. \end{enumerate}
\end{corollary} \section{Key Polynomials}\label{keypolynomials} In this section, we introduce the notion of key polynomials of a valuation and study some of their basic properties.\\ \begin{enumerate} \item For each strictly positive integer $b$, we write $\partial_{b}:=\frac{\partial^{b}}{b!\partial X^{b}}$, the $b${\bf-th formal derivative} with respect to $X$. \item For each polynomial $f\in K[X]$, let $\epsilon(f):=\max\limits _{b\in\mathbb{N}^{\ast}}\left\{\frac{w(f)-w(\partial_{b}f)}{b}\right\}$. \end{enumerate} \begin{definition} Let $Q$ be a monic polynomial in $K[X]$. We say that $Q$ is a {\bf key polynomial} for $w$ if for each polynomial $f$ satisfying $$ \epsilon(f)\geq\epsilon(Q), $$ we have $\deg(f)\geq\deg(Q)$. \end{definition} For a monic polynomial $Q\in K[X]$, every polynomial $f\in K[X]$ can be written in a unique way as \begin{equation} f=\sum\limits _{j=0}^{s}f_{j}Q^{j},\label{eq:Qexpansion} \end{equation} with all the $f_{j}\in K[X]$ of degree strictly less than $\deg(Q)$. We call (\ref{eq:Qexpansion}) the $Q${\bf-expansion} of $f$. \begin{definition} Let $g=\sum\limits _{j=0}^{s}g_{j}Q^{j}$ be the $Q$-expansion of an element $g\in K[X]\setminus\{0\}$. We put $$ w_{Q}(g):=\min\limits _{\underset{g_{j}\neq0}{0\leq j\leq s}}w\left(g_{j}Q^{j}\right). $$ We adopt the convention that $w_Q(0)=\infty$. The mapping $w_{Q}:K[X]\rightarrow\Gamma_w\cup\{\infty\}$ is called the {\bf truncation} of $w$ with respect to $Q$. \end{definition} We have the following Proposition (\cite{J1}, Proposition 2.4 (ii) and Proposition 2.6). \begin{proposition} If $Q$ is a key polynomial then $Q$ is irreducible and $w_Q$ is a valuation.\\ \end{proposition} For each polynomial $f(X)\in K[X]$, let $\delta(\bar{w},f)=\max\{\bar{w}(X-a)\ /\ a\in \bar{K},\ f(a)=0\}$.\\ \begin{proposition}(\cite{J2}, Proposition 3.1) Let $f(X)\in K[X]$ be a monic polynomial. We have $\delta(\bar{w}, f)=\epsilon(f)$.
\end{proposition} The quantity $\delta(\bar{w}, f)$ depends only on $w$ and $f$, but not on $\bar{w}$, therefore we will denote it by $\delta(f)$.\\ \begin{definition} Let $\Lambda$ be an ordered set. A set $\{Q_i\}_{i\in\Lambda}$ of key polynomials is said to be {\bf complete} for $w$ if for every $f\in K[X]$ there exists $i\in\Lambda$ such that $w_{Q_i}(f)=w(f)$. \end{definition} \begin{remark}\label{prop_key_pol} By \cite{J1}, Theorem 1.1 and its proof, there is a complete set of key polynomials $\{Q_i\}_{i\in\Lambda}$ for the valuation $w$, having the following properties: \begin{enumerate} \item $\Lambda=\bigcup\limits_{j\in I}\Lambda_j$, with $I=\{0,\dots,N\}$ or $\mathbb{N}$, and for each $j$ we have $\Lambda_j=\{j\}\cup\vartheta_j$, where $\vartheta_j$ is an ordered set without a maximal element, which may be empty. \item There exists $a\in K$ such that $Q_0=X-a$. \item For all $j\in I\setminus\{0\}$ we have $j-1<\vartheta_{j-1}<j$. \item All the polynomials $Q_i$ with $i\in \Lambda_j$ have the same degree, which is strictly less than the degree of the polynomials $Q_{i'}$ with $i'\in \Lambda_{j+1}$. \item For all $i<i'$ in $\Lambda$ we have $w(Q_i)<w(Q_{i'})$ and $\epsilon(Q_i)<\epsilon(Q_{i'})$. \end{enumerate} \end{remark} \medskip For each $i\in \Lambda$, put $w_{i}=w_{Q_i}$, $\beta_i=w(Q_i)$ and $\epsilon_i=\epsilon(Q_i)$.\\ Even though the set of key polynomials $\{Q_i\}_{i\in \Lambda}$ is not unique, the cardinality of $I$ and the degrees $d_j$ of the key polynomials $Q_j$ for each $j\in I$ are uniquely determined by $w$. As well, the valuations $w_j$ for $j\in I$ are uniquely determined by $w$. \medskip \begin{remark}\label{uniqueness} The above uniqueness statements together with conditions 1 and 3 above imply, in particular, that if $i\in\Lambda$ is not the maximal element then the set $\{Q_j\}_{\overset{j\in\Lambda}{j\le i}}$ of key polynomials is not complete (equivalently, $w_i\ne w$).
\end{remark} \noi{\bf Notation.} We will denote by $d(w)$ the degree of $Q_N$ for the maximal element $N$ of $I$ if it exists. If $I=\mathbb N$, we put $d(w)=\infty$.\\ \begin{remark}\label{directconsequence} A direct consequence of the construction of \cite{J1} is that for all $i\in\Lambda_j$ the value group $\Gamma_i$ of $w_i$ is equal to $\Gamma_v+\beta_0\mathbb{Z}+\dots+\beta_j\mathbb{Z}$.\\ \end{remark} Also by construction, $\Lambda$ has a maximal element if and only if the following two conditions hold: \begin{enumerate} \item the set $I=\{0,\dots,N\}$ is finite and \item $\Lambda_N=\{N\}$. \end{enumerate} Keep the above notation. Take an element $i\in\Lambda$. \begin{definition} We say that $Q_i$ is a {\bf limit key polynomial} if the following two conditions hold: \begin{enumerate} \item $i\in I\setminus\{0\}$ \item $\vartheta_{i-1}\ne\emptyset$. \end{enumerate} \end{definition} \begin{proposition}\label{kp_form} There exists a complete set $\{Q_i\}_{i\in \Lambda}$ of key polynomials having the following additional property. For every $j\in I$ such that $\Lambda_{j+1}\ne\emptyset$ and $\vartheta_j=\emptyset$ (in other words, $Q_{j+1}$ is not a limit key polynomial), we can write $Q_{j+1}=q_nQ_j^{n}+\dots+q_tQ_j^t+\dots+q_0$, with $q_n=1$, $\deg q_t<d_j$ and $w(q_t)+t\beta_j=n\beta_j$ for each $t$, $0\leq t\leq n-1$, with $q_t\neq0$.\\ \end{proposition} For a proof of Proposition \ref{kp_form}, one can imitate the proof of Theorem 9.4 of \cite{ML2} or of Theorem 1.11 of \cite{V}.
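The following elementary example, which we include purely for illustration, exhibits a two-step sequence of key polynomials of the form described in Proposition \ref{kp_form}.
\begin{example}
Let $K=\mathbb{Q}$, let $v$ be the $p$-adic valuation for an odd prime $p$, normalized so that $v(p)=1$, and let $w$ be the restriction to $\mathbb{Q}(X)$ of $\bar{w}=w_{(\sqrt{p},\,3/2)}$. Then $Q_0=X$ is a key polynomial with $\beta_0=w(X)=\inf\{3/2,\ \bar{v}(\sqrt{p})\}=1/2$. Since
$$
w(X^2-p)=\bar{w}(X-\sqrt{p})+\bar{w}(X+\sqrt{p})=\frac{3}{2}+\frac{1}{2}=2>1=w_{Q_0}(X^2-p),
$$
the truncation $w_{Q_0}$ does not coincide with $w$, and one can check that $Q_1=Q_0^2-p$ is a key polynomial of the form given in Proposition \ref{kp_form}: here $n=2$, $q_0=-p$ and $w(q_0)+0\cdot\beta_0=1=n\beta_0$. Moreover, $\epsilon_1=\max\{\bar{w}(X-\sqrt{p}),\ \bar{w}(X+\sqrt{p})\}=3/2$, the set $\{Q_0,Q_1\}$ is complete for $w$, $d(w)=2$, and $w$ is a residue-transcendental extension of $v$.
\end{example}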
\medskip \noi{\bf In the sequel, we will always choose a complete set $\{Q_i\}_{i\in \Lambda}$ of key polynomials satisfying the conclusion of Proposition \ref{kp_form}.} \section{Minimal Pairs and Key Polynomials}\label{kp_mp} Let $\{Q_i\}_{i\in\Lambda}$ be a complete set of key polynomials for $w$.\\ In this section, we study the relation between the properties of $\{Q_i\}_{i\in\Lambda}$, minimal pairs for a common extension $\bar{w}$, and the type of $w$ as a residue-transcendental, value-transcendental or valuation-algebraic extension. \begin{proposition}\label{non_max} Take an element $i\in\Lambda$. If $i$ is not the maximal element of $\Lambda$ then \begin{equation}\label{eq:grouprational} \Gamma_i\subset \Gamma_v\otimes_\mathbb{Z} \mathbb{Q}. \end{equation} \end{proposition} \begin{proof} By Remark \ref{directconsequence}, it is sufficient to prove (\ref{eq:grouprational}) for $i\in I$. Take an element $i\in I$. Suppose, inductively, that for each $i'\in I$, $i'<i$ we have $\beta_{i'}\in\Gamma_v\otimes_\mathbb{Z} \mathbb{Q}$ (by Remark \ref{directconsequence} this implies that $\beta_j\in\Gamma_v\otimes_\mathbb{Z} \mathbb{Q}$ for each $j\in\Lambda$ with $j<i$). Assume that $\beta_i\notin \Gamma_v\otimes_\mathbb{Z} \mathbb{Q}$, aiming for contradiction. For each element $g\in K[X]$, all the terms of its $Q_i$-expansion have different values, hence $w_i(g)=w(g)$. Since this holds for all $g\in K[X]$, we have $w_i=w$. By Remark \ref{uniqueness}, this is impossible since $i$ is not the maximal element of $\Lambda$. \end{proof} \begin{corollary}\label{vt_kp} The valuation $w$ is a value-transcendental extension of $v$ if and only if $\Lambda$ has a maximal element $i_0$ (so that $w=w_{i_0}$), $\beta_i\in\Gamma_v\otimes_\mathbb{Z} \mathbb{Q}$ for all $i<i_0$ and $\beta_{i_0}\notin\Gamma_v\otimes_\mathbb{Z} \mathbb{Q}$. \end{corollary} \begin{proposition}\label{rt_kp} If $\beta_i\in\Gamma_v\otimes_\mathbb{Z}\mathbb{Q}$ then $w_i$ is a residue-transcendental extension of $v$.
\end{proposition} \begin{proof} In what follows, we will adopt the convention that $\Gamma_{-1}:=\Gamma_v$. Let $j\in I$ be such that $i\in\Lambda_j$. Let $l=\min\{s\ /\ s\beta_j\in \Gamma_{j-1}\}$. There exists a polynomial $g(X)$ with $\deg(g)<\deg(Q_i)$ such that $w_i(g)=l\beta_i$.\\ Put $t=\left(\frac{Q_i^l}{g}\right)^*$ in $k_{w_i}$. Assume that $t$ is algebraic over $k_v$ and let $$ t^{n}+a_{n-1}t^{n-1}+\dots+a_0=0 $$ be an algebraic equation satisfied by $t$ over $k_v$. Choose representatives $a'_{n-1},\dots,a'_0$ in $K$ of the coefficients $a_{n-1},\dots,a_0$. Then $w_i\left(\left(\frac{Q_i^l}{g}\right)^{n}+a'_{n-1}\left(\frac{Q_i^l}{g}\right)^{n-1}+\dots+a'_0\right)>0$. That is,\\ $w_i\left((Q_i^l)^{n}+a'_{n-1}g(Q_i^l)^{n-1}+\dots+a'_0g^n\right)>w_i(g^n)= nl\beta_i\geq w_i\left((Q_i^l)^{n}+a'_{n-1}g(Q_i^l)^{n-1}+\dots+a'_0g^n\right)$, which is absurd. Hence $t$ is transcendental over $k_v$, and $w_i$ is a residue-transcendental extension of $v$. \end{proof} \begin{corollary}\label{rt_kpresult} If $i$ is not the maximal element of $\Lambda$ then the valuation $w_i$ is residue-transcendental. \end{corollary} \begin{proof} This is a direct consequence of Proposition \ref{non_max} and Proposition \ref{rt_kp}. \end{proof} \begin{proposition}\label{mp_kp} If $M_{\bar{w}}$ has a maximal element $\delta$, then $\Lambda$ has a maximal element $i_0$ and there exists a root $a\in\mathcal{R}(Q_{i_0})$ (here and below, $\mathcal{R}(f)$ denotes the set of roots in $\bar{K}$ of a polynomial $f$) such that $(a,\delta)$ is a minimal pair of definition for $\bar{w}$. \end{proposition} \begin{proof} Assume that $M_{\bar{w}}$ has a maximal element $\delta$. Let $(a,\delta)$ be a minimal pair of definition for $\bar{w}$ (it exists by Proposition \ref{max_ele}).\\ Let $f(X)$ be the minimal polynomial of $a$ over $K$.
The polynomial $f(X)$ is a key polynomial for $w$, since if $g(X)$ is such that $\epsilon(g)\geq\epsilon(f)$ then $\delta(g)\geq\delta (f)=\delta$, hence $\deg(g)\geq \deg(f)$, since $(a,\delta)$ is a minimal pair.\\ If there existed $i\in\Lambda$ such that $\epsilon(Q_i)>\epsilon(f)$, we would have $\delta(Q_i)>\delta(f)=\delta$, which is impossible since $\delta$ is the maximal element of $M_{\bar{w}}$. Hence \begin{equation}\label{eq:epsilonQileepsilonf} \epsilon(Q_i)\le\epsilon(f)\quad\text{ for all }i\in\Lambda. \end{equation} By definition of key polynomial, this implies that $\deg(Q_i)\le\deg(f)$ for all $i\in\Lambda$.\\ Let $i\in\Lambda$ be such that $w_i(f)=w(f)$. By \cite{J1} Proposition 2.10 (ii) we must have $\epsilon(Q_i)\ge\epsilon(f)$, hence $\epsilon(Q_i)=\epsilon(f)$ in view of (\ref{eq:epsilonQileepsilonf}). We conclude that $\delta(Q_i)=\delta$.\\ Choose $a'\in\mathcal{R}(Q_i)$ such that $\bar{w}(X-a')=\delta$. By Lemma \ref{pairofdefinition}, $(a',\delta)$ is a pair of definition for $\bar{w}$, and since $[K(a'):K]=\deg(Q_i)\leq\deg(f)=\mathcal{D}(\bar{w})$, we have $[K(a'):K]=\mathcal{D}(\bar{w})$, so that $(a',\delta)$ is a minimal pair of definition for $\bar{w}$.\\ Now $i$ must be the greatest element of $\Lambda$ since otherwise, if there exists $i'>i$, by Remark \ref{prop_key_pol} 5 we have $\epsilon(Q_{i'})>\epsilon(Q_i)$ and $\delta(Q_{i'})>\delta(Q_i)=\delta$, which is impossible. \end{proof} \begin{theorem}\label{valuation_trans} The valuation $w$ is a valuation-transcendental extension of $v$ if and only if $\Lambda$ has a maximal element.\\ Moreover, in this case, if $\{Q_i\}_{i\in\Lambda}$ is a complete set of key polynomials for $w$ and $i_0$ is the maximal element of $\Lambda$, then for every common extension $\bar{w}$ of $\bar{v}$ and $w$ to $\bar{K}(X)$, $\mathcal D(\bar{w})=d(w)$ and there exists $a\in \mathcal{R}(Q_{i_0})$ such that $w_{(a,\epsilon_{i_0})}=\bar{w}$. \end{theorem} \begin{proof} Let $\bar{w}$ be a common extension of $\bar{v}$ and $w$ to $\bar{K}(X)$.
By Proposition \ref{classification} $w$ is valuation-transcendental if and only if $M_{\bar{w}}$ has a maximal element.\\ Now if $w$ is valuation-transcendental, by Proposition \ref{mp_kp}, $\Lambda$ has a maximal element.\\ Conversely, if $\Lambda$ has a maximal element, by Corollary \ref{vt_kp} and Proposition \ref{rt_kp}, $w$ is valuation-transcendental. \\ The last statement of the Theorem is a direct consequence of Proposition \ref{mp_kp}. \end{proof} \begin{corollary}\label{val_trans_numext} If $n$ is the number of distinct roots of the final key polynomial in $\{Q_i\}_{i\in\Lambda}$, then there exist at most $n$ common extensions of $w$ and $\bar{v}$ to $\bar{K}(X)$. \end{corollary} \begin{example}\label{ExampleX} The pair $(0,\beta_0)=(0,w(X-a))=(0,\epsilon_0)$ is a minimal pair for the valuation $w_0$ (see Remark \ref{prop_key_pol} 2 and the notation that follows it). \end{example} \section{Common Extensions}\label{common_ext} Let $\{Q_i\}_{i\in\Lambda}$ be a complete set of key polynomials for $w$.\\ By Theorem \ref{valuation_trans}, if $w$ is a valuation-transcendental extension of $v$ then $w=w_{i_0}$, where $i_0$ is the maximal element of $\Lambda$, and if $\bar{w}$ is a common extension of $w$ and $\bar{v}$ to $\bar{K}(X)$, then there exists $a\in\mathcal{R}(Q_{i_0})$ such that $\bar{w}=w_{(a,\delta)}$, where $\delta=\max(M_{\bar{w}})=\epsilon(Q_{i_0})$.\\ In this section, we investigate which roots $a\in\mathcal{R}(Q_{i_0})$ are such that $w_{(a,\delta)}$ is a common extension of $w$ and $\bar{v}$.
By definition of $w_{(a,\delta)}$, it is an extension of $\bar{v}$, hence the question is whether the restriction of $w_{(a,\delta)}$ to $K(X)$ is equal to $w$.\\ Note that for each $i\in\Lambda$, since the valuation $w_i$ is a valuation-transcendental extension of $v$, we have that every common extension of $\bar{v}$ and $w_i$ to $\bar{K}(X)$ has the form $w_{(b,\epsilon_i)}$, with $b\in\mathcal{R}(Q_i)$ (Proposition \ref{max_ele}).\\ Assume that we know a minimal pair of definition $(a,\delta)$ for a common extension $\bar{w}$ of $\bar{v}$ and $w$ to $\bar{K}(X)$. The following Lemma gives a criterion to characterize the other minimal pairs of definition for common extensions of $\bar{v}$ and $w$ to $\bar{K}(X)$. \begin{lemma}\label{criterion_roots_mp} Let $\bar{w}$ be a common extension of $\bar{v}$ and $w$ to $\bar{K}(X)$ and let $(a,\delta)$ be a minimal pair for $\bar{w}$. Let $f$ be the minimal polynomial of $a$ over $K$ and let $b\in\mathcal{R}(f)$. Then $w_{(b,\delta)}$ is a common extension of $\bar{v}$ and $w$ to $\bar{K}(X)$ if and only if for every $g\in K[X]$ with $\deg(g)<d(w)$, we have $w(g(X))=\bar{v}(g(b))$. \end{lemma} \begin{proof} Suppose first that $w_{(b,\delta)}$ is a common extension of $\bar{v}$ and $w$. By Theorem \ref{valuation_trans} we have $\mathcal D(w_{(b,\delta)})=d(w)$. By Lemma \ref{minimal_pair}, for every $g\in K[X]$ with $\deg(g)<d(w)$ we have $\bar{v}(g(b))=w(g(X))$.\\ Conversely, suppose that for every $g\in K[X]$ with $\deg(g)<d(w)$ we have $$ \bar{v}(g(b))=w(g(X)). $$ We claim that for every $g\in K[X]$ we have $\bar{v}(g(a))=\bar{v}(g(b))$.\\ By Lemma \ref{minimal_pair} we have $\bar{v}(g(a))=w(g(X))$ for every $g\in K[X]$ with $\deg(g)<d(w)$. Hence for every $g\in K[X]$ with $\deg(g)<d(w)$, we have $\bar{v}(g(a))=\bar{v}(g(b))$. We still need to prove that for every $g\in K[X]$ with $\deg(g)\geq d(w)$ we have $\bar{v}(g(a))=\bar{v}(g(b))$.\\ Consider $g\in K[X]$ with $\deg g\geq d(w)$.
Since $(a,\delta)$ is a minimal pair, $d(w)=\deg(f)$. Let $g(X)=q(X)f(X)+r(X)$ be the Euclidean division of $g(X)$ by $f(X)$, with $\deg r(X)<d(w)$.\\ We have $g(a)=r(a)$ and $g(b)=r(b)$. Since we already know that $\bar{v}(r(a))=\bar{v}(r(b))$, the claim is proved.\\ We want to prove that $w_{(b,\delta)}$ is equal to $w$ on $K(X)$. For this it is sufficient to prove that it is equal to $w$ on $K[X]$, therefore it is sufficient to prove that for every $g(X)\in K[X]$ we have $w_{(a,\delta)}(g(X))=w_{(b,\delta)}(g(X))$.\\ Take $g(X)\in K[X]$ and write the Taylor expansions of $g(X)$:\\ \begin{align*} g(X)&=g_n(a)(X-a)^{n}+\dots+g_0(a),\\ g(X)&=g_n(b)(X-b)^{n}+\dots+g_0(b), \end{align*} where for each $t$, $0\leq t\leq n$, we have that $g_t(X)=\partial_t g(X)$ is a polynomial in $K[X]$, hence, by the above discussion, $\bar{v}(g_t(a))=\bar{v}(g_t(b))$.\\ Now by definition of $w_{(a,\delta)}$ and of $w_{(b,\delta)}$ we have \begin{align*} w_{(a,\delta)}(g(X))&=\inf_{0\leq t\leq n}\{\bar{v}(g_t(a))+t\delta\}\\ &=\inf_{0\leq t\leq n}\{\bar{v}(g_t(b))+t\delta\}\\ &=w_{(b,\delta)}(g(X)). \end{align*} \end{proof} \begin{lemma}\label{roots_kp_ineq} Let $i\in \Lambda$ and suppose that for every $b\in\mathcal{R}(Q_i)$, $\bar{w}_i:=w_{(b,\epsilon_i)}$ is an extension of $w_i$. Let $\alpha\in\bar{K}$. If for every root $b\in\mathcal{R}(Q_i)$ we have $\bar{v}(\alpha-b)<\epsilon_i$ then $\bar{v}(Q_i(\alpha))<w_i(Q_i)=\beta_i$. \end{lemma} \begin{proof} Choose $b\in\mathcal{R}(Q_i)$ such that $\bar{v}(\alpha-b)\geq \bar{v}(\alpha-c)$ for every $c\in \mathcal{R}(Q_i)$. Let $\bar{w}_i:=w_{(b,\epsilon_i)}$. 
We will prove that \begin{equation}\label{roots_kp_ineq_main_eq} \bar{w}_i(X-c)\geq \bar{v}(\alpha-c)\ \text{ for\ every\ }c\in \mathcal{R}(Q_i). \end{equation} For every $c\in\mathcal{R}(Q_i)$ we have $\bar{w}_i(X-c)=\inf\{\epsilon_i,\ \bar{v}(b-c)\}$.\\ Now, $\epsilon_i>\bar{v}(\alpha-c)$ by assumption, and $\bar{v}(b-c)\geq \inf\{\bar{v}(b-\alpha),\ \bar{v}(\alpha-c)\}\geq \bar{v}(\alpha-c)$, where the last inequality follows from the definition of $b$.\\ We have $Q_i(X)=\prod\limits_l(X-c_l)$, where the $c_l$ are the roots of $Q_i$, which need not be distinct. Now the result follows from the facts that $w_i(Q_i(X))=\bar{w}_i(Q_i(X))$, $\bar{w}_i(X-b)=\epsilon_i>\bar{v}(\alpha-b)$ and (\ref{roots_kp_ineq_main_eq}). \end{proof} \begin{lemma}\label{roots_kp_eq} Let $j\in I$ be such that $\vartheta_j=\emptyset$ and for every $b\in\mathcal{R}(Q_j)$ the valuation $\bar{w}_{bj}:=w_{(b,\epsilon_j)}$ is an extension of $w_j$. Assume that $\Lambda_{j+1}\ne\emptyset$ and let $a\in \mathcal{R}(Q_{j+1})$. If there exists $b\in\mathcal{R}(Q_{j})$ such that $\bar{v}(a-b)\geq\epsilon_j$ then $\beta_j=w_j(Q_j)=\bar{v}(Q_j(a))$. \end{lemma} \begin{proof} Choose $b\in\mathcal{R}(Q_{j})$ such that $\bar{v}(a-b)\geq \epsilon_j$, and let $\bar{w}_{bj}:=w_{(b,\epsilon_j)}$. By Lemma \ref{pairofdefinition}, $(a,\epsilon_j)$ is also a pair of definition for $\bar{w}_{bj}$.\\ By Proposition \ref{kp_form} and the comment that follows it, we can write $$ Q_{j+1}=q_nQ_j^{n}+\dots+q_tQ_j^t+\dots+q_0,\quad\text{ with }q_n=1,\ \deg q_t< d_j $$ and \begin{equation}\label{eq:homogeneous} w(q_t)+t\beta_j=n\beta_j\quad\text{ for }t\in\{0,\dots,n\}. \end{equation} By (\ref{eq:homogeneous}), given integers $t_1,t_2$ with $0\le t_1<t_2$, we have $w(q_{t_1})+t_1\beta_j=w(q_{t_2})+t_2\beta_j$, so $w(Q_j(X))=\beta_j=\frac{w(q_{t_1}(X))-w(q_{t_2}(X))}{t_2-t_1}$. Combining this with Lemma \ref{minimal_pair}, we obtain $$ w(Q_j(X))=\frac{\bar{v}(q_0(a))}{n}.
$$ Now $0=Q_{j+1}(a)=Q_j^{n}(a)+\dots+q_t(a)Q_j^t(a)+\dots+q_0(a)$. Therefore, there exist $t_1,t_2$, $0\leq t_1<t_2\leq n$, such that $\bar{v}\left(q_{t_1}(a)Q_j^{t_1}(a)\right)=\bar{v}\left(q_{t_2}(a)Q_j^{t_2}(a)\right)$ (in a sum equal to zero, the minimal value of the terms must be attained at least twice).\\ Thus we have $\bar{v}(Q_j(a))=\frac{\bar{v}(q_{t_1}(a))-\bar{v}(q_{t_2}(a))}{t_2-t_1}=w(Q_j(X))$. \end{proof} \begin{lemma}\label{key_pol_roots_lem} Let $j\in I$ be such that $\vartheta_j=\emptyset$ and for every $b\in\mathcal{R}(Q_j)$ the valuation $\bar{w}_{bj}:=w_{(b,\epsilon_j)}$ is an extension of $w_j$. Assume that $\Lambda_{j+1}\ne\emptyset$. Then for every root $a\in \mathcal{R}(Q_{j+1})$ there exists $b\in\mathcal{R}(Q_j)$ such that $\bar{v}(a-b)\geq\epsilon_j$. \end{lemma} \begin{proof} By Proposition \ref{kp_form} and the comment that follows it, we can write $$ Q_{j+1}=q_nQ_j^{n}+\dots+q_tQ_j^t+\dots+q_0,\quad\text{ where }q_n=1,\ \deg q_t< d_j\text{ and (\ref{eq:homogeneous}) is satisfied}. $$ Let $b_1,\dots,b_t$ be the roots of $Q_j$ and $a_1,\dots,a_s$ the roots of $Q_{j+1}$ (not necessarily distinct). We have $s=n\cdot t$ and the resultant of $Q_{j}$ and $Q_{j+1}$ is given by \begin{equation} \prod\limits_{\ell=1}^{t}Q_{j+1}(b_\ell)=(-1)^{nt^2}\prod\limits_{k=1}^{s}Q_j(a_k). \end{equation} For each $\ell$, $1\leq \ell\leq t$, we have $\bar{v}(Q_{j+1}(b_\ell))=\bar{v}(q_0(b_\ell))=w_j(q_0(X))=n\beta_j$, where the second equality is obtained from Lemma \ref{minimal_pair}. We obtain \begin{equation} \bar{v}\left(\prod\limits_{k=1}^{s}Q_j(a_k)\right)=\bar{v}\left(\prod\limits_{\ell=1}^{t}Q_{j+1}(b_\ell)\right)=tn\beta_j=s\beta_j. \end{equation} Hence \begin{equation}\label{key_pol_roots_eq} \sum\limits_{k=1}^{s}\bar{v}(Q_j(a_k))=s\beta_j.
\end{equation} By Lemmas \ref{roots_kp_ineq} and \ref{roots_kp_eq} we have, for each $k$, $1\leq k\leq s$, $\bar{v}(Q_j(a_k))\leq \beta_j$ (where we apply Lemma \ref{roots_kp_ineq} in the case when for every root $b\in\mathcal{R}(Q_j)$ we have $\bar{v}(a_k-b)<\epsilon_j$ and Lemma \ref{roots_kp_eq} in the case when there exists $b\in\mathcal{R}(Q_{j})$ such that $\bar{v}(a_k-b)\geq\epsilon_j$). Hence, in view of (\ref{key_pol_roots_eq}), we must have the equality $\bar{v}(Q_j(a_k))= \beta_j$ for each $k$, $1\leq k\leq s$.\\ Therefore, by the contrapositive of Lemma \ref{roots_kp_ineq}, for each $k$, $1\leq k\leq s$, there exists $\ell$, $1\leq \ell\leq t$, such that $\bar{v}(a_k-b_\ell)\geq \epsilon_j$.\\ \end{proof} \begin{proposition}\label{key_pol_roots} Let $j\in I$ be such that $\vartheta_j=\emptyset$ and consider $b\in\mathcal{R}(Q_j)$ such that $\bar{w}_{bj}:=w_{(b,\epsilon_j)}$ is an extension of $w_j$. Assume that $\Lambda_{j+1}\ne\emptyset$, and let $a\in \mathcal{R}(Q_{j+1})$ be such that $\bar{v}(a-b)\geq\epsilon_j$. Then $w_{(a,\epsilon_{j+1})}$ is an extension of $w_{j+1}$. \end{proposition} \begin{proof} Let $\bar{w}_{j+1}$ be a common extension of $\bar{v}$ and $w_{j+1}$ to $\bar{K}(X)$. By Theorem \ref{valuation_trans}, there exists a root $a_1$ of $Q_{j+1}$ such that $\bar{w}_{j+1}=w_{(a_1,\epsilon_{j+1})}$. If $a=a_1$ there is nothing to prove, so assume that $a\neq a_1$.\\ In view of Lemma \ref{criterion_roots_mp}, it is sufficient to prove that for all $g\in K[X]$ with $\deg g<d_{j+1}$ we have $\bar{v}(g(a))=w(g(X))$.\\ By Lemma \ref{pairofdefinition}, $(a,\epsilon_j)$ is also a pair of definition for $\bar{w}_{bj}$.
Therefore, by Lemma \ref{minimal_pair}, \begin{equation} \text{for all }g\in K[X]\text{ with }\deg\ g<d_{j}\text{ we have }\bar{v}(g(a))=w_j(g(X))=w(g(X)).\label{eq:equalityfordegg<dj+1} \end{equation} By Lemma \ref{roots_kp_eq}, we have $\bar{v}(Q_j(a))=\beta_j$.\\ We will first prove, by contradiction, that for all $g\in K[X]$ with $\deg g<d_{j+1}$, we have $\bar{v}(g(a))\geq w(g(X))$.\\ Assume that there exists $g\in K[X]$, $\deg g<d_{j+1}$, such that $\bar{v}(g(a))< w(g(X))$. Choose $g$ to be of minimal degree subject to this inequality. By (\ref{eq:equalityfordegg<dj+1}) we must have \begin{equation} \deg g\geq d_j.\label{eq:degreelessorequal} \end{equation} Let $g(X)=q(X)Q_j(X)+r(X)$ be the Euclidean division of $g(X)$ by $Q_j(X)$, with $$ \deg r< d_j. $$ Note that \begin{equation} \deg q<\deg g.\label{eq:degq<degg} \end{equation} We have \begin{align*} \bar{v}(g(a))&\geq \inf\left\{\bar{v}(q(a))+\bar{v}(Q_j(a)),\ \bar{v}(r(a))\right\}\\ &\geq \inf\left\{w_j(q(X))+w_j(Q_j(X)),\ w_j(r(X))\right\}\\ &=w_j(g(X))\\ &=w(g(X)), \end{align*} where the second inequality follows from (\ref{eq:equalityfordegg<dj+1}), (\ref{eq:degq<degg}), the assumed minimality of $\deg\ g$ and the fact that $(a,\epsilon_j)$ is a pair of definition for $\bar{w}_{bj}$ which extends $w_j$. The above inequality contradicts our assumption. Again, aiming for contradiction, we will assume that there exists $$ g\in K[X],\quad d_j\leq \deg g<d_{j+1} $$ such that $\bar{v}(g(a))>w(g(X))$. Assume that $g$ is chosen of minimal degree subject to this inequality. Let $Q_{j+1}=Qg+R$ be the Euclidean division of $Q_{j+1}(X)$ by $g(X)$, with $\deg R< \deg g$.\\ We have $w(Q_{j+1})>w(Qg)=w(R)$, since $w_j(Qg)=w(Qg)$ and $w_j(R)=w(R)$ but $w(Q_{j+1})>w_j(Q_{j+1})$. Moreover, we have $0=Q_{j+1}(a)=Q(a)g(a)+R(a)$. Therefore, we must have $$ \bar{v}(R(a))=\bar{v}(Q(a)g(a)). $$ But $\bar{v}(R(a))=w(R(X))$ by the assumed minimality of the degree of $g$. 
Hence we must have $w(Q(X)g(X))=\bar{v}(Q(a)g(a))$, but $w(Q(X))\leq \bar{v}(Q(a))$ and $w(g(X))<\bar{v}(g(a))$, which is impossible. \end{proof} As a direct consequence of Lemma \ref{key_pol_roots_lem} and Proposition \ref{key_pol_roots} we have the following Theorem: \begin{theorem}\label{key_pol_all_roots} Let $j\in I$ be such that $\vartheta_j=\emptyset$ and such that for every $b\in\mathcal{R}(Q_j)$ the valuation $\bar{w}_{bj}:=w_{(b,\epsilon_j)}$ is an extension of $w_j$. Assume that $\Lambda_{j+1}\ne\emptyset$. Then for every root $a\in \mathcal{R}(Q_{j+1})$, $w_{(a,\epsilon_{j+1})}$ is an extension of $w_{j+1}$. \end{theorem} \begin{corollary}\label{allroots} Let $j_0\in I$ be such that $w=w_{j_0}$ and assume that for every $j\in I$ we have $\vartheta_j=\emptyset$. Then for every root $a\in \mathcal{R}(Q_{j_0})$ the valuation $w_{(a,\epsilon_{j_0})}$ is an extension of $w$. \end{corollary} \begin{proof} We use induction on $j\leq j_0$. The base of the induction is nothing but Example \ref{ExampleX}. The induction step is given by Theorem \ref{key_pol_all_roots}. \end{proof}
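We conclude with an elementary example, included purely for illustration, of the situation described in Corollary \ref{allroots} and Corollary \ref{val_trans_numext}.
\begin{example}
Let $K=\mathbb{Q}$, let $v$ be the $p$-adic valuation for an odd prime $p$, normalized so that $v(p)=1$, and let $w$ be the restriction to $\mathbb{Q}(X)$ of $w_{(\sqrt{p},\,3/2)}$. Here one may take $Q_0=X$ and $Q_{j_0}=Q_1=X^2-p$, with $\epsilon_1=3/2$, $d(w)=2$ and $\vartheta_0=\vartheta_1=\emptyset$. For every $g=aX+b\in\mathbb{Q}[X]$ of degree $<2=d(w)$ we have
$$
\bar{v}(g(\sqrt{p}))=\inf\{v(a)+1/2,\ v(b)\}=\bar{v}(g(-\sqrt{p}))=w(g(X)),
$$
since $v(a)+1/2\notin\mathbb{Z}$ can never equal $v(b)\in\mathbb{Z}$. Hence, by Lemma \ref{criterion_roots_mp}, both $w_{(\sqrt{p},\,3/2)}$ and $w_{(-\sqrt{p},\,3/2)}$ are common extensions of $\bar{v}$ and $w$ to $\bar{K}(X)$, as Corollary \ref{allroots} predicts, and by Corollary \ref{val_trans_numext} these are the only ones.
\end{example}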
\section{introduction \label{Sec_I}} In a seminal paper, Ref. \cite{Efimov_1}, Efimov showed that three bosons that interact attractively in vacuum via short-range interactions form three-body bound states at interaction strengths that are not yet sufficient to support two-body bound states. He also showed that the number of three-body bound states is in principle infinite, and that a geometric scaling law governs the bound states \cite{Efimov_2,Braaten_universality,Grimm_and_Ferlaino,Pascal_review_paper,Greene_review,DIncao_review}. Technical advances in the trapping and cooling of atoms \cite{trapping_atoms,kF_and_n} as well as in Feshbach resonances \cite{many_body_ultracold_atoms,Feshbach_resonance_Review} have led to the observation of the Efimov effect in ultracold atomic gases \cite{Efimov_1st_expt,expt_Li_K,expt_ultracold_second_trimer,expt_Li_Cs_1,expt_Li_Cs_2,Efimov_ultracold_summary} and in helium-beam experiments \cite{expt_helium_beam_1,expt_helium_beam_2}. Excited three-body bound states were observed \cite{expt_ultracold_second_trimer,expt_Efimov_scaling_factor}, and the Efimov scaling law was confirmed. The Efimov effect was also generalized to more than three particles \cite{Pascal_review_paper,four_body_0}. It was shown that for a critical mass ratio three fermions and a lighter particle form a four-body bound state \cite{four_body_1}. The four-body bound states of two heavy and two light bosons for different mass ratios were investigated in Ref. \cite{four_body_2}. The formation of a five-body bound state in fermionic mixtures was discussed in Ref. \cite{five_body_1}. Recently, we demonstrated the formation of three-electron bound states in conventional superconductors, and showed that the trimer state competes with the formation of the two-electron Cooper pair \cite{Sanayei}.
For that, we modeled the interaction between two particles ``$i$'' and ``$j$'' as a negative constant $g_{ij}$ in momentum space for incoming and outgoing momenta of particle ``$i$'' smaller than a cutoff $\Lambda_{i}$, following the reasoning of the Cooper problem \cite{Cooper}. We fixed the cutoffs by a typical value of the Debye energy in a conventional superconductor \cite{Sanayei}. In this paper we determine the three-body bound states of an atom in a Fermi mixture for contact interactions. To describe contact interactions we take the limit of the cutoffs $\Lambda_{i}$ to infinity. We show that this model is separable \cite{separable}, leading to a system of two coupled integral equations. This model enables us to calculate the three-body bound-state spectrum in the presence of Fermi seas. In this work, we consider a cold-atom system of Fermi mixtures. We assume a species, labeled ``2'', that interacts attractively with another species of the same density, labeled ``3''. We assume that the two species ``2'' and ``3'' are in different internal states. Next, we include an additional atom, labeled ``1'', that interacts attractively with the other atoms via contact interactions; see Fig. \ref{Fig1}. In general, the three masses $m_{1}$, $m_{2}$, and $m_{3}$ can be different, but we are primarily interested in the case $m_{3}=m_{2}$. We assume that atom ``1'' is a fermion. A similar analysis can be applied when it is a boson. The species ``2'' and ``3'' define the Fermi seas with the Fermi momentum $k_{F}$. This imposes the constraints $k_{2}>k_{F}$ and $k_{3}>k_{F}$ on the momenta of atoms ``2'' and ``3'', respectively. We also assume that the interatomic distances, proportional to $1/k_{F}$, are much larger than the range of the atomic interactions. With this, we neglect the many-body effects on the formation of a three-body bound state within the interatomic distances.
For contact interactions we introduce the \emph{s}-wave scattering lengths, which relate to the contact interactions in their regularized form. We also define a three-body parameter, $\Lambda$, in order to regularize the range of the three-body interactions and to prevent Thomas collapse \cite{Thomas_collapse}. This parameter defines a length scale of the range of the atomic interactions using the van der Waals length \cite{Pascal_review_paper,three_body_parameter_1,three_body_parameter_2}. \begin{figure}[t] \includegraphics[width=0.96\columnwidth]{01_system} \caption{Sketch of an atom in a Fermi mixture. All species interact attractively via contact interactions. Species ``2'' and ``3'' are a Fermi mixture, and atom ``1'' can in general be a boson or fermion. The interaction strengths are denoted by the three negative constants $g_{12}$, $g_{13}$, and $g_{23}$. The species ``2'' and ``3'' are assumed to be in different internal states and $m_{3}=m_{2}$. The density of each species ``2'' and ``3'' is $n_{\mathrm{tot}}/2$, defining an inert Fermi sea with the Fermi momentum $k_{F}=(3\pi^{2}n_{\mathrm{tot}})^{1/3}$. The interatomic distances are proportional to $1/k_{F}$. } \label{Fig1} \end{figure} We calculate the three-body bound states for different mass ratios. We provide an analytical description of the lowest-energy two-body bound states and the two-body continuum, and find the three-body bound-state solutions numerically. For a noninteracting mixture, $g_{23}=0$, we provide an analytical formula for the onset of the lowest-energy two-body bound state at zero energy. For a high mass ratio $m_{2}/m_{1}$, where excited three-body bound states appear, we also find an analytical estimate for the onset of a highest-energy excited three-body bound state at zero energy. With this, we can estimate the amount of the shift that the spectrum undergoes near unitarity due to the Fermi seas.
Further, for our system and interaction model we demonstrate that a generalized scaling law governs the three-body bound states in the presence of Fermi seas. Finally, we propose three experimental scenarios in an ultracold system of fermionic mixtures of Yb isotopes to observe three-body bound states in the presence of Fermi seas. Here the $^{171}\mathrm{Yb}$ isotopes, which are in two different internal states, constitute the Fermi seas, and interact attractively with $^{173}\mathrm{Yb}$. We predict the onset of the three-body bound states and provide an estimate for the threshold energy. This paper is organized as follows. In Sec. \ref{Sec_II} we provide the main formulation of the problem for contact interactions, and derive a system of two coupled integral equations describing an atom in a Fermi mixture. In Sec. \ref{Sec_III} we present our results for two and three interacting pairs in the presence of Fermi seas, and demonstrate a generalized scaling law governing the three-body bound states. Here we also derive an analytical estimate to describe the effect of the Fermi seas near unitarity. In Sec. \ref{Sec_IV} we present three experimental signatures of a three-body bound state in an ultracold Fermi mixture of Yb isotopes. Finally, in Sec. \ref{Sec_V} we present our concluding remarks. \section{formulation of the problem\label{Sec_II}} The Schr{\"o}dinger equation for a system of three atoms in momentum space is \begin{equation} \left(\frac{\hbar^{2}k_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}k_{3}^{2}}{2m_{3}}+\hat{U}_{12}+\hat{U}_{13}+\hat{U}_{23}-E\right)\psi=0,\label{main Schrodinger Eq} \end{equation} where $\hbar$ is the reduced Planck constant, $m_{i}$ and $\mathbf{k}_{i}$ are the mass and momentum of atom ``$i$'', respectively, $E$ is the energy, and $\psi=\psi(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})$ is the wave function.
We consider the interaction $\hat{U}_{ij}$ between the atom ``$i$'' and ``$j$'', $i,j=1,2,3$ and $i\neq j$, as \begin{equation} \hat{U}_{ij}\psi=g_{ij}\theta_{\Lambda_{i}}(\mathbf{k}_{i})\theta_{\Lambda_{j}}(\mathbf{k}_{j})\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\theta_{\Lambda_{i}}(\mathbf{k}_{i}-\mathbf{q})\theta_{\Lambda_{j}}(\mathbf{k}_{j}+\mathbf{q})\psi,\label{Cooper interaction operator} \end{equation} where $\mathbf{q}$ is the momentum transfer \cite{momentum_transfer} and $g_{ij}<0$ is the interaction strength; see Ref. \cite{Sanayei}. The resulting operators $\hat{U}_{ij}\psi$ are given in Appendix B. The cutoff function $\theta_{a,b}(\mathbf{k})$ for two real numbers $0\leqslant a<b$ is defined as \begin{equation} \theta_{a,b}(\mathbf{k})=\begin{cases} 1 & \text{for }a\leqslant|\mathbf{k}|\leqslant b,\\ 0 & \text{otherwise}, \end{cases}\label{cutoff func} \end{equation} and $\theta_{b}(\mathbf{k})\equiv\theta_{0,b}(\mathbf{k})$. Here we consider three-body bound states with vanishing total momentum. We also consider a singlet state for the species \textquotedblleft 2\textquotedblright{} and \textquotedblleft 3\textquotedblright{} in the following. The Fermi seas demand the constraints $k_{2}>k_{F}$ and $k_{3}>k_{F}$ on the momentum of the atoms ``2'' and ``3'', respectively. The threshold energy of the bound states is \begin{figure}[t] \includegraphics[width=1\columnwidth]{02_eqMassComp2} \caption{Energy $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{23}$ for three interacting pairs, where $m_{2}/m_{1}=1$ and $a_{12}\approx-36r_{D}$. The red curves show the solution in vacuum, $k_{F}=0$. The blue curves show the result in the presence of Fermi seas, $k_{F}r_{D}\approx0.17$. The single blue curve is the three-body bound-state solution for $k_{F}\protect\neq0$. The gray dashed curves are the lowest-energy two-body bound-state solutions of the two-body continuum in vacuum, cf. Eq. 
(\ref{first dimer in vacuum}), and in the presence of Fermi seas; cf. Eq. (\ref{Cooper pair solution}). The onset of the two-body bound-state continuum is shifted towards negative values of $a_{23}$. The onset of the three-body bound state is pushed towards positive values of $a_{23}$. The dependence of the trimer energy on $a_{23}$ is modified noticeably.} \label{Fig2} \end{figure} \begin{equation} E_{\mathrm{thr}}=\frac{\hbar^{2}}{m_{2}}k_{F}^{2}=2E_{F},\label{threshold energy} \end{equation} where $E_{F}$ denotes the Fermi energy and $m_{3}=m_{2}$. To describe contact interactions we take the limit of the cutoffs $\Lambda_{i}$ and $\Lambda_{j}$ to infinity. We introduce the \emph{s}-wave scattering length, $a_{ij}$, using the following regularization identity: \begin{equation} \frac{2\pi\hbar^{2}}{\mu_{ij}}\frac{1}{g_{ij}}+\frac{2}{\pi}\Lambda_{j}\equiv\frac{1}{a_{ij}}\text{ as }\Lambda_{j}\rightarrow\infty,\label{s wave scattering length} \end{equation} for $i,j=1,2,3$ and $i\neq j$; see Appendix A. Here, $\mu_{ij}$ is a reduced mass, $1/\mu_{ij}=1/m_{i}+1/m_{j}$, $m_{3}=m_{2}$, and $\Lambda_{i}\sim\Lambda_{j}$. Next, we define $\Lambda$ as the three-body parameter that fixes the range of the atomic interactions and regularizes the three-body bound states \cite{Pascal_review_paper,three_body_parameter_1,three_body_parameter_2}. We also define a length scale, $r_{D}$, as \begin{equation} r_{D}=\frac{1}{\Lambda}.\label{length scale} \end{equation} The value of $\Lambda$ is chosen such that $\Lambda\gg k_{F}$, implying that $r_{D}\ll1/k_{F}$. With this, we neglect the many-body effects on the formation of a three-body bound state. 
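The cutoff function of Eq. (\ref{cutoff func}) and the regularization relation (\ref{s wave scattering length}) are easy to illustrate numerically. A minimal Python sketch in dimensionless units $\hbar=\mu_{ij}=1$ (the function names and the numerical values are ours, chosen only for illustration): for a fixed scattering length the bare coupling $g$ implied by the identity vanishes from below as the cutoff grows, while $1/a$ stays put.

```python
import numpy as np

def theta(k, a, b):
    """Cutoff function theta_{a,b}(k) of Eq. (cutoff func): 1 for a <= |k| <= b, else 0."""
    k = np.abs(k)
    return np.where((a <= k) & (k <= b), 1.0, 0.0)

def g_of_Lambda(a_s, Lam):
    """Bare coupling g(Lambda) implied by the regularization identity
    2*pi*hbar^2/(mu*g) + (2/pi)*Lambda = 1/a_s, solved for g (hbar = mu = 1)."""
    return 2.0*np.pi / (1.0/a_s - (2.0/np.pi)*Lam)

# Fix a (negative) scattering length and let the cutoff grow: g -> 0^-,
# while the combination defining 1/a_s is cutoff independent.
a_s = -5.0
for Lam in (10.0, 100.0, 1000.0):
    g = g_of_Lambda(a_s, Lam)
    recovered = 1.0 / (2.0*np.pi/g + (2.0/np.pi)*Lam)  # equals a_s by construction
```

This is only a consistency check of the identity itself, not a statement about the full three-body problem, where the separate three-body parameter $\Lambda$ of Eq. (\ref{length scale}) is kept finite.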
We determine $r_{D}$ as the range of the atomic interactions, which we take as the van der Waals length, $\ell_{ij}^{(\mathrm{vdW})}=\frac{1}{2}(2\mu_{ij}C_{6}/\hbar^{2})^{1/4}$, where $C_{6}$ is a dispersive coefficient associated with the polarizability of the electronic cloud of the atoms \cite{Pascal_review_paper,Feshbach_resonance_Review,C6_coeff_1,C6_coeff_Yb_1,C6_coeff_2,C6_coeff_3}. We also assume that the range of the interactions is much larger than the Compton wave length of the particles, $r_{D}\gg\lambda_{\mathrm{C}}$, implying that relativistic corrections to the three-body bound-state spectrum can be neglected. In what follows, we refer to a two-body bound state of atoms ``$i$'' and ``$j$'' as a dimer-$ij$, and to a three-body bound state of atoms ``$i$'', ``$j$'' and ``$l$'' as a trimer-$ijl$. We also refer to a two-body bound state of species ``2'' and ``3'' as a Cooper pair for $k_{F}\neq0$, and as a dimer-23 for $k_{F}=0$. \begin{figure}[t] \includegraphics[width=1\columnwidth]{03_eqMassComp1New} \caption{Energy $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{12}$ for $g_{23}=0$ and $m_{2}/m_{1}=1$. The single red curve is the three-body bound-state solution for $k_{F}=0$, and the single blue curve is the solution for $k_{F}r_{D}\approx0.02$. The gray dashed curves are the lowest-energy two-body bound states of the two-body continuum in vacuum, cf. Eq. (\ref{first dimer in vacuum}), and in the presence of Fermi seas; cf. Eq. (\ref{solution of dimers-12}). The Fermi seas push the onset of the two-body bound-state continuum as well as the onset of the three-body bound state to positive values of $a_{12}$.} \label{Fig3} \end{figure} \begin{figure*} \includegraphics[width=0.5\textwidth]{04_MiddleMass}$\;$\includegraphics[width=0.47\textwidth]{04_LargeMass} \caption{Energy $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{12}$ for $g_{23}=0$. 
Red curves correspond to Efimov states, $k_{F}=0$, and blue curves are the results for $k_{F}r_{D}\approx0.01$: (a) $m_{2}/m_{1}\approx6.64$, (b) $m_{2}/m_{1}\approx22.26$. As the mass ratio $m_{2}/m_{1}$ increases, excited three-body bound states appear. A zoom on the region where a highest-energy excited three-body bound state emerges is depicted in Fig. \ref{Fig5}.} \label{Fig4} \end{figure*} We note that the interaction model (\ref{Cooper interaction operator}) is separable, as shown in Appendix B. This leads to a system of two coupled integral equations for the functions $F_{1}$ and $F_{2}$: \begin{align} \Omega_{12}(g_{12},\mathbf{k}_{2};k_{F},E)F_{2}(\mathbf{k}_{2})= & \xi_{1}(\mathbf{k}_{2};F_{2})+\xi_{2}(\mathbf{k}_{2};F_{1}),\label{Main Eq 1}\\ \Omega_{23}(g_{23},\mathbf{k}_{1};k_{F},E)F_{1}(\mathbf{k}_{1})= & \xi_{3}(\mathbf{k}_{1};F_{2}).\label{Main Eq 2} \end{align} The two functions $\Omega_{12}$ and $\Omega_{23}$ describe the two-body bound-state continuum, dimers-12 and dimers-23, respectively: \begin{equation} \Omega_{12}(g_{12},\mathbf{k}_{2};k_{F},E)=\frac{1}{g_{12}}+\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}\,K_{1}(\mathbf{k}_{2},\mathbf{p}_{3};E),\label{Omega_12 main definition} \end{equation} \begin{equation} \Omega_{23}(g_{23},\mathbf{k}_{1};k_{F},E)=\frac{1}{g_{23}}+\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}\,K_{3}(\mathbf{k}_{1},\mathbf{p}_{3};E),\label{Omega_23 main definition} \end{equation} where for contact interactions we use the regularization relation (\ref{s wave scattering length}) to introduce the \emph{s}-wave scattering lengths.
The three functions $\xi_{1}$, $\xi_{2}$, and $\xi_{3}$ describe the coupling of a pair to the third atom within the range of the length scale $r_{D}$ that is introduced by the three-body parameter $\Lambda$: \begin{equation} \xi_{1}(\mathbf{k}_{2};F_{2})=-\int\frac{d^{3}\tilde{\mathbf{p}}_{3}}{(2\pi)^{3}}\,\tilde{K}_{1}(\mathbf{k}_{2},\tilde{\mathbf{p}}_{3};E)F_{2}(\tilde{\mathbf{p}}_{3}),\label{ksi_1} \end{equation} \begin{equation} \xi_{2}(\mathbf{k}_{2};F_{1})=-\int\frac{d^{3}\tilde{\mathbf{p}}_{1}}{(2\pi)^{3}}\,\tilde{K}_{2}(\mathbf{k}_{2},\tilde{\mathbf{p}}_{1};E)F_{1}(\tilde{\mathbf{p}}_{1}),\label{ksi_2} \end{equation} \begin{equation} \xi_{3}(\mathbf{k}_{1};F_{2})=-2\int\frac{d^{3}\tilde{\mathbf{p}}_{3}}{(2\pi)^{3}}\,\tilde{K}_{3}(\mathbf{k}_{1},\tilde{\mathbf{p}}_{3};E)F_{2}(\tilde{\mathbf{p}}_{3});\label{ksi_3} \end{equation} see Appendix B. The integral kernels $K_{i}$ and $\tilde{K}_{i}$, $i=1,2,3$, and also the functions $F_{1}$ and $F_{2}$ are represented in Appendix B. We assume that $F_{i}(\mathbf{k})=F_{i}(k)$, implying \emph{s}-wave symmetry of the states. We notice that the system of the integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) can be interpreted as the Skorniakov\textendash Ter-Martirosian equation for the zero-range limit of the interaction model (\ref{Cooper interaction operator}); cf. Ref. \cite{Skorniakov_Ter-Martirosian}. \section{results\label{Sec_III}} The coupled integral equations (\ref{Main Eq 1}) and (\ref{Main Eq 2}) describe three interacting pairs. For contact interactions and\emph{ s}-wave symmetry of the states we calculate the two functions $\Omega_{23}$ and $\Omega_{12}$ analytically; see Appendices C and D. These functions describe the lowest-energy two-body bound states and the two-body continuum, dimers-23 and dimers-12, respectively. 
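The reduction of coupled equations such as (\ref{Main Eq 1}) and (\ref{Main Eq 2}) to a finite matrix eigenvalue problem by Gauss-Legendre discretization can be sketched on a generic one-channel analogue, $\Omega(k)F(k)=-\int_{k_{F}}^{\Lambda}K(k,p)F(p)\,dp$. The following Python sketch is ours; the toy kernel is not one of the kernels of Appendix B, but it is rank one, so its discretized eigenvalue has a known limit, $\int_{0}^{\Lambda}e^{-2k^{2}}dk\rightarrow\sqrt{\pi/8}$.

```python
import numpy as np

def eigenvalue_of_discretized_eq(Omega, kernel, kF, Lam, n=64):
    """Discretize Omega(k) F(k) = -int_{kF}^{Lam} K(k, p) F(p) dp on
    Gauss-Legendre nodes and return the largest eigenvalue of the resulting
    matrix; a bound state corresponds to an eigenvalue equal to 1."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    k = 0.5*(Lam - kF)*x + 0.5*(Lam + kF)       # map nodes to (kF, Lam)
    w = 0.5*(Lam - kF)*w
    M = -kernel(k[:, None], k[None, :]) * w[None, :] / Omega(k)[:, None]
    return np.max(np.linalg.eigvals(M).real)

# Rank-one toy check: Omega = 1 and K(k, p) = -exp(-k^2 - p^2) give the
# eigenvalue int_0^Lam exp(-2 k^2) dk, which tends to sqrt(pi/8) for Lam >> 1.
ev = eigenvalue_of_discretized_eq(lambda k: np.ones_like(k),
                                  lambda k, p: -np.exp(-k**2 - p**2),
                                  kF=0.0, Lam=5.0)
```

In the actual calculation one scans the energy $E\leqslant E_{\mathrm{thr}}$ and records where such an eigenvalue crosses unity; this sketch only shows the discretization step.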
Next, for a given value of the three-body parameter $\Lambda\gg k_{F}$ we evaluate the functions $\xi_{1}$, $\xi_{2}$, and $\xi_{3}$ numerically, and solve the system of the integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) in order to find the three-body bound-state solutions. For that, we discretize the interval $(k_{F},\Lambda)$, and evaluate each integral as a truncated sum following the Gauss-Legendre quadrature rule \cite{Gaussian_quadrature_1,Gaussian_quadrature_2,Gaussian_quadrature_3,Gaussian_quadrature_4}. We construct the corresponding matrix equation and calculate the eigenvalues for different values of energy $E\leqslant E_{\mathrm{thr}}$, resulting in the \emph{s}-wave scattering lengths $a_{23}$ and $a_{12}$. We find the values of the functions $F_{1}$ and $F_{2}$ at the grid points as the corresponding eigenvectors; see Appendix E. We note that the two-body bound states appear as continuum states, whereas the three-body bound states appear at discrete energy levels. For three interacting pairs and for a fixed value of $a_{12}$, Fig. \ref{Fig2} shows the energy as a function of the inverse \emph{s}-wave scattering length $1/a_{23}$ for $m_{2}/m_{1}=1$, and comparison with the result for $k_{F}=0$. It reveals a deformation of the Efimov spectrum in the presence of Fermi seas. We notice that for vanishing $k_{F}$, the two-body bound-state continuum emerges at unitarity, $a_{23}\rightarrow\pm\infty$, whereas the presence of Fermi seas expands the region of the two-body bound states to negative values of $a_{23}$. The single red and blue curves show the three-body bound-state solution for $k_{F}=0$ and $k_{F}\neq0$, respectively. For $k_{F}\neq0$ the three-body bound state emerges at a larger value of $|a_{23}|$ at $E=E_{\mathrm{thr}}$, and converges asymptotically to the three-body bound-state solution in vacuum. As a general tendency, the effect of the Fermi seas is more pronounced as we approach unitarity. Our results are consistent with Refs. 
\cite{Efimov_FermiSea_0_0,Efimov_FermiSea_0,Efimov_FermiSea_1,Efimov_FermiSea_2,QCD_Efimov_Cooper}, which explore different, but related scenarios. \begin{figure*} \includegraphics[width=0.495\textwidth]{05_MiddleMassZoom}$\quad$\includegraphics[width=0.47\textwidth]{05_LargeMassZoom} \caption{A zoom on the plot of energy $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{12}$ for (a) $m_{2}/m_{1}\approx6.64$ corresponding to Fig. \ref{Fig4}(a), and (b) $m_{2}/m_{1}\approx22.26$ corresponding to Fig. \ref{Fig4}(b). Both panels show the region where a highest-energy excited three-body bound state emerges. The red vertical arrow locates the onset of a highest-energy excited three-body bound state at zero energy, given by Eq. (\ref{a_12 critical trimer}). The black vertical arrow locates the onset of the lowest-energy two-body bound state at zero energy, given by Eq. (\ref{dimer onset at zero energy}). } \label{Fig5} \end{figure*} To find an analytical solution of the lowest-energy two-body bound state, Cooper pair-23, we note that for $g_{12},g_{13}=0$ the system of the integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) reduces to \begin{equation} \frac{1}{g_{23}}+\lim_{\Lambda_{2}\rightarrow\infty}\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}\,\frac{\theta_{k_{F},\Lambda_{2}}(\mathbf{p}_{3})}{\frac{\hbar^{2}}{m_{2}}p_{3}^{2}-E_{23}}=0,\label{integral Eq for Cooper pair} \end{equation} where $E_{23}<0$ is the bound-state energy of the Cooper pair. We use the regularization relation (\ref{s wave scattering length}) and solve Eq. 
(\ref{integral Eq for Cooper pair}) for \emph{s}-wave symmetry of the states, resulting in \begin{alignat}{1} \frac{1}{a_{23}}= & \frac{2}{\pi}k_{F}+\frac{2}{\pi}\sqrt{-\mathcal{E}_{23}}\arctan\left(\frac{\sqrt{-\mathcal{E}_{23}}}{k_{F}}\right),\label{Cooper pair solution} \end{alignat} where $\mathcal{E}_{23}=2\mu_{23}E_{23}/\hbar^{2}$ and $\mu_{23}$ is a reduced mass, $1/\mu_{23}=1/m_{2}+1/m_{3}=2/m_{2}$; see gray dashed curves in Fig. \ref{Fig2}. Far from the resonance, the Cooper-pair solution for $k_{F}\neq0$ converges asymptotically to the lowest-energy two-body bound state in vacuum, $1/a_{23}=\sqrt{-\mathcal{E}_{23}}$, described by Eq. (\ref{Cooper pair solution}) as $k_{F}\rightarrow0$. For a noninteracting mixture, $g_{23}=0$, Eq. (\ref{Main Eq 2}) has no effect anymore. For \emph{s}-wave symmetry of the states the integral Eq. (\ref{Main Eq 1}) reduces to \begin{align} \Omega_{12}F_{2}(k_{2})= & -\frac{1}{2\pi\frac{\mu_{12}}{m_{1}}k_{2}}\int_{k_{F}}^{\Lambda}d\tilde{p}_{3}\,\tilde{p}_{3}\nonumber \\ & \times\ln\left(\frac{\tilde{p}_{3}^{2}+\frac{2\mu_{12}}{m_{1}}k_{2}\tilde{p}_{3}+k_{2}^{2}-\mathcal{E}}{\tilde{p}_{3}^{2}-\frac{2\mu_{12}}{m_{1}}k_{2}\tilde{p}_{3}+k_{2}^{2}-\mathcal{E}}\right)F_{2}(\tilde{p}_{3}),\label{main Eq for g23 Zero} \end{align} where $\mathcal{E}=2\mu_{12}E/\hbar^{2}$, $E$ is the energy of the three-body bound state, and $\mu_{12}$ is a reduced mass, $1/\mu_{12}=1/m_{1}+1/m_{2}$; see Appendix D. The analytical calculation of the function $\Omega_{12}$ is given by Eq. (\ref{Omega_12_result}). We solve the integral Eq. (\ref{main Eq for g23 Zero}) numerically, using the Gauss-Legendre quadrature rule; see Appendix E. Figure \ref{Fig3} shows the result for vanishing and nonvanishing $k_{F}$, where $m_{2}/m_{1}=1$. 
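The Cooper-pair relation (\ref{Cooper pair solution}) is a one-line transcendental equation; a minimal Python sketch (function names ours, dimensionless units) evaluates it and inverts it for the pair energy by bisection, and also checks the vacuum limit $1/a_{23}\rightarrow\sqrt{-\mathcal{E}_{23}}$ as $k_{F}\rightarrow0$ quoted above.

```python
import numpy as np

def inv_a23(E23, kF):
    """Right-hand side of Eq. (Cooper pair solution):
    1/a23 = (2/pi)*kF + (2/pi)*sqrt(-E23)*arctan(sqrt(-E23)/kF),
    with E23 < 0 the dimensionless pair energy and kF > 0."""
    s = np.sqrt(-E23)
    return (2.0/np.pi)*kF + (2.0/np.pi)*s*np.arctan(s/kF)

def pair_energy(inv_a, kF, lo=-1e8, hi=-1e-12):
    """Invert Eq. (Cooper pair solution) for E23 by bisection; the
    right-hand side grows monotonically with the binding, so this is safe."""
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if inv_a23(mid, kF) > inv_a:
            lo = mid   # mid is too deeply bound
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For instance, `pair_energy(inv_a23(-3.0, 0.5), 0.5)` recovers $\mathcal{E}_{23}=-3$ to bisection accuracy.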
In the presence of the Fermi seas, the onset of the three-body bound state is pushed to positive values of $a_{12}$, and the three-body bound-state solution converges asymptotically to the corresponding Efimov state in vacuum. \begin{figure*}[t] \includegraphics[width=1\textwidth]{06_Scaling} \caption{Demonstration of the generalized scaling law (\ref{scaling_law_1}) and (\ref{scaling_law_2}) for $g_{23}=0$ and $m_{2}/m_{1}\approx22.26$: (a) energy $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{12}$ for $k_{F}r_{D}\approx0.01$, (b) rescaled energy $\mathcal{\tilde{E}}=2\mu_{12}\tilde{E}/\hbar^{2}$ in units of $r_{D}^{-2}$ vs rescaled $r_{D}/\tilde{a}_{12}$, for $k_{i}\protect\mapsto\lambda k_{i}$, $i=1,2,3$, $k_{F}\protect\mapsto\lambda k_{F}$, $a_{12}\protect\mapsto\lambda^{-1}a_{12}$, $E\protect\mapsto\lambda^{2}E$, where $\lambda=\exp(\pi/|s_{0}|)\approx4.84998$. The red vertical arrow in panel (a) locates the onset of the ($n+1$)-th excited three-body bound state at $\mathcal{E}_{\mathrm{thr}}=2\mu_{12}E_{\mathrm{thr}}/\hbar^{2}$. The red vertical arrow in (b) locates the onset of the $n$-th excited three-body bound state of the rescaled spectrum at $\mathcal{\tilde{E}}_{\mathrm{thr}}=\lambda^{2}\mathcal{E}_{\mathrm{thr}}$. The gray dashed lines in both panels show the value of \emph{$\mathcal{E}_{\mathrm{thr}}$.}} \label{Fig6} \end{figure*} We note that for a given value of $k_{F}$, as we increase the mass ratio $m_{2}/m_{1}$, excited three-body bound states appear \cite{excited_trimers}. Figure \ref{Fig4}(a) shows the result for $m_{2}/m_{1}\approx6.64$, where two excited additional three-body bound states are visible. In Fig. \ref{Fig4}(b) we increase the mass ratio to $m_{2}/m_{1}\approx22.26$, and obtain three excited three-body bound states. The red curves in Fig. \ref{Fig4}(a) and Fig. \ref{Fig4}(b) show the result in vacuum, which are the Efimov states. The blue curves show the result in the presence of Fermi seas. 
Near unitarity the Fermi seas have a noticeable influence on the spectrum. Far from the resonance and for low energies, the effect of the Fermi seas is negligible. In the presence of the Fermi seas the translational invariance is broken, and the Efimov scaling law in vacuum does not hold anymore, which we discuss in the following. For $g_{23}=0$ we describe the two-body bound-state continuum, dimers-12, by solving \begin{equation} \Omega_{12}(a_{12},k_{2};k_{F},\mathcal{E}_{12})=0,\label{solution of dimers-12} \end{equation} where $\Omega_{12}$ is given by Eq. (\ref{Omega_12_result}), $\mathcal{E}_{12}=2\mu_{12}E_{12}/\hbar^{2}$, and $E_{12}$ is the energy of the dimers-12. For the lowest-energy dimer-12 we solve Eq. (\ref{solution of dimers-12}) as $k_{2}\rightarrow k_{F}$. The result converges asymptotically to the lowest-energy two-body bound-state solution in vacuum; see gray dashed curves in Fig. \ref{Fig3} for $m_{2}/m_{1}=1$. At zero energy we find an analytical estimate for the onset of the lowest-energy two-body bound state. For that, we solve Eq. (\ref{solution of dimers-12}) as $k_{2}\rightarrow k_{F}$ and $\mathcal{E}_{12}\rightarrow0$, resulting in a critical \emph{s}-wave scattering length, $a_{12,\mathrm{dimer}}^{(c)}\equiv a_{12}(E_{12}=0)$: \begin{alignat}{1} \frac{1}{a_{12,\mathrm{dimer}}^{(c)}}= & \frac{k_{F}}{\pi}\left[1+\frac{1+\frac{2m_{2}}{m_{1}}}{\frac{2m_{2}}{m_{1}}(1+\frac{m_{2}}{m_{1}})}\ln\left(1+\frac{2m_{2}}{m_{1}}\right)\right.\nonumber \\ & \left.+\frac{\pi}{2}\frac{1}{1+\frac{m_{2}}{m_{1}}}\sqrt{1+\frac{2m_{2}}{m_{1}}}\right].\label{dimer onset at zero energy} \end{alignat} Equation (\ref{dimer onset at zero energy}) gives an estimate of the shift to the repulsive region of $a_{12}$ that the lowest-energy two-body bound state undergoes at zero energy in the presence of Fermi seas; see black vertical arrows in Fig. \ref{Fig5}(a) and \ref{Fig5}(b). For $m_{2}\gg m_{1}$ this amount approaches $k_{F}/\pi$.
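The estimate (\ref{dimer onset at zero energy}) is straightforward to evaluate; a short Python sketch (the function name is ours), which also confirms the limit $k_{F}/\pi$ for $m_{2}\gg m_{1}$ stated above:

```python
import numpy as np

def inv_a12_dimer_onset(kF, r):
    """Critical inverse scattering length of Eq. (dimer onset at zero energy);
    r = m2/m1 is the mass ratio, kF the Fermi momentum."""
    t = 2.0*r
    return (kF/np.pi)*(1.0
                       + (1.0 + t)/(t*(1.0 + r))*np.log(1.0 + t)
                       + (np.pi/2.0)/(1.0 + r)*np.sqrt(1.0 + t))

# Equal masses, kF = 1: the bracket is 1 + (3/4) ln 3 + (pi/4) sqrt(3).
onset_equal_masses = inv_a12_dimer_onset(1.0, 1.0)
```

For large mass ratios the bracket tends to 1, so the onset shift approaches $k_{F}/\pi$, in agreement with the text.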
Moreover, for $g_{23}=0$ and a high mass ratio $m_{2}/m_{1}\gg1$, we find an analytical estimate for the onset of a highest-energy excited three-body bound state at zero energy. For that, we note that near the Fermi surface we can approximate the momenta of the species ``2'' and ``3'' to be around $k_{F}$ but in opposite directions, $\mathbf{k}_{2}\sim-\mathbf{k}_{3}$. Because we have assumed that the total momentum of the three-body bound state is zero, this results in a vanishing momentum of atom ``1'', $\mathbf{k}_{1}\sim\mathbf{0}$. Next, we consider the pair-12, where $m_{2}/m_{1}\gg1$ and $k_{1}\sim0$. With these assumptions, the relative momentum of the pair-12, defined as $\mathbf{p}_{12}\equiv[m_{2}/(m_{1}+m_{2})]\mathbf{k}_{1}-[m_{1}/(m_{1}+m_{2})]\mathbf{k}_{2}$, approaches zero. We note that the Fermi surface, $k_{2}\sim k_{F}$, can be described in terms of the relative momentum, $\mathbf{p}_{12}$, and total momentum, $\mathbf{P}_{12}$, of the pair-12 as $|(\mu_{12}/m_{1})\mathbf{P}_{12}-\mathbf{p}_{12}|\sim k_{F}$, where $\mathbf{P}_{12}\equiv\mathbf{k}_{1}+\mathbf{k}_{2}$; see Appendix F. This implies that for $m_{2}/m_{1}\gg1$ and $k_{1}\sim0$ we can approximate the total momentum of the pair-12 to be $P_{12}\sim(\mu_{12}/m_{1})^{-1}k_{F}$. We also note that for large mass ratios $m_{2}/m_{1}$, the threshold energy of the three-body bound state, $\mathcal{E}_{\mathrm{thr}}=2\mu_{12}E_{\mathrm{thr}}/\hbar^{2}=2(1-\mu_{12}/m_{1})k_{F}^{2}$, approaches the threshold energy of the pair-12, $\mathcal{E}_{\mathrm{thr}}^{(12)}=\mathcal{E}_{\mathrm{thr}}/2$. To find the onset of a highest-energy excited three-body bound state at $E=0$, we calculate the onset of the lowest-energy pair-12 for total momentum $P_{12}\sim(\mu_{12}/m_{1})^{-1}k_{F}$ and $E_{12}\sim0$.
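The rewriting of the Fermi-surface condition in pair coordinates used above is exact: with $\mathbf{p}_{12}$ and $\mathbf{P}_{12}$ as defined, $(\mu_{12}/m_{1})\mathbf{P}_{12}-\mathbf{p}_{12}=\mathbf{k}_{2}$ identically, so $k_{2}\sim k_{F}$ is the same statement as $|(\mu_{12}/m_{1})\mathbf{P}_{12}-\mathbf{p}_{12}|\sim k_{F}$. A quick numerical check of the identity (Python, with random momenta and an illustrative mass ratio):

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2 = 1.0, 22.26                  # illustrative mass ratio m2/m1
mu12 = 1.0/(1.0/m1 + 1.0/m2)         # reduced mass of the pair-12

k1 = rng.normal(size=3)              # momentum of atom "1"
k2 = rng.normal(size=3)              # momentum of atom "2"
p12 = (m2*k1 - m1*k2)/(m1 + m2)      # relative momentum of the pair-12
P12 = k1 + k2                        # total momentum of the pair-12

# (mu12/m1)*P12 - p12 reduces to k2 for any k1, k2:
lhs = (mu12/m1)*P12 - p12
```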
To do this, we use the interaction model (\ref{Cooper interaction operator}), and write the Schr{\"o}dinger equation describing the pair-12 for a contact interaction in terms of the relative and total momenta; see Appendix F. The solution for $P_{12}\rightarrow(\mu_{12}/m_{1})^{-1}k_{F}$ and $E_{12}\rightarrow0$ results in an estimate for the critical \emph{s}-wave scattering length, $a_{12,\mathrm{trimer}}^{(c)}\equiv a_{12}(E\approx0)$: \begin{align} \frac{1}{a_{12,\mathrm{trimer}}^{(c)}}\approx & \frac{k_{F}}{\pi}\left[1+\frac{1}{4}\frac{1}{1+\frac{m_{2}}{m_{1}}}\ln\left(4(1+\frac{m_{2}}{m_{1}})\right)\right.\nonumber \\ & \left.-\frac{\pi}{2}\frac{1}{\sqrt{1+\frac{m_{2}}{m_{1}}}}+\frac{1}{2}\frac{1}{1+\frac{m_{2}}{m_{1}}}\right]\text{for }\frac{m_{2}}{m_{1}}\gg1;\label{a_12 critical trimer} \end{align} see Appendix F. For a high mass ratio $m_{2}/m_{1}\gg1$, Eq. (\ref{a_12 critical trimer}) gives an estimate for the amount of the shift to the repulsive region of $a_{12}$ that a highest-energy excited three-body bound state undergoes at zero energy in the presence of Fermi seas. Figure \ref{Fig5} shows a zoom of the region where a highest-energy excited three-body bound state emerges for $m_{2}/m_{1}\approx6.64$ and $m_{2}/m_{1}\approx22.26$. The red vertical arrows locate the critical value (\ref{a_12 critical trimer}). For a very large mass ratio $m_{2}/m_{1}$, the critical value (\ref{a_12 critical trimer}) eventually approaches $k_{F}/\pi$, converging to the lowest-energy two-body bound state at zero energy. Equations (\ref{dimer onset at zero energy}) and (\ref{a_12 critical trimer}) provide a quantitative analysis for the effect of the Fermi seas on the near-resonant spectrum. Finally, we elaborate on the observation that the Fermi seas deform the Efimov spectrum. This effect is more pronounced as we approach unitarity. As a result, the Efimov scaling factor that governs the three-body bound states in vacuum does not hold anymore.
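The estimate (\ref{a_12 critical trimer}) is likewise a one-line evaluation; a short Python sketch (our function name) that also confirms the limit $k_{F}/\pi$ for very large $m_{2}/m_{1}$ stated above:

```python
import numpy as np

def inv_a12_trimer_onset(kF, r):
    """Critical inverse scattering length of Eq. (a_12 critical trimer),
    valid for r = m2/m1 >> 1; kF is the Fermi momentum."""
    s = 1.0 + r
    return (kF/np.pi)*(1.0 + 0.25/s*np.log(4.0*s)
                       - (np.pi/2.0)/np.sqrt(s) + 0.5/s)

# Illustrative evaluation at the mass ratio of Fig. 5(b), kF = 1:
onset = inv_a12_trimer_onset(1.0, 22.26)
```

The value is positive, i.e. the onset sits at positive $a_{12}$, consistent with the shift to the repulsive region described in the text.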
Here we show that a scaling transformation $k_{F}\mapsto\lambda k_{F}$, where $\lambda$ is the Efimov scaling factor, gives rise to a generalized scaling law for our system and interaction model (\ref{Cooper interaction operator}). To this end, we notice that $k_{F}\mapsto\lambda k_{F}$ implies a scaling transformation of all momenta as $k_{i}\mapsto\lambda k_{i}$, for $i=1,2,3$. It also rescales the threshold energy as $E_{\mathrm{thr}}\mapsto\lambda^{2}E_{\mathrm{thr}}$, cf. Eq. (\ref{threshold energy}), implying a general scaling transformation of energy as $E\mapsto\lambda^{2}E$. To ensure that the system of the coupled integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) remains invariant, a scaling transformation of the \emph{s}-wave scattering length, $a\mapsto\lambda^{-1}a$, is also required; see Eqs. (\ref{Omega_first_case_result}), (\ref{Omega_second_case_result}), and (\ref{Omega_12_result}). This results in a discrete scaling law for the three-body bound states in the presence of Fermi seas: \begin{alignat}{1} \frac{\lambda}{a_{n+1}(k_{F})}= & \frac{1}{a_{n}(\lambda k_{F})},\label{scaling_law_1}\\ \lambda^{2}E_{n+1}(k_{F},1/a)= & E_{n}(\lambda k_{F},\lambda/a),\label{scaling_law_2} \end{alignat} where $n\in\mathbb{N}$ is an index labeling the three-body bound state, $\lambda=\exp(\pi/|s_{0}|)$, and the parameter $s_{0}$, which depends on the mass ratio $m_{2}/m_{1}$, is determined in Appendix G. Our finding is in agreement with the result of Ref. \cite{Efimov_FermiSea_4}. Figure \ref{Fig6} demonstrates the generalized scaling law (\ref{scaling_law_1}) and (\ref{scaling_law_2}) for an atomic system of three fermions with a noninteracting mixture, $g_{23}=0$, and $m_{2}/m_{1}\approx22.26$. \begin{figure} \includegraphics[width=1\columnwidth]{07_firstScenario} \caption{Visualization of the first scenario for the experimental signature of a three-body bound state in an ultracold fermionic mixture of $\text{Yb}$ isotopes.
The plot shows the energy $\mathcal{E}=2\mu^{(\mathrm{Yb})}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{23}$, where $r_{D}\equiv\ell_{23}^{(\mathrm{vdW})}\approx4.145\text{ nm}$. The \emph{s}-wave scattering length of $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ is fixed as the value measured via photoassociation spectroscopy (PAS), $a_{12}=a_{13}=a_{12}^{(\mathrm{PAS})}\approx-30.6\text{ nm}$. The three-body bound state emerges at $a_{23}\approx-20.7\text{ nm}$ at the threshold energy $E_{\mathrm{thr}}\approx1.10\text{ kHz}$.} \label{Fig7} \end{figure} \section{experimental signatures \label{Sec_IV}} We propose three scenarios to observe three-body bound states in mixtures of Yb isotopes, in particular a mixture of $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$. In the terminology that is illustrated in Fig. \ref{Fig1}, $^{173}\mathrm{Yb}$ plays the role of species ``1'', and species ``2'' and ``3'' are two internal states of $^{171}\mathrm{Yb}$. The density of each of the $^{171}\mathrm{Yb}$ species is $n_{\mathrm{tot}}/2$, whereas the density of $^{173}\mathrm{Yb}$ is much smaller. We denote the \emph{s}-wave scattering lengths of $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ by $a_{12}$ and $a_{13}$, and the \emph{s}-wave scattering length of two $^{171}\mathrm{Yb}$ isotopes by $a_{23}$. We also assume that $a_{13}=a_{12}$. As measured via two-color photoassociation spectroscopy (PAS), see Ref. \cite{C6_coeff_Yb_1}, $^{171}\mathrm{Yb}$ isotopes are almost noninteracting, while the \emph{s}-wave scattering length between $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ atoms is $a_{12}^{(\mathrm{PAS})}\approx-30.6\text{ nm}\approx-578.23a_{0}$, where $a_{0}$ denotes the Bohr radius \cite{Bohr_radius}. We note that $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ have almost the same atomic mass, where the reduced mass is $\mu_{12}\approx85.9657\text{ u}$ \cite{NIST_data}. 
The reduced mass of two $^{171}\mathrm{Yb}$ isotopes is $\mu_{23}\approx85.4682\text{ u}$ \cite{NIST_data}. The van der Waals dispersive coefficient, $C_{6}^{(\mathrm{Yb})}$, which determines the atomic interaction in a $\mathrm{Yb}_{2}$ molecule, is given in Refs. \cite{C6_coeff_Yb_1,C6_coeff_Yb_2}. We calculate the van der Waals lengths to be $\ell_{12}^{(\mathrm{vdW})}=\frac{1}{2}[2\mu_{12}C_{6}^{(\mathrm{Yb})}/\hbar^{2}]^{1/4}\approx4.151\text{ nm}\approx78.44a_{0}$ and $\ell_{23}^{(\mathrm{vdW})}=\frac{1}{2}[2\mu_{23}C_{6}^{(\mathrm{Yb})}/\hbar^{2}]^{1/4}\approx4.145\text{ nm}\approx78.33a_{0}$. These values fix the corresponding length scales $r_{D}$. Next, we assume that the density of each internal state of $^{171}\mathrm{Yb}$ is $n_{\mathrm{tot}}/2=\frac{1}{2}\times10^{17}\text{ m}^{-3}$. We calculate the value of the Fermi momentum as $k_{F}=(3\pi^{2}n_{\mathrm{tot}})^{1/3}$; cf. Ref. \cite{kF_and_n}. \begin{figure} \includegraphics[width=1\columnwidth]{08_secondScenario} \caption{Visualization of the second scenario, in which $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ interact attractively, while the two $^{171}\mathrm{Yb}$ species are noninteracting. The plot shows the energy $\mathcal{E}=2\mu^{(\mathrm{Yb})}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{12}$, where $r_{D}\equiv\ell_{12}^{(\mathrm{vdW})}\approx4.151\text{ nm}$ and $a_{13}=a_{12}$. The onset of the three-body bound state is at $a_{12}\approx-3193\text{ nm}$ with the threshold energy $E_{\mathrm{thr}}\approx1.09\text{ kHz}$.} \label{Fig8} \end{figure} We adopt the \emph{s}-wave scattering length of $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ as reported in Ref. \cite{C6_coeff_Yb_1}, i.e., $a_{12}=a_{12}^{(\mathrm{PAS})}$, and calculate the three-body bound-state solution for three interacting pairs \cite{Error}. Figure \ref{Fig7} shows the three-body bound-state energy as a function of $1/a_{23}$.
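The length and momentum scales quoted above can be reproduced in a few lines. A minimal sketch in SI units; note that the value $C_{6}^{(\mathrm{Yb})}\approx1932$ a.u. is an assumed, literature-typical input (the text itself refers to Refs. \cite{C6_coeff_Yb_1,C6_coeff_Yb_2} for it), so small deviations in the last digits are expected:

```python
import math

# CODATA constants (SI)
hbar = 1.054571817e-34       # J s
u = 1.66053906660e-27        # unified atomic mass unit, kg
E_h = 4.3597447222e-18       # Hartree energy, J
a_0 = 5.29177210903e-11      # Bohr radius, m

# Assumed input: C6 coefficient of Yb2, converted from atomic units
C6 = 1932.0 * E_h * a_0**6   # J m^6

mu_12 = 85.9657 * u          # reduced mass 171Yb-173Yb
mu_23 = 85.4682 * u          # reduced mass 171Yb-171Yb

def l_vdw(mu):
    # van der Waals length: (1/2) * [2 mu C6 / hbar^2]^(1/4)
    return 0.5 * (2.0 * mu * C6 / hbar**2) ** 0.25

# Fermi momentum for the total density n_tot = 1e17 m^-3
n_tot = 1.0e17
k_F = (3.0 * math.pi**2 * n_tot) ** (1.0 / 3.0)

l_12 = l_vdw(mu_12)          # ~4.15e-9 m
l_23 = l_vdw(mu_23)          # ~4.14e-9 m
```

With these numbers $1/k_{F}\approx0.7\,\mu\mathrm{m}\gg r_{D}$, consistent with the diluteness criterion invoked in the text.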
We find that the onset of the three-body bound state is $a_{23}\approx-20.7\text{ nm}\approx-391.16a_{0}$, emerging at the threshold energy $E_{\mathrm{thr}}\approx1.10\text{ kHz}$. As a first experimental scenario, we propose to tune the interaction between two $^{171}\mathrm{Yb}$ isotopes via optical Feshbach resonances \cite{optical_Feshbach_1,optical_Feshbach_2,optical_Feshbach_3,optical_Feshbach_4,optical_Feshbach_5} across the onset of the three-body bound state, which should result in increased atomic losses. As a second scenario we consider two noninteracting $^{171}\mathrm{Yb}$ isotopes, and calculate the three-body bound-state solution for the two interacting pairs $^{171}\mathrm{Yb}$ - $^{173}\mathrm{Yb}$. Figure \ref{Fig8} shows the energy of the three-body bound state as a function of $1/a_{12}$. It reveals that the three-body bound state emerges at $a_{12}\approx-3193\text{ nm}\approx-60336.40a_{0}$ at the threshold energy $E_{\mathrm{thr}}\approx1.09\text{ kHz}$. Here the \emph{s}-wave scattering length $a_{12}$ is much larger in amplitude than $a_{12}^{(\mathrm{PAS})}$, and the threshold energy is smaller than the value obtained in the first scenario. A three-body bound state can be observed if the interaction between $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ is tuned via interisotope Feshbach resonances \cite{interisotope_Feshbach_1}, or via orbital Feshbach resonances \cite{orbital_Feshbach_2,orbital_Feshbach_3}. \begin{figure} \includegraphics[width=1\columnwidth]{09_thirdScenario} \caption{Visualization of the third scenario. The plot shows the energy $\mathcal{E}=2\mu^{(\mathrm{Yb})}E/\hbar^{2}$ in units of $r_{D}^{-2}$ vs $r_{D}/a_{23}$, where $r_{D}\equiv\ell_{23}^{(\mathrm{vdW})}\approx4.145\text{ nm}$. The \emph{s}-wave scattering length of $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ is fixed to be $a_{12}=a_{13}=2a_{12}^{(\mathrm{PAS})}\approx-2\times30.6\text{ nm}$.
The three-body bound state emerges at $a_{23}\approx-10.4\text{ nm}$ at the threshold energy $E_{\mathrm{thr}}\approx1.10\text{ kHz}$.} \label{Fig9} \end{figure} As a third scenario, if the interaction between $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ is tuned to a larger value in amplitude than $a_{12}^{(\mathrm{PAS})}$, e.g., $a_{12}=2a_{12}^{(\mathrm{PAS})}$, we find that the three-body bound state emerges at $a_{23}\approx-10.4\text{ nm}\approx-196.52a_{0}$ with the same threshold energy as in the first scenario; see Fig. \ref{Fig9}. Here the value of $a_{12}$ is much smaller in amplitude than the value obtained in the second scenario. Also, the value of $a_{23}$ is smaller in amplitude than the value found in the first scenario. A three-body bound state can be observed if the interaction between the two $^{171}\mathrm{Yb}$ species and the interaction between $^{171}\mathrm{Yb}$ and $^{173}\mathrm{Yb}$ are tuned simultaneously. We note that in all scenarios we have assumed that the interatomic distances are much larger than the range of the atomic interactions, $1/k_{F}\gg r_{D}$. The onset of the three-body bound states might slightly deviate if this criterion is not met, since in that case the two $^{171}\mathrm{Yb}$ species compete to form a three-body bound state with $^{173}\mathrm{Yb}$. \section{conclusions\label{Sec_V}} In conclusion, we have demonstrated and characterized three-body bound states of a single fermionic atom interacting with a Fermi mixture of two fermionic species. For this purpose, we have expanded and elaborated on a model previously used to determine trimer states in conventional superconductors, Ref. \cite{Sanayei}. We have shown that the expanded interaction model is separable, leading to a system of integral equations in momentum space. We have presented a full numerical solution of these equations, as well as analytical solutions in limiting cases.
Compared to three atoms interacting in vacuum, the presence of the Fermi seas renormalizes the eigenstates and eigenenergies, in particular near unitarity. Compared to the Efimov scaling law of three atoms in vacuum, we have shown that our system and interaction model obeys a generalized discrete scaling law. We have also proposed three scenarios to obtain experimental signatures of the modified Efimov effect in an ultracold Fermi system of Yb isotopes. \subsection*{ACKNOWLEDGMENTS } We acknowledge support from the Deutsche Forschungsgemeinschaft through Program No. SFB 925, and also The Hamburg Centre for Ultrafast Imaging. We would like to thank Pascal Naidon for very valuable discussions. A.S. also thanks C. Becker, K. Sponselee, B. Abeln, and L. Freystatzky for useful discussions on the experimental signatures of our results. \setcounter{equation}{0} \renewcommand{\theequation}{A\arabic{equation}} \section*{appendix a. introducing the {\large{}{\lowercase{\textit{s}}}}-wave scattering lengths} We consider the Schr{\"o}dinger equation in momentum space governing two atoms ``A'' and ``B'' in vacuum: \begin{equation} \left(\frac{\hbar^{2}k_{\mathrm{A}}^{2}}{2m_{\mathrm{A}}}+\frac{\hbar^{2}k_{\mathrm{B}}^{2}}{2m_{\mathrm{B}}}+\hat{U}_{\mathrm{AB}}-E_{\mathrm{AB}}\right)\phi=0,\label{Schrodinger Eq for atoms a and b} \end{equation} where $m_{i}$ and $\mathbf{k}_{i}$, $i\in\{\mathrm{A},\mathrm{B}\}$, are the atomic mass and momentum, respectively, $E_{\mathrm{AB}}$ is the energy, and $\phi=\phi(\mathbf{k}_{\mathrm{A}},\mathbf{k}_{\mathrm{B}})$ is the wave function. The interaction $\hat{U}_{\mathrm{AB}}$ between the atoms ``A'' and ``B'' follows from the interaction model (\ref{Cooper interaction operator}).
The resulting operator $\hat{U}_{\mathrm{AB}}\phi$ reads: \begin{align} \hat{U}_{\mathrm{AB}}\phi= & g_{\mathrm{AB}}\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\theta_{\Lambda_{\mathrm{B}}}(\mathbf{k}_{\mathrm{B}})\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\,\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}}-\mathbf{q})\nonumber \\ & \times\theta_{\Lambda_{\mathrm{B}}}(\mathbf{k}_{\mathrm{B}}+\mathbf{q})\phi(\mathbf{k}_{\mathrm{A}}-\mathbf{q},\mathbf{k}_{\mathrm{B}}+\mathbf{q}),\label{V_ab} \end{align} where $g_{\mathrm{AB}}<0$ and $\mathbf{q}$ is the momentum transfer \cite{momentum_transfer}. We assume the zero total momentum, $\mathbf{k}_{\mathrm{A}}+\mathbf{k}_{\mathrm{B}}=\mathbf{0}$, and $\Lambda_{\mathrm{B}}=\Lambda_{\mathrm{A}}$. Next, we define the variables $\boldsymbol{\kappa}_{i}\equiv\mathbf{q}+\mathbf{k}_{i}$, $i\in\{\mathrm{A},\mathrm{B}\}$, and write the Schr{\"o}dinger Eq. (\ref{Schrodinger Eq for atoms a and b}) as \begin{align} \left(\frac{\hbar^{2}k_{\mathrm{A}}^{2}}{2\mu_{\mathrm{AB}}}-E_{\mathrm{AB}}\right)\phi(\mathbf{k}_{\mathrm{A}})= & -g_{\mathrm{AB}}\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\int\frac{d^{3}\boldsymbol{\kappa}_{\mathrm{B}}}{(2\pi)^{3}}\nonumber \\ & \times\theta_{\Lambda_{\mathrm{A}}}(\boldsymbol{\kappa}_{\mathrm{B}})\phi(\boldsymbol{\kappa}_{\mathrm{B}}),\label{Schrodinger Eq for electrons a and b, V2} \end{align} where $\mu_{\mathrm{AB}}$ is a reduced mass, $1/\mu_{\mathrm{AB}}=1/m_{\mathrm{A}}+1/m_{\mathrm{B}}$. We define \begin{equation} \mathcal{F}\equiv-4\pi\left(\frac{2\mu_{\mathrm{AB}}}{4\pi\hbar^{2}}g_{\mathrm{AB}}\right)\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\int\frac{d^{3}\mathbf{k}_{\mathrm{A}}}{(2\pi)^{3}}\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\phi(\mathbf{k}_{\mathrm{A}}),\label{function F} \end{equation} and rewrite Eq. 
(\ref{Schrodinger Eq for electrons a and b, V2}) as \begin{equation} (k_{\mathrm{A}}^{2}-\mathcal{E}_{\mathrm{AB}})\phi(\mathbf{k}_{\mathrm{A}})=\mathcal{F},\label{Schrodinger Eq for electrons a and b, V3} \end{equation} where $\mathcal{E}_{\mathrm{AB}}=2\mu_{\mathrm{AB}}E_{\mathrm{AB}}/\hbar^{2}$. For $\mathcal{E}_{\mathrm{AB}}>0$ the solution of Eq. (\ref{Schrodinger Eq for electrons a and b, V3}) is \begin{equation} \phi(\mathbf{k}_{\mathrm{A}})=(2\pi)^{3}\delta^{(3)}(\mathbf{k}_{\mathrm{A}}-\mathbf{K})+\frac{\mathcal{F}}{k_{\mathrm{A}}^{2}-\mathcal{E}_{\mathrm{AB}}+i\varepsilon},\label{ansatz for wave function E>0} \end{equation} where $0<\varepsilon\ll1$, $|\mathbf{K}|^{2}=\mathcal{E}_{\mathrm{AB}}$, and $\delta^{(3)}$ denotes the three-dimensional Dirac delta function. We insert the ansatz (\ref{ansatz for wave function E>0}) into Eq. (\ref{function F}): \begin{alignat}{1} \frac{\mathcal{F}}{4\pi\left(\frac{2\mu_{\mathrm{AB}}}{4\pi\hbar^{2}}g_{\mathrm{AB}}\right)}= & -\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\int\frac{d^{3}\mathbf{k}_{\mathrm{A}}}{(2\pi)^{3}}\theta_{\Lambda_{\mathrm{A}}}(\mathbf{k}_{\mathrm{A}})\Bigl[(2\pi)^{3}\nonumber \\ & \times\delta^{(3)}(\mathbf{k}_{\mathrm{A}}-\mathbf{K})+\frac{\mathcal{F}}{k_{\mathrm{A}}^{2}-\mathcal{E}_{\mathrm{AB}}+i\varepsilon}\Bigr].\label{ansatz in F} \end{alignat} We note that in the zero-energy limit, $\mathcal{E}_{\mathrm{AB}}\rightarrow0^{+}$, we have $\mathcal{F}=-4\pi a_{\mathrm{AB}}$, where $a_{\mathrm{AB}}$ is the \emph{s}-wave scattering length; see Ref. \cite{Scattering_Book}. Next, for contact interactions and \emph{s}-wave symmetry of the states, we evaluate Eq.
(\ref{ansatz in F}) by taking the limit of $\Lambda_{\mathrm{A}}$ to infinity: \begin{equation} \frac{4\pi\hbar^{2}}{2\mu_{\mathrm{AB}}g_{\mathrm{AB}}}+\frac{2}{\pi}\lim_{\Lambda_{\mathrm{A}}\rightarrow\infty}\int_{0}^{\Lambda_{\mathrm{A}}}dk_{\mathrm{A}}\,\frac{k_{\mathrm{A}}^{2}}{k_{\mathrm{A}}^{2}+i\varepsilon}=\frac{1}{a_{\mathrm{AB}}},\label{equation for a and g} \end{equation} which yields \begin{equation} \frac{2\pi\hbar^{2}}{\mu_{\mathrm{AB}}}\frac{1}{g_{\mathrm{AB}}}+\frac{2}{\pi}\Lambda_{\mathrm{A}}=\frac{1}{a_{\mathrm{AB}}}\text{ as }\Lambda_{\mathrm{A}}\rightarrow\infty.\label{renormalization Eq} \end{equation} In this paper, we use Eq. (\ref{renormalization Eq}) as a regularization relation to introduce the \emph{s}-wave scattering length. With this, we can eliminate the ultraviolet divergences due to contact interactions. We also notice that for the bound states, $\mathcal{E}_{\mathrm{AB}}<0$, the solution of Eq. (\ref{Schrodinger Eq for electrons a and b, V3}) is \begin{equation} \phi(\mathbf{k}_{\mathrm{A}})=\frac{\mathcal{F}}{k_{\mathrm{A}}^{2}-\mathcal{E}_{\mathrm{AB}}}.\label{ansatz for wave function E<0} \end{equation} We insert the ansatz (\ref{ansatz for wave function E<0}) into Eq. (\ref{function F}), take the limit $\Lambda_{\mathrm{A}}\rightarrow\infty$, and use Eq. (\ref{renormalization Eq}). This results in \begin{equation} \frac{1}{a_{\mathrm{AB}}}=\sqrt{-\mathcal{E}_{\mathrm{AB}}};\label{first dimer in vacuum} \end{equation} cf. Fig. \ref{Fig10}. Equation (\ref{first dimer in vacuum}) shows that for contact interactions the lowest-energy two-body bound state in vacuum emerges at unitarity, $a_{\mathrm{AB}}\rightarrow\pm\infty$, where $|\mathcal{E}_{\mathrm{AB}}|\rightarrow0^{+}$; cf. Figs. \ref{Fig2} and \ref{Fig3}. \setcounter{equation}{0} \renewcommand{\theequation}{B\arabic{equation}} \section*{appendix b. separable interaction model (\ref{Cooper interaction operator}) and derivation of the system of two coupled integral eqs. 
(\ref{Main Eq 1}) and (\ref{Main Eq 2})} We apply the interaction operators $\hat{U}_{ij}$, given by Eq. (\ref{Cooper interaction operator}), on the wave function $\psi=\psi(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})$, and write the Schr{\"o}dinger Eq. (\ref{main Schrodinger Eq}) as follows: \begin{equation} \left(\frac{\hbar^{2}k_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}k_{3}^{2}}{2m_{3}}-E\right)\psi=-(\hat{U}_{12}+\hat{U}_{13}+\hat{U}_{23})\psi,\label{integral Eq 1} \end{equation} where \begin{align} \hat{U}_{12}\psi= & g_{12}\theta_{\Lambda_{1}}(\mathbf{k}_{1})\theta_{\Lambda_{2}}(\mathbf{k}_{2})\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\theta_{\Lambda_{1}}(\mathbf{k}_{1}-\mathbf{q})\nonumber \\ & \times\theta_{\Lambda_{2}}(\mathbf{k}_{2}+\mathbf{q})\psi(\mathbf{k}_{1}-\mathbf{q},\mathbf{k}_{2}+\mathbf{q},\mathbf{k}_{3}),\label{U12 on psi} \end{align} \begin{align} \hat{U}_{13}\psi= & g_{13}\theta_{\Lambda_{1}}(\mathbf{k}_{1})\theta_{\Lambda_{3}}(\mathbf{k}_{3})\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\theta_{\Lambda_{1}}(\mathbf{k}_{1}-\mathbf{q})\nonumber \\ & \times\theta_{\Lambda_{3}}(\mathbf{k}_{3}+\mathbf{q})\psi(\mathbf{k}_{1}-\mathbf{q},\mathbf{k}_{2},\mathbf{k}_{3}+\mathbf{q}),\label{U13 on psi} \end{align} \begin{align} \hat{U}_{23}\psi= & g_{23}\theta_{\Lambda_{2}}(\mathbf{k}_{2})\theta_{\Lambda_{3}}(\mathbf{k}_{3})\int\frac{d^{3}\mathbf{q}}{(2\pi)^{3}}\theta_{\Lambda_{2}}(\mathbf{k}_{2}-\mathbf{q})\nonumber \\ & \times\theta_{\Lambda_{3}}(\mathbf{k}_{3}+\mathbf{q})\psi(\mathbf{k}_{1},\mathbf{k}_{2}-\mathbf{q},\mathbf{k}_{3}+\mathbf{q}),\label{U23 on psi} \end{align} and the cutoff function $\theta$ is defined by Eq. (\ref{cutoff func}). The resulting operators (\ref{U12 on psi})-(\ref{U23 on psi}) reveal that the interaction operator $\hat{U}$ is separable \cite{separable}. 
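Separability makes the two-body sector algebraic: projecting on the rank-one interaction turns the integral equation into a single scalar condition. As a small self-consistency sketch (our own rearrangement in dimensionless units $2\mu_{\mathrm{AB}}=\hbar=1$, offered as an illustration rather than the code of this work), combining the regularization relation of Appendix A with the bound-state ansatz gives the finite-cutoff condition $1/a=(2/\pi)\,\kappa\arctan(\Lambda/\kappa)$ with $\kappa=\sqrt{-\mathcal{E}_{\mathrm{AB}}}$, which reduces to Eq. (\ref{first dimer in vacuum}) as $\Lambda\rightarrow\infty$:

```python
import math

def bound_state_kappa(a, Lam, tol=1e-12):
    """Bisection solve of (2/pi) * k * atan(Lam/k) = 1/a for k = sqrt(-E),
    the finite-cutoff bound-state condition of a separable contact
    potential (dimensionless units 2*mu = hbar = 1, a > 0)."""
    f = lambda k: (2.0 / math.pi) * k * math.atan(Lam / k) - 1.0 / a
    lo, hi = 1e-12, 10.0 / a     # f(lo) < 0 < f(hi); f increases with k
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

kappa = bound_state_kappa(a=0.5, Lam=1.0e6)
# For Lam >> 1/a the binding wavenumber approaches 1/a, i.e. 1/a = sqrt(-E)
```

The cutoff correction is of order $1/(a\Lambda)$, which is why the three-body parameter below is always chosen with $\Lambda\gg k_{F}$.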
Next, we define the variables $\boldsymbol{\kappa}_{i}\equiv\mathbf{q}+\mathbf{k}_{i}$, for $i=1,2,3$, and also assume $m_{3}=m_{2}$ and $\Lambda_{1}\sim\Lambda_{2}=\Lambda_{3}$. We consider the zero total momentum of the three-body bound states, $\psi(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})=\psi(\mathbf{k}_{2},\mathbf{k}_{3})\delta^{(3)}(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3})$, where $\delta^{(3)}$ denotes the three-dimensional Dirac delta function. We also define three functions $F_{1}$, $F_{2}$, and $F_{3}$ as \begin{align} F_{1}(\mathbf{k}_{1})= & g_{23}\int\frac{d^{3}\boldsymbol{\kappa}_{3}}{(2\pi)^{3}}\theta_{\Lambda_{2}}(-\mathbf{k}_{1}-\boldsymbol{\kappa}_{3})\theta_{\Lambda_{3}}(\boldsymbol{\kappa}_{3})\nonumber \\ & \times\psi(-\mathbf{k}_{1}-\boldsymbol{\kappa}_{3},\boldsymbol{\kappa}_{3}),\label{F1 - V1}\\ F_{2}(\mathbf{k}_{2})= & g_{13}\int\frac{d^{3}\boldsymbol{\kappa}_{3}}{(2\pi)^{3}}\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\boldsymbol{\kappa}_{3})\theta_{\Lambda_{3}}(\boldsymbol{\kappa}_{3})\psi(\mathbf{k}_{2},\boldsymbol{\kappa}_{3}),\label{F2 - V1}\\ F_{3}(\mathbf{k}_{3})= & g_{12}\int\frac{d^{3}\boldsymbol{\kappa}_{2}}{(2\pi)^{3}}\theta_{\Lambda_{1}}(-\mathbf{k}_{3}-\boldsymbol{\kappa}_{2})\theta_{\Lambda_{2}}(\boldsymbol{\kappa}_{2})\psi(\boldsymbol{\kappa}_{2},\mathbf{k}_{3}).\label{F3 - V1} \end{align} We use Eqs. (\ref{F1 - V1})-(\ref{F3 - V1}) and rewrite Eq. 
(\ref{integral Eq 1}) as follows: \begin{align} \left(\frac{\hbar^{2}(\mathbf{k}_{2}+\mathbf{k}_{3})^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}k_{3}^{2}}{2m_{3}}-E\right)\psi(\mathbf{k}_{2},\mathbf{k}_{3})\qquad\qquad\nonumber \\ =-\theta_{\Lambda_{2}}(\mathbf{k}_{2})\theta_{\Lambda_{2}}(\mathbf{k}_{3})F_{1}(-\mathbf{k}_{2}-\mathbf{k}_{3})-\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{k}_{3})\nonumber \\ \times\theta_{\Lambda_{3}}(\mathbf{k}_{3})F_{2}(\mathbf{k}_{2})-\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{k}_{3})\theta_{\Lambda_{2}}(\mathbf{k}_{2})F_{3}(\mathbf{k}_{3}).\label{integral Eq 2} \end{align} Equation (\ref{integral Eq 2}) provides an ansatz for the wave function: \begin{widetext} \begin{equation} \psi(\mathbf{k}_{2},\mathbf{k}_{3})=-\frac{\theta_{\Lambda_{2}}(\mathbf{k}_{2})\theta_{\Lambda_{2}}(\mathbf{k}_{3})F_{1}(-\mathbf{k}_{2}-\mathbf{k}_{3})+\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{k}_{3})\theta_{\Lambda_{3}}(\mathbf{k}_{3})F_{2}(\mathbf{k}_{2})+\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{k}_{3})\theta_{\Lambda_{2}}(\mathbf{k}_{2})F_{3}(\mathbf{k}_{3})}{\frac{\hbar^{2}(\mathbf{k}_{2}+\mathbf{k}_{3})^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}k_{3}^{2}}{2m_{3}}-E}.\label{wavefunction ansatz} \end{equation} \end{widetext}We take into account the Fermi sea constraints by $k_{2}>k_{F}$ and $k_{3}>k_{F}$. We also assume $g_{13}=g_{12}$. If the species ``2'' and ``3'' are in a singlet state, then $F_{3}=F_{2}$. 
Now we define $\mathbf{p}_{1}\equiv-\mathbf{k}_{2}-\boldsymbol{\kappa}_{3}$, $\mathbf{p}_{2}\equiv-\mathbf{k}_{1}-\boldsymbol{\kappa}_{3}$, $\mathbf{p}_{3}\equiv\boldsymbol{\kappa}_{3}$, and rewrite the unknown functions $F_{1}$ and $F_{2}$ as follows: \begin{align} F_{1}(\mathbf{k}_{1})= & g_{23}\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}\theta_{k_{F},\Lambda_{2}}(-\mathbf{k}_{1}-\mathbf{p}_{3})\theta_{k_{F},\Lambda_{2}}(\mathbf{p}_{3})\nonumber \\ & \times\psi(-\mathbf{k}_{1}-\mathbf{p}_{3},\mathbf{p}_{3}),\label{F1 _main_representation}\\ F_{2}(\mathbf{k}_{2})= & g_{12}\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{p}_{3})\theta_{k_{F},\Lambda_{2}}(\mathbf{p}_{3})\psi(\mathbf{k}_{2},\mathbf{p}_{3}).\label{F2_main_representation} \end{align} Finally, we choose a three-body parameter $\Lambda\gg k_{F}$ to fix the range of the interactions and to regularize the three-body bound-state solutions. We insert the ansatz (\ref{wavefunction ansatz}) into Eqs. (\ref{F1 _main_representation}) and (\ref{F2_main_representation}), and arrive at the system of two coupled integral Eqs. 
(\ref{Main Eq 1}) and (\ref{Main Eq 2}), where the integral kernels $K_{i}$ and $\tilde{K_{i}}$, $i=1,2,3$, are: \begin{align} K_{1}(\mathbf{k}_{2},\mathbf{p}_{3};E)= & \frac{\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{p}_{3})\theta_{k_{F},\Lambda_{2}}(\mathbf{p}_{3})}{\frac{\hbar^{2}(\mathbf{k}_{2}+\mathbf{p}_{3})^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}p_{3}^{2}}{2m_{2}}-E},\label{K1}\\ K_{2}(\mathbf{k}_{2},\mathbf{p}_{1};E)= & \frac{\theta_{\Lambda_{1}}(\mathbf{p}_{1})\theta_{k_{F},\Lambda_{2}}(-\mathbf{p}_{1}-\mathbf{k}_{2})}{\frac{\hbar^{2}p_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}(\mathbf{p}_{1}+\mathbf{k}_{2})^{2}}{2m_{2}}-E},\label{K2}\\ K_{3}(\mathbf{k}_{1},\mathbf{p}_{3};E)= & \frac{\theta_{k_{F},\Lambda_{2}}(-\mathbf{k}_{1}-\mathbf{p}_{3})\theta_{k_{F},\Lambda_{2}}(\mathbf{p}_{3})}{\frac{\hbar^{2}k_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}(\mathbf{k}_{1}+\mathbf{p}_{3})^{2}}{2m_{2}}+\frac{\hbar^{2}p_{3}^{2}}{2m_{2}}-E},\label{K3} \end{align} \begin{align} \tilde{K}_{1}(\mathbf{k}_{2},\tilde{\mathbf{p}}_{3};E)= & \frac{\theta_{\Lambda}(-\mathbf{k}_{2}-\tilde{\mathbf{p}}_{3})\theta_{k_{F},\Lambda}(\mathbf{k}_{2})\theta_{k_{F},\Lambda}(\tilde{\mathbf{p}}_{3})}{\frac{\hbar^{2}(\mathbf{k}_{2}+\tilde{\mathbf{p}}_{3})^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}\tilde{p}_{3}^{2}}{2m_{2}}-E},\label{K1_tilde}\\ \tilde{K}_{2}(\mathbf{k}_{2},\tilde{\mathbf{p}}_{1};E)= & \frac{\theta_{\Lambda}(\tilde{\mathbf{p}}_{1})\theta_{k_{F},\Lambda}(\mathbf{k}_{2})\theta_{k_{F},\Lambda}(-\tilde{\mathbf{p}}_{1}-\mathbf{k}_{2})}{\frac{\hbar^{2}\tilde{p}_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}k_{2}^{2}}{2m_{2}}+\frac{\hbar^{2}(\tilde{\mathbf{p}}_{1}+\mathbf{k}_{2})^{2}}{2m_{2}}-E},\label{K2_tilde}\\ \tilde{K}_{3}(\mathbf{k}_{1},\tilde{\mathbf{p}}_{3};E)= & 
\frac{\theta_{k_{F},\Lambda}(-\mathbf{k}_{1}-\tilde{\mathbf{p}}_{3})\theta_{\Lambda}(\mathbf{k}_{1})\theta_{k_{F},\Lambda}(\tilde{\mathbf{p}}_{3})}{\frac{\hbar^{2}k_{1}^{2}}{2m_{1}}+\frac{\hbar^{2}(\mathbf{k}_{1}+\tilde{\mathbf{p}}_{3})^{2}}{2m_{2}}+\frac{\hbar^{2}\tilde{p}_{3}^{2}}{2m_{2}}-E}.\label{K3_tilde} \end{align} \setcounter{equation}{0} \renewcommand{\theequation}{C\arabic{equation}} \section*{appendix c. calculation of the function {\large{}$\Omega_{23}$} } For \emph{s}-wave symmetry of the states we write the integral kernel $K_{3}(\mathbf{k}_{1},\mathbf{p}_{3};\mathcal{E})$ as \begin{equation} \mathcal{K}_{3}(k_{1},p_{3};\mathcal{E})=p_{3}\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}v_{\mathrm{max}}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{p_{3}^{2}+k_{1}p_{3}v_{\mathrm{min}}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right),\label{K3 - V2} \end{equation} where $\mathcal{E}=2\mu_{12}E/\hbar^{2}$, $E$ is the energy of the three-body system, $v_{\mathrm{max}}$ and $v_{\mathrm{min}}$ denote the upper- and lower bound of $v\equiv\cos\vartheta_{\mathbf{p}_{3},\mathbf{k}_{1}}$, respectively, and $\mu_{23}$ is a reduced mass, $1/\mu_{23}=1/m_{2}+1/m_{3}=2/m_{2}$. For contact interactions we have: \begin{equation} v_{\mathrm{max}}=\min_{p_{3}}\left(1,\frac{\Lambda_{2}^{2}-k_{1}^{2}-p_{3}^{2}}{2k_{1}p_{3}}\right)\rightarrow1\;\text{as }\Lambda_{2}\rightarrow\infty,\label{v_max} \end{equation} \begin{align} v_{\mathrm{min}}= & \max_{p_{3}}\left(-1,\frac{k_{F}^{2}-k_{1}^{2}-p_{3}^{2}}{2k_{1}p_{3}}\right)\nonumber \\ = & \begin{cases} -1, & \begin{array}{c} \text{for }k_{F}<p_{3}<k_{1}-k_{F}\\ \text{ or }p_{3}>k_{1}+k_{F}, \end{array}\\ \\ \frac{k_{F}^{2}-k_{1}^{2}-p_{3}^{2}}{2k_{1}p_{3}}, & \text{for }k_{1}-k_{F}<p_{3}<k_{1}+k_{F}. 
\end{cases}\label{v_min} \end{align} Next, without loss of generality we assume that $\mathbf{p}_{3}=p_{3}\mathbf{e}_{z}$, where $\mathbf{e}_{z}$ is the unit vector in the direction of the $z$-axis, and calculate the function $\Omega_{23}$ for contact interactions: \begin{alignat}{1} \Omega_{23}\equiv & \Omega_{23}(a_{23},k_{1};k_{F},\mathcal{E})\nonumber \\ \equiv & \frac{4\pi\hbar^{2}}{2\mu_{23}g_{23}}+\frac{1}{\frac{2\mu_{23}}{m_{2}}\pi k_{1}}\lim_{\Lambda_{2}\rightarrow\infty}\int_{k_{F}}^{\Lambda_{2}}dp_{3}\nonumber \\ & \times p_{3}\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}v_{\mathrm{max}}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{p_{3}^{2}+k_{1}p_{3}v_{\mathrm{min}}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right).\label{Omega_23_V1} \end{alignat} To calculate Eq. (\ref{Omega_23_V1}) we consider two cases. For $0<k_{1}\leqslant2k_{F}$ we have: \begin{align} \Omega_{23}= & \frac{4\pi\hbar^{2}}{2\mu_{23}g_{23}}+\frac{1}{\pi k_{1}}\int_{k_{F}}^{k_{1}+k_{F}}dp_{3}\,p_{3}\nonumber \\ & \qquad\times\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{\frac{1}{2}p_{3}^{2}+(\frac{\mu_{23}}{\mu_{12}}-\frac{1}{2})k_{1}^{2}+\frac{1}{2}k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right)\nonumber \\ & +\frac{1}{\pi k_{1}}\lim_{\Lambda_{2}\rightarrow\infty}\int_{k_{1}+k_{F}}^{\Lambda_{2}}dp_{3}\,p_{3}\nonumber \\ & \qquad\times\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{p_{3}^{2}-k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right).\label{Omega_first_case_1} \end{align} We calculate each integral and use Eq. (\ref{s wave scattering length}). 
The result is \begin{align} \Omega_{23}= & \frac{1}{a_{23}}-\frac{k_{1}}{2\pi}-\frac{k_{F}}{\pi}+\frac{2\sqrt{\kappa}}{\pi}\left[\arctan\left(\frac{\frac{1}{2}k_{1}+k_{F}}{\sqrt{\kappa}}\right)-\frac{\pi}{2}\right]\nonumber \\ & +\frac{1}{\pi k_{1}}\left((\frac{\mu_{23}}{\mu_{12}}-\frac{1}{2})k_{1}^{2}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}\right)\nonumber \\ & \times\ln\left(\frac{(\frac{\mu_{23}}{\mu_{12}}-\frac{1}{2})k_{1}^{2}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}+k_{F}k_{1}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right),\label{Omega_first_case_result} \end{align} where $\kappa\equiv(\frac{\mu_{23}}{\mu_{12}}-\frac{1}{4})k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}$. The lowest-energy two-body bound state, Cooper pair-23, is described by \begin{equation} \Omega_{23}(a_{23},k_{1}\rightarrow0;k_{F},\mathcal{E}\rightarrow\mathcal{E}_{23})=0,\label{first dimer23} \end{equation} resulting in Eq. (\ref{Cooper pair solution}); cf. Fig. \ref{Fig2}. 
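The closed form (\ref{Omega_first_case_result}) is convenient for numerical work and can be cross-checked against direct quadrature of Eq. (\ref{Omega_first_case_1}). A minimal sketch in dimensionless units (momenta and $\mathcal{E}$ rescaled by an arbitrary common scale; an illustration, not the code used for the figures), for the equal-mass case $\mu_{23}/\mu_{12}=1$:

```python
import math

def omega23(k1, kF, En, r=1.0):
    """Omega_23 - 1/a_23 from the closed form valid for 0 < k1 <= 2*kF.
    Dimensionless variables; En is the rescaled three-body energy (< 0 here),
    r = mu_23/mu_12 (r = 1 for equal masses)."""
    kappa = (r - 0.25) * k1**2 - r * En          # argument of the arctan
    sq = math.sqrt(kappa)                        # real for En < 0
    A = (r - 0.5) * k1**2 + kF**2 - r * En
    B = r * k1**2 + kF * k1 + kF**2 - r * En
    val = -k1 / (2.0 * math.pi) - kF / math.pi
    val += (2.0 * sq / math.pi) * (math.atan((0.5 * k1 + kF) / sq)
                                   - math.pi / 2.0)
    val += A / (math.pi * k1) * math.log(A / B)
    return val
```

For example, at the upper edge of the branch, $k_{1}=2k_{F}$ (with $k_{F}=1$, $\mathcal{E}=-1$), this evaluates to approximately $-2.078$.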
For $k_{1}\geqslant2k_{F}$ we have: \begin{align} \Omega_{23}= & \frac{4\pi\hbar^{2}}{2\mu_{23}g_{23}}+\frac{1}{\pi k_{1}}\int_{k_{F}}^{k_{1}-k_{F}}dp_{3}\,p_{3}\nonumber \\ & \qquad\times\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{p_{3}^{2}-k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right)\nonumber \\ & +\frac{1}{\pi k_{1}}\int_{k_{1}-k_{F}}^{k_{1}+k_{F}}dp_{3}\,p_{3}\nonumber \\ & \qquad\times\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{\frac{1}{2}p_{3}^{2}+(\frac{\mu_{23}}{\mu_{12}}-\frac{1}{2})k_{1}^{2}+\frac{1}{2}k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right)\nonumber \\ & +\frac{1}{\pi k_{1}}\lim_{\Lambda_{2}\rightarrow\infty}\int_{k_{1}+k_{F}}^{\Lambda_{2}}dp_{3}\,p_{3}\nonumber \\ & \qquad\times\ln\left(\frac{p_{3}^{2}+k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{p_{3}^{2}-k_{1}p_{3}+\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right).\label{Omega_second_case_1} \end{align} We calculate each integral and use Eq. 
(\ref{s wave scattering length}), which results in \begin{align} \Omega_{23}= & \frac{1}{a_{23}}-\frac{2k_{F}}{\pi}-\frac{2\sqrt{\kappa}}{\pi}\left[\arctan\left(\frac{\frac{1}{2}k_{1}-k_{F}}{\sqrt{\kappa}}\right)\right.\nonumber \\ & \left.-\arctan\left(\frac{\frac{1}{2}k_{1}+k_{F}}{\sqrt{\kappa}}\right)+\frac{\pi}{2}\right]\nonumber \\ & +\frac{1}{\pi k_{1}}\left((\frac{\mu_{23}}{\mu_{12}}-\frac{1}{2})k_{1}^{2}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}\right)\nonumber \\ & \times\ln\left(\frac{\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}-k_{F}k_{1}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}{\frac{\mu_{23}}{\mu_{12}}k_{1}^{2}+k_{F}k_{1}+k_{F}^{2}-\frac{\mu_{23}}{\mu_{12}}\mathcal{E}}\right),\label{Omega_second_case_result} \end{align} which is continuous with Eq. (\ref{Omega_first_case_result}) at $k_{1}=2k_{F}$. \setcounter{equation}{0} \renewcommand{\theequation}{D\arabic{equation}} \section*{appendix d. calculation of the function {\large{}$\Omega_{12}$} } For a noninteracting mixture, $g_{23}=0$, the system of the integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) reduces to \begin{multline} \left[\frac{1}{g_{12}}+\int\frac{d^{3}\mathbf{p}_{3}}{(2\pi)^{3}}K_{1}(\mathbf{k}_{2},\mathbf{p}_{3};E)\right]F_{2}(\mathbf{k}_{2})\\ =-\int\frac{d^{3}\tilde{\mathbf{p}}_{3}}{(2\pi)^{3}}\tilde{K}_{1}(\mathbf{k}_{2},\tilde{\mathbf{p}}_{3};E)F_{2}(\tilde{\mathbf{p}}_{3}),\label{main integral Eq for g23 Zero} \end{multline} where the integral kernels $K_{1}$ and $\tilde{K}_{1}$ are given by Eqs. (\ref{K1}) and (\ref{K1_tilde}), respectively.
The cutoff function $\theta_{\Lambda_{1}}(-\mathbf{k}_{2}-\mathbf{p}_{3})$, which appears in $K_{1}$, imposes an upper bound, $u_{\mathrm{max}}$, on the angle between the two momenta $\mathbf{k}_{2}$ and $\mathbf{p}_{3}$, $u\equiv\cos\vartheta_{\mathbf{p}_{3},\mathbf{k}_{2}}$: \begin{equation} u_{\mathrm{max}}=\min_{p_{3}}\left(1,\frac{\Lambda_{1}^{2}-k_{2}^{2}-p_{3}^{2}}{2k_{2}p_{3}}\right)\rightarrow1\;\text{as }\Lambda_{1}\rightarrow\infty.\label{u_max} \end{equation} Next, without loss of generality we assume that $\mathbf{p}_{3}=p_{3}\mathbf{e}_{z}$, where $\mathbf{e}_{z}$ is the unit vector in the direction of the $z$-axis. For contact interactions and \emph{s}-wave symmetry of the states we write Eq. (\ref{main integral Eq for g23 Zero}) as Eq. (\ref{main Eq for g23 Zero}), where \begin{align} \Omega_{12}\equiv & \Omega_{12}(a_{12},k_{2};k_{F},\mathcal{E})\nonumber \\ \equiv & \frac{4\pi\hbar^{2}}{2\mu_{12}g_{12}}+\frac{1}{2\pi\frac{\mu_{12}}{m_{1}}k_{2}}\lim_{\Lambda_{2}\rightarrow\infty}\int_{k_{F}}^{\Lambda_{2}}dp_{3}\nonumber \\ & \times p_{3}\ln\left(\frac{p_{3}^{2}+\frac{2\mu_{12}}{m_{1}}k_{2}p_{3}+k_{2}^{2}-\mathcal{E}}{p_{3}^{2}-\frac{2\mu_{12}}{m_{1}}k_{2}p_{3}+k_{2}^{2}-\mathcal{E}}\right).\label{Omega_12_V1} \end{align} Here, $\mathcal{E}=2\mu_{12}E/\hbar^{2}$ and $E$ is the energy of the three-body system. We calculate the integral (\ref{Omega_12_V1}), and use Eq. 
(\ref{s wave scattering length}) to obtain: \begin{align} \Omega_{12}= & \frac{1}{a_{12}}-\frac{k_{F}}{\pi}+\frac{\sqrt{\eta}}{\pi}\left[\arctan\left(\frac{\frac{\mu_{12}}{m_{1}}k_{2}+k_{F}}{\sqrt{\eta}}\right)\right.\nonumber \\ & \left.-\arctan\left(\frac{\frac{\mu_{12}}{m_{1}}k_{2}-k_{F}}{\sqrt{\eta}}\right)-\pi\right]+\frac{1}{4\pi\frac{\mu_{12}}{m_{1}}k_{2}}\nonumber \\ & \times\left[\left(2(\frac{\mu_{12}}{m_{1}})^{2}-1\right)k_{2}^{2}-k_{F}^{2}+\mathcal{E}\right]\nonumber \\ & \times\ln\left(\frac{k_{2}^{2}+\frac{2\mu_{12}}{m_{1}}k_{F}k_{2}+k_{F}^{2}-\mathcal{E}}{k_{2}^{2}-\frac{2\mu_{12}}{m_{1}}k_{F}k_{2}+k_{F}^{2}-\mathcal{E}}\right),\label{Omega_12_result} \end{align} where $\eta\equiv[1-(\mu_{12}/m_{1})^{2}]k_{2}^{2}-\mathcal{E}$. \setcounter{equation}{0} \renewcommand{\theequation}{E\arabic{equation}} \section*{appendix e. numerical solution of the system of integral eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2})} Recall that we only consider the isotropic solutions of Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}), i.e., $F_{i}(\mathbf{k})=F_{i}(k)$. To solve the system of the two coupled integral Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) we reduce the three-dimensional momentum integrals to one-dimensional integrals over the absolute value of each momentum. Next, we calculate the two functions $\Omega_{23}$ and $\Omega_{12}$ analytically; see Appendices C and D. The analytical results reveal the lowest-energy dimer state and the two-body bound-state continuum. We solve the coupled Eqs. (\ref{Main Eq 1}) and (\ref{Main Eq 2}) for a given three-body parameter $\Lambda\gg k_{F}$. For that, we discretize the integral ranges on the grid points $\{x_{j}^{(N)}\}$, $j=1,2,\ldots,N$, which are the zeros of the Legendre polynomial $P_{N}(x)$.
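This Legendre-node grid, together with the corresponding weights quoted next, is available directly in standard numerical libraries; a minimal sketch (assuming numpy is available; \texttt{leggauss} returns exactly these nodes and weights):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

N = 5
x, w = leggauss(N)                  # zeros of P_N and the quadrature weights

# Classical weight formula: w_j = 2 / [(1 - x_j^2) * P'_N(x_j)^2]
dPN = Legendre.basis(N).deriv()
w_formula = 2.0 / ((1.0 - x**2) * dPN(x) ** 2)

# An N-point Gauss-Legendre rule integrates polynomials up to degree
# 2N - 1 exactly; e.g. the integral of x^8 over [-1, 1] equals 2/9.
approx = float(np.sum(w * x**8))
```

For the integral equations the nodes are mapped affinely from $[-1,1]$ onto each finite momentum interval, e.g. $[k_{F},\Lambda]$.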
We approximate each integral by a truncated sum with weights $w_{j}^{(N)}$: \begin{equation} w_{j}^{(N)}=\frac{2}{1-[x_{j}^{(N)}]^{2}}\frac{1}{[P'_{N}(x_{j}^{(N)})]^{2}},\label{Gauss-Legendre weights} \end{equation} where $P'_{N}(x)=dP_{N}(x)/dx$ \cite{Gaussian_quadrature_1,Gaussian_quadrature_2}. This choice is the so-called Gauss-Legendre quadrature rule, which attains the highest order of accuracy among quadrature rules with the same number of nodes \cite{Gaussian_quadrature_1}. We apply the Gauss-Legendre quadrature rule to each integral and thus construct a matrix equation analogous to each integral equation. For given values of $E$ below the threshold energy (\ref{threshold energy}), we calculate the eigenvalues, which yield the corresponding values of the $s$-wave scattering lengths. The unknown functions $F_{1}$ and $F_{2}$ are then obtained as the eigenvectors of the matrix equations. \setcounter{equation}{0} \renewcommand{\theequation}{F\arabic{equation}} \section*{appendix f. derivation of eq. (\ref{a_12 critical trimer})} The atoms ``1'' and ``2'' interact attractively via contact interactions according to Eq. (\ref{Cooper interaction operator}). We follow Appendix B and rewrite the Schr{\"o}dinger equation describing the pair-12 in terms of the relative momentum, $\mathbf{p}_{12}\equiv(\mu_{12}/m_{1})\mathbf{k}_{1}-(1-\mu_{12}/m_{1})\mathbf{k}_{2}$, and the total momentum, $\mathbf{P}_{12}\equiv\mathbf{k}_{1}+\mathbf{k}_{2}$, as \begin{equation} \frac{4\pi\hbar^{2}}{2\mu_{12}g_{12}}=-4\pi\int\frac{d^{3}\mathbf{p}_{12}}{(2\pi)^{3}}\,\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}},\label{integral Eq for two-body 12} \end{equation} where $\mathcal{E}_{12}=2\mu_{12}E_{12}/\hbar^{2}$, $E_{12}$ is the energy of the pair-12, and $\mu_{12}$ is a reduced mass, $1/\mu_{12}=1/m_{1}+1/m_{2}$.
The Fermi sea imposes a constraint on the momentum of atom ``2'', $k_{2}>k_{F}$, which in terms of the relative and total momenta reads $|\frac{\mu_{12}}{m_{1}}\mathbf{P}_{12}-\mathbf{p}_{12}|>k_{F}$. This constraint translates into an upper bound on $\cos\vartheta_{\mathbf{p}_{12},\mathbf{P}_{12}}$. Without loss of generality, we assume that $\mathbf{P}_{12}=P_{12}\mathbf{e}_{z}$, where $\mathbf{e}_{z}$ is the unit vector in the direction of the $z$-axis. \begin{figure}[b] \includegraphics[width=1\columnwidth]{10_CoopersComp} \caption{Energy $\mathcal{E}=2\mu E/\hbar^{2}$ in units of $R^{-2}$ vs $R/a$ for two equal-mass atoms with a reduced mass $\mu$ and the \emph{s}-wave scattering length $a$, where $R$ denotes an arbitrary length scale. The green curve is the result in vacuum, $k_{F}=0$, given by Eq. (\ref{first dimer in vacuum}). The blue curve shows the result of a Cooper pair with vanishing total momentum described by Eq. (\ref{Cooper pair solution}), where both atoms are immersed in an inert Fermi sea with the Fermi momentum $k_{F}R=1$. The red curve is the result for a pair with the total momentum $k_{F}$, where one atom is in vacuum and the other is subject to an inert Fermi sea with the Fermi momentum $k_{F}R=1$; cf. Eqs. (\ref{a12 for pair-12 with P _1 result}) and (\ref{a12 for pair-12 with P -2 result}). The gray dashed lines show $\mathcal{E}_{\mathrm{thr}}$ and $\mathcal{E}_{\mathrm{thr}}/2$, where $\mathcal{E}_{\mathrm{thr}}=2\mu E_{\mathrm{thr}}/\hbar^{2}=k_{F}^{2}$.} \label{Fig10} \end{figure} To solve Eq. (\ref{integral Eq for two-body 12}) analytically, we assume \emph{s}-wave symmetry of the states and consider two cases.
For $P_{12}\leqslant(\mu_{12}/m_{1})^{-1}k_{F}$ we have: \begin{align} \frac{4\pi\hbar^{2}}{2\mu_{12}g_{12}}= & \frac{-1}{\frac{2\mu_{12}}{m_{1}}\pi P_{12}}\int_{k_{F}-\frac{\mu_{12}}{m_{1}}P_{12}}^{k_{F}+\frac{\mu_{12}}{m_{1}}P_{12}}dp_{12}\,p_{12}\nonumber \\ & \times\frac{p_{12}^{2}+(\frac{\mu_{12}}{m_{1}})^{2}P_{12}^{2}-k_{F}^{2}}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}\nonumber \\ & -\frac{1}{\pi}\int_{k_{F}-\frac{\mu_{12}}{m_{1}}P_{12}}^{k_{F}+\frac{\mu_{12}}{m_{1}}P_{12}}dp_{12}\,p_{12}^{2}\nonumber \\ & \times\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}\nonumber \\ & -\frac{2}{\pi}\int_{k_{F}+\frac{\mu_{12}}{m_{1}}P_{12}}^{\Lambda_{2}}dp_{12}\,p_{12}^{2}\nonumber \\ & \times\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}.\label{a12 for pair-12 with P _1 v1} \end{align} We calculate each integral, take the limit $\Lambda_{2}\rightarrow\infty$, and use Eq. (\ref{s wave scattering length}). The result is \begin{align} \frac{1}{a_{12}}= & \frac{k_{F}}{\pi}-\frac{1}{\pi}\sqrt{\varrho}\left[\arctan\left(\frac{k_{F}-\frac{\mu_{12}}{m_{1}}P_{12}}{\sqrt{\varrho}}\right)\right.\nonumber \\ & \left.+\arctan\left(\frac{k_{F}+\frac{\mu_{12}}{m_{1}}P_{12}}{\sqrt{\varrho}}\right)-\pi\right]+\frac{1}{4\pi\frac{\mu_{12}}{m_{1}}P_{12}}\nonumber \\ & \times\left(\frac{\mu_{12}}{m_{1}}(\frac{2\mu_{12}}{m_{1}}-1)P_{12}^{2}-k_{F}^{2}+\mathcal{E}_{12}\right)\nonumber \\ & \times\ln\left(\frac{\frac{\mu_{12}}{m_{1}}P_{12}^{2}-\frac{2\mu_{12}}{m_{1}}k_{F}P_{12}+k_{F}^{2}-\mathcal{E}_{12}}{\frac{\mu_{12}}{m_{1}}P_{12}^{2}+\frac{2\mu_{12}}{m_{1}}k_{F}P_{12}+k_{F}^{2}-\mathcal{E}_{12}}\right),\label{a12 for pair-12 with P _1 result} \end{align} where $\varrho\equiv\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}$ . 
For $P_{12}\geqslant(\mu_{12}/m_{1})^{-1}k_{F}$ we have: \begin{align} \frac{4\pi\hbar^{2}}{2\mu_{12}g_{12}}= & -\frac{2}{\pi}\int_{0}^{\frac{\mu_{12}}{m_{1}}P_{12}-k_{F}}dp_{12}\,p_{12}^{2}\nonumber \\ & \times\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}\nonumber \\ & -\frac{1}{\frac{2\mu_{12}}{m_{1}}\pi P_{12}}\int_{\frac{\mu_{12}}{m_{1}}P_{12}-k_{F}}^{\frac{\mu_{12}}{m_{1}}P_{12}+k_{F}}dp_{12}\,p_{12}\nonumber \\ & \times\frac{p_{12}^{2}+(\frac{\mu_{12}}{m_{1}})^{2}P_{12}^{2}-k_{F}^{2}}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}\nonumber \\ & -\frac{1}{\pi}\int_{\frac{\mu_{12}}{m_{1}}P_{12}-k_{F}}^{\frac{\mu_{12}}{m_{1}}P_{12}+k_{F}}dp_{12}\,p_{12}^{2}\nonumber \\ & \times\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}\nonumber \\ & -\frac{2}{\pi}\int_{\frac{\mu_{12}}{m_{1}}P_{12}+k_{F}}^{\Lambda_{2}}dp_{12}\,p_{12}^{2}\nonumber \\ & \times\frac{1}{p_{12}^{2}+\frac{\mu_{12}}{m_{1}}(1-\frac{\mu_{12}}{m_{1}})P_{12}^{2}-\mathcal{E}_{12}}.\label{a12 for pair-12 with P _2 v1} \end{align} We calculate each integral, take the limit $\Lambda_{2}\rightarrow\infty$, use Eq.
(\ref{s wave scattering length}), and arrive at: \begin{align} \frac{1}{a_{12}}= & \frac{k_{F}}{\pi}+\frac{1}{\pi}\sqrt{\varrho}\left[\arctan\left(\frac{\frac{\mu_{12}}{m_{1}}P_{12}-k_{F}}{\sqrt{\varrho}}\right)\right.\nonumber \\ & \left.-\arctan\left(\frac{\frac{\mu_{12}}{m_{1}}P_{12}+k_{F}}{\sqrt{\varrho}}\right)+\pi\right]+\frac{1}{4\pi\frac{\mu_{12}}{m_{1}}P_{12}}\nonumber \\ & \times\left(\frac{\mu_{12}}{m_{1}}(\frac{2\mu_{12}}{m_{1}}-1)P_{12}^{2}-k_{F}^{2}+\mathcal{E}_{12}\right)\nonumber \\ & \times\ln\left(\frac{\frac{\mu_{12}}{m_{1}}P_{12}^{2}-\frac{2\mu_{12}}{m_{1}}k_{F}P_{12}+k_{F}^{2}-\mathcal{E}_{12}}{\frac{\mu_{12}}{m_{1}}P_{12}^{2}+\frac{2\mu_{12}}{m_{1}}k_{F}P_{12}+k_{F}^{2}-\mathcal{E}_{12}}\right);\label{a12 for pair-12 with P -2 result} \end{align} see Fig. \ref{Fig10}. As discussed in the text, for $m_{2}/m_{1}\gg1$ we estimate the onset of the highest-energy excited three-body bound state at zero energy by calculating the onset of the lowest-energy pair-12. To do that, we expand Eq. (\ref{a12 for pair-12 with P _1 result}) or Eq. (\ref{a12 for pair-12 with P -2 result}) for $m_{2}/m_{1}\gg1$, as $\mathcal{E}_{12}\rightarrow0$ and $P_{12}\rightarrow(\frac{\mu_{12}}{m_{1}})^{-1}k_{F}$, which results in Eq. (\ref{a_12 critical trimer}). \setcounter{equation}{0} \renewcommand{\theequation}{G\arabic{equation}} \section*{appendix g. calculation of the parameter {\large{}{\lowercase{$s_{0}$}}}} The Efimov scaling factor is $\lambda=\exp(\pi/|s_{0}|)$, where the effect of the mass ratio $m_{2}/m_{1}$ is described by the parameter $s_{0}$.
For \emph{s}-wave symmetry of the states, if only two of the three pairs interact resonantly, then $s_{0}$ is the purely imaginary root of the transcendental equation \begin{equation} \cos\left(\frac{\pi}{2}s_{0}\right)=\frac{2}{\sin2\vartheta}\frac{\sin(\vartheta s_{0})}{s_{0}},\label{Eq for s0 for two interacting pairs} \end{equation} where $\vartheta=\arcsin[(m_{2}/m_{1})/(1+m_{2}/m_{1})]$, $\vartheta\in[0,\pi/2]$. If all three pairs interact resonantly, we obtain $s_{0}$ as the purely imaginary root of the equation \begin{multline} \left[\cos\left(\frac{\pi}{2}s_{0}\right)-\frac{2}{\sin2\vartheta}\frac{\sin(\vartheta s_{0})}{s_{0}}\right]\cos\left(\frac{\pi}{2}s_{0}\right)\\ =\frac{8}{\sin^{2}2\gamma}\frac{\sin^{2}(\gamma s_{0})}{s_{0}^{2}},\label{Eq for s0 for three interacting pairs} \end{multline} where $\gamma=\arcsin\{\sqrt{(m_{1}/m_{2})/[2(1+m_{2}/m_{1})]}\}$, $\gamma\in[0,\pi/4]$. For a proof, see Ref. \cite{Pascal_review_paper}.
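As a numerical illustration (a sketch assuming NumPy and SciPy's \texttt{brentq}; the mass ratio $m_2/m_1=30$ is chosen only so that an imaginary root exists), write $s_0=is$ with $s>0$, so that the first equation becomes $\cosh(\pi s/2)=\frac{2}{\sin 2\vartheta}\,\frac{\sinh(\vartheta s)}{s}$, which can be solved by standard bracketing:

```python
import numpy as np
from scipy.optimize import brentq

def s0_two_pairs(mass_ratio):
    """|s0| for the two-resonant-pair case, writing s0 = i*s with s > 0."""
    theta = np.arcsin(mass_ratio / (1.0 + mass_ratio))
    # Residual of cosh(pi*s/2) = (2/sin(2*theta)) * sinh(theta*s)/s
    def f(s):
        return np.cosh(np.pi * s / 2) - 2.0 / np.sin(2 * theta) * np.sinh(theta * s) / s
    return brentq(f, 1e-6, 10.0)

s = s0_two_pairs(30.0)          # illustrative heavy-heavy-light system, m2/m1 = 30
scaling = np.exp(np.pi / s)     # Efimov scaling factor lambda = exp(pi/|s0|)
```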
\section{Introduction} This contribution deals with the numerical evaluation of the Cauchy integral operator \begin{equation}\label{eq:int_op} (\mathcal C\varphi)(z):=\frac{1}{2\pi i}\int_\Gamma\frac{\varphi(\zeta)}{\zeta-z}\de \zeta,\quad z\in\C\setminus\Gamma, \end{equation} and related complex contour integrals, where $\Gamma$ is a closed curve enclosing a simply connected domain $\Omega\subset\C$ and $\varphi\in C(\Gamma)$, with the contour integral performed in the counterclockwise direction. This class of integral expressions plays a fundamental role in the analytical and numerical solution of numerous problems in applied mathematics that rely on (complex) contour integral representations of smooth (analytic) functions, including, for instance, conformal mapping~\cite{kythe2019handbook,henriciVol3,papamichael2010numerical} and boundary value problems in electrostatics, elastostatics, and potential and Stokes flows~\cite{jaswon1977integral,mikhlin1964integral,muskhelishvili2008} (see Section~\ref{sec:applications}). As is well known, the Cauchy integral operator~\eqref{eq:int_op} defines a complex analytic function $\mathcal C\varphi$ in both~$\Omega$ and its unbounded open complement $\Omega'=\C\setminus\overline\Omega$~\cite{kress2012linear,henriciVol3}. Moreover, if $f$ is an analytic function in $\Omega$, it holds that $\mathcal Cf=f$ in $\Omega$ and $\mathcal Cf=0$ in $\Omega'$, or more explicitly, \begin{equation}\label{eq:CF} \frac{1}{2\pi i}\int_\Gamma \frac{f(\zeta)}{\zeta-z}\de \zeta=\begin{cases}f(z),&z\in\Omega,\\0,&z\in\Omega',\end{cases} \end{equation} which summarizes both the Cauchy integral formula and the Cauchy-Goursat theorem~\cite{henriciVol1}.
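Identity~\eqref{eq:CF} is easy to verify numerically. A minimal sketch (assuming NumPy; the unit-circle contour and the density $f(z)=\sin z$ are purely illustrative) applies the trapezoidal rule in the parametrization $\zeta=e^{it}$:

```python
import numpy as np

M = 64                                   # number of trapezoidal nodes
t = 2 * np.pi * np.arange(M) / M
zeta = np.exp(1j * t)                    # nodes on the unit circle Gamma
dzeta = 1j * zeta * (2 * np.pi / M)      # gamma'(t_j) * dt

def cauchy(z):
    """Trapezoidal approximation of (C f)(z) for f = sin."""
    return np.sum(np.sin(zeta) / (zeta - z) * dzeta) / (2j * np.pi)

assert abs(cauchy(0.3 + 0.2j) - np.sin(0.3 + 0.2j)) < 1e-12   # z in Omega
assert abs(cauchy(2.0 + 1.0j)) < 1e-12                        # z in Omega'
assert abs(cauchy(0.97) - np.sin(0.97)) > 1e-3                # near Gamma
```

The last assertion anticipates the central difficulty addressed in this paper: the same rule that is spectrally accurate away from $\Gamma$ loses its accuracy as $z$ approaches the contour.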
Assuming further that $\Gamma$ is of class $C^2$ and that $\varphi\in C^{0,\alpha}(\Gamma)$ (i.e., that $\varphi$ is a H\"older continuous complex-valued function with exponent $0<\alpha\leq 1$), we have---by the Sokhotski-Plemelj theorem~\cite{kress2012linear,henriciVol3}---that the analytic function $\mathcal C\varphi$ in $\C\setminus\Gamma$ can be uniformly H\"older continuously extended from $\Omega$ to $\overline\Omega$ and from $\Omega'$ to $\overline{\Omega'}$ with limiting values \begin{equation}\label{eq:sing_int} \lim_{h\to 0}(\mathcal C\varphi)(z\pm h\nu(z))=\frac{1}{2}(H\varphi)(z) \mp \frac{1}{2}\varphi(z),\quad z\in\Gamma, \end{equation} where $\nu$ denotes the exterior unit normal to the contour $\Gamma$ and where $H\varphi$ is the principal-value integral \begin{equation}\label{eq:hilbert} (H\varphi)(z) :=\frac{1}{\pi i}\,{\rm p.v.}\!\!\int_\Gamma\frac{\varphi(\zeta)}{\zeta-z}\de \zeta,\quad z\in\Gamma. \end{equation} This paper introduces a unified approach for regularizing the contour integral expressions in~\eqref{eq:int_op} and~\eqref{eq:hilbert}, together with their corresponding derivatives, that enables their accurate numerical evaluation through direct use of elementary quadrature rules. \begin{figure}[ht] \centering \subfloat[Without regularization.]{\includegraphics[height=0.45\textwidth]{fig1a_AraucariaNoReg.pdf}\label{fig_ANR}}\qquad \subfloat[With regularization.]{\includegraphics[height=0.45\textwidth]{fig1b_AraucariaReg.pdf}\label{fig_AR}} \caption{Logarithm in base ten of the absolute error in the numerical evaluation of the Cauchy operator $(\mathcal C\varphi)(z)$, defined in~\eqref{eq:int_op}, within an Araucaria tree-shaped contour parametrized by means of cubic splines. (a) Direct evaluation produced without using regularization, and (b) using the proposed density interpolation method of order three, everywhere inside the curve. 
The input density function $\varphi(z)=\sin z$ and the same Chebyshev quadrature nodes were used in both examples.}\label{fig:AraucariaExample} \end{figure} Since the pole singularity in the Cauchy integral~\eqref{eq:int_op} does not lie on~$\Gamma$, the contour integrand is as smooth as the input function~$\varphi$ and the contour~$\Gamma$. The numerical evaluation of~\eqref{eq:int_op} at any $z\in\C\setminus\Gamma$ could be accomplished, in principle, by employing elementary quadrature rules. The trapezoidal rule, for example, yields exponential convergence as the number of quadrature nodes increases whenever both~$\varphi$ and~$\Gamma$ are analytic~\cite{lyness1967numerical,fornaro1973numerical}. Issues arise, however, when~$z$ approaches the contour, resulting in a nearly singular integrand (the term used for functions that, although smooth, develop large derivatives at certain points owing to nearby singularities) at the points on $\Gamma$ closest to $z$. At the numerical level, this phenomenon translates into a severe deterioration in the accuracy of the approximate integral, as the given set of quadrature nodes becomes incapable of properly resolving the localized features of the integrand at the nearly singular points, which can lie anywhere on $\Gamma$ depending on the location of~$z$. This problem is amplified when derivatives of the Cauchy operator are considered. To illustrate the severity of the accuracy deterioration near the contour, Figure~\ref{fig:AraucariaExample} displays the absolute error in the direct evaluation of the Cauchy operator. A number of approaches have been developed to tackle this problem. Lyness~\&~Delves~\cite{lyness1967numerical} developed a method that combines the Cauchy integral formula with Taylor series expansions at suitably located points inside the region enclosed by the contour.
Instead of directly evaluating the integral~\eqref{eq:int_op}, this method produces the Taylor series expansion $(\mathcal C\varphi)(z)\approx \sum_{n=0}^Nc_n(z-z_0)^n$ at a point $z_0\in\Omega$ sufficiently far from~$\Gamma$, by recasting the coefficients, via the Cauchy formula, as $c_n=(2\pi i)^{-1}\int_{\Gamma}\varphi(\zeta)/(\zeta-z_0)^{n+1}\de\zeta$. Provided $z$ lies inside the region of convergence of the series, one obtains an approximation of $(\mathcal C\varphi)(z)$. A similar but more direct and efficient approach was later introduced by Ioakimidis, Papadakis~\&~Perdios~\cite{ioakimidis1991numerical}, which makes use of the Cauchy-Goursat theorem, instead of the Cauchy formula, and the trapezoidal rule. Some of the ideas put forth in these works have resurged in recent years in the form of highly sophisticated and accurate algorithms for close surface evaluation of two-dimensional Laplace, Helmholtz and Stokes layer-potential operators~\cite{helsing2008evaluation,Barnett:2014tq,Barnett:2015kg}, which suffer from the same nearly singular integrand problem. A different set of techniques has been developed for the numerical evaluation of challenging principal-value and finite-part contour integrals---such as~\eqref{eq:hilbert} and its tangential derivative considered below in Section~\ref{sec:DL_formulation}. There is ample literature on this particular subject (e.g., \cite{monegato1994numerical,paget1972algorithm,theocaris1979method,hunter1972some,chawla1974modified,chawla1974numerical,elliott1979gauss,Kolm:2001bt,monegato1982numerical}) that we do not attempt to review here.
We do, however, mention a few important contributions concerning the evaluation of finite-part (hypersingular) contour integrals arising in the context of boundary integral equations: the quadrature by expansion method~\cite{klockner2013quadrature}, which bears similarities to~\cite{ioakimidis1991numerical} and~\cite{Barnett:2014tq}, and the spectrally accurate techniques based on trigonometric interpolation~\cite{Kress:1995} and trigonometric differentiation~\cite{kress2014collocation}. This paper introduces a unified approach to the regularization of nearly-singular, principal-value, and finite-part complex contour integrals. The proposed methodology relies on the ideas of density interpolation methods for boundary integral operators~\cite{perez2018plane,perez2019harmonic,perez2019planewave,perez2020IEEE}. It combines a Taylor-like interpolation of the contour density $\varphi$ at the nearly-singular (resp. singular) point on the contour $\Gamma$, with the Cauchy integral formula~\eqref{eq:CF} (resp. Sokhotski-Plemelj formula~\eqref{eq:sing_int}) to recast nearly-singular (resp. principal-value and finite-part) contour integrals in terms of integrands whose smoothness is controlled by the density interpolation order (see Section~\ref{sec:nearly_singular}). The resulting contour integrals can thus be directly evaluated using elementary quadrature rules. Unlike the methods put forth in~\cite{lyness1967numerical,ioakimidis1991numerical}, which rely on~$\varphi$ being the restriction to $\Gamma$ of an analytic function in $\overline\Omega$, the density interpolation technique only requires smoothness of $\varphi$ at the interpolation points on the contour~$\Gamma$. The density interpolation approach also compares favorably with existing methods in that it requires fewer parameters to be tuned to achieve optimal performance.
It is worth mentioning that, like the recently introduced general-purpose density interpolation method~\cite{faria2020general}, the proposed technique is fully compatible with standard fast algorithms such as $\mathcal H$-matrices~\cite{hackbusch2015hierarchical} and the Fast Multipole Method~\cite{greengard1987fast}, for the efficient evaluation of the integral operators considered here. Two relevant applications of the proposed technique are discussed in detail in Section~\ref{sec:applications}, which concern (a) the regularization of the Laplace layer potentials and the associated boundary integral operators of Calder\'on calculus, and (b) the numerical evaluation of conformal mappings. Regarding the Laplace integral operators, this new density interpolation method amounts to a significant improvement over the related 2D~harmonic density interpolation method (HDI)~\cite{perez2019harmonic}. In particular, it allows for stand-alone kernel regularization: unlike the HDI, in which regularization is effected by evaluating pairs of integral operators, the present approach requires evaluating just one integral operator to achieve the same degree of integrand regularity. Regarding conformal mapping, on the other hand, it exploits the relation between conformal mappings and Laplace Dirichlet boundary value problems~\cite{symm1966integral,symm1967numerical} to derive Fredholm second-kind integral equations for the construction of both interior and exterior conformal mappings based on a Cauchy-operator integral representation of the double-layer potential. Upon regularization, the resulting conformal mappings can be accurately evaluated near and at the contour. High-order numerical methods for the practical implementation of the contour-integral regularization strategy are presented in Section~\ref{sec:numerics} for both smooth and piecewise-smooth curves, based on the trapezoidal and Fej\'er quadrature rules, respectively.
An efficient high-order FFT-based algorithm for the construction of the Cauchy-operator density interpolant is developed in Section~\ref{sec:det_coef}. Section~\ref{sec:examples}, finally, presents a variety of numerical examples designed to validate and demonstrate the effectiveness, applicability, and accuracy of the proposed methodology. \section{Regularization via density interpolation}\label{sec:nearly_singular} We begin by addressing the regularization of the Cauchy integral operator~\eqref{eq:int_op} and its derivatives at points $z\in\C\setminus\Gamma$ near the contour~$\Gamma$. Assuming~$\Gamma$ to be a Jordan curve of class~$C^1$, and provided the evaluation point $z$ in the Cauchy operator~\eqref{eq:int_op} lies close enough to~$\Gamma$, there is a unique~$z_0\in\Gamma$ such that \begin{equation}\label{eq:ns_point} z_0= \argminA_{\zeta\in\Gamma}|z-\zeta|.\end{equation} Our goal is then to regularize the integrand $g(\zeta)=\varphi(\zeta)/(\zeta-z)$ in~\eqref{eq:int_op} at and around the nearly-singular point $z_0\in\Gamma$, at which $g$ and its derivatives attain large values. To accomplish this, we introduce the Taylor-like complex polynomial \begin{equation}\label{eq:truc_taylor} P_N(z, z_0) := \sum_{j=0}^N \frac{c_j(z_0)}{j!}(z- z_0)^j,\quad z\in\C,\ z_0\in\Gamma, \end{equation} where the coefficients $\{c_j(z_0)\}_{j=0}^N$ are to be determined by imposing appropriate interpolation conditions at $z_0$.
Since $P_N(\cdot,z_0)$ is an entire function, the Cauchy integral formula~\eqref{eq:CF} together with the Cauchy--Goursat theorem~\cite{henriciVol1} yield the identity \begin{equation}\label{eq:correction_formula} \frac{1}{2\pi i}\int_{\Gamma}\frac{P_N(\zeta,z_0)}{\zeta-z}\de \zeta=\begin{cases}P_N(z, z_0),& z\in\Omega,\\ 0,&z\in\Omega'.\end{cases} \end{equation} Subtracting~\eqref{eq:correction_formula} from~\eqref{eq:int_op}, we obtain that the Cauchy operator~\eqref{eq:int_op} can be recast as \begin{equation}\label{eq:reg_formula} (\mathcal C\varphi)(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{\zeta-z}\de \zeta + \mathbf{ 1}_{\Omega}(z)P_N(z,z_0),\quad z\in\C\setminus\Gamma,\ z_0\in\Gamma, \end{equation} where $\mathbf{1}_{\Omega}$ denotes the indicator function of the domain $\Omega$. The main idea of the proposed methodology then lies in constructing $P_N(\cdot,z_0)$, or equivalently, finding coefficients $\{c_j(z_0)\}_{j=0}^{N}$, so that the numerator of the integrand in~\eqref{eq:reg_formula} vanishes to high order precisely at $z_0$. In detail, we want \begin{equation}\label{eq:proto_interp_cond} |\varphi(\zeta)-P_N(\zeta,z_0)|= o\left(|\zeta-z_0|^{N}\right)\quad\mbox{as}\quad \Gamma\ni\zeta\to z_0\in\Gamma, \end{equation} so that the regularized integrand in~\eqref{eq:reg_formula} satisfies $$ \left|\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{\zeta-z}\right|= o\left(\frac{|\zeta-z_0|^{{N}}}{\delta}\right)\quad\mbox{as}\quad \Gamma\ni\zeta\to z_0\in\Gamma, $$ where $\delta = |z-z_0|$. This implies that, if both the curve $\Gamma$ and the density $\varphi$ are sufficiently smooth at $z_0$, not only does the regularized integrand in~\eqref{eq:reg_formula} vanish at $z_0$, but so do all its derivatives up to order $N\geq 1$, regardless of the distance $\delta$ from $z$ to $\Gamma$.
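A numerical sketch of this regularization (assuming NumPy; the unit circle, $\varphi(z)=\sin z$, and $N=3$ are illustrative choices; since $\sin$ is entire, the coefficients $c_j(z_0)$ reduce to its complex derivatives at $z_0$):

```python
import numpy as np
from math import factorial

M, N = 64, 3
t = 2 * np.pi * np.arange(M) / M
zeta = np.exp(1j * t)                    # trapezoidal nodes on Gamma
dzeta = 1j * zeta * (2 * np.pi / M)

z, z0 = 0.99, 1.0                        # z near Gamma; z0 = closest point on Gamma
c = [np.sin(z0), np.cos(z0), -np.sin(z0), -np.cos(z0)]   # c_j = sin^(j)(z0)

def P(w):                                # Taylor-like interpolant P_N(w, z0)
    return sum(c[j] * (w - z0) ** j / factorial(j) for j in range(N + 1))

raw = np.sum(np.sin(zeta) / (zeta - z) * dzeta) / (2j * np.pi)
# Regularized rule: subtract P_N inside the integral, add P_N(z) back (z in Omega)
reg = np.sum((np.sin(zeta) - P(zeta)) / (zeta - z) * dzeta) / (2j * np.pi) + P(z)

assert abs(raw - np.sin(z)) > 1e-2       # direct rule fails this close to Gamma
assert abs(reg - np.sin(z)) < 1e-8       # regularized rule restores accuracy
```

The same quadrature nodes are used in both evaluations; only the subtraction of the interpolant changes.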
Assuming enough local regularity of the curve and the density at $z_0\in\Gamma$, the desired property~\eqref{eq:proto_interp_cond} can be recast in a more amenable form. Indeed, let $\gamma:[0,2\pi)\to\Gamma$ be a counterclockwise $2\pi$-periodic parametrization of $\Gamma$, and let $\phi(\tau):=\varphi(\gamma(\tau))$ and $p_N(\tau,t_0):=P_N(\gamma(\tau),\gamma(t_0))$ for all $\tau\in [0,2\pi)$, with $z_0=\gamma(t_0)$ ($t_0\in [0,2\pi)$). Then, it can be directly shown---via Taylor series expansions---that~\eqref{eq:proto_interp_cond} is attained provided the interpolation conditions \begin{equation}\label{eq:inter_cond} \lim_{\tau\to t_0}\frac{\p^m}{\p \tau^m}\left[\phi(\tau)- p_N(\tau,t_0)\right] = 0,\quad m=0,\ldots,N, \end{equation} are satisfied, which require both $\gamma$ and $\phi$ to be $N$~times differentiable at $\tau=t_0$. As it turns out, the interpolation conditions~\eqref{eq:inter_cond} suffice to uniquely determine the coefficients $\{c_j(z_0)\}_{j=0}^{N}$. Indeed, differentiating $p_N(\cdot,t_0)$ at $t_0$, we obtain---by repeated use of the chain rule---the Fa\'a di Bruno formula \begin{equation}\label{eq:chain_rule} \left.\frac{\partial^{m}}{\partial \tau^{m}} p_N(\tau, t_0)\right|_{\tau=t_0}=\sum_{j=1}^{m} c_{j}(z_0) \mathbb{B}_{m, j}\left(\gamma^{\prime}(t_0), \gamma^{\prime \prime}(t_0), \ldots, \gamma^{(m-j+1)}(t_0)\right),\quad m=1,\ldots,N, \end{equation} where $\mathbb{B}_{m, j}$ are the incomplete Bell polynomials~\cite{aldrovandi2001special}. From the interpolation condition~\eqref{eq:inter_cond} for $m=0$ we readily get that $c_0(z_0) = \phi(t_0)(=\varphi(z_0))$. 
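These interpolation conditions can be solved by forward substitution, since the Fa\'a di Bruno relation~\eqref{eq:chain_rule} is triangular in the $c_j$. A dependency-free sketch (unit circle, $\varphi=\sin$, and $N=3$ are illustrative; since $\sin$ is entire, the recovered $c_j$ must equal $\sin^{(j)}(z_0)$):

```python
import cmath
from math import comb

def bell(m, j, x):
    """Incomplete Bell polynomial B_{m,j}(x[0], ..., x[m-j]) by recurrence."""
    if m == 0 and j == 0:
        return 1.0
    if m == 0 or j == 0:
        return 0.0
    return sum(comb(m - 1, i - 1) * x[i - 1] * bell(m - i, j - 1, x)
               for i in range(1, m - j + 2))

N, t0 = 3, 0.7
z0 = cmath.exp(1j * t0)                       # z0 = gamma(t0), gamma(t) = e^{it}
g = [1j * z0, -z0, -1j * z0]                  # gamma^(m)(t0) = i^m e^{i t0}
# phi^(m)(t0) for phi(t) = sin(e^{it}), worked out by the chain rule:
b = [cmath.cos(z0) * 1j * z0,
     cmath.sin(z0) * z0**2 - cmath.cos(z0) * z0,
     1j * z0**3 * cmath.cos(z0) + 3j * z0**2 * cmath.sin(z0) - 1j * z0 * cmath.cos(z0)]

c = []                                        # forward substitution on A c = b
for m in range(1, N + 1):
    rhs = b[m - 1] - sum(c[j - 1] * bell(m, j, g) for j in range(1, m))
    c.append(rhs / bell(m, m, g))             # B_{m,m} = (gamma'(t0))^m != 0

assert abs(c[0] - cmath.cos(z0)) < 1e-12      # c_1 = sin'(z0)
assert abs(c[1] + cmath.sin(z0)) < 1e-12      # c_2 = sin''(z0)
assert abs(c[2] + cmath.cos(z0)) < 1e-12      # c_3 = sin'''(z0)
```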
Making use of~\eqref{eq:chain_rule}, on the other hand, we get from~\eqref{eq:inter_cond} that the~$N$~remaining coefficients are given by the solution of the linear system \begin{equation}\label{eq:linear_system} A(t_0) \mathbf{c}(z_0)=\mathbf{b}(t_0), \end{equation} where $\mathbf{c}(z_0)=[c_1(z_0),\ldots,c_N(z_0)]^T\in\C^{N}$, $\mathbf{b}(t_0) =[\phi^{(1)}(t_0),\ldots, \phi^{(N)}(t_0)]^T\in\C^{N}$, and where $A(t_0)\in\C^{N\times N}$ is the lower triangular matrix with entries $$a_{m, j}(t):=\begin{cases}0, & 1\leq m<j\leq N, \\ \mathbb{B}_{m, j}\left(\gamma^{\prime}(t), \ldots, \gamma^{(m-j+1)}(t)\right), & 1\leq j\leq m \leq N.\end{cases}$$ The existence and uniqueness of the coefficients $\{c_j(z_0)\}_{j=0}^{N}$ thus follows from the invertibility of~$A(t_0)$, which is a direct consequence of the fact that~$A(t_0)$ is a lower triangular matrix whose diagonal entries satisfy $a_{m,m}(t_0) =\mathbb B_{m,m}(\gamma'(t_0))=(\gamma'(t_0))^m\neq0$ ($\gamma$ being a regular parametrization of $\Gamma$). It is worth mentioning here that in Section~\ref{sec:det_coef} we show that the coefficients $\{c_j(z_0)\}_{j=0}^{N}$ can be determined by an efficient recursive procedure, so that the (somewhat laborious) construction of the matrix $A(t_0)$ can be completely avoided in practice. The proposed approach can also be used to regularize the derivatives of the Cauchy operator~\eqref{eq:int_op}, which are given by: \begin{equation}\label{eq:cauchy_der} (\mathcal C\varphi)^{(n)}(z)=\frac{n!}{2\pi i}\int_\Gamma\frac{\varphi(\zeta)}{(\zeta-z)^{n+1}}\de \zeta,\quad z\in\C\setminus\Gamma,\ n\geq 1.
\end{equation} In fact, using the Cauchy integral representation of the derivatives of the (analytic) density interpolant~\eqref{eq:truc_taylor}, we obtain \begin{equation}\label{eq:correction_formula_v2} \frac{n!}{2\pi i}\int_{\Gamma}\frac{P_N(\zeta,z_0)}{(\zeta-z)^{n+1}}\de \zeta =\begin{cases}\displaystyle\frac{\p^n}{\p z^n}P_N(z, z_0) =\sum_{j=0}^{N-n} \frac{c_{j+n}(z_0)}{j!}(z- z_0)^j,& z\in\Omega,\\ 0,&z\in\Omega', \end{cases} \end{equation} for $N\geq n$. Subtracting~\eqref{eq:correction_formula_v2} from~\eqref{eq:cauchy_der} we arrive at the regularized expression \begin{equation}\label{eq:reg_formula_der} (\mathcal C\varphi)^{(n)}(z)=\frac{n!}{2\pi i}\int_{\Gamma}\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{(\zeta-z)^{n+1}}\de \zeta + \mathbf{ 1}_{\Omega}(z)\frac{\p^n}{\p z^n}P_N(z,z_0),\quad z\in\C\setminus\Gamma, \end{equation} for the $n$th-order derivative of the Cauchy operator, with the integrand satisfying \begin{equation} \left|\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{(\zeta-z)^{n+1}}\right|= o\left(\frac{|\zeta-z_0|^{N}}{\delta^{n+1}}\right)\quad\mbox{as}\quad \Gamma\ni\zeta\to z_0\in\Gamma. \label{eq:to_please_the_reviewer} \end{equation} Finally, we apply the density interpolation technique to regularize the Cauchy principal-value integral~\eqref{eq:hilbert}, for which we assume here that both $\Gamma$ and $\varphi$ are of class $C^N$, $N\geq 2$. Applying the Sokhotski-Plemelj formula~\eqref{eq:sing_int} to the analytic density interpolant~\eqref{eq:truc_taylor}, we obtain \begin{equation}\label{eq:cauchy_on_b} P_N(z,z_0) =\frac{1}{\pi i}\,{\rm p.v.}\!\!\int_{\Gamma}\frac{P_N(\zeta,z_0)}{\zeta-z}\de \zeta ,\quad z\in\Gamma, \end{equation} where the limit is taken from the inside of $\Omega$.
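Subtracting this identity from~\eqref{eq:hilbert} renders the principal value computable with the plain trapezoidal rule. A minimal sketch (assuming NumPy; the unit circle, $\varphi=\sin$, and $N=2$ are illustrative), using that $H\varphi=\varphi$ on $\Gamma$ for densities that extend analytically:

```python
import numpy as np
from math import factorial

M, N = 128, 2
t = 2 * np.pi * np.arange(M) / M
zeta = np.exp(1j * t)
dzeta = 1j * zeta * (2 * np.pi / M)

z = np.exp(1j * np.pi / M)               # point on Gamma, halfway between nodes
c = [np.sin(z), np.cos(z), -np.sin(z)]   # c_j = sin^(j)(z), since sin is entire
P = sum(c[j] * (zeta - z) ** j / factorial(j) for j in range(N + 1))

# (H phi)(z) = (1/(pi*i)) * integral of (phi - P_N)/(zeta - z)  +  phi(z):
H = np.sum((np.sin(zeta) - P) / (zeta - z) * dzeta) / (1j * np.pi) + np.sin(z)
assert abs(H - np.sin(z)) < 1e-10        # H(sin) = sin on Gamma
```

The subtracted integrand vanishes to high order at $\zeta=z$, so no special treatment of the singular point is needed.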
Setting $z_0=z$ in the formula above and subtracting it from~\eqref{eq:hilbert}, we arrive at \begin{equation}\label{eq:reg_hilbert} (H\varphi)(z)=\frac{1}{\pi i}{\rm p.v.}\!\!\int_{\Gamma}\frac{\varphi(\zeta)-P_N(\zeta,z)}{\zeta-z}\de \zeta + \varphi(z),\quad z \in\Gamma, \end{equation} where we have used that $P_N(z,z) = \varphi(z)$ by construction. The integrand in~\eqref{eq:reg_hilbert} is smooth (in parametric form it is $(N-1)$ times differentiable at $\tau=t$, where $\zeta=\gamma(\tau)$ and $z=\gamma(t)$) and it satisfies $$ \left|\frac{\varphi(\zeta)-P_N(\zeta,z)}{\zeta-z}\right|= o\left(|\zeta-z|^{N-1}\right)\quad\mbox{as}\quad \Gamma\ni\zeta\to z\in\Gamma. $$ It is easy to see that the zeroth-order interpolant alone (i.e., $P_0(\zeta,z) = \varphi(z)$) already suffices to produce a regular integrand in~\eqref{eq:reg_hilbert}. The real utility of the density interpolation method, however, lies in that more singular (e.g., finite-part) integrals associated with tangential derivatives of the principal-value integral~\eqref{eq:hilbert} can be regularized by means of the proposed methodology, as is the case of the Laplace hypersingular operator addressed in Section~\ref{sec:DL_formulation}. \begin{remark} Note that if $\varphi$ is the restriction to $\Gamma$ of a function $f$ that is (complex) analytic in a region containing $\Gamma$, the expansion coefficients in~\eqref{eq:truc_taylor} are given by $c_j(z_0)=f^{(j)}(z_0)$, $j=0,\ldots,N$, and hence $P_N$ is just the $N$th-order Taylor series expansion of $f$ at $z_0\in\Gamma$. This is so because analyticity implies that the tangential derivatives of $\varphi$ along the curve $\Gamma$ coincide with the complex derivatives of $f$. This is not necessarily true for less regular density functions~$\varphi$. \end{remark} \section{Applications}\label{sec:applications} This section describes a variety of applications of the proposed regularization technique.
\subsection{Laplace boundary integral equations}\label{sec:DL_formulation} The first application that we present concerns the solution of the Laplace equation by means of boundary integral equation methods. We start off by defining the double- and single-layer potentials in two spatial dimensions, which are given by \begin{subequations}\begin{eqnarray} (\mathcal D\varphi)(\ner) &:=&\frac{1}{2\pi}\int_{\Gamma}\frac{\nu(\ney)\cdot(\ner-\ney)}{|\ner-\ney|^2}\varphi(\ney)\de s(\ney)\andtext\label{eq:DL_pot}\\ (\mathcal S\varphi)(\ner) &:=& -\frac{1}{2\pi}\int_{\Gamma} \log|\ner-\ney|\varphi(\ney)\de s(\ney),\quad \ner\in \R^2\setminus \Gamma,\label{eq:SL_pot} \end{eqnarray}\label{eq:potentials}\end{subequations} respectively, where $\Omega\subset\R^2$ denotes a bounded and simply connected domain with smooth boundary $\Gamma$ of class $C^2$, and where the density function $\varphi:\Gamma\to\R$ is continuous. These potentials define $C^2(\R^2\setminus\Gamma)$ functions that satisfy the Laplace equation in $\R^2\setminus\Gamma$. \begin{remark} Before continuing, we warn the reader that, in the remainder of this section, the same symbol $\Omega$ is used to refer to both the (real) domain $\Omega\subset\R^2$ and its corresponding complex counterpart. Likewise, the same holds for the boundary $\Gamma$, its unit normal $\nu$, the density function $\varphi$, and the curve parametrization $\gamma$.
We also mention that, in what follows, we utilize the symbol $\ner$ to refer to points belonging to $\R^2\setminus\Gamma$, and the symbols~$\nex$ and~$\ney$ to denote points lying on the curve $\Gamma$.\end{remark} As is well known~\cite{kress2012linear}, the limit values of the double-layer potential~\eqref{eq:DL_pot} on $\Gamma$ give rise to the double-layer operator $K:C(\Gamma)\to C^{0,\alpha}(\Gamma)$: \begin{equation}\label{eq:DL_op}\begin{split} (K\varphi)(\nex) :=&\frac{1}{2\pi}\int_{\Gamma}\frac{\nu(\ney)\cdot(\nex-\ney)}{|\nex-\ney|^2}\varphi(\ney)\de s(\ney),\quad \nex\in\Gamma,\end{split} \end{equation} which is given in terms of a smooth (at least continuous) integral kernel~\cite{perez2019harmonic}. In view of the smoothness of the integrand in~\eqref{eq:DL_op}, numerical evaluation of the double-layer operator requires neither kernel regularization nor specialized quadrature rules. In contrast, the normal derivative of the double-layer potential on $\Gamma$ leads to the so-called hypersingular operator $T:C^{1,\alpha}(\Gamma)\to C^{0,\alpha}(\Gamma)$: \begin{equation}\label{eq:hypersingular}\begin{split} (T\varphi)(\nex) =& \lim_{\epsilon\to 0}\nu(\nex)\cdot\nabla (\mathcal D\varphi)(\nex+\epsilon\nu(\nex))\\ =&\frac{1}{2\pi}\int_{\Gamma}\left\{\frac{\nu(\ney)\cdot\nu(\nex)}{|\nex-\ney|^2}-2\frac{\nu(\ney)\cdot(\nex-\ney)(\nex-\ney)\cdot\nu(\nex)}{|\nex-\ney|^4}\right\}\varphi(\ney)\de s(\ney),\quad \nex\in\Gamma, \end{split}\end{equation} where the boundary integral in~\eqref{eq:hypersingular} has to be interpreted as a Hadamard finite-part integral. In view of the $O(|\nex-\ney|^{-2})$ asymptotic behavior of the kernel as $\Gamma\ni\ney\to\nex\in\Gamma$, numerical evaluation of the hypersingular operator~\eqref{eq:hypersingular} entails regularization via, e.g., integration by parts~\cite[Corollary 7.33]{kress2012linear}.
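The smoothness of the double-layer kernel in~\eqref{eq:DL_op} can be checked numerically: a standard computation shows its diagonal limit is $-\kappa(\nex)/2$, with $\kappa$ the curvature, so the plain trapezoidal rule applies without modification. A hedged Nystr\"om sketch (assuming NumPy; the ellipse, the node count, and the test density $\varphi\equiv1$ are illustrative), checked against the value $(K1)(\nex)=-1/2$ implied by Gauss' integral:

```python
import numpy as np

# Nystrom (trapezoidal) evaluation of the double-layer operator K on an
# ellipse. The kernel nu(y).(x - y)/|x - y|^2 is smooth up to the
# diagonal, where its limit is -kappa(x)/2.
a, b, M = 1.0, 0.5, 400
s = 2 * np.pi * np.arange(M) / M
y = np.stack([a * np.cos(s), b * np.sin(s)], axis=1)      # gamma(s) on Gamma
dgam = np.stack([-a * np.sin(s), b * np.cos(s)], axis=1)  # gamma'(s)
jac = np.linalg.norm(dgam, axis=1)                        # |gamma'(s)|
nu = np.stack([b * np.cos(s), a * np.sin(s)], axis=1) / jac[:, None]  # exterior normal
kappa = a * b / jac**3                                    # curvature of the ellipse

i0 = 0                                  # target point x = gamma(s_0)
d = y[i0] - y                           # x - y
r2 = np.sum(d * d, axis=1)
ker = np.empty(M)
off = r2 > 0
ker[off] = np.sum(nu * d, axis=1)[off] / r2[off]
ker[i0] = -kappa[i0] / 2                # smooth diagonal limit

# (K 1)(x): for the constant density, Gauss' integral gives exactly -1/2.
K1 = np.sum(ker * jac) / M
assert abs(K1 + 0.5) < 1e-10
```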
Similarly, the limit values of the single-layer potential~\eqref{eq:SL_pot} give rise to the single-layer operator $S:C(\Gamma)\to C^{0,\alpha}(\Gamma)$: \begin{equation}\label{eq:SL_op} (S\varphi)(\nex):= -\frac{1}{2\pi}\int_{\Gamma} \log|\nex-\ney|\varphi(\ney)\de s(\ney),\quad \nex\in\Gamma, \end{equation} which bears a weakly-singular kernel. Although integrable, the presence of the logarithmic singularity in~\eqref{eq:SL_op} makes the integrand unsuitable for direct application of standard quadrature rules. The normal derivative of the single-layer potential, on the other hand, yields the adjoint double-layer operator $K^\top:C(\Gamma)\to C^{0,\alpha}(\Gamma)$: \begin{equation}\label{eq:ADL_op} (K^\top\varphi)(\nex) :=-\frac{1}{2\pi}\int_{\Gamma}\frac{\nu(\nex)\cdot(\nex-\ney)}{|\nex-\ney|^2}\varphi(\ney)\de s(\ney),\quad \nex\in\Gamma, \end{equation} which, like the double-layer operator, exhibits a smooth (at least continuous) kernel~\cite{perez2019harmonic}. The next two theorems show that the proposed methodology can be applied to recast nearly-singular double- and single-layer potentials, as well as the hypersingular and the single-layer operators, in terms of smooth integrands of prescribed regularity. \begin{theorem}\label{th:double_layer}Let $\Omega\subset\R^2$ be a bounded simply connected domain with boundary $\Gamma=\{\gamma(t): t\in[0,2\pi)\}$ of class~$C^2$, and let $\varphi\in C(\Gamma)$ be a real-valued function. Assume that both $\gamma$ and~$\varphi\circ\gamma$ are $N$~times continuously differentiable at $t_0\in[0,2\pi)$.
Then, the double-layer potential~\eqref{eq:DL_pot} can be expressed as: \begin{equation} (\mathcal D\varphi)(\ner)=-\real\left\{\frac{1}{2\pi i}\int_{\Gamma}\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{\zeta-z}\de \zeta + \mathbf {1}_{\Omega}(z)P_N(z,z_0)\right\},\label{eq:reg_dl} \end{equation} for all $\ner=(\real z,\mathrm{Im}\, z)\in \R^2\setminus\Gamma$, where $P_N(\cdot,z_0)$ is the $N$th-order $\varphi$-interpolant at $z_0=\gamma(t_0)\in\Gamma$. Furthermore, assuming that $\gamma$ and $\varphi\circ\gamma$ are $C^{N}([0,2\pi])$~functions, it holds that the hypersingular operator~\eqref{eq:hypersingular} can be expressed as: \begin{equation}\label{eq:hyper_complex} (T\varphi)(\nex) =-\real\left\{\frac{\nu(z)}{2\pi i}\int_\Gamma\frac{\varphi(\zeta)-P_N(\zeta,z)}{(\zeta-z)^{2}}\de \zeta\right\}, \end{equation} for all $ \nex=(\real z,\mathrm{Im}\, z)\in\Gamma$. \end{theorem} \begin{proof} We begin the proof by expressing the double-layer potential as~\cite{kress2012linear} \begin{equation}\label{eq:corres_1} (\mathcal D\varphi)(\ner)=-\real\{(\mathcal C\varphi)(z)\} ,\quad \ner = (\real z,\mathrm{Im}\, z)\in\R^2\setminus\Gamma, \end{equation} where the correspondence $\varphi(\ney) =\varphi(\zeta)$, $\ney=(\real \zeta,\mathrm{Im}\, \zeta)\in\Gamma$, between real and complex variables, has been used. Since $\gamma$ and $\varphi\circ \gamma$ are $N$~times differentiable at $t_0$, we have that $\varphi$ admits a density interpolant~$P_N$ at $z_0\in\Gamma$. Formula~\eqref{eq:reg_dl} is hence directly obtained from the regularized expression for the Cauchy operator~\eqref{eq:reg_formula}.
On the other hand, since the Cauchy operator~\eqref{eq:int_op} defines an analytic function in~$\C\setminus\Gamma$~\cite{kress2012linear}, it follows from the Cauchy-Riemann equations that the gradient of the double-layer potential can be expressed as \begin{equation}\label{eq:corres_2} \nabla (\mathcal D\varphi)(\ner) = \left(-\real\{{(\mathcal C\varphi)'(z)}\},\mathrm{Im}\,\{{(\mathcal C\varphi)'(z)}\}\right),\quad\ner\in\R^2\setminus\Gamma. \end{equation} Therefore, the hypersingular operator~\eqref{eq:hypersingular} can be expressed as \begin{equation*}\label{eq:hyper_complex2} ( T\varphi)(\nex) =-\lim_{\epsilon\to 0}\real\{\nu(z)(\mathcal C\varphi)'(z+\epsilon\nu(z))\}=-\real\left\{\frac{\nu(t)}{2\pi i}\,{\rm f.p.}\!\!\int_0^{2\pi}\frac{\phi(\tau)}{(\gamma(\tau)-\gamma(t))^{2}}\gamma'(\tau)\de \tau\right\}, \end{equation*} for all $\nex=(\real z,\mathrm{Im}\, z)\in\Gamma$, where $z = \gamma(t)$ and $\phi = \varphi\circ \gamma$. The integral above can then be regularized by means of the identity \begin{equation}\label{eq:sub} \frac{1}{\gamma'(t)}\frac{\p}{\p t}{p_N(t, t_0)} =\frac{1}{\pi i}\,{\rm f.p.}\!\!\int_0^{2\pi}\frac{p_N(\tau,t_0)}{(\gamma(\tau)-\gamma(t))^2}\gamma'(\tau)\de \tau ,\quad t\in[0,2\pi), \end{equation} which is obtained by (carefully) differentiating $(H\varphi)(\gamma(t))$ in~\eqref{eq:cauchy_on_b} with respect to $t$.
Taking $t_0=t$ in~\eqref{eq:sub} and subtracting the resulting identity from the finite-part representation above, we finally arrive at \begin{equation}\label{eq:hyper_complex_2} (T\varphi)(\nex) =-\real\left\{\frac{\nu(t)}{2\pi i}\int_0^{2\pi}\frac{\phi(\tau)-p_N(\tau,t)}{(\gamma(\tau)-\gamma(t))^{2}}\gamma'(\tau)\de \tau\right\},\quad \nex=(\real\gamma(t),\mathrm{Im}\,\gamma(t))\in\Gamma, \end{equation} where we have used: (a) that $\lim_{\tau\to t}\p p_N(\tau,t)/\p \tau=\phi'(t) \in\R$, which follows from the interpolation conditions~\eqref{eq:inter_cond} and the fact that $\phi$ is a real-valued function, and (b)~that $\real\{\nu(t)/\gamma'(t)\} = 0$ for all $t\in[0,2\pi)$. This concludes the proof. \end{proof} We now consider the single-layer potential and the single-layer operator. \begin{theorem}\label{th:single_layer} Let $\Omega\subset\R^2$ be a bounded simply connected domain with boundary $\Gamma=\{\gamma(t): t\in[0,2\pi)\}$ of class~$C^2$, and let $\varphi\in C(\Gamma)$ be a real-valued function. Let $\psi \in C(\Gamma)$ be defined in complex parametric form as $\psi\circ \gamma=(\varphi\circ\gamma)\frac{|\gamma'|}{\gamma'}$, and assume that both~$\gamma$ and~$\psi\circ\gamma$ are $N$~times continuously differentiable at $t_0\in[0,2\pi)$. Let also $\log(\cdot-z)$ be defined so that its branch-cut path (see Figure~\ref{fig:branchCut}) either: \begin{itemize} \item starts at $z\in\Omega$, exits $\Omega$ at $z_0\in\Gamma$, and extends to infinity; \item starts at $z\in\Omega'$ and extends to infinity without intersecting~$\Gamma$; or \item starts at $z\in\Gamma$ and extends to infinity intersecting~$\Gamma$ only at $z$.
\end{itemize}Then, the single-layer potential~\eqref{eq:SL_pot} can be recast~as: \begin{equation}\label{eq:SL_pot_reg} (\mathcal S\varphi)(\ner) = \mathrm{Im}\,\left\{\frac{1}{2\pi i}\int_\Gamma \log(\zeta-z)\left\{\psi(\zeta)-Q_N(\zeta,z_0)\right\}\de \zeta- \mathbf{ 1}_{\Omega}(z)\int_{z_0}^z Q_N(\eta,z_0)\de\eta\right\}, \end{equation} for all $\ner=(\real z,\mathrm{Im}\, z)\in \R^2\setminus\Gamma$, where $Q_N(\cdot,z_0)$ is the $N$th-order $\psi$-interpolant at $z_0=\gamma(t_0)\in\Gamma$. Moreover, assuming that $\gamma$ and $\psi\circ\gamma$ are $C^{N}([0,2\pi])$ functions, it holds that the single-layer operator~\eqref{eq:SL_op} can be expressed as: \begin{equation}\label{eq:SL_op_reg} (S\varphi)(\nex)=\mathrm{Im}\,\left\{\frac{1}{2\pi i}\int_{\Gamma}\log(\zeta-z)\left\{\psi(\zeta)-Q_N(\zeta,z)\right\}\de \zeta \right\}, \end{equation} for all $ \nex=(\real z,\mathrm{Im}\, z)\in\Gamma$. \end{theorem} \begin{figure}[ht] \centering \subfloat[$z\in\Omega.$]{\includegraphics[scale=1]{fig2a_int_branch.pdf}\label{fig_in}}\qquad \subfloat[$z\in\Omega'=\C\setminus\overline\Omega.$]{\includegraphics[scale=1]{fig2b_int_branch.pdf}\label{fig_out}}\qquad \subfloat[$z\in\Gamma.$]{\includegraphics[scale=1]{fig2c_int_branch.pdf}\label{fig_on}} \caption{Definition of the branch cut (dashed line) of $\log(\cdot-z)$ in Theorem~\ref{th:single_layer}, for the three relevant locations of the branch point~$z$.}\label{fig:branchCut} \end{figure} \begin{proof} First, we note that the single-layer potential~\eqref{eq:SL_pot} can be expressed as \begin{equation}\label{eq:sl_0} (\mathcal S\varphi)(\ner) = \mathrm{Im}\,\left\{\frac{1}{2\pi i}\int_\Gamma \log(\zeta-z)\psi(\zeta)\de \zeta\right\},\quad \ner = (\real z,\mathrm{Im}\, z)\in\R^2\setminus\Gamma, \end{equation} where $\psi\circ \gamma=(\varphi\circ\gamma)\frac{|\gamma'|}{\gamma'}$, independent of the selection of the branch of the logarithm. 
From the fact that $\gamma$ and $\psi\circ\gamma$ are $N$~times differentiable at $t_0$, it follows that $\psi$ admits a density interpolant $Q_N(\cdot,z_0)$ at $z_0=\gamma(t_0)\in\Gamma$, which is an entire function. For $z\in\Omega$ it holds, by the Cauchy integral formula~\eqref{eq:CF}, that: \begin{equation}\label{eq:sl_1}\begin{split} \int_{z_0}^z Q_N(\eta,z_0)\de \eta=&\frac{1}{2\pi i}\int_\Gamma\left(\int_{z_0}^z\frac{1}{\zeta-\eta}\de\eta\right)Q_N(\zeta,z_0)\de\zeta=\\ &-\frac{1}{2\pi i}\int_\Gamma\left\{\log(\zeta-z)-\log(\zeta-z_0)\right\}Q_N(\zeta,z_0)\de\zeta, \end{split}\end{equation} where the branch of $\log(\cdot-z_0)$ is selected so that the resulting branch cut of the function inside the curly brackets is a simple open curve contained in $\Omega\cup\{z_0\}$ connecting $z\in\Omega$ and $z_0\in\Gamma$. Similarly, for $z\in\Omega'$ we have, by the Cauchy-Goursat theorem, that \begin{equation}\label{eq:sl_2}\begin{split} 0=\frac{1}{2\pi i}\int_\Gamma\left(\int_{z_0}^z\frac{1}{\zeta-\eta}\de\eta\right)Q_N(\zeta,z_0)\de\zeta=-\frac{1}{2\pi i}\int_\Gamma\left\{\log(\zeta-z)-\log(\zeta-z_0)\right\}Q_N(\zeta,z_0)\de\zeta, \end{split}\end{equation} where the branch of $\log(\cdot-z_0)$ is chosen such that the branch cut of $\log(\cdot-z)-\log(\cdot-z_0)$ is a simple curve contained in $\Omega'\cup\{z_0\}$ connecting $z\in\Omega'$ and $z_0\in\Gamma$. By construction then, in both cases the branch cut of $\log(\cdot-z_0)$ intersects $\Gamma$ only at $z_0\in\Gamma$. In order to simplify the formulae in~\eqref{eq:sl_1} and~\eqref{eq:sl_2} we consider the improper integrals: \begin{equation*} I_j=\int_\Gamma \log(\zeta-z_0)(\zeta-z_0)^{j}\de \zeta\quad\mbox{for } \quad j=0,\ldots,N.
\end{equation*} Writing~$I_j$ in parametric form and using the $2\pi$-periodicity of $\gamma$, we obtain \begin{equation*} I_j=\int_{t_0}^{t_0+2\pi} \log(\gamma(t)-z_0)(\gamma(t)-z_0)^{j}\gamma'(t)\de t=\lim_{\epsilon\to 0+}\left.\frac{\log(\zeta-z_0)(\zeta-z_0)^{j+1}}{j+1}\right|_{\zeta=\gamma(t_0-\epsilon)}^{\zeta=\gamma(t_0+\epsilon)}=0, \end{equation*} for all $j=0,\ldots,N$. Therefore, since the $\psi$-interpolant takes the form $Q_N(z,z_0)=\sum_{j=0}^N\frac{b_j(z_0)}{j!}(z-z_0)^{j}$, with coefficients $b_j(z_0)$, $j=0,\ldots,N$, depending on $\psi\circ\gamma$ and its derivatives at $t_0$, we conclude that \begin{equation}\label{eq:sl_3} \frac{1}{2\pi i}\int_\Gamma\log(\zeta-z_0)Q_N(\zeta,z_0)\de\zeta =0, \end{equation} in both~\eqref{eq:sl_1} and~\eqref{eq:sl_2}. Therefore, combining~\eqref{eq:sl_0},~\eqref{eq:sl_1},~\eqref{eq:sl_2} and \eqref{eq:sl_3}, we arrive at \begin{equation}\label{eq:sl_reg_reg}\begin{split} (\mathcal S\varphi)(\ner) =&~\mathrm{Im}\,\left\{\frac{1}{2\pi i}\int_\Gamma \log(\zeta-z)\left\{\psi(\zeta)-Q_N(\zeta,z_0)\right\}\de\zeta-\mathbf{1}_{\Omega}(z) \int_{z_0}^zQ_N(\eta,z_0)\de\eta\right\} \end{split}\end{equation} for all $\ner = (\real z,\mathrm{Im}\, z)\in\R^2\setminus\Gamma.$ Finally, taking the limit of~\eqref{eq:sl_reg_reg} as $\C\setminus\Gamma\ni z\to z_0\in\Gamma$, we obtain $$ (S\varphi)(\nex)=\mathrm{Im}\,\left\{\frac{1}{2\pi i}\int_\Gamma \log(\zeta-z_0)\left\{\psi(\zeta)-Q_N(\zeta,z_0)\right\}\de \zeta\right\}, $$ for all $\nex=(\real z_0,\mathrm{Im}\, z_0)\in\Gamma$. This concludes the proof. \end{proof} \begin{remark} We here discuss the construction of the function $\log(\cdot-z)$ in the definition of the single-layer potential, which has to fulfill the contour-dependent conditions stated in Theorem~\ref{th:single_layer} (see Figure~\ref{fig:branchCut}). 
In the case of a contour $\Gamma$ enclosing a star-shaped domain $\Omega$ with respect to $z^\star\in\Omega$, the branch cut of the logarithm $\log(\cdot -z)$ can be simply selected as the path stemming from $z$ and extending along the straight line~$\{w=z+s(z-z^\star)\in\C, s\geq 0\}$. For more intricate contours, however, defining a suitable branch cut of $\log(\cdot-z)$ could be a painstaking process. To help define $\log(\cdot-z)$ in a systematic manner, it can be expressed as \begin{equation}\label{eq:log_del} \log(\cdot-z) =\operatorname{Log}\left(\frac{\cdot-z}{\cdot-z_0}\right)+\operatorname{Log}\left(\frac{\cdot-z_0}{\cdot-z_1}\right)+\cdots+\operatorname{Log}\left(\frac{\cdot-z_{Q-1}}{\cdot-z_{Q}}\right)+\log_{\tau}(\cdot-z_Q), \end{equation} where $\operatorname{Log}$ is the principal branch and where $\log_{\tau}(\cdot-z_Q)$ has its branch cut along the straight line $\{w=z_Q+s\tau\in\C, s\geq 0\}$ $(0\neq\tau\in\C)$. The resulting branch-cut path of~\eqref{eq:log_del} is a piecewise linear curve starting at $z$, passing through the control points ${z_q}$, $q=0,\ldots,Q$, and then extending to infinity in the direction determined by $\tau$. Making use of a triangulation of an annular region $B_R\setminus\overline\Omega$, where $B_R = \{z\in\C:|z-z^\star|<R\}$ is a disk containing~$\Omega$, the control points can be selected as the centroids of elements forming a ``chain'' of triangles, each sharing an edge with the next. The chain starts from the triangle containing $z$, if $z\in B_R\setminus\overline\Omega$, or from the triangle containing $z_0={\rm arg}\min_{\zeta\in\Gamma}|z-\zeta|$, if $z\in\overline\Omega$, and ends at a triangle intersecting the outer boundary $\p B_R$. The direction of the line extending from $z_Q$ to infinity can be selected as $\tau = z_Q-z^\star,$ where $z_Q$ is the centroid of the last triangle of the chain.
\end{remark} \begin{remark} It follows from equations~\eqref{eq:corres_1} and~\eqref{eq:sl_0} that the gradients of the double- and single-layer potentials can also be regularized by means of the proposed methodology. Indeed, from~\eqref{eq:corres_2} the gradient of the double-layer potential can be expressed in terms of the derivative of the Cauchy operator, which can in turn be recast as~\eqref{eq:reg_formula_der}. Similarly, for the gradient of the single-layer potential we have the identity $$ \frac{\de }{\de z}\left\{\frac{1}{2\pi i}\int_\Gamma \log(\zeta-z)\psi(\zeta)\de \zeta\right\} = -\frac{1}{2\pi i}\int_\Gamma \frac{\psi(\zeta)}{\zeta-z}\de \zeta = -(\mathcal C\psi)(z),\quad z\in\C\setminus\Gamma, $$ where, as in Theorem~\ref{th:single_layer}, $\psi\circ \gamma= (\varphi\circ\gamma) \frac{|\gamma'|}{\gamma'}$. Using the Cauchy-Riemann equations, this identity leads to $$ \nabla (\mathcal S\varphi)(\ner) = -\left(\mathrm{Im}\, (\mathcal C\psi)(z),\real (\mathcal C\psi)(z)\right),\quad\ner = (\real z,\mathrm{Im}\, z)\in\R^2\setminus\Gamma,$$ where the Cauchy operator can be regularized by means of~\eqref{eq:reg_formula}. \end{remark} It is thus clear from the results in this section that, provided $\varphi$ and $\Gamma$ are smooth enough, the density interpolation technique can be applied to regularize singular and nearly singular integrals associated with the evaluation of the Laplace double- and single-layer potentials, their gradients, and all four integral operators of the Calder\'on calculus. \subsection{Conformal mapping} Yet another important application of the proposed methodology concerns conformal mapping, for which we present a straightforward integral equation method based on the double-layer formulation of both interior and exterior Laplace Dirichlet problems.
Some related works on this vast subject include the single-layer formulation put forth by G.~T.~Symm in~\cite{symm1966integral,symm1967numerical} and the recent contribution~\cite{wala2018conformal} which, like the one considered here, relies on a double-layer formulation. (We refer the interested reader to~\cite[Sec.~16.7]{henriciVol3} and to the more recent handbook~\cite{kythe2019handbook} for surveys on second-kind integral equations for conformal mapping applications.) \paragraph{Interior regions.}We first consider the problem of mapping a simply connected domain $\Omega\subset\C$, with Jordan boundary $\p\Omega=\Gamma$ of class $C^2$ enclosing the origin, conformally onto the unit disk $D=\{z\in\C:|z|<1\}$. Let~$f_i:\overline\Omega\to\overline D$ denote the unique conformal mapping satisfying $f_i(0) = 0$ and $f'_i(0)>0$~\cite[Corollary 5.10c]{henriciVol3}. Consider then the function $g_i(z) = \log(f_i(z)/z)$, which is analytic in $\Omega$ and satisfies $\real g_i(z) = -\log |z|$ for $z\in\Gamma$. Clearly, $f_i(z) = z\e^{g_i(z)}$ satisfies $f_i(0)=0$. Now, writing $g_i(z) = u_i(\ner) + iv_i(\ner)+i\alpha$, $\ner = (\real z,\mathrm{Im}\, z)$, $\alpha\in\R$, it follows from the Cauchy-Riemann equations that~$u_i$ and~$v_i$ are harmonic conjugates of each other in $\Omega$. Therefore, in particular, $u_i\in C^2(\Omega)\cap C(\overline\Omega)$ is the unique solution~\cite[Theorem 6.23]{kress2012linear} of the following Dirichlet interior boundary value problem: $$ \Delta u_i=0\ \mbox{ in }\ \Omega,\quad u_i(\nex)= -\log|\nex|, \quad \nex\in\Gamma.
$$ Adopting the notation used in Section~\ref{sec:DL_formulation} and looking for a solution in the form of the double-layer potential: $$ u_i(\ner) = (\mathcal D\varphi_i)(\ner)=-\real\{(\mathcal C\varphi_i)(z)\},\quad \ner=(\real z,\mathrm{Im}\, z)\in\Omega, $$ we obtain that the unknown real-valued density $\varphi_i\in C(\Gamma)$ is the unique solution of the second-kind integral equation~\cite[Theorems 6.21 and 6.22]{kress2012linear} \begin{equation}\label{eq:int_IE} -\frac{\varphi_i(\nex)}{2} + (K\varphi_i)(\nex)=-\log|\nex|,\quad\nex\in\Gamma, \end{equation} where the operator $K:C(\Gamma)\to C(\Gamma)$ is the double-layer operator~\eqref{eq:DL_op}. Upon solving~\eqref{eq:int_IE} for~$\varphi_i$ and using that $$v_i(\ner)=-\mathrm{Im}\,\{(\mathcal C\varphi_i)(z)\}, \quad\ner\in\Omega,$$ is a harmonic conjugate of $u_i$, we conclude that the sought conformal mapping is given by \begin{equation}\label{eq:conf_map} f_i(z) =z\exp\left(-(\mathcal C\varphi_i)(z)-iv_i(\mathbf 0)\right),\quad z\in\Omega, \end{equation} where the real constant $\alpha$ has been selected so that $f'_i(0)=\exp(u_i(\mathbf 0))>0$. \paragraph{Exterior regions.}Consider now the problem of mapping $\Omega' =\C\setminus\overline\Omega$ conformally onto the exterior domain of the unit disk $D' = \{z\in\C:|z|>1\}$. As is well known~\cite[Theorem 5.10c]{henriciVol3}, there exists a unique mapping function $f_e:\overline{\Omega'}\to\overline{D'}$ such that $f_e(\infty) = \infty$ with Laurent series at infinity given by $$ f_e(z) = \upgamma^{-1} z+a_0+a_1z^{-1}+\cdots, $$ where $\upgamma>0$ is the so-called capacity of $\Gamma$. As in the interior region case, looking for a conformal mapping of the form $f_e(z) = z\exp(g_e(z))$, we have that $g_e(z) =u_e(\ner)+iv_e(\ner)$, $\ner=(\real z,\mathrm{Im}\, z)$, is analytic in $\Omega'$ and satisfies $\real g_e(z)=-\log|z|$ for $z\in\Gamma$.
Therefore, $u_e$ and $v_e$ are harmonic conjugates of each other and $u_e\in C^2(\Omega')\cap C(\overline{\Omega'})$ is the unique bounded solution~\cite[Theorem 6.25]{kress2012linear} of the exterior Dirichlet problem: $$ \Delta u_e=0\ \mbox{ in }\ \Omega',\quad u_e(\nex)=-\log|\nex|, \quad \nex\in\Gamma. $$ Following~\cite{kress2012linear}, we look for the solution in the form of a modified double-layer potential: \begin{equation}\label{eq:dl_pot_mod} u_e(\ner) = (\mathcal D\varphi_e)(\ner)+\int_\Gamma \varphi_e(\ney)\de s(\ney)=-\real\{(\mathcal C\varphi_e)(z)\}+\int_\Gamma \varphi_e(\ney)\de s(\ney),\ \ner=(\real z,\mathrm{Im}\, z)\in\Omega'. \end{equation} Imposing the boundary condition on $\Gamma$, we thus arrive at the following uniquely solvable second-kind integral equation~\cite[Theorems 6.24 and 6.25]{kress2012linear} for the unknown density $\varphi_e\in C(\Gamma)$: \begin{equation}\label{eq:ext_IE} \frac{\varphi_e(\nex)}{2} + (K\varphi_e)(\nex)+\int_\Gamma \varphi_e(\ney)\de s(\ney)=-\log|\nex|,\quad\nex\in\Gamma. \end{equation} The unique conformal mapping satisfying the condition $\upgamma^{-1} = \lim_{z\to\infty}f_e(z)/z>0$ is thus given by \begin{equation}\label{eq:ext_conf_map} f_e(z)= z\exp\left(-(\mathcal C\varphi_e)(z)+\int_\Gamma \varphi_e(\ney)\de s(\ney)\right),\quad z\in\Omega'. \end{equation} As we show in the numerical examples presented in Section~\ref{sec:examples}, the proposed methodology can be directly applied to produce accurate numerical evaluations of both interior~\eqref{eq:conf_map} and exterior~\eqref{eq:ext_conf_map} conformal mappings at points close to and on the contour $\Gamma$. \subsection{Biharmonic equation, Stokes flow, and elastostatics} We briefly mention other linear elliptic boundary value problems whose solutions can be formulated in terms of Cauchy-like integral operators.
These are Stokes flow~\cite{Greengard:2004dg,greengard1996integral,kropinski1999integral,Kropinski:2002fd} and elastostatic~\cite{greengard1996integral,helsing2001complex} problems in the plane, which, relying on the complex variable theory for the biharmonic equation~\cite{greenbaum1992numerical,mayo1984fast}, make heavy use of Goursat potentials sought as \begin{equation*}\begin{split} \phi(z)=\frac{1}{2 \pi i} \int_{\Gamma} \frac{\omega(\zeta)}{\zeta-z} \mathrm{d} \zeta\quad\mbox{and}\quad \psi(z)=\frac{1}{2 \pi i} \int_{\Gamma} \frac{\bar{\omega}(\zeta) \mathrm{d} \zeta+\omega(\zeta)\de\overline\zeta}{\zeta-z}-\frac{1}{2 \pi {i}} \int_{\Gamma} \frac{\bar{\zeta} \omega(\zeta)}{(\zeta-z)^{2}} \mathrm{d} \zeta.\end{split}\end{equation*} Clearly, these contour integrals can be directly regularized by means of the proposed methodology at points $z\in\C\setminus\Gamma$ close to the contour $\Gamma$. For instance, writing the second integral in parametric form, we obtain $$\int_{\Gamma} \frac{\bar{\omega}(\zeta) \mathrm{d} \zeta+\omega(\zeta)\de\overline\zeta}{\zeta-z} = \int_0^{2\pi}\frac{\mu(\tau)}{\gamma(\tau)-z}\gamma'(\tau)\de \tau, $$ where $\mu(\tau) = \overline{\omega(\gamma(\tau))}+\omega(\gamma(\tau))\overline{\gamma'(\tau)}/\gamma'(\tau)$ with $\gamma:[0,2\pi)\to\Gamma$ being the contour parametrization. Regularization of this integral can be directly achieved by writing the resulting Cauchy operator as in~\eqref{eq:reg_formula} in terms of the density interpolant of $\varphi=\mu\circ\gamma^{-1}$. \section{Numerics}\label{sec:numerics} This section presents numerical procedures for the implementation of the proposed density interpolation technique in the case of smooth and piecewise smooth contours $\Gamma$.
We focus here on the numerical evaluation of the regularized Cauchy integral~\eqref{eq:reg_formula} and its derivatives~\eqref{eq:reg_formula_der}, which entail the construction of the $N$th-order density interpolant $P_N$ and the evaluation of the contour integral \begin{equation}\label{eq:model_int} \int_\Gamma f(\zeta)\de \zeta\quad\mbox{with}\quad f(\zeta):= \frac{n!}{2\pi i}\frac{\varphi(\zeta)-P_N(\zeta,z_0)}{(\zeta-z)^{n+1}}, \end{equation} where $z_0\in\Gamma$, $z\in\C\setminus\Gamma$, and $N>n\geq 0$. The same procedures can be used for the evaluation of the regularized Cauchy principal-value integral in~\eqref{eq:reg_hilbert}, the regularized finite-part integral in~\eqref{eq:hyper_complex_2}, and the regularized weakly singular integral in~\eqref{eq:SL_op_reg}, provided proper care is taken at the singular point, meaning that the corresponding integrand there is either set to zero or computed by means of L'Hospital's rule, depending on the interpolation order $N$ used. \subsection{Smooth contours} \label{sec:smooth} Consider first a smooth Jordan curve $\Gamma$ that admits a global smooth $2\pi$-periodic parametrization $\gamma:[0,2\pi)\to\C$.
Applying the trapezoidal quadrature rule, the regularized contour integral~\eqref{eq:model_int} can be directly approximated as \begin{subequations}\label{eq:trap_rule} \begin{equation}\label{eq:trap} \int_\Gamma f(\zeta)\de \zeta=\int_0^{2\pi} f(\gamma(t))\gamma'(t)\de t\approx \frac{2\pi}{M}\sum_{m=1}^Mf(\gamma(t_m))\gamma'(t_m), \end{equation} where the quadrature nodes are given by \begin{equation}\label{eq:uniform} t_m:=\frac{2\pi}{M}(m-1)\quad \mbox{for} \quad m=1,\ldots,M.\end{equation}\label{eq:trap_rule_total}\end{subequations} Given $z\in\C\setminus\Gamma$, the interpolation point $z_0\in\Gamma$~\eqref{eq:ns_point} is approximated from the discrete set of contour points $\{\gamma(t_m)\}_{m=1}^{M}$, i.e., $z_0\approx z^*$ with $$z^*=\gamma(t^*)\quad\mbox{where}\quad t^* =\underset{1\leq m\leq M}{\rm arg\,min}|z-\gamma(t_m)|.$$ As is well known~\cite{trefethen2014exponentially}, the trapezoidal quadrature rule~\eqref{eq:trap} yields exponential convergence when applied to analytic integrands, while it yields superalgebraic convergence for $C^\infty$ integrands~\cite{davis2007methods}. Another significant advantage of using the trapezoidal rule in this context is that the derivatives $\phi^{(n)}(t^*)=(\varphi\circ\gamma)^{(n)}(t^*)$ and $\gamma^{(n)}(t^*)$ for $n=1,\ldots,N$, at the interpolation/quadrature point $t^*$---which are required in the construction of the density interpolant $P_N$---can be accurately and efficiently computed from $\{\phi(t_m)\}_{m=1}^{M}$ and $\{\gamma(t_m)\}_{m=1}^{M}$ by means of FFT-based differentiation~\cite{johnson2011notes}. A more detailed algorithmic description of an efficient numerical procedure for constructing $P_N$ is presented in Section~\ref{sec:det_coef} below. \subsection{Piecewise smooth contours} \label{sec:psmooth} Consider now a piecewise smooth Jordan curve given by the union $\Gamma=\bigcup_{p=1}^P \Gamma_p$ of $P\geq1$ non-overlapping patches $\Gamma_p$, $p=1,\ldots, P$, of class $C^N$.
Letting $\gamma_p:[-1,1]\to \Gamma_p$ denote the parametrizations of the patches, the contour integral~\eqref{eq:model_int} is here approximated as \begin{subequations}\label{eq:fejer_rule}\begin{equation}\label{eq:fejer} \int_\Gamma f(\zeta)\de \zeta = \sum_{p=1}^P \int_{\Gamma_p}f(\zeta)\de \zeta =\sum_{p=1}^P\int_{-1}^1 f(\gamma_p(t))\gamma_p'(t)\de t\approx\sum_{p=1}^P\sum_{m=1}^M\omega_mf(\gamma_p(t_m))\gamma'_p(t_m), \end{equation} by applying the Fej\'er quadrature rule~\cite{davis2007methods} with nodes given by the Chebyshev zero points~\cite{vetterling1992numerical}: \begin{equation}\label{eq:Cheb_points} t_{m}:=\cos \left(\vartheta_{m}\right), \quad \vartheta_{m}:=\frac{(2 m-1) \pi}{2 M}, \quad m=1, \ldots, M, \end{equation} and weights given by: \begin{equation}\label{eq:Fweights} \omega_{m}:=\frac{2}{M}\left(1-2 \sum_{\ell=1}^{[M / 2]} \frac{1}{4 \ell^{2}-1} \cos \left(2 \ell \vartheta_{m}\right)\right), \quad m=1, \ldots, M.\end{equation}\end{subequations} The Fej\'er weights~\eqref{eq:Fweights} can be efficiently computed via the FFT~\cite{waldvogel2006fast}. Like the trapezoidal quadrature rule~\eqref{eq:trap_rule}, the Fej\'er rule~\eqref{eq:fejer_rule} yields high-order accuracy for the integration of smooth functions~\cite{davis2007methods}. Furthermore, the derivatives of the density and the parametrization at the interpolation point $$z^*=\gamma_{p^*}(t^*)\quad\mbox{where}\quad t^* =\underset{1\leq m\leq M,1\leq p\leq P}{\rm arg\,min} |z-\gamma_p(t_m)|,$$ which are needed in the construction of the interpolant $P_N$, can be obtained from $\{\varphi(\gamma_{p^*}(t_m))\}_{m=1}^{M}$ and $\{\gamma_{p^*}(t_m)\}_{m=1}^{M}$, respectively, via FFT-based differentiation algorithms~\cite{johnson2011notes}.
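As a quick illustration, the nodes~\eqref{eq:Cheb_points} and weights~\eqref{eq:Fweights} can be generated and tested in a few lines (a Python/NumPy sketch; the direct $O(M^2)$ summation below is used instead of the FFT construction of~\cite{waldvogel2006fast} for clarity):

```python
import numpy as np

def fejer1(M):
    """Nodes and weights of Fejer's first quadrature rule on [-1, 1]."""
    theta = (2*np.arange(1, M + 1) - 1)*np.pi/(2*M)   # Chebyshev angles
    ell = np.arange(1, M//2 + 1)
    # closed-form weight expression from the text
    weights = (2.0/M)*(1 - 2*np.sum(
        np.cos(2*np.outer(theta, ell))/(4*ell**2 - 1), axis=1))
    return np.cos(theta), weights

t, w = fejer1(20)
approx = np.sum(w*np.exp(t))   # integral of exp(t) over [-1, 1]
exact = np.e - 1/np.e
```

For smooth integrands such as this one, the error decays spectrally fast in $M$, consistent with the high-order accuracy claimed above.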
Note that since there are no quadrature points~\eqref{eq:Cheb_points} at the ends of the interval $[-1,1]$, the curve regularity requirements for the construction of $P_N$ are fulfilled at all the discretization points $\{\gamma_p(t_m)\}_{m=1}^M$ for $p=1,\ldots, P.$ As is well known, the accuracy of the numerically approximated derivatives deteriorates significantly as both the number~$M$ of Chebyshev points and the derivative order $N$ increase. In general, this phenomenon occurs because numerical differentiation is an ill-posed problem, in the sense that small errors in the function values, such as those stemming from round-off errors, give rise to large errors in the approximate derivatives. In order to circumvent this issue (which is also present in the context of the trapezoidal rule~\eqref{eq:trap_rule}), we divide the parameter space $[-1,1]$ into a suitable number of subintervals, thus generating smaller patches within which just a small number~$M$ of Chebyshev points is needed to accurately perform both integration and differentiation. This strategy ensures that numerical differentiation of $\varphi\circ\gamma_{p^*}$ and $\gamma_{p^*}$ is carried out by differentiation of low-degree Chebyshev interpolation polynomials, which are significantly less affected by ill-conditioning~\cite{bruno2012numerical}. \subsection{Determining the coefficients}\label{sec:det_coef} This section presents a straightforward numerical procedure for computing the set of coefficients $\{c_j(z)\}_{j=0}^N$---that define the density interpolant $P_N(\cdot,z)$ in~\eqref{eq:truc_taylor}---at points~$z\in\Gamma$ corresponding to the quadrature nodes introduced in Sections~\ref{sec:smooth} and~\ref{sec:psmooth}.
As discussed, the discrete set $\{\phi(t_m)\}_{m=1}^{M}$, where $\{t_m\}_{m=1}^M$ could be either~\eqref{eq:uniform} or~\eqref{eq:Cheb_points}, can be utilized to produce spectrally accurate approximations of the derivative values $\{\phi'(t_m)\}_{m=1}^{M}$ of a smooth function $\phi$ defined in the parameter space, by means of FFT-based differentiation. Let then $D_{\rm FFT}:\C^{M}\to \C^M$ denote the linear transformation that produces $(D_{\rm FFT}\pmb{\phi})_m \approx \phi'(t_m)$, $m=1,\ldots,M$, via FFT-based differentiation, where $\pmb{\phi} = [\phi(t_1),\ldots,\phi(t_M)]^T\in\C^M$. Then the set of coefficients $\{c_j(\gamma(t_m))\}_{j=0}^N$ associated with each one of the quadrature nodes $t_m$, $m=1,\ldots M$, can be computed all at once by means of the following recursive procedure:\bigskip \begin{algorithm}[H] \KwData{Sample values of the density function $\pmb{\phi}=[\phi(t_1),\ldots,\phi(t_M)]^T\in\C^M$; $\pmb{\gamma}=[\gamma(t_1),\ldots,\gamma(t_M)]^{T}\in\C^M$; and density interpolation order $N\geq 0$\;} \KwResult{Matrix $B\in\C^{M\times (N+1)}$ with entries $B(m,j+1)\approx c_{j}(\gamma(t_m))$, $m=1,\ldots M$, $j=0,\ldots,N$\;} \BlankLine $\pmb{\gamma}' = D_{\rm FFT}(\pmb{\gamma})$\; $B(:,1)=\pmb{\phi}$\; \lIf{$N=0$}{\Return{B}} \For{$j$ from 1 to $N$}{ $ B(:,j+1)=D_{\rm FFT}\left(B(:,j)\right)\oslash\pmb{\gamma}'$\; \tcp{the symbol $\oslash$ denotes element-wise division}} \Return{$B$}\; \caption{Evaluation of the coefficients in the definition of the density interpolant~\eqref{eq:truc_taylor}.}\label{alg:algo} \end{algorithm}\bigskip This simple procedure is based on the identity $P_N(z,z_m) = p_N(\gamma^{-1}(z),t_m)$, where $\gamma$ could be either a global or a local parametrization of the curve $\Gamma$, and $z_m=\gamma(t_m)$. 
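A direct NumPy transcription of Algorithm~\ref{alg:algo} might read as follows, with $D_{\rm FFT}$ realized by wavenumber multiplication in Fourier space (a sketch; the unit-circle check at the end is an illustrative test, not taken from the text):

```python
import numpy as np

def d_fft(v):
    """The operator D_FFT: spectral derivative of 2*pi-periodic samples."""
    M = len(v)
    k = np.fft.fftfreq(M, d=1.0/M)  # integer wavenumbers
    if M % 2 == 0:
        k[M//2] = 0.0               # drop the (sign-ambiguous) Nyquist mode
    return np.fft.ifft(1j*k*np.fft.fft(v))

def interpolant_coefficients(phi, gam, N):
    """Algorithm 1: B[m, j] ~ c_j(gamma(t_m)) for j = 0, ..., N."""
    M = len(phi)
    B = np.zeros((M, N + 1), dtype=complex)
    dgam = d_fft(gam)               # gamma'(t_m)
    B[:, 0] = phi
    for j in range(1, N + 1):
        B[:, j] = d_fft(B[:, j - 1])/dgam  # element-wise division
    return B

# illustrative check on the unit circle with phi the trace of f(z) = z**3,
# for which c_j(z) should equal f^{(j)}(z)
t = 2*np.pi*np.arange(64)/64
g = np.exp(1j*t)
B = interpolant_coefficients(g**3, g, 3)
```

Since both the parametrization and the density in this check are trigonometric polynomials, the recursion reproduces the exact derivative values up to round-off.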
Differentiating the identity $P_N(z,z_m) = p_N(\gamma^{-1}(z),t_m)$ $j$~times with respect to $z$, we get $$ \left.\frac{\p^j }{\p z^j}P_N(z,z_m)\right|_{z=z_m} = \left.D^j_\gamma p_N(t,t_m)\right|_{t=t_m},\quad\mbox{where}\quad D_\gamma=\frac{1}{\gamma'(t)}\frac{\p}{\p t}. $$ Therefore, using the identities $$c_j(z_m)=\left.\frac{\p^j}{\p z^j}P_N(z,z_m)\right|_{z=z_m}\andtext \left.D^j_\gamma p_N(t,t_m)\right|_{t=t_m}=D^j_\gamma\phi(t_m),$$ that follow from~\eqref{eq:truc_taylor} and~\eqref{eq:inter_cond}, respectively, we obtain $$ c_j(z_m) = D^j_\gamma\phi(t_m), \quad j=0,\ldots,N,\quad m=1,\ldots,M, $$ which is the formula that Algorithm~\ref{alg:algo} implements. Finally, we comment on the computational complexity of Algorithm~\ref{alg:algo}. Clearly, the overall cost of constructing the $N$th-order density interpolants associated with each one of the discretization points on the contour amounts to $O(NM\log (M))$ in the case of a smooth curve, where $M$ is the total number of discretization points, and to $O(PNM\log(M))$ in the case of piecewise smooth curves, where~$P$ is the number of patches and~$M$ is the number of discretization points per patch. \section{Examples} \label{sec:examples} This final section presents a variety of numerical examples designed to validate and showcase the applicability of the proposed methodology. \subsection{Validation} We start off by considering the smooth ``jellyfish'' and the piecewise smooth ``snowflake'' contours displayed in Figures~\ref{fig:jellyfish} and~\ref{fig:snowflake}, respectively. The jellyfish contour is given by the (real) analytic parametrization \begin{equation}\label{eq:curve_param} \gamma(t)=\{1+0.3\cos(4t+2\sin t)\}\e^{i(t-\frac{\pi}{2})},\quad t\in[0,2\pi), \end{equation} while the snowflake curve corresponds to the Koch polygonal domain~\cite{falconer2004fractal} comprising~192 vertices.
Here we validate the density interpolation technique developed in Section~\ref{sec:nearly_singular} and the high-order numerical methods presented in Section~\ref{sec:numerics} by using the Cauchy integral formula~\eqref{eq:CF}. In our first example, the Cauchy integral operator~$\mathcal Cf$ and its first two derivatives are evaluated at a set of points $\{w_k\}_{k=1}^{100}\in\Omega$ which are uniformly distributed along $\Gamma$ and placed at a distance~$10^{-4}$ or shorter from it. Meromorphic functions of the form~$f(z) = \sum_{\ell =1}^L (z-z_\ell)^{-1}$, with poles lying outside the domain~$\Omega$ bounded by~$\Gamma$ (the locations of the poles $z_\ell$ of~$f$ used in this example are marked by the green dots in Figures~\ref{fig:jellyfish} and~\ref{fig:snowflake}), are used as input densities. The numerical errors in the evaluation of the operators~\eqref{eq:reg_formula} and~\eqref{eq:reg_formula_der} are then assessed by comparing the numerically produced values of $(\mathcal Cf)(w_k)$, $(\mathcal Cf)'(w_k)$, and $(\mathcal Cf)''(w_k)$ with the corresponding exact ones~$f(w_k)$, $f'(w_k)$ and $f''(w_k)$, for $k=1,\ldots, 100$. Table~\ref{tab:1} shows the maximum relative errors achieved without regularization and using the density interpolation method of orders $N=0,\ldots,4$. A total of $M=800$ quadrature nodes were used in the case of the jellyfish contour, which is discretized by applying the numerical integration/differentiation approach presented in Section~\ref{sec:smooth}. The snowflake curve, in turn, was discretized following the approach presented in Section~\ref{sec:psmooth}, with a total of $P=576$ patches and~$M=8$ quadrature nodes per patch. Figures~\ref{fig:jellyfish} and~\ref{fig:snowflake}, on the other hand, display the logarithm in base ten of the absolute error~$|f(z)-\tilde f(z)|$, for $z\in\Omega$, in the evaluation of the Cauchy operator for various density interpolation orders.
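The near-boundary breakdown recorded in the first row of Table~\ref{tab:1}, and the effect of even the lowest-order correction, are easy to reproduce in a schematic setting. The Python sketch below is our own illustration on the unit circle with a single-pole density (not the configuration of the table): it evaluates $(\mathcal Cf)(w)$ for $f(z)=(z-2)^{-1}$ at a point $w$ at distance $10^{-4}$ from a quadrature node, first by direct trapezoidal quadrature and then with the constant interpolant $P_0(z,z^\ast)=f(z^\ast)$ at the closest node subtracted and added back in closed form, using $\mathcal C[P_0](w)=f(z^\ast)$ for $w\in\Omega$.

```python
import numpy as np

M = 400
t = 2*np.pi*np.arange(M)/M
z = np.exp(1j*t)                 # quadrature nodes on the unit circle
dz = 1j*np.exp(1j*t)             # gamma'(t_m)
f = 1.0/(z - 2.0)                # density: trace of f(z) = 1/(z - 2), pole outside

w = (1.0 - 1e-4)*z[0]            # target point at distance 1e-4 from the node z[0]
exact = 1.0/(w - 2.0)            # Cauchy integral formula: (Cf)(w) = f(w)

def cauchy_trap(dens, w):
    # trapezoidal-rule approximation of (1/(2*pi*i)) * int_Gamma dens(z)/(z - w) dz
    return np.sum(dens*dz/(z - w))/(1j*M)

err_direct = abs(cauchy_trap(f, w) - exact)
# 0th-order regularization: C[f](w) = C[f - f(z*)](w) + f(z*),  z* = z[0]
err_reg = abs(cauchy_trap(f - f[0], w) + f[0] - exact)
```

In this toy setting the direct rule loses all significant digits (an $O(10)$ error, as in the table), while the $N=0$ correction alone already recovers two to three digits.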
The ``$5h$'' rule of thumb~\cite{Barnett:2014tq} was used to produce Figures~\ref{fig:jellyfish} and~\ref{fig:snowflake}: the regularization was applied at points $z\in\Omega$ lying at a distance smaller than $5h$ from the contour, where $h$ approximates the spacing between the contour discretization points closest to~$z$. A total of $M=400$ quadrature nodes were used in the case of the jellyfish contour, and $P=576$ patches and $M=6$ points per patch in the case of the snowflake curve. \begin{table} \caption{Relative errors $E_n = \max_{1\leq k\leq 100}|f^{(n)}(w_k)-\tilde f^{(n)}(w_k)|/|f^{(n)}(w_k)|$, with~$f^{(n)}(w_k)$ and $\tilde f^{(n)}(w_k)$ denoting the exact and approximate values, respectively, in the evaluation of the Cauchy integral operator ($n=0$) and its first ($n=1$) and second ($n=2$) order derivatives, at a fixed set of points $\{w_k\}_{k=1}^{100}\in\Omega$ placed at a distance~$10^{-4}$ or shorter from the contour~$\Gamma$. The table reports the results obtained without using regularization and using the density interpolation method of order $N=0,\ldots,4$, for the (smooth) jellyfish and the (polygonal) snowflake contours displayed in Figures~\ref{fig:jellyfish} and~\ref{fig:snowflake}, respectively. } \begin{center} \begin{tabular}{ c|c|c|c|c|c|c } Reg.
order & \multicolumn{3}{|c}{Errors (jellyfish)} & \multicolumn{3}{|c}{Errors (snowflake)} \\ \toprule $N$& $E_0$&$ E_1$&$ E_2$& $E_0$&$ E_1$&$ E_2$\\ \toprule without & $2.48\cdot 10^{+01}$ & $3.98\cdot 10^{+04}$&$1.11\cdot 10^{+08}$& $3.24\cdot 10^{+00}$&$3.05\cdot 10^{+03}$&$2.75\cdot 10^{+06}$\\ 0th & $1.60\cdot 10^{-02}$ & $5.04\cdot 10^{-01}$ &$1.35\cdot 10^{+01}$&$4.02\cdot 10^{-03}$&$4.60\cdot 10^{-01}$&$9.91\cdot 10^{+02}$\\ 1st & $5.72\cdot 10^{-06}$ & $8.79\cdot 10^{-03}$ &$5.02\cdot 10^{-01}$&$4.47\cdot 10^{-06}$&$3.18\cdot 10^{-03}$&$4.38\cdot 10^{-01}$\\ 2nd & $2.43\cdot 10^{-09}$ & $7.57\cdot 10^{-06}$&$1.03\cdot 10^{-02}$&$5.20\cdot 10^{-09}$&$7.98\cdot 10^{-06}$&$2.81\cdot 10^{-03}$ \\ 3rd & $1.04\cdot 10^{-12}$ & $4.55\cdot 10^{-09}$ &$1.27\cdot 10^{-05}$&$7.67\cdot 10^{-12}$&$1.43\cdot 10^{-08}$&$1.14\cdot 10^{-05}$\\ 4th & $3.54\cdot 10^{-13}$ & $1.46\cdot 10^{-10}$ &$2.10\cdot 10^{-07}$&$3.52\cdot 10^{-12}$&$2.68\cdot 10^{-10}$&$4.27\cdot 10^{-08}$\\ \bottomrule \end{tabular} \end{center}\label{tab:1} \end{table} \begin{figure}[ht] \centering \subfloat[Without regularization. $E=1.27\cdot 10^1$.]{\includegraphics[height=0.3\textwidth]{fig3a_Example0_FigNoReg.pdf}}\quad \subfloat[$0$th order. $E=1.18\cdot 10^{-1}$.]{\includegraphics[height=0.3\textwidth]{fig3b_Example0_Fig0.pdf}}\quad \subfloat[ $1$st order. $E=6.40\cdot 10^{-3}$.]{\includegraphics[height=0.3\textwidth]{fig3c_Example0_Fig1.pdf}}\\ \subfloat[$2$nd order. $E=3.85\cdot 10^{-4}$.]{\includegraphics[height=0.3\textwidth]{fig3d_Example0_Fig2.pdf}}\quad \subfloat[$3$rd order. $E=2.27\cdot 10^{-5}$.]{\includegraphics[height=0.3\textwidth]{fig3e_Example0_Fig3.pdf}}\quad \subfloat[a][$4$th order. $E=1.38\cdot 10^{-6}$.]{\includegraphics[height=0.3\textwidth]{fig3f_Example0_Fig4.pdf}} \caption{Logarithm in base ten of the absolute error in the evaluation of the Cauchy operator inside a jellyfish smooth contour for density interpolation orders $N=0,1,2,3$ and~$4$. 
The input function used corresponds to the contour restriction of a meromorphic function with poles at the locations marked by the green dots. The maximum absolute error, $E$, is provided in the captions.}\label{fig:jellyfish} \end{figure} \begin{figure}[ht] \centering \subfloat[Without regularization. $E=5.45\cdot 10^1$.]{\includegraphics[height=0.3\textwidth]{fig4a_Example1_FigNoReg.pdf}\label{fig_D_none}}\quad \subfloat[$N=0$. $E=6.00\cdot 10^{-2}$.]{\includegraphics[height=0.3\textwidth]{fig4b_Example1_Fig0.pdf}\label{fig_D_0}}\quad \subfloat[$N=1$. $E=5.18\cdot 10^{-4}$.]{\includegraphics[height=0.3\textwidth]{fig4c_Example1_Fig1.pdf}\label{fig_D_4}}\\ \subfloat[$N=2$. $E=1.03\cdot 10^{-5}$.]{\includegraphics[height=0.3\textwidth]{fig4d_Example1_Fig2.pdf}\label{fig_grad_D_none}}\quad \subfloat[$N=3$. $E=2.43\cdot 10^{-7}$.]{\includegraphics[height=0.3\textwidth]{fig4e_Example1_Fig3.pdf}\label{fig_grad_D_0}}\quad \subfloat[$N=4$. $E=1.37\cdot 10^{-8}$.]{\includegraphics[height=0.3\textwidth]{fig4f_Example1_Fig4.pdf}\label{fig_grad_D_4}} \caption{Logarithm in base ten of the absolute error in the evaluation of the Cauchy operator inside the Koch polygonal curve for density interpolation orders $N=0,1,2,3$ and~$4$. The input function used corresponds to the contour restriction of a meromorphic function with poles at the locations marked by the green dots. The maximum absolute error, $E$, is provided in the captions.
}\label{fig:snowflake} \end{figure} \begin{figure}[ht] \centering \subfloat[Integral equation~\eqref{eq:IE_SL}.]{\includegraphics[height=0.4\textwidth]{fig5a_FigIE_Sop.pdf}\label{fig_Sop}} \subfloat[Integral equation~\eqref{eq:IE_HS}.]{\includegraphics[height=0.4\textwidth]{fig5b_FigIE_Top.pdf}\label{fig_Top}} \caption{Relative errors in the solution of the integral equations~\eqref{eq:IE_SL} and~\eqref{eq:IE_HS}, shown in (a) in log-log scale and in (b) in semi-log scale, respectively, computed using the trapezoidal-rule-based Nystr\"om method for various numbers~$M$ of discretization points. The inset figures display the logarithm in base ten of the absolute error in the numerically approximated solution $u$ inside the domain, without using regularization (left half of the inset figure) and using the density interpolation of order $N=3$ (right half of the inset figure). }\label{fig:IE_convergence} \end{figure} \subsection{Laplace equation layer potentials and boundary integral operators} Following the discussion in Section~\ref{sec:DL_formulation}, we here apply the density interpolation method to solve the Laplace equation by means of boundary integral equation methods. For the sake of completeness, we consider the uniquely solvable interior Robin problem: \begin{equation}\label{eq:interior_Laplace} \Delta u =0\quad\mbox{in}\quad\Omega,\qquad \frac{\p u}{\p \nu} + u = f\quad\mbox{on}\quad\Gamma, \end{equation} which is formulated as direct boundary integral equations involving the two layer potentials and all four integral operators of the Calder\'on calculus. The domain's boundary $\Gamma=\p\Omega$ is assumed to be of class~$C^2$ and $f\in C^{1,\alpha}(\Gamma)$, $0<\alpha<1$. Integral equations for the unknown traces $\p u/\p \nu$ and $u$ on~$\Gamma$ are derived from the Green's representation formula \begin{equation}\label{eq:GF} u(\ner) = \left(\mathcal S\frac{\p u}{\p \nu}\right)(\ner) - (\mathcal D u)(\ner),\quad \ner\in\Omega.
\end{equation} Indeed, evaluating this formula on $\Gamma$---by making use of the interior jump conditions for the potentials~\cite{kress2012linear} and employing the boundary condition---we obtain the following second-kind integral equation for the unknown normal derivative of the solution: \begin{equation}\label{eq:IE_SL} \left(\frac{I}{2}+K+S\right)\frac{\p u}{\p \nu} = \left(\frac{I}{2}+K\right)f\quad\mbox{on}\quad\Gamma, \end{equation} where $I$, $S$ and $K$ are respectively the identity operator, the single-layer operator~\eqref{eq:SL_op}, and the double-layer operator~\eqref{eq:DL_op}. In a similar manner, taking the normal derivative of the Green's formula on~$\Gamma$---by making use of the exterior jump conditions of the gradients of the layer potentials~\cite{kress2012linear}---and enforcing the boundary condition, we obtain \begin{equation}\label{eq:IE_HS} \left(-\frac{I}{2}+K^\top+T\right)u = \left(-\frac{I}{2}+K^\top\right)f\quad\mbox{on}\quad\Gamma, \end{equation} for the unknown solution $u$ on $\Gamma$, where $K^\top$ and $T$ are respectively the adjoint double-layer operator~\eqref{eq:ADL_op} and the hypersingular operator~\eqref{eq:hypersingular}. The integral equations~\eqref{eq:IE_SL} and~\eqref{eq:IE_HS} are discretized following a Nystr\"om method~\cite{kress2012linear} based on direct use of the trapezoidal-rule discretization described in Section~\ref{sec:smooth}. The required density interpolants are constructed following the procedures outlined in Sections~\ref{sec:smooth} and~\ref{sec:det_coef}. Since the single-layer ($S$) and hypersingular ($T$) operators are recast, in~\eqref{eq:SL_op_reg} and~\eqref{eq:hyper_complex}, respectively, in terms of smooth integrands, the trapezoidal rule yields the expected order of convergence according to the achieved smoothness of the integrands, which, in turn, depends on the density interpolation order $N$.
Note that the remaining double-layer ($K$) operator and its adjoint ($K^\top$) do not need regularization in this case. The resulting linear systems for the approximate values $v_j$ and $u_j$ of the traces $\p u/\p\nu$ and $u$ at the quadrature nodes $\nex_j\in\Gamma$, $j=1,\ldots,M$, are iteratively solved by means of GMRES~\cite{saad1986gmres}, which only requires forward map evaluations of the integral operators. In order to examine the accuracy of the Nystr\"om method, we let $\Gamma$ be the smooth jellyfish curve given by the parametrization in~\eqref{eq:curve_param}, and $f$ be such that $u(x,y)=\e^x\sin y$ is the exact solution of~\eqref{eq:interior_Laplace}. Figure~\ref{fig_Sop} displays the relative errors $\max_{1\leq j\leq M}|v_j-\frac{\p u}{\p \nu}(\nex_j)|/\max_{1\leq j\leq M}|\frac{\p u}{\p \nu}(\nex_j)|$ in the numerical solution of the second-kind integral equation~\eqref{eq:IE_SL}. Numerical errors of order $O(M^{-N-2})$ as $M\to \infty$ are observed in these examples for the interpolation orders $N=0,\ldots,3$. The convergence appears to be slightly delayed due to the significant curvature of the jellyfish contour used, which requires a relatively large number of discretization points $M$ to be properly resolved. The observed convergence orders are explained by the fact that approximate second-kind integral equation solutions obtained by means of Nystr\"om methods inherit the accuracy of the associated quadrature rule~\cite{atkinson1997numerical,hackbusch1995}.
Therefore, since the dominant quadrature error in the approximation of $(I/2+K+S)\varphi$ at the nodes stems from the evaluation of the regularized single-layer operator $S$ in~\eqref{eq:SL_op_reg} (since $K$ features an analytic kernel) and, as we show below, the direct trapezoidal rule approximation~\eqref{eq:trap_rule_total} of $S\varphi$ yields $O(M^{-N-2})$ errors as $M\to\infty$ for analytic contours and densities, we achieve the same asymptotic errors in the integral equation solution. The following results establish the aforementioned asymptotic error bound for the trapezoidal rule approximation of the regularized single-layer operator $S$: \begin{lemma} \label{lem:trap_rule} Let $g$ be an analytic function on $[0,2\pi]$ such that $g^{(n)}(0)=g^{(n)}(2\pi)=0$ for $n=0,\ldots,N$. Then \begin{eqnarray} \left|\int_{0}^{2\pi}g(t)\de t-\frac{2\pi}{M}\sum_{m=1}^{M}g(t_m)\right|\leq O\left(M^{-N-2}\right),\label{eq:trap_smooth}\\ \left|\int_{0}^{2\pi}g(t)\log\left(4\sin^2\frac{t}{2}\right)\de t-\frac{2\pi}{M}\sum_{m=2}^{M}g(t_m)\log\left(4\sin^2\frac{t_m}{2}\right)\right|\leq O\left(M^{-N-2}\right),\label{eq:trap_log} \end{eqnarray} as $M\to\infty$, where $t_m=\frac{2\pi}{M}(m-1)$ for $m=1,\ldots,M.$ \end{lemma} \begin{proof} The error bound~\eqref{eq:trap_smooth} follows directly from~\cite[Corollary 3.3]{javed2014trapezoidal}, while~\eqref{eq:trap_log}, in turn, follows by slightly modifying the proof of the Euler-Maclaurin expansion for functions with logarithmic singularities presented in~\cite[Theorem 1]{celorrio1999euler}. \end{proof} \begin{theorem} Let $\gamma$ and $\varphi\circ\gamma$ be analytic and $2\pi$-periodic functions on $[0,2\pi]$, and let $\psi$, $Q_N$ and $\log(\cdot-z_0)$, with $z_0=\gamma(0)\in\Gamma$, be the functions defined in Theorem~\ref{th:single_layer}.
Then the error in the direct trapezoidal rule approximation~\eqref{eq:trap_rule_total} of the regularized single-layer operator~\eqref{eq:SL_op_reg} at $\nex=(\real z_0,\mathrm{Im}\, z_0)$ satisfies $$ \left|(S\varphi)(\nex)+\real\left\{\frac{1}{M}\sum_{m=2}^{M} \log(\gamma(t_m)-\gamma(0))\left\{\psi(\gamma(t_m))-Q_N(\gamma(t_m),\gamma(0))\right\}\gamma'(t_m)\right\}\right|\leq O(M^{-N-2}), $$ as $M\to\infty$. \end{theorem} \begin{proof} Let \begin{equation}\label{eq:thm_ex0} g_0(t) = \left\{\psi(\gamma(t))-Q_N(\gamma(t),\gamma(0))\right\}\gamma'(t), \end{equation} which is $2\pi$-periodic and analytic on $[0,2\pi]$, and note that $g^{(n)}_0(0)=g^{(n)}_0(2\pi)=0$ for $n=0,\ldots,N$, because $Q_N(\cdot, z_0)$ is the density interpolant of $\psi$ at $z_0=\gamma(0)$. On the other hand, we have the identity \begin{equation}\label{eq:thm_ex} \log(\gamma(t)-\gamma(0)) =\frac{1}{2}\log\left(4\sin^2 \frac{t}{2}\right)+g_1(t), \end{equation} where $$ g_1(t)=\frac{1}{2}\log\left(\frac{|\gamma(t)-\gamma(0)|^2}{4\sin^2 \frac{t}{2}} \right)+i\operatorname{arg}(\gamma(t)-\gamma(0)), $$ is an analytic (but not periodic) function on $[0,2\pi]$. Therefore, it follows from~\eqref{eq:thm_ex0} and~\eqref{eq:thm_ex} that the single-layer operator can be expressed as $$ (S\varphi)(\nex) = -\frac{1}{2\pi}\real\left\{\frac{1}{2}\int_0^{2\pi}g_0(t)\log\left(4\sin^2 \frac{t}{2}\right)\de t+\int_0^{2\pi}g_0(t)g_1(t)\de t\right\}. $$ Since $g=g_0g_1$ satisfies $$ g^{(n)}(0) =\sum_{k=0}^{n}{n \choose k} g_1^{(n-k)}(0) g_0^{(k)}(0)=\sum_{k=0}^{n}{n \choose k} g_1^{(n-k)}(2\pi) g_0^{(k)}(2\pi) = g^{(n)}(2\pi) =0 $$ for $n=0,\ldots,N$, we obtain, from Lemma~\ref{lem:trap_rule}, the asymptotic error bound. \end{proof} Continuing with the numerical examples, Figure~\ref{fig_Top} displays the relative errors $\max_{1\leq j\leq M}|u_j- u(\nex_j)|/\max_{1\leq j\leq M}|u(\nex_j)|$ in the numerical solution of the integral equation~\eqref{eq:IE_HS}.
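The logarithmic bound~\eqref{eq:trap_log} of Lemma~\ref{lem:trap_rule} is also easy to check numerically. In the sketch below (an illustrative check of ours, not part of the solver) we take $g(t)=(1-\cos t)^2$, which vanishes together with its first three derivatives at $t=0$ and $t=2\pi$, so the lemma applies with $N=3$ and predicts $O(M^{-5})$ errors; the exact value $\int_0^{2\pi} g(t)\log(4\sin^2\frac{t}{2})\,\de t = 7\pi/2$ follows from the Fourier expansion $\log(4\sin^2\frac{t}{2})=-2\sum_{k\geq 1}\cos(kt)/k$.

```python
import numpy as np

def trap_log(M):
    # quadrature in (eq:trap_log): (2*pi/M) * sum_{m=2}^{M} g(t_m) log(4 sin^2(t_m/2)),
    # skipping the node t_1 = 0 where the logarithm is singular (g vanishes there anyway)
    t = 2*np.pi*np.arange(1, M)/M
    g = (1.0 - np.cos(t))**2
    return 2*np.pi/M*np.sum(g*np.log(4.0*np.sin(t/2)**2))

exact = 7*np.pi/2
errs = [abs(trap_log(M) - exact) for M in (32, 64, 128)]
```

Doubling $M$ reduces the error by a factor close to $2^5=32$, consistent with the predicted fifth-order convergence.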
Exponential convergence as $M\to \infty$ is observed in Figure~\ref{fig_Top} for the interpolation orders $N=1,2$ and~3. This phenomenon is explained by the fact that, like the adjoint double-layer operator $K^\top$, the regularized hypersingular operator~\eqref{eq:hyper_complex} is given in terms of $2\pi$-periodic analytic integrands for $N=1,2$ and~3, and, as such, the trapezoidal rule yields exponential convergence in the overall evaluation of $(-I/2+K^\top+T)\varphi$ at the quadrature nodes as $M$ increases. This eventually translates into the same exponential convergence of the integral equation solution. The inset figures in Figures~\ref{fig_Sop} and~\ref{fig_Top} show the logarithm in base ten of the absolute error in the numerical solution of~\eqref{eq:interior_Laplace} obtained from the Green's formula~\eqref{eq:GF} with (right half) and without (left half) regularization of the single- and double-layer potentials at points near the boundary. A total of $M=400$ discretization points and the interpolation order $N=3$ were used to produce these figures. The color difference between them (note that they are displayed using the same color scale) indicates the superior accuracy achieved using the integral equation~\eqref{eq:IE_HS}. \begin{figure}[h!] \centering \subfloat[Interior conformal mappings: Moai (top figure) and Nazca bird (bottom figure).]{\includegraphics[width=0.71\textwidth]{fig6a_Conformal.pdf}\label{fig:Moai}}\\ \subfloat[Exterior conformal mappings: Moai (top figure) and Nazca bird (bottom figure).]{\includegraphics[width=0.71\textwidth]{fig6b_Conformal.pdf}\label{fig:Nazca}} \caption{Examples of conformal mappings from $C^2$ domains to the interior~(a) and exterior~(b) of the unit disk.
The interior and exterior mappings, given by $f_i$ in~\eqref{eq:conf_map} and $f_e$ in~\eqref{eq:ext_conf_map}, respectively, are regularized by the proposed density interpolation technique (at points $z$ lying near the contour) and numerically evaluated by means of the discretization approach presented in Section~\ref{sec:psmooth}. The zoomed-in figures demonstrate that the accuracy achieved by the density interpolation method in the evaluation of the mappings $f_i$ and $f_e$ allows them to retain their angle-preserving property near the contour even at regions of large distortion.}\label{fig:cmap} \end{figure} In our final example we apply the density interpolation technique to the numerical evaluation of conformal mappings from the interior (resp. exterior) of $C^2$ curves into the interior (resp. exterior) of the unit disk. We consider here the \emph{Moai} and \emph{Nazca bird} curves depicted in Figure~\ref{fig:cmap}, whose $C^2$ parametrizations are produced by cubic spline interpolation. The associated interior (resp. exterior) map integral equation~\eqref{eq:int_IE} (resp.~\eqref{eq:ext_IE}) is solved by applying a direct Nystr\"om method based on the Chebyshev discretization approach presented in Section~\ref{sec:psmooth}. The density interpolation method of order $N=3$ is then employed to evaluate the interior ($f_i$) and exterior ($f_e$) conformal mappings near the boundary; the zoomed-in figures provide evidence of the achieved accuracy in the numerical evaluation of $f_i$ in~\eqref{eq:conf_map} and $f_e$ in~\eqref{eq:ext_conf_map}, which are expressed in terms of the Cauchy integral operator. It is important to point out that in the cases of the Araucaria of Figure~\ref{fig:AraucariaExample} and the snowflake of Figure~\ref{fig:snowflake} and Table~\ref{tab:1}, which are the only contours in the paper that feature corners, the integrands considered are in fact smooth up to and including the endpoints of each curve segment.
As such, the advocated Chebyshev discretization yields spectral accuracy in the evaluation of the resulting integral operator as the number of quadrature nodes increases. This is, however, often not the case for boundary integral equation solutions, which typically develop singularities depending on the angles at corners~\cite{kozlov1997elliptic}. As is well known, these singularities do have a significant impact on the overall numerical accuracy, which cannot be completely avoided by the quadratic refinement effected by the Chebyshev grids~\cite{Kress:1990vm,bruno2009high,anand2012well}. A treatment of such singularities in the context of density interpolation methods, which is compatible with the method proposed here, is given in~\cite[Sec.~5.3]{perez2018plane}. We mention, finally, that since both the Moai and the Nazca bird contours in Figure~\ref{fig:cmap} are given in terms of parametrizations that are (globally) of class $C^2$, they do not feature corners. Therefore, although the associated conformal mappings were produced from Laplace integral equation solutions, the global regularity of the curves does not limit the accuracy of the Chebyshev grid discretization significantly. \section{Conclusions} We have introduced a new high-order density interpolation technique for the kernel regularization of Cauchy-like integral operators, such as Laplace, biharmonic, and Stokes layer potentials and associated boundary integral operators in two dimensions, as well as contour integral representations of conformal mappings. The proposed methodology relies on the well-known Cauchy integral and Sokhotski-Plemelj formulae of complex analysis, together with local Taylor interpolations of input density functions along the contour. High-order numerical methods for the practical implementation of the proposed technique were presented for both smooth and piecewise smooth contours.
An FFT-based algorithm applicable to both cases was introduced for the accurate and efficient construction of the density interpolant. Application of this methodology to Stokes flows in the presence of moving interfaces is currently under investigation. \bibliographystyle{abbrv}
\section{Introduction} Let $A$ be an abelian variety over a number field $K$, and $\ell$ a prime number. If $A$ admits a $K$-rational $\ell$-isogeny, then necessarily, at every prime $\mathfrak{p}$ of good reduction not dividing $\ell$, the reduction $\tilde{A}_\mathfrak{p}$ over $\mathbb{F}_\mathfrak{p}$ also admits an $\ell$-isogeny, rational over $\mathbb{F}_\mathfrak{p}$. One may ask the converse question: \begin{center} \emph{ If $A$ admits a rational $\ell$-isogeny locally at every prime of good reduction away from $\ell$, must $A$ admit a $K$-rational $\ell$-isogeny? } \end{center} If the answer to this question for a given pair $(A/K,\ell)$ is `No', we refer to $\ell$ as an exceptional prime for $A$, and refer to $A$ as a \emph{Hasse at $\ell$ variety over $K$}. We think of Hasse at $\ell$ varieties as being counterexamples to a local-global principle for $\ell$-isogenies. This problem has been studied extensively in the case where $A$ is an elliptic curve, starting with the work of Sutherland \cite{Drew} who provided a characterisation of Hasse curves in terms of the \emph{projective mod-$\ell$ Galois image} (whose definition we recall in \Cref{sec:prelims}), and found all such counterexamples in the case when $K = \mathbb{Q}$ (of which there is only one up to isomorphism over $\overline{\mathbb{Q}}$). Cullinan \cite{cullinan2012symplectic} initiated the study of this question in the case of $\dim A = 2$, by identifying the subgroups of $\textup{GSp}_4(\mathbb{F}_\ell)$ that the mod-$\ell$ Galois image of a Hasse at $\ell$ variety must be isomorphic to, and remarked that, while his classification could be used to generate Hasse surfaces over arbitrary base fields, it ``would be interesting to create ``natural'' examples of such surfaces''. 
In this paper we provide the first examples of Hasse at $\ell$ surfaces that are simple over $\mathbb{Q}$, by studying the abelian varieties $A_f$ associated to weight~$2$ newforms $f$ via the Eichler-Shimura construction: \begin{example}\label{example:cm_hasse} Consider the weight two newform of level $\Gamma_1(189)$ and Nebentypus the non-primitive Dirichlet character modulo $189$ of conductor $21$ sending the two generators $29$ and $136$ of the group $(\ZZ/189\ZZ)^\times$ to $-1$ and $\zeta_6^5$ respectively, where $\zeta_6 := e^{2\pi i/6}$, and whose first few Fourier coefficients are as follows: \[ f(z) = q + (-2 + 2\zeta_6)q^4 + (-1 + 3\zeta_6)q^7 + O(q^{10}). \] Then $A_f$ is a Hasse at $7$ abelian surface over $\mathbb{Q}$. This $f$ has label \href{https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/189/2/p/a/}{189.2.p.a} in the \href{https://www.lmfdb.org/}{LMFDB} \cite{lmfdb}. \end{example} This $f$ is a CM newform, having complex multiplication by the field $\mathbb{Q}(\sqrt{-3})$, and as such $A_f$ necessarily decomposes over $\overline{\mathbb{Q}}$ as the square of a CM elliptic curve. Our next example provides an instance of an absolutely simple Hasse surface. \begin{example}\label{example:abs_simple_hasse} Consider the weight two newform of level $\Gamma_0(7938)$ with Fourier coefficient field $\mathbb{Q}(\sqrt{2})$, whose first few coefficients are as follows ($\beta = \sqrt{2}$): \[ f(z) = q - q^2 + q^4 - q^8 - 9\beta q^{11} + O(q^{12}). \] Then $A_f$ is an absolutely simple Hasse at $7$ abelian surface over $\mathbb{Q}$. This $f$ has label \href{https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/7938/2/a/bj/}{7938.2.a.bj} in the LMFDB. \end{example} Although the $f$ in this example does not have CM, one may show that it is congruent to a CM newform modulo $7$.
We show that this is to be expected: \begin{theorem}\label{thm:cm_congruence} Let $f$ be a weight $2$ newform such that the corresponding modular abelian variety $A_f$ is Hasse at some prime $\ell$ which splits completely in the ring of integers of the Hecke eigenvalue field of $f$. Then $f$ is congruent modulo $\ell$ to a newform with complex multiplication. \end{theorem} The structure of the paper is as follows. In \Cref{sec:prelims} we survey previous and related work on this question, including Sutherland's group-theoretic reformulation of Hasse at $\ell$ varieties. \Cref{sec:decomposable_abelian_surfaces} studies the modular abelian varieties $A_f$ indicated above, yielding sufficient conditions on $f$ to ensure that $A_f$ is Hasse. Sections \ref{sec:find_examples_using_code} and \ref{sec:abs_simple_hasse} explain the algorithmic ingredients required to find examples of newforms satisfying the sufficient conditions, including the two examples given above. Finally in \Cref{sec:cm_congruence} we prove \Cref{thm:cm_congruence}. \section{Background and Preliminaries}\label{sec:prelims} For an abelian variety $A$ over a number field $K$, the absolute Galois group $G_K := \Gal(\overline{K}/K)$ acts on the $\ell$-torsion subgroup $A(\overline{K})[\ell]$, yielding the mod-$\ell$ representation \[ \bar{\rho}_{A,\ell} : G_K \to \textup{GL}_{2d}(\mathbb{F}_\ell),\] whose image $G_{A,\ell} := \im \bar{\rho}_{A,\ell}$ is well-defined up to conjugacy; we refer to $G_{A,\ell}$ as \emph{the mod-$\ell$ image of $A$}. We let $H_{A,\ell} := G_{A,\ell}$ modulo scalars, which we refer to as \emph{the projective mod-$\ell$ image of $A$}, viewed as a subgroup of $\textup{PGL}_{2d}(\mathbb{F}_\ell)$. If $A$ admits a polarisation of degree coprime to $\ell$, then the symplectic property of the Weil pairing on $A[\ell]$ ensures that $G_{A,\ell}$ is contained in $\textup{GSp}_{2d}(\mathbb{F}_\ell)$, and consequently that $H_{A,\ell} \subseteq \textup{PGSp}_{2d}(\mathbb{F}_\ell)$. 
Henceforth we will assume that $A$ is principally polarised. By an $\ell$-isogeny $\phi : A \to A'$ of principally polarised abelian varieties of dimension $d$ defined over a field $k$ with $\mathrm{char}(k) \neq \ell$ we mean a surjective morphism with kernel isomorphic to $\ZZ/\ell\ZZ$. We note that these isogenies are \emph{not} compatible with the principal polarisations of $A$ and $A'$, since this kernel is not a maximal isotropic subgroup of $A[\ell]$ with respect to the $\ell$-Weil pairing. To consider isogenies that \emph{are} compatible with the polarisations, one would need to consider certain isogenies with kernel isomorphic to $(\ZZ/\ell\ZZ)^d$, often denoted as $(\ell,\ldots,\ell)$-isogenies (see e.g. \cite{costello2020supersingular}). One may well formulate a local-global question for such isotropic isogenies, and the results in \cite{orr2017compatibility} are likely to be relevant here; but we do not address this problem in the present paper. Sutherland's characterisation of Hasse curves mentioned in the Introduction is expressed in terms of the canonical faithful action of $H_{A,\ell}$ on the projective space $\mathbb{P}^{2d-1}(\mathbb{F}_\ell)$. Following our previous paper \cite{BC13}, given a subgroup $H$ of $\textup{PGSp}_{2d}(\mathbb{F}_\ell)$, we say that $H$ is \emph{Hasse} if its action on $\mathbb{P}^{2d-1}(\mathbb{F}_\ell)$ satisfies the following two properties: \begin{itemize} \item every element $h \in H$ fixes a point in $\mathbb{P}^{2d-1}(\mathbb{F}_\ell)$; \item there is no point in $\mathbb{P}^{2d-1}(\mathbb{F}_\ell)$ fixed by the whole of $H$. \end{itemize} We also refer to a subgroup $G$ of $\textup{GSp}_{2d}(\mathbb{F}_\ell)$ as Hasse if its image modulo scalars is Hasse.
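For small groups, both defining properties can be checked by brute force. The Python sketch below (an illustrative computation of ours, unrelated to the algorithms discussed later in the paper) builds, for $d=1$ and $\ell=7$, the dihedral group of order six inside $\textup{PGL}_2(\mathbb{F}_7)$ generated by the images of $\mathrm{diag}(2,1)$ and the antidiagonal involution, a subgroup of the normaliser of the split Cartan of the type appearing in Sutherland's classification, and verifies that it is Hasse: every element fixes a point of $\mathbb{P}^1(\mathbb{F}_7)$, yet no point is fixed by the whole group.

```python
p = 7  # the prime ell; P^1(F_7) has 8 points

def mul(X, Y):
    # product in GL_2(F_p), matrices encoded as nested tuples ((a, b), (c, d))
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def proj(X):
    # canonical representative of X modulo scalars: first nonzero entry scaled to 1
    s = next(v for row in X for v in row if v)
    sinv = pow(s, p - 2, p)
    return tuple(tuple(v*sinv % p for v in row) for row in X)

def generate(gens):
    # closure of the projective subgroup generated by gens
    G = {proj(g) for g in gens}
    while True:
        new = {proj(mul(X, Y)) for X in G for Y in G} - G
        if not new:
            return G
        G |= new

def fixed_points(X):
    # points of P^1(F_p), normalized as (1, y) or (0, 1), that X fixes
    pts = [(1, y) for y in range(p)] + [(0, 1)]
    def act(v):
        w = ((X[0][0]*v[0] + X[0][1]*v[1]) % p,
             (X[1][0]*v[0] + X[1][1]*v[1]) % p)
        s = w[0] if w[0] else w[1]
        sinv = pow(s, p - 2, p)
        return (w[0]*sinv % p, w[1]*sinv % p)
    return {v for v in pts if act(v) == v}

A = ((2, 0), (0, 1))   # projective order 3, since 2^3 = 1 (mod 7)
B = ((0, 1), (1, 0))   # involution swapping the two Cartan eigenlines
H = generate([A, B])   # dihedral group of order 6 modulo scalars

every_element_fixes = all(fixed_points(X) for X in H)
no_common_fixed_point = not set.intersection(*[fixed_points(X) for X in H])
```

Here the diagonal elements fix only the two Cartan eigenlines $[1:0]$ and $[0:1]$, while the antidiagonal elements fix the square roots of $1$, $2$ and $4$ in $\mathbb{F}_7$; since $B$ swaps the eigenlines, no point is fixed by everything.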
The following result is then used by Sutherland in the case of $\dim A = 1$: the details of the general case are entirely analogous, and may be found spelled out in \cite{BanThesis}, Section 2.2: \begin{proposition}[Sutherland]\label{prop:group_theoretic_reformulation} An abelian variety $A/K$ is Hasse at $\ell$ if and only if $H_{A,\ell}$ is Hasse. \end{proposition} In the case $\dim A = 1$, it is easy to show that no subgroup of $\textup{PGL}_{2}(\mathbb{F}_2)$ is Hasse, so for elliptic curves the prime $2$ is never an exceptional prime. For an odd prime $\ell$, define $\ell^\ast := +\ell$ if $\ell \equiv 1 \Mod{4}$, and $\ell^\ast := -\ell$ otherwise. Sutherland provides necessary conditions for an elliptic curve $E$ over a number field $K$ to be Hasse at an odd prime $\ell$, under the assumption that $\sqrt{\ell^\ast} \notin K$, which is equivalent to the determinant of the projective representation $\mathbb{P}\bar{\rho}_{E,\ell}$ being surjective (see Lemma~2.1 in \cite{BC13}). These conditions were shown to be sufficient in Section 7 of \cite{BC13}. In the following Proposition, by $D_{2n}$ we mean the dihedral group of order $2n$. \begin{proposition}[\cite{Drew}, \cite{BC13}]\label{prop:hasse_elliptic} Let $\ell$ be an odd prime, $K$ a number field, and assume that $\sqrt{\ell^\ast} \notin K$. Then an elliptic curve $E$ over $K$ is Hasse at $\ell$ if and only if the following hold: \begin{enumerate} \item the projective mod-$\ell$ image of $E$ is isomorphic to $D_{2n}$, where $n > 1$ is an odd divisor of $(\ell-1)/2$; \item $\ell \equiv 3 \Mod{4}$; \item the mod-$\ell$ image of $E$ is contained in the normaliser of a split Cartan subgroup of $\textup{GL}_2(\mathbb{F}_\ell)$; \item $E$ obtains a rational $\ell$-isogeny over $K(\sqrt{\ell^\ast})$. \end{enumerate} \end{proposition} \begin{remark} For the converse of the above Proposition, only conditions (1) and (2) are required; together these imply conditions (3) and (4). 
\end{remark} \begin{remark} The case of $\sqrt{\ell^\ast} \in K$ was dealt with independently by \cite{BanThesis} and \cite{AnniThesis} (see also \cite{anni2014local}). \end{remark} The property of an elliptic curve $E$ being Hasse at some prime $\ell$ depends only on $j(E)$, provided $j(E) \notin \left\{0,1728\right\}$. Sutherland therefore defines an \emph{exceptional pair} to be a pair $(\ell, j_0)$ of a prime $\ell$ and an element $j_0 \neq 0, 1728$ of a number field $K$ such that there exists a Hasse at $\ell$ curve over $K$ of $j$-invariant $j_0$. Sutherland moreover shows, in the proof of Theorem $2$ in \cite{Drew}, that a Hasse curve cannot have CM if $\ell > 7$; therefore, specialising now to $K = \mathbb{Q}$, elliptic curves with level structure given by (3) above arise as non-trivial points on the modular curve $X_s(\ell)$ (the trivial points being the cusps and CM points). That such points exist only for $\ell \in \left\{2,3,5,7,13\right\}$ follows from the work of Bilu, Parent and Rebolledo \cite{BPR}, although Sutherland was able to deduce the following remarkable result using the earlier work of Parent \cite{parent2005towards}, as well as an explicit study of the modular curve $X_{D_6}(7)$ and its rational points. \begin{theorem}[Sutherland] The only exceptional pair for $\mathbb{Q}$ is \[ \left(7,\frac{2268945}{128}\right).\] \end{theorem} The analogue of \Cref{prop:hasse_elliptic} providing precisely which subgroups of $\textup{PGSp}_4(\mathbb{F}_\ell)$ are Hasse was given by Cullinan \cite{cullinan2012symplectic}. Given a subgroup $H \subseteq \textup{PGSp}_4(\mathbb{F}_\ell)$, let $\pi^{-1}(H)$ denote the pullback of $H$ to $\textup{GSp}_4(\mathbb{F}_\ell)$. \begin{theorem}[Cullinan] A subgroup $H \subseteq \textup{PGSp}_4(\mathbb{F}_\ell)$ is Hasse if and only if $\pi^{-1}(H) \cap \textup{Sp}_4(\mathbb{F}_\ell)$ is isomorphic to one of the groups in \Cref{tab:cullinan}. 
\end{theorem} \begin{table}[htp] \begin{center} \begin{tabular}{|c|c|c|} \hline Type & Group & Condition\\ \hline $\mathcal{C}_2$ & $D_{(\ell-1)/2} \wr S_2$ & None\\ & $\Csplus$ & $\ell \equiv 1$(4)\\ & $(\ell-1)/2.\textup{SL}_2(\mathbb{F}_3).2$ & $\ell \equiv 1$(24)\\ & $(\ell-1)/2.\textup{GL}_2(\mathbb{F}_3).2$ & $\ell \equiv 1$(24)\\ & $(\ell-1)/2.\widehat{S_4}.2$ & $\ell \equiv 1$(24)\\ & $(\ell-1)/2.\textup{SL}_2(\mathbb{F}_5).2$ & $\ell \equiv 1$(60)\\ & $\textup{SL}_2(\mathbb{F}_3) \wr S_2$ & $\ell \equiv 1$(48)\\ & $\widehat{S_4} \wr S_2$ & $\ell \equiv 1$(48)\\ & $\textup{SL}_2(\mathbb{F}_5) \wr S_2$ & $\ell \equiv 1$(120)\\ \hline $\mathcal{C}_6$ & $2^{1+4}_{-}.O_4^{-}(2)$ & $\ell \equiv 1$(120)\\ & $2^{1+4}_{-}.3$ & $\ell \equiv 5$(24)\\ & $2^{1+4}_{-}.5$ & $\ell \equiv 5$(40)\\ & $2^{1+4}_{-}.S_3$ & $\ell \equiv 5$(24)\\ \hline $\mathcal{S}$ & $2.S_6$ & $\ell \equiv 1$(120)\\ & $\textup{SL}_2(\mathbb{F}_5)$ & $\ell \equiv 1$(30)\\ & $\textup{SL}_2(\mathbb{F}_3)$ & $\ell \equiv 1$(24)\\ \hline \end{tabular} \vspace{0.3cm} \caption{\label{tab:cullinan}Hasse subgroups of $\textup{PGSp}_4(\mathbb{F}_\ell)$. See \cite{cullinan2012symplectic} for the group-theoretic notation used in this table.} \end{center} \end{table} At this point we may readily engineer Hasse surfaces over arbitrary number fields. For example, suppose we would like to construct an abelian surface $A$ whose mod-$\ell$ image satisfies $G_{A,\ell} \cap \textup{Sp}_4(\mathbb{F}_\ell) \cong \textup{SL}_2(\mathbb{F}_5)$ for some prime $\ell \equiv 1$ (mod 30); by \Cref{tab:cullinan}, this would give a Hasse surface. We would first take an abelian surface over $\mathbb{Q}$ with absolute endomorphism ring isomorphic to $\ZZ$; a quick search in the LMFDB yields the genus~$2$ curve \href{https://www.lmfdb.org/Genus2Curve/Q/249/a/249/1}{249.a.249.1}: \[ \mathcal{C} : y^2 + (x^3 + 1)y = x^2 + x,\] whose Jacobian variety $A$ has conductor $249$ and $\End_{\overline{\mathbb{Q}}}(A) \cong \ZZ$. 
Serre's Open Image Theorem, which also holds for abelian surfaces with absolute endomorphism ring $\ZZ$ \cite{hall2011open}, ensures that, for all sufficiently large primes $\ell$, we have $G_{A,\ell} \cong \textup{GSp}_4(\mathbb{F}_\ell)$. Moreover, Dieulefait \cite{dieulefait2002explicit} provides an algorithm to determine a bound on the primes of non-maximal image. This algorithm has recently been implemented \cite{galreps} in Sage \cite{sagemath} at an ICERM workshop funded by the Simons collaboration, and for this $A$ we find that any prime $\ell \geq 11$ ensures maximal image. Choose such an $\ell$ which is congruent to $1$ (mod $30$), e.g. $\ell = 31$. We finally base-change $A$ to force $G_{A,\ell} \cap \textup{Sp}_4(\mathbb{F}_\ell) \cong \textup{SL}_2(\mathbb{F}_5)$, using the Galois correspondence. \begin{example} The Jacobian variety of the curve $\mathcal{C}$ above is a Hasse at $31$ surface over the number field $K$ such that $\Gal(\mathbb{Q}(A[31])/K) \cong \textup{SL}_2(\mathbb{F}_5)$. \end{example} \begin{remark} We indicate here other work on this subject. These local-global type questions for abelian varieties go back to Katz in 1980 \cite{katz1980galois}, who studied the analogous local-global question for rational torsion points; for elliptic curves this goes even further back to the exercises in I-1.1 and IV-1.3 in Serre's seminal book \cite{serre1968abelian}. Etropolski \cite{etropolski2015local} considers a local-global question for arbitrary subgroups of $\textup{GL}_2(\mathbb{F}_\ell)$, and Vogt \cite{vogt2020local} generalises the prime-degree-isogeny problem to composite degree isogenies.
Very recently Mayle \cite{mayle} bounds by $\frac{3}{4}$ the density of prime ideals for elliptic curves $E/K$ which do not satisfy either of the ``everywhere-local'' conditions for torsion or isogenies, and Cullinan, Kenney and Voight study a probabilistic version of the torsion local-global principle for elliptic curves \cite{cullinan2020probabilistic}. \end{remark} \section{Split modular abelian surfaces which are Hasse} \label{sec:decomposable_abelian_surfaces} The example constructed in the last section raises the question of whether there are Hasse surfaces over $\mathbb{Q}$, pre-empting this somewhat contrived base-change method. In approaching this question, we establish the following lemma. \begin{lemma}\label{main_result} Let $A$ be an abelian surface over a number field $K$ whose mod-$\ell$ Galois image $G_{A,\ell}$ is contained in the direct sum of two subgroups $G, G'$ of $\textup{GL}_2(\mathbb{F}_\ell)$: \[ G_{A,\ell} \subseteq \begin{pmatrix} G & 0 \\ 0 & G' \end{pmatrix}. \] If one of $\left\{G,G'\right\}$ is Hasse, and the other is not contained in a Borel subgroup, then $A$ is a Hasse at $\ell$ surface over $K$. \end{lemma} \begin{proof} Let $H_{A,\ell}, H, H'$ respectively denote the images of $G_{A,\ell}, G, G'$ modulo scalar matrices. By \Cref{prop:group_theoretic_reformulation}, we need to establish that $H_{A,\ell}$ is a Hasse subgroup. Since the mod-$\ell$ Galois representation in this case decomposes as a direct sum of two subrepresentations, we write $V, V'$ such that $A[\ell] = V \oplus V'$. We first show that $H_{A,\ell}$ does not fix a point in $\mathbb{P}(A[\ell])$. If it did, then that point lifts to a point $w \in A[\ell]$. We may write $w = v \oplus v'$, with $v \in V$, $v' \in V'$. Since at least one of $v, v'$ must be non-zero, we suppose that $v$ is non-zero. Then $H$ must fix the image of $v$ in $\mathbb{P}(V)$, which is not allowed under the hypotheses on $\left\{G,G'\right\}$. 
Without loss of generality we suppose that $H$ is Hasse. Each element of $H_{A,\ell}$ may be written as $y = \begin{pmatrix} h & 0 \\ 0 & h' \end{pmatrix}$ for $h \in H$, $h' \in H'$. Since $h$ fixes a point of $\mathbb{P}(V)$, the element $y$ fixes the corresponding point of $\mathbb{P}(A[\ell])$; thus every element of $H_{A,\ell}$ fixes a point. \end{proof} An immediate corollary provides an example of a Hasse surface over $\mathbb{Q}$, using Sutherland's $j$-invariant defined above: \begin{corollary} Let $E/\mathbb{Q}$ be any elliptic curve with $j$-invariant $\frac{2268945}{128}$. Then the abelian surface $E^2$ is Hasse at $7$ over $\mathbb{Q}$.\qed \end{corollary} This prompts the question of whether there exist \emph{simple} Hasse surfaces over $\mathbb{Q}$. We provide an affirmative answer to this question by restricting to the class of \emph{modular abelian surfaces} over $\mathbb{Q}$, whose definition we now recall. Let $f$ be a weight~$2$ cuspidal newform of level $\Gamma_1(N)$ for some $N > 1$, with Fourier coefficient field $K_f$, a number field whose ring of integers we will denote by $\mathcal{O}_f$. In the course of constructing the $\ell$-adic Galois representations of $f$, Shimura (Theorem 7.14 in \cite{shimura1971introduction}) defined the abelian variety $A_f$ associated to $f$, whose dimension is $[K_f:\mathbb{Q}]$. It is a theorem of Ribet (Corollary 4.2 in \cite{ribet1980twists}) that these abelian varieties are simple over $\mathbb{Q}$, and that $K_f$ is the full algebra of endomorphisms of $A_f$ which are defined over $\mathbb{Q}$. In this paper we refer to these varieties $A_f$ as \emph{modular abelian varieties}, and in the case where $[K_f:\mathbb{Q}] = 2$, we call them \emph{modular abelian surfaces}. (The reader is warned however that the adjective \emph{modular} is used by different authors throughout the literature to mean different things.)
Furthermore, these varieties are of \textbf{$\textup{GL}_2$-type}: the $\ell$-adic Tate module, for each $\ell$, splits as a direct sum \[ T_\ell A_f = \bigoplus_{\lambda | \ell}T_{f,\lambda}, \] where each $T_{f,\lambda}$ is a free module of rank~$2$ over the $\lambda$-adic completion $\mathcal{O}_{f,\lambda}$ of $\mathcal{O}_f$. (See Exercise 9.5.2 in \cite{diamond2005first}; to obtain the integrality one may need to replace $T_{f,\lambda}$ with a similar representation, as explained in the discussion immediately preceding Definition 9.6.10 in \emph{loc. cit.}. This decomposition is also explained in Section~$2$ of \cite{ribet1977galois}). This formula allows us to consider the $\ell$-adic representation $T_\ell A_f$ as a direct sum of the $2$-dimensional $\lambda$-adic representations associated to $f$. Consider the case in which $K_f$ is a quadratic field, and $(\ell) = \lambda\lambda'$ splits in $\mathcal{O}_f$. By taking the reduction mod $\ell$ of the above formula, we obtain a splitting \[ A_f[\ell] = \overline{T}_{f,\lambda} \oplus \overline{T}_{f,\lambda'}\] of the $4$-dimensional $G_\mathbb{Q}$-representation $A_f[\ell]$ as a sum of two $2$-dimensional representations, all considered as representations over $\mathbb{F}_\ell$. Thus $G_{A_f,\ell}$ is contained in the block sum of two subgroups $G,G'$ of $\textup{GL}_2(\mathbb{F}_\ell)$: \[ G_{A_f,\ell} \subseteq \begin{pmatrix} G & 0 \\ 0 & G' \end{pmatrix}. \] We choose $G$ and $G'$ minimally; i.e., $G$ is the image of $G_\mathbb{Q}$ acting on $\overline{T}_{f,\lambda}$, and $G'$ the image of $G_\mathbb{Q}$ acting on $\overline{T}_{f,\lambda'}$. We denote by $H$ and $H'$ the corresponding projective images, as subgroups of $\textup{PGL}_2(\mathbb{F}_\ell)$. 
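In practice, the splitting condition $(\ell) = \lambda\lambda'$ is easy to test: for an odd prime $\ell$ unramified in the quadratic field $K_f$ of fundamental discriminant $D$, the prime $\ell$ splits if and only if the Legendre symbol $\left(\frac{D}{\ell}\right)$ equals $1$. A minimal Python sketch (via Euler's criterion; the discriminants below are those of $\mathbb{Q}(\sqrt{-3})$ and $\mathbb{Q}(\sqrt{2})$, the coefficient fields appearing in the examples of the following sections):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def splits(disc, ell):
    """True iff the odd prime ell (not dividing disc) splits in the
    quadratic field of fundamental discriminant disc."""
    return legendre(disc, ell) == 1

print(splits(-3, 7))  # True: 7 splits in Q(sqrt(-3))
print(splits(8, 7))   # True: 7 splits in Q(sqrt(2))
```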
We may therefore state sufficient conditions for a modular abelian surface $A_f$ to be Hasse, as a corollary of \Cref{prop:hasse_elliptic} and \Cref{main_result} above: \begin{corollary}\label{cor:suff_conds_for_hasse} Let $f$ be a weight $2$ newform of level $\Gamma_1(N)$ with Fourier coefficient field $K_f$. Suppose: \begin{itemize} \item $K_f$ is a quadratic field; \item $\ell \geq 7$ is a prime congruent to $3 \Mod{4}$ which splits in $\mathcal{O}_f$ as $(\ell) = \lambda\lambda'$; \item among the projective mod-$\lambda$ and mod-$\lambda'$ images, one is isomorphic to $D_{2n}$, where $n > 1$ is an odd divisor of $(\ell-1)/2$, and the other is not contained in a Borel subgroup. \end{itemize} Then $A_f$ is Hasse at $\ell$ over $\mathbb{Q}$.\qed \end{corollary} \begin{remark} We do not deal with the case of $\ell$ remaining inert or ramifying in $\mathcal{O}_f$ in this paper. This would likely involve a group-theoretic investigation of the Hasse subgroups of $\textup{PGL}_2(\mathbb{F}_{\ell^n})$. \end{remark} In the next section we apply an algorithm of Anni \cite{AnniThesis} which determines when a weight $k$ newform has projective dihedral image, in order to find an $f$ satisfying the assumptions in the above corollary. We end this section with a result which gives sufficient conditions on $f$ to ensure that the mod-$\lambda$ and mod-$\lambda'$ images are isomorphic. This enables us, in certain situations, to consider the image for only one of the prime ideals above $\ell$. Recall that the Fourier coefficient field of a newform $f$ is either totally real, or a CM field. \begin{proposition}\label{prop:iso_image} Let $f$ be a weight $2$ newform of level $\Gamma_1(N)$ and Fourier coefficient field $K_f$. Suppose that $K_f$ is an imaginary quadratic field, and $\ell$ splits in $\mathcal{O}_f$ as $(\ell) = \lambda\lambda'$. Then the projective mod-$\lambda$ and mod-$\lambda'$ images are isomorphic.
\end{proposition} \begin{proof} Denoting by $\epsilon$ the Nebentypus of $f$, observe that we have the following relation: \[ \bar{f} = f \otimes \epsilon^{-1}, \] where the bar denotes complex conjugation (see e.g. \S~1 or the proof of Proposition~3.2 in \cite{ribet1977galois}). Since $K_f$ is imaginary, this gives a non-trivial element in the group of inner twists of $f$, which sends $f$ to its Galois conjugate, swaps $\lambda$ and $\lambda'$, and induces an isomorphism $\rho_{f,\lambda'} \cong \rho_{\bar{f},\lambda}$. We conclude by observing that $f$ and $f \otimes \epsilon^{-1}$ have isomorphic projective mod-$\lambda$ image. \end{proof} \begin{remark} In the case that $f$ does not have CM, the assumption in the above proposition that $K_f$ is a CM field is equivalent to the assumption that the Nebentypus of $f$ is not trivial (c.f. Example~3.7 in \cite{ribet1980twists}). \end{remark} \section{Constructing examples using Anni's thesis}\label{sec:find_examples_using_code} Section 10.1 of \cite{AnniThesis} describes an algorithm (Algorithm 10.1.3 in \emph{loc. cit.}) to determine whether or not a weight $k$ newform has projective dihedral image modulo a prime ideal $\lambda$ of the ring of integers $\mathcal{O}_f$ of $K_f$. The main idea can be encapsulated in the following: \begin{proposition}[Anni, Ribet, Serre] Let $f$ be a weight $k$ newform of level $N$, and let $\rho$ be the mod-$\lambda$ Galois representation associated to $f$. Assume that $\rho$ is irreducible. Then the following are equivalent: \begin{enumerate} \item $\rho$ has projective dihedral image; \item there exists a quadratic character $\alpha$ of modulus $q$ such that $\alpha \otimes \rho \cong \rho$, where $q$ is the product of all primes dividing $N$ such that their square divides $N$; \item there exists a quadratic field $K$, and characters $\chi, \chi'$ on $G_K$, such that the restriction of $\rho$ to $G_K$ is reducible: \[ \rho|_{G_K} = \chi \oplus \chi'. 
\] \end{enumerate} Moreover, if these hold, then the order of the dihedral group is $2n$, where $n$ is the order of $\chi^{-1}\chi'$. \end{proposition} We refer to the relevant results in the literature for more details: in chronological order, Proposition 4.4 and Theorem 4.5 in \cite{ribet1977galois}, Section 7 of \cite{serre1977modular}, and Section 10.1 of \cite{AnniThesis}. Anni's algorithm then consists in checking whether one of the finitely many Dirichlet characters as described in (2) above satisfies $\alpha \otimes \rho \cong \rho$, noting that only the primes up to the Sturm bound need to be checked. At this point, if Anni's algorithm yields a quadratic character for such a newform $f$, then either it has projective dihedral image, \emph{or} the representation is reducible, which would mean it has cyclic image. This reducible case is equivalent to $f$ being congruent mod-$\ell$ to an Eisenstein series of the same weight and level, which may be checked by computing the finitely many normalised Eisenstein series. If the representation is indeed dihedral, then we compute the characteristic polynomials of Frobenius at several rational primes to determine its order. We implemented this algorithm in Sage (\verb|find_dihedral.sage| in \cite{dihedralnewforms2020}), and ran it on all two-dimensional weight-two newforms $f$ (those with $[K_f:\mathbb{Q}] = 2$) of level $\leq 189$, for which the prime $7$ splits in $\mathcal{O}_f$. The results obtained are summarised in \Cref{tab:dihedral-newforms}. We found that all of the forms had CM, and that the projective images in all of these cases were isomorphic for each of the prime ideals above $7$, as is necessarily the case in light of \Cref{prop:iso_image}. To save space in the table, we note here that the Fourier coefficient field of all of these newforms is the quadratic field $\mathbb{Q}(\sqrt{-3})$, and remind the reader that by $D_n$ we mean the dihedral group \emph{of order $n$} (and not $2n$).
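The Sturm bound invoked in Anni's algorithm is explicit. One common normalisation for $M_k(\Gamma_0(N))$ is $\lfloor k\,[\textup{SL}_2(\mathbb{Z}):\Gamma_0(N)]/12 \rfloor$, where the index is $N\prod_{p \mid N}(1+1/p)$; the $\Gamma_1(N)$ bound relevant for forms with nebentypus is analogous (and larger). A sketch of the $\Gamma_0(N)$ case, offered only as an illustration of the size of the computation:

```python
from fractions import Fraction

def prime_divisors(n):
    """Distinct prime divisors of n, by trial division."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def sturm_bound_gamma0(N, k):
    """floor(k * [SL_2(Z) : Gamma_0(N)] / 12): the number of initial
    Fourier coefficients determining a form in M_k(Gamma_0(N))."""
    index = Fraction(N)
    for p in prime_divisors(N):
        index *= Fraction(p + 1, p)
    return int(k * index // 12)

print(sturm_bound_gamma0(63, 2))  # index 96, so the bound is 16
print(sturm_bound_gamma0(49, 2))  # index 56, so the bound is 9
```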
\begin{table}[htp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline LMFDB Label & CM field & $q$-expansion & $\mathbb{P}\rho(G_\mathbb{Q})$\\ \hline 49.2.c.a & $\mathbb{Q}(\sqrt{-7})$ & $q - \zeta_6q^2 + (1 - \zeta_6)q^4 - 3q^8 + 3\zeta_6q^9 + O(q^{10})$ & $C_3$\\ \hline 63.2.e.a & $\mathbb{Q}(\sqrt{-3})$ & $q + 2\zeta_6q^4 + (1 - 3\zeta_6)q^7 + O(q^{10})$ & $D_4$\\ \hline 81.2.c.a & $\mathbb{Q}(\sqrt{-3})$ & $q + 2\zeta_6q^4 + (1 - \zeta_6)q^7 + O(q^{10})$ & $D_{12}$\\ \hline 117.2.g.a & $\mathbb{Q}(\sqrt{-3})$ & $q + 2\zeta_6q^4 + \zeta_6q^7 + O(q^{10})$ & $D_{12}$\\ \hline 117.2.q.b & $\mathbb{Q}(\sqrt{-3})$ & $q - 2\zeta_6q^4 + (6 - 3\zeta_6)q^7 + O(q^{10})$ & $D_{12}$\\ \hline 189.2.c.a & $\mathbb{Q}(\sqrt{-3})$ & $q + 2q^4 + (-1 + 3\zeta_6)q^7 + O(q^{10})$ & $D_6$\\ \hline 189.2.e.b & $\mathbb{Q}(\sqrt{-3})$ & $q + (2 - 2\zeta_6)q^4 + (1 - 3\zeta_6)q^7 + O(q^{10})$ & $D_{12}$\\ \hline 189.2.p.a & $\mathbb{Q}(\sqrt{-3})$ & $q + (-2 + 2\zeta_6)q^4 + (-1 + 3\zeta_6)q^7 + O(q^{10})$ & $D_6$\\ \hline \end{tabular} \vspace{0.3cm} \caption{\label{tab:dihedral-newforms}Newforms arising as output from Anni's algorithm. The two forms with projective image $D_6$ yield Hasse surfaces over $\mathbb{Q}$ at $7$.} \end{center} \end{table} For the newforms in the table whose level is prime to $7$, we verified the irreducibility of the mod-$\lambda$ Galois representation with Corollary~2.2 of \cite{dieulefait2001newforms}: if it was reducible, then there would exist a Dirichlet character $\chi$ of conductor dividing the level and valued in $\mathbb{F}_7^\times$ such that, for all primes $p$ away from the level, we would have \[ a_p \equiv \chi(p) + p\frac{\epsilon(p)}{\chi(p)} \Mod{\lambda},\] where $\epsilon$ is the Nebentypus of $f$. Since there are only finitely many such $\chi$, we can test all possible candidates, and find that none of them satisfy all of these congruences, whence the representation must be irreducible.
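Of the projective images appearing in \Cref{tab:dihedral-newforms}, only $D_6$ can give a Hasse surface at $7$: by conditions (1) and (2) of \Cref{prop:hasse_elliptic}, the image must be dihedral of order $2n$ with $n > 1$ an odd divisor of $(\ell-1)/2$, and $\ell \equiv 3 \Mod{4}$. This numerical condition is mechanical to check; a small sketch (here $D_{2n}$ denotes the group of order $2n$, matching the convention of the proposition rather than that of the table):

```python
def admissible_dihedral_orders(ell):
    """Group orders 2n of projective dihedral images D_{2n} compatible
    with conditions (1)-(2) of the proposition at the odd prime ell;
    empty unless ell = 3 (mod 4)."""
    if ell % 4 != 3:
        return []
    m = (ell - 1) // 2
    return [2 * n for n in range(3, m + 1, 2) if m % n == 0]

print(admissible_dihedral_orders(7))   # [6]: only D_6, ruling out D_4 and D_12
print(admissible_dihedral_orders(13))  # []: 13 = 1 (mod 4)
```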
The last example in the above table is given in \Cref{example:cm_hasse}. Since it is a CM form, the corresponding abelian variety $A_f$ decomposes over $\overline{\mathbb{Q}}$ as the square of a CM elliptic curve $E$. We may quickly glean further information about $E$ from the \href{https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/189/2/p/a/}{homepage} of this form; in particular, from the ``Related objects'' section we find that the decomposition occurs over the field $\mathbb{Q}(\sqrt{-3})$, and that $E$ is a curve with $j_E = 0$, whose mod-$7$ Galois image is a split Cartan subgroup (and not its normaliser). By Sutherland's work (\Cref{prop:hasse_elliptic}), we may conclude that $E$ is not a Hasse curve. \section{Finding absolutely simple Hasse modular abelian surfaces}\label{sec:abs_simple_hasse} We first collect some facts about absolutely simple modular abelian varieties from the literature. \begin{proposition}[Cremona, Jordan, Ribet] \label{lem:abs_endo} Let $f$ be a weight $2$ newform of level $\Gamma_1(N)$ such that the corresponding modular abelian surface $A_f$ is Hasse at some prime $\ell$ which splits completely in $\mathcal{O}_f$. Assume that $f$ is not a CM newform. \begin{itemize} \item If $f$ does not have inner twists, then $A_f$ is absolutely simple. \item If $f$ does have inner twists, then $A_f$ is absolutely simple if and only if $\End_{\overline{\mathbb{Q}}}^0(A_f)$ is an indefinite quaternion division algebra with centre $\mathbb{Q}$ of degree $4$ over $\mathbb{Q}$. Moreover, if this holds, then this algebra is realised over a totally complex field, $A_f$ has potential good reduction everywhere, and for every prime $p$ dividing $N$, we have $\ord_p(N) \geq 2$. \end{itemize} \end{proposition} \begin{proof} Write $\mathcal{X} = \End_{\overline{\mathbb{Q}}}^0(A_f)$.
We have the following facts: \begin{enumerate} \item the centre of $\mathcal{X}$ is a subfield $F$ of $K_f$, and $\mathcal{X} \cong M_n(\cdot)$, where $\cdot$ is either $F$, or else an indefinite quaternion division algebra over $F$ of dimension $t^2$ over $F$, where $t$ is the Schur index of $\mathcal{X}$ (Proposition 5.2 in \cite{Ribet2004}); \item the degree of $\mathcal{X}$ over $\mathbb{Q}$ is $2[K_f:F]$ (Theorem 5.1 in \cite{ribet1980twists}); \item $F$ is a totally real number field, and $\Gal(K_f/F)$ is the group of inner twists of $f$ (Corollary 5.4 in \cite{Ribet2004}); \item $[K_f:F] = nt$ (Proposition 5.2 in \cite{Ribet2004}). \end{enumerate} If $f$ does not have inner twists, then $\Gal(K_f/F)$ is trivial, and so $n=1$; i.e., $A_f$ is absolutely simple. If $f$ does have inner twists, then we have $2 = nt$, so $n = 1 \Leftrightarrow t=2$; i.e., $A_f$ is absolutely simple if and only if $\mathcal{X}$ is an indefinite quaternion division algebra over $F$ of degree $4$ over $\mathbb{Q}$. The statements about the endomorphisms being realised over a totally complex field, and $A_f$ having potential good reduction everywhere, follow from the observation that $A_f/K$, when base-changed to the field $K$ over which all endomorphisms are defined, satisfies the definition of a \emph{fake elliptic curve}, and thus follow from the known properties of these objects; see e.g. Section 4 of \cite{halukandsamir}, who attribute this to Jordan (Section 3 in \cite{jordan1986points}). The statement about the valuations of primes dividing $N$ follows from Theorem 3 in \cite{cremona1992abelian}. \end{proof} One could in principle run the algorithm explained in \Cref{sec:find_examples_using_code} on all non-CM newforms with no non-trivial inner twists to furnish an example. However, for each such level $N$, the implementation \verb|find_dihedral.sage| in \emph{loc. 
cit.} first constructs the entire space of newforms $S_2(\Gamma_1(N))^{new}$, and thereafter takes only those of dimension~$2$; as such, it is very inefficient. We therefore implemented a faster approach in \verb|find_simple_dihedral_with_api.sage| in \emph{loc. cit.}, which refactors the main algorithms in \verb|find_dihedral.sage| to take as input not \texttt{Newform} objects, but rather lists of Fourier coefficients at prime values of newforms. A list of the newform labels to be checked is generated from the LMFDB, with the following parameters: \begin{itemize} \item Dimension $2$; \item No CM; \item Inner twist count $1$. \end{itemize} For each label in this list, the Fourier coefficients $a_p$ for prime $p$ are obtained with a call to the \href{http://www.lmfdb.org/api/}{LMFDB API}. Running the refactored algorithm to find dihedral newforms on all 15,838 candidate newforms in the LMFDB takes about half an hour on an old laptop. The results obtained are summarised in \Cref{tab:abs-simple-dihedral-newforms}. All the forms have Fourier coefficient field $\mathbb{Q}(\sqrt{2})$; we write $\beta = \sqrt{2}$. The projective images are verified as before, computing the orders of root quotients of the characteristic polynomials of Frobenius at several primes. This in particular allows one to rule out reducibility of the representation, by showing that the distribution of orders is inconsistent with a cyclic group. Unlike in \Cref{sec:find_examples_using_code}, the projective images at the two prime ideals above $7$ are not isomorphic; we provide the dihedral image, which occurs at the prime ideal given in the table. For the other prime ideal not given, where the algorithm returns that it does not have dihedral image, one readily finds a prime $p$ such that the characteristic polynomial of $\Frob_p$ is irreducible over $\mathbb{F}_7$, and hence the image is not contained in a Borel subgroup, which is sufficient for our purposes from \Cref{cor:suff_conds_for_hasse}. 
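The ``not contained in a Borel subgroup'' check in the last sentence amounts to exhibiting a single Frobenius whose characteristic polynomial $x^2 - a_p x + p\,\epsilon(p)$ is irreducible over $\mathbb{F}_7$, i.e. has discriminant a quadratic non-residue mod $7$ (for the trivial-nebentypus forms in the table, the determinant at $p$ is simply $p$). A sketch of this test; the trace and determinant values below are purely illustrative, not taken from the table:

```python
def charpoly_irreducible(trace, det, ell=7):
    """True iff x^2 - trace*x + det is irreducible over F_ell (ell an odd
    prime), i.e. the discriminant is a quadratic non-residue mod ell."""
    disc = (trace * trace - 4 * det) % ell
    return disc != 0 and pow(disc, (ell - 1) // 2, ell) != 1

# Illustrative values: trace 1, determinant 3 gives discriminant
# 1 - 12 = -11 = 3 (mod 7), a non-residue, hence irreducible.
print(charpoly_irreducible(1, 3))  # True
print(charpoly_irreducible(2, 1))  # False: discriminant 0, i.e. (x - 1)^2
```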
\begin{table}[htp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline LMFDB Label & $q$-expansion & $\mathbb{P}\rho(G_\mathbb{Q})$ & Prime ideal \\ \hline 7938.2.a.bj & $q - q^2 + q^4 - q^8 - 9\beta q^{11} + O(q^{12})$ & $D_6$ & $(1 - 2\beta)$\\ \hline 7938.2.a.bk & $q - q^2 + q^4 - q^8 + 3\beta q^{11} + O(q^{12})$ & $D_6$ & $(1 + 2\beta)$\\ \hline 7938.2.a.bp & $q + q^2 + q^4 + q^8 + 9\beta q^{11} + O(q^{12})$ & $D_6$ & $(1 - 2\beta)$\\ \hline 7938.2.a.bq & $q + q^2 + q^4 + q^8 + 9\beta q^{11} + O(q^{12})$ & $D_6$ & $(1 - 2\beta)$\\ \hline 9099.2.a.e & $q - 2q^4 + (-3 - \beta)q^5 + (-2 + 2\beta)q^7 + O(q^{12})$ & $D_{12}$ & $(1 - 2\beta)$\\ \hline 9099.2.a.g & $q - 2q^4 + (3 + \beta)q^5 + (-2 + 2\beta)q^7 + O(q^{12})$ & $D_{12}$ & $(1 - 2\beta)$\\ \hline \end{tabular} \vspace{0.3cm} \caption{\label{tab:abs-simple-dihedral-newforms}Newforms arising as output from Anni's algorithm using the LMFDB API. The three forms with projective image $D_6$ yield absolutely simple Hasse surfaces over $\mathbb{Q}$ at $7$.} \end{center} \end{table} The first example in the above table is \Cref{example:abs_simple_hasse} from the Introduction, which yields a Hasse at $7$ surface by \Cref{cor:suff_conds_for_hasse}. \section{Modular Hasse surfaces are congruent to CM newforms} \label{sec:cm_congruence} In this section we prove \Cref{thm:cm_congruence}. Let $f \in S_2(\Gamma_1(N))$ be a newform, and $\ell$ a prime which splits completely in the ring of integers $\mathcal{O}_f$ of the Fourier coefficient field $K_f$. By the assumption that $A_f$ is Hasse, there exists a prime ideal $\lambda | \ell$ such that the projective image of $\overline{\rho}_{f,\lambda}$ is a Hasse subgroup of $\textup{PGL}_2(\mathbb{F}_\ell)$. Therefore, by \Cref{prop:hasse_elliptic}, we have that $\im \mathbb{P} \overline{\rho}_{f,\lambda}$ is a dihedral group. Henceforth, for ease of notation, write $\overline{\rho}$ for $\overline{\rho}_{f,\lambda}$.
Since $\det \overline{\rho}$ is surjective onto $\mathbb{F}_\ell^\times$, we have that $\det \mathbb{P} \overline{\rho}$ is surjective onto $\left\{\pm 1\right\}$. The kernel of $\det$ is an index-$2$ subgroup of a dihedral group of order $2n$ with $n$ odd, and therefore is cyclic of order $n$. We thus obtain that the kernel of the composition \[ G_\mathbb{Q} \xrightarrow{\mathbb{P} \overline{\rho}} D_{2n} \longrightarrow D_{2n}/C_n \longrightarrow \left\{\pm1\right\} \] corresponds to the imaginary quadratic field $\mathbb{Q}(\sqrt{-\ell})$. We may now apply Th\'{e}or\`{e}me 1.1 of \cite{billerey2018representations} to obtain the existence of a CM newform $g$ such that $\overline{\rho}$ is isomorphic to the mod-$\lambda'$ reduction of the $\lambda'$-adic $G_\mathbb{Q}$-representation $\rho_{g,\lambda'}$, for some prime ideal $\lambda'$ lying over $\ell$ in the Fourier coefficient field of $g$ (which need not be the same as that of $f$). Moreover, from the proof of Corollaire 1.3 in \emph{loc. cit.}, we have that the weight of $g$ is $2$. This yields the desired congruence.\qed \begin{remark} Theorem A in \cite{orr_skorobogatov_2018} tells us that there are only finitely many $\overline{\mathbb{Q}}$-isomorphism classes of abelian surfaces over $\mathbb{Q}$ with complex multiplication. There are therefore only finitely many $\overline{\mathbb{Q}}$-isomorphism classes of Hasse modular abelian surfaces with CM. Since the field of complex multiplication in this case must be an imaginary quadratic field of class number 1 or 2, there are only finitely many such. Note that Gonz\'{a}lez (Theorem 3.2 in \cite{gonzalez2011}) has enumerated the possible pairs $(\End_{\overline{\mathbb{Q}}}^0(A_f), \End_{\mathbb{Q}}^0(A_f))$, for $A_f$ a modular abelian surface with complex multiplication; there are 83 such pairs.
\end{remark} \section{Acknowledgements} This work was supported by a grant from the Simons Foundation (546235) for the collaboration `Arithmetic Geometry, Number Theory, and Computation', through a workshop held virtually at ICERM in June 2020. I am deeply indebted to the organisers of that workshop for extending to me an invitation for participation. I particularly thank John Voight for publicly wondering ``what happens for abelian surfaces'' after Jacob Mayle's talk, which inspired me to return to this subject after a seven-year hiatus. I thank Alex Bartel for comments on an earlier version of this manuscript; Nicolas Billerey for a correspondence which clarified issues surrounding congruences between CM and non-CM newforms; Peter Bruin for a correspondence about reducible Galois representations of newforms, for comments on an earlier draft of the manuscript, and for verifying the projective images of some weight-$2$ newforms that arose as output to Anni's algorithm - the current check on reducibility via searching for Eisenstein series congruences is from Sage code that he provided to me; John Cremona for extensive comments on an earlier version of the manuscript; David Loeffler for a correspondence which identified the role of non-trivial inner twists in \Cref{prop:iso_image}; Nicolas Mascot for explaining how to detect dihedral image via traces of Frobenius, suggesting the algorithms in Anni's thesis, and for corrections to an earlier version of the manuscript; Martin Orr for insightful examples about lifting mod-$p$ dihedral representations to characteristic zero, as well as for comments on and corrections to an earlier version of the manuscript; Samir Siksek for suggestions and ideas for further development; Andrew Sutherland for explaining how to use the LMFDB API to obtain Fourier coefficients of modular forms; and John Voight for questions about the CM examples from an earlier version of the manuscript. 
I thank John Cremona for giving me his copy of \emph{Modular Curves and Abelian Varieties} on my last day at Warwick as his PhD student, and Jonny Evans for giving me his copy of \emph{A First Course in Modular Forms} as I was leaving Cambridge. Both proved to be essential in the course of this work. I wish to express my sincere gratitude to all those who have provided open access to otherwise prohibitively expensive material. \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} \IEEEPARstart{O}{btaining} meaningful insight into the power consumption properties of residential users is a topic of growing importance. Such knowledge allows energy providers to better anticipate future demand, while allowing end users to identify costly appliances within their home, or other energy-inefficient habits. Through a better understanding of each specific appliance's power consumption, users and providers can also begin to reduce the environmental impact of the electric grid. Hart~\cite{hart} proposed to determine the power consumption of appliances computationally through what is now known as non-intrusive load monitoring (NILM). Using only the smart meter reading of a home, NILM infers the power consumption of appliances within by way of some machine learning or optimization algorithm. In most cases, algorithmically, the biggest challenge in NILM is obtaining good approximations of the distributions of appliance power consumption; more specifically, of each appliance's posterior distribution conditioned on the aggregate power measurement, $\rho\left(p_i \mid p_H\right)$. While unsupervised methods exist for this estimation, such as~\cite{unilm,low_complexity}, it is most commonly achieved using supervised learning methods. In a supervised setting for NILM, measurements of the aggregate and appliance-specific power, taken simultaneously over significant periods of time, are used to build a model of the posterior probabilities. Some of the common models include hidden Markov models~\cite{makonin_sshmm, FHMM}, integer programming~\cite{mixedlinear,aidedlinear}, and more recently, deep neural networks~\cite{kelly, murray_icassp, novel, waveNILM, energan, seq2subseq}. Using such supervised methods means an algorithm's performance greatly depends on how well the training data represents the real distributions.
In the context of NILM, this means that the training data needs to represent the true distribution of a household's power consumption characteristics. To ensure a good approximation of the real distributions, as well as a fair evaluation of performance, long-term datasets must be used for training and testing. As a result, since 2011, data collection has been a main focus of NILM research, and has led to the creation of many publicly available datasets such as~\cite{ampds2, REFIT, Tracebase, ECO}. While these datasets continue to advance the development of NILM solutions, each dataset is unique (in terms of duration, sampling frequency, methodology, etc.) and may only provide a small part of the full distribution of power consumption. When considering options for enriching NILM data, and in light of the aforementioned challenges, an alternative approach is to generate synthetic data. Our contribution is a novel approach for generating truly random appliance power signatures using generative adversarial networks (GAN). Our synthesizer, named PowerGAN, is capable of generating realistic appliance power traces in large quantities, with no hand modeling, allowing for the creation of truly random, new appliances. PowerGAN is unlike previous attempts at generating new power data~\cite{ambal,SmartSIM,SynD,ANTgen}, which are based on simple appliance modeling. PowerGAN is also novel within the existing GAN literature, as it presents an improvement over existing time-series generators based on GANs. \vspace{-0.3cm} \section{Related Work} \label{sec:prev} \subsection{Generative Adversarial Networks (GANs)} \label{subsec:GAN} Until recently, the main use for deep neural networks (DNN) was solving problems such as classification, regression, or segmentation. While DNNs were highly successful at such tasks, including NILM~\cite{kelly, waveNILM}, they were not able to generate synthetic data.
This changed in 2014 with the introduction of generative adversarial networks (GAN)~\cite{GAN}. The main novelty in GAN is that instead of one neural network trained to solve an optimization problem, two competing neural networks are trained to find the equilibrium of a game. The two players in the GAN game are known as the \textit{generator} and the \textit{discriminator}. The generator tries to generate realistic signals from a random input known as the \textit{latent code}, while the discriminator attempts to successfully distinguish these generated signals from real ones. The training process is performed in turns, alternating between training the generator and the discriminator once (or more) at each turn. Fig.~\ref{fig:GAN} shows a visual explanation of the GAN framework. The equilibrium of the GAN game is achieved when the generator can create perfectly realistic signals, so that even a perfect discriminator cannot distinguish them from real ones. \begin{figure} \centering \includegraphics[width = 0.9\linewidth]{general-gans.png} \caption{GAN structure with alternating discriminator and generator training.} \label{fig:GAN} \vspace{-0.5cm} \end{figure} The introduction of GANs allowed DNNs to generate increasingly realistic signals such as faces or scenes~\cite{dcgan}. However, basic GANs, sometimes known as vanilla GANs, remain difficult to train. To improve the final outcome as well as increase the stability of GAN training, many variations on the GAN framework have been published. Goodfellow \textit{et al.}~\cite{improved} suggested label smoothing, historical averaging, and minibatch discrimination. Arjovsky \textit{et al.}~\cite{wgan, wgan-gp} showed that the KL divergence between real and fake sample outputs of the discriminator, the loss function commonly used in GAN training, suffers from vanishing gradients, and suggested using the Wasserstein distance instead. The corresponding GANs are referred to as Wasserstein GANs (WGANs).
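The alternating scheme described above can be sketched as a bare training skeleton. Here \texttt{train\_critic} and \texttt{train\_generator} are hypothetical placeholders for a single optimizer step of each player, and \texttt{R} critic steps per generator step is a common choice; none of this is tied to any specific model in the text.

```python
# Sketch of the alternating GAN training loop described above.
# `train_critic` and `train_generator` are hypothetical placeholders
# for one optimizer step of each player; R is the number of critic
# updates performed per generator update.
def gan_training_loop(train_critic, train_generator, epochs, R=1):
    for _ in range(epochs):
        for _ in range(R):        # train the discriminator/critic first...
            train_critic()
        train_generator()         # ...then the generator, in turns

# Toy usage: count how many times each player is updated.
counts = {"critic": 0, "generator": 0}
gan_training_loop(lambda: counts.update(critic=counts["critic"] + 1),
                  lambda: counts.update(generator=counts["generator"] + 1),
                  epochs=10, R=5)
```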
Gulrajani \textit{et al.}~\cite{wgan-gp} presented the gradient penalty as a way to increase the stability of WGAN training. Other improvements include using a conditional generator based on class labels~\cite{cgan,acgan}, and conditioning the generator on an input signal~\cite{cyclegan} to transform the output. Basic GANs, mentioned above, are limited in performance as well as difficult to train. This makes vanilla GANs insufficient for the challenging task of representing the true distributions of appliance-level power signatures. When approaching the development of our own GAN model, we considered two specific versions of GAN -- progressively growing GAN~\cite{ProgGAN} and EEG-GAN~\cite{eeggan}, both of which use the WGAN loss with gradient penalty as the underlying GAN loss. Karras \textit{et al.}~\cite{ProgGAN} have shown that it is beneficial to train GANs in stages. At first, coarse structure is learnt by training a GAN on highly downsampled signals. After sufficient training, the next stage of the GAN is added and the signal resolution is doubled. At this stage, the weights that had previously been learnt are kept and additional layers are added. On the generator side, the layers are added at the end, whereas on the critic side they are added at the beginning. In~\cite{eeggan}, Hartmann \textit{et al.} present EEG-GAN, an adaptation of~\cite{ProgGAN} for the generation of electroencephalogram signals. The training algorithm closely resembles that of~\cite{ProgGAN}, with modified architectures for generating 1-D time-series data instead of images. Despite the similarity in training, the authors do present several modifications in EEG-GAN, the combination of which was novel at the time of publication. One of particular importance to PowerGAN is the weighted, one-sided gradient penalty, which is adopted by PowerGAN and expanded on in Section~\ref{subsec:PowerGAN}.
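The staged training of~\cite{ProgGAN} can be illustrated by the schedule of signal resolutions it implies: each new block doubles the resolution the model trains on. The block count and final length below are illustrative values, not the exact configuration of any cited model.

```python
# Progressive growing: stage n trains on signals downsampled by a
# factor of 2**(n_blocks - n), so the resolution doubles per stage.
def stage_lengths(final_len, n_blocks):
    return [final_len // 2 ** (n_blocks - n) for n in range(1, n_blocks + 1)]

print(stage_lengths(final_len=2048, n_blocks=6))  # coarse to fine
# -> [64, 128, 256, 512, 1024, 2048]
```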
\vspace{-0.3cm} \subsection{Power Data Synthesizers} \label{subsec:synthesizers} The challenges presented by the available long-term disaggregation datasets have motivated several efforts to generate synthetic data for NILM. These efforts, varying in sophistication and scope, focus on generating realistic \textit{aggregate} signals. In contrast, the proposed PowerGAN is focused on \textit{appliance-level traces}. Nonetheless, these power data synthesizers all employ some techniques for simulating appliance-level data before layering it to create the aggregate. SmartSim~\cite{SmartSIM} was one of the first such power data synthesizers. SmartSim's appliance-level simulation is performed by matching each appliance with one of four possible energy models: ON-OFF, ON-OFF with growth/decay, stable min-max, and random range models. Reasonable parameterizations for each of these models were extracted by the authors from real instances of the specific appliances in the Smart* dataset~\cite{SMART}. The estimation of these values directly from real data, taken from the Smart* dataset, inherently limits SmartSim's ability to capture the variability of real appliances. Furthermore, by copying these parameters from real data, SmartSim provides no new appliance-level traces. The Automated Model Builder for Appliance Loads (AMBAL)~\cite{ambal} and its recent iteration, ANTgen~\cite{ANTgen}, approach appliance models similarly. They employ the same four general appliance classes with the addition of compound model types. Compound models are combinations of the four basic models, and are generally a better fit to real-world appliances. Model parameters are determined using the ECO~\cite{ECO} and Tracebase datasets~\cite{Tracebase}, where active segments of each appliance are broken up according to possible internal state changes.
Rather than deciding \textit{a priori} the model class for a particular appliance, AMBAL/ANTgen selects the model fit that minimizes the mean absolute percentage error. SynD~\cite{SynD} is a similar effort that instead categorizes appliances as either autonomous or user-operated. Autonomous appliances include constantly-on loads (such as a router) or appliances that are cyclic in their operation patterns (such as a fridge). User-operated appliances can involve single-pattern operation (such as a kettle) or multi-pattern operation (such as a dishwasher or programmable oven). On the appliance level, power traces for SynD were measured directly by the authors and stored as templates. The extraction of appliance models directly from real data restricts the ability of these generators to provide truly novel appliance-level traces. However, the aim of these generators is to synthetically expand the space of realistic aggregate signals, which has contributed and will continue to contribute to the NILM community. In contrast, our work focuses on appliance-level modeling, moving past the parameterization of pre-specified appliance models, and instead making use of the rapidly developing generative-adversarial framework to elucidate entire distributions over appliance behaviour. Note that we do not compare with SHED~\cite{SHED}, which uses similar methods, because it is designed for commercial buildings rather than residential ones. It is also important to note that GANs have been used for NILM in~\cite{bao2018enhancing, energan, seq2subseq}. In~\cite{bao2018enhancing}, a pretrained GAN generator is used to replace the decoder side of a denoising-autoencoder-based disaggregator. In~\cite{energan, seq2subseq}, GANs were heavily conditioned on aggregate data and simply used as a refinement method for supervised disaggregation using convolutional neural networks.
However, none of these works use GANs for the purpose of generating new data, evaluate their models using conventional GAN metrics, or make their models publicly available, and as such are not comparable with PowerGAN. \vspace{-0.3cm} \section{Methodology} \label{sec:method} \subsection{PowerGAN} \label{subsec:PowerGAN} Both progressive growing of GANs and EEG-GAN introduce novel methods of training GANs, with a variety of techniques for improved performance and reliable convergence. However, neither of the two methods takes advantage of class labels. Inspired by~\cite{acgan,cgan}, we extend EEG-GAN by conditioning both the generator and the critic on the specific appliance label. We name our framework PowerGAN -- a conditional, progressively growing, one-dimensional WGAN for generating appliance-level power traces. The basic architecture of PowerGAN is similar to the EEG-GAN adaptation of~\cite{ProgGAN}. PowerGAN contains six generator and critic blocks, each comprising two convolutional layers and an upsampling or downsampling layer, respectively. Following the process in~\cite{eeggan,ProgGAN}, we perform a fading procedure each time a new block is added. During fading, the output of a new block of layers is scaled by a linearly growing parameter $\alpha$ and added to the output of existing layers, which is scaled by $1-\alpha$. All layers remain trainable throughout the process and the corresponding dimensionality discrepancies are resolved by a simple $1\times 1$ convolutional layer. An illustration of this process is shown in Fig.~\ref{fig:fading}. \begin{figure} \centering \includegraphics[width = \linewidth]{fading.png} \caption{The fading procedure proposed by~\cite{ProgGAN} as adapted for one-dimensional time-series data in~\cite{eeggan} and PowerGAN. In (a) we see the currently stable generator and critic during an intermediate stage of training; note that the generator (critic) contains an upsampling (downsampling) step.
The blocks ``To Time-Series'' and ``From Time-Series'' are implemented via 1D convolution. In (b) we see the fading stage. On the generator side, the output of new blocks is slowly faded in, using a linearly growing parameter $\alpha$, with a nearest-neighbour upsampling of the output of the stable blocks. Similarly, on the critic side, the features created by the new block are slowly merged in with previous inputs to the existing critic blocks. Finally, (c) shows the blocks after the fading is complete and $\alpha = 1$. In PowerGAN, this fading is performed over 1000 epochs, allowing knowledge obtained at earlier steps of training to slowly adapt as new layers are added.} \label{fig:fading} \vspace{-0.5cm} \end{figure} A major novelty in PowerGAN is the introduction of conditioning, both for the generator and the critic, on the desired appliance label. Following the concepts presented in~\cite{cgan}, we choose to condition our GAN on the input labels by including the class label as an input to both the critic and the generator. On the generator side, this is done by replacing the latent code input with $Z = [\boldsymbol{z}_0, \boldsymbol{z}_1,\ldots,\boldsymbol{z}_{C-1}] \in \mathbb{R}^{N_z \times C}$ such that: \begin{equation} \boldsymbol{z}_i = \begin{cases} \boldsymbol{z} & i=l \\ \boldsymbol{0} & \text{otherwise} \end{cases} \end{equation} where $N_z$ is the latent space dimension, $\boldsymbol{z} \in \mathbb{R}^{N_z}$ is the latent code, $C$ is the number of different labels in the dataset, and $l$ is the current label. In practice, this is performed by extending both the latent code and the one-hot labels to $\mathbb{R}^{N_z \times C}$ and multiplying the resulting tensors. To accommodate the added capacity required by the conditional generator, we increase the number of features in the input stage by a factor of $C$ compared with the rest of the network.
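The construction of $Z$ above amounts to an outer product of the latent code with the one-hot label: the latent code ends up in column $l$ of an otherwise-zero matrix. A minimal NumPy sketch:

```python
import numpy as np

# Place the latent code z in column l of an otherwise-zero N_z x C
# matrix by broadcasting z against the one-hot label, as in the text.
def conditional_latent(z, label, num_classes):
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return z[:, None] * one_hot[None, :]   # (N_z, 1) * (1, C) -> (N_z, C)

z = np.arange(1.0, 9.0)                    # toy latent code, N_z = 8
Z = conditional_latent(z, label=2, num_classes=5)
```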
On the critic side, we simply extend the one-hot labels to $\mathbb{R}^{N_s \times C}$, where $N_s$ is the current signal length, and concatenate the resulting tensor to the input signal, as illustrated in Fig.~\ref{fig:conditioning}. \begin{figure*} \centering \includegraphics[width = 0.9\textwidth]{Conditioning.png} \caption{PowerGAN's method of conditioning the generator and critic. On the generator side (left), the input latent code and the one-hot class label are both extended and then multiplied. Effectively, this is equivalent to placing a copy of the latent code in the corresponding column of a matrix which is zero everywhere else. On the critic side (right), we perform a similar extension of the class labels, but then simply concatenate the resulting tensor to the input signal.} \label{fig:conditioning} \vspace{-0.3cm} \end{figure*} In PowerGAN, we also adopt many of the smaller, nuanced practices proposed in~\cite{ProgGAN,eeggan}. As suggested in~\cite{ProgGAN}, to alleviate growing magnitude issues, we strictly normalize each time-step in each feature map to have an average magnitude of $1$. To improve convergence during training, we employ on-line weight scaling (instead of careful weight initialization). To increase the variation of generated signals, we use a simplified version of minibatch discrimination, as proposed in~\cite{ProgGAN} and modified in~\cite{eeggan}, wherein the standard deviation is used as an additional feature for the final layer of the critic. The minibatch standard deviation is calculated first at each feature, at each time-step, and then averaged across both features and time to give one single value for the entire batch. Furthermore, we use the weighted one-sided variation of the gradient penalty, as proposed in~\cite{eeggan}, and modify it to accommodate the conditional critic and generator.
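The simplified minibatch standard deviation feature described above can be sketched as follows; the batch shape is an assumption for illustration.

```python
import numpy as np

# Simplified minibatch discrimination: compute the standard deviation
# across the batch at each feature and time-step, then average over
# features and time to obtain a single scalar for the whole batch,
# which is appended as an extra feature for the critic's final layer.
def minibatch_std_feature(batch):
    # batch shape: (batch_size, features, time_steps)
    per_position_std = batch.std(axis=0)   # (features, time_steps)
    return float(per_position_std.mean())  # one value for the entire batch

identical = np.zeros((4, 3, 16))           # identical samples -> zero std
```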
The gradient penalty's importance, as noted in~\cite{eeggan}, depends on the current value of the Wasserstein distance $D_W = \mathbb{E}_{x_g}[D_\alpha(x_g,l)] - \mathbb{E}_{x_r}[D_\alpha(x_r,l)]$. When $D_W$ is large, it is important to ensure that the cause is not the loss of the 1-Lipschitz constraint. However, when $D_W$ is low, it is worthwhile to focus on optimizing it directly, and assign a lower weight to the gradient penalty. In practice, this is achieved by giving the gradient penalty an adaptive weight equal to the current $D_W$. It is important to note that this weight is treated as a constant for gradient purposes, to avoid undesirable gradients. The gradient penalty itself is one-sided, meaning it allows the critic to have a Lipschitz constant smaller than $1$, an option that was considered but ultimately not chosen in~\cite{wgan-gp}. In this form, the gradient penalty becomes: \begingroup\makeatletter\def\f@size{9}\check@mathfonts \begin{equation} \mathcal{L}_{GP} = \lambda\cdot\max(0,D_W)\cdot \mathbb{E}_{\tilde{x} \sim P_{\tilde{x}}}\bigl[\max\bigl(0,\lVert\nabla_{\tilde{x}} D\left(\tilde{x},l\right) \rVert _2 -1\bigr)^2 \bigr] \end{equation} \endgroup where $D_W$ is the current critic estimate of the Wasserstein distance, $D$ is the critic, and $\tilde{x}$ is a randomly weighted mixture of pairs of real and generated samples, each with the same label $l$. Recall that $D_W$ here is treated as a constant for back-propagation purposes. Finally, we use a small loss component to center critic output values around zero, also introduced in EEG-GAN~\cite{eeggan}: \begin{equation} \label{eq:cen} \mathcal{L}_{C} = \epsilon \cdot\bigl(\mathbb{E}_{x_r}[D(x_r)] + \mathbb{E}_{x_g}[D(x_g)] \bigr) \end{equation} where $\epsilon \ll 1$, and $x_r, x_g$ are real and generated samples, respectively. This loss helps with numerical stability as well as interpretation of the loss value during training.
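A numerical sketch of the weighted, one-sided penalty above. To stay self-contained, it uses a toy linear critic $D(x) = w^{\top}x$, whose input gradient is exactly $w$, in place of automatic differentiation on a real network; this toy critic is our own stand-in, not part of the method.

```python
import numpy as np

# Weighted one-sided gradient penalty (toy setting): the penalty is
# scaled by max(0, D_W), and only gradient norms exceeding 1 are
# penalized. For the linear critic D(x) = w.x, grad_x D(x) = w.
def one_sided_gp(w, x_tilde, d_w, lam=10.0):
    grad_norms = np.full(len(x_tilde), np.linalg.norm(w))  # ||dD/dx|| per sample
    penalty = np.mean(np.maximum(0.0, grad_norms - 1.0) ** 2)
    return lam * max(0.0, d_w) * penalty   # d_w is treated as a constant
```

Note that when the critic's gradient norm is below one (allowed by the one-sided form) or when $D_W$ is negative, the penalty vanishes.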
Combining all of the above, the final loss functions of the critic ($\mathcal{L}_D$) and the generator ($\mathcal{L}_G$) in PowerGAN are: \begin{gather} \mathcal{L}_D = \mathbb{E}_{x_g}[D_\alpha(x_g,l)] - \mathbb{E}_{x_r}[D_\alpha(x_r,l)] + \mathcal{L}_{GP} + \mathcal{L}_C \\ \mathcal{L}_G = -\mathbb{E}_{x_g}[D_\alpha(x_g,l)] \end{gather} Another important difference between PowerGAN and~\cite{eeggan} is in the method of resampling the signals. In~\cite{eeggan}, after comparing various methods, the authors use strided convolutions for downsampling in the critic, average pooling for downsampling the input data, and either linear or cubic interpolation for upsampling in the generator. We find that given the quick switching nature of appliance power traces, it is important to allow for high frequency changes in the signal, even at the price of some aliasing. For this reason we downsample the input signals using maxpooling, and perform the upsampling steps in the generator with nearest-neighbour interpolation. \vspace{-0.3cm} \subsection{Training} \label{subsec:train} PowerGAN was trained using the REFIT~\cite{REFIT} dataset. REFIT consists of power consumption data from 20 residential homes, at the aggregate and appliance level, sampled at 1/8~Hz. The REFIT dataset was prepared by following the prescription of some recent work to ensure consistent sampling~\cite{murray_icassp}. Because not all of the 20 houses contain the same appliances, we chose appliances that were available in multiple houses. We also wanted to ensure these appliances exemplified each of the four appliance types as defined by~\cite{hart}, and then expanded by~\cite{novel}: ON-OFF, Multi-state, Variable Load, and Always-ON (or periodic). Of the appliances available in REFIT, five that satisfied the above considerations were used: refrigerators (along with freezers, and hybrid fridge-freezers), washing machines, tumble dryers, dishwashers, and microwaves. 
Each instance of these five appliances was arranged into approximately five-hour windows, centered around the available activations. We located these activations by searching for first-order differences in power larger than 50~Watts. Windows were then filtered according to two conditions. First, the energy contained in the window should be appreciably larger than the ``steady-state'' contribution to the energy (taken here to be the sum of the window mean and half the window standard deviation). In other words, after ignoring the samples less than this value, the remaining energy contained in the window should be above some threshold, set in our work to be $33.33$ Watt-hours. This condition ensures that low-energy windows, where the activation was falsely detected due to sensor noise, are excluded. This condition also filters out windows that may contain significant energy, but have little useful structural information, mainly windows composed of a constant level of power. Second, we calculate the Hoyer sparsity metric~\cite{hoyer}, $S$, for $\boldsymbol{\delta}(w_i)$, a vector of length $n$ containing the discrete first-order differences in each window $w_i$: \begin{equation} \label{eq:edge-spar} S_{\boldsymbol{\delta}(w_i)} = \frac{\sqrt{n} - \frac{\lVert \boldsymbol{\delta}(w_i) \rVert_1}{\lVert \boldsymbol{\delta}(w_i) \rVert_2}}{\sqrt{n}-1} \end{equation} where $\lVert \boldsymbol{\delta}(w_i) \rVert_1$ and $\lVert \boldsymbol{\delta}(w_i) \rVert_2$ are the $\ell_1$ and $\ell_2$-norms of $\boldsymbol{\delta}(w_i)$, respectively. At its extremes, the Hoyer sparsity metric is zero when every sample in $\boldsymbol{\delta}(w_i)$ is the same (meaning the $\ell_1$-norm is larger than the $\ell_2$-norm by a factor of $\sqrt{n}$), and unity when there is only one non-zero sample in $\boldsymbol{\delta}(w_i)$ (i.e., highly sparse).
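Eq.~(\ref{eq:edge-spar}) can be computed directly from a window; a minimal sketch (with no guard for an all-zero difference vector, which would divide by zero):

```python
import numpy as np

# Hoyer sparsity of the first-order differences of a window:
# 0 when all differences are equal, 1 when exactly one is non-zero.
def hoyer_sparsity(window):
    delta = np.diff(window)
    n = len(delta)
    ratio = np.linalg.norm(delta, 1) / np.linalg.norm(delta, 2)
    return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1)
```

A single sharp step yields a sparsity of one, while a constant ramp (all differences equal) yields zero, matching the extremes described above.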
By requiring the sparsity metric to be larger than $0.5$, we ensure that windows are not overly noisy, further maximizing the structural information contained in them. The remaining windowed dataset was then balanced and the windows belonging to each appliance were normalized. Finally, before every epoch, windows were shifted randomly in time to avoid biasing the network towards specific activation locations within each window. The shifted windows were then downsampled to match the resolution of the current training stage. We utilized the Adam~\cite{adam} optimizer for training PowerGAN, setting $lr = 0.001$ and $\beta = (0,0.99)$. We trained each stage of PowerGAN for 2000 epochs, out of which the first 1000 included fading with linearly changing weights. See Algorithm~\ref{alg:powergan} for full details. \vspace{-0.3cm} \section{Experiments} \label{sec:eval} We present both a qualitative analysis of the PowerGAN-generated power traces as well as their quantitative evaluation, based on adaptations of commonly used GAN evaluation methods to 1-D power traces. We compare quantitative metrics with two other appliance power trace synthesizers: SynD~\cite{SynD} and ANTgen~\cite{ANTgen}, which is a more up-to-date version of AMBAL. SmartSim~\cite{SmartSIM} is not included in the comparison because the published sample data is of insufficient size for accurate comparison with other methods in these experiments. When generating signals using PowerGAN, we found it beneficial to add two simple post-processing steps: we ensure that at any given time-step the generated power is larger than zero; and we discard any generated signals that do not meet the energy threshold designated for the training data (and replace them with new generated samples).
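The two post-processing steps can be sketched as follows. The sample period matches REFIT's 1/8~Hz and the threshold mirrors the training-data condition, but both values are illustrative here, and returning \texttt{None} stands in for rejecting and resampling a window.

```python
import numpy as np

# Post-processing sketch: clip generated power at zero, then reject
# (return None, i.e. "resample") windows whose energy falls below the
# threshold used when filtering the training data.
def postprocess(trace, energy_thresh_wh=33.33, sample_period_h=8 / 3600):
    trace = np.maximum(trace, 0.0)             # power must be non-negative
    energy_wh = trace.sum() * sample_period_h  # Watt-samples -> Watt-hours
    return trace if energy_wh >= energy_thresh_wh else None
```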
\setlength{\textfloatsep}{0pt} \begin{algorithm}[ht] \caption{PowerGAN Training Procedure} \label{alg:powergan} \begin{algorithmic}[1] \Require Real samples with corresponding labels $(x_R,l)\in X_R$; conditional generator $G(\boldsymbol{z},l)$; conditional critic $D(\boldsymbol{x},l)$; optimizers for $G,D$. \Ensure $N_b$: number of blocks for $G,D$; $EP_b$: number of training epochs per block; $EP_{f}$: number of fading epochs; $R$: ratio of critic to generator training iterations. \For{$n = 1,2, \ldots, N_b$} \State Add block to $G,D$ \For{$ep = 1,2, \ldots, EP_b$} \State Set $\alpha = \min(1,ep/EP_{f})$ \State Set $G_\alpha, D_\alpha$ according to Fig.~\ref{fig:fading} \State Randomize appliance starting points \LeftComment and downsample $X_R$ by $2^{N_b-n}$ \State Select a minibatch of real samples and labels: $\boldsymbol{x}_R,\boldsymbol{l}$ \State Generate a minibatch of samples using \LeftComment labels: $\boldsymbol{x}_G = G_\alpha \bigl(\boldsymbol{z}\sim \mathcal{N}(0,\mathbb{I}) , \boldsymbol{l}\bigr)$ \State $\mathcal{L}_D = \mathbb{E}_{x_g}[D_\alpha(x_g,l)] - \mathbb{E}_{x_r}[D_\alpha(x_r,l)] + \mathcal{L}_{GP} + \mathcal{L}_C$ \State Take optimizer step for $D$ \If {$ep \equiv 0 \pmod{R}$} \State Generate a minibatch of samples using \LeftComment labels: $\boldsymbol{x}_G = G_\alpha \bigl(\boldsymbol{z}\sim \mathcal{N}(0,\mathbb{I}) , \boldsymbol{l}\bigr)$ \State $\mathcal{L}_G = -\mathbb{E}_{x_g}[D_\alpha(x_g,l)]$ \State Take optimizer step for $G$ \EndIf \EndFor \EndFor \end{algorithmic} All expected value operations are approximated using the sample mean of the minibatch. \end{algorithm} \begin{figure*}[htbp] \centering \includegraphics[width = \textwidth]{All_apps.png} \caption{Examples of appliance power traces generated by PowerGAN, alongside their real counterparts taken from REFIT.
We can see here that the generated signals follow the real data closely, yet without direct copying, in important attributes such as power levels, overshoot, quick switching, and more.} \label{fig:all_apps} \vspace{-0.3cm} \end{figure*} \vspace{-0.3cm} \subsection{Quantitative Evaluation} \label{subsec:quantity} Tasks such as segmentation, classification, regression, or disaggregation are relatively easy to evaluate because they have a well-defined goal. While there are several different approaches to evaluating NILM~\cite{NILMeval}, all methods utilize a well-defined ground truth, such as appliance power consumption or state. Unfortunately, no such ground truth exists when attempting to evaluate randomly generated signals. In fact, the attempt to assign a numerical value to measure the quality of a GAN framework is in itself a significant and challenging research problem~\cite{GAN-eval}. To evaluate PowerGAN, we choose three commonly used GAN evaluation metrics, and adapt them to be applicable to power trace data. The Inception score (IS)~\cite{improved} uses a pre-trained DNN-based classifier named Inception~\cite{inception} to evaluate the quality of generated signals. To calculate the IS, a batch of generated samples is classified using the pre-trained model. The output of this classifier can be seen as the probability that a sample belongs to each target class. A good generator is realistic, meaning we expect low entropy for the output of the classifier. Simultaneously, a good generator is also diverse, meaning we expect high entropy when averaging out all classifier outputs. To include both requirements in one numerical measure,~\cite{improved} defines the Inception score as $IS = \exp\Bigl(\mathbb{E}\bigl[D_{KL}\bigl(p\left(y\vert \boldsymbol{x}\right) \Vert~ p\left(y\right)\bigr)\bigr]\Bigr)$, where $D_{KL}$ is the KL divergence. Because the IS is not an objective metric, it is common to compare the generator's score with the score obtained from real data.
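Given a matrix of classifier outputs $p(y\vert x)$, one row per sample, the score above can be sketched as:

```python
import numpy as np

# Inception score from a batch of classifier outputs (rows sum to 1):
# IS = exp( mean_x KL( p(y|x) || p(y) ) ), with p(y) the batch marginal.
def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

A generator whose samples are all classified uniformly is neither confident nor diverse and scores the minimum of 1, while perfectly confident, evenly spread predictions over $C$ classes score $C$.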
Because no such classifier is commonly used for power trace signals, we train our own model, using a one-dimensional ResNet~\cite{resnet} architecture. To avoid biasing the model towards PowerGAN, we also include training data from ECO~\cite{ECO} and Tracebase~\cite{Tracebase}, as they were the foundation used for the ANTgen power traces. The real power traces, used as the foundation for SynD, were not published, so they could not be included in classifier training. We then evaluate the IS in batches and present the mean and standard deviation for each generator, as well as the real data. While the IS has shown good correlation with human classification of real versus generated samples, it is not without its flaws. It is highly sensitive to noise and to scale, as well as mode collapse. For example, if a model can generate exactly one, highly realistic, sample for every class, it will achieve a near perfect IS, without actually being a diverse generator. To avoid some of these pitfalls,~\cite{frechet} introduced the Frechet Inception Distance (FID). The FID uses the same classifier as the IS, but instead of measuring probabilities directly at the output, it evaluates the distributions of features in the final embedding layer of the classifier. FID measures the Wasserstein 2-distance between the distributions of real and generated signal features, under a Gaussian assumption (which allows a closed-form solution). The FID is significantly less sensitive to mode collapse and noise, yet still struggles with models that directly copy large portions of the training set. Because FID is a proper distance, its value can serve as a more objective metric. We evaluate FID using the full set used for training our ResNet classifier, and generate an equivalent amount of data from each synthesizer. A similar approach to FID, the sliced Wasserstein distance (SWD)~\cite{ProgGAN}, attempts to evaluate the difference between the distributions of real and generated signals directly.
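Under the Gaussian assumption, the Frechet distance between the real and generated feature distributions has a closed form. The sketch below simplifies it to diagonal covariances, an assumption made here only to avoid a matrix square root; the full version uses the complete covariance matrices.

```python
import numpy as np

# FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^(1/2)),
# simplified to diagonal covariances so the square root is elementwise.
def fid_diag(feats_real, feats_gen):
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    var_r, var_g = feats_real.var(axis=0), feats_gen.var(axis=0)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.sum(var_r + var_g - 2.0 * np.sqrt(var_r * var_g)))
```

Because it is a proper distance, identical feature sets give exactly zero, and a pure mean shift contributes its squared norm.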
SWD uses 1-D projections to estimate the Wasserstein distance between two distributions, taking advantage of the closed-form solution for the distance of such projections. In practice, the SWD is itself approximated using a finite set of random projections. It is common to evaluate the SWD on some feature space, to make it more robust. For our work, we compare two possible feature sets: the classifier features used for FID, and a Laplacian ``triangle'' (a 1-D adaptation of a Laplacian pyramid) using a 15-sample Gaussian kernel. Similarly to FID, we evaluate the SWD on the entire training set, and we use 10 iterations of 1000 random projections each, calculating the mean and standard deviation along the iterations. Table~\ref{tbl:NumericEval} summarizes the results for all the metrics described above. \begin{table}[htbp] \caption{Synthesized Appliance Performance Evaluation} \label{tbl:NumericEval} \vspace{-0.5cm} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Generator} & $\boldsymbol{IS}$ & $\boldsymbol{FID}$ & $\boldsymbol{SWD}_{Lap}^*$ & $\boldsymbol{SWD}_{Cl}$ \\ \hline Dataset & $3.77 \pm .15$ & 0 & 0 & 0\\ \hline ANTgen & $ 3.73 \pm .11$ & $69.63$ & $45 \pm .029$ & $0.31 \pm .017$\\ SynD & $3.18 \pm .10$ & $76.09$ & $ 22 \pm .011$ & $0.33 \pm .015$\\ PowerGAN & $\boldsymbol{3.81 \pm .13}$ & $\boldsymbol{43.30}$ & $\boldsymbol{18 \pm .088}$ & $\boldsymbol{0.25 \pm .011}$\\ \hline \end{tabular} \end{center} $^{*}$ $SWD_{Lap}$ values were calculated using Laplacian ``triangle'' features and were scaled by $10^{-3}$. $SWD_{Cl}$ values were calculated using the last layer of classifier features, similarly to the Frechet Inception distance. \vspace{-0.3cm} \end{table} Several things stand out when reviewing the quantitative results. First, we notice PowerGAN receives the highest Inception score, outscoring both SynD and ANTgen in a statistically significant manner (t-test $p\leq 10^{-5}$).
PowerGAN even slightly outscores the real data, although not in a statistically significant manner (t-test $p=0.38$). We believe this is caused by the existence of some inevitably mislabeled data in REFIT. When collecting sub-meter data for NILM applications, the wiring of certain houses makes it difficult to avoid having more than one appliance on each sub-meter. This means that often a sub-meter designated as one appliance (such as a fridge or dishwasher) will contain measurements from a smaller, or less commonly used, appliance (such as a kettle or battery charger). The presence of such activations may lead to a lower Inception score in the real data, but affects PowerGAN to a lesser extent. Second, we notice that the diversity of PowerGAN-generated signals is apparent when reviewing the more advanced metrics. In both variations of the $SWD$, as well as in FID, PowerGAN outperforms the other two synthesizers in a statistically significant manner (t-test $p \leq 9\times 10^{-4}$). We believe that the combination of these scores shows that PowerGAN is capable of generating samples that are comparable, in terms of realism, with copying or hand-modeling real data directly (as done by SynD and ANTgen), while at the same time creating diverse and truly novel appliance power signatures. \vspace{-0.3cm} \subsection{Qualitative Analysis} \label{subsec:quality} \begin{figure*}[htbp] \centering \includegraphics[width = 0.85\textwidth]{Multi-Fridge.png} \caption{Examples of generated and real fridges. There is diversity in the generated fridges in terms of frequency, duty cycle, overshoot size, and more. PowerGAN generates some artifacts such as an overshoot at the end of an activation, as well as some power variations within a given activation.} \vspace{-0.3cm} \label{fig:multigen} \end{figure*} When evaluating our generated signals, we focus on the traces' realism as well as their variety and novelty.
We find that PowerGAN is able to generate highly realistic-looking appliance traces while avoiding directly copying existing appliances from REFIT. In addition, we notice that the generator's diversity exists both between classes and within each class. Fig.~\ref{fig:all_apps} shows an example of generated signals from each of the five trained appliances, along with similar real power traces. We can see that the generated signals present highly comparable behaviours and contain all of the major features of each appliance class. Some important attributes in the generated signals are shown below, by class: \vspace{-0.15cm} \begin{itemize} \item \textbf{Fridges} - generated fridge traces maintain the periodic nature of real refrigerators. We see small variation in both frequency and duty cycles of the activations, with minor differences within an activation and larger differences between different samples. In addition, generated fridges maintain the initial spike in power consumption. \item \textbf{Washing Machines} - generated washing machine traces manage to convey the complicated state transitions of the various washing cycle states. We see quick fluctuations in power consumption, typical of the machine's internal heating unit switching on and off. Additionally, the generator is able to generate the variable load which occurs during the washing machine's spin cycle. \item \textbf{Tumble Dryers} - generated tumble dryer traces are able to maintain the characteristic drop in power consumption that occurs periodically when the dryer changes direction. Furthermore, PowerGAN is able to capture the usage characteristics of a dryer, occasionally including more than one activation in a 5-hour window. \item \textbf{Dishwashers} - generated dishwasher traces manage to maintain the multi-state properties of the original dishwashers, without incurring significant amount of switching noise or any major artifacts. 
\item \textbf{Microwaves} - generated microwave traces portray the low duty cycle of real microwaves, which are generally only used occasionally for periods of a few minutes at most. In addition, PowerGAN is able to generate traces that include quick switching of the microwave oven, which can occur during more advanced microwave modes such as a defrost program. \end{itemize} While PowerGAN generates realistic data for the most part, some issues still exist. The generated signals occasionally contain artifacts that are rare in real signals, such as an overshoot before deactivation, power fluctuations within a given state, or unlikely activation durations. When analyzing these artifacts, we note that examples of such behaviour exist in the real data, albeit rarely. We believe that these behaviours appear in PowerGAN because, in the training procedure, such artifacts become central in identifying appliances, leading them to carry significant gradients to the generator. In order to demonstrate the diversity of the power traces generated by PowerGAN, we present six examples of generated and real fridge signals in Fig.~\ref{fig:multigen}. We note that, like the real fridge power traces, the generated signals vary in several important features: power level, activation frequency, duty cycle, and overshoot size. In addition, the generated signals demonstrate some variations in each of the above parameters within an activation window, similarly to real fridges. \section{Conclusions} \label{sec:Summary} After identifying the need for synthetic data generation for NILM, we presented here the first GAN-based synthesizer for appliance power traces. Our model, named PowerGAN, is trained in a progressive manner, and uses a unique conditioning methodology to generate multiple appliance classes using one generator.
We have also laid some groundwork for evaluating power trace generators which, as expected, requires more than one metric in order to evaluate the various requirements of synthesizers. Using these metrics, along with visual inspection of the generated samples, we have shown that PowerGAN is able to produce diverse, realistic power appliance signatures, without directly copying or hand-modeling the training data. While the results presented in this paper are based on training on the REFIT dataset, the presented framework can be used for training on any desired dataset, and at any sampling frequency. We believe that these properties may help researchers in using PowerGAN as an augmentation tool for training supervised NILM solutions. The PowerGAN generator can be used to randomly replace certain activation windows in the real data with synthesized ones, with the hope of improving out-of-distribution performance. In order to do this, one can modify the training procedure of PowerGAN slightly to include the desired activation window sizes, as well as remove the random time shifting during training, if a well localized activation is preferred for disaggregation. \bibliographystyle{IEEEtran}
\section{Introduction} Heavy ion collisions are an important laboratory that provides distinct alternatives to probe fundamental aspects of the theory of the strong interactions -- Quantum Chromodynamics (QCD). For central and semi -- central collisions, where the impact parameter $b$ of the collision is smaller than the sum of the nuclear radii and the strong interactions dominate, we can probe the creation of the Quark - Gluon Plasma (QGP) and constrain its properties \cite{Busza:2018rrf}. In contrast, ultraperipheral heavy ion collisions (UPHICs), which are defined as collisions at large impact parameters $b > 2 R$ where the long range photon -- induced interactions become dominant, can be used to constrain the QCD dynamics at high energies (small -- $x$) \cite{upc}. The description of the initial conditions for the collective behaviour of the medium produced in central and semi - central heavy ion collisions is determined by the momentum and spatial distributions of gluons in the nuclei, which are expected to be sensitive to the presence of non -- linear effects in the QCD dynamics \cite{hdqcd}. Therefore, there is a strict connection between the physics probed in central, semi - central and ultraperipheral collisions, which can be explored in order to improve our understanding of QCD at large energies and high densities \cite{Schlichting:2019abc}. Over the last decades, experiments at RHIC and the LHC have collided a variety of nuclei over a wide range of energies, allowing us to produce and characterize the properties of the QGP as well as to study the production of different final states generated in ultraperipheral heavy ion collisions. In the coming years, new LHC data at larger energies ($\sqrt{s} = 5.5$ and 10.6 TeV) and the implementation of the nuclear programme at the FCC ($\sqrt{s} = 39$ TeV) are expected to advance our understanding of the nature of the hot and dense QCD matter produced in these collisions \cite{Dainese:2016gch}.
However, the accuracy with which the properties of the QGP can be constrained in these future collisions strongly depends on the knowledge of the incoming nuclear wave functions at small - $x$. In this paper, motivated by the studies performed in Refs. \cite{vicmag,Lappi,armestoamir,Diego1,run2,heikkeplb,guzey_tdist,contreras,Luszczak:2017dwf,Guzey:2018tlk,Diego2,Sambasivam:2019gdd} for smaller center - of - mass energies, we will investigate the possibility of determining the presence of gluon saturation effects and of estimating the magnitude of the associated non-linear corrections to the QCD dynamics in ultraperipheral $PbPb$ collisions for the energies of the next run of the LHC ($\sqrt{s} = 5.5$ TeV) \cite{hl_lhc}, as well as for the energies of the High -- Energy LHC ($\sqrt{s} = 10.6$ TeV) \cite{he_lhc} and Future Circular Collider ($\sqrt{s} = 39$ TeV) \cite{fcc}. In particular, we will consider the exclusive photoproduction of $J/\Psi$ on heavy nuclei, which is driven by the gluon content of the nucleus and is strongly sensitive to non-linear effects (parton saturation). We will estimate the contribution of the coherent and incoherent $J/\Psi$ processes, which provide different insights about the nuclear structure and the QCD dynamics at high energies \cite{Toll,Mantysaari:2016ykx,Mantysaari:2016jaz,cepila}. Such processes are represented in Fig. \ref{fig:diagrama}, where the Pomeron (${I\!\!P}$) represents a color singlet exchange between the dipole and the target. If the nucleus scatters elastically, the process is called coherent production, and the associated cross section measures the average spatial distribution of gluons in the target. On the other hand, if the nucleus scatters inelastically, the process is denoted incoherent production. In this case, one sums over all final states of the target nucleus, except those that contain particle production. The associated cross section probes the fluctuations and correlations in the nuclear gluon density.
In both cases, the final state is characterized by two rapidity gaps. As demonstrated in Refs. \cite{Toll,Mantysaari:2016ykx,Mantysaari:2016jaz,cepila}, the coherent production probes the averaged density profile of the gluon density, while the incoherent cross sections constrain the event - by - event fluctuations of the gluonic fields in the target. In our analysis, we will describe the nuclear profile taking into account the possible nucleon configurations in the nuclear wave function, assuming that each nucleon in the nucleus has a Gaussian profile of width $B_p$, centered at random positions sampled from a Woods-Saxon nuclear profile \cite{Toll,cepila}. The numerical calculations will be performed using the Sar{\it t}re event generator proposed in Ref. \cite{Toll} and detailed in Ref. \cite{sartre}. In order to estimate the impact of the non - linear (saturation) effects, we will compare the full predictions with those obtained disregarding these effects. As we will demonstrate below, the total cross sections for coherent and incoherent processes, as well as the corresponding rapidity and transverse momentum distributions, are sensitive to the non - linear effects. Our results indicate that the study of the exclusive $J/\Psi$ photoproduction in ultraperipheral $PbPb$ collisions at the LHC, HE - LHC and FCC can be useful to discriminate between the saturation and non - saturation scenarios. This paper is organized as follows. In the next Section, we present a brief review of the formalism used to estimate the coherent and incoherent cross sections as well as the model for the nuclear profile used in our calculations. In Section \ref{sec:results} we present our results for the coherent and incoherent cross sections, considering the kinematical range that will be probed by the LHC, HE - LHC and FCC. Finally, in Section \ref{sec:conc} we summarize our main conclusions.
\begin{figure}[t] \begin{tabular}{cc} {\includegraphics[width=0.5\textwidth]{vm_PbPb_coh.pdf}} & {\includegraphics[width=0.5\textwidth]{vm_PbPb_incoh.pdf}} \\ (a) & (b) \end{tabular} \caption{Typical diagrams for the (a) coherent and (b) incoherent $J/\Psi$ photoproduction in ultraperipheral $PbPb$ collisions.} \label{fig:diagrama} \end{figure} \section{Formalism} \label{form} The coherent and incoherent $J/\Psi$ photoproduction in ultraperipheral $PbPb$ collisions are represented by the diagrams shown in Fig. \ref{fig:diagrama}. As pointed out before, the final state will be characterized by two rapidity gaps, i.e., the outgoing particles are separated by a large region in rapidity in which there is no additional hadronic activity observed in the detector. In the case of coherent interactions (left panel), the nucleus scatters elastically and remains intact in the final state. In contrast, in incoherent interactions (right panel), the nucleus scatters inelastically, i.e., breaks up due to the $p_T$ ($=\sqrt{-t}$) kick given to the nucleus. Theoretically, it is expected that the coherent production dominates at small squared transverse momentum transfer $t$ ($|t|\cdot R^2/3 \ll 1$, where $R$ is the nuclear radius), with its signature being a sharp forward diffraction peak. On the other hand, incoherent production is expected to dominate at large $t$ ($|t|\cdot R^2/3 \gg 1$), with the associated $t$-dependence being to a good accuracy the same as in the production off free nucleons.
In ultraperipheral collisions, the $PbPb$ cross sections for the coherent and incoherent processes can be written in a factorized form, given by the so-called equivalent photon approximation \cite{epa}, with the differential cross sections being expressed as follows \begin{eqnarray} \frac{d\sigma_{coh}}{dy\,dt} = n_{Pb}(y) \, \cdot \, \left.\frac{d\sigma}{dt}(\gamma Pb \rightarrow J/\Psi Pb;y)\right|_{coh} + n_{Pb}(-y) \, \cdot \, \left.\frac{d\sigma}{dt}(\gamma Pb \rightarrow J/\Psi Pb; -y)\right|_{coh}\,\,\,, \label{dsigdy_coh} \end{eqnarray} and \begin{eqnarray} \frac{d\sigma_{inc}}{dy\,dt} = n_{Pb}(y) \, \cdot \, \left.\frac{d\sigma}{dt}(\gamma Pb \rightarrow J/\Psi X;y)\right|_{inc} + n_{Pb}(-y) \, \cdot \, \left.\frac{d\sigma}{dt}(\gamma Pb \rightarrow J/\Psi X; -y)\right|_{inc}\,\,\,, \label{dsigdy_inc} \end{eqnarray} where $y$ is the rapidity of the $J/\Psi$ in the final state, which determines the photon energy $\omega$ in the collider frame and, consequently, the photon - nucleus center of mass energy $W = \sqrt{4 \omega E}$, where $E = \sqrt{s_{NN}}/2$ and $\sqrt{s_{NN}}$ is the total collision energy per nucleon pair in the center-of-mass frame. As both incident nuclei act as sources of photons \cite{upc}, the contributions associated with photon - Pomeron and Pomeron - photon interactions are taken into account in the above equations. Moreover, $n_{A}$ denotes the equivalent photon spectrum of the relativistic incident nucleus. As in our previous studies \cite{run2,Diego1,Diego2} we will assume a point -- like form factor for the nucleus, which implies that \cite{upc} \begin{eqnarray} n_{A}(\omega) = \frac{2Z^{2}\alpha_{em}}{\pi } \left[ \xi K_{0}(\xi) K_{1}(\xi) -\frac{\xi^{2}}{2} \left( K_{1}^{2}(\xi) - K_{0}^{2}(\xi) \right ) \right] , \end{eqnarray} where $ \xi = \omega \left( 2R \right) / \gamma_{L}$, with $\gamma_L$ being the Lorentz factor. In our analysis we are assuming that the photons emitted are coherently radiated by the whole nucleus.
Such a condition imposes that the minimum photon wavelength must be greater than the nuclear radius. As a consequence, the photon virtuality must satisfy $Q^2 = -q^2 \le 1/R^2$, with the photon four -- momentum being $q^{\mu} = (\omega, \vec{q_{\perp}},q_z = \omega/v)$, where $\vec{q_{\perp}}$ is the transverse momentum of the photon in a given frame, where the projectile moves with velocity $v$. It implies that $Q^2 = \omega^2/\gamma_L^2 + q_{\perp}^2$. The coherence condition limits the maximum energy of the photon to $\omega < \omega_{\mbox{max}} \approx \gamma_L/R$ and the perpendicular component of its momentum to ${q_{\perp}} \le 1/R$, which corresponds to $\approx 28$ MeV for $Pb$ beams. Consequently, the photon virtuality can be neglected and the photons can be considered as being real. The maximum photon energy can also be derived considering that the maximum possible momentum in the longitudinal direction is modified by the Lorentz factor, $\gamma_L$, due to the Lorentz contraction of the nucleus in that direction. It implies $\omega_{\mbox{max}} \approx \gamma_L/R$ and, consequently, $W^{\mbox{max}} = \sqrt{2\,\omega_{\mbox{max}}\, \sqrt{s_{NN}}}$. Considering the values of $\sqrt{s_{NN}}$ for $PbPb$ collisions at the LHC ($\sqrt{s_{NN}} = 5.5$ TeV) and FCC ($\sqrt{s_{NN}} = 39$ TeV), we obtain that the maximum photon -- nucleon center -- of -- mass energies, $W^{\mbox{max}}$, reached in these collisions are $0.95$ TeV and $6.8$ TeV, respectively. Such values are much larger than those studied at HERA and that will be accessed in the future electron -- ion collider. Therefore, the study of photon -- nucleus interactions at the LHC and FCC will allow us to probe the QCD dynamics in an unexplored kinematical range.
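As a cross-check of the numbers quoted above, the photon spectrum and the kinematical limits can be evaluated numerically. The sketch below is a pure-Python illustration, not part of the Sar{\it t}re calculation: the nuclear radius $R \approx 7$ fm, $Z = 82$ and the proton mass used to build $\gamma_L$ are our assumed inputs, and the modified Bessel functions are obtained from their integral representation.

```python
import math

def bessel_k(n, xi, t_max=20.0, steps=4000):
    # K_n(xi) = \int_0^infty exp(-xi*cosh(t)) * cosh(n*t) dt, evaluated
    # with the trapezoidal rule (the integrand decays to 0 well before t_max)
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-xi * math.cosh(t)) * math.cosh(n * t)
    return total * h

def photon_spectrum(omega_gev, Z=82, R_fm=7.0, gamma_L=2931.0):
    # Equivalent photon number n_A(omega) for a point-like form factor,
    # with xi = omega * 2R / gamma_L and hbar*c = 0.1973 GeV fm
    alpha_em = 1.0 / 137.036
    xi = omega_gev * (2.0 * R_fm / 0.1973) / gamma_L
    k0, k1 = bessel_k(0, xi), bessel_k(1, xi)
    return (2.0 * Z ** 2 * alpha_em / math.pi) * (
        xi * k0 * k1 - 0.5 * xi ** 2 * (k1 ** 2 - k0 ** 2))

def w_max_gev(sqrt_snn_gev, R_fm=7.0, m_p=0.938):
    # Coherence condition: omega_max ~ gamma_L / R, W_max = sqrt(2*omega_max*sqrt(s_NN))
    gamma_L = sqrt_snn_gev / (2.0 * m_p)
    omega_max = gamma_L * 0.1973 / R_fm  # in GeV
    return math.sqrt(2.0 * omega_max * sqrt_snn_gev)
```

With these assumed inputs, `w_max_gev(5500.0)` and `w_max_gev(39000.0)` land close to the $0.95$ TeV and $6.8$ TeV values quoted in the text.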
As pointed out in the Introduction, to establish the dynamics at small - $x$ is fundamental to the success of the heavy ion physics program. The main input in Eqs. (\ref{dsigdy_coh}) and (\ref{dsigdy_inc}) are the differential cross sections, $d\sigma/dt$, for the coherent and incoherent interactions. In order to estimate these quantities we will take into account the distinct nucleon configurations of the nucleus and average over all possible configurations. For coherent interactions, in which the nucleus is required to remain in its ground state, the average over the configurations of the nuclear wave function, denoted by $\left\langle ... \right\rangle$ hereafter, is taken at the level of the scattering amplitude. Consequently, the coherent cross section is obtained by averaging the amplitude before squaring it and the differential distribution will be given by \begin{equation}\label{eq:xsec-coh} \left.\frac{d\sigma^{\gamma Pb \rightarrow J/\Psi \,Pb}}{dt}\right|_{coh} = \frac{1}{16\pi}\left| \left\langle \mathcal{A}(x, \Delta) \right\rangle \right|^2\,\,, \end{equation} where $x = (M^2 -t)/(W^2)$, with $M$ being the $J/\Psi$ mass, and $\Delta = \sqrt{-t}$ is the momentum transfer. On the other hand, for incoherent interactions, the average over configurations is at the cross section level. In this case, the nucleus can break up and the resulting incoherent cross section will be proportional to the variance of the amplitude with respect to the nucleon configurations of the nucleus, i.e., it will measure the fluctuations of the gluon density inside the nucleus. 
The differential cross sections for incoherent interactions will be expressed as follows: \begin{equation}\label{eq:xsec-inc} \left.\frac{d\sigma^{\gamma Pb \rightarrow J/\Psi\,X}}{dt}\right|_{inc} = \frac{1}{16\pi} \left( \left\langle\left| \mathcal{A}(x, \Delta) \right|^2 \right\rangle - \left| \left\langle \mathcal{A}(x, \Delta) \right\rangle \right|^2\right), \end{equation} where $X = Pb^*$ represents the dissociative state. In our calculations we will include the skewness correction by multiplying the coherent and incoherent cross sections by the factor $R_g^2$ as given in Ref. \cite{Shuvaev:1999ce}. In the color dipole formalism, the scattering amplitude $\mathcal{A}(x, \Delta)$ can be factorized in terms of the fluctuation of the photon into a $q \bar{q}$ color dipole, the dipole-nucleus scattering by a color singlet exchange and the recombination into the exclusive final state $J/\Psi$, being given by \begin{eqnarray} {\cal A}({x},\Delta) = i \,\int d^2\mbox{\boldmath $r$} \int \frac{dz}{4\pi} \int \, d^2\mbox{\boldmath $b$} \, e^{-i[\mbox{\boldmath $b$} -(1-z)\mbox{\boldmath $r$}]\cdot\mbox{\boldmath $\Delta$}} \,\, (\Psi^{V*}\Psi) \,\,\frac{d\sigma_{dA}}{d^2\mbox{\boldmath $b$}}({x},\mbox{\boldmath $r$},\mbox{\boldmath $b$}) \label{amp} \end{eqnarray} where $(\Psi^{V*}\Psi)$ denotes the wave function overlap between the photon and the $J/\Psi$ wave functions, which will be described using the Boosted Gaussian model (For details see e.g. Ref. \cite{run2}). The variables $\mbox{\boldmath $r$}$ and $z$ are the dipole transverse radius and the momentum fraction of the photon carried by a quark (an antiquark carries then $1-z$), respectively, and $\mbox{\boldmath $b$}$ is the impact parameter of the dipole relative to the target.
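The averaging prescription behind the coherent and incoherent cross sections can be illustrated with a small numerical toy: the coherent piece averages the amplitude over nucleon configurations before squaring, while the incoherent piece is the variance of the amplitude, and the two add up to $\langle|\mathcal{A}|^2\rangle/16\pi$. The complex amplitudes below are synthetic, purely for illustration.

```python
import math
import random

def coherent_incoherent(amps):
    # Coherent: |<A>|^2 / 16*pi (average at the amplitude level).
    # Incoherent: (<|A|^2> - |<A>|^2) / 16*pi (variance over configurations).
    n = len(amps)
    mean_amp = sum(amps) / n
    mean_sq = sum(abs(a) ** 2 for a in amps) / n
    coh = abs(mean_amp) ** 2 / (16.0 * math.pi)
    inc = (mean_sq - abs(mean_amp) ** 2) / (16.0 * math.pi)
    return coh, inc

# Toy ensemble: one complex amplitude per sampled nucleon configuration
rng = random.Random(1)
amps = [complex(1.0 + 0.2 * rng.gauss(0.0, 1.0), 0.2 * rng.gauss(0.0, 1.0))
        for _ in range(500)]
coh, inc = coherent_incoherent(amps)
```

By construction `coh + inc` equals the configuration average of $|\mathcal{A}|^2/16\pi$, which is the identity the decomposition relies on.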
Moreover, ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ is the dipole-nucleus cross section (for a dipole at impact parameter $\mbox{\boldmath $b$}$) which encodes all the information about the hadronic scattering, and thus about the non-linear and quantum effects in the hadron wave function. How to treat the dipole - nucleus interaction is still an open question due to the complexity of the impact parameter dependence. In principle, ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ can be derived using the Color Glass Condensate (CGC) formalism \cite{CGC}, which is characterized by an infinite hierarchy of equations, the so-called Balitsky-JIMWLK equations \cite{BAL,CGC}, which reduces in the mean field approximation to the Balitsky-Kovchegov (BK) equation \cite{BAL,kov}. In our analysis, following the studies presented in Refs. \cite{run2,Toll,Diego1,cepila}, we will describe the dipole - nucleus cross section using the Glauber-Gribov formalism \cite{glauber,gribov,mueller}, which implies that ${d\sigma_{dA}}/{d^2\mbox{\boldmath $b$}}$ is given by \begin{eqnarray} \frac{d\sigma_{dA}}{d^2\mbox{\boldmath $b$}} = 2\,\left( 1 - \exp \left[-\frac{1}{2} \, \sigma_{dp}(x,\mbox{\boldmath $r$}^2) \,T_A(\mbox{\boldmath $b$})\right]\right) \,\,, \label{enenuc} \end{eqnarray} where $\sigma_{dp}$ is the dipole-proton cross section and $T_A(\mbox{\boldmath $b$})$ is the nuclear profile function. We will describe the nuclear profile $T_A(\mbox{\boldmath $b$})$ taking into account all possible nucleon configurations in the nuclear wave function. Following Refs. \cite{Toll,cepila}, we will assume that each nucleon in the nucleus has a Gaussian profile of width $B_p$, centered at random positions $\mbox{\boldmath $b$}_i$ sampled from a Woods-Saxon nuclear profile as follows \begin{equation}\label{eq:Ths0} T_A(\mbox{\boldmath $b$}) = \frac{1}{2\pi B_p} \sum_{i=1}^{A} \exp\left[ - \frac{(\mbox{\boldmath $b$} - \mbox{\boldmath $b$}_i)^2}{2B_p} \right] \,\,.
\end{equation} Moreover, as in Ref. \cite{Toll}, the dipole - proton cross section will be given by \begin{equation} \sigma_{dp}(x,\mbox{\boldmath $r$}^2) = \frac{\pi^2 r^{2}}{ N_{c}} \alpha_{s}(\mu^{2}) \,\,xg\left(x, \mu^2 = \frac{C}{r^{2}} + \mu_{0}^{2}\right) \,\,\, \end{equation} where the gluon distribution evolves via the DGLAP equation, with the initial condition at $\mu_{0}^{2}$ taken to be $ xg(x,\mu_{0}^{2}) = A_{g}x^{-\lambda_{g}} (1-x)^{6}$. In this work, we assume the parameters $B_p, A_g, \lambda_g, C$ and $\mu_0^2$ obtained in Ref. \cite{ipsat_heikke} for the IP-SAT model. We will denote by b - Sat the predictions derived using Eq. (\ref{enenuc}) as input in the calculations. In order to estimate the impact of non-linear corrections to the QCD dynamics, we will also estimate the observables assuming that the dipole - nucleus cross section is given by: \begin{eqnarray} \frac{d\sigma_{dA}}{d^2\mbox{\boldmath $b$}} = \sigma_{dp}(x,\mbox{\boldmath $r$}^2) \,T_A(\mbox{\boldmath $b$}) \,\,, \label{enenuc_lin} \end{eqnarray} which disregards the effect of the multiple elastic dipole rescatterings. The associated predictions will be denoted by b - Non Sat hereafter. For this case, we assume the parameters $B_p, A_g, \lambda_g, C$ and $\mu_0^2$ obtained in Ref. \cite{ipsat_heikke} for the IP-NONSAT model.
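The construction of the fluctuating nuclear profile can be sketched in a few lines of pure Python: sample $A$ nucleon centers from a Woods-Saxon density and sum Gaussians of width $B_p$ in the impact-parameter plane. The Woods-Saxon parameters and the treatment of $B_p$ as an area in fm$^2$ are illustrative assumptions, not the fitted values of Ref. \cite{ipsat_heikke} (where $B_p$ is quoted in GeV$^{-2}$).

```python
import math
import random

def sample_woods_saxon(A=208, R=6.62, a=0.546, seed=0):
    # Sample A nucleon positions from rho(r) ~ 1 / (1 + exp((r - R)/a))
    # by rejection sampling; lengths in fm (R, a are typical Pb values)
    rng = random.Random(seed)
    r_max = R + 10.0 * a
    centers = []
    while len(centers) < A:
        r = r_max * rng.random()
        p = r * r / (1.0 + math.exp((r - R) / a))  # p(r) ~ r^2 * rho(r)
        if rng.random() * r_max * r_max < p:       # envelope M = r_max^2 >= p
            cos_t = 2.0 * rng.random() - 1.0
            phi = 2.0 * math.pi * rng.random()
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            # keep only the transverse (impact-parameter plane) components
            centers.append((r * sin_t * math.cos(phi),
                            r * sin_t * math.sin(phi)))
    return centers

def T_A(bx, by, centers, B_p=4.0):
    # Nuclear profile of Eq. (8): sum of normalized Gaussians of width B_p
    # centered at the sampled nucleon positions
    norm = 1.0 / (2.0 * math.pi * B_p)
    return norm * sum(
        math.exp(-((bx - cx) ** 2 + (by - cy) ** 2) / (2.0 * B_p))
        for cx, cy in centers)
```

Each call to `sample_woods_saxon` with a different seed yields one nucleon configuration, and averaging observables over many such configurations is what the $\left\langle ... \right\rangle$ brackets denote in the text.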
\begin{figure}[t] \begin{tabular}{ccc} {\includegraphics[width=0.33\textwidth]{PbPb_rapidity_jpsi_5dot5TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_rapidity_jpsi_10dot6TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_rapidity_jpsi_39TeV.pdf}} \\ (a) & (b) & (c) \end{tabular} \caption{Rapidity distributions for the coherent and incoherent $J/\Psi$ photoproduction in $PbPb$ collisions for the (a) LHC, (b) HE - LHC and (c) FCC energies.} \label{fig:rapidity} \end{figure} \section{Results} \label{sec:results} In what follows, we will present predictions for the coherent and incoherent $J/\Psi$ photoproduction in $PbPb$ collisions for the energies of the next run of the LHC ($\sqrt{s} = 5.5$ TeV) \cite{hl_lhc}, as well as for the energies of the High -- Energy LHC ($\sqrt{s} = 10.6$ TeV) \cite{he_lhc} and Future Circular Collider ($\sqrt{s} = 39$ TeV) \cite{fcc}. The numerical calculations will be performed using the Sar{\it t}re event generator \cite{sartre}. In order to perform the averages present in the coherent and incoherent cross sections, we have considered 500 distinct nucleon configurations. As demonstrated in \cite{Toll}, this number of configurations is enough to obtain a good description of the cross sections for $|t| \le 0.08$ GeV$^2$, which is the range of interest in our study. Initially, let's estimate the rapidity distribution, which is one of the main observables that can be directly measured at the LHC and FCC. The predictions for coherent and incoherent interactions can be obtained from Eqs. (\ref{dsigdy_coh}) and (\ref{dsigdy_inc}) by integrating over all values of $t$. The results are presented in Fig. \ref{fig:rapidity}. One has that the coherent interactions dominate, in agreement with the results presented in Refs. \cite{heikkeplb,Diego1,contreras} for smaller center - of - mass energies. 
Such a result is expected, since the coherent production is characterized by a sharp forward diffraction peak, being much larger than the incoherent one for small values of $|t|$ (see below). Moreover, the values of the rapidity distribution at midrapidity increase with the energy, with the increase depending on the modeling of the QCD dynamics. The b-Sat predictions are a factor $\gtrsim 1.5$ smaller than the b-Non Sat ones. The associated cross sections, obtained by integrating over the full rapidity range as well as over the typical ranges covered by central and forward detectors, are presented in Table \ref{tab:cross}. The cross sections are of the order of mb, which implies that the number of events per year at the LHC / HE-LHC / FCC will be larger than $10^6/\,10^7/\,10^8$, if we assume the expected integrated luminosity to be ${\cal{L}} = 3.0 /\,10 /\,110$ $nb^{-1}$ \cite{hl_lhc,he_lhc,fcc}. Such a large number of events implies that a detailed analysis of the coherent and incoherent processes is, in principle, feasible. Our results indicate that the measurement of the rapidity distribution can be useful to discriminate between the b-Sat and b-Non Sat scenarios.
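The quoted event rates follow directly from the cross sections and the projected integrated luminosities; a minimal check (using the coherent b-Sat totals from Table \ref{tab:cross} and the luminosities cited above):

```python
def expected_events(sigma_mb, lumi_nb_inv):
    # N = sigma * L_int; since 1 mb = 1e6 nb, sigma in mb times L in nb^-1
    # needs a factor of 1e6 to give the event yield
    return sigma_mb * 1.0e6 * lumi_nb_inv

# (coherent b-Sat total cross section in mb, integrated luminosity in nb^-1)
runs = {"LHC": (22.6, 3.0), "HE-LHC": (39.5, 10.0), "FCC": (98.7, 110.0)}
yields = {name: expected_events(s, lum) for name, (s, lum) in runs.items()}
```

The resulting yields exceed the $10^6/\,10^7/\,10^8$ per-year figures stated in the text for the three machines.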
\begin{table}[t] \centering \begin{tabular}{|c||c|c||c|c||c|c|}\hline {\bf PbPb Collisions} & \multicolumn{2}{|c||}{\bf $\sqrt{s}=5.5$ TeV} & \multicolumn{2}{|c||}{\bf $\sqrt{s} = 10.6$ TeV} & \multicolumn{2}{|c|}{\bf $\sqrt{s} = 39$ TeV} \\ \hline {\bf Dipole Model} & b-Sat & b-Non Sat & b-Sat & b-Non Sat & b-Sat & b-NonSat \\ \hline \hline {\bf Coherent (Total)} & 22.6 & 36.8 & 39.5 & 67.0 & 98.7 & 184.4 \\ \hline {\bf Coherent ($|y|<2.0$)} & 15.8 & 25.9 & 24.6 & 41.9 & 51.1 & 94.4 \\ \hline {\bf Coherent ($2.0 < y < 4.5$)} & 3.4 & 5.4 & 7.4 & 12.6 & 21.4 & 41.5 \\ \hline \hline {\bf Incoherent (Total)} & 4.2 & 9.0 & 7.2 & 16.5 & 17.3 & 45.6 \\ \hline {\bf Incoherent ($|y|<2.0$)} & 2.9 & 6.3 & 4.4 & 10.3 & 8.9 & 23.3 \\ \hline {\bf Incoherent ($2.0<y<4.5$)} & 0.7 & 1.3 & 1.4 & 3.1 & 3.7 & 10.2 \\ \hline \hline \end{tabular} \caption{Cross sections, in mb, for the coherent and incoherent $J/\Psi$ photoproduction in $PbPb$ collisions at the LHC, HE - LHC and FCC considering the b-Sat and b-Non Sat dipole models.} \label{tab:cross} \end{table} Another observable of interest is the squared momentum transfer ($t$) distribution for a fixed rapidity. As demonstrated in previous studies \cite{armestoamir,Diego1,Diego2}, such a distribution is an important alternative to probe the QCD dynamics at high energies and provides information about the spatial distribution of the gluons in the target and about fluctuations of the color fields. Our predictions are presented in Fig. \ref{fig:tdist} for distinct energies considering central (upper panels) and forward (lower panels) rapidities. As expected from previous studies \cite{Diego1,heikkeplb}, the coherent contribution dominates at small - $|t|$ and the incoherent one at large values of the momentum transfer, which is associated with the fact that, as the momentum kick given to the nucleus increases, the probability that it breaks up becomes larger.
As a consequence, the $J/\Psi$ production at large - $|t|$ is dominated by incoherent processes. In addition, the coherent cross sections clearly exhibit the typical diffractive pattern and are characterized by a sharp forward diffraction peak. In contrast, the incoherent cross section is characterized by a flat $t$ - dependence, decreasing when $|t| \rightarrow 0$. Regarding the impact of the saturation effects, one has that the normalization of the incoherent predictions is modified by the non - linear effects, with the difference between the b-Sat and b-Non Sat predictions increasing with the energy. A similar effect is also observed in the coherent case. However, for the coherent processes, the position of the dips is sensitive to the presence of the saturation effects, in agreement with the results obtained in Refs. \cite{Diego1,Diego2}. Our results indicate that the position of the second dip is dependent on the description of the QCD dynamics, with the predictions becoming more distinct at larger energies. However, it is important to emphasize that the experimental separation of coherent processes at large - $|t|$ is still a challenge due to the dominance of the incoherent interactions. An alternative is the detection of the fragments of the nuclear breakup produced in the incoherent processes, e.g. the detection of emitted neutrons by zero - degree calorimeters \cite{Caldwell,Toll}.
\begin{figure}[t] \begin{tabular}{ccc} {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq0_5dot5TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq0_10dot6TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq0_39TeV.pdf}} \\ {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq3_5dot5TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq3_10dot6TeV.pdf}} & {\includegraphics[width=0.33\textwidth]{PbPb_tvertex_jpsieq3_39TeV.pdf}} \\ \end{tabular} \caption{Transverse momentum distributions for the coherent and incoherent $J/\Psi$ photoproduction in $PbPb$ collisions for the LHC, HE - LHC and FCC energies considering the central (upper panels) and forward (lower panels) rapidity ranges.} \label{fig:tdist} \end{figure} \section{Summary} \label{sec:conc} Ultraperipheral heavy ion collisions at the LHC and FCC are an important alternative to constrain the QCD dynamics at high energies and, consequently, the description of the initial conditions for central and semi - central collisions. In particular, the increase in the center - of - mass energy and integrated luminosity in the forthcoming experiments opens up new opportunities to probe the nuclear wave function in an unexplored energy range, where non - linear (saturation) effects are expected to significantly contribute. In this paper, we have performed a detailed investigation of the coherent and incoherent $J/\Psi$ photoproduction in $PbPb$ collisions considering the possible nucleon configurations in the nuclear wave function and taking into account the non - linear corrections to the QCD dynamics. Moreover, a comparison with the results derived disregarding these corrections was also presented.
We have derived predictions for the cross sections of the coherent and incoherent processes considering the rapidity ranges covered by central and forward detectors, which demonstrated that the event rates of these processes are very large and that they are sensitive to saturation effects. Moreover, predictions for the rapidity and transverse momentum distributions were presented. In particular, these results indicate that the experimental analysis of the transverse momentum distribution is useful to discriminate between different approaches for the QCD dynamics as well as to improve our description of the gluon saturation effects. Finally, our results indicate that a future experimental analysis of the coherent and incoherent processes will be useful to improve our understanding of the QCD dynamics at high energies. \begin{acknowledgments} VPG acknowledges useful discussions about coherent and incoherent interactions with Jan Cepila, Michal Krelina and Wolfgang Schafer. This work was partially financed by the Brazilian funding agencies CNPq, CAPES, FAPERGS, FAPERJ and INCT-FNA (process numbers 464898/2014-5 and 88887.461636/2019-00). \end{acknowledgments} \hspace{1.0cm}
\section{Introduction} \label{section-1} In recent years, Convolutional Neural Networks (CNNs) have been widely used in computer vision tasks such as image classification \cite{krizhevsky2012imagenet}, object detection \cite{ren2015faster}, face recognition \cite{sun2015deepid3}, and object tracking \cite{bertinetto2016fully}. Modern CNN models can achieve state-of-the-art accuracy by extracting rich features and significantly outperform traditional methods \cite{lecun2015deep}. However, running large scale CNN models often requires large memory space and consumes huge computational resources. Many CNN accelerator architectures have been invented to improve the CNN inference throughput and energy efficiency in both industrial \cite{jouppi2017datacenter, jouppi2018motivation} and academic \cite{chen2016eyeriss, parashar2017scnn, imani2019floatpim, he2018joint, li2018network, ren2019admm, wen2017coordinating, wen2017learning, chen2019slide} communities. Among them, exploiting network sparsity has been observed as one of the most efficient approaches. For example, by applying weight pruning, the model sizes of AlexNet and VGG-16 can be reduced by $35\times$ and $49\times$, respectively \cite{han2015deep}, which provides a huge potential for accelerating CNN inference.
\begin{figure}[ht] \centering \begin{subfigure}[t]{6.85pc} \centering \includegraphics{1-no-sparse.pdf} \caption{Inference without sparsity.} \label{fig-1-1:no-sparse} \end{subfigure} \begin{subfigure}[t]{13.85pc} \centering \includegraphics{1-weight-sparse.pdf} \caption{Inference with weight sparsity.} \label{fig-1-2:weight-sparse} \end{subfigure} \par \begin{subfigure}[t]{6.85pc} \centering \includegraphics{1-activation-sparse.pdf} \caption{Inference with activation sparsity.} \label{fig-1-3:activation-sparse} \end{subfigure} \begin{subfigure}[t]{13.85pc} \centering \includegraphics{1-wg-sparse.pdf} \caption{Training with weight gradient sparsity.} \label{fig-1-4:wg-sparse} \end{subfigure} \par \begin{subfigure}[t]{16pc} \centering \includegraphics{1-ours.pdf} \caption{Ours: training with activation gradient sparsity.} \label{fig-1-5:ours} \end{subfigure} \caption{CNN sparsity exploration.} \label{fig-format} \vspace{-1.2pc} \end{figure} In fact, by using weight sparsity, a sparse-aware computing architecture --- EIE \cite{han2016eie} achieved $13\times$ speedups and $3400\times$ energy efficiency when compared to a GPU implementation of the same DNN without compression. \Cref{fig-1-1:no-sparse} and \cref{fig-1-2:weight-sparse} show the original CNN inference process and the one with weight pruning and sparsity utilization, respectively. One step further, SCNN \cite{parashar2017scnn} achieved significant speedup in CNN inference by utilizing both weight sparsity and the \textit{natural sparsity} of activations. \Cref{fig-1-3:activation-sparse} shows the basic idea of this procedure. Most works have focused on CNN inference; the sparsity of the training process, however, has been less studied. Compared with inference, CNN training demands much more computational resources: it typically introduces about $3 \times$ the computational cost and consumes $10\times$ to $100 \times$ the memory space of inference.
To utilize gradient sparsity in CNN training, \cite{wen2017terngrad,he2019simultaneously} proposed methods that restrict weight gradients to three numerical levels $\{-1, 0, 1\}$ to reduce communication in distributed training, which can be considered a mixture of pruning and quantization. This scheme is shown in \cref{fig-1-4:wg-sparse}. However, it can hardly reduce computation, since the main training process (forward and back-propagation) still operates on dense data. Many training accelerators also try to utilize sparsity (see \cite{comparison}), but none of them exploits gradient sparsity. Since activation gradients are an operand of both the backward pass and the weight-gradient computation, their sparsity can significantly reduce the computation of training. To overcome the limitations of previous works and fully exploit the sparsity of CNN training, we propose a layer-wise gradient pruning method that improves training performance remarkably by generating \textit{artificial sparsity} in the activation gradients. Our scheme is shown in \cref{fig-1-5:ours}. The key contributions of this paper are: \begin{itemize} \item We present a layer-wise gradient pruning algorithm that generates high artificial sparsity in the activation gradients with negligible overhead. Our experiments show that it hardly degrades the accuracy or convergence rate of CNNs. \item We propose a dataflow based on 1-D convolutions. Combined with our pruning algorithm, it can exploit all kinds of sparsity in training. \item To support our sparse dataflow, we design an accelerator architecture for CNN training that benefits from both activation and gradient sparsity. \end{itemize} The proposed \textit{SparseTrain} scheme is evaluated with AlexNet \cite{krizhevsky2012imagenet} and ResNet \cite{he2016deep} on CIFAR and ImageNet.
Evaluation results on AlexNet with natural sparsity show that \textit{SparseTrain} achieves a $2.7\times$ speedup and $2.2\times$ better energy efficiency on average, compared with the dense baseline. The remainder of this paper is organized as follows. Section \ref{section-2} introduces CNN training basics. Section \ref{section-3} presents the gradient pruning algorithm, together with the threshold determination and prediction methods. The details of our sparse training dataflow are presented in Section \ref{section-4}. Section \ref{section-5} demonstrates the sparsity-aware architecture designed for the proposed training dataflow. Evaluation results are discussed in Section \ref{section-6}. Section \ref{section-7} concludes this paper. \section{Preliminary} \label{section-2} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{fig-background-train} \caption{CNN training demonstration.} \label{fig-background-train} \vspace{-1.2pc} \end{figure} A typical CNN training procedure is shown in \cref{fig-background-train}. It contains three stages: Forward, Backward, and Weight Update. The \textbf{Forward} stage proceeds from the input layer to the final layer, passing through convolutional (\texttt{CONV}), \texttt{ReLU}, and \texttt{MaxPool} layers. The input activations $\mathbf{I}$ of the $n$-th \texttt{CONV} layer are formulated as a 3-D tensor. The size of $\mathbf{I}$ is determined by the number of input channels $C$, height $H$, and width $W$, and we denote the input activations of the $i$-th channel as $\mathbf{I}_i$. The output activations $\mathbf{O}$ are also formulated as a 3-D tensor with $F$ output channels. Weights are formulated as a 4-D tensor with size $K \times K \times C \times F$, where each convolution kernel $\mathbf{W}_{i, j}$ is a 2-D tensor of size $K \times K$. The vector $\mathbf{b}$ is the bias applied to $\mathbf{O}$ after convolution.
If 2-D convolution is denoted by ``$\ast$'', the \texttt{CONV} layer can be represented as \[ \mathbf{O}_i = \sum_{j = 0}^{C} {\mathbf{W}_{i, j}} \ast \mathbf{I}_j + \mathbf{b}_i, \quad i = 0, \cdots, F. \] \texttt{ReLU} and \texttt{MaxPool} are non-linear operation layers. The \texttt{ReLU} layer applies the point-wise function $f(x)=\max(0, x)$ to all activations, and the \texttt{MaxPool} layer selects the maximum value from each window of activations as output. Hence, activations usually become sparse after \texttt{ReLU} and \texttt{MaxPool} layers. The non-zero patterns generated by these layers are recorded as masks and reused in the backward stage. The \textbf{Backward} stage includes two steps: \begin{itemize} \item \textbf{Gradient To Activations (GTA)}: it calculates the activation gradients (the derivatives of the loss with respect to each layer's activations). According to the chain rule, the gradients are propagated from the loss function back to the input layer. The \texttt{CONV} layer in the GTA step is represented as \[ {\mathrm{d}\mathbf{I}}_j = \sum_{i = 0}^{F} {\mathrm{d}\mathbf{O}_i \ast {\mathbf{W}^+}_{i, j}}, \quad j = 0, \cdots, C, \] where $\mathrm{d}\mathbf{I}_j$ denotes the input activation gradients (derivatives with respect to the input activations) of the $j$-th channel, $\mathrm{d}\mathbf{O}_i$ the output activation gradients of the $i$-th channel, $\mathbf{W}_{i, j}$ the $i$-th filter of the $j$-th channel, and $\mathbf{W}^+_{i, j}$ is $\mathbf{W}_{i, j}$ with its entries reversed in both dimensions, i.e., $\mathbf{W}_{i, j}$ rotated by $180$ degrees. The \texttt{ReLU} and \texttt{MaxPool} layers in the GTA step directly adopt the masks stored during the forward stage. \item \textbf{Gradient To Weights (GTW)}: it calculates the weight gradients ${\mathrm{d}\mathbf{W}}$ (the derivatives of the loss function with respect to the layer weights), which are used to update the weights with the Stochastic Gradient Descent (SGD) method.
Weight gradients are calculated by \[ {\mathrm{d}\mathbf{W}}_{i, j} = {\mathrm{d}\mathbf{O}}_i \ast \mathbf{I}_j, \quad i = 0, \cdots, F, ~ j = 0, \cdots, C, \] where ${\mathrm{d}\mathbf{W}}_{i, j}$ is the weight gradient of the $j$-th channel in the $i$-th filter. \end{itemize} The \textbf{Weight Update} stage updates the weights of each layer with the calculated weight gradients using SGD. For modern CNNs, batch training is a popular way to update the weights by sending a batch of inputs (e.g., 32 images) through the network. The weight gradients are computed by averaging over the batch, and the weights are finally updated according to a pre-set learning rate $\alpha$. Generally, the weight update stage is not a performance bottleneck for CNN training. Thus, \textbf{only the Forward, GTA and GTW procedures are considered for acceleration}. \section{Algorithm} \label{section-3} \subsection{Stochastic Pruning} \begin{figure}[ht] \centering \includegraphics[width=0.6\columnwidth]{2-1-prune} \caption{Stochastic gradients pruning algorithm. $\tau$ is the threshold that determines whether a value should be pruned. $p$ is the probability of setting a value to $0$ or $\pm\tau$.} \label{fig:prune} \end{figure} We found experimentally that many gradients have very small absolute values. Intuitively, a single small gradient has little impact on the weight update, so it can be pruned (set to zero) directly. However, if many values are pruned this way, the distribution of the gradients changes remarkably, which causes accuracy loss. We therefore adopt the stochastic pruning algorithm proposed by \cite{ye2019accelerating} to solve this problem. This algorithm treats the gradients to be pruned (denoted $g$) as an $n$-dimensional vector and prunes the components whose absolute values are smaller than the threshold ${\tau}$. \Cref{fig:prune} shows the stochastic pruning method.
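The stochastic pruning rule described above can be sketched in a few lines (a NumPy sketch of the rule, not the authors' implementation; the vectorized form is our own):

```python
import numpy as np

def stochastic_prune(g, tau, rng=None):
    """Stochastic pruning of a gradient vector g with threshold tau.

    Components with |g_i| >= tau are kept unchanged. Components with
    |g_i| < tau become sign(g_i) * tau with probability |g_i| / tau and 0
    otherwise, so the expectation of every component is preserved.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(g, dtype=float)
    r = rng.uniform(0.0, 1.0, size=g.shape)
    small = np.abs(g) < tau
    keep_at_tau = small & (np.abs(g) > tau * r)  # fires with probability |g_i| / tau
    out = np.where(small, 0.0, g)                # large components pass through
    out[keep_at_tau] = np.sign(g[keep_at_tau]) * tau
    return out
```

Because a component below the threshold survives at value $\mathrm{sign}(g_i)\,\tau$ with probability $|g_i|/\tau$, its expectation is exactly $g_i$, which is why the gradient distribution, and hence convergence, is preserved.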
By setting values to zero or $\pm\tau$ stochastically, the expectation of each component remains unchanged, which preserves convergence and accuracy. A detailed analysis can be found in \cite{ye2019accelerating}. \begin{algorithm}[ht] \caption{Overall Pruning Scheme} \label{algo:1} \KwIn{$\left[G_1, G_2, ..., G_N\right]$: $N$ batches of original activation gradients, $N_F$: depth of FIFO, $p$: target sparsity.} \KwOut{sparse activation gradients $\left[\hat G_1, \hat G_2, ..., \hat G_N \right]$ } $F$ $\triangleq$ FIFO with depth $N_F$ \; \For{$i=1; i \le N ; i \leftarrow i+1$} { $g \triangleq G_i$ ; $\hat g \triangleq \hat G_i$ \; $n =$ length of $g$ ; $A = 0$ \; \For{$j=1;j \le n;j \leftarrow j+1$} { $A \leftarrow A + \left|g_j\right|$ \; \eIf{$i > N_F$} { $\hat{\tau}$ = mean$\left(F\right)$ \; \eIf{${|g_j| < \hat \tau}$} { Generate a random number $r \in \left[ {0,1} \right]$ \; \eIf{$|g_j| > \hat{\tau} r$} { ${\hat g_j} = ({g_j} > 0) $ ? ${\hat{\tau}}$ : $({-\hat{\tau}})$ \; } { ${\hat g_j} = 0 $ \; } } { ${\hat g_j} = {g_j} $ \; } } { ${\hat g_j} = {g_j} $ \; } } $\tau = {\Phi^{-1}}\left( {\frac{1 + p}{2}} \right) \frac{1}{n}\sqrt {\frac{\pi}{2}} A$ \; $F.$push $\left( \tau \right)$ \; } \end{algorithm} \subsection{Threshold Determination and Prediction} Clearly, it is infeasible to find the threshold by sorting, due to the huge memory and time consumption. Thus, we propose a hardware-friendly threshold selection scheme with less overhead. This scheme contains two steps, \textbf{Determination} and \textbf{Prediction}, where the \textbf{Determination} step follows \cite{ye2019accelerating}.
\begin{figure}[t] \centering \begin{minipage}[ht]{8pc} \centering \includegraphics{fig-struct-type} \caption{Pruning positions of two typical structures.} \label{fig-struct-type} \end{minipage} \hfill \begin{minipage}[ht]{12pc} \centering \includegraphics{fig-predict} \caption{The threshold prediction scheme for each \texttt{CONV} layer using FIFO.} \label{fig-predict} \end{minipage} \vspace{-1pc} \end{figure} \textbf{Threshold Determination.} Modern CNN models have two typical structures, as shown in \cref{fig-struct-type}. For the \texttt{CONV-ReLU} structure, where a \texttt{CONV} layer is followed by a \texttt{ReLU} layer, the output activation gradients $\mathrm{d}\mathbf{O}$ are sparse but follow an irregular distribution. On the other hand, the input activation gradients $\mathrm{d}\mathbf{I}$, which are propagated to the previous layer, are full of non-zero values. Statistics show that the distribution of $\mathrm{d}\mathbf{I}$ is symmetric about $0$ and that its density decreases as the absolute value grows. For the \texttt{CONV-BN-ReLU} structure, $\mathrm{d}\mathbf{O}$ follows the same distribution as $\mathrm{d}\mathbf{I}$. For simplicity, we assume that all these gradients follow a normal distribution with mean $0$ and variance ${\sigma^2}$, following \cite{ye2019accelerating}. In the first case, $\mathrm{d}\mathbf{O}$ can inherit the sparsity of $\mathrm{d}\mathbf{I}$ because the \texttt{ReLU} layer does not reduce sparsity; thus $\mathrm{d}\mathbf{I}$ is treated as the pruning target $g$ in the \texttt{CONV-ReLU} structure. In the \texttt{CONV-BN-ReLU} structure, $\mathrm{d}\mathbf{O}$ is taken as the pruning target $g$. In this way, the distribution of $g$ in both situations can be unified to a normal distribution.
Suppose the length of $g$ is $n$; the standard deviation $\sigma$ is estimated in an unbiased way \cite{ye2019accelerating} using \[ \hat \sigma = \frac{1}{n}\sqrt {\frac{\pi}{2}} \sum\limits_{i = 1}^n {\left| {{g_i}} \right|} \ , \quad {g_i} \in g ,\] since the expected absolute value of a $\mathcal{N}(0, \sigma^2)$ variable is $\sigma\sqrt{2/\pi}$. Then the threshold ${\tau}$ is computed from the inverse cumulative distribution function ${\Phi^{-1}}$ of the standard normal distribution, the target pruning rate $p$, and ${\hat \sigma}$: \[ \tau = {\Phi^{-1}}\left( {\frac{1 + p}{2}} \right)\hat \sigma , \] so that a fraction $p$ of the gradient magnitudes falls below $\tau$. \textbf{Threshold Prediction.} The stochastic pruning scheme above needs to access all gradients twice: once to compute $A$ and once to prune. Worse, the gradients must be stored in memory temporarily before being pruned, which adds memory-access overhead. Ideally, gradients should be pruned before they are sent back to memory. To accomplish this, we improve the algorithm by predicting the threshold before it is calculated. Denoting the number of batches as $N$, the prediction method keeps a FIFO of length $N_F$ for each \texttt{CONV} layer, where $N_F$ is a hyper-parameter satisfying $N_F \ll N$. \Cref{fig-predict} shows the pruning algorithm with threshold prediction for each \texttt{CONV} layer. The activation gradients of each batch are pruned under the predicted threshold ${\tau'}$ as soon as they are calculated, where ${\tau'}$ is the average of the thresholds stored in the FIFO. At the end of each batch, the determined threshold of that batch is calculated and pushed into the FIFO. Gradients are not pruned before the FIFO is filled up. The complete gradient pruning algorithm is shown in Algorithm~\ref{algo:1}. The arithmetic complexity of our algorithm is $\mathcal{O}\left( n \right)$, while the complexity of sorting is at least $\mathcal{O}\left( n \log n \right)$.
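The determination and prediction steps can be sketched with only the Python standard library (a schematic sketch under the zero-mean normal assumption; the name `ThresholdPredictor` is ours, not the authors'):

```python
from collections import deque
from math import pi, sqrt
from statistics import NormalDist

def determine_threshold(g, p):
    """Batch threshold: sigma is estimated from the mean absolute value
    (for X ~ N(0, sigma^2), E|X| = sigma * sqrt(2/pi)), and tau is chosen
    so that a fraction p of the gradient magnitudes falls below it."""
    n = len(g)
    sigma_hat = sqrt(pi / 2.0) * sum(abs(x) for x in g) / n
    # Phi^{-1}((1 + p) / 2) is the magnitude of Phi^{-1}((1 - p) / 2).
    return NormalDist().inv_cdf((1.0 + p) / 2.0) * sigma_hat

class ThresholdPredictor:
    """Per-layer FIFO of the last N_F determined thresholds; the predicted
    threshold tau' used while a batch streams through is their average."""

    def __init__(self, depth):
        self.fifo = deque(maxlen=depth)

    def predict(self):
        if len(self.fifo) < self.fifo.maxlen:
            return None  # FIFO not filled up yet: do not prune
        return sum(self.fifo) / len(self.fifo)

    def update(self, g, p):
        self.fifo.append(determine_threshold(g, p))
```

With the prediction, each batch of gradients is touched only once: it is pruned on the fly under $\tau'$ while $A$ is accumulated for the threshold pushed at the end of the batch.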
Besides, all gradients are accessed only once in our algorithm, so almost no extra storage is required, which saves time and energy. \section{Dataflow} \label{section-4} \begin{figure*}[ht] \centering \begin{subfigure}{11.82pc} \includegraphics{fig-decompose-inference} \caption{The Forward step.} \label{fig-decomposition-inference} \end{subfigure} \hfill \begin{subfigure}{18.01pc} \includegraphics{fig-decompose-gta} \caption{The GTA step.} \label{fig-decomposition-GTA} \end{subfigure} \hfill \begin{subfigure}{11.68pc} \includegraphics{fig-decompose-gtw} \caption{The GTW step.} \label{fig-decomposition-GTW} \end{subfigure} \caption{Demonstration of our dataflow. Operations in \texttt{CONV} layers are disassembled into channel level and row level. Row-level operations are picked as the basic operations of the dataflow.} \label{fig-decomposition} \vspace{-1pc} \end{figure*} To gain more benefit from the above algorithm, it is essential to design an accelerator that can utilize both activation sparsity and gradient sparsity. Prior works have shown that the dataflow usually affects an architecture's performance significantly \cite{chen2016eyeriss}. Thus, we first introduce the dataflow used in the accelerator. We propose a sparse training dataflow that divides all computation into 1-D convolutions. This dataflow supports all kinds of sparsity in training and provides opportunities for exploiting all types of data reuse. This section introduces the sparsity in training and the dataflow in detail. \subsection{Data Sparsity in Training} \label{subsection:sparsity} The sparsity of the six data types involved in training is summarized in \cref{tab-data-sparsity}. The input activations $\mathbf{I}$ of each \texttt{CONV} layer are usually sparse because the preceding \texttt{ReLU} layer sets negative activation values to zero. The weights $\mathbf{W}$ are always dense in all steps of training.
The output activation gradients $\mathrm{d}\mathbf{O}$ are usually sparse, but for networks with \texttt{BN} layers, $\mathrm{d}\mathbf{O}$ becomes dense after passing through the \texttt{BN} layers. This issue is resolved by our gradients pruning algorithm, so we can regard the output activation gradients of all \texttt{CONV} layers as sparse. There is an additional optimization opportunity in the GTA step. The gradients $\mathrm{d}\mathbf{I}$ are usually sent to a \texttt{ReLU} layer after being generated by a \texttt{CONV} layer, which forces certain values to zero. We can therefore predict the positions of these zeros from the mask generated in the Forward step and skip the corresponding calculations. Besides, both operands of the GTW step (the input activations $\mathbf{I}$ and the output activation gradients $\mathrm{d}\mathbf{O}$) are sparse, which can significantly reduce the computation cost of obtaining the weight gradients $\mathrm{d}\mathbf{W}$. \subsection{Sparse Training Dataflow} Aiming to utilize all the sparsity in the training process, we divide one 2-D convolution into a series of 1-D convolutions and treat these 1-D convolutions as the basic operations for scheduling and execution. In this subsection, we show in detail how the three basic steps of CNN training are disassembled into 1-D convolutions. \textbf{Forward step.} As shown in \cref{fig-decomposition-inference}, in a 2-D convolution a small kernel moves over the input activations and performs multiplications and accumulations at each position. More precisely, one output row is the sum of $K$ 1-D convolution results, where $K$ is the kernel size. For a 1-D convolution in the Forward step, one operand is a row of the kernel, which is a short dense vector; the other is a row of the input activations, which is a long sparse vector. This basic operation is denoted Sparse Row Convolution (SRC).
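Functionally, an SRC operation is a 1-D sliding dot product in which only non-zero activations trigger multiply-accumulates. A behavioral Python sketch (our own illustration of the arithmetic, not the hardware dataflow; the zero test stands in for the compressed offset encoding):

```python
def src_1d(kernel_row, activation_row):
    """Sparse Row Convolution (SRC): slide a short dense kernel row over a
    long sparse activation row, doing work only for non-zero activations.
    Returns the 'valid' output row of length len(activation_row) - K + 1."""
    K = len(kernel_row)
    out_len = len(activation_row) - K + 1
    out = [0.0] * out_len
    for i, v in enumerate(activation_row):
        if v == 0.0:
            continue  # zero activations are skipped entirely
        for k in range(K):  # each non-zero feeds at most K output positions
            j = i - k
            if 0 <= j < out_len:
                out[j] += v * kernel_row[k]
    return out
```

Each non-zero activation contributes to at most $K$ outputs, so the work scales with the number of non-zeros times $K$ rather than with the row length.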
\begin{table}[h] \centering \caption{Sparsity summary of the data involved in training.} \label{tab-data-sparsity} \begin{tabular}{|c|c|c|} \hline Data Type & Symbol & Sparsity \\ \hline Weights & $\mathbf{W}$ & dense \\ \hline Weight Gradients & $\mathrm{d}\mathbf{W}$ & dense \\ \hline Input Activations & $\mathbf{I}$ & sparse \\ \hline Gradients to Input Activations & $\mathrm{d}\mathbf{I}$ & dense \\ \hline Output Activations & $\mathbf{O}$ & dense \\ \hline Gradients to Output Activations & $\mathrm{d}\mathbf{O}$ & sparse \\ \hline \end{tabular} \end{table} \textbf{GTA step.} \Cref{fig-decomposition-GTA} shows the decomposition of the GTA step. Similar to the Forward step, the GTA step can also be regarded as a summation of multiple 1-D convolutions. However, as mentioned in \cref{subsection:sparsity}, certain gradients are forced to zero after being sent to \texttt{ReLU} layers. The calculation of these masked values is therefore entirely unnecessary and can be skipped safely to save computation. Considering this optimization, we define a different basic operation named Masked Sparse Row Convolution (MSRC). \textbf{GTW step.} The weight gradient computation of the GTW step is shown in \cref{fig-decomposition-GTW}. The GTW step differs from the Forward and GTA steps in two ways. First, the operands of the extracted 1-D convolutions are two long sparse vectors. Second, the range of results differs: a traditional convolution slides one vector from the start of the other to its end, calculating multiplications and accumulations at each sliding position, whereas in the GTW step only the results at a few positions are needed, so it is unnecessary to calculate all the convolution results. The resulting vector is short (of the kernel size $K$) and can be kept in a small scratchpad during the whole convolution.
Thus, this 1-D convolution is named Output Store Row Convolution (OSRC) and is the basic operation of the GTW step. Bias gradients of \texttt{CONV} layers are also required in the GTW step. The bias gradient of each channel is simply the summation of the output activation gradients of the corresponding channel, so it can be computed by accumulating gradients during the GTA step. \section{Architecture} \label{section-5} \begin{figure}[t] \centering \hfill \begin{subfigure}{11pc} \centering \includegraphics{fig-architecture-overview} \caption{Overview architecture} \label{fig-architecture-overview} \end{subfigure} \hfill \begin{subfigure}{6.8pc} \centering \includegraphics{fig-architecture-ppu} \caption{PPU module} \label{fig-architecture-ppu} \end{subfigure} \hfill \par \begin{subfigure}{14.2pc} \centering \includegraphics{fig-architecture-pe} \caption{PE module} \label{fig-architecture-pe} \end{subfigure} \caption{The architecture design of \textit{SparseTrain}.} \vspace{-1pc} \end{figure} \begin{table*}[ht] \centering \begin{threeparttable} \caption{Evaluation results for the gradients pruning algorithm, where \texttt{acc}\% means the training accuracy and $\rho_{nnz}$ means the density of non-zeros.} \label{tab:spar-acc} \begin{tabular}{cc|cc|cc|cc|cc|cc} \specialrule{0.8pt}{0pt}{0pt} \multirow{2}{*}{Model}& \multirow{2}{*}{Dataset}& \multicolumn{2}{c|}{Baseline}& \multicolumn{2}{c|}{$p=70\%$}&\multicolumn{2}{c|}{$p=80\%$}&\multicolumn{2}{c|}{$p=90\%$}&\multicolumn{2}{c}{$p=99\%$} \\ \cline{3-12} & & \texttt{acc}\% & $\rho_{nnz}$ & \texttt{acc}\% & $\rho_{nnz}$ & \texttt{acc}\% & $\rho_{nnz}$ & \texttt{acc}\% & $\rho_{nnz}$ & \texttt{acc}\% & $\rho_{nnz}$ \\ \hline AlexNet & CIFAR-10&90.50&0.09&90.34&0.01&\textbf{90.55}&0.01&90.31&0.01&89.66&0.01 \\ ResNet-18 & CIFAR-10&95.04&1&\textbf{95.23}&0.36&95.04&0.35&94.91&0.34&94.86&0.31 \\ ResNet-34 & CIFAR-10&94.90&1&95.13&0.34&95.09&0.32&\textbf{95.16}&0.31&95.02&0.28 \\ 
ResNet-152 & CIFAR-10&\textbf{95.70}&1&95.13&0.18&95.58&0.18&95.45&0.16&93.84&0.08 \\ AlexNet & CIFAR-100&67.61&0.10&67.49&0.03&\textbf{68.13}&0.03&67.99&0.03&67.93&0.02 \\ ResNet-18 & CIFAR-100&76.47&1&76.89&0.40&\textbf{77.16}&0.39&76.44&0.37&76.66&0.34 \\ ResNet-34 & CIFAR-100&77.51&1&77.72&0.36&\textbf{78.04}&0.35&77.84&0.33&77.40&0.31 \\ ResNet-152 & CIFAR-100&79.25&1&\textbf{80.51}&0.22&79.42&0.19&79.76&0.18&76.40&0.10 \\ AlexNet & ImageNet&56.38&0.07&\textbf{57.10}&0.05&56.84&0.04&55.38&0.04&39.58&0.02 \\ ResNet-18 & ImageNet&68.73&1&\textbf{69.02}&0.41&68.85&0.40&68.66&0.38&68.74&0.36 \\ ResNet-34 & ImageNet&\textbf{72.93}&1&72.92&0.39&72.86&0.38&72.74&0.37&72.42&0.34 \\ \specialrule{0.8pt}{0pt}{0pt} \end{tabular} \end{threeparttable} \vspace{-0.6pc} \end{table*} Aiming to evaluate the dataflow sparsity in \textit{SparseTrain}, we design an architecture whose overview is shown in \cref{fig-architecture-overview}. The architecture consists of a Buffer, a Controller, and PE groups. Each PE group contains $3$ PEs and a Post Processing Unit (PPU). The PE is designed to calculate 1-D convolutions, and the PPU handles point-wise operations. \textbf{PE Architecture}: \label{subsection:pe} As demonstrated in \cref{section-4}, there are three kinds of basic operations in our dataflow: SRC, MSRC, and OSRC. The PE module is designed to support all of them; its architecture is shown in \cref{fig-architecture-pe}. Our PE performs a complete 1-D convolution instead of just one multiplication. Each time an input value is loaded from Port-1, the PE multiplies it by the $K$ values in Reg-1, producing $K$ products that are stored in Reg-2. When performing SRC operations, a PE first loads the weight vector from Port-2 and saves it in Reg-1. Then activation values are loaded from Port-1 and multiplied by the weights in Reg-1, and the results are accumulated into Reg-2. This process repeats until the end of the activation vector.
For MSRC operations, the offset vector of the input activations ($\mathbf{I}$) is loaded from Port-3 and saved to Reg-2. The values in the offset vector indicate which results should be calculated; in other words, results not encoded in the offsets of $\mathbf{I}$ can be predicted as zero and skipped safely. During calculation, the PE looks ahead for the next non-skipped operand from Port-1, so that it can start computation as soon as possible without extra cycles spent waiting for input data. For OSRC operations, the PE stores the weight gradients ($\mathrm{d}\mathbf{W}$) in Reg-2 during computation. The $\mathbf{I}$ and $\mathrm{d}\mathbf{O}$ vectors are loaded from Port-1 and Port-2, respectively. During computation, $K$ values of $\mathrm{d}\mathbf{O}$ are cached in Reg-1 and multiplied with each value of $\mathbf{I}$, and the results are accumulated into Reg-2. \textbf{PPU Architecture}: In general, the PPU is responsible for all point-wise operations. \Cref{fig-architecture-ppu} shows the internal architecture of the PPU. In the PPU, the resulting vector is converted into a compressed format and sent back to the buffer. The PPU also performs the \texttt{ReLU} operation before format conversion if needed. During the GTA step, all gradients passing through the PPU modules and their absolute values are accumulated into two registers, respectively. The accumulated results are used for generating the bias gradients and for determining the pruning threshold. \section{Evaluation} \label{section-6} In this section, several experiments demonstrate that the proposed approach reduces training complexity significantly with negligible model accuracy loss. In the following experiments, AlexNet \cite{krizhevsky2012imagenet} and the ResNet \cite{he2016deep} series are evaluated on three datasets: CIFAR-10, CIFAR-100, and ImageNet. All models are trained for $300$ epochs on CIFAR-\{10, 100\} and $180$ epochs on ImageNet, due to our limited computing resources.
The PyTorch framework is adopted for algorithm verification. To verify the performance of our dataflow and architecture, a custom cycle-accurate C{++} simulator is implemented based on a SystemC model of the proposed architecture, and a simple compiler written in Python converts PyTorch CNN models into our internal instructions for driving the simulator. We also implement the PE, PPU, and controller in RTL, which is synthesized with Synopsys Design Compiler using GlobalFoundries $14\,nm$ FinFET technology for area estimation and then simulated with Synopsys PrimeTime for power estimation. The area and energy consumption of the buffer (SRAM) are estimated by PCACTI \cite{shafaei2014fincacti}. To demonstrate the advantages of the proposed sparse training dataflow, we compare it with the architecture of Eyeriss \cite{chen2016eyeriss}. Since Eyeriss is designed for CNN inference rather than training, we modify its architecture to support the dense training process. We adopt $168$ PEs in both the proposed architecture and the baseline architecture. For convenience, $386$KB of SRAM is used as the global buffer for intermediate data, which is sufficient for storing the data used in each iteration. A larger buffer would improve data reuse and energy efficiency, but that is beyond the scope of this work. \subsection{Sparsity and Accuracy} As shown in \cref{tab:spar-acc}, there is no accuracy loss in most situations, and there is even a $1\%$ accuracy improvement for ResNet-152 on CIFAR-100. The only significant accuracy loss occurs for AlexNet on ImageNet under a very aggressive pruning policy such as $p = 99\%$. This shows that the accuracy loss caused by our layer-wise gradients pruning algorithm is almost negligible. The gradient densities in \cref{tab:spar-acc} show that our method reduces the gradient density by $3\times$ to $10\times$.
In addition, deeper networks obtain a relatively lower gradient density with our sparsification, which means that our method works better for larger networks. \begin{figure}[t] \centering \includegraphics{fig-evaluate-performance} \caption{Average training latency per sample for different models and datasets. The speedup is the training latency of \textit{SparseTrain} compared with the baseline.} \label{fig-evaluate-time} \vspace{-1pc} \end{figure} \begin{figure}[t] \centering \includegraphics{fig-evaluate-energy} \caption{Average energy consumption per sample for different models and datasets. ``Reg'' represents register, and ``Comb'' represents combinational logic. The energy efficiency is the improvement of \textit{SparseTrain} compared with the baseline.} \label{fig-evaluate-energy} \vspace{-1pc} \end{figure} \subsection{Convergence} The training loss of AlexNet/ResNet-18 on CIFAR-10 and ImageNet is demonstrated in \cite{ye2019accelerating}. In general, ResNet-18 is very robust to gradients pruning. For AlexNet, gradients pruning remains robust on CIFAR-10 but slightly degrades the convergence speed on ImageNet under an aggressive pruning policy. These results show that, with a reasonable hyper-parameter $p$, our pruning algorithm has essentially the same convergence behavior as the original training scheme. \subsection{Latency and Energy} \Cref{fig-evaluate-time} shows the training latency reduction brought by sparsity exploitation. The proposed \textit{SparseTrain} scheme achieves up to a $4.5\times$ speedup for AlexNet on CIFAR-10 and, on average, about a $2.7\times$ speedup over the baseline. \Cref{fig-evaluate-energy} shows the average energy consumption per data sample. Overall, \textit{SparseTrain} improves energy efficiency by $1.5\times$ to $2.8\times$ (on average $2.2\times$) over the baseline. For the baseline, $62\% \sim 71\%$ of the energy consumption comes from SRAM accesses.
\textit{SparseTrain} reduces global buffer accesses by utilizing the sparse dataflow, cutting their energy cost by $30\% \sim 59\%$. The energy consumption of combinational logic in \textit{SparseTrain} is reduced by $53\% \sim 88\%$, which is more significant than for SRAM and register accesses and also contributes much to the total energy saving. \section{Conclusion} \label{section-7} As models and datasets grow larger, CNN training becomes more and more expensive. In response, we propose a novel CNN training scheme called \textit{SparseTrain} that accelerates training by fully exploiting sparsity in the training process. By performing stochastic gradients pruning on each layer, \textit{SparseTrain} achieves high sparsity with negligible accuracy loss. Additionally, with a sparse training dataflow based on 1-D convolutions and a sparsity-aware architecture, \textit{SparseTrain} improves CNN training speed and efficiency significantly by exploiting different types of sparsity. With the threshold prediction method, gradients pruning can be performed on our architecture with almost no overhead. Experiments show that the gradients pruning algorithm achieves a $3\times$ to $10\times$ sparsity improvement with negligible accuracy loss. Compared with the baseline, \textit{SparseTrain} achieves about a $2.7\times$ speedup and a $2.2\times$ energy efficiency improvement on average.
\section{Introduction} \label{s.intro} Machine Learning (ML) models have been extensively investigated and used for regression and classification problems~\cite{Krizhevsky:2012,Kim_2016_CVPR,Levine:2016}. More recently, Convolutional Neural Networks (CNNs) have shown great success in many applications, such as image/text classification~\cite{Lecun:98} and speech recognition~\cite{Hinton:2012}, since they require considerably less effort for parameter optimization than the common feature-extraction pipeline\,\cite{Lecun:98}. However, CNNs may require a large number of labeled samples (annotated objects) for training\,\cite{Yosinski:2014}. While small labeled training sets can impair the ability of an ML model to correctly classify new samples (a problem known as \emph{over-fitting}\,\cite{Srivastava:14}), large unlabeled sets make visual inspection and annotation very expensive for the expert. Human costs become even more prohibitive in domains that require specialized knowledge about the objects, such as Medicine and Biology. Solutions for small labeled sets include data augmentation\,\cite{Mash:2016} and regularization methods\,\cite{Nowlan:1992}. For large unlabeled sets, semi-supervised classifiers have been used to propagate labels from a small supervised set to the many unsupervised samples by exploring the sample distribution in some feature space\,\cite{Kingma:2014, forestier:2016, Papernot:2017}. Yet, none of these approaches has combined the cognitive ability of humans in data abstraction with the ability of machines in data processing to increase the number of labeled objects. Recent studies have investigated the use of feature-space projections and visual analytics to understand and engineer ML models\,\cite{RauberInfVis2017, RauberVGTC2016,Peixinho:2018,Bernard:2018,BenatoSibgrapi:2018}.
Such work addresses both aforementioned labeling cases with approaches for interactive data augmentation~\cite{Peixinho:2018} and interactive data annotation~\cite{Bernard:2018,BenatoSibgrapi:2018} guided by feature-space projections, respectively. Bernard et al.\,\cite{Bernard:2018} compared interactive data annotation in a feature-space projection with an active learning technique, in which experts supervise and annotate samples selected by a classifier, and the classifier is retrained to annotate and select more samples in the original feature space. They discovered that interactive data annotation in the feature-space projection is superior to active learning. Benato et al.~\cite{BenatoSibgrapi:2018} showed that when the user propagates labels to a large unsupervised sample set, guided by the true-label knowledge of a few samples and by the visual information of the sample distribution in a feature-space projection, the resulting labeled training set is more correct than the one created by semi-supervised classifiers in the original feature space. Hence, classifiers trained on such interactively labeled sets can better predict the labels of unseen test samples than those trained on automatically labeled sets. Yet, Bernard et al.~\cite{Bernard:2018} and Benato et al.~\cite{BenatoSibgrapi:2018} did not \emph{combine} automatic and interactive approaches for label propagation, i.e., they were not concerned with the user \emph{effort} in visual data inspection and annotation. In this work, we fill the above gap by proposing a semi-automatic approach that reduces user labeling effort while achieving better classification accuracy on unseen test sets. For this, we exploit the concept of \emph{sample informativeness} from Active Learning (AL).
Such approaches select samples for expert supervision based on their informativeness --- i.e., their potential to improve the design of a classifier once their true label is known\,\cite{Settles:2009}, measured by the \emph{confidence} of a classifier about the label assigned to a sample\,\cite{Patra:2012,Miranda:2009,Spina:2012,Tavares:2012}. In our case, we propagate labels automatically to samples with high confidence values and enable the expert to focus on the low-confidence samples for manual label propagation. For this, the user visually analyzes the sample distribution in a 2D scatterplot created by the \emph{t-Distributed Stochastic Neighbor Embedding} (t-SNE) technique\,\cite{MaatenJMLR:14}, constructed similarly to\,\cite{Bernard:2018,BenatoSibgrapi:2018}, together with the true-label knowledge of only a few samples per class. Although our method could improve the classifier further through multiple AL iterations with additional supervised samples, we solve data annotation in a single user interaction for label propagation, with no sample supervision. For automatic label propagation, we evaluate two semi-supervised classifiers, trained in both the latent and the projection space, and choose the best one for our goal. We show that our semi-automatic label propagation (SALP) method achieves better end-to-end classification results than both fully automatic and fully manual label propagation. This work is organized as follows. Section~\ref{s.method} presents our semi-supervised data annotation approach. Section~\ref{s.experiments} presents the experimental setup, compared baselines, used datasets, and experimental results. Section~\ref{s.discussion} discusses our results. Section~\ref{s.conclusion} concludes the paper.
\section{Semi-Automatic Projection-Based Data Annotation} \label{s.method} \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{figs/diagrama_paper_PDF.pdf} \caption{Semi-automatic data annotation pipeline. We extract features by unsupervised learning from the training set and next use these to project this set to a 2D scatterplot. We next enrich the training set by propagating labels from supervised to unsupervised samples by automatic methods (in both latent and projection spaces) and by manual user-controlled methods. We finally compare the quality of the classifiers trained on such training sets to decide on the best label propagation method. Red indicates additions to earlier related work\,\cite{BenatoSibgrapi:2018}. } \label{f.pipeline} \end{figure} Given a training set with a low number of supervised samples and a considerably larger number of unsupervised samples, our semi-automatic data annotation approach has four steps: \begin{itemize} \item \emph{unsupervised feature learning:} We start by extracting features from the input dataset. To minimize the number of supervised samples needed, we adopt an unsupervised feature-learning procedure (Sec.~\ref{s.unsupervised}); \item \emph{feature space projection:} We create a 2D projection that captures well the sample distribution in the latent feature space for further visual analysis (Sec.~\ref{s.featspace}); \item \emph{semi-supervised label estimation:} We propagate labels automatically to high-confidence unlabeled samples, thereby increasing the training-set size with little effort and high quality (Sec.~\ref{s.semi_sup}); \item \emph{visual analysis:} The expert adds labeled samples to the above ones by interactively propagating labels to the less-confident samples using the 2D projection (Sec.~\ref{s.ilp}).
\end{itemize} \subsection{Unsupervised Feature Learning} \label{s.unsupervised} We use an Autoencoder Neural Network (AuNN)\,\cite{Masci2011, Vincent} for unsupervised feature learning. AuNNs consist of two parts, an encoder and a decoder. The encoder maps the input samples to points in a reduced (latent) feature space; the decoder reconstructs these samples. The two parts are coupled and trained together by backpropagation. As cost function, we use the mean squared error between the original and reconstructed samples. For small errors, the obtained latent feature space is a reasonable representation of the original sample distribution. Hence, we train the AuNN with all labeled and unlabeled samples by ignoring labels. After evaluating several models, we decided on a Stacked Convolutional AuNN\,\cite{Masci2011} --- a neural network whose convolutional layers usually yield relevant latent features. For our experiments, we use image datasets. However, this latent feature learning can be used for any other kind of data that can be suitably mapped to the input layer of the encoder. Section~\ref{s.experiments} presents implementation details. \subsection{Feature Space Projection} \label{s.featspace} The dimension of the latent feature space can still be considered very high (usually hundreds to thousands of features) and is thus unfeasible for visual inspection of the sample distribution. We therefore wish to reduce the latent space to two dimensions while preserving as much as possible of the relevant structure of the data. The most suitable techniques for this task preserve local distances between samples, a criterion satisfied by the t-SNE algorithm\,\cite{Hinton:2006,Maaten:2008,MaatenJMLR:14}; previous works indicate that 2D projections created by t-SNE achieve this goal well\,\cite{RauberInfVis2017,Bernard:2018,BenatoSibgrapi:2018}, so we follow them.
It is a non-linear projection that depends on the choice of two parameters: perplexity and number of iterations. Our choice for these parameters is discussed in Section~\ref{s.experiments}. \subsection{Semi-Supervised Label Estimation} \label{s.semi_sup} For semi-supervised label estimation, we consider two techniques that explore the sample distribution in a given feature space to propagate labels, with confidence values, from supervised samples to unsupervised ones: Laplacian Support Vector Machines (LapSVM)\,\cite{Sindhwani:2005,Belkin:2006} and Semi-Supervised Classification by Optimum-Path Forest (OPF-Semi)\,\cite{Amorim:2016}. We evaluate both methods on both latent and projection spaces. Given that the performance of OPF-Semi in label propagation is much higher than that of LapSVM (see Sec.~\ref{s.experiments}), we select OPF-Semi to output confidence values, used next for our manual label propagation (Sec.~\ref{s.ilp}). Additionally, we found that OPF-Semi in the projection space outperforms its counterpart in the latent feature space (see Sec.~\ref{s.experiments}). Hence, we use the 2D version of OPF-Semi for semi-automatic data annotation. OPF-Semi maps (un)labeled samples to nodes of a graph and computes an optimum-path forest rooted at the labeled samples. In this forest, each node $s$ is conquered (labeled) by the root $R$ that offers a path of minimum cost $k(R,s)$ to $s$. We use these costs to compute label confidence values $c(s)$ as described in\,\cite{Miranda:2009,Spina:2012,Tavares:2012}. In brief: Let $A$ and $B$ be two roots for sample $s$ such that $A$ is the one that has conquered $s$ ($k(A,s)$ is minimal) and $B$, having a different label than $A$, offers the second-best cost $k(B,s)$ to $s$. We assign the confidence $c(s) = 1-k(A,s)/(k(A,s)+k(B,s))$, $c(s) \in [0,1]$, to the label of $s$ given by $A$. That is, if the second-best cost $k(B,s)$ is much larger than the minimal cost $k(A,s)$, the label given by $A$ has a high confidence.
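The confidence above reduces to a one-line function of the two path costs; a minimal sketch (the function name is ours, and the costs $k(A,s)$ and $k(B,s)$ are assumed to come from an OPF-Semi implementation):

```python
def label_confidence(k_best, k_second):
    """Confidence c(s) = 1 - k(A,s) / (k(A,s) + k(B,s)).

    k_best   -- k(A,s): minimum path cost, offered by the conquering root A
    k_second -- k(B,s): second-best cost, offered by a root B whose label
                differs from A's
    """
    return 1.0 - k_best / (k_best + k_second)

# When k(B,s) is much larger than k(A,s), the confidence approaches 1;
# when both costs are equal, the confidence is 0.5 (maximal ambiguity).
```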
We use the confidence as follows: All labels assigned by OPF-Semi having a confidence above a threshold $\tau$ are used as such in the training process. The threshold $\tau$ is chosen by the user based on the visual analysis of the feature projection, with unsupervised samples colored by their confidence values from red (low $c$) to green (high $c$) (Fig.~\ref{f.colormap}). Changing $\tau$ interactively by a slider lets the user (a) accept the likely good labels assigned by OPF-Semi to high-confidence samples and (b) focus on the remaining low-confidence samples to assign them labels by manual label propagation, described next in Sec.~\ref{s.ilp}. Users choose the exact threshold $\tau$ by balancing how much they wish to trust OPF-Semi \emph{vs} how many samples they are willing to label manually. \begin{figure}[htb] \centering \includegraphics[width=.45\linewidth]{figs/proj/mnist_3.png}\\ \caption{Feature projection showing unsupervised samples colored from red (low confidence) to green (high confidence).} \label{f.colormap} \end{figure} \subsection{Manual Label Propagation} \label{s.ilp} The added value of user-driven label propagation in a t-SNE projection was demonstrated by the interactive label propagation technique in\,\cite{BenatoSibgrapi:2018}, which we refer to next as ILP for brevity. However, ILP is fundamentally affected by the quality of the latent features extracted by the AuNN (Sec.~\ref{s.unsupervised}) \emph{and} by the quality of the t-SNE projection itself: If both these operations faithfully preserve the similarity of original samples, then the user can likely propagate labels well, by simply selecting points close in the projection to the supervised samples. If either the latent space or the projection creates errors, which they inherently do\,\cite{nonato18}, this will likely create wrong labels. We assist the user in this process as follows.
We color the supervised points in the projection by their labels, and color all low-confidence unsupervised points $s$ having $c(s) < \tau$ in black (Fig.~\ref{f.proj}). The black points are drawn before the colored points, in order to minimize undesired occlusions. When moving the mouse pointer over a projected point, we show its sample image in a tooltip. The user next employs these three sources of information -- proximity of unsupervised (black) points in the 2D projection to supervised (colored) ones, the low confidence value of the unsupervised points, and similarity of unsupervised-to-supervised tooltip images -- to decide which unsupervised samples get which supervised label. Label propagation is then done simply by selecting the desired points in the projection and clicking to assign them a supervised-point label. \begin{figure}[htb] \centering \includegraphics[width=.45\linewidth]{figs/proj/mnist.png}\\ \caption{Semi-automatic label propagation is done from the supervised samples (points colored by class, saturated colors, red border) first automatically to the unsupervised and high-confidence ones (light colors, no border). Remaining low-confidence samples (black) are candidates for manual propagation.} \label{f.proj} \end{figure} \section{Experiments and Results} \label{s.experiments} We next present the experimental setup, baselines, datasets, implementation details, and experimental results used for validating our semi-automatic data annotation method. \subsection{Experimental Setup} \label{s.expsetup} We divide each available dataset $D$ into three subsets for validation: a very small training set $S$ with a few supervised samples per class ($3\% |D|$); a considerably larger training set $U$ with unsupervised samples for label propagation ($67\% |D|$); and a set $T$ with unseen test samples ($30\% |D|$).
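The three-way split above can be sketched in a few lines; a simplified version (plain random sampling over the whole dataset, whereas the actual setup draws the few $S$ samples per class):

```python
import random

def split_dataset(samples, seed=0):
    """Split a dataset D into S (3%, supervised), U (67%, unsupervised,
    target of label propagation) and T (30%, unseen test samples)."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_s = round(0.03 * len(samples))
    n_u = round(0.67 * len(samples))
    S = [samples[i] for i in idx[:n_s]]
    U = [samples[i] for i in idx[n_s:n_s + n_u]]
    T = [samples[i] for i in idx[n_s + n_u:]]
    return S, U, T
```

By construction the three subsets are disjoint and cover $D$, which is what the repeated-split evaluation below relies on.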
Next, based on the user-chosen confidence threshold $\tau$, we split $U$ into high-confidence samples $L_c$, which get their label from OPF-Semi, and low-confidence ones $L_i$, which can be interactively labeled by the user. Note that $L_c \cap L_i = \emptyset$ and $L_c \cup L_i \neq U$, since the user can choose not to label $L_i$ entirely, to minimize manual labeling effort. We randomly split $D$ into $S$, $U$, and $T$ this way three times and repeat the evaluation --- i.e., label propagation from $S$ to $U$ followed by supervised training on $S\cup U$ and testing on $T$ --- for statistical purposes. After labels are propagated from $S$ to $U$, we train a supervised classifier on $S\cup U$ using the latent feature space. For this task, we use the Optimum-Path Forest (OPF)\,\cite{Papa:2012} and Support Vector Machines (SVM)\,\cite{Hearst:1998}. OPF has no hyperparameters to set, so it is simple to use. For SVM, we find optimal values for its hyperparameters $\sigma$ (influence radius), $C$ (regularization), and kernel type by grid search over the ranges $[0.000001, 0.1]$, $[1, 10000]$, and the kernel functions \textit{Gaussian radial basis} and \textit{linear}, respectively, using $3$ splits and stratified random sampling with $70\%$ and $30\%$ of the samples from $S\cup U$ for training and validation, respectively. We test the classifiers on $T$ and measure their effectiveness by Cohen's $\kappa$ coefficient\,\cite{Cohen:1973}. The $\kappa$ coefficient lies within $[-1,1]$, where $\kappa \leq 0$ means no agreement and $\kappa=1$ means complete agreement between two annotators. Additionally, we compute the accuracy of label propagation on $U$ for each approach, i.e., the number of correctly assigned labels divided by the number of unsupervised samples ($|U|$). Therefore, the best approach for label propagation is the one that produces the best supervised classifiers.
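For completeness, Cohen's $\kappa$ is computed from the observed agreement $p_o$ and the chance agreement $p_e$ as $\kappa = (p_o - p_e)/(1 - p_e)$; a self-contained sketch (assuming the two labelings are not both constant and identical, so that $p_e < 1$):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two labelings of the same samples."""
    n = len(y_true)
    # observed agreement: fraction of samples where the labelings coincide
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement of two independent labelers with these class frequencies
    ct, cp = Counter(y_true), Counter(y_pred)
    p_e = sum(ct[c] * cp[c] for c in ct) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)
```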
Since we use $\kappa$ as the effectiveness measure, the best supervised classifier is the one that provides the best $\kappa$ result. \subsection{Baselines} \label{ss.baselines} As described in Section~\ref{s.method}, we propose a semi-automatic label propagation (SALP) approach that uses OPF-Semi in the 2D t-SNE projection space to propagate labels to high-confidence samples and the user to propagate labels to low-confidence samples, respectively. We next compare SALP with the following three baselines: \begin{enumerate} \item \emph{No label propagation (NLP):} SVM and OPF are trained from $S$ only, ignoring set $U$. \item \emph{Automatic label propagation (ALP):} set $U$ is fully labeled by one of the four ALP methods below and SVM and OPF are trained from $S\cup U$. \begin{enumerate} \item LapSVM using the $n$D latent feature space. \item LapSVM using the 2D t-SNE projection space. \item OPF-Semi using the $n$D latent feature space. \item OPF-Semi using the 2D t-SNE projection space. \end{enumerate} \item \emph{Interactive label propagation (ILP):} set $U$ is fully labeled by the user and SVM and OPF are trained from $S\cup U$, as in\,\cite{BenatoSibgrapi:2018}. \end{enumerate} In all above cases, we test SVM and OPF on $T$. \subsection{Datasets} \label{ss.datasets} Our first dataset contains 5000 images ($28\times28$ pixels each) of handwritten digits from 0 to 9, randomly selected from the popular public dataset MNIST\,\cite{Lecun:2010:mnist}. Our next three datasets use images ($200 \times 200$ pixels each) from an automatic processing pipeline that separates microscopy images of human intestinal parasites into three groups: (i) \emph{Helminth larvae} and fecal impurities ($3514$ images); (ii) \emph{Helminth eggs} and fecal impurities ($5112$ images); and (iii) \emph{Protozoan cysts} and fecal impurities ($9568$ images). Fecal impurity is a diverse class that has samples very similar to parasites (see Fig.~\ref{f.parasites}).
We consider these three datasets with and without images of fecal impurities, yielding five datasets for testing our proposal, apart from MNIST. The number of classes in each dataset is as follows: (i) H.Larvae has two categories; (ii) H.Eggs has nine categories (\emph{H.nana}, \emph{H.diminuta}, \emph{Ancilostomideo}, \emph{E.vermicularis}, \emph{A.lumbricoides}, \emph{T.trichiura}, \emph{S.mansoni}, \emph{Taenia}, and impurities); and (iii) P.cysts has seven categories (\emph{E.coli}, \emph{E.histolytica}, \emph{E.nana}, \emph{Giardia}, \emph{I.butschlii}, \emph{B.hominis}, and impurities). These are the most common species of human intestinal parasites in Brazil, which are responsible for public health problems in most tropical countries~\cite{Suzuki:2013}. All three datasets are unbalanced, with considerably more impurity samples. The images of parasites have been annotated by biomedical specialists. Table~\ref{t.split_data} gives the number of images in each set $S$, $U$, and $T$ after the random split described in Sec.~\ref{s.expsetup}. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{figs/parasites.png}\\ \caption{Examples of each species of H.Eggs (left) and similar images of impurities (right).} \label{f.parasites} \end{figure} \input{split_data} \subsection{Implementation Details} \noindent\textbf{Feature extraction:} Figure~\ref{f.arch} shows the AuNN architectures for the MNIST and parasites datasets. We implemented these networks in Keras\,\cite{Chollet:2015:keras} with $6$ convolutional layers of $3\times 3$ filters, $3$ for the encoder and $3$ for the decoder. After each convolutional layer, we use \textit{ReLU} activation and apply max-pooling in the encoder and upsampling in the decoder. We normalize the input images within $[0,1]$, since the output requires sigmoid activation.
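Assuming `same' padding for both convolution and pooling (so each of the three encoder stages halves the spatial size, rounding up), the latent dimension follows directly from the input size and the filter count of the last encoder layer; a small sketch of this arithmetic (function name is ours):

```python
import math

def encoder_latent_dim(side, n_pool_stages, last_filters):
    """Spatial size and flattened latent dimension after n_pool_stages of
    'same'-padded 3x3 convolution, each followed by 2x2 max-pooling."""
    for _ in range(n_pool_stages):
        side = math.ceil(side / 2)  # conv keeps the size; pooling halves it
    return side, side * side * last_filters

# MNIST:     28 -> 14 -> 7 -> 4,  with 8 filters: 4 * 4 * 8   = 128
# parasites: 200 -> 100 -> 50 -> 25, with 8 filters: 25 * 25 * 8 = 5000
```

These values match the latent dimensions of $128$ and $5000$ used in our experiments.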
We choose the number of filters based on the dataset: For MNIST, the 6 convolutional layers use $16$, $8$, $8$, $8$, $8$, and $16$ filters. For the five parasites datasets, we use $32$, $16$, $8$, $8$, $16$, and $32$ filters, respectively. As cost function, we use the mean squared error, as it provides more suitable reconstruction results with fewer training epochs. We use 50 epochs for the easier datasets (MNIST and H.Eggs without impurities) and 100 for the others. For MNIST, we use a latent feature space of $n=128$ dimensions. For the parasites, which have higher-resolution and more complex images, we use $n=5000$ dimensions. \begin{figure}[htb] \centering \includegraphics[width=0.68\linewidth]{figs/aunn_mnist.pdf} \hspace{0.10cm} \includegraphics[width=0.7\linewidth]{figs/aunn_parasitos.pdf}\\ \caption{AuNN architecture for the MNIST dataset (top) and for the parasites datasets (bottom). The yellow layers are the convolutional layers, the red layers are the max-pooling layers, and the blue layers are the up-sampling layers.} \label{f.arch} \end{figure} \noindent\textbf{Projection:} Different choices of t-SNE parameters can lead to different 2D projections\,\cite{wattenberg16}. We found empirically that, for a range of $1000$ to $7000$ samples in $S\cup U$, setting t-SNE's perplexity to $40$ and its maximum iteration count to $1000$ yields good projections for label propagation. \subsection{Experimental Results} We discuss the performance of our pipeline, measured by the performance of the classifiers trained from $S\cup U$ in the latent feature space and tested on $T$, by answering the following questions: \begin{itemize} \item Which space ($n$D latent, 2D projection) is better for ALP? (Sec.~\ref{sec:q1}) \item How to set the confidence threshold $\tau$? (Sec.~\ref{sec:q2}) \item Which approach (manual, semi-automatic, automatic) best propagates labels from $S$ to $U$?
(Sec.~\ref{sec:q3}) \item What is the end-to-end value of SALP? (Sec.~\ref{sec:q4}) \item How do results depend on the projection quality? (Sec.~\ref{sec:q5}) \end{itemize} Note that we use the 2D projection space only for manual label propagation, i.e., not for testing, since we cannot assume that set $T$ is known during training. \subsubsection{Influence of reducing the feature space from $n$D to 2D} \label{sec:q1} Table~\ref{t.results1} presents mean and standard deviation values of Cohen's $\kappa$ for classifiers on set $T$ for each dataset, the sizes of $S$, $U$, and $S\cup U$, and the mean accuracy values in automatic label propagation for LapSVM and OPF-Semi, used in the $n$D feature space and also in the 2D projection space, as well as the option of not propagating labels. We get several insights. First, we see that LapSVM performs sometimes better and sometimes worse in $n$D as compared to 2D, depending on the dataset. In contrast, OPF-Semi consistently shows a positive impact of reducing the feature space, independently of the dataset. This happens even when its label-propagation performance is not the best one. \input{results_1_new} \subsubsection{The choice of the confidence threshold} \label{sec:q2} As stated in Sec.~\ref{s.semi_sup}, users need to choose the threshold $\tau$ to specify which automatically-propagated labels they want to keep and which they wish to `override' manually. Figure~\ref{f.mapmnist} shows, for the six studied datasets, the projections of all samples, of the most-confident samples, and of the remaining low-confidence samples. We see that the threshold $\tau$ varies relatively little (being either $0.5$ or $0.6$) across datasets. This indicates that a good default value to start with is $\tau=0.5$, after which users can tune $\tau$ upwards or downwards depending on the actual distribution of confidences in the projection. Overall, we can see that the more challenging the dataset, the higher the threshold $\tau$. \begin{figure}[htbp!]
\newcommand\cincludegraphics[2][]{\raisebox{-0.4\height}{\includegraphics[#1]{#2}}} \centering \vspace{-1.0cm} \setlength{\tabcolsep}{1.5pt} \begin{tabular}{cc|ccc} & & \footnotesize $S\cup U$ & \footnotesize $L_c$ & \footnotesize $U\setminus L_c$\\ \hline \rotatebox[origin=c]{90}{{\footnotesize MNIST}} & \rotatebox[origin=c]{90}{\footnotesize $\tau=0.5$} & \cincludegraphics[width=.2\linewidth]{figs/proj/mnist_3.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/mnist_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/mnist_4.png}\\ \rotatebox[origin=c]{90}{\footnotesize H.Eggs} & \rotatebox[origin=c]{90}{\shortstack[l]{\footnotesize $\tau=0.6$}} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggssimp_3.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggssimp_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggssimp_4.png}\\ \rotatebox[origin=c]{90}{\footnotesize P.cysts} & \rotatebox[origin=c]{90}{\footnotesize $\tau=0.5$} & \cincludegraphics[width=.2\linewidth]{figs/proj/protosimp_3.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/protosimp_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/protosimp_4.png}\\ \rotatebox[origin=c]{90}{\footnotesize H.Larvae (I)} & \rotatebox[origin=c]{90}{\footnotesize $\tau=0.6$} & \cincludegraphics[width=.2\linewidth]{figs/proj/larvas_3.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/larvas_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/larvas_4.png}\\ \rotatebox[origin=c]{90}{\footnotesize H.Eggs (I)} & \rotatebox[origin=c]{90}{\footnotesize $\tau=0.5$} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggscimp_3.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggscimp_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/eggscimp_4.png}\\ \rotatebox[origin=c]{90}{\footnotesize P.cysts (I)} & \rotatebox[origin=c]{90}{\shortstack[l]{\footnotesize $\tau=0.5$}} & \cincludegraphics[width=.2\linewidth]{figs/proj/protocimp_3.png} & 
\cincludegraphics[width=.2\linewidth]{figs/proj/protocimp_5.png} & \cincludegraphics[width=.2\linewidth]{figs/proj/protocimp_4.png}\\ \end{tabular} \vspace{-0.2cm} \caption{Projections colored by label confidence (red=low confident, green=high confident). Rows are datasets (easiest at top, hardest at bottom). Columns show the entire set of supervised-and-unsupervised samples $S \cup U$, the high-confidence samples $L_c$ labeled by ALP, and the low-confidence samples $U \setminus L_c$ that go to manual labeling.} \label{f.mapmnist} \end{figure} \subsubsection{Best label propagation approach} \label{sec:q3} Table~\ref{t.results1} showed that OPF-Semi 2D is the winner for automatic label propagation (ALP). Hence, the next question is how well this method would compare against interactive label propagation (ILP)\,\cite{BenatoSibgrapi:2018}, which uses manual label propagation to \emph{all} unsupervised labels, and our new semi-automatic label propagation (SALP), which uses manual label propagation to samples with low-confidence unsupervised labels only. Figure~\ref{f.mnist} illustrates the ILP and SALP projections for the studied datasets. A key advantage of SALP over ILP is that it shows only the least confident samples (according to OPF-Semi 2D) to the user, hence reducing the effort needed to understand the picture (and also reducing clutter and overlap in the projection), thus making the interactive labeling task easier. We discuss next several observations relating ILP to SALP in Fig.~\ref{f.mnist}, as well as observations we made during the actual interactive labeling process. For the MNIST dataset, the user propagated labels to $1864$ unsupervised samples on average (over the three considered runs) when using ILP. When using SALP, this number dropped to $1182$ samples. This pattern of less effort for SALP is consistent over all other datasets, as discussed next. \begin{figure}[htbp!] 
\newcommand\cincludegraphics[2][]{\raisebox{-0.3\height}{\includegraphics[#1]{#2}}} \centering \vspace{-1.0cm} \setlength{\tabcolsep}{1.2pt} \begin{adjustbox}{width=\textwidth} \begin{tabular}{cc|cccccccc} & & \tiny \textbf{ILP} & \tiny \textbf{SALP} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\tiny \textbf{Labeled} \\ \tiny \textbf{(OPF-Semi)}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\tiny \textbf{Labeled} \\ \tiny \textbf{(OPF-Semi+user)}\end{tabular}} & \tiny $|U|$ & \tiny $|L_c|$ & \tiny $|U\setminus L_c|$ & \tiny $|L_i|$ \\ \hline \rotatebox{90}{\tiny MNIST} & \rotatebox{90}{\tiny $\tau=0.5$} & \cincludegraphics[width=.18\linewidth]{figs/proj/mnist_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/mnist_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/mnist_6.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/mnist_7.png} & \tiny 3325 & \tiny 1690 & \tiny 1635 & \tiny 1182 \\ \rotatebox{90}{\tiny H.Eggs} & \rotatebox{90}{\tiny $\tau=0.6$} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggssimp_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggssimp_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggssimp_6.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggssimp_7.png} & \tiny 1176 & \tiny 1022 & \tiny 154 & \tiny 154 \\ \rotatebox{90}{\tiny P.cysts} & \rotatebox{90}{\tiny $\tau=0.5$} & \cincludegraphics[width=.18\linewidth]{figs/proj/protosimp_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protosimp_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protosimp_6.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protosimp_7.png} & \tiny 2562 & \tiny 1643 & \tiny 919 & \tiny 666 \\ \rotatebox{90}{\tiny H.Larvae (I)} & \rotatebox{90}{\tiny $\tau=0.6$} & \cincludegraphics[width=.18\linewidth]{figs/proj/larvas_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/larvas_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/larvas_6.png} & 
\cincludegraphics[width=.18\linewidth]{figs/proj/larvas_7.png} & \tiny 2337 & \tiny 1813 & \tiny 524 & \tiny 524 \\ \rotatebox{90}{\tiny H.Eggs (I)} & \rotatebox{90}{\tiny $\tau=0.5$} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggscimp_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggscimp_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggscimp_6.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/eggscimp_7.png} & \tiny 3400 & \tiny 983 & \tiny 2417 & \tiny 2076 \\ \rotatebox{90}{\tiny P.cysts (I)} & \rotatebox{90}{\tiny $\tau=0.5$} & \cincludegraphics[width=.18\linewidth]{figs/proj/protocimp_1.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protocimp_2.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protocimp_6.png} & \cincludegraphics[width=.18\linewidth]{figs/proj/protocimp_7.png} & \tiny 6363 & \tiny 2215 & \tiny 4148 & \tiny 1733 \\ \end{tabular} \end{adjustbox} \vspace{-0.2cm} \caption{Comparison of different label propagation methods (columns) for different datasets (rows). From left to right: ILP, SALP, labels automatically propagated by OPF-Semi, and final labeling result of SALP together with OPF-Semi. Colors indicate labels given by either supervised samples (ILP, SALP) or both unsupervised and propagated labels (OPF-Semi, OPF-Semi+user). Black shows samples to be considered by manual propagation (three left columns), and samples skipped by manual propagation (right column). Sample set sizes are shown to the right.} \label{f.mnist} \end{figure} For the H.Eggs dataset, we see that the ILP projection shows well-separated sample groups from distinct classes (colors). This indicates that separating classes in feature space is relatively easy. This is confirmed in turn by the fact that we have only very few low-confidence samples after running OPF-Semi 2D (black dots in the SALP projection).
Hence, while labeling in ILP can proceed very easily, given the good cluster separation, labeling in SALP is \emph{even easier}, since we have both good cluster separation \emph{and} a low number of samples to label. In this case, the user propagated labels to $1171$ samples in ILP and to only $154$ samples in SALP. For P.cysts, the projections show a less clear visual separation of same-class (same color) points into groups. This makes interactive label propagation more challenging for both ILP and SALP. The user propagated labels to $1999$ samples in ILP and to $919$ samples in SALP. For SALP, we see that OPF-Semi 2D propagated labels in the more central regions of the visible groups where, hence, confidence is high. The remaining confusion regions (black points) are solved by the user. For H.Larvae, we notice that supervised impurity samples (green) are spread all over the projection, whereas the supervised H.Larvae samples (red) are more concentrated in the top-right of the projection. Given this quite good visual separation, propagating the impurity label using ILP is relatively easy for most parts of the projection. However, this still takes manual effort. Using SALP, such `easy' areas are solved automatically, and the user is left with only the more difficult region at the top-right, where green meets red, to solve. In ILP, the user propagated labels for $2080$ samples on average, while in SALP this number was $524$ samples. For the H.Eggs dataset with impurities, the supervised impurity samples (gray) fall between groups of colored points (actual H.Eggs classes) in the projection. In contrast to the earlier datasets, we see many more black points in SALP, meaning that OPF-Semi 2D has difficulties in automatically propagating labels. This matches the fact that datasets with impurities are considerably harder. For this dataset, the user propagated labels to more points in SALP ($2076$) than in ILP ($1787$).
This supports the idea that the simplification of the SALP projection by removing high-confidence points, even though minor in this case, was enough to help the user see more structure in the projection along which she could propagate labels. Also, as for P.cysts, we see that OPF-Semi 2D propagates labels in the more central regions of the visible groups, leaving the rest to the user. Finally, for P.cysts with impurities, the supervised impurity samples (brown) are spread out over the entire projection. The supervised P.cysts samples (colors other than brown) are mixed quite strongly, and the projection shows little structure -- roughly, one large and one small crescent-shaped group. This is the most challenging dataset for manual label propagation and classification among the evaluated datasets. This difficulty can be noted by comparing P.cysts and H.Eggs, both without impurities. For P.cysts, even without impurities, the classes are mixed in the projection. In contrast, the classes are well separated in the projection for the H.Eggs dataset without impurities. When adding the impurities to those datasets, the difficulty increases for the classifiers, as shown in Sec.~\ref{sec:q4}. As for H.Eggs, OPF-Semi 2D finds only a few confident samples, so the manual labeling effort is quite similar for both ILP and SALP. This is matched by the actual number of points to which the user propagated labels ($1787$ with ILP \emph{vs} $1733$ with SALP). Even though these figures are almost identical, the main benefit of SALP here is that OPF-Semi 2D already filtered the easy (high-confidence) cases, thereby focusing the user's effort on the more difficult ones. \subsubsection{End-to-end value of SALP} \label{sec:q4} We have seen that SALP decreases the user's effort in label propagation.
A final question we answer is: How much added value does SALP bring, in terms of classification quality, as opposed to the earlier similar method, ILP, or to the best fully-automatic counterpart we found, OPF-Semi 2D? Table~\ref{t.results2} answers this by showing the average and standard deviation of $\kappa$ on the test set $T$ for each considered dataset. The table further shows the sizes of $S$, $U$, and $S\cup U$, and the mean accuracy values in label propagation for OPF-Semi 2D, ILP, and SALP. It is important to highlight that the propagation accuracy for SALP considers not only the low-confidence samples labeled by the user, but also the high-confidence ones automatically labeled by OPF-Semi 2D. We see that SALP consistently obtained the best classification results on unseen $T$ for all datasets. This shows that SALP is, indeed, of added value with respect to earlier existing methods -- using it yields better classifiers in the end. Separately, we see that, for all but the simplest datasets (MNIST and H.Eggs), SALP also yields the best label propagation accuracy. \input{results_2_new} \subsubsection{How do results depend on projection quality?} \label{sec:q5} We did the same experiments discussed in the sections so far using UMAP\,\cite{UMAP:2018} instead of t-SNE as a projection technique. Overall, we noticed worse results, in terms of label propagation accuracy and classifier quality ($\kappa$), than when using t-SNE. This indicates that the neighborhood preservation quality of a projection (which is higher for t-SNE than for UMAP) is an important factor for our method. Note also that the trends observed so far linking obtained SALP and ILP quality with the dataset size and difficulty cannot be ascribed to us having used `optimal' projections by a lucky setting of the projection-method parameters: Indeed, both UMAP and t-SNE are non-deterministic methods.
\section{Discussion} \label{s.discussion} We next discuss several aspects of our method. \subsection{Using the $n$D \emph{vs} 2D feature space} An interesting question is how the fully automatic label propagation (ALP) performs when using the latent $n$D feature space \emph{vs} the 2D projection space. Figure~\ref{f.2d_nd_feat} shows the average $\kappa$ classification values for the LapSVM and OPF-Semi classifiers using these two spaces. Datasets are sorted along the $x$ axis by decreasing order of the $\kappa$ value for OPF-Semi 2D. We see that LapSVM leads to better results in 2D than in $n$D for half of the datasets, while OPF-Semi does that for \emph{all} datasets. This essentially tells us that the 2D projection space, created by t-SNE, is able to retain all needed information to enable the desired label propagation and, next, good-quality classifier construction. This is an important result, as it justifies next presenting the 2D projection space to the user as the sole information based on which she will perform the manual label propagation. We also see that the trend of the $\kappa$ values along the $x$ axis, for both the 2D and $n$D variants, matches the perceived difficulty of the datasets: High $\kappa$ values correspond to easier datasets (left), while lower $\kappa$ values correspond to the harder datasets with impurities (to the right). Finally, we plot here also the $\kappa$ values for ILP and SALP (curves in the figures). In all cases, these curves are above the automatic methods, showing that adding manual effort pays off. The SALP curve is above the ILP one, showing that the optimal design is reached by \emph{combining} automatic and manual label propagation (both executed in the 2D space).
\input{graphs/nd_2d} \subsection{User effort reduction} Besides achieving the best classification results, as compared to both fully-automatic and fully-manual (ILP) label propagation, SALP also reduces the \emph{manual} effort as compared to ILP. Figure~\ref{f.user_effort} shows this by depicting the percentage of samples labeled by the user over the total number of samples to label ($|U|$) per dataset, for ILP and SALP. For SALP, this measurement excludes, indeed, the samples automatically labeled by OPF-Semi 2D. Datasets are sorted along $x$ by increasing $|U|$, i.e., from the smallest to the largest dataset. Figure~\ref{f.user_effort} reveals several insights. First, assuming that the labeling effort is proportional to the number of labeled samples and the effort per sample is the same for ILP and SALP (which should be the case given that the two methods share the same visualization and interaction), we see that the ILP effort is always larger than the SALP effort, except for H.Eggs with impurities. Secondly, the percentage of propagated samples for ILP decreases with the dataset size. This can be explained by the difficulty of propagating labels in projections showing many points, where overlap and clutter become issues. We note the opposite trend for SALP: The percentage of propagated samples increases with dataset size. The trend breaks for the largest dataset (Prot.c.(I), $6363$ samples), about twice as large as the second-largest dataset (H.Eggs(I), $3400$ samples). Here, the projection is likely quite dense and cluttered, so manual propagation becomes similarly hard for ILP and SALP. In parallel, we observe that the number of samples $U\setminus L_c$, i.e., those that receive low-confidence labels from OPF-Semi relative to the threshold $\tau$, also increases with the dataset size. Thus, the number of samples $U\setminus L_c$ presented to the user to propagate labels with SALP increases with dataset size. One case in point is the H.Eggs with impurities dataset.
This dataset has the largest percentage of samples annotated by SALP, exceeding even ILP. This is explained by the size of the dataset (second largest one) and the fact that its projection makes it reasonably easy to propagate labels for the large impurity class (Fig.~\ref{f.mapmnist}). \input{graphs/user_effort.tex} \input{graphs/effectiveness.tex} \subsection{Effectiveness} As shown in Fig.~\ref{f.2d_nd_feat}, SALP consistently yields the best classification results, for both SVM and OPF classifiers, surpassing fully manual propagation methods (ILP) and the best fully automatic one (OPF-Semi 2D). The gains of SALP are higher for the more challenging datasets, where fully automatic methods encounter challenges. Conversely, where such methods work well, they reduce user effort as compared to fully manual propagation (ILP). In brief, this shows that the combination of automatic methods with human insights is indeed of added value both in increasing classifier quality and decreasing the effort needed to achieve it. It is next interesting to compare the \emph{normalized gain} of ILP \emph{vs} SALP. We define this as the obtained $\kappa$ value (what we get) divided by the percentage of manually labeled samples (what we need to pay). Figure~\ref{f.effectiveness} shows this normalized gain for ILP and SALP for both SVM and OPF classifiers. We see that SALP has far larger normalized gains than ILP for smaller datasets, while differences become quite small for the two largest datasets. \subsection{Manual sample selection justification} In classical pipelines, expert users would label samples in an empirical order. In pipelines that consider active learning methods, the sample informativeness can be used to suggest samples in each iteration for user supervision. However, those approaches do not usually explore the ability of humans to abstract information from data visualization.
Given that their labeling effort is limited (and their cost is high), the aim is to maximize the `added value' of creating extra labels manually. Our hypothesis (which we show, by our experiments, to hold) is that, when expert users are offered hints in terms of sample similarity (via the 2D projection and its tooltips) and by the confidence of an automatic labeler (color-coded in the projection), they can manually create extra labels that have a higher added value (for classification accuracy) than fully automatic methods can achieve. The core point of manual labeling is to enable users with expert knowledge to select the samples they think are most relevant for constructing a good training set. Answering the question of why expert users would select a certain sample subset rather than another one is not something we can argue theoretically, as it depends on a multitude of factors -- first and foremost, the training of the expert and how this training determines the expert to consider a given image more (or less) relevant for being labeled in a certain way. \subsection{Limitations} Several limitations exist to our approach, as follows. First, \emph{validation} is limited to six datasets, two classifier techniques, and one user performing manual labeling. Measuring the added value of SALP for more (dataset, classifier, user) combinations would bring more insights into the effectiveness of the method. Secondly, while the added value of the 2D t-SNE projection space in capturing information needed for good label propagation has been demonstrated for both automatic and manual methods, the actual effect of t-SNE's distortions has not been quantitatively gauged. Projection accuracy metrics such as stress, trustworthiness, continuity, or neighborhood hit\,\cite{nonato18} could be used to find such correlations.
On the other hand, using visual tools\,\cite{nonato18} that highlight such errors in specific projection areas can help the user to achieve more accurate and/or faster manual label propagation. \section{Conclusion} \label{s.conclusion} We proposed a combined automatic-and-user-driven approach for creating labeled samples for sparsely-annotated datasets for the purpose of training classifier models. For this, we extract dataset features using autoencoder neural networks and next reduce these to a 2D space using t-SNE. We next automatically propagate labels from the (few) supervised samples to the unsupervised ones in this 2D space, while monitoring the propagation confidence. We allow the user to manually annotate the samples labeled with low confidence, using the visual insights encoded in the 2D projection annotated with the supervised sample labels. Several quantitative results follow: First, we showed that the 2D projection space leads to higher-accuracy automatic label propagation than the high-dimensional latent space extracted by the autoencoder. To our knowledge, this insight is new, and suggests new ways for dimensionality reduction. Secondly, we showed that our semi-supervised method, combining the OPF-Semi automatic label propagation with user-driven manual label propagation, both done in the 2D space, achieves higher classification quality than both fully-automatic and fully-manual label propagation. This opens the way to new methods for combining automatic and human-centered approaches for the engineering of high-quality machine learning systems. Future work will consider the use of the proposed semi-automatic label propagation method in Active Learning (AL) scenarios. We expect that AL looping can improve classification results as long as the propagation accuracy increases. Also, we intend to consider metric learning approaches that might improve the 2D projection of the feature space.
We are interested in methods that allow the comparison between training and testing data. Specifically, we intend to investigate methods such as the exemplar-centered High Order Parametric Embedding~\cite{Min:2017}. Separately, we plan to perform more extensive validation studies measuring the added value of our approach for more types of datasets, classification methods, and using additional visual analytics techniques to help users to propagate labels better and faster. \section*{Acknowledgments} The authors are grateful to FAPESP grants \#2014/12236-1, \#2016/25776-0 and \#2017/25327-3, and CNPq grant 303808/2018-7. The views expressed are those of the authors and do not reflect the official policy or position of the S\~ao Paulo Research Foundation. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{sec:introduction} \begingroup% \renewcommand{\thefootnote}{}% \footnote{% A preliminary version of this work was presented in \citet{stroh2017sequential}. % The results contained in this article also appear in the PhD thesis of the first author \cite{stroh:thesis}. % } \endgroup In the domain of computer experiments, \emph{multi-fidelity} refers to the idea of combining results from numerical simulations at different levels of accuracy, % high-fidelity simulations corresponding to more accurate but, in general, more expensive computations. As a representative example of a multi-fidelity simulator, consider the case of a partial differential equation (PDE) solver based on a finite element method: the accuracy of the numerical solution depends among other things on the fineness of the discretization. High fidelity results are obtained when the mesh size is small. Conceptually, a simulator is viewed in this article as a \emph{black box} with inputs and outputs. % The parameter that controls the level of accuracy/fidelity---the mesh size in the case of a PDE solver---is one of the inputs of this black box, % alongside others such as design or control variables and environmental variables \citep[see, e.g.,][]{santner2003design}. Examples of multi-fidelity simulators can be found in virtually all areas of engineering and science, including aeronautics \citep{forrester2007multi}, fire safety \citep{demeyer2017surrogate}, electromechanics \citep{hage2014radial}, electromagnetism \citep{koziel2013robust}, h\ae{}modynamics \citep{perdikaris2016model} and many more. 
When the objective is to estimate a particular quantity of interest (QoI), % such as the optimal value of the design variables (optimization problem) or the probability that the outputs belong to a prescribed ``safe region'' (reliability problem), % multi-fidelity makes it possible to obtain a good approximation of the QoI with a computational effort lower than what would have been necessary if only high fidelity simulations had been carried out. This cost reduction is achieved through the joint use of multi-fidelity models, which allow simulation results obtained at different levels of fidelity to be combined, and multi-fidelity designs of experiments (DoE); see \cite{fernandez2016review} for a review that covers both aspects. This article addresses the problem of constructing, sequentially, a multi-fidelity DoE targeting a given QoI. We adopt a Bayesian point of view following the line of research initiated by \citet{sacks1989design}---see also \citet{currin1991bayesian}, \citet{santner2003design}\ldots---where prior belief about the simulator is modeled using a Gaussian process. The Bayesian approach provides a rich framework for the construction of sequential DoE, % which has been abundantly relied upon in previous works dealing with the case of single-fidelity simulators, where the cost of a simulation is assumed to be independent of the value of the input variables % \citep[see, e.g.,][]{kushner64, mockus78, jones1998efficient, ranjan2008, villemonteix2009informational, picheny2010adaptive, bect2012sequential, chevalier2014fast}. In this framework, sequential designs are usually constructed by means of a sampling criterion---also called infill criterion, acquisition function or merit function---, the value of which indicates whether a particular point in the input space is promising or not. The expected improvement (EI) criterion \citep{jones1998efficient} is a popular example of such a sampling criterion. 
The extension of the Bayesian approach to sequential DoE in a multi-fidelity setting is based on two ingredients: 1) the construction of prior models for simulators with adjustable fidelity; 2) the construction of sampling criteria that take the variable cost of simulations into account. For the case of \emph{deterministic} multi-fidelity simulators, Gaussian process-based models have already been proposed in the literature \citep{kennedy2000predicting, le2013multi, le2014recursive, picheny2013nonstationary, tuo2014surrogate}. Extensions to \emph{stochastic} simulators have been proposed as well \citep{stroh2016gaussian, stroh2017assessing}. Sampling criteria for single-fidelity sequential designs do not reflect a crucial feature of multi-fidelity simulators: the cost of a run depends on the value of the inputs (in particular on the one that controls the fidelity of the simulation). Various methods that take the variable cost of the simulations into account have been proposed for particular cases, notably single-objective unconstrained optimization \citep{huang2006sequential, swersky2013multi, he2017optimization} and global approximation \citep{xiong2013sequential, gratiet2015kriging}. In this article, we provide a general principled methodology to construct sequential DoE for multi-fidelity simulators % and, more generally, for simulators where the cost of a simulation depends on the value of the inputs. The methodology is applicable to any QoI, and builds on the \emph{Stepwise Uncertainty Reduction} (SUR) principle \citep[see, e.g.,][and references therein]{villemonteix2009informational, bect2012sequential, chevalier2014fast, bect2016supermartingale}, which unifies many of the aforementioned sequential DoE for the fixed-cost case.
More precisely, for the variable-cost case, we propose the \emph{Maximal Rate of Stepwise Uncertainty Reduction} (MR-SUR) principle, % which consists in constructing a sequential design by maximizing, at each step, the ratio between the expected reduction of uncertainty % (to be defined more precisely later on) and the cost of the simulation. The article is organized as follows. Section~\ref{sec:models} reviews Gaussian process modeling for deterministic simulators, and discusses some possible extensions to (normally distributed) stochastic simulators. Section~\ref{sec:doe} first reviews both existing methods of sequential design for multi-fidelity simulators and the SUR principle for fixed-cost simulators, and then presents the MR-SUR principle and its relations with some existing sequential DoE. Finally, Section~\ref{sec:illustration} illustrates the method and assesses its performance through several academic examples, % including a computationally intensive problem of fire safety analysis where the quantity of interest is the probability of exceeding a tenability threshold during a building fire. \section{Gaussian-process models for multi-fidelity} \label{sec:models} We consider a computer simulator with input variables $u \in \Uset \subset \Rset^d$ and one or several scalar outputs, which are generally obtained after some post-processing steps (e.g., an aerodynamic drag in a CFD model). Moreover, we consider that the accuracy, or \emph{fidelity}, of the computer simulation can be tuned using a parameter $\delta$ that ranges in a discrete or continuous set~$\Tset$. For instance, $\delta$ may be a mesh size in a finite element method. Such a parameter will be called the \emph{fidelity parameter} and can be viewed as an additional input of the simulator. We denote by $x = (u, \delta) \in \Xset$ the aggregated vector of inputs, with $\Xset = \Uset \times \Tset$, and we assume from now on that the output is scalar.
\subsection{The auto-regressive model for deterministic simulators} \label{subsec:model:KO} The so-called \emph{auto-regressive model} of \citet{kennedy2000predicting} assumes a deterministic simulator with a finite number~$S$ of levels of increasing fidelity. Let $\delta_1,\,\ldots, \delta_S$ denote the corresponding values of the fidelity parameter and set $\Tset = \{ \delta_1, \ldots, \delta_S \}$. The simulator is then modeled by a Gaussian process~$\xi$ on~$\Xset = \Uset \times \Tset$, defined through an auto-regressive relationship between successive levels: \begin{equation} \left\{ \begin{aligned} \xi(u, \delta_1) & = \eta_1(u),\\ \xi(u, \delta_s) & = \rho_{s-1}\, \xi(u, \delta_{s-1}) + \eta_s(u), \quad 1 < s \le S, \end{aligned}\right. \end{equation} where $\eta_1$, \ldots, $\eta_S$ are $S$~mutually independent Gaussian processes, and $(\rho_s)_{1\leq s < S} \in \Rset^{S - 1}$. The model has been used in numerous applications, where the actual number~$S$ of levels is most often two \citep[see, e.g.,][]{% forrester2007multi, kuya2011multifidelity, brooks2011multi, wankhede2012multi, goh2013prediction, le2014recursive, gratiet2015kriging, elsayed2015optimization, thenon2016multi, demeyer2017surrogate}, sometimes three \citep[][Section~3.2]{perdikaris2016model}. In practice, the Gaussian processes~$\eta_s$ are chosen among a family of Gaussian processes indexed by (hyper-)parameters such as correlation lengths, regularity parameters, etc., which are estimated from data (simulation results), by maximum likelihood for instance (see, e.g., \cite{stein1999interpolation}). Since the processes $\eta_s$ are assumed independent, there must be enough simulation results at each level of fidelity, even at the possibly very expensive highest fidelity levels, to obtain good estimates of the hyper-parameters---% which explains perhaps why this model is typically used with a small number of levels. 
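As a concrete illustration, the two-level version of this auto-regressive construction can be simulated in a few lines. The kernels, correlation lengths, variances, and the coefficient $\rho_1$ below are hypothetical choices made for the sketch, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp(u, ell, s2):
    """Squared-exponential covariance matrix on a 1-D input grid u (illustrative kernel)."""
    d = u[:, None] - u[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

u = np.linspace(0.0, 1.0, 50)          # grid of input variables u
jitter = 1e-8 * np.eye(u.size)         # numerical regularization for the Cholesky factor

# Independent Gaussian processes eta_1, eta_2 with hypothetical hyper-parameters.
eta1 = np.linalg.cholesky(sq_exp(u, ell=0.2, s2=1.0) + jitter) @ rng.standard_normal(u.size)
eta2 = np.linalg.cholesky(sq_exp(u, ell=0.4, s2=0.1) + jitter) @ rng.standard_normal(u.size)

rho = 0.8                              # auto-regressive coefficient rho_1 (made up)
xi_low = eta1                          # xi(u, delta_1) = eta_1(u)
xi_high = rho * xi_low + eta2          # xi(u, delta_2) = rho_1 xi(u, delta_1) + eta_2(u)
```

Sampling level~2 as $\rho_1$ times level~1 plus an independent increment is exactly the auto-regressive relationship above; with $S$ levels one would simply iterate the second line.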
\subsection{The additive model for deterministic simulators} \label{subsec:model:TWY} Another approach to building Gaussian process models for deterministic multi-fidelity simulators, which readily applies to the case where $\Tset$ contains a continuum of levels of fidelity or a large number of ordered discrete levels of fidelity, has been proposed by \citet{picheny2013nonstationary} and \citet{tuo2014surrogate}. Assuming for simplicity that $\Tset = \left[0, \infty\right)$, with $\delta = 0$ corresponding as in \citet{tuo2014surrogate} to the highest---often unreachable---level of fidelity, a Gaussian process~$\xi$ over the product space~$\Xset = \Uset \times \Tset$ is defined in this approach as the sum of two independent parts: \begin{equation} \label{eq:TWY:sum} \xi(u, \delta) = \xi_0(u) + \varepsilon(u, \delta), \end{equation} where $\xi_0$ and $\varepsilon$ are mutually independent Gaussian processes, and $\varepsilon$ has zero mean and goes to zero in the mean-square sense when $\delta \to 0$. In other words, $\var \left( \varepsilon (u, \delta) \right) \to 0$ when $\delta \to 0$, for all~$u$: as a consequence, $\xi$ is a \emph{non-stationary} Gaussian process on $\Xset \subset \Rset^{d+1}$. Under this decomposition, $\xi_0$ represents the ``ideal'' version of the simulator, while $\varepsilon$ represents numerical error. In both articles, $\xi_0$ is then assumed to be stationary, whereas the covariance function of $\varepsilon$ is multiplicatively separable: for all $u, u' \in \Uset$ and $\delta, \delta' \in \Tset$, \begin{equation} \label{eq:TWY:cov-epsi} \cov\left(\varepsilon(u, \delta), \varepsilon(u', \delta')\right) = r(\delta, \delta')\, k(u, u'), \end{equation} where $k$ is a stationary covariance function on~$\Uset$, and $r$ is a (non-stationary) covariance function on~$\Tset$ such that $r(\delta, \delta) \to 0$ when $\delta \to 0$. 
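A minimal sketch of this error covariance, assuming a squared-exponential $k$ and, purely for illustration, $r(\delta, \delta') = \min\{\delta, \delta'\}$ (one valid choice satisfying $r(\delta, \delta) \to 0$ as $\delta \to 0$):

```python
import numpy as np

def k_u(u, up, ell=0.2):
    """Stationary covariance on U (squared-exponential, illustrative choice)."""
    return np.exp(-0.5 * ((u - up) / ell) ** 2)

def r_delta(d, dp):
    """Non-stationary covariance on T with r(delta, delta) -> 0 as delta -> 0."""
    return np.minimum(d, dp)

def cov_eps(x, xp):
    """Separable covariance of the error process at x = (u, delta), xp = (u', delta')."""
    (u, d), (up, dp) = x, xp
    return r_delta(d, dp) * k_u(u, up)

# The error variance vanishes at the highest fidelity delta = 0,
# and grows with delta at fixed u:
var_high_fid = cov_eps((0.3, 0.0), (0.3, 0.0))   # 0 at delta = 0
var_low_fid = cov_eps((0.3, 0.5), (0.3, 0.5))    # positive at delta = 0.5
```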
As an example of a suitable choice for~$r$, consider the Brownian-type model proposed by \citet{tuo2014surrogate}: \begin{equation} \label{eq:TWY:Brownian-cov} r(\delta, \delta') = \min\{\delta, \delta'\}^L, \end{equation} with $L$ a real positive parameter. Other choices are of course possible. \newcommand \EquModelTWY {\eqref{eq:TWY:sum}--\eqref{eq:TWY:Brownian-cov}} \subsection{Extension to stochastic simulators} \label{subsec:model:stoch} We now turn to the case of stochastic simulators, that is, simulators whose output is stochastic, % as happens for instance when the computer program relies on a Monte Carlo method \citep[see, e.g.,][]{cochet2014}. Extending the multi-fidelity Bayesian methodology of Sections~\ref{subsec:model:KO} and~\ref{subsec:model:TWY} to stochastic simulators is not straightforward in general, % since the output at a given input point~$x_i = (u_i, \delta_i) \in \Xset$ is now a random variable~$Z_i$, the distribution of which is in general unknown and different at each point in~$\Xset$. (Several runs at the same input point yield independent and identically distributed responses.) We focus in this section on the simpler case where the output~$Z_i$ can be assumed to be normally distributed: \begin{equation} Z_i \,\vert\, \xi, \lambda \;\sim\; \N(\xi(x_i), \lambda(x_i)), \label{eq:normal_outputs} \end{equation} with mean~$\xi(x_i)$ and variance~$\lambda(x_i)$ possibly depending on the input point. In this setting, we propose to extend the multi-fidelity models of previous sections using independent prior distributions for~$\xi$ and~$\lambda$, % with either the autoregressive model of Section~\ref{subsec:model:KO} or the additive model of Section~\ref{subsec:model:TWY} as a prior for~$\xi$. Then, since $\lambda$ must have positive values and we want to retain the simplicity of the Gaussian process framework, % we suggest modeling the logarithm of the variance, i.e. 
$\log(\lambda)$, by a Gaussian process~$\tilde \lambda$, following \citet{goldberg1998regression}, \citet{kersting2007}, \citet{boukouvalas2009learning} and others. Under this type of model, the inference task% ---estimating the hyper-parameters of the Gaussian process models for~$\xi$ and~$\tilde{\lambda}$, and computing posterior distributions---% becomes more difficult since neither~$\xi$ nor~$\tilde{\lambda}$ are directly observable. \cite{goldberg1998regression} take a fully Bayesian approach and suggest using a time-consuming Monte-Carlo method. Other authors have proposed optimization-based approaches that simultaneously produce estimates of both the Gaussian process hyper-parameters and the unobserved log-variances: in particular, \citet{kersting2007} and~\citet{boukouvalas2009learning} propose a method called \emph{most likely heteroscedastic GP}, stemming from the Expectation-Maximization (EM) algorithm % \citep[see also][for a similar algorithm]{marrel2012global}, while \citet{binois2018} use a more sophisticated joint maximization procedure with relaxation to obtain the joint MAP (maximum a posteriori) estimator. For the numerical experiments of this article (Sections~\ref{subsec:illustration_dampedOscillator} and~\ref{subsec:illustration_fds}) we will take a simpler route, assuming that the variance~$\lambda$ depends only on the fidelity level~$\delta$---which is approximately true in the two examples we shall consider. In this setting, as long as the number of fidelity levels of interest is not too large, the value of the variance at these levels can be simply estimated jointly with the other hyper-parameters of the model; a general-purpose log-normal prior for the vector of variances is proposed by \citet{stroh2016gaussian, stroh2017integrating}.
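Under the normality assumption on the outputs, with a variance that depends only on the fidelity level as in the simpler route just described, drawing simulator observations is straightforward. The levels and log-variances below are invented numbers for illustration; in practice they would be estimated jointly with the other hyper-parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-level log-variances (fidelity level delta -> log lambda(delta)).
log_lambda = {1.0: 0.0, 0.5: -1.0, 0.25: -2.0}

def sample_output(xi_value, delta):
    """One observation Z ~ N(xi(x), lambda(delta)) of the stochastic simulator."""
    lam = np.exp(log_lambda[delta])
    return xi_value + np.sqrt(lam) * rng.standard_normal()

z = sample_output(0.0, 0.25)   # noisier at delta = 1.0 than at delta = 0.25
```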
\section{Sequential design of experiment for multi-fidelity} \label{sec:doe} \subsection{Existing methods} \label{subsec:doe_multi} In the multi-fidelity literature, a variety of sequential design algorithms have been proposed. (See Supplementary Material for a review of \emph{non-sequential} multi-fidelity designs, which can be used as initial designs for sequential ones.) For instance, \citet{forrester2007multi} suggest using the auto-regressive model of \cite{kennedy2000predicting} and a standard single-level sequential design at the highest level of fidelity to select input variables $u\in\Uset$ for the next experiment. Then, simulations at all levels of fidelity are run for the selected $u$. Building on \citet{forrester2007multi}, \citet{kuya2011multifidelity} suggest a two-stage method: run a large number of simulations at the low-fidelity level, and then use a sequential design strategy to select simulations at the high-fidelity level. In a different spirit, \citet{xiong2013sequential} use Nested Latin Hypercube Sampling (NLHS) and suggest doubling the number of simulations when going from a level~$\delta^{(s)}$ to~$\delta^{(s + 1)}$, until some cross-validation-based criterion is satisfied. More interestingly in the context of this article, some methods have been proposed that explicitly take into account the simulation cost. This is typically achieved by crafting a sampling criterion that takes the form of a ratio between a term which measures the interest of a simulation at $(u, \delta)$, and the cost of the simulation \citep{huang2006sequential, gratiet2015kriging, he2017optimization}. For instance, \cite{he2017optimization} propose a global optimization method using the Expected Quantile Improvement (EQI) of \cite{picheny2013quantile} and the multi-fidelity model of \cite{tuo2014surrogate}, and build a new sampling criterion corresponding to the ratio between the EQI sampling criterion and the cost of a simulation.
Outside the multi-fidelity literature, a similar idea has been proposed by \citet{johnson1960information} to design sequential testing procedures and by \citet{swersky2013multi} for multi-task optimization. In both cases, the numerator of the criterion is the expected reduction of the entropy of the QoI. In this article, we propose a general methodology to build such sequential designs, which is not tied to a particular kind of model or QoI. The key idea is to measure the potential of a particular design point using the SUR framework, recalled in Section~\ref{subsec:doe_sur}. The methodology itself, which we call MR-SUR, is presented in Section~\ref{subsec:doe_mrsur}. \subsection{Stepwise Uncertainty Reduction} \label{subsec:doe_sur} We recall here the principle of SUR strategies, introduced in the design of computer experiments by Vazquez and co-authors \citep[][\ldots]{vazquez:2007:ds, villemonteix2009informational, vazquez2009sequential, bect2012sequential, chevalier2014fast}. Given a Bayesian model of a simulator and an unknown QoI $Q$, that is, a particular feature of the simulator that we want to estimate, a SUR strategy is a Bayesian method for the construction of a sequence of evaluation locations $X_1, X_2, \ldots\in\Xset$ at which observations of the simulator will be taken in order to reduce the uncertainty on $Q$. (In this section, $\Xset$ denotes a generic input space, not necessarily of the form~$\Xset = \Uset \times \Tset$.) The starting point of the construction of a SUR strategy is the definition of a statistic~$H_{n}$ measuring the residual uncertainty about $Q$ given past observations $Z_1, \ldots, Z_n$. Many choices for~$H_n$ are possible for any particular problem, but a natural requirement \citep{bect2016supermartingale} is that~$H_n$ should be decreasing on average when $n$~increases. For instance, if~$Q$ is a scalar QoI, $H_n$ could be the posterior entropy or the posterior variance of~$Q$.
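For a scalar QoI, $H_n$ can be estimated by Monte Carlo from posterior sample paths. In the sketch below the posterior mean and covariance are illustrative stand-ins for a conditioned Gaussian process, and $Q$ is taken to be the maximum of $\xi$ purely as an example of a scalar QoI.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative posterior over a grid (stand-ins for a GP conditioned on Z_1, ..., Z_n).
x = np.linspace(0.0, 1.0, 30)
post_mean = np.sin(2.0 * np.pi * x)
d = x[:, None] - x[None, :]
post_cov = 0.05 * np.exp(-0.5 * (d / 0.1) ** 2) + 1e-8 * np.eye(x.size)

# Sample posterior paths, evaluate the scalar QoI Q = max xi on each path,
# and estimate H_n as the posterior variance of Q.
L = np.linalg.cholesky(post_cov)
paths = post_mean[None, :] + rng.standard_normal((2000, x.size)) @ L.T
q_samples = paths.max(axis=1)
H_n = q_samples.var()
```

Replacing `var` with an entropy estimator gives the other choice of residual uncertainty mentioned in the text.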
If $Q$ is a function defined on~$\Xset$, as will be the case in Section~\ref{sec:illustration}, a possible choice is \begin{equation} H_n \;=\; % \esp_n \left( \lVert Q - \widehat Q_n \rVert_\mu^2 \right) \;=\; % \int_\Xset \var_n \left( Q(x) \right)\, \mu(\dx), \label{equ:Hn-L2} \end{equation} where $\mu$ denotes a measure on~$\Xset$, % $\lVert h \rVert_\mu^2 = \int_\Xset h(x)^2\, \mu(\dx)$, $\esp_n$ (resp.~$\var_n$) is the posterior expectation (resp.~variance) given~$Z_1$, \ldots, $Z_n$, % and $\widehat Q_n (x) = \esp_n \left( Q(x) \right)$. Then, given past observations, $X_{n+1}$ is chosen by minimizing the expectation of the future residual uncertainty: \begin{equation} X_{n+1}= \argmin_{x\in \Xset} J_n\left(x\right),\quad\text{with } J_n\left(x\right) = \esp_n\left(H_{n + 1} \middle \vert X_{n+1} = x\right), \label{eq:sur_crit} \end{equation} where the expectation is with respect to the outcome~$Z_{n+1}$ of a new simulation at~$x \in \Xset$. \medbreak \noindent\textbf{Example.} Assume a stochastic multi-fidelity simulator defined over $\Xset = \Uset \times \Tset$ as in Section~\ref{subsec:model:stoch}, and consider the functional QoI defined on~$\Xset$ by \begin{equation} \label{eq:QoI-prob-fun} Q(x) \;=\; % \prob \left( Z_x > \zcrit \bigm| \xi, \lambda \right) \;=\; % \Phi \left( \frac{ \xi(x) - \zcrit}{\sqrt{\lambda(x)}} \right), \end{equation} where $Z_x$ denotes the outcome of a new simulation at~$x$, $\zcrit \in \Rset$ is a given threshold, and $\Phi$ the cdf of the standard normal distribution. Pick some reference level~$\deltaref \in \Tset$ and consider the residual uncertainty \begin{equation} H_n \;=\; % \int_{\Uset} \var_n \left( Q(u, \deltaref) \right)\, \du, \label{equ:Hn-L2-U} \end{equation} which is a special case of~\eqref{equ:Hn-L2} with $\mu$ equal to Lebesgue's measure on~$\Uset$ at fixed $\delta = \deltaref$.
Then, using computations similar to those of \citet{chevalier2014fast}, it can be proved that \begin{equation} J_n(x) = \int_{\Uset} \left[ \Phi_2\Bigl(a_n(x'), a_n(x'); \frac{k_n(x', x')}{v_n(x')}\Bigr) - \Phi_2\Bigl(a_n(x'), a_n(x'); \frac{k_n(x, x')^2}{v_n(x) v_n(x')}\Bigr) \right]\, \ddiff u', \label{eq:notre-SUR-crit} \end{equation} where $x' = (u', \deltaref)$, $m_n$ (resp.~$k_n$) denotes the posterior mean (resp.~covariance) of~$\xi$, $v_n(x) = \lambda(x) + k_n(x, x)$, $a_n(x) = \left(m_n(x) - \zcrit\right)/\sqrt{v_n(x)}$, and $\Phi_2 \left( \cdot, \cdot \,; \rho \right)$ is the cdf of the standard bivariate normal distribution with correlation~$\rho$. (For tractability, the variance function $\lambda$ is assumed to be known in the computation of the criterion. In practice, the estimated variance function is plugged in the expression, and the integral over~$\Uset$ is approximated using a Monte Carlo method.) \begin{remark} See Supplementary Material for a proof of~\eqref{eq:notre-SUR-crit}, in a more general form which also allows for batches of parallel evaluations and integration of~$Q$ with respect to environmental variables (all or part of the components of~$u$, depending on the application). \end{remark} \begin{remark} \label{rem:special-case} % In the special case~$\lambda \equiv 0$ (deterministic simulator), corresponding to $Q(x) = \mathds{1}_{\xi(x) > \zcrit}$, the criterion~\eqref{eq:notre-SUR-crit} has been proposed by \cite{bect2012sequential} and computed by \citet{chevalier2014fast}. % The general case is new, to the best of our knowledge. % \end{remark} The reader is referred, e.g., to \cite{villemonteix2009informational, picheny2010adaptive, chevalier13, chevalier2014fast, bect2016supermartingale} for other examples of SUR criteria. 
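The criterion~\eqref{eq:notre-SUR-crit} requires many evaluations of the bivariate normal cdf $\Phi_2$. As a rough, self-contained illustration (a plain Monte Carlo stand-in, not the dedicated routines, such as Genz's algorithm, that would be used in practice), $\Phi_2(a, b; \rho)$ can be estimated by sampling correlated Gaussian pairs:

```python
import math
import random

def phi2(a, b, rho, n=200_000, seed=0):
    """Monte Carlo estimate of the standard bivariate normal cdf
    P(Z1 <= a, Z2 <= b) with correlation rho.  A correlated pair is
    built from two independent standard normals."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        g1 = rng.gauss(0.0, 1.0)
        g2 = rho * g1 + s * rng.gauss(0.0, 1.0)  # correlation rho with g1
        if g1 <= a and g2 <= b:
            hits += 1
    return hits / n

# Sanity check against the closed form
# Phi2(0, 0; rho) = 1/4 + arcsin(rho) / (2*pi):
print(abs(phi2(0.0, 0.0, 0.5) - (0.25 + math.asin(0.5) / (2 * math.pi))) < 0.01)
```

The Monte Carlo error decays as $n^{-1/2}$, which is far too slow for use inside an optimization loop; it only serves here to make the quantity in~\eqref{eq:notre-SUR-crit} concrete.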
\subsection{Maximum Rate of Stepwise Uncertainty Reduction} \label{subsec:doe_mrsur} The proposed Maximum Rate of Stepwise Uncertainty Reduction (MR-SUR) strategy builds on the SUR strategy presented in Section~\ref{subsec:doe_sur}. The goal is to achieve a balance between the (expected) reduction of uncertainty brought by new observations on the one hand, and the cost of these observations, usually measured by their computation time, on the other hand. Denoting by $C:\Xset \to \Rset_{+}$ the cost of an observation of the simulator, which depends on the fidelity level $\delta \in \Tset$ and/or the input variables $u\in\Uset$, the MR-SUR strategy is given by \begin{equation} X_{n+1} \;=\; \argmax_{x \in \Xset}\, \frac{H_n - J_n(x)}{C(x)} \;=\; \argmax_{x \in \Xset}\, \frac{G_n(x)}{C(x)}\,, \label{eq:mrsur} \end{equation} where $G_n(x) = H_n - J_n(x)$ is the \emph{expected uncertainty reduction} associated with a future observation at~$x \in \Xset$. This strategy boils down to a SUR strategy when $C$ is constant. A few special cases of MR-SUR strategies, adapted to particular models and estimation goals, have been proposed earlier in the literature. To the best of our knowledge, the oldest example is the sequential testing method of \cite{johnson1960information}, where~$H_n$ is the posterior entropy of the location of the faulty component in a piece of electronic equipment---with a discrete distribution over all possible fault locations as the underlying model. More recently, \cite{snoek2012practical} and \cite{swersky2013multi} have proposed Bayesian optimization procedures of the MR-SUR type, for unconstrained global optimization problems with variable-cost noiseless evaluations, corresponding respectively, when the cost is constant, to the expected improvement \citep{mockus78, jones1998efficient} and IAGO \citep{villemonteix2009informational} algorithms.
Finally, the first sequential design procedure of \cite{gratiet2015kriging} can also be seen as an approximate MR-SUR strategy for the approximation problem, where $H_n$ is the posterior integrated prediction variance. To illustrate the MR-SUR principle, let us consider a simple simulated example, with $\xi$ a Gaussian process on $\Xset = \Uset \times \Tset = [-0.5,0.5]\times[0,1]$ such that $\xi \mid m \sim \GP(m, k)$, $m \sim \mathcal{U}(\Rset)$, and $k$ as in Section~\ref{subsec:model:TWY}: \begin{equation*} k:((u,\delta), (u^{\prime}, \delta^{\prime})) \;\mapsto\; % \sigma_0^2 \mathcal{M}_{\nu_0} \left(\frac{|u - u^{\prime}|}{\rho_0}\right) \,+\, % \sigma_0^2G\min\left\{\delta, \delta^{\prime}\right\}^L \mathcal{M}_{\nu_{\varepsilon}} \left(\frac{|u - u^{\prime}|}{\rho_{\varepsilon}}\right)\,, \end{equation*} where $\mathcal{M}_{\nu}$ stands for the Matérn correlation function with regularity parameter $\nu$. The values $m = 0$, $\sigma_0 = 1$, $G = 4$, $L = 2$, $\nu_0 = \nu_{\varepsilon} = 5/2$, $\rho_0 = 0.3$, $\rho_{\varepsilon} = 0.1$ are used in the simulations, and all the parameters except $m$ are assumed to be known in this experiment. The cost function is $C: (u, \delta) \mapsto 1/\delta$ and the QoI is \begin{equation*} Q =\int_{\Uset} \mathds{1}_{\xi(u,0) > 0}\, \du\,. \end{equation*} Note that the level of highest fidelity $\delta=0$ is not observable in practice. A NLHS of size $n = 12 + 6 + 6 + 3$ on the levels $\delta = 1, 1/2, 1/5, 1/10$ is taken as the observed DoE, and the outputs~$Z_1, \ldots, Z_n$ are simulated according to~\eqref{eq:normal_outputs} with constant variance~$\lambda = 0.4^2$. We compute the functions $J_n$, $G_n$ and $C$ over a regular grid on $\Uset \times \Tset$, to obtain Figures~\ref{fig:benef_vs_cost} and~\ref{fig:mean_benef}. 
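On such a finite grid, the MR-SUR choice~\eqref{eq:mrsur} reduces to an argmax of the ratio $G_n/C$ over the candidates. A minimal sketch, with made-up toy values for $H_n$, $J_n$ and $C$ (the quantities themselves would come from the posterior computations described above):

```python
def mrsur_select(candidates, Hn, Jn, cost):
    """Return the candidate x maximizing the rate (Hn - Jn(x)) / C(x)."""
    return max(candidates, key=lambda x: (Hn - Jn[x]) / cost[x])

# Toy numbers (hypothetical): a cheap low-fidelity run can beat a
# high-fidelity run that reduces uncertainty more in absolute terms.
Hn = 1.0
Jn = {"LF": 0.9, "HF": 0.6}    # expected residual uncertainty J_n(x)
cost = {"LF": 0.1, "HF": 1.0}  # cost C(x): the HF run is ten times dearer
print(mrsur_select(["LF", "HF"], Hn, Jn, cost))  # rates: 1.0 vs 0.4, so LF wins
```

The same one-liner with `cost` identically equal to one recovers the plain SUR choice~\eqref{eq:sur_crit}.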
\begin{figure} \begin{center} \psfrag{Cost}[tc][tc]{Cost of an observation $C(x)$} \psfrag{Expected uncertainty}[bc][bc]{$J_n(x)$} \psfrag{Pareto line}[bc][bc]{} \subfloat[Uncertainty and cost]{ \includegraphics[width = 0.5\textwidth]{mrsur_uncer_cost} \label{fig:benef_vs_cost}} \psfrag{Cost}[tc][tc]{Cost of an observation $C(x)$} \psfrag{Benefit./Cost}[bc][bc]{$[H_n - J_n(x)]/C(x)$} \psfrag{Mean benefit}[bc][bc]{} \subfloat[Gain-cost ratio]{ \includegraphics[width = 0.5\textwidth]{mrsur_mean_benefit} \label{fig:mean_benef}} \end{center} \caption{An example of objective space. (a) Representation of possible designs in the $(C, J)$ plane. (b) Representation in the $(C, G/C)$ plane. Each point corresponds to one point $x$, the solid line shows the Pareto-optimal points, and the square is the design returned by the Maximum Rate of Stepwise Uncertainty Reduction criterion.} \end{figure} Observe on Figure~\ref{fig:benef_vs_cost} that, for each cost value (corresponding to a fixed fidelity level), there is a range of points that yield more or less expected uncertainty reduction. Good observation points lie on the Pareto front (solid black line), that is, the set of points for which there is no larger expected uncertainty reduction at lower cost. The MR-SUR strategy selects an observation location that corresponds to the maximum ``slope'' of the Pareto front. Figure~\ref{fig:example_seqdes} shows the sequence of Pareto fronts as more observation points are added to the design using~\eqref{eq:notre-SUR-crit}. The horizontal axis is the total cost, so that the left ends of the Pareto fronts are shifted. Observe for instance that the points numbered~$3$ to~$9$, selected using MR-SUR, achieve a larger uncertainty reduction at lower cost than what would have been achieved if we had selected only one expensive observation.
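The Pareto front shown in Figure~\ref{fig:benef_vs_cost} can be extracted from a finite set of $(C, J)$ pairs by discarding dominated points. A sketch with hypothetical values (in the paper these pairs come from evaluating $C$ and $J_n$ on a grid):

```python
def pareto_front(points):
    """Return the (cost, J) points not dominated by any other point,
    i.e. those for which no other point has both a cost that is no
    larger and an expected residual uncertainty J that is no larger."""
    front = []
    for c, j in points:
        dominated = any(c2 <= c and j2 <= j and (c2, j2) != (c, j)
                        for c2, j2 in points)
        if not dominated:
            front.append((c, j))
    return sorted(front)

# Hypothetical (cost, J) pairs: two fidelity levels plus one in between
pts = [(1.0, 0.5), (1.0, 0.8), (0.25, 0.9), (0.25, 0.7), (0.5, 0.6)]
print(pareto_front(pts))  # [(0.25, 0.7), (0.5, 0.6), (1.0, 0.5)]
```

The quadratic scan above is fine for the grid sizes used here; larger candidate sets would call for a sort-and-sweep implementation.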
\begin{figure} \begin{center} \psfrag{Cost}[tc][tc]{Total cost} \psfrag{Expected uncertainty}[bc][bc]{$J_n(x)$} \psfrag{Pareto line}[bc][bc]{} \includegraphics[width = 0.7\textwidth]{mrsur_uncer_cost_recur} \end{center} \caption{Sequence of Pareto fronts in the $(C, J)$ space as a function of the total cost of the design, for an example run of the sequential MR-SUR algorithm.} \label{fig:example_seqdes} \end{figure} \section{Numerical results} \label{sec:illustration} \subsection{Setup of the experiments} In each example, we consider a multi-fidelity simulator for which the simulation cost $C$ depends on~$\delta$ alone, and is assumed to be known. Some common features of all three numerical experiments are presented in this section. \emph{Initial DoE.} A nested Latin hypercube sample (NLHS) is used as an initial design. More specifically, we use the algorithm developed by \cite{qian2009nested}, with an additional maximin optimization at each level to obtain better space-filling properties \citep[see][Section~2.2.3 for details]{stroh:thesis}. \emph{GP modeling.} In each example, a multi-fidelity GP model of the type described in Section~\ref{sec:models} is used. The posterior distribution of the parameters is initially sampled using an adaptive Metropolis-Hastings algorithm \citep{haario2001adaptive} and then updated at each iteration by sequential Monte Carlo \citep[see, e.g.,][]{chopin2002sequential}. More details about the particular GP model that is used, and the prior distribution on the parameters, are provided in each example section. \emph{Optimization of the sampling criterion.} At each iteration of a SUR or MR-SUR strategy, a new simulation point is selected according to~\eqref{eq:sur_crit} or~\eqref{eq:mrsur}.
This step involves an optimization of the SUR or MR-SUR criterion, which is carried out in the experiments of this article using a simple two-step approach: the criterion is first optimized by exhaustive search on a regular grid over $\Uset\times\Tset$, and then a local optimization is performed starting from the best point in the grid. Other approaches have been proposed in the literature, which would be more efficient in higher-dimensional problems \citep[see, e.g.,][]{Feliot:BMOO}. \emph{Other computational details.} All integrals are approximated by Monte Carlo methods. SUR and MR-SUR criteria are evaluated using the Maximum A Posteriori (MAP) estimator of the parameters---obtained by local optimization from the best point in the MCMC/SMC sample---in a plug-in manner. (A fully Bayesian approach could be considered in principle, but would lead to a much higher computational complexity.) \subsection{A one-dimensional example} \label{sec:one-dim-example} Consider as a first (toy) example the two-level deterministic simulator defined for $u \in \left[ 0; 1 \right]$ and $\delta \in \left\{ 1, 2 \right\}$ by the analytical formulas \citep{forrester2007multi} \begin{equation} \left\{ \begin{aligned} f_1(u) &= f(u, 1) = 0.5\, (6u - 2)^2\sin(12u - 4) + 10\, (u - 0.5),\\ f_2(u) &= f(u, 2) = (6u - 2)^2\sin(12u - 4) + 10, \end{aligned} \right. \end{equation} and assume that computing~$f_2$---hereafter referred to as the ``high fidelity'' function---is four times more costly than computing~$f_1$, e.g., $C(2) = 1$ and $C(1) = \frac{1}{4}$. Note that the two functions are related by $f_2(u) = 2\, f_1(u) - 20(u - 1)$, which makes them perfect candidates for the autoregressive model presented in Section~\ref{subsec:model:KO}. The goal in this example is to estimate the set \begin{equation*} \Gamma = \left\{ f_2 \geq \zcrit \right\} = \left\{ u\in \Uset,\, f_2(u) \geq \zcrit \right\} \end{equation*} with $\zcrit = 10$.
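The stated affine link between the two fidelity levels is easy to verify numerically; a small self-contained script using the formulas above:

```python
import math

def f1(u):
    """Low-fidelity Forrester function."""
    return 0.5 * (6*u - 2)**2 * math.sin(12*u - 4) + 10 * (u - 0.5)

def f2(u):
    """High-fidelity Forrester function."""
    return (6*u - 2)**2 * math.sin(12*u - 4) + 10

# Check the affine relation f2(u) = 2*f1(u) - 20*(u - 1) on a few points
for u in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(f2(u) - (2 * f1(u) - 20 * (u - 1))) < 1e-12
print("relation holds")
```

Because the discrepancy between $f_2$ and $2 f_1$ is exactly affine in $u$, a first-order autoregressive model with a simple trend captures the link between the two levels without error.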
The performance of MR-SUR for this task will be compared with that of SUR strategies operating at the low-fidelity level only (LF-SUR) or at the high-fidelity level only (HF-SUR). In this experiment, all three sequential strategies start with the same multi-fidelity initial design, and use the same Gaussian process prior and the same measure of uncertainty~$H_n$. The initial design consists of six observations at the low-fidelity level and three at the high-fidelity level, for a total of~$n = 9$ observations, corresponding to an initial budget of $6 \times \frac{1}{4} + 3\times 1 = 4.5$ cost units. A supplementary budget of $9.0$~cost units is assumed to be available for the sequential design. The autoregressive model of Section~\ref{subsec:model:KO} is used, with Matérn 5/2 covariance functions and weakly informative priors on the parameters (see Supplementary Material for details). The uncertainty on~$\Gamma$ is quantified using the uncertainty measure~\eqref{equ:Hn-L2-U}. In the special case of a deterministic simulator, we have (cf. Remark~\ref{rem:special-case}) \begin{equation*} H_n = \int_0^1 p_n(u)\, \left(1 - p_n(u)\right) \du, \end{equation*} where $p_n(u) = \prob_n\left( \xi_2(u) \geq \zcrit \right)$ is the posterior mean of~$\mathds{1}_{\xi_2(u) \geq \zcrit}$, and $p_n(u)\, \left(1 - p_n(u)\right)$ its posterior variance. 
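For a deterministic simulator, $H_n$ above is just the integral of the Bernoulli variance $p_n (1 - p_n)$ over $\Uset = [0, 1]$, which can be approximated on a grid by a midpoint rule. A sketch with a hypothetical excursion-probability profile $p_n(u) = u$:

```python
def uncertainty_Hn(p, n_grid=10_000):
    """Midpoint-rule approximation of H_n = int_0^1 p(u) * (1 - p(u)) du,
    the integrated posterior variance of the indicator 1_{xi_2(u) >= zcrit}."""
    h = 1.0 / n_grid
    return sum(p((k + 0.5) * h) * (1.0 - p((k + 0.5) * h))
               for k in range(n_grid)) * h

# With the hypothetical profile p(u) = u, the exact value is
# int_0^1 u*(1-u) du = 1/6.
print(round(uncertainty_Hn(lambda u: u), 4))  # 0.1667
```

The uncertainty vanishes exactly when $p_n$ is 0 or 1 everywhere, i.e., when the excursion set is known with certainty.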
\begin{figure} \quad \begin{subfloatrow} \subfloat[Comparison between designs of experiment]{\label{fig:ex1_errors} \psfrag{Cost}[tc][tc]{$c_n$}\psfrag{||p(x, tHF) - p*(x, tHF)||}[bc][bc]{$\MedErr_n$}\psfrag{L2-error on the probability function}[bc][bc]{}\includegraphics[width = 0.45\textwidth]{exp1_err_func} } \renewcommand{\arraystretch}{0.9} \subfloat[MR-SUR: number of LF/HF eval.]{\scriptsize \hspace{6em} \begin{tabular}[b]{|c|c|c|} \hline LF & HF & freq.\\ \hline 36 & 0 & 0 \\ 32 & 1 & 0 \\ 28 & 2 & 14 \\ 24 & 3 & 18 \\ 20 & 4 & 10 \\ 16 & 5 & 9 \\ 12 & 6 & 1 \\ 8 & 7 & 4 \\ 4 & 8 & 3 \\ 0 & 9 & 1 \\ \hline \end{tabular} \hspace{6em} \label{fig:ex1_histobs}} \end{subfloatrow} \hfill \caption{The one-dimensional experiment. (a) Median estimation error as a function of the cost. Light-gray disks: LF-SUR; dark-gray squares: HF-SUR; black diamonds: MR-SUR. (b) Number of LF/HF evaluations in the MR-SUR strategy. The last column indicates how many times, in the 60 repetitions, a given combination appears (recall that HF evaluations are four times as costly as LF ones).} \end{figure} The experiment is repeated $R = 60$ times---the simulator is deterministic, but randomness in the result comes from both the initial DoE and the use of a Monte Carlo procedure to sample from the posterior of the parameters. Figure~\ref{fig:ex1_errors} presents the evolution of the median estimation error, defined as $\MedErr_n = \mathrm{median}_{1 \le r \le R}\, \lVert p^{(r)}_n - \one_{f_2 > \zcrit} \rVert$ with $\lVert \cdot \rVert$ the $L^2$-norm on~$\Uset$, as a function of the cost $c_n = \sum_{i \le n} C(\delta_i)$. First, it appears clearly that high-fidelity evaluations are needed: the LF-SUR strategy achieves no significant error reduction with respect to the initial design. Second, we observe that the combination of low- and high-fidelity evaluations chosen by the MR-SUR strategy is more efficient, on average, than a purely high-fidelity sequential design.
The actual number of evaluations on each level is summarized, for the 60 repetitions, in Figure~\ref{fig:ex1_histobs}: the MR-SUR strategy tends to use between two and five high-fidelity evaluations. (The recommendation of \citet{xiong2013sequential}---observing the low-fidelity level twice as many times as the high-fidelity one---would correspond here to six high-fidelity evaluations.) \subsection{Random damped harmonic oscillator} \label{subsec:illustration_dampedOscillator} We now assess the performance of MR-SUR on an example proposed by~\citet{au2001estimation}. We consider a random damped harmonic oscillator, whose displacement $X$ is the solution of the stochastic differential equation \begin{equation} \ddot{X}(t) + 2\zeta\omega_0\dot{X}(t) + \omega_0^2X(t) = W(t), \quad t \in [0, t_{\mathrm{end}}], \quad \dot{X}(0) = 0,\quad X(0) = 0\,, \label{eq:ex2_stochas_eq} \end{equation} where $\omega_0$ is the resonance frequency of the oscillator, $\zeta$ is a damping coefficient, $W$ is a Gaussian white noise, and $t_{\mathrm{end}} = 30\,\text{s}$. The solution of~\eqref{eq:ex2_stochas_eq} can be approximated using an exponential Euler scheme with time step $\delta > 0$ (more details can be found in the Supplementary Material): we denote by $X_k^{(\delta)}$ the resulting approximation of $X$ at time steps $t_{k} = k \delta$, $k \in \Nset$, $k \le K_\delta = \lfloor t_{\mathrm{end}} / \delta \rfloor$. We will be interested in the maximal log-displacement $\max_{t \le t_{\mathrm{end}}} \log \left| X(t) \right|$, which we approximate by $Z(\omega_0, \zeta, \delta) = \max_{k \le K_\delta} \log \bigl( |X^{(\delta)}_k| \bigr)$. We view the mapping $x=(\omega_0, \zeta, \delta) \in \Rset_+^{3} \mapsto Z(\omega_0, \zeta, \delta)$ as a multi-fidelity stochastic simulator, where~$\delta$ controls the level of fidelity.
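The structure of this simulator can be sketched with a simple Euler-Maruyama discretization (a cruder stand-in for the exponential Euler scheme used in the paper; the noise intensity `sigma` is a normalization assumption):

```python
import math
import random

def max_log_displacement(omega0, zeta, dt, t_end=30.0, sigma=1.0, seed=0):
    """Euler-Maruyama approximation of the random damped oscillator
    x'' + 2*zeta*omega0*x' + omega0^2*x = W(t), started at rest, and of
    the output Z = max_k log |X_k|.  The time step dt plays the role of
    the fidelity level: smaller dt means higher fidelity at higher cost."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    max_abs = 0.0
    for _ in range(int(t_end / dt)):
        x, v = (x + dt * v,
                v + dt * (-2 * zeta * omega0 * v - omega0**2 * x)
                  + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        max_abs = max(max_abs, abs(x))
    return math.log(max_abs)

z = max_log_displacement(omega0=2.0, zeta=0.1, dt=0.01)
print(math.isfinite(z))  # True: the damped trajectory stays bounded
```

Repeated calls with the same inputs but different seeds produce different values of $Z$, which is precisely why the mapping is treated as a stochastic simulator.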
In this problem, the QoI is the function $Q: (\omega_0, \zeta) \mapsto \prob(Z(\omega_0, \zeta, \deltaref) > \zcrit)$, where $\deltaref = 0.01~\text{s}$ denotes the level of highest fidelity and $\zcrit = -3$ is a given critical threshold. The computational cost of $Z$ is an affine function of $1/\delta$: $C(\delta) = a/\delta + b$. After normalization to have $C(\deltaref) = 1$, the coefficients are $a = 0.0098$ and $b = 0.0208$. \begin{table} \begin{center} \begin{tabular}{l|*{10}{c}} Level $\delta$ & $1\,\text{s}$ & $0.51\,\text{s}$ & $1/3\,\text{s}$ & $0.25\,\text{s}$ & $0.2\,\text{s}$ & $1/6\,\text{s}$ & $0.1\,\text{s}$ & $0.05\,\text{s}$ & $0.02\,\text{s}$ & $0.01\,\text{s}$\\ \hline Cost$^{-1}$ & 32.7 & 24.8 & 19.9 & 16.7 & 14.3 & 12.6 & 8.4 & 4.6 & 2 & 1\\ \hline Initial DoE & 180 & 60 & 20 & 10 & 5 & 0 & 0 & 0 & 0 & 0\\ \end{tabular} \end{center} \caption{% Levels of fidelity considered in this example. The highest level of fidelity is $\delta = 0.01\,\text{s}$. } \label{tab:ex2_levels} \end{table} A good approximation of the output distributions is obtained if we assume $Z(\omega_0,\zeta,\delta) \mid \xi, \lambda \sim \N(\xi(x), \lambda(\delta))$, where the variance only depends on the fidelity level. This assumption makes it possible to write \begin{equation*} Q(\omega_0, \zeta) = \Phi\left( \frac{\xi(x) - \zcrit}{\sqrt{\lambda(\deltaref)}} \right). \end{equation*} The mean function~$\xi$ is modeled by the additive Gaussian process model~\EquModelTWY of Section~\ref{subsec:model:TWY}, where the variance~$\lambda$ is log-Gaussian as in Section~\ref{subsec:model:stoch} and the prior distributions for the hyper-parameters are set as in \citet{stroh2017integrating}. The posterior mean $\widehat Q_n = \esp_{n} (Q)$ is used to estimate~$Q$. In this example, we consider $S = 10$ levels, and the initial design is an NLHS on the first five levels. The different levels of fidelity, their costs, and the initial design are summarized in Table~\ref{tab:ex2_levels}.
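The cost model and its normalization can be checked against the Cost$^{-1}$ row of Table~\ref{tab:ex2_levels}; a small script using the stated coefficients:

```python
a, b = 0.0098, 0.0208  # coefficients given in the text

def cost(delta):
    """Normalized cost model C(delta) = a/delta + b."""
    return a / delta + b

# C at the reference level 0.01 s is ~1 (up to the rounding of a and b),
# and 1/C reproduces the Cost^{-1} row of the table:
for delta in [1.0, 0.1, 0.05, 0.02, 0.01]:
    print(delta, round(1.0 / cost(delta), 1))
```

For instance, $1/C(1) = 1/(0.0098 + 0.0208) \approx 32.7$, matching the first entry of the Cost$^{-1}$ row.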
The total cost of the initial design is $9.88$. The total simulation budget, taking into account the initial budget, is set to $20$. We also use a very high simulation budget to compute a reference value for $Q$, which will be used to assess the estimation error. \begin{figure} \begin{center} \psfrag{Cost}[tc][tc]{$c_n$} \psfrag{||p(x, tHF) - p*(x, tHF)||}[bc][bc]{$\MedErr_n$} \psfrag{Error-L2 on the probability function}[tc][tc]{} \includegraphics[width = 0.6\textwidth]{ex2_err_func} \psfrag{SUR|dt=0.167}[cl][cl]{\small SUR at $\delta = 1/6\,\text{s}$} \psfrag{SUR|dt=0.1}[cl][cl]{\small SUR at $\delta = 0.1\,\text{s}$} \psfrag{SUR|dt=0.05}[cl][cl]{\small SUR at $\delta = 0.05\,\text{s}$} \psfrag{SUR|dt=0.02}[cl][cl]{\small SUR at $\delta = 0.02\,\text{s}$} \psfrag{SUR|dt=0.01}[cl][cl]{\small SUR at $\delta = 0.01\,\text{s}$} \psfrag{MSUR}[cl][cl]{\small MR-SUR} \subfloat{ \includegraphics[width = 0.38\textwidth]{ex2_err_func_legend}} \addtocounter{subfigure}{-1} \end{center} \caption{Median estimation error as a function of the cost for the oscillator test case.} \label{fig:ex2} \end{figure} We compare the MR-SUR strategy, using the integrated posterior variance~\eqref{equ:Hn-L2-U} as the uncertainty measure, to five different SUR strategies based on the same uncertainty measure. All of them are started with the same initial design, and each SUR strategy corresponds to sampling on only one of the five highest-fidelity levels. The experiment is repeated 48 times with different initial designs, and the strategies are compared using the median estimation error as in Section~\ref{sec:one-dim-example}. The result is shown in Figure~\ref{fig:ex2}: the MR-SUR strategy is never far from the best strategy for any given budget, and actually outperforms all the other (fixed-level SUR) strategies as soon as~$c_n$ is larger than approximately~11.5.
Additional experiments, presented in the Supplementary Material, also show the benefit of using MR-SUR with batches of parallel evaluations on this example. \subsection{A fire safety example} \label{subsec:illustration_fds} In this section, we illustrate the MR-SUR strategy on a fire safety application. The goal is to assess the safety of a $20\,\text{m} \times 12\,\text{m} \times 16\,\text{m}$ parallelepiped-shaped storage facility, with two $2\,\text{m} \times 1\,\text{m}$ open doors and two $2\,\text{m} \times 2\,\text{m}$ open windows. The propagation of smoke and heat is simulated using Fire Dynamics Simulator \citep[FDS; see][]{mcgrattan2010fire}, a state-of-the-art CFD software for fire engineering, which solves the transport equations using finite difference methods. The fire is located at the center of the room, and burns polyurethane. To assess fire safety, the values of several physical quantities are compared against regulatory thresholds---in this illustration, we focus on only one of them, called visibility and hereafter denoted by~$V$. According to the \citet{iso2012life}, visibility must remain greater than $\zcrit = 5~\text{m}$ to ensure safety during an evacuation. \begin{table} \begin{center} \begin{tabular}{l|*{4}{c}} Level~$\delta$ & $50~\text{cm}$ & $33~\text{cm}$ & $25~\text{cm}$ & $20~\text{cm}$ \\ \hline Real cost & $69~\text{min}$ & $6~\text{h}$ & $20~\text{h}$ & $49~\text{h}$ \\ Normalized cost & 1/42 & 1/8 & 1/2.5 & 1 \\ \hline Initial design & 90 & 30 & 10 & 0 \\ \end{tabular} \end{center} \caption{The levels of fidelity on FDS.} \label{tab:fds_levels} \end{table} Our FDS-based simulator will be treated as a stochastic simulator% \footnote{% although it is actually, strictly speaking, a deterministic simulator, since the seed of the random number generator is fixed by the software; cf.
\cite{stroh2017assessing} for details.} with nine input variables: three environmental variables (external temperature~$T_{\rm ext}$, atmospheric pressure~$P_{\rm atm}$ and ambient temperature~$T_{\rm amb}$) denoted by $u_{\rm e}\in\Rset^{3}$, five ``scenario variables'' (fire growth rate~$\alpha$, fire area~$A_{\rm f}$, maximal heat release rate~$\dot{Q}_{\rm h}''$, total released energy~$q_{\rm f_d}$ and soot yield~$Y_{\rm soot}$) denoted by $u_{\rm s}\in\Rset^5$, and finally the size~$\delta$ of the spatial discretization mesh, which plays the role of a fidelity parameter. The reader is referred to \cite{stroh2017assessing} and~\citet{stroh:thesis} for more details on the application. In this example, our objective is to estimate the probability that $V$ becomes less than $\zcrit$ in a particular fire scenario, defined by $\alpha = 0.1057\,\text{kW}\cdot\text{s}^{-2}$, $A_{\rm f} = 14\,\text{m}^2$, $\dot{Q}_{\rm h}'' = 460\,\text{kW}\cdot\text{m}^{-2}$, $q_{\rm f_d} = 450\,\text{MJ}\cdot\text{m}^{-2}$, and $Y_{\rm soot} = 0.027\,\text{kg}\cdot\text{kg}^{-1}$. The environmental inputs~$u_{\rm e}$ are assumed random and integrated according to an environmental distribution~$\Fenv$, which is a trivariate normal distribution with mean $(10\,^{\circ}\text{C}, 100\,\text{kPa}, 22.5\,^{\circ}\text{C})$, variances equal to $(20/3\,^{\circ}\text{C},2/3\,\text{kPa}, 2.5\,^{\circ}\text{C})^2$, and a correlation coefficient of 0.8 between the temperatures. The QoI is $Q = \int_{\Uset_{\rm e}} p(u_{\rm e}, u_{\rm s}, \deltaref)\, \ddiff \Fenv(u_{\rm e})$, where $\deltaref = 20\,\text{cm}$ is the reference level and $p(u_{\rm e}, u_{\rm s}, \deltaref) = \prob\left(V < \zcrit\vert u_{\rm e}, u_{\rm s}, \deltaref\right)$. Four levels of fidelity will be considered for running simulations: the reference level $\deltaref = 20\,\text{cm}$, and three levels of lower fidelity ($\delta = 50\,\text{cm}$, $33\,\text{cm}$ and $25\,\text{cm}$).
Table~\ref{tab:fds_levels} shows the correspondence between levels and computation times. Four independent initial NLHS designs of size~$n = 130$, distributed across the first three levels of fidelity as shown in Table~\ref{tab:fds_levels}, are available from previous studies. The normalized cost of each initial design is 9.89 (i.e., 20 days). A reference value for $Q$ has also been obtained from 150 Monte Carlo simulations, distributed on the highest fidelity level using $\Fenv$. This reference value has a normalized cost of 150 (i.e., about ten months). We run the MR-SUR strategy starting from our four initial designs, using a supplementary budget of 24 for each run (about 48.7 days). The underlying Bayesian model is the same as in Section~\ref{subsec:illustration_dampedOscillator}, the QoI $Q$ is estimated using the posterior mean $\hat Q_n = \esp_{n}(Q)$, and the measure of uncertainty is the integrated posterior variance \begin{equation*} H_n = \int \var_n \left( p(u_{\rm e}, u_{\rm s}, \deltaref) \right)\, \ddiff \Fenv(u_{\rm e}), \end{equation*} which is a special case of~\eqref{equ:Hn-L2}. The corresponding SUR criterion is similar to~\eqref{eq:notre-SUR-crit}, with an integral over the environmental variables only. The result is shown in Figure~\ref{fig:ex3_res}. We can see in Figure~\ref{fig:ex3_res_estimVI} that the estimates are initially, in three out of four cases, incompatible with the Monte Carlo one, but tend to get closer to the reference value as more simulations are carried out using the MR-SUR strategy. Figure~\ref{fig:ex3_res_uncerVI} shows the measure of uncertainty as a function of the cost: the uncertainty is large at the beginning of the sequential design and rapidly becomes smaller, as expected, as the MR-SUR strategy proceeds. (Note that the cost of the whole design is approximately $9.9 + 24 = 33.9$, which is much cheaper than the cost of~150 of the Monte Carlo reference.)
\begin{figure} \begin{center} \psfrag{Cost}[tc][tc]{$c_n$} \psfrag{Phat}[bc][bc]{$\widehat Q_n$} \psfrag{sqrt(Hn)}[bc][bc]{$\sqrt{H_n}$} \psfrag{Estimations of the probability}[bc][bc]{} \psfrag{Measure of uncertainty}[bc][bc]{} \subfloat[Estimates of the probability]{ \includegraphics[width=0.45\textwidth]{ex3_estim_VI} \label{fig:ex3_res_estimVI}} \subfloat[Measures of uncertainty]{\includegraphics[width=0.45\textwidth]{ex3_uncer_VI} \label{fig:ex3_res_uncerVI}} \end{center} \caption{Results of four repetitions for the fire safety example. (a)~Estimated probability as a function of the cost. The horizontal lines correspond to the Monte Carlo reference (dash-dotted line: mean; dotted lines: two-standard-deviation interval). (b)~Square root of the measure of uncertainty (upper bound on the posterior standard deviation of the probability). The horizontal dotted line is the Monte Carlo standard deviation.} \label{fig:ex3_res} \end{figure} \section{Conclusion} \label{sec:conclusion} The main contribution of this article is to unify and extend several methods from the literature on Bayesian sequential design of experiments for multi-fidelity numerical simulators. The unification that we propose is cast in the framework of Stepwise Uncertainty Reduction (SUR) strategies: when the accuracy of computer simulations can be chosen by the user, a natural extension of SUR strategies is to consider sampling criteria built as the ratio between the reduction of uncertainty and the cost of a simulation. We call this approach Maximum Rate of Stepwise Uncertainty Reduction (MR-SUR). It can be applied to deterministic or stochastic simulators. Our numerical experiments show that the MR-SUR approach typically provides estimates which, for a given computational cost, are never much worse, and often better, than the best SUR strategy using a single level of fidelity. Several further directions could be considered in future work.
For instance, there is no explicit ingredient in MR-SUR strategies that tells the procedure to ``learn'' the model and, in particular, to learn the correlations between the levels of fidelity. It seems to us that this would be important, particularly when simulations are very expensive and the simulation budget is very limited, as in our fire safety application. Using a fully Bayesian approach would partially address this problem, as the uncertainty about the model would be propagated to the uncertainty about the QoI. Another important research direction would be to address parallel simulations. What would be a principled approach to resource allocation when several simulations with different accuracies and different costs can be conducted at the same time? \bibliographystyle{apalike}
\section{Introduction} In predictive modeling, a set of learning examples is used to induce a model that can be used to make accurate predictions for unseen examples. The examples are described with features and are associated with a target variable. The model uses the features to predict the value of the target variable. In most common tasks, there is only one target variable. If it is numeric, the task is called regression; if the target is discrete, the task is called classification. However, many real-life problems can be more naturally represented with more complex targets composed of multiple variables that can have additional structure or dependencies among them. Such problems are encountered in a variety of disciplines: life sciences (e.g., gene and protein function prediction), environmental sciences (e.g., soil functions, habitat modeling), text and image analysis (e.g., document classification, image annotation) and others. These problems are addressed with the task of \emph{structured output prediction} (SOP). In this work, we focus on three types of SOP tasks: \begin{itemize} \item \emph{Multi-target regression} (MTR) is a predictive modeling task where the target is a vector with numeric components/variables. \item \emph{Multi-label classification} (MLC) is a generalization of the standard classification where each example can be a member of multiple classes: the targets are subsets of a finite set of possible labels. \item \emph{Hierarchical multi-label classification} (HMLC) further extends the task of MLC by additionally considering a partial order on the set of possible labels, i.e., some labels are specialized cases of other labels. The partial order organizes the labels into a hierarchy that can be represented with a directed acyclic graph. 
\end{itemize} Predictive clustering trees are a variant of decision trees that have been successfully applied to various predictive modeling tasks, including structured output prediction and semi-supervised learning. However, they scale poorly to problems with many target variables and cannot take advantage of sparsity in the data. Both of these properties are quite common, especially in structured output prediction problems. For example, in (H)MLC problems there are often hundreds or thousands of possible labels (many target variables). Additionally, each example is typically labeled with only a handful of labels, which leads to sparse target matrices. Data can also be sparse on the input side, e.g., compounds described with binary fingerprints, text described with bag-of-words representations, etc. In this paper, we propose two methods for learning oblique predictive clustering trees, designed to improve the efficiency of learning on high-dimensional and/or sparse data. The first variant of oblique predictive clustering trees is based on SVMs and the second on gradient descent. We experimentally evaluate the proposed methods and show that they achieve state-of-the-art performance and are learned orders of magnitude faster than axis-parallel trees. We also perform a parameter sensitivity analysis and demonstrate that meaningful feature importance scores can be extracted from the learned models. Initial experiments on classification with the gradient descent variant were presented in \cite{ismis:spyct-lncs}. The method has since been revised and improved, with better support for sparse data and regularization to reduce model sizes. In addition to the initial study, this paper proposes a new method (the SVM-based one) and more extensive experiments that include single-target classification and regression, as well as the three above-mentioned SOP tasks (MTR, MLC, and HMLC). The remainder of the paper is organized as follows.
In Section~\ref{sec:background}, we present the background related to this paper. In Section~\ref{sec:methods}, we describe and analyze our proposed methods in detail. Next, we present our experimental setup (Section~\ref{sec:experimental}) and results from the benchmarking experiments (Section~\ref{sec:results}). We then illustrate the meaningfulness of the feature importances extracted from our models (Section~\ref{sec:fimpresults}) and present the results of the parameter sensitivity analysis (Section~\ref{sec:parameters}). In Section~\ref{sec:conclusion}, we conclude the paper with a summary of the main findings. \section{Background} \label{sec:background} Research on tree-based predictive models was popularized in the 1980s \cite{Breiman84:CART}. Their use is widespread: they can be used for classification and regression tasks and can handle both numeric and nominal features. A single tree can be inspected and its predictions interpreted easily. When used in ensembles \citep{Breiman96:BAG,Breiman01:RF}, they can achieve state-of-the-art performance. The performance boost comes at the cost of reduced interpretability; hence, different feature ranking approaches based on trees and tree ensembles have also been developed \citep{Breiman01:RF,Petkovic19:FIMP-MTR}. Predictive clustering trees \cite{Blockeel98:PCT,Blockeel02:PCT} generalize standard decision/regression trees by differentiating between three types of attributes: features, clustering attributes, and targets. Features are used to divide the examples; these are the attributes encountered in the split nodes. Clustering attributes are used to calculate the heuristic which guides the search for the best split at a given node. Targets are the attributes predicted in the leaves. The algorithm for learning PCTs follows the top-down induction algorithm described by \cite{Breiman84:CART}, and is presented in Algorithm~\ref{alg:pct}.
\begin{algorithm}[tb] \caption{Learning a PCT: The inputs are matrices of features $X \in \mathcal{R}^{N \times D}$, targets $Y \in \mathcal{R}^{N \times T}$ and clustering attributes $Z \in \mathcal{R}^{N \times K}$.} \begin{algorithmic}[1] \Procedure{grow\_tree}{X, Y, Z} \State test = best\_test(X, Z) \If{acceptable(test)} \State rows1, rows2 = split(X, test) \State left\_subtree = grow\_tree(X[rows1], Y[rows1], Z[rows1]) \State right\_subtree = grow\_tree(X[rows2], Y[rows2], Z[rows2]) \State {\bf return} Node(test, left\_subtree, right\_subtree) \Else \State {\bf return} Leaf(prototype(Y)) \EndIf \EndProcedure \Procedure{best\_test}{X, Z} \State best = None \For{$d = 1, \dots, D$} \For{test $\in$ possible\_tests(X, d)} \If{score(test, X, Z) $>$ score(best, X, Z)} \State best = test \EndIf \EndFor \EndFor \State {\bf return} best \EndProcedure \Procedure{score}{test, X, Z} \State rows1, rows2 = split(X, test) \State n1 = num\_rows(rows1) \State n2 = num\_rows(rows2) \State n = n1 + n2 \State {\bf return} $n \cdot$ impurity(Z) - $n1 \cdot$ impurity(Z[rows1]) - $n2 \cdot$ impurity(Z[rows2]) \EndProcedure \end{algorithmic} \label{alg:pct} \end{algorithm} The algorithm takes as input matrices of features ($X$), targets ($Y$), and clustering attributes ($Z$). It then goes through all features and searches for a test that maximizes the heuristic score. The heuristic that is used to evaluate the tests is the reduction of impurity caused by splitting the data according to a test. It is calculated on the clustering attributes. If no acceptable split is found (e.g., no test reduces the variance significantly, or the number of examples in a node is below a user-specified threshold), then the algorithm creates a leaf and computes the prototype of the targets of the instances that were sorted to the leaf.
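To make the procedure concrete, the \textsc{score} and \textsc{best\_test} routines of Algorithm~\ref{alg:pct} can be sketched in Python for numeric clustering attributes, with variance as the impurity function. This is an illustrative sketch, not the actual implementation; the names mirror the pseudocode, and only threshold tests of the form $X_{.d} \geq t$ are considered:

```python
import numpy as np

def impurity(Z):
    # Total variance over all clustering attributes (columns of Z).
    return Z.var(axis=0).sum()

def score(mask, Z):
    # Impurity reduction achieved by splitting Z according to a boolean mask,
    # as in the SCORE procedure: n*imp(Z) - n1*imp(Z1) - n2*imp(Z2).
    n, n1 = len(Z), mask.sum()
    return n * impurity(Z) - n1 * impurity(Z[mask]) - (n - n1) * impurity(Z[~mask])

def best_test(X, Z):
    # Exhaustive search over axis-parallel threshold tests "X[:, d] >= t".
    best, best_score = None, -np.inf
    for d in range(X.shape[1]):
        for t in np.unique(X[:, d]):
            mask = X[:, d] >= t
            if 0 < mask.sum() < len(X) and score(mask, Z) > best_score:
                best, best_score = (d, t), score(mask, Z)
    return best
```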
The selection of the impurity and prototype functions depends on the types of clustering attributes and targets (e.g., variance and mean for regression, entropy and majority class for classification). In theory, clustering attributes can be completely independent of the features and the targets. In practice, the ultimate goal is to make accurate predictions for the targets, and the splitting heuristic should reflect that. The most basic (and common) approach is to use the targets also as clustering attributes. For example, in a classification problem, doing so makes PCTs equivalent to standard decision trees. But the attribute differentiation gives PCTs a lot of flexibility. They have been used for predicting various structured outputs \cite{Kocev13:SOP}. In addition to targets, we can also include features among the clustering attributes. This makes leaves homogeneous also in the input space, which is helpful if the targets are noisy, and can also be used for semi-supervised learning \cite{Levatic17:SSL-CL,Levatic18:SSL-MTR}. Embeddings of the targets have also been used as clustering attributes in order to reduce the time complexity of tree learning \cite{ismis:hmlc-lncs}. \begin{figure}[bt!] \centering \begin{tabular}{cc} \includegraphics[width=0.41\textwidth]{img/split_image.png} & \includegraphics[width=0.52\textwidth]{img/both_trees.png} \\ a) & b) \end{tabular} \caption{A toy dataset (a) with drawn decision boundaries learned by axis-parallel (red, dashed) and oblique (blue, solid) decision trees (b).} \label{fig:demo} \end{figure} Most of the research focus, including all of the above-mentioned work, is on \emph{axis-parallel} trees. The tests in axis-parallel trees only use single features and the splits are axis-parallel hyperplanes in the input space. Alternatively, \emph{oblique} trees (also called multivariate trees) use linear combinations of features in the tests, which allow for splits corresponding to an arbitrary hyperplane in the input space. 
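The difference between the two kinds of tests can be shown with a small snippet (the feature values, weights, and thresholds below are made up for illustration):

```python
import numpy as np

x = np.array([0.8, 0.3])  # a single example with D = 2 features

# Axis-parallel test: compares a single feature to a threshold.
axis_parallel = x[0] >= 0.5

# Oblique test: compares a linear combination of all features to a threshold,
# i.e., checks on which side of the hyperplane w.x + b = 0 the example lies.
w, b = np.array([1.0, -2.0]), 0.1
oblique = x @ w + b >= 0
```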
They have received much less attention than axis-parallel trees, possibly due to the increased complexity of optimizing the split. Finding a hyperplane that splits a set of examples with binary labels in a way that minimizes the number of misclassified examples on both sides of the hyperplane is an NP-hard problem \cite{Heath93:NP}. But the increased flexibility of splits can lead to models that fit the data much better, as illustrated in Figure~\ref{fig:demo}. The initial proposals \citep{Breiman84:CART,Murthy94:OC1} for the induction of oblique trees relied on local search optimization and scale poorly to contemporary problems with thousands of examples and/or features. Weighted Oblique Decision Trees \cite{Yang19:WODT} learn splits by optimizing weighted information entropy and are a relatively efficient method for binary and multi-class classification. Oblique random forests \cite{Menze11:ORF} follow the standard Random Forest paradigm of learning trees on different bootstrapped samples of the training set and searching for each split in a different subset of features. Split learning is performed via ridge regression, which allows for efficient optimization of the hyperplanes, but their approach is limited to binary classification problems. Additionally, FastXML \cite{Prabhu14:FastXML} and PfastreXML \cite{Jain16:PFAST} present methods based on oblique trees for multi-label classification which work efficiently even with thousands of features and labels. When learning a split, these methods optimize a complex objective function that combines L1 regularization, logarithmic loss and the normalized Discounted Cumulative Gain (nDCG) of examples on each side of the hyperplane. The criterion function optimization is performed in two steps. First, the examples are partitioned in a way that maximizes the nDCG in each partition; this partitioning is improved iteratively. Then, a hyperplane is learned that approximates this partitioning as well as it can.
This is done by using the GLMNET method to solve a logistic regression problem. Existing oblique tree methods are specialized for specific predictive tasks and/or are computationally inefficient. In this paper, we present a method that combines the flexibility of PCTs with oblique splits, is applicable to a variety of predictive modeling tasks, and is computationally efficient. \section{Method description} \label{sec:methods} In this section, we describe oblique predictive clustering trees. We also follow the top-down induction algorithm presented in Algorithm~\ref{alg:pct}, but we modify the split searching procedure (the BEST\_TEST function) to learn oblique tests. We propose two modifications of the BEST\_TEST function based on SVMs and gradient descent. We start with the introduction of the notation used throughout the manuscript. Let $X \in \mathcal{R}^{N \times D}$ be the matrix containing the $D$ features of the $N$ examples in the learning set, $Y \in \mathcal{R}^{N \times T}$ be the matrix containing the $T$ targets associated with the $N$ examples in the learning set, and $Z \in \mathcal{R}^{N \times K}$ be the matrix containing the $K$ clustering attributes associated with the $N$ examples in the learning set. This formulation encapsulates several predictive modeling tasks. First, single-target regression ($T=1$) and multi-target regression ($T > 1$) fit this description easily. Second, for classification problems, the $Y$ matrix contains binary values: the value $y_{ij} = 1$ if the $i$-th example has the $j$-th label, otherwise $y_{ij} = 0$. Third, for binary classification problems, the number of targets is $T=1$. Next, for multi-class classification with $k$ possible classes, the class information can be encoded via one-hot encoding ($T=k$). Each row in $Y$ has exactly one element set to 1, and the rest are set to 0. Note that, similarly to the targets, nominal features across all tasks are supported via one-hot encoding.
Finally, label sets in (hierarchical) multi-label classification with $k$ possible labels can be encoded with binary vectors ($T=k$), but here each row can have multiple elements set to 1. Below, $M_{i.}$ will refer to the $i$-th row and $M_{.i}$ to the $i$-th column of the matrix $M$. Let $\texttt{colmean}(M)$ be a vector of column means of the matrix M. We wish to learn a vector of weights $w \in \mathcal{R}^D$ and a bias term $b \in \mathcal{R}$ that define a hyperplane, which splits the learning examples into two subsets. We will refer to the subset where $X_{i.} \cdot w+b \geq 0$ as the positive subset, and the subset where $X_{i.} \cdot w+b < 0$ as the negative subset. We want the examples in the same subset to have similar values of clustering attributes in $Z$. Prior to learning each split, features and clustering attributes are standardized to mean 0 and standard deviation 1. After learning the split, the tree construction continues recursively on the positive and negative subsets. As with standard PCTs, the learning stops when the learned split is not acceptable, at which point a leaf node is made where the prototype of the targets is stored. As prototypes, we use target means over the examples in the leaf, i.e., \texttt{colmean}(Y). This prototype can be used as-is to make predictions (regression problems, pseudo-probabilities for classification problems), or it can be used to predict the majority class, or all labels with frequencies above a specified threshold (multi-label classification). When making a prediction for a new example with features $x \in \mathcal{R}^D$, we calculate the value $x \cdot w + b$ in the root node. If it is non-negative, we repeat the process in the positive subtree, otherwise in the negative subtree. This is done until a leaf node is reached, where a prediction is made. \subsection{SVM-based split learning} In the SVM-based split learning, we perform hyperplane optimization in two steps. 
First, we group the examples into two subsets in such a way that the similarity of clustering attributes in each subset is maximized, regardless of the values of the features. This is calculated using only the $Z$ matrix, while $X$ is ignored. For example, in binary classification the subset partitioning is clear: each subset should contain examples from one class. For other tasks, the subsets are not obvious. To obtain them, we use k-means clustering \cite{kmeans} to cluster the rows of $Z$ into two clusters. Let the vector $c \in \{-1, 1\}^N$ encode the result of the clustering, with the two clusters mapped to $-1$ and $1$. Next, we look for a hyperplane in the input space that approximates this partitioning. In essence, we convert the split hyperplane optimization problem into a binary classification problem. We solve the following problem: $$\underset{w, b}{\min} \;\; ||w||_1 + C \sum_{i=1}^N \max(0, 1 - c_i (X_{i.} \cdot w + b))^2,$$ where the parameter $C\in \mathcal{R}$ determines the strength of regularization. To solve this problem, we use the LIBLINEAR library \cite{liblinear}, specifically its $L1$-regularized $L2$-loss support vector classification. \subsection{Gradient-descent-based split learning} In the gradient-descent-based split learning, we directly search for a hyperplane split that minimizes the impurity on both sides of the hyperplane, unlike in the SVM-based split learning, where the split hyperplane is optimized indirectly, by first determining an ideal split and then trying to approximate it with a hyperplane. First, we calculate the vector $s = \sigma(Xw+b) \in [0, 1]^N$, where the sigmoid function $\sigma$ is applied component-wise. The vector $s$ contains values from the $[0, 1]$ interval, and we treat it as a fuzzy membership indicator. Specifically, the value $s_i$ tells us how much the $i$-th example belongs to the positive subset, whereas the value $1-s_i$ tells us how much it belongs to the negative subset.
To measure the impurity of the positive subset, we calculate the weighted variance of each column (attribute) in $Z$, and we weigh each row (example) with its corresponding weight in $s$. To measure the impurity of the negative subset, we calculate the weighted variances with weights from $1-s$. The weighted variance of a vector $v \in \mathcal{R}^n$ with weights $a \in \mathcal{R}^n$ is defined as $$\texttt{var}(v, a) = \frac{\sum^n_{i=1} a_i(v_i - \texttt{mean}(v, a))^2}{A} = \texttt{mean}(v^2, a) - \texttt{mean}(v, a)^2,$$ where $A = \sum^n_{i=1} a_i$ is the sum of weights and $\texttt{mean}(v, a) = \frac{1}{A}\sum^n_{i=1} a_i v_i$ is the weighted mean of $v$. The impurity of a subset is the weighted sum of weighted variances over all the clustering attributes. The impurity of the positive subset is $\texttt{imp}(Z, p, s) = \sum^K_{j=1} p_j \texttt{var}(Z_{.j}, s)$, and similarly $\texttt{imp}(Z, p, 1-s)$ is the impurity of the negative subset. The weights $p \in \mathcal{R}^K$ enable us to give different priorities to different clustering attributes. The final split fitness function is $$ f(w, b) = S \cdot \texttt{imp}(Z, p, s) + (N-S) \cdot \texttt{imp}(Z, p, 1-s), $$ where $s = \sigma(Xw+b)$ and $S = \sum^N_{i=1} s_i$. The terms $S$ and $N-S$ represent the sizes of the positive and negative subsets, and are added to guide the split-search procedure towards balanced splits. Finally, to obtain the split hyperplane, the following optimization problem is solved: $$\underset{w, b}{\min} \;\; ||w||_\frac{1}{2} + C f(w, b),$$ where $C$ again controls the strength of regularization and $||w||_\frac{1}{2} = (\sum_{i=1}^D \sqrt{|w_i|})^2$ is the L$\frac{1}{2}$ norm. We selected L$\frac{1}{2}$ regularization because it induces weight sparsity more aggressively than L1, which helps reduce the model size. For examples that are not close to the hyperplane, $s_i$ is close to 0 or 1. The weighted variances are therefore approximations of the variances of the subsets that the hyperplane produces.
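The weighted variance, impurity, and split fitness defined above can be sketched in a few lines of numpy (an illustration only; the actual package implements these operations with support for sparse matrices):

```python
import numpy as np

def w_mean(v, a):
    # Weighted mean of v with weights a.
    return (a * v).sum() / a.sum()

def w_var(v, a):
    # Weighted variance: mean(v^2, a) - mean(v, a)^2.
    return w_mean(v * v, a) - w_mean(v, a) ** 2

def imp(Z, p, s):
    # Impurity of a (fuzzy) subset: weighted sum of weighted column variances.
    return sum(p[j] * w_var(Z[:, j], s) for j in range(Z.shape[1]))

def fitness(w, b, X, Z, p):
    # Split fitness f(w, b): size-weighted impurities of both fuzzy subsets.
    s = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid membership indicator
    S = s.sum()
    return S * imp(Z, p, s) + (len(X) - S) * imp(Z, p, 1 - s)
```

A hyperplane that cleanly separates examples with different clustering-attribute values drives the fitness towards zero, which is exactly what the optimizer seeks.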
This formulation makes the objective function differentiable and enables us to use the efficient Adam \cite{adam} gradient descent optimization method. The weight vector $w$ is initialized randomly, and $b$ is then set such that the examples are initially split in half. \subsection{Time complexity analysis} \label{sec:time_complexity} In this section, we analyze the time complexity of learning oblique predictive clustering trees and compare it to the time complexity of learning standard PCTs. We focus on the cost of learning a split since this is the only significant difference between the two methods. Let us start with the SVM variant. The first step is to perform $2$-means clustering on the clustering data $Z \in \mathcal{R}^{N \times K}$, with time complexity $O(N K I_c)$, where $I_c$ is the number of clustering iterations we perform. Learning the linear SVM to differentiate the clusters costs $O(N D I_o)$, where $I_o$ is the number of optimization iterations. The total cost is then $O(N(K I_c + D I_o))$. For the gradient descent variant, the main operation is the gradient calculation. Its most expensive part is the multiplication of the matrices $X$ and $Z$ with vectors, which costs $O(ND)$ and $O(NK)$, respectively. The total cost of learning a split is then $O(N I_o (D+K))$, where $I_o$ is again the number of optimization iterations. The time complexity of learning a split in standard PCTs is $O(DN \log N + NDK)$ \cite{Kocev13:SOP}. The important difference we can notice is that standard PCTs scale with $DK$, whereas both proposed variants of oblique PCTs scale with $D+K$. This difference is very noticeable when solving problems with many clustering variables, e.g., (hierarchical) multi-label classification and semi-supervised learning. An additional benefit of our approach can be obtained when dealing with sparse data. If the features and/or the clustering data are sparse, both variants can exploit the sparsity by performing operations with sparse matrices.
This reduces the time complexity to $O(N(\hat{K} I_c + \hat{D} I_o))$ for the SVM variant and $O(N I_o (\hat{D}+\hat{K}))$ for the gradient descent variant, where $\hat{D} \ll D$ and $\hat{K} \ll K$ are the average numbers of non-zero elements in each row of the matrices $X$ and $Z$. \subsection{Ensembles of oblique predictive clustering trees} Single decision trees are mainly used when we wish to visually inspect and interpret the model. To achieve state-of-the-art performance, trees have to be used in ensembles. Oblique trees are inherently more difficult to visually interpret, because of the linear combinations in the split nodes. Also, because the splits are more complex, they can very easily overfit the data in the deeper nodes. For these reasons, we believe the most natural use of oblique PCTs is by combining them in ensembles. To construct bagging ensembles \cite{Breiman96:BAG}, we simply construct each oblique PCT on a different bootstrapped sample of the learning set. When making predictions, we average the prototypes predicted by each tree in the ensemble to get the final prediction. We can also build random forest ensembles \cite{Breiman01:RF} of oblique PCTs. In addition to bootstrapping, each split hyperplane is only learned on a random subset of features. After the hyperplane weights are learned, the $w_i$ for features that were not in the selected subset are set to $0$. \subsection{Feature importance} \label{sec:fimp} Even though ensembles of oblique PCTs are hard to interpret directly, we can still gain insight into how the models make their decisions by calculating feature importances, much like it is done with ensembles of axis-parallel trees.
The feature importance scores of a single oblique PCT are calculated as follows: $$ imp(T) = \sum_{s \in T} \frac{s_n}{N} \frac{|s_w|}{\Vert s_w \Vert_1}, $$ where $s$ iterates over the split nodes in tree $T$, $s_w$ is the weight vector defining the split hyperplane ($|s_w|$ denotes its component-wise absolute values), $s_n$ is the number of learning examples that were present in the node and $N$ is the total number of learning examples. The contributions of each node to the final feature importance scores are weighted according to the number of examples that were used to learn the split. This puts more emphasis on weights higher in the tree, which affect more examples. To get the feature importance scores of an ensemble, we simply average the feature importances of the individual trees in the ensemble. \subsection{Implementation} We implemented the proposed oblique predictive clustering trees in the Python package \emph{spyct}. It is freely licensed and available for use and download at \url{https://gitlab.com/TStepi/spyct}. The package includes both the SVM and the gradient descent variants, single trees, bagging, and random forest ensembles. In ensembles, trees can be built in parallel. We also provide a number of pre-pruning options: maximum tree depth, minimum number of examples required to perform a split, and the minimum reduction of impurity required for a split to be accepted. By default, tree depth is not limited, a split is always attempted if more than 1 example is still present in a node, and it is accepted if the impurity is reduced by at least 5\% in at least one of the subsets. Additionally, the splitting stops if one of the subsets is empty (the hyperplane does not split the data). We can control the maximum number of clustering iterations (SVM variant, 10 by default), the learning rate (gradient variant, default 0.1), and the maximum number of hyperplane optimization iterations (both variants, default 100). We also make use of early stopping in both clustering and hyperplane optimization, if the process converges.
Strength of regularization (both variants, $C=10$ by default) and other parameters of the Adam optimizer (gradient variant, default values from PyTorch \footnote{\url{https://pytorch.org/docs/stable/optim.html}} library) are also configurable. \section{Experimental setting} \label{sec:experimental} In this section, we present a comprehensive experimental study designed to evaluate the proposed oblique PCTs and compare them to standard PCTs and other baseline methods. We evaluated the methods on benchmark datasets from the classification spectrum: binary (BIN), multi-class (MCC), multi-label (MLC), and hierarchical multi-label classification (HMLC), as well as from the regression spectrum: single-target (STR) and multi-target regression (MTR). We first present the competing methods, and then the benchmark data and evaluation strategy. \subsection{Competing methods} The main baselines for comparison are standard PCTs, which can be used for all 6 predictive modeling tasks considered. We trained single trees, bagging ensembles of PCTs, and random forests of PCTs. Next, we compared to existing oblique tree methods in single tree settings, as they were proposed by the authors. This includes CART-LC \cite{Breiman84:CART}, OC1 \cite{Murthy94:OC1} and WODT \cite{Yang19:WODT}. Both CART-LC and OC1 are only applicable to BIN problems, whereas WODT can also be used for MCC. For ensembles of oblique trees, we considered oblique random forests (ORF) \cite{Menze11:ORF} and the FastXML method \cite{Prabhu14:FastXML} as competing methods. The ORF method is only applicable to BIN problems. The \textsc{FastXML} method is specialized for the MLC task and optimized to work with sparse data. We will also use it as a baseline for the HMLC datasets, by discarding the hierarchy and treating the problem as a flat MLC task. Additionally, the comparison includes \textsc{LightGBM} gradient boosted tree ensembles \cite{lightGBM}. They can be used for BIN, MCC, and STR tasks. 
Finally, to include a non-tree-based baseline, we used linear SVMs. They can be directly applied to BIN and STR tasks. We will also use them as baselines for other tasks, by learning a separate SVM for each label/target (one-vs-all approach in MCC, binary relevance in MLC and HMLC, local approach to MTR). We decided to use the linear kernel to keep the learning time manageable and because it is closest to the nature of the splits used in the oblique trees. Table~\ref{tab:baselines} sums up the baseline methods and provides links to the implementations we used in the experiments. We will refer to the proposed methods as \textsc{spyct-svm} and \textsc{spyct-grad}, for the SVM and gradient descent variant, respectively. For standard and oblique PCTs, we will prepend \textsc{BAG} or \textsc{RF} to mark bagging and random forest ensembles, respectively (e.g., \textsc{BAG-spyct-grad} denotes bagging ensembles of gradient descent variant oblique PCTs). \begin{table}[bt!] \centering \begin{tabular}{lrr} Method & Tasks & Implementation \\ \hline \textsc{spyct-svm} & all & \footnotesize{\url{gitlab.com/TStepi/spyct}}\\ \textsc{spyct-grad} & all & \footnotesize{\url{gitlab.com/TStepi/spyct}}\\ PCT & all & \footnotesize{\url{http://source.ijs.si/ktclus/clus-public/}}\\ SVM & all & \footnotesize{\url{https://scikit-learn.org/stable/}}\\ CART-LC & BIN & \footnotesize{\url{github.com/AndriyMulyar/sklearn-oblique-tree}}\\ OC1 & BIN & \footnotesize{\url{github.com/AndriyMulyar/sklearn-oblique-tree}} \\ WODT & BIN, MCC & \footnotesize{\url{www.lamda.nju.edu.cn/yangbb}}\\ ORF & BIN & \footnotesize{\url{rdrr.io/cran/obliqueRF/man/obliqueRF.html}}\\ \textsc{FastXML} & MLC, HMLC & \footnotesize{\url{manikvarma.org/code/FastXML/download.html}}\\ \textsc{LightGBM} & BIN, MCC, STR & \footnotesize{\url{lightgbm.readthedocs.io}}\\ \end{tabular} \caption{Methods included in the benchmarking comparison.} \label{tab:baselines} \end{table} \subsection{Evaluation} \begin{table}[bt!] 
\centering \begin{tabular}{l r r} Dataset & N & D \\ \hline \emph{ailerons} \cite{openml} & 13750 & 40 \\ \emph{cpmp-2015} \cite{openml} & 2108 & 23 \\ \emph{cpu\_small} \cite{openml} & 8192 & 12 \\ \emph{elevators} \cite{openml} & 16599 & 19 \\ \emph{house\_8L} \cite{openml} & 22784 & 8 \\ \end{tabular} \; \; \begin{tabular}{l r r} Dataset & N & D \\ \hline \emph{puma8NH} \cite{openml} & 8192 & 8 \\ \emph{qsar-234} \cite{openml} & 2145 & 1024 \\ \emph{satellite\_image} \cite{openml} & 6435 & 36 \\ \emph{space\_ga} \cite{openml} & 3107 & 6 \\ \emph{triazines} \cite{openml} & 186 & 60 \\ \end{tabular} \caption{Properties of the benchmark STR datasets. Columns show the number of examples (N) and the number of features (D).} \label{tab:datasets_str} \end{table} \begin{table}[bt!] \centering \begin{tabular}{l r r r} Dataset & N & D & T \\ \hline \emph{atp1d} \cite{mulan} & 337 & 411 & 6 \\ \emph{edm} \cite{mulan} & 154 & 16 & 2 \\ \emph{enb} \cite{mulan} & 768 & 8 & 2 \\ \emph{jura} \cite{mulan} & 359 & 15 & 3 \\ \emph{oes10} \cite{mulan} & 403 & 298 & 16 \\ \end{tabular} \; \; \begin{tabular}{l r r r} Dataset & N & D & T \\ \hline \emph{oes97} \cite{mulan} & 334 & 263 & 16 \\ \emph{rf1} \cite{mulan} & 9125 & 64 & 8 \\ \emph{rf2} \cite{mulan} & 9125 & 576 & 8 \\ \emph{scm1d} \cite{mulan} & 9803 & 280 & 16 \\ \emph{slump} \cite{mulan} & 103 & 7 & 3 \\ \end{tabular} \caption{Properties of the benchmark MTR datasets. Columns show the number of examples (N), the number of features (D) and the number of targets (T).} \label{tab:datasets_mtr} \end{table} \begin{table}[bt!]
\centering \begin{tabular}{l r r} Dataset & N & D \\ \hline \emph{banknote} \cite{openml} & 1372 & 4 \\ \emph{bioresponse} \cite{openml} & 3751 & 1776 \\ \emph{credit-approval} \cite{openml} & 690 & 15 \\ \emph{credit-g} \cite{openml} & 1000 & 20 \\ \emph{diabetes} \cite{openml} & 768 & 8 \\ \end{tabular} \; \; \begin{tabular}{l r r} Dataset & N & D \\ \hline \emph{musk} \cite{openml} & 6598 & 166 \\ \emph{OVA\_Breast} \cite{openml} & 1545 & 10935 \\ \emph{OVA\_Lung} \cite{openml} & 1545 & 10935 \\ \emph{spambase} \cite{openml} & 4601 & 57 \\ \emph{speeddating} \cite{openml} & 8378 & 120 \\ \end{tabular} \caption{Properties of the benchmark BIN datasets. Columns show the number of examples (N) and the number of features (D).} \label{tab:datasets_bin} \end{table} \begin{table}[bt!] \centering \begin{tabular}{l r r r} Dataset & N & D & T \\ \hline \emph{amazon-reviews} \cite{openml} & 1500 & 10000 & 50 \\ \emph{balance} \cite{openml} & 625 & 4 & 3 \\ \emph{diabetes130us} \cite{openml} & 101766 & 47 & 3 \\ \emph{gas-drift} \cite{openml} & 13910 & 128 & 6 \\ \emph{hepatitisC} \cite{openml} & 283 & 54621 & 3 \\ \end{tabular} \; \; \begin{tabular}{l r r r} Dataset & N & D & T \\ \hline \emph{isolet} \cite{openml} & 7797 & 617 & 26 \\ \emph{mfeat-pixel} \cite{openml} & 2000 & 240 & 10 \\ \emph{micro-mass} \cite{openml} & 571 & 1300 & 20 \\ \emph{vehicle} \cite{openml} & 846 & 18 & 4 \\ \emph{wine-white} \cite{openml} & 4898 & 11 & 7 \\ \end{tabular} \caption{Properties of the benchmark MCC datasets. Columns show the number of examples (N), the number of features (D) and the number of classes (T).} \label{tab:datasets_mcc} \end{table} \begin{table}[bt!] 
\centering \begin{tabular}{l r r r r} Dataset & N & D & T & Sparse \\ \hline \emph{bibtex} \cite{mulan} & 7395 & 1836 & 159 & D, T \\ \emph{bookmarks} \cite{mulan} & 87856 & 2150 & 208 & D, T \\ \emph{CAL500} \cite{mulan} & 502 & 68 & 174 & T \\ \emph{corel5k} \cite{mulan} & 5000 & 499 & 374 & D, T \\ \emph{emotions} \cite{mulan} & 593 & 72 & 6 & \\ \emph{eurlex-eurovoc} \cite{mulan} & 19348 & 5000 & 3993 & D, T \\ \emph{flags} \cite{mulan} & 194 & 19 & 7 & \\ \emph{rcv1subset1} \cite{mulan} & 6000 & 47236 & 101 & D, T \\ \emph{scene} \cite{mulan} & 2407 & 294 & 6 & \\ \emph{tmc2007} \cite{mulan} & 28596 & 49060 & 22 & D \\ \end{tabular} \caption{Properties of the benchmark MLC datasets. Columns show the number of examples (N), the number of features (D), the number of targets/labels (T), and whether the input or output space is sparse.} \label{tab:datasets_mlc} \end{table} \begin{table}[bt!] \centering \begin{tabular}{l r r r r} Dataset & N & D & T & Sparse \\ \hline \emph{enron} \cite{kocev_phd} & 1648 & 1001 & 56 & D, T \\ \emph{imclef07d} \cite{kocev_phd} & 11006 & 80 & 46 & T \\ \emph{reuters} \cite{kocev_phd} & 6000 & 47235 & 102 & D, T \\ \emph{wipo} \cite{kocev_phd} & 1710 & 74435 & 188 & D, T \\ \emph{yeast\_GO} \cite{schietgat} & 3465 & 5930 & 133 & D, T \\ \emph{yeast\_spo\_FUN} \cite{schietgat} & 3711 & 80 & 594 & T \\ \emph{yeast\_expr\_FUN} \cite{schietgat} & 3788 & 551 & 594 & T \\ \emph{ara\_exprindiv\_FUN} \cite{schietgat} & 3496 & 1251 & 261 & T \\ \emph{yeast\_gasch1\_FUN} \cite{schietgat} & 3773 & 173 & 594 & T \\ \emph{ara\_interpro\_GO} \cite{schietgat} & 11763 & 2815 & 630 & D, T \\ \end{tabular} \caption{Properties of the benchmark HMLC datasets. Columns show the number of examples (N), the number of features (D), the number of targets/labels (T), and whether the input or output space is sparse.} \label{tab:datasets_hmlc} \end{table} We evaluated the methods on 60 benchmarking datasets, 10 for each of the 6 tasks. 
Their details are presented in Tables~\ref{tab:datasets_str}-\ref{tab:datasets_hmlc}. If the input or output data matrix had fewer than 10\% of nonzero values, it was represented in a sparse format (marked in the tables, where applicable). To measure the predictive performance of the methods on STR and MTR datasets, we used the \emph{coefficient of determination} as the performance measure $$R^2(y, \hat{y}) = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},$$ where $y$ is the vector of true target values, $\bar{y}$ is their mean, and $\hat{y}$ is the vector of predicted values. For MTR problems, the mean of the $R^2$ scores per target was calculated. For BIN and MCC tasks, we used the \emph{F1 score}, macro-averaged in the MCC case. Methods solving MLC and HMLC tasks typically return a score for each label and each example, with a higher score meaning that the example is more likely to have that label. Let $y \in \{0, 1\}^{n \times l}$ be the matrix of label indicators and $\hat{y} \in \mathcal{R}^{n \times l}$ the matrix of label scores returned by a method. We measured the performance of the methods with the weighted label ranking average precision $$LRAP(y, \hat{y}) = \frac{1}{n} \sum_{i=0}^{n-1} \sum_{j: y_{ij}=1} \frac{w_j}{W_i} \frac{L_{ij}}{R_{ij}},$$ where $L_{ij} = |\{ k: y_{ik} = 1 \wedge \hat{y}_{ik} \geq \hat{y}_{ij} \}|$ is the number of real labels assigned to example $i$ that the method ranked higher than label $j$, $R_{ij} = |\{k: \hat{y}_{ik} \geq \hat{y}_{ij} \}|$ is the number of all labels ranked higher than label $j$, $w_j$ is the weight we put on label $j$ and $W_i$ is the sum of the weights of all labels assigned to example $i$. For MLC tasks, we put equal weights on all labels, whereas for the HMLC task we weighted each label with $0.75^d$, where $d$ is the depth of the label in the hierarchy \cite{Kocev13:SOP}. For all three measures used ($R^2$, F1, LRAP), a higher value indicates better performance, with a value of 1 indicating the best possible performance.
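For instance, the coefficient of determination and its per-target averaging for MTR can be computed as follows (an illustrative numpy sketch, not the evaluation code used in the experiments):

```python
import numpy as np

def r2(y, y_hat):
    # Coefficient of determination for a single target:
    # 1 - sum of squared errors / total sum of squares.
    return 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def r2_mtr(Y, Y_hat):
    # Multi-target regression score: mean of the per-target R^2 values.
    return np.mean([r2(Y[:, j], Y_hat[:, j]) for j in range(Y.shape[1])])
```

Note that a model that always predicts the per-target mean scores exactly 0, and a perfect model scores 1.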
We estimated the predictive performance using 10-fold cross-validation on each dataset. In addition to performance measures, we also recorded the learning time of each method. All experiments were performed on the same computer and the methods were allowed to use up to 10 processor cores. We set the number of trees in the ensembles to 50. Random forest ensembles (RF-PCT, RF-spyct-svm, RF-spyct-grad, ORF) used $\sqrt{D}$ features for each split, where $D$ is the number of features. For the SVM and LightGBM methods, parameter tuning is advised. In experiments with these methods, we set aside 20\% of the training set for each fold to tune the parameters. For SVM, we selected the regularization parameter $C$ from the values $\{0.1, 1, 10, 100, 1000\}$. For LightGBM, we selected the maximum number of leaves in the trees from the values $\{20, 50, 100\}$ and the minimum number of samples required to perform a split from the values $\{1, 10, 100, 1000\}$. The ORF method includes internal optimization of its regularization parameter, which we left at its default setting. In the HMLC experiments, standard and oblique PCTs also received the same label weights used to calculate LRAP. \section{Results and discussion} \label{sec:results} In this section, we present the results of our extensive benchmarking experiments. We first discuss the predictive performance of the proposed oblique PCTs in an ensemble setting. Next, we analyze the learning times, focusing on large and sparse datasets. Finally, we present a comparison of the proposed methods to other single-tree methods. \subsection{Predictive performance} \begin{figure}[bt!]
\centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{img/others_STR_diagram.png} & \includegraphics[width=0.45\textwidth]{img/others_MTR_diagram.png} \\ a) STR & b) MTR \\ & \\ \includegraphics[width=0.45\textwidth]{img/others_BIN_diagram.png} & \includegraphics[width=0.45\textwidth]{img/others_MCC_diagram.png} \\ c) BIN & d) MCC \\ & \\ \includegraphics[width=0.45\textwidth]{img/others_MLC_diagram.png} & \includegraphics[width=0.45\textwidth]{img/others_HMLC_diagram.png} \\ e) MLC & f) HMLC \end{tabular} \caption{Average ranks of predictive performances obtained by methods on single-target regression (a), multi-target regression (b), binary classification (c), multi-class classification (d), multi-label classification (e) and hierarchical multi-label classification (f) benchmark datasets.} \label{fig:ensemble_diagrams} \end{figure} To keep the paper concise, we do not present the large number of raw performance results of our experiments here (they are available in the Appendix), but instead focus on aggregated results that present the overall findings. Figure~\ref{fig:ensemble_diagrams} presents the average ranks of the methods based on predictive performance on the benchmark datasets, grouped by predictive modeling tasks. If a method did not finish the evaluation in 3 days, it was assigned the lowest rank. This only occurred for the MLC task, where BAG-PCT did not finish on the \emph{eurlex-eurovoc} and \emph{tmc2007} datasets, and RF-PCT did not finish on the \emph{eurlex-eurovoc} dataset. In a nutshell, the results show that in terms of predictive performance, the proposed bagging ensembles (\textsc{BAG-spyct-svm} and \textsc{BAG-spyct-grad}) are on par with current state-of-the-art methods. Furthermore, \textsc{BAG-spyct-grad} achieves the best rank on 3 tasks (MTR, MCC, and MLC) and is close to the top on the other 3 tasks as well (a close second on BIN and HMLC, where \textsc{BAG-spyct-svm} performs best on the latter).
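Several observations in the remainder of this section trace back to the pre-pruning rule used when growing the proposed trees: a split is kept only if it reduces the impurity sufficiently, and otherwise the branch stops growing. A minimal sketch of such a rule (an illustration under our own naming, not the exact spyct implementation):

```python
def accept_split(parent_impurity, left_impurity, right_impurity,
                 n_left, n_right, threshold=0.05):
    """Accept a split only if it reduces the weighted impurity by at least
    `threshold` (expressed as a fraction of the parent impurity).

    With threshold=0, this pre-pruning is effectively turned off: any
    non-negative reduction is accepted.
    """
    n = n_left + n_right
    child_impurity = (n_left * left_impurity + n_right * right_impurity) / n
    reduction = parent_impurity - child_impurity
    return reduction >= threshold * parent_impurity
```

If the optimized oblique split barely separates the examples (a common outcome when the sampled feature subset is uninformative), the rule rejects it and the branch stops growing, which is how over-pruned trees can arise.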
While the bagging ensembles of oblique PCTs have excellent predictive performance, the proposed random forest ensembles of oblique PCTs (\textsc{RF-spyct-svm} and \textsc{RF-spyct-grad}) did not perform well. \textsc{RF-spyct-svm} in particular struggled to learn on several datasets. We believe this is the result of sparse input features (exacerbated by the one-hot encoding required for nominal features) combined with the stopping criterion used. A poor feature subset, consisting of irrelevant features and features whose values are mostly or entirely zero, makes split optimization difficult. If the learned split is not useful, splitting is immediately stopped on the current branch, and the trees become heavily over-pruned. It appears that random forest ensembles of oblique PCTs require a deeper redesign of the learning algorithm; straightforward feature subspacing is not effective. \subsection{Learning time} An experimental comparison of the learning times of different methods is challenging and difficult to do completely fairly. The implementations are in different programming languages (python, java, R, C++) and are not equally optimized for efficient execution. Hence, we focus on the largest datasets, where differences in time efficiency are most pronounced and the methods' time complexity is more likely to dominate any implementation details.
\begin{table} \centering \begin{tabular}{l r R{1.175cm} R{1.175cm} R{1.175cm} R{1.175cm} R{1.175cm}} Dataset & Task & \textsc{BAG-spyct-svm} & \textsc{BAG-spyct-grad} & BAG-PCT & \textsc{FastXML} & \textsc{LightGBM}\\ \hline \emph{diabetes130us} & MCC & 95.7 & 65.4 & 749.0 & NA & 11.2 \\ \emph{hepatitisC} & MCC & 5.4 & 1.8 & 12.4 & NA & 55.4 \\ \emph{bookmarks} & MLC & 28.4 & 44.4 & 1747.7 & 16.8 & NA \\ \emph{eurlex-eurovoc} & MLC & 200.8 & 8.0 & DNF & 8.1 & NA \\ \emph{rcv1subset1} & MLC & 6.0 & 18.2 & 987.2 & 0.4 & NA \\ \emph{tmc2007} & MLC & 22.5 & 129.2 & DNF & 4.8 & NA \\ \emph{reuters} & HMLC & 4.0 & 11.8 & 883.2 & 0.4 & NA \\ \emph{yeast\_expr\_FUN} & HMLC & 17.2 & 1.2 & 517.1 & 3.0 & NA \end{tabular} \caption{Learning times of bagging ensembles on large datasets in minutes. NA means the method is not applicable to the task, DNF means the method did not finish in 3 days (4320 minutes). For dataset properties see Tables~\ref{tab:datasets_mcc}-\ref{tab:datasets_hmlc}.} \label{tab:times} \end{table} Table~\ref{tab:times} presents the learning times on selected datasets with large numbers of examples, features, and/or targets. It is apparent that both proposed variants \textsc{spyct-svm} and \textsc{spyct-grad} are computationally much more efficient than standard PCTs. There are several reasons for the improved learning times. To begin with, the main advantage comes from better scaling with the number of targets (as theoretically analyzed in Section~\ref{sec:time_complexity}), which is evident in the large differences on the MLC and HMLC datasets. Next, another advantage is the exploitation of the sparse representation of the data. For example, if dense matrices are used instead of sparse matrices on the \emph{reuters} dataset, the learning time increases dramatically: for \textsc{BAG-spyct-svm} from 4 minutes to 247.7 minutes, and for \textsc{BAG-spyct-grad} from 11.8 minutes to 54 minutes.
Although learning from dense matrices is much slower than learning from sparse matrices, it is still faster than \textsc{BAG-PCT}. Furthermore, another source of learning time improvements is that with oblique splits, models can be much smaller (in terms of the number of split nodes) than axis-parallel trees, because the splits are more expressive. An illustrative example of this is the \emph{diabetes130us} dataset, which does not even have multiple targets or sparse features: the \textsc{BAG-spyct-svm} model consists of only 87 nodes and the \textsc{BAG-spyct-grad} model of 2345 nodes, while the standard \textsc{BAG-PCT} model consists of 1803069 nodes. Lastly, we noticed faster learning on datasets with a large number of features, even when they are not sparse (e.g., the learning times on the \emph{hepatitisC} dataset). We believe this is because matrix operations are very well optimized on modern CPUs, which gives an advantage to the proposed optimization of oblique splits over the exhaustive search across features in axis-parallel trees. We can also note that \textsc{LightGBM} is very efficient on datasets with many examples (\emph{diabetes130us}), but less so on datasets with many features (\emph{hepatitisC}). Furthermore, the implementation of the \textsc{FastXML} method is extremely well optimized for (H)MLC datasets with a sparse structure. Therefore, even though its theoretical computational complexity is very similar to that of our proposed methods, it has lower learning times on such datasets. However, when the features were not sparse, the observed learning times were in the same range as those of the proposed methods (e.g., on the \emph{yeast\_expr\_FUN} dataset, \textsc{FastXML} has a worse learning time than \textsc{BAG-spyct-grad} and a better one than \textsc{BAG-spyct-svm}).
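The benefit of the sparse representation can be reproduced in isolation: a matrix-vector product over a CSR matrix touches only the stored nonzeros, while the dense product visits every entry. A small self-contained demonstration (our own illustration, not the spyct code):

```python
import time

import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
# A matrix with ~1% nonzero entries, mimicking sparse text features.
X_dense = rng.random((2000, 2000)) * (rng.random((2000, 2000)) < 0.01)
X_sparse = sp.csr_matrix(X_dense)
w = rng.random(2000)

t0 = time.perf_counter(); y_dense = X_dense @ w; t_dense = time.perf_counter() - t0
t0 = time.perf_counter(); y_sparse = X_sparse @ w; t_sparse = time.perf_counter() - t0
# The two products agree; the sparse one only iterates over the stored entries.
```

The same reasoning applies to the gradient and weight updates inside split optimization, which are compositions of such products.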
\subsection{Single-tree methods} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{img/trees_BIN_diagram.png} & \includegraphics[width=0.45\textwidth]{img/trees_MTR_diagram.png} \\ a) BIN & b) MTR \end{tabular} \caption{Average ranks of predictive performances obtained by single-tree methods on binary classification and multi-target regression benchmark datasets.} \label{fig:tree_diagrams} \end{figure} Even though we believe the oblique PCTs are mostly suited for use in ensembles, we briefly discuss our experiments for learning single trees. The results are illustrated in Figure~\ref{fig:tree_diagrams} by presenting the average ranks on two tasks: BIN and MTR. Note that the same behavior can be observed for all the other tasks. As discussed in the introduction, most existing oblique trees are applicable only to binary classification problems. We see that our proposed methods also achieve competitive performance in the single-tree setting. The results for the \textsc{OC1} method on 3 datasets (\emph{bioresponse}, \emph{OVA\_Breast} and \emph{OVA\_Lung}) are missing because the algorithm exited without returning a tree when it was unable to find a split of the root node. It was assigned the worst ranking in those cases. We should note that even though the \textsc{WODT} method performed poorly on BIN datasets, it was on par with other methods on MCC datasets. On SOP tasks, oblique PCTs generally outperformed standard PCTs, as illustrated in Figure~\ref{fig:tree_diagrams}b. \section{Feature importance scoring} \label{sec:fimpresults} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.46\textwidth]{img/fimps_svm.png} & \includegraphics[width=0.46\textwidth]{img/fimps_grad.png} \\ \end{tabular} \caption{Feature importance scores obtained on different datasets by BAG-spyct-svm (left) and BAG-spyct-grad (right) methods.
Results show that the models do not rely on the irrelevant features.} \label{fig:fimps} \end{figure} We demonstrate the capability of extracting meaningful feature importance scores from the models using the approach described in Section~\ref{sec:fimp}. To evaluate the proposed approach, we added random noise features to the datasets and then learned \textsc{BAG-spyct-svm} and \textsc{BAG-spyct-grad} models on the expanded datasets. If the dataset originally had $d$ features, we added another $d$ features with random values, doubling the number of features. We took data sparsity into account, so the ratio of non-zero values did not change (i.e., sparse datasets remained sparse). Figure~\ref{fig:fimps} presents the results on 6 datasets -- one dataset for each task. The importances of the real features take diverse values, which is expected: datasets can contain features that are informative and features that are less informative or not informative at all. The main observation is that the added random features do not obtain high importance scores. This indicates that the obtained importance scores are meaningful and that the methods are resilient to spurious features. This holds even on datasets with thousands of features and on datasets with sparse features.
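The sanity check described above can be reproduced with any tree ensemble that reports feature importances; below we use scikit-learn's random forest as a stand-in for the spyct models (an illustration of the protocol, not the original experiment):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)
# Double the feature space with pure-noise columns, as in the protocol above.
X_noisy = np.hstack([X, rng.standard_normal(X.shape)])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_noisy, y)
real_importance = model.feature_importances_[:20].sum()
noise_importance = model.feature_importances_[20:].sum()
# A model resilient to spurious features allocates most of its importance
# mass to the first 20 (real) columns.
```

For sparse datasets, the noise columns would be generated with the same density of nonzeros, so that the expanded dataset keeps its sparsity pattern.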
\section{Parameter sensitivity analysis} \label{sec:parameters} \begin{table} \centering \begin{tabular}{l r r r r r} Dataset & Task & N & D & T & Sparse \\ \hline \emph{cholesterol} \cite{openml} & STR & 303 & 13 & 1 & \\ \emph{pol} \cite{openml} & STR & 15000 & 48 & 1 & \\ \emph{qsar-30007} \cite{openml} & STR & 534 & 1024 & 1 & \\ \emph{andro} \cite{mulan} & MTR & 49 & 30 & 6 & \\ \emph{atp7d} \cite{mulan} & MTR & 296 & 411 & 6 & \\ \emph{wq} \cite{mulan} & MTR & 1060 & 16 & 14 & \\ \emph{arrhythmia} \cite{openml} & BIN & 452 & 279 & 1 & \\ \emph{OVA\_Endometrium} \cite{openml} & BIN & 1545 & 10935 & 1 & \\ \emph{transfusion} \cite{openml} & BIN & 748 & 4 & 1 & \\ \emph{energy\_efficiency} \cite{openml} & MCC & 768 & 8 & 37 & \\ \emph{gcm} \cite{openml} & MCC & 190 & 160063 & 14 & \\ \emph{gesture} \cite{openml} & MCC & 9873 & 32 & 5 & \\ \emph{birds} \cite{mulan} & MLC & 645 & 260 & 19 & T \\ \emph{delicious} \cite{mulan} & MLC & 16105 & 500 & 983 & D,T \\ \emph{mediamill} \cite{mulan} & MLC & 43907 & 120 & 101 & T \\ \emph{ara\_scop\_GO} \cite{schietgat} & HMLC & 9843 & 2003 & 572 & T \\ \emph{imclef07a} \cite{kocev_phd} & HMLC & 11006 & 80 & 96 & T \\ \emph{yeast\_seq\_FUN} \cite{schietgat} & HMLC & 3932 & 478 & 594 & T \\ \end{tabular} \caption{Properties of datasets for the parameter sensitivity study. Columns show the predictive modeling task, the number of examples (N), the number of features (D), the number of targets (T), and whether the input or output is sparse.} \label{tab:param_datasets} \end{table} We performed a parameter sensitivity analysis to investigate the influence of different parameter values on the performance of the proposed methods. The analysis was performed on a different selection of datasets than those used in the benchmarking experiments, to avoid any test set leakage. We used 3 datasets per task, yielding 18 datasets -- their properties are presented in Table~\ref{tab:param_datasets}.
For estimating the predictive performance, we used 5-fold cross-validation. We experimented with different regularization strengths $C \in \{0.1, 1, 10, 100, 1000\}$ and different numbers of ensemble members (1, 10, 25, 50, 100). We also tried different thresholds for the impurity reduction required to accept a split: $\{0\%, 5\%, 10\%, 20\%\}$. Note that at 0\%, pre-pruning based on impurity reduction is turned off. \begin{figure}[bt!] \centering \begin{tabular}{c} \includegraphics[width=0.95\textwidth]{img/param_trees.png} \\ \includegraphics[width=0.95\textwidth]{img/param_C.png} \\ \includegraphics[width=0.95\textwidth]{img/param_imp.png} \\ \end{tabular} \caption{Predictive performance obtained with different numbers of ensemble members (top), regularization strengths (middle) and impurity reduction thresholds (bottom).} \label{fig:params} \end{figure} We present the most interesting observations in Figure~\ref{fig:params}. First, as expected, increasing the number of trees in the ensemble leads to better predictive performance. However, on several datasets (e.g., the \emph{scm1d} dataset) the improvements become relatively small from 10--25 trees onward. Second, the regularization strength $C$ is difficult to get right. If the value is too low, the model does not fit the data enough. If it is too high, the model can overfit, and the split weights are less sparse, which can lead to large models (memory-wise). The gradient descent variant appeared to be more sensitive to this parameter than the SVM variant. The performance results show that it is less damaging for $C$ to be too large than too small. Third, the impurity reduction threshold determines how good a split must be to be accepted (otherwise the current tree branch stops growing). Similarly to the regularization strength, if the threshold is set too high, the model does not fit the data enough.
If it is set too low, we risk overfitting the data (e.g., the \emph{cholesterol} dataset) and/or inflating the model for no benefit (which increases the learning time and memory requirements, e.g., the \emph{birds} dataset). Finally, our benchmarking experiments have shown that the selected default parameter values work well overall; however, they are unlikely to be optimal for any particular dataset. The best performance on a given dataset will be obtained by tuning the parameters for that dataset. \section{Conclusions} \label{sec:conclusion} In this paper, we propose two methods for learning oblique predictive clustering trees. The nodes in oblique trees contain oblique splits with linear combinations of features in the tests; hence, each split corresponds to an arbitrary hyperplane in the input space. The first method starts by clustering the examples based on the target values and then learns a hyperplane in the input/feature space that approximates this clustering (the spyct-svm variant). The second method uses fuzzy membership indicators and weighted variance to directly optimize the hyperplane to minimize the impurity of the examples on both sides (the spyct-grad variant). The proposed methods are designed to have a drastically lower learning time on datasets with many targets compared to axis-parallel predictive clustering trees, while being capable of addressing many predictive modeling tasks, including structured output prediction. They can also exploit the sparsity present in the input/output data to learn the predictive models more efficiently. We evaluated the proposed methods in single-tree and ensemble settings on 60 benchmark datasets for 6 predictive modeling tasks: binary classification, multi-class classification, multi-label classification, hierarchical multi-label classification, single-target regression, and multi-target regression.
The results from the empirical study show that the predictive performance of ensembles of \textsc{spyct-svm} and \textsc{spyct-grad} trees is on par with state-of-the-art methods on all considered predictive modeling tasks, often exceeding it (especially for \textsc{spyct-grad}). The learning times on datasets with high-dimensional input and output spaces are orders of magnitude lower than the learning times of standard PCTs, especially when the data is sparse. We also demonstrate the potential for extracting meaningful feature importance scores from the models, additionally indicating that the models are resilient to irrelevant features. In future work, we plan to extend our work in several directions. To begin with, the random forest ensembles of the proposed oblique PCTs were not very successful. Hence, we plan to investigate the reasons behind these results and develop an improved version of the algorithm. Next, we will extend the developed methods towards the task of semi-supervised learning, where axis-parallel PCTs scale especially poorly in terms of computational cost due to the large number of clustering attributes considered there. Last but not least, we will evaluate the use of oblique trees for efficient learning of feature rankings in all of the above tasks, as well as in the context of semi-supervised learning. \section*{Acknowledgements} This work was supported by the Slovenian Research Agency through the research program grant P2-0103 (including a Young Researcher grant to TS) and the project grant J2-9230. \bibliographystyle{elsarticle-num}
\section{Introduction} Quantum walks are quantum counterparts of classical random walks \cite{Portugal:2013}. Similarly to classical random walks, there are two types of quantum walks: discrete-time quantum walks (DTQW), introduced by Aharonov~{\it et al.}~\cite{Aharonov:1993}, and continuous-time quantum walks (CTQW), introduced by Farhi~{\it et al.}~\cite{Farhi:1998}. For the discrete-time version, a step of the quantum walk is usually given by two operators -- coin and shift -- which are applied repeatedly. The coin operator acts on the internal state of the walker and rearranges the amplitudes of going to adjacent vertices. The shift operator moves the walker between the adjacent vertices. Quantum walks have been useful for designing algorithms for a variety of search problems~\cite{Nagaj:2011}. To solve a search problem using quantum walks, we introduce the notion of marked elements (vertices), corresponding to the elements of the search space that we want to find. We perform a quantum walk on the search space with one transition rule at the unmarked vertices and another transition rule at the marked vertices. If this process is set up properly, it leads to a quantum state in which the marked vertices have a higher probability than the unmarked ones. This method of search using quantum walks was first introduced in \cite{Shenvi:2003} and has been used many times since then. The problem of search on a two-dimensional rectangular grid was stated in 2002 by Paul Benioff \cite{Benioff:2002}, who conjectured that local search on a $\sqrt{N} \times \sqrt{N}$ grid needs $\Omega(N)$ time, i.e., that no quantum speed-up is possible. One year later, Aaronson and Ambainis proposed an algorithm \cite{Aaronson:2003} which finds a marked vertex in $O(\sqrt{N} \log^2{N})$ steps. In 2005, Ambainis, Kempe and Rivosh \cite{Ambainis:2005} proposed a quantum walk based algorithm (the AKR algorithm) which finds a marked vertex with $O(\frac{1}{\log{N}})$ probability in $O(\sqrt{N\log N})$ steps.
Applying amplitude amplification, this gives a running time of $O(\sqrt{N}\log N)$. Following the AKR algorithm, it was conjectured that the running time can be reduced to $O(\sqrt{N\log N})$, hence providing a full quadratic speed-up over the random walk based approach. This conjecture was confirmed a few years later by Tulsi, who in 2008 showed how to modify the AKR algorithm to achieve a constant success probability in $O(\sqrt{N\log N})$ steps \cite{Tulsi:2008}. Another method to achieve an $O(\sqrt{N\log N})$ running time is to make the quantum walk lackadaisical, i.e., to add a self-loop to each vertex\footnote{There are also other methods, e.g., to run the AKR algorithm and to classically search the neighbourhood of a found vertex~\cite{Nahimovs:2013}}. The concept of the lackadaisical quantum walk (a quantum walk with self-loops) was first studied for DTQW on the one-dimensional line \cite{Norio:2005,Stefanak:2014} and later applied to improve the DTQW based search on the complete graph~\cite{Wong:2015a} and the two-dimensional rectangular grid~\cite{Wong:2018,Hoyer:2020}. The running time of the lackadaisical walk depends heavily on the weight of the self-loop. For a rectangular 2D grid with a single marked vertex, the optimal weight of the self-loop is $4/N$. Following the AKR algorithm for the rectangular 2D grid, Abal {\it et al.} studied the AKR walk on honeycomb~\cite{Abal:2010} and triangular~\cite{Abal:2012} grids. They showed that the walk has the same running time of $O(\sqrt{N} \log N)$, which by applying the Tulsi modification can be reduced to $O(\sqrt{N\log N})$. In this paper, we apply the lackadaisical approach to quantum walk search on triangular and honeycomb 2D grids. We show that adding a self-loop of weight $6/N$ for the triangular grid and $3/N$ for the honeycomb grid results in an $O(\sqrt{N \log{N}})$ running time, i.e., the same improvement that is achieved by the Tulsi modification.
\section{Quantum walks on the two-dimensional grid}\label{sec:definitions} \comment{ \subsection{Non-lackadaisical quantum walk on rectangular 2D grid} Consider a two-dimensional rectangular grid of size $\sqrt{N}\times\sqrt{N}$ with periodic (torus-like) boundary conditions. The locations of the grid are labeled by the coordinates $(x,y)$ for $x, y \in \{0,\dots,\sqrt{N}-1\}$. The coordinates define a set of state vectors, $\ket{x,y}$, which span the $N$-dimensional Hilbert space ${\cal{H_P}}$ associated with the position. Additionally, we define a 4-dimensional Hilbert space ${\cal{H_C}}$, spanned by the set of states $\{\ket{c}: c\in \{\uparrow,\downarrow,\leftarrow,\rightarrow \}\}$, associated with the direction. We refer to it as the coin subspace. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^4$. The evolution of a state of the walk (without searching) is driven by the unitary operator $U = S\cdot (I_N \otimes C)$, where $S$ is the flip-flop shift operator \begin{eqnarray} S\ket{x,y,\uparrow} & = & \ket{x,y+1,\downarrow} \\ S\ket{x,y,\downarrow} & = & \ket{x,y-1,\uparrow} \nonumber \\ S\ket{x,y,\leftarrow} & = & \ket{x-1,y,\rightarrow} \nonumber \\ S\ket{x,y,\rightarrow} & = & \ket{x+1,y,\leftarrow} \nonumber \end{eqnarray} and $C$ is the coin operator, given by the Grover's diffusion transformation \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_4 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{4}}(\ket{\uparrow} + \ket{\downarrow} + \ket{\leftarrow} + \ket{\rightarrow}) . $$ The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is uniform distribution over vertices and directions. Note, that this is a unique eigenvector of $U$ with eigenvalue $1$. The state of the system after $t$ steps is $\ket{\psi(t)} = U^t \ket{\psi(0)}$. 
To use quantum walk for search, we extend the step of the algorithm, making it $$ U' = U \cdot (Q \otimes I_4) , $$ where $Q$ is the query transformation which flips the sign at a marked vertex, irrespective of the coin state. Note that $\ket{\psi(0)}$ is a 1-eigenvector of $U$ but not of $U'$. If there are marked vertices, the state of the algorithm starts to deviate from $\ket{\psi(0)}$. In case of a single marked vertex, after $O(\sqrt{N\log{N}})$ steps the inner product $\braket{\psi(t)}{\psi(0)}$ becomes close to $0$. If the state is measured at this moment, the probability of finding a marked vertex is $O(1 / \log{N})$~\cite{Ambainis:2005}. With amplitude amplification this gives the total running time of $O(\sqrt{N} \log{N})$ steps. } \comment{ \subsection{Lackadaisical quantum walk on rectangular 2D grid} In case of lackadaisical quantum walk the coin subspace of the walk is 5-dimensional Hilbert space spanned by the set of states $\{\ket{c}: c\in \{\uparrow,\downarrow,\leftarrow,\rightarrow,\circlearrowleft \}\}$. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^5$. The shift operator acts on a self loop as \begin{equation} S\ket{x,y,\circlearrowleft} = \ket{x,y,\circlearrowleft} . \end{equation} The coin operator is \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_5 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{4 + l}}(\ket{\uparrow} + \ket{\downarrow} + \ket{\leftarrow} + \ket{\rightarrow} + \sqrt{l}\ket{\circlearrowleft}) . $$ The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is uniform distribution over vertices, but not directions. As before $\ket{\psi(0)}$ is a unique 1-eigenvector of $U$. In case of search the step of the algorithm is $U' = U \cdot (Q \otimes I_5)$. 
As shown in \cite{Wong:2018}, in the case of a single marked vertex, for the weight $l = \frac{4}{N}$, after $O(\sqrt{N\log{N}})$ steps the inner product $\braket{\psi(t)}{\psi(0)}$ becomes close to $0$. If the state is measured at this moment, the marked vertex is found with $O(1)$ probability, which gives an $O(\sqrt{\log{N}})$ improvement over the loopless algorithm. } \subsection{Non-lackadaisical quantum walk on triangular 2D grid} Consider a two-dimensional triangular grid of $N$ vertices with periodic boundary conditions. This defines the $N$-dimensional Hilbert space ${\cal{H_P}}$ associated with the position. The coin subspace of the walk is a 6-dimensional Hilbert space spanned by the set of states $\{\ket{c}: c\in \{\nwarrow,\nearrow,\leftarrow,\rightarrow,\swarrow,\searrow\}\}$. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^6$. \begin{figure}[!htb] \centering \subcaptionbox{Two-dimensional triangular grid.}{\includegraphics[scale=0.45]{Figures/Tri.pdf}} \hfill \subcaptionbox{Triangular grid mapped to rectangular grid.}{\includegraphics[scale=0.45]{Figures/Tri_on_Rect.pdf}} \caption{Two-dimensional triangular grid and its mapping to rectangular grid} \label{fig:tri_on_rect} \end{figure} The evolution of a state of the walk (without searching) is driven by the unitary operator $U = S\cdot (I_N \otimes C)$, where $S$ is the flip-flop shift operator and $C$ is the coin operator, given by Grover's diffusion transformation \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_6 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{6}}(\ket{\nwarrow} + \ket{\nearrow} + \ket{\leftarrow} + \ket{\rightarrow} + \ket{\swarrow} + \ket{\searrow}) .
$$ \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{Figures/Tri_to_Rect_map.pdf} \caption{Mapping from triangular grid to rectangular grid.} \label{fig:tri_to_rect_map} \end{figure} There exists a simple mapping from the triangular grid to the rectangular grid, as shown in Figure~\ref{fig:tri_to_rect_map}. This allows us to label the locations of the grid by the coordinates $(x,y)$ for $x, y \in \{0,\dots,\sqrt{N}-1\}$. The flip-flop shift operator $S$ can then be written as \begin{eqnarray} S\ket{x,y,\nwarrow} & = & \ket{x-1,y+1,\searrow} \\ S\ket{x,y,\searrow} & = & \ket{x+1,y-1,\nwarrow} \nonumber \\ S\ket{x,y,\leftarrow} & = & \ket{x-1,y,\rightarrow} \nonumber \\ S\ket{x,y,\rightarrow} & = & \ket{x+1,y,\leftarrow} \nonumber \\ S\ket{x,y,\swarrow} & = & \ket{x,y-1,\nearrow} \nonumber \\ S\ket{x,y,\nearrow} & = & \ket{x,y+1,\swarrow} \nonumber \end{eqnarray} The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is the uniform distribution over vertices and directions. Note that this is the unique eigenvector of $U$ with eigenvalue $1$. The state of the system after $t$ steps is $\ket{\psi(t)} = U^t \ket{\psi(0)}$. To use the quantum walk as a tool for search, we extend the step of the algorithm, making it $$ U' = U \cdot (Q \otimes I_6) , $$ where $Q$ is the query transformation which flips the sign at a marked vertex, irrespective of the coin state. Note that $\ket{\psi(0)}$ is a 1-eigenvector of $U$ but not of $U'$. If there are marked vertices, the state of the algorithm starts to deviate from $\ket{\psi(0)}$. In the case of a single marked vertex, similarly to the rectangular grid case, after $O(\sqrt{N\log{N}})$ steps the inner product $\braket{\psi(t)}{\psi(0)}$ becomes close to $0$. If the state is measured at this moment, the probability of finding a marked vertex is $O(1 / \log{N})$~\cite{Abal:2012}.
With amplitude amplification, this gives a total running time of $O(\sqrt{N} \log{N})$ steps. \subsection{Lackadaisical quantum walk on triangular 2D grid} In the case of the lackadaisical quantum walk, the coin subspace of the walk is a 7-dimensional Hilbert space spanned by the set of states $\{\ket{c}: c\in \{\nwarrow, \nearrow, \leftarrow, \rightarrow, \swarrow, \searrow, \circlearrowleft \}\}$. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^7$. The shift operator acts on the self-loop as \begin{equation} S\ket{x,y,\circlearrowleft} = \ket{x,y,\circlearrowleft} . \end{equation} The coin operator is \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_7 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{6 + l}}(\ket{\nwarrow} + \ket{\nearrow} + \ket{\leftarrow} + \ket{\rightarrow} + \ket{\swarrow} + \ket{\searrow} + \sqrt{l}\ket{\circlearrowleft}) . $$ The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is the uniform distribution over vertices, but not over directions. As before, $\ket{\psi(0)}$ is the unique 1-eigenvector of $U$. In the case of search, the step of the algorithm is $U' = U \cdot (Q \otimes I_7)$. In the next sections, we investigate the optimal weight of the self-loop and the running time of the search algorithm. \subsection{Non-lackadaisical quantum walk on honeycomb 2D grid} Consider a two-dimensional honeycomb grid of $N$ vertices with periodic boundary conditions. This defines the $N$-dimensional Hilbert space ${\cal{H_P}}$ associated with the position. The coin subspace of the walk is a 6-dimensional Hilbert space spanned by the set of states $\{\ket{c}: c\in \{\nwarrow,\nearrow,\leftarrow,\rightarrow,\swarrow,\searrow\}\}$. There are two types of vertices, with either $\{\nwarrow,\swarrow,\rightarrow\}$ or $\{\leftarrow,\nearrow,\searrow\}$ directions.
Therefore, instead of the $6$-dimensional coin space we can use the $3$-dimensional coin space $\{\leftrightarrow,\neswarrow,\nwsearrow\}$, which corresponds to $\{\rightarrow,\nwarrow,\swarrow\}$ or $\{\leftarrow,\searrow,\nearrow\}$ depending on the type of vertex. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^3$. \begin{figure}[!htb] \centering \subcaptionbox{Two-dimensional hexagonal grid.}{\includegraphics[scale=0.35]{Figures/Hex.pdf}} \hfill \subcaptionbox{Hexagonal grid mapped to rectangular grid.}{\includegraphics[scale=0.4]{Figures/Hex_on_Rect.pdf}} \label{fig:hex_on_rect} \end{figure} The evolution of a state of the walk (without searching) is driven by the unitary operator $U = S\cdot (I_N \otimes C)$, where $S$ is the flip-flop shift operator and $C$ is the coin operator, given by the Grover diffusion transformation \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_3 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{3}}(\ket{\leftrightarrow} + \ket{\nwsearrow} + \ket{\neswarrow}). $$ There exists a simple mapping from the hexagonal to the rectangular grid, as shown in Figure \ref{fig:hex_to_rect_map}. \begin{figure}[!htb] \centering \includegraphics[scale=0.6]{Figures/Hex_to_Rect_map.pdf} \caption{Mapping from hexagonal grid to rectangular grid.} \label{fig:hex_to_rect_map} \end{figure} \noindent This allows us to label the locations of the grid by the coordinates $(x,y)$ for $x, y \in \{0,\dots,\sqrt{N}-1\}$.
The flip-flop shift operator $S$ can then be written as \begin{eqnarray} S\ket{x,y,\nwarrow} & = & \ket{x,y+1,\searrow} \\ S\ket{x,y,\searrow} & = & \ket{x,y-1,\nwarrow} \nonumber \\ S\ket{x,y,\leftarrow} & = & \ket{x-1,y,\rightarrow} \nonumber \\ S\ket{x,y,\rightarrow} & = & \ket{x+1,y,\leftarrow} \nonumber \\ S\ket{x,y,\swarrow} & = & \ket{x,y-1,\nearrow} \nonumber \\ S\ket{x,y,\nearrow} & = & \ket{x,y+1,\swarrow} . \nonumber \end{eqnarray} The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is the uniform distribution over vertices and directions. Note that this is the unique eigenvector of $U$ with eigenvalue $1$. The state of the system after $t$ steps is $\ket{\psi(t)} = U^t \ket{\psi(0)}$. To use the quantum walk as a tool for search, we extend the step of the algorithm to $$ U' = U \cdot (Q \otimes I_3) , $$ where $Q$ is the query transformation, which flips the sign at a marked vertex irrespective of the coin state. Note that $\ket{\psi(0)}$ is a 1-eigenvector of $U$ but not of $U'$. If there are marked vertices, the state of the algorithm starts to deviate from $\ket{\psi(0)}$. In the case of a single marked vertex, similarly to the rectangular grid case, after $O(\sqrt{N\log{N}})$ steps the inner product $\braket{\psi(t)}{\psi(0)}$ becomes close to $0$. If the state is measured at this moment, the probability of finding a marked vertex is $O(1 / \log{N})$~\cite{Abal:2010}. With amplitude amplification this gives the total running time of $O(\sqrt{N} \log{N})$ steps. \subsection{Lackadaisical quantum walk on honeycomb 2D grid} In the case of the lackadaisical quantum walk, the coin subspace of the walk is the 4-dimensional Hilbert space spanned by the set of states $\{\ket{c}: c\in \{\leftrightarrow, \neswarrow, \nwsearrow, \circlearrowleft \}\}$. The Hilbert space of the quantum walk is $\mathbb{C}^N\otimes \mathbb{C}^4$.
The shift operator acts on the self-loop as \begin{equation} S\ket{x,y,\circlearrowleft} = \ket{x,y,\circlearrowleft} . \end{equation} The coin operator is \begin{equation} C = 2 \ket{s_c}\bra{s_c} - I_4 \end{equation} with $$ \ket{s_c} = \frac{1}{\sqrt{3 + l}}(\ket{\leftrightarrow} + \ket{\nwsearrow} + \ket{\neswarrow} + \sqrt{l}\ket{\circlearrowleft}) . $$ The system starts in \begin{equation} \ket{\psi(0)} = \frac{1}{\sqrt{N}} \sum_{x,y=0}^{\sqrt{N}-1} \ket{x,y} \otimes \ket{s_c} , \end{equation} which is the uniform distribution over vertices, but not over directions. As before, $\ket{\psi(0)}$ is the unique 1-eigenvector of $U$. In the case of search, the step of the algorithm is $U' = U \cdot (Q \otimes I_4)$. In the next sections we investigate the optimal weight of the self-loop and the running time of the search algorithm. \section{Analysis}\label{sec:analysis} The running time of the search by the lackadaisical quantum walk depends heavily on the weight of the self-loop. It was shown in~\cite{Wong:2018} that for a two-dimensional rectangular grid of size $\sqrt{N} \times \sqrt{N}$ with a single marked vertex the optimal weight of the self-loop is $l = 4/N$. In this section we study search by the lackadaisical quantum walk for a single marked vertex on two-dimensional triangular and honeycomb grids. We find the optimal weights of the self-loop as well as the total running time of the search algorithm. The presented data are obtained from numerical simulations. The .NET code used to simulate the quantum walk search algorithms is available at \cite{Simulator}. \subsection{Lackadaisical quantum walk on triangular 2D grid} The running time of the lackadaisical quantum walk depends on the weight of the self-loop $l$. Figure \ref{fig:tri_diff_l} shows the evolution of the probability of finding a marked vertex for the lackadaisical quantum walk on a triangular grid of size $N = 16 \times 16$ for various values of $l$.
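Curves of this kind can be reproduced with a short NumPy simulation of the search step $U' = S \cdot (I_N \otimes C) \cdot (Q \otimes I_7)$ on the triangular grid. This is an independent illustrative sketch, not the .NET simulator of \cite{Simulator}; the direction labelling, the function name `lackadaisical_search`, and periodic boundaries are our assumptions:

```python
import numpy as np

# Direction labels 0..5 follow the triangular shift rules from Section 3;
# index 6 is the self-loop.
MOVES = {0: (-1, 1), 1: (1, -1), 2: (-1, 0), 3: (1, 0), 4: (0, -1), 5: (0, 1)}
OPP = {0: 1, 1: 0, 2: 3, 3: 2, 4: 5, 5: 4}

def lackadaisical_search(n, marked, l, steps):
    """Success probability after each step of the lackadaisical search
    on an n x n triangular grid with self-loop weight l."""
    s = np.ones(7)
    s[6] = np.sqrt(l)
    s /= np.linalg.norm(s)                 # weighted coin state |s_c>
    C = 2.0 * np.outer(s, s) - np.eye(7)   # Grover coin 2|s_c><s_c| - I_7
    psi = np.tile(s / n, (n, n, 1))        # |psi(0)>: uniform over vertices
    probs = []
    for _ in range(steps):
        psi[marked] = -psi[marked]         # query Q: sign flip at the mark
        psi = psi @ C                      # coin on the last axis (C = C^T)
        out = np.zeros_like(psi)
        out[..., 6] = psi[..., 6]          # self-loop amplitude stays put
        for c, (dx, dy) in MOVES.items():  # flip-flop shift
            out[..., OPP[c]] += np.roll(np.roll(psi[..., c], dx, axis=0),
                                        dy, axis=1)
        psi = out
        probs.append(float(np.sum(psi[marked] ** 2)))
    return probs
```

For an $8 \times 8$ grid with $l = 6/N$, the probability at the marked vertex rises well above the uniform value $1/N$ within roughly $\sqrt{N \log N}$ steps, in line with the figures.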
As one can see, different values of $l$ result in different success probabilities and different numbers of steps until the first peak. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{Figures/Tri_n=16_diff_l.pdf} \caption{Success probability as a function of time for different $l$ on triangular 2D grid of size $16 \times 16$.} \label{fig:tri_diff_l} \end{figure} Figure \ref{fig:tri_opt_l} shows the success probability for different values of $l$ for search on a triangular grid of size $100 \times 100$. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{Figures/Tri_n=100_opt_l.pdf} \caption{Success probability for different weights of the self-loop for triangular 2D grid of size $100 \times 100$.} \label{fig:tri_opt_l} \end{figure} \noindent When $l = 0$, the lackadaisical walk reproduces the regular (non-lackadaisical) quantum walk, where the success probability reaches a value of $O(1/\log{N})$ after $O(\sqrt{N\log{N}})$ steps. As $l$ increases, the success probability grows, almost reaching $1$ when $l \approx 0.0006$, which is approximately $6/N$. As $l$ increases further, the success probability starts to drop. Figure \ref{fig:tri_diff_n_opt_l} shows the success probability and the number of steps until the first peak for search on a triangular 2D grid of $N$ vertices with $l = 6/N$. As one can see, as $N$ increases the success probability converges to a constant. The running time (the number of steps until the first peak in this case) fits $$ T = 1.31 \sqrt{N \log{N}} .
$$ \begin{figure}[!htb] \centering \subcaptionbox{}{\includegraphics[scale=0.5]{Figures/Tri_n=10-200_Pr.pdf}} \hfill \subcaptionbox{}{\includegraphics[scale=0.5]{Figures/Tri_n=10-200_Steps.pdf}} \caption{The success probability (a) and the number of steps until the first peak (b) of the lackadaisical quantum walk on the 2D triangular grid with $l = 6/N$.} \label{fig:tri_diff_n_opt_l} \end{figure} \noindent Thus, the numerical simulations suggest that the running time is $O(\sqrt{N \log{N}})$, which is an $O(\sqrt{\log{N}})$ improvement over the loopless algorithm. \subsection{Lackadaisical quantum walk on honeycomb 2D grid} Figure \ref{fig:hex_diff_l} shows the evolution of the probability of finding a marked vertex for the lackadaisical quantum walk on a hexagonal grid of size $N = 16 \times 16$ for various values of $l$. As one can see, different values of $l$ result in different success probabilities and different numbers of steps until the first peak. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{Figures/Hex_n=16_diff_l.pdf} \caption{Success probability as a function of time for different weights of the self-loop for hexagonal 2D grid of size $16 \times 16$.} \label{fig:hex_diff_l} \end{figure} Figure \ref{fig:hex_opt_l} shows the success probability for different values of $l$ for search on a hexagonal grid of size $100 \times 100$. \begin{figure}[!htb] \centering \includegraphics[scale=0.5]{Figures/Hex_n=100_opt_l.pdf} \caption{Success probability for different weights of the self-loop for hexagonal 2D grid of size $100 \times 100$.} \label{fig:hex_opt_l} \end{figure} \noindent When $l = 0$, the lackadaisical walk reproduces the regular (non-lackadaisical) quantum walk, where the success probability reaches a value of $O(1/\log{N})$ after $O(\sqrt{N\log{N}})$ steps. As $l$ increases, the success probability grows, almost reaching $1$ when $l \approx 0.0003$, which is approximately $3/N$. As $l$ increases further, the success probability starts to drop.
Figure \ref{fig:hex_diff_n_opt_l} shows the success probability and the number of steps until the first peak for search on a hexagonal 2D grid of $N$ vertices with $l = 3/N$. As one can see, as $N$ increases the success probability converges to a constant. The running time (the number of steps until the first peak in this case) fits $$ T = 1.56 \sqrt{N \log{N}} . $$ \begin{figure}[!htb] \centering \subcaptionbox{}{\includegraphics[scale=0.5]{Figures/Hex_n=10-200_Pr.pdf}} \hfill \subcaptionbox{}{\includegraphics[scale=0.5]{Figures/Hex_n=10-200_Steps.pdf}} \caption{The success probability (a) and the number of steps until the first peak (b) of the lackadaisical quantum walk on the 2D hexagonal grid with $l = 3/N$.} \label{fig:hex_diff_n_opt_l} \end{figure} \noindent Thus, the numerical simulations suggest that the running time is $O(\sqrt{N \log{N}})$, which is an $O(\sqrt{\log{N}})$ improvement over the loopless algorithm. \section{Conclusions}\label{sec:conclusions} In this paper, we have studied the search for a single marked vertex by a lackadaisical quantum walk on triangular and honeycomb two-dimensional grids. We have shown that adding a self-loop, similarly to the rectangular grid case, yields a running time of $O(\sqrt{N \log{N}})$, an $O(\sqrt{\log{N}})$ improvement over the loopless algorithm. The optimal weight of the self-loop is $6/N$ for the triangular grid, $4/N$ for the rectangular grid (shown by Wong in~\cite{Wong:2018}) and $3/N$ for the honeycomb grid. In all cases the constant in the numerator is equal to the degree of a vertex of the grid. This observation suggests a natural generalisation of search by lackadaisical quantum walk to general graphs, which is the subject of further research.
\section{Introduction and overview}\label{section 1 introduction} Scattering theory has been an important tool in the mathematical and theoretical study of black hole solutions to the Einstein equations, which in vacuum take the form \begin{align}\label{EVE} R_{ab}[g]=0 \end{align} (setting the cosmological constant to zero). Whereas there has been extensive work on scattering for scalar, electromagnetic, and fermionic fields on black hole backgrounds (see already \cite{DimockKayI}, \cite{BachelotAFMaxwell}, \cite{Nicolas}, \cite{DRSR14}, \cite{DaudeNicoleau}), in the case of the scattering of gravitational perturbations much of the historical literature has been concerned with solutions to equations governing fixed-frequency modes (see \cite{Chandrasekhar}, \cite{HandlerFuttermanMatzner} for an extensive survey, and the very recent \cite{SRTdC}), and comparatively little has been said about scattering theory on black holes \textit{in physical space}. The aim of this work is to address this vacancy for the case of linearised gravitational perturbations around the Schwarzschild exterior, which in familiar coordinates has the metric \cite{Schwarzschild}: \begin{align}\label{SchwMetric} g=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+ r^2\left(d\theta^2+\sin^2\theta d\phi^2\right). \end{align} The subject of scattering theory is the study of perturbations evolved on scales that are large in comparison to a characteristic scale of the perturbed system. More concretely, scattering theory is relevant when the perturbations are meant to be asymptotically free from the effects of the target. In this picture, incoming and outgoing perturbations are approximated by solutions describing ``free'' propagation.
A mathematical description of scattering hinges on an appropriate and rigorous formulation of these ideas, and much of the value of scattering theory lies in the identification of the correct candidates for spaces of ``scattering states'' that describe incoming and outgoing perturbations. In these terms, a satisfactory scattering theory must provide answers to the following questions: \begin{enumerate}[I] \item \textit{Existence of scattering states}: Is there an interesting class of initial data that evolve to solutions which can be associated with past/future scattering states?\label{QI} \item \textit{Uniqueness of scattering states}: Is the above association injective? Do solutions that give rise to the same scattering state coincide?\label{QII} \item \textit{Asymptotic completeness}: Does this association exhaust the class of initial data of interest?\label{QIII} \end{enumerate} Because of the nonlinear nature of the Einstein equations \bref{EVE}, the study of scattering in general relativity depends on a thorough understanding of the perturbative behaviour of the equations. As a first step, it is useful to understand the evolution of solutions to the linearised Einstein equations, which are obtained by formally expanding a family of solutions in some smallness parameter $\epsilon$ around some fixed background, e.g.~\bref{SchwMetric}, and keeping only leading-order terms in $\epsilon$ in the equations \bref{EVE}. Studying the evolution of linear equations on black hole backgrounds has its own appeal, as black holes by their very nature are immune to ``direct'' observation and even their existence can only be inferred by examining their effects on the propagation of wave phenomena in spacetime.
The linearised Einstein equations still inherit many of the features as well as the difficulties that plague the study of the nonlinear equations.\\ \indent A foundational breakthrough in the analysis of the linearised equations was made by Bardeen and Press \cite{Bardeen-Press} in the case of the Schwarzschild black hole \bref{SchwMetric} and Teukolsky \cite{TeukP74} in the case of the Kerr black hole \cite{Kerr}, who showed that by casting the equations of linearised gravity in the Newman--Penrose formalism, it is possible to identify gauge-invariant components of the curvature that obey second-order {\em decoupled} wave equations, which on the Schwarzschild spacetime take the forms \begin{align}\label{wave equation +} \Box_g \Omega^2\alpha +\frac{4}{r\Omega^2}\left(1-\frac{3M}{r}\right)\partial_u \Omega^2\alpha=V(r) \Omega^2\alpha, \end{align} \begin{align}\label{wave equation -} \Box_g \Omega^2\underline\alpha -\frac{4}{r\Omega^2}\left(1-\frac{3M}{r}\right)\partial_v \Omega^2\underline\alpha=V(r) \Omega^2\underline\alpha. \end{align} Here, $\Box_g$ is the d'Alembertian operator of the Schwarzschild metric $g$, $\alpha, \underline\alpha$ are symmetric traceless $S^2$-tangent 2-tensor fields, $\Omega^2=1-\frac{2M}{r}$ and $V=\frac{2(3\Omega^2+1)}{r^2}$ (see already \Cref{Chandra1}).
Equations \bref{wave equation +}, \bref{wave equation -} are known as the \textbf{Teukolsky equations of spin $\bm{+2}$ and $\bm{-2}$} respectively.\\ \indent In addition to the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, the quantities $\alpha, \underline\alpha$ satisfy a closed system of equations known as the Teukolsky--Starobinsky identities: \begin{align} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3\alpha=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2{\underline\alpha}+12M\partial_t\hspace{.5mm}r\Omega^2{\underline\alpha}, \label{eq:227intro1}\\ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3{\underline\alpha}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\alpha-12M\partial_t\hspace{.5mm}r\Omega^2\alpha.\label{eq:228intro1} \end{align} The purpose of this paper is to study the scattering theory of the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} as a prelude to studying scattering for the full system of linearised Einstein equations. This is done by first developing a scattering theory for \bref{wave equation +}, \bref{wave equation -}, in particular addressing points \ref{QI}, \ref{QII}, \ref{QIII} above, and then bridging this scattering theory to the full system of linearised Einstein equations by incorporating the constraints \bref{eq:227intro1} and \bref{eq:228intro1}. A complete treatment of the full system will appear in the forthcoming \cite{M2050}. \\ \indent To elaborate on the ideas involved, we go through a quick survey of the history of the subject. In \Cref{RedshiftScalar} we review known scattering theory for the scalar wave equation, highlighting the role of redshift as a feature of scattering on black hole backgrounds.
\Cref{LinearisedGravity} is a survey of the difficulties encountered in the study of scattering for the (linearised) Einstein equations, and will motivate and introduce the main results. \Cref{IntroResults} contains a preliminary statement of the results of this paper. \Cref{subsection 1.4 outline} contains an outline of the structure of the paper. \subsection{Scattering for the scalar wave equation and the redshift effect}\label{RedshiftScalar} It is clear that understanding scattering for the scalar wave equation \begin{align}\label{wave equation} \Box_{g} \phi=0 \end{align} on a fixed Schwarzschild background \bref{SchwMetric} is a necessary prerequisite for our scattering problem, and already at this level we see many of the difficulties that characterise the evolution of perturbations to black holes. Much of the historical literature on scattering for \bref{wave equation} concerns the Schr\"odinger-like equation that results from a formal separation of \bref{wave equation} and governs the radial part. While this leads to important insights, it does not lead on its own to a satisfactory answer to points \ref{QI}, \ref{QII}, \ref{QIII} above. \\ \indent The first result on physical-space scattering for \bref{wave equation} on \bref{SchwMetric} goes back to Dimock and Kay \cite{DimockKayI}, who applied Lax--Phillips scattering theory to the scalar wave equation on the Schwarzschild spacetime. In \cite{Friedlander}, Friedlander's use of the radiation field at null infinity to describe future scattering states initiated a more geometric alternative to the Lax--Phillips formalism for treating the notion of scattering states, and subsequent works have largely adhered to this point of view; see the discussion by Nicolas \cite{Nicolas}.
The state of the art in this area is the work of Dafermos, Rodnianski and Shlapentokh-Rothman \cite{DRSR14}, where a complete understanding of scattering for the wave equation \bref{wave equation} on the Kerr exterior is laid out. The scattering problem for the scalar wave equation \bref{wave equation} on the extremal Reissner--Nordstr\"om background was definitively resolved in \cite{AAG19}. In the case of asymptotically de-Sitter black holes, we note the result \cite{HafnerGerardGeorgescu} on asymptotic completeness for the Klein--Gordon equation, restricting to solutions of fixed azimuthal modes on a very slowly rotating Kerr--de-Sitter black hole. Scattering for \bref{wave equation} has also been considered on the interior of the Reissner--Nordstr\"om black hole by Kehle and Shlapentokh-Rothman \cite{KSR18}.\\ \indent What leads to the rich theory available for \bref{wave equation} is the fact that it comes with a natural Lagrangian structure with which we can associate conservation laws encoded in the energy-momentum tensor: \begin{align} T_{\mu\nu}[\phi]=\partial_\mu \phi \; \partial_\nu \phi-\frac{1}{2}g_{\mu\nu}\;\partial_\alpha\phi \;\partial^\alpha \phi, \end{align} which satisfies $\nabla^\mu T_{\mu\nu}[\phi]=0$. Since the vector field $T:=\partial_t$ generates an isometry, classical scattering theory immediately suggests the class of solutions of finite $T$-energy, defined as the flux on a spacelike or null hypersurface of the quantity \begin{align} n^\mu J^T_\mu[\phi], \end{align} where $J^X_\mu[\phi]=T_{\mu\nu}[\phi]X^\nu$ and $n^\mu$ is the vector field normal to the hypersurface, as this flux is non-negative definite and conserved. Solutions to \bref{wave equation} arising from suitable Cauchy data have sufficiently tame asymptotics to induce smooth radiation fields on $\mathscr{I}^+$ and $\mathscr{H}^+$.
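For orientation, the conservation underlying this definition is the standard divergence computation (recorded here in the notation above; we are only unpacking the two facts just mentioned): \begin{align} \nabla^\mu J^T_\mu[\phi]=\left(\nabla^\mu T_{\mu\nu}[\phi]\right)T^\nu+T_{\mu\nu}[\phi]\,\nabla^{(\mu}T^{\nu)}=0, \end{align} where the first term vanishes for solutions of \bref{wave equation} and the second vanishes since $T=\partial_t$ is Killing; the divergence theorem then equates the $T$-energy flux through a Cauchy hypersurface with the fluxes through $\mathscr{H}^+$ and $\mathscr{I}^+$.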
The conservation of $T$-energy allows us to resolve the scattering problem by constructing an isomorphism between the space of Cauchy data of finite energy and the corresponding space of radiation fields. With this, the answer to questions \ref{QI}, \ref{QII}, \ref{QIII} of scattering theory for equation \bref{wave equation} is in the affirmative. \\ \indent At the same time, the fact that the vector field $T$ becomes null on the event horizon points to a deficiency, since the $T$-energy density then loses control over some derivatives and the norm on the event horizon defined by the $T$-energy, \begin{align}\label{horizon energy} \int_{\mathscr{H}^+}J^T_\mu[\phi]n^\mu_{\mathscr{H}^+}, \end{align} is degenerate. The energy density observed along a horizon-penetrating timelike curve is better described by $J^N_\mu[\phi]$ for a timelike vector field $N$, but such a vector field cannot be Killing everywhere. The flux of this quantity is therefore not conserved and new issues appear, paramount among which is the \textit{redshift effect}.\\ \indent An intuitive hint of the role played by the redshift effect is the exponential decay in frequency that affects signals originating near the event horizon by the time they reach late-time observers, which relates to the divergence of outgoing null geodesics near the event horizon towards the future. It turns out that this effect can be exploited to produce nondegenerate energies useful for evolution in the future direction, precisely by choosing a timelike $N$ to be a time-translation invariant vector field measuring the separation of null geodesics near the event horizon, see \cite{DR05}. In addition to using $N$ as a multiplier (taking $X=N$ above), key to this method is the fact that commuting the wave equation \bref{wave equation} with such an $N$ produces lower-order derivative terms that come with a good sign when estimating the solution forwards.
This can be traced to the positivity of the surface gravity: the fact that on $\mathscr{H}^+$, $\nabla_T T=\kappa T$ with $\kappa>0$. See \cite{DR08} for a detailed exposition.\\ \indent Unfortunately, when it comes to backwards evolution the technique described above does not work, as the redshift effect in the forwards evolution problem turns into a deleterious blueshift effect when evolving towards the past, and it is not possible to use the energy associated with $N$ to bound the solution in the backwards direction. Furthermore, it can be shown that there exists a large class of scattering data with finite $N$-energy on the future event horizon $\mathscr{H}^+$ whose $N$-energy blows up under backwards evolution, see \cite{DSR17}.\\ \indent Note that in the case of the Kerr exterior $(a\neq0)$ there is no obvious analogue of the $T$-energy scattering theory, as the stationary Killing vector field becomes spacelike in the ergoregion and therefore its flux no longer has a definite sign. Superradiance therefore features as an additional aspect of scattering theory. One cannot hope for a unitary map, but one can still hope for a bounded invertible map. In view of the above discussion, however, the $N$-energy space is not appropriate. One of the difficulties is indeed identifying the correct notion of energy. See \cite{DRSR14} for the detailed treatment. \subsection{Linearised gravity and the Teukolsky equations}\label{LinearisedGravity} The above discussion involves linear \textit{scalar} perturbations only, i.e.~solutions to \bref{wave equation}, and little is known about the scattering theory of the Einstein equations even when linearised; see \cite{Chandrasekhar} and \cite{HandlerFuttermanMatzner} for a survey. Indeed, a comprehensive study of scattering under the Einstein equations \bref{EVE} on black hole exteriors involves and subsumes major aspects of the study of black hole stability.
To date, full nonlinear stability for an asymptotically flat spacetime has only been satisfactorily proven for Minkowski space, see \cite{Ch-K}, \cite{Lin-Rod} for instance. For asymptotically flat black holes, stability results against generic perturbations exist only for the linearised Einstein equations, see \cite{DHR16} for the case of the Schwarzschild spacetime, \cite{DHR18}, \cite{Ma}, \cite{AnderssonKerr} and \cite{HVH19} for the case of very slowly rotating Kerr black holes, and \cite{SRTdC} for the general subextremal case. For the case of asymptotically de-Sitter black holes, results concerning the nonlinear stability of black hole solutions with positive cosmological constant do exist, see \cite{Hintz2018}. \subsubsection{The Bianchi equations and the lack of a Lagrangian structure}\label{subsubsection 1.2.1 no lagrangian} In a spacetime satisfying the Einstein equations \bref{EVE} with a vanishing cosmological constant, the components of the Weyl curvature tensor satisfy the \textit{Bianchi equations} \begin{align}\label{Bianchi} \nabla^a W_{abcd}=0. \end{align} These equations, along with the equations defining the connection components, comprise the evolutionary content of the Einstein equations \bref{EVE}. Importantly, the Bel--Robinson tensor \begin{align} Q_{abcd}=W_{aecf} W_b{}^e{}_d{}^f + {}^*W_{aecf} {}^*W_b{}^e{}_d{}^f \end{align} acts as an energy-momentum tensor for the Bianchi equations. Upon linearising these equations against the background of Minkowski space, this structure survives in the linearised equations and allows one to estimate the curvature components using the vector field method in the same way that it was applied to study the scalar wave equation, as was done in \cite{Ch-K-linear}.
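For orientation, the analogy with the scalar currents is direct (a standard computation, recorded here schematically): when \bref{Bianchi} holds, $\nabla^a Q_{abcd}=0$ and $Q$ is totally symmetric, so for Killing fields $X,Y,Z$ the current \begin{align} P_a = Q_{abcd}\,X^b Y^c Z^d, \qquad \nabla^a P_a = \left(\nabla^a Q_{abcd}\right)X^b Y^c Z^d + Q_{abcd}\left(\nabla^{(a}X^{b)}\right)Y^c Z^d + \dots = 0, \end{align} is divergence-free, each term vanishing by the Bianchi equations and the Killing equations respectively.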
In fact, the vector field method applied using the Bel--Robinson tensor was key to the proof of nonlinear stability of the Minkowski spacetime by Christodoulou and Klainerman in \cite{Ch-K}, and it is possible to use this strategy to study scattering for small perturbations to the Minkowski spacetime evolving according to the nonlinear Einstein equations \bref{EVE}.\\ \indent Unfortunately, this structure is lost in the process of linearising around black holes, where the connection components couple to the curvature in a way that destroys the Lagrangian structure of the equations \bref{Bianchi}: in terms of a formal expansion of perturbed quantities of the form \begin{align} \bm{g}=g\;+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon g}, \qquad \bm{\Gamma}=\Gamma+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon\; \Gamma}, \qquad \bm{R}=R+\stackrel{\;\;\mbox{\scalebox{0.4}{(1)}}}{\epsilon R}, \end{align} the linearised version of equations \bref{Bianchi} has the schematic form \begin{align}\label{coupling} \stackrel{\;\;\;\;\mbox{\scalebox{0.4}{(1)}}}{\nabla \; W}+\stackrel{\mbox{\scalebox{0.4}{(1)}}\;\;\;\;\;\;}{\Gamma\; W}=0. \end{align} Therefore, it is not possible to use the Bianchi equations alone directly to prove boundedness and decay results for curvature components independently of the connection components. See the discussion in \cite{DHR16}, \cite{DHR17}. \subsubsection{Double null gauge}\label{subsubsection 1.2.2 double null gauge} It is important to note that the formulation of the problem depends crucially on the choice of gauge. It turns out that working with a \textit{double null gauge} is particularly useful for manifesting a special structure in the linearised Einstein equations that reveals an alternative method to control curvature.
This gauge leads to a well-posed reduction of the linearised Einstein equations around Schwarzschild, arising from a well-posed reduction of the full Einstein equations (see \cite{DHR16} and \cite{Ch-K}).\\ \indent A double null gauge is a coordinate system $(\bm{u},\bm{v},\bm{\theta}^A)$ that foliates spacetime with two families of ingoing and outgoing null hypersurfaces. In this gauge we decompose the curvature and connection components in terms of $\mathcal{S}_{\bm{u},\bm{v}}$-tangent tensor fields, where $\mathcal{S}_{\bm{u},\bm{v}}$ is the compact 2-dimensional manifold where the null hypersurfaces of constant $\bm{u}, \bm{v}$ intersect (see already \Cref{section 2 preliminaries} and \Cref{Appendix B Double null guage}). On the exterior of the Schwarzschild spacetime, the Eddington--Finkelstein null coordinates $(u,v,\theta^A)$ provide an example of this gauge (where $\mathcal{S}_{u,v}$ are just standard spheres). \\ \indent For an example of the resulting equations, the linearised curvature components $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}_{AB}=\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{A4B4}$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}_A=\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{A434}$ obey the transport equations \begin{align}\label{example1} \frac{1}{\Omega}\slashed{\nabla}_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}\;=-2r\slashed{\mathcal{D}}^*_2 \Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta} +\frac{6M}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}, \qquad\qquad \Omega\slashed{\nabla}_4 r^4\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}-2M r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}\;= r\slashed{div}\;r^3\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \end{align} where $\Omega^2=\left(1-\frac{2M}{r}\right)$, $\slashed{\nabla}_4,\slashed{\nabla}_3$ denote the projections of the null covariant derivatives to $S^2_{u,v}$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ denotes the 
linearised outgoing shear. The coupling to the connection components means we must simultaneously consider the connection components like $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat\chi}$, which satisfy transport equations of a similar form, for example: \begin{align}\label{example2} \Omega\slashed{\nabla}_4\; r\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}+\Big(1-\frac{4M}{r}\Big)\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}=-r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}. \end{align} \indent We note that in this formulation, we can see the presence of a \textit{blueshift} effect in the linearised Einstein equations by observing that the second equation of \bref{example1} above carries a lower order term with a sign that forces the solution to grow exponentially when evolved forward in a neighborhood of the horizon. This appears to be an essential feature of working with tensorial quantities decomposed using null frames. \subsubsection{The Teukolsky equations}\label{subsubsection 1.2.3 Teukolsky} A quick glance at \bref{example1}, \bref{example2} reveals that we can derive a decoupled equation for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ alone by acting on the first equation of \bref{example1} with $\Omega\slashed{\nabla}_4$ and following through the remaining equations to discover that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ obeys the $+2$ Teukolsky equation \bref{wave equation +}. 
The linearisation of the component $\bm{\underline\alpha}_{AB}=W_{A3B3}$ can be shown to obey \bref{wave equation -} by a similar logic; see \Cref{subsection 2.2 Linearised Einstein equations in double null gauge} for the full list of the linearised Einstein equations around the Schwarzschild background.\\ \indent The derivation of \bref{wave equation +}, \bref{wave equation -} by Bardeen and Press \cite{Bardeen-Press} for perturbations around Schwarzschild and their extension to Kerr black holes by Teukolsky \cite{Teu73} (using the Newman--Penrose formalism) was a game changer in the study of linearised gravity. If one can estimate solutions to the Teukolsky equations (i.e.~equations \bref{wave equation +}, \bref{wave equation -} on Schwarzschild), one can hope to make use of the hierarchical nature of the linearised Einstein equations in double null gauge (as manifest in \bref{example1}, \bref{example2} for example) to estimate the remaining components. \\ \indent Unfortunately, however, having arrived at the decoupled wave equations \bref{wave equation +}, \bref{wave equation -} for the components $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$, the essential difficulty in dealing with the linearised Einstein equations is still inherited by the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, in the sense that equations \bref{wave equation +}, \bref{wave equation -}, taken in isolation, also suffer from the lack of a variational principle, and neither \bref{wave equation +} nor \bref{wave equation -} has its own energy-momentum tensor. This is related to the first-order null derivative term on the left-hand side of \bref{wave equation +}, \bref{wave equation -}.
These first-order terms are reminiscent of those arising for the wave equation \bref{wave equation} when commuted with the redshift vector field $N$ (note in particular that the first-order term in the $-2$ Teukolsky equation \bref{wave equation -} has a redshift sign near $\mathscr{H}^+$, while the $+2$ equation \bref{wave equation +} has a first-order term with a blueshift sign near $\mathscr{H}^+$). This issue meant that the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, despite their decoupling, long remained immune to known methods. \subsubsection{Chandrasekhar-type transformations in physical space}\label{subsubsection 1.2.4 DHR} In \cite{DHR16}, Dafermos, Holzegel and Rodnianski succeeded in deriving boundedness and decay estimates for \bref{wave equation +} and \bref{wave equation -}, and subsequently proved the linear stability of the Schwarzschild solution in double null gauge. Key to their work is the exploitation of a physical space version of a transformation due to Chandrasekhar \cite{Chandrasekhar}, which proceeds by commuting the equations with derivatives in the null directions. This commutation removes the first-order derivative terms and reduces the equations \bref{wave equation +}, \bref{wave equation -} to a familiar form: \begin{align}\label{RWintro} \Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi-\Omega^2\slashed{\Delta}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi+V(r)\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi=0, \end{align} where $V(r)=\frac{\Omega^2(3\Omega^2+1)}{r^2}$ and \begin{align}\label{transport} \stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha.
\end{align} The same applies to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ by differentiating in the $4$-direction instead, and we obtain a quantity $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}$ satisfying \bref{RWintro} via \begin{align}\label{transport 2} \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}. \end{align} \indent Equation \bref{RWintro} is the well-known Regge--Wheeler equation, which first appeared in the theory of metric perturbations studied by Regge and Wheeler \cite{ReggeWheeler}, Vishveshwara \cite{Vishveshwara}, and Zerilli \cite{Zerilli}, where it describes gauge-invariant combinations of the metric perturbations. The Regge--Wheeler equation \bref{RWintro} has a very similar structure to the equation that governs the radiation field of the scalar wave equation \bref{wave equation}, and in particular the vector field method can be adapted to study \bref{RWintro}. This is what was done in \cite{DHR16} to obtain boundedness and decay estimates for solutions of \bref{RWintro}. These estimates for \bref{RWintro} can in turn be used to estimate $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ \textit{by regarding \bref{transport} and its counterpart \bref{transport 2} as transport equations for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$}.
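The way estimates for \bref{RWintro} are transferred to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ itself can be sketched as follows (the intermediate quantity $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi$ below is introduced here purely for illustration): splitting the second order relation \bref{transport} into two first order relations via
\begin{align*}
\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\left(r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha\right),\qquad\qquad \stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi,
\end{align*}
one may regard
\begin{align*}
\Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi=\frac{\Omega^2}{r^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi,\qquad\qquad \Omega\slashed{\nabla}_3\left(r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha\right)=\frac{\Omega^2}{r^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi
\end{align*}
as transport equations in the ingoing direction: boundedness and decay for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi$ are transferred upon integration first to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\psi$ and then to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, provided the $r$- and $\Omega$-weights are tracked carefully.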
For this to work, it was fundamental that a sufficiently strong decay statement be available for solutions of \bref{RWintro} with respect to a nondegenerate energy (i.e.~the analogue of the $N$-energy above).\\ \indent Note that in the case of a Kerr spacetime with $a\neq 0$, the strategy outlined above suffers from the fact that the analogues of \bref{RWintro} are coupled to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ via $a$. Nevertheless, it is possible to apply the same strategy to obtain boundedness and decay results for solutions to the Teukolsky equations; see \cite{DHR18} and \cite{Ma} for the case of the very slowly rotating Kerr exterior $|a|\ll M$ and the very recent \cite{SRTdC} for the full subextremal range $|a|<M$. For the case of the extremal Kerr exterior $a=M$, see \cite{Rita2019}, \cite{Lucietti_2012}.\\ \indent The first preliminary goal of our work will be to analyse the Regge--Wheeler equation \bref{RWintro} from the point of view of scattering. The fact that the conservation of the $T$-energy leads to a scattering theory for the scalar wave equation \bref{wave equation} means one can expect to prove an analogous statement for the Regge--Wheeler equation using analogous methods. This will be the content of \textbf{Theorem 1} (see \Cref{subsubsection 1.3.1 scattering for RW}). \subsubsection{Reconstructing curvature from the Regge--Wheeler equation}\label{subsubsection 1.2.5 RW} \indent Starting from such a scattering theory for the Regge--Wheeler equation \bref{RWintro}, one can hope to apply the strategy used in \cite{DHR16} to construct a scattering theory for the Teukolsky equations \bref{wave equation +} and \bref{wave equation -} via the transport relations \bref{transport} and \bref{transport 2}.
It is however far from clear that the transport equations \bref{transport}, \bref{transport 2} can lead to a suitable scattering theory, in particular one that could in turn lead to a scattering theory for the linearised Einstein equations. The central question we aim to address is whether the $T$-energy obtained via the Regge--Wheeler equation could define a Hilbert space of scattering states for solutions to \bref{wave equation +}, \bref{wave equation -}, for which the central questions of scattering theory (points \ref{QI}, \ref{QII}, \ref{QIII} above) could be answered. \\ \indent Adapting the strategy above to a scattering setting based on $T$-energies, we succeed in constructing such a scattering theory for the Teukolsky equations, answering \ref{QI}, \ref{QII}, \ref{QIII} in the affirmative. This will lead to \textbf{Theorem 2} of this paper (see \Cref{subsubsection 1.3.2 scattering for teukolsky}). \subsubsection{The Teukolsky--Starobinsky correspondence}\label{subsubsection 1.2.6 TS} Finally, we treat what is known as the Teukolsky--Starobinsky correspondence: the study of the relationship between $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ using \bref{eq:227intro1}, \bref{eq:228intro1} and the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, independently of the remaining components of a solution to the linearised Einstein system.
The idea that knowing either $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ uniquely determines the other via \bref{eq:227intro1}, \bref{eq:228intro1} has permeated the literature on the Einstein equations since the constraints appeared in \cite{TeukP74}, \cite{StarC}, but little has been done in the way of a systematic study of the combined system consisting of the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} and the constraints \bref{eq:227intro1}, \bref{eq:228intro1}, governing a pair $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$. \\ \indent The constraints \bref{eq:227intro1}, \bref{eq:228intro1} provide a bridge between the scattering theory we construct for equations \bref{wave equation +}, \bref{wave equation -} and the full linearised Einstein equations. This is because scattering for the linearised Einstein equations would involve scattering data for the metric components, from which, on each component of the asymptotic boundary, data for only one of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ could be constructed. One can hope to use the identities \bref{eq:227intro1}, \bref{eq:228intro1} to obtain scattering data for either $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ out of the other, but it is entirely unclear whether we would obtain scattering data that are compatible with the scattering theory constructed here for \bref{wave equation +}, \bref{wave equation -}, or even whether the system consisting of \bref{wave equation +}, \bref{wave equation -}, \bref{eq:227intro1}, \bref{eq:228intro1} is well-posed.
In the context of scattering, we are specifically interested in whether the operators involved on each side of the identities \bref{eq:227intro1}, \bref{eq:228intro1} are invertible on the spaces of scattering states, and we would like to know whether, given scattering data for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ related via \bref{eq:227intro1}, \bref{eq:228intro1}, the ensuing solutions to \bref{wave equation +}, \bref{wave equation -} would in turn satisfy \bref{eq:227intro1}, \bref{eq:228intro1}.\\ \indent Interestingly, it turns out that the study of constraints \bref{eq:227intro1}, \bref{eq:228intro1} is much more transparent when done via scattering rather than directly via the Cauchy problem, and combining this with asymptotic completeness will answer the question of well-posedness for the system \bref{wave equation +}, \bref{wave equation -}, \bref{eq:227intro1}, \bref{eq:228intro1}. We also find that it is only in the context where solutions to \bref{wave equation +}, \bref{wave equation -} are studied on the entirety of the exterior region that the constraints \bref{eq:227intro1}, \bref{eq:228intro1} are sufficient to determine $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ completely from $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ and vice versa. Scattering necessarily involves considering solutions globally on the exterior. These considerations are the subject of \textbf{Theorem 3}. \\ \indent A corollary to our main results is that one may formulate a scattering statement for a combined pair $(\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha,\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})$ satisfying the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} and the constraints \bref{eq:227intro1}, \bref{eq:228intro1} (this is \textbf{Corollary 1}, see \Cref{subsection 4.4 Corollary 1: mixed scattering}). 
One can then hope that such a scattering statement would provide a bridge towards scattering for the full linearised Einstein equations, taking into account \bref{example2} relating $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ and its counterpart relating $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ to $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$. At the end of this introduction, we remark on how to formally derive a conservation law at the level of the shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ which excludes the possibility of superradiant reflection (see \bref{conservation law} of \Cref{subsubsection 1.3.4 corollary}). This will be treated in detail in the upcoming \cite{M2050} as part of a complete scattering theory for the linearised Einstein equations in double null gauge. \subsection{Scattering maps}\label{IntroResults} The following are preliminary statements of the results of this work, with detailed statements to follow in the body of the paper (see \cref{section 4 main theorems}). \subsubsection{Scattering for the Regge--Wheeler equation}\label{subsubsection 1.3.1 scattering for RW} We begin by stating the result for the Regge--Wheeler equation \bref{RWintro} (we omit the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$ in what follows). We show that a solution arising from Cauchy data with initially finite $T$-energy gives rise to a set of radiation fields in the limit towards $\mathscr{I}^+, \mathscr{H}^+$, from which the solution can be recovered. The choice of the Cauchy surface does not affect the fact that the flux of the $T$-energy defines a Hilbert space norm on Cauchy data.
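Schematically, the origin of this conservation can be seen directly from \bref{RWintro}. Multiplying \bref{RWintro} by $2\partial_t\Psi$, where we take $\partial_t$ to correspond to $\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4$ (a choice of normalisation made here for illustration), using that $\partial_t$ annihilates the $r$-dependent coefficients, and suppressing commutator terms and angular divergences which vanish upon integration over the spheres, one finds for real-valued $\Psi$
\begin{align*}
\Omega\slashed{\nabla}_3\big(|\Omega\slashed{\nabla}_4\Psi|^2\big)+\Omega\slashed{\nabla}_4\big(|\Omega\slashed{\nabla}_3\Psi|^2\big)+\big(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\big)\Big(\Omega^2|\slashed{\nabla}\Psi|^2+V|\Psi|^2\Big)\approx 0.
\end{align*}
Integrating this identity over the region bounded by two Cauchy surfaces, or by a Cauchy surface and portions of $\mathscr{H}^+$ and $\mathscr{I}^+$, yields the conservation of the $T$-energy flux.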
For the surface $\overline\Sigma=\{t=0\}$, this flux is given by \begin{align}\label{RWfluxT} \big\|(\Psi|_{\overline\Sigma},\slashed{\nabla}_{n_{\overline{\Sigma}}}\Psi|_{\overline\Sigma})\big\|^2_{\mathcal{E}^T_{\overline\Sigma}}=\int_{\overline\Sigma}dr\sin\theta d\theta d\phi\;|\slashed{\nabla}_{n_{\overline\Sigma}}\Psi|^2+\Omega^2|\slashed{\nabla}_r\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2. \end{align} Conservation of the $T$-energy suggests Hilbert space norms on $\mathscr{I}^+, \mathscr{H}^+$: \begin{align}\label{RWfluxIH} \|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2=\int_{\mathscr{I}^+}du\sin\theta d\theta d\phi\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2,\qquad\qquad \|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\overline{\mathscr{H}^+}}}^2=\int_{\mathscr{H}^+}dv\sin\theta d\theta d\phi\;|\partial_v\bm{\uppsi}_{\mathscr{H}^+}|^2. \end{align} The Hilbert spaces $\mathcal{E}^T_{\overline{\Sigma}}, \mathcal{E}^T_{\overline{\mathscr{H}^+}},\mathcal{E}^T_{\mathscr{I}^+}$ are defined to be the completions of smooth, compactly supported data under the norms defined in \bref{RWfluxT}, \bref{RWfluxIH}, and the spaces $\mathcal{E}^T_{\overline{\mathscr{H}^-}}, \mathcal{E}^T_{\mathscr{I}^-}$ are defined analogously. \begin{theorem*}\label{Theorem 1} Forward evolution under the Regge--Wheeler equation \bref{RWintro} extends to a unitary Hilbert space isomorphism \begin{align} \mathscr{F}^+:\mathcal{E}^T_{\overline\Sigma} \longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^T_{\mathscr{I}^+}. \end{align} A similar statement holds for scattering towards $\mathscr{H}^-, \mathscr{I}^-$. As a corollary, we obtain the Hilbert space isomorphism \begin{align} \mathscr{S}:\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^T_{\mathscr{I}^-}\longrightarrow\mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^T_{\mathscr{I}^+}.
\end{align} \end{theorem*} The precise statement of this result is contained in \Cref{forwardRW,,backwardRW,,RW isomorphisms} of \Cref{subsection 4.1 Theorem 1}.\\ \indent Note that Theorem 1 can be applied to the study of scattering for the linearised Einstein equations in the Regge--Wheeler gauge; see also the recent \cite{TruongConformal}. \subsubsection{Scattering for the Teukolsky equations}\label{subsubsection 1.3.2 scattering for teukolsky} Given $\alpha$ or $\underline\alpha$ solving the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, the weighted null derivatives $\Psi, \underline\Psi$ defined by \bref{transport}, \bref{transport 2} satisfy the Regge--Wheeler equation \bref{RWintro}, so we can try to use Theorem 1 to construct a scattering theory for $\alpha, \underline\alpha$ using the spaces of scattering states associated to \bref{RWintro}: \\ \indent Let $(\upalpha,\upalpha')$, $(\underline\upalpha,\underline\upalpha')$ be Cauchy data for \bref{wave equation +}, \bref{wave equation -} respectively on $\overline{\Sigma}$ and define \begin{align} \|(\upalpha,\upalpha')\|^2_{\mathcal{E}^{T,+2}_{\overline\Sigma}}:=\|(\Psi,\slashed{\nabla}_{n_{\overline\Sigma}}\Psi)\|^2_{\mathcal{E}^T_{\overline{\Sigma}}},\qquad\qquad\|(\underline\upalpha,\underline\upalpha')\|^2_{\mathcal{E}^{T,-2}_{\overline\Sigma}}:=\|(\underline\Psi,\slashed{\nabla}_{n_{\overline\Sigma}}\underline\Psi)\|^2_{\mathcal{E}^T_{\overline{\Sigma}}}. \end{align} The expressions $ \|\;\|^2_{\mathcal{E}^{T,+2}_{\overline\Sigma}}, \|\;\|^2_{\mathcal{E}^{T,-2}_{\overline\Sigma}}$ indeed turn out to be norms on smooth, compactly supported data sets on $\overline\Sigma$, and thus they define Hilbert space norms on the completions of such data.
Note that the values on $\overline\Sigma$ of $\Psi,\underline\Psi$ and their derivatives can be computed locally using the Teukolsky equations \bref{wave equation +}, \bref{wave equation -}, out of higher order derivatives of the initial data $(\upalpha,\upalpha')$, $(\underline\upalpha,\underline\upalpha')$ on $\overline\Sigma$.\\ \indent As mentioned earlier, the energies defining the Hilbert spaces of scattering states for the Teukolsky equations stem from the $T$-energy associated to the Regge--Wheeler equations. Remarkably, on $\mathscr{I}^\pm, \mathscr{H}^\pm$, the radiation fields of $\Psi, \underline\Psi$ are related to those of $\alpha, \underline\alpha$ by tangential derivatives, and it is possible to find meaningful expressions for the corresponding norms on $\mathscr{I}^\pm, \overline{\mathscr{H}^\pm}$ directly in terms of the radiation fields of $\alpha, \underline\alpha$. \begin{theorem*}\label{Theorem 2} For the Teukolsky equations \bref{wave equation +}, \bref{wave equation -} of spins $\pm2$, evolution from smooth, compactly supported data on a Cauchy surface extends to unitary Hilbert space isomorphisms: \begin{align} {}^{(+2)}\mathscr{F}^{+}:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}},\qquad\qquad{}^{(-2)}\mathscr{F}^{+}:\mathcal{E}^{T,-2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}},\\ {}^{(+2)}\mathscr{F}^{-}:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}},\qquad\qquad{}^{(-2)}\mathscr{F}^{-}:\mathcal{E}^{T,-2}_{\overline\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}. 
\end{align} The spaces of past/future scattering states $\mathcal{E}^{T,\pm2}_{\mathscr{I}^\pm},\mathcal{E}^{T,\pm2}_{\mathscr{H}^\pm},$ are the Hilbert spaces obtained by completing suitable smooth, compactly supported data on $\mathscr{I}^\pm, \mathscr{H}^\pm$ under the corresponding norms in the following: \begin{changemargin}{-1cm}{2cm} \begin{center} \setstretch{1.5} \begin{tikzpicture}[scale=0.6,on grid] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[align=center,yshift=15,xshift=15]{$\Big\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(2M\int^{\infty}_v d\bar{v}e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\right)\Big\|^2_{L^2(\overline{\mathscr{H}^+})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\\$+\Big\|6M\partial_v\left(2M\int^{\infty}_v d\bar{v}e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\right)\Big\|_{L^2(\overline{\mathscr{H}^+})}^2\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\;\qquad$} node[rotate=45,below]{$\overline{\mathscr{H}^+}$} (Itop) ; \draw (Ileft) -- node[yshift=-15,xshift=15]{$\Big\|2M\left(-2(2M\partial_u)+3(2M\partial_u)^2-(2M\partial_u)^3\right)2M\Omega^{-2}\alpha\Big\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=-45,above]{$\overline{\mathscr{H}^-}$} (Ibot) ; \draw[dash dot dot] (Ibot) -- node[align=center][yshift=-10,xshift=-15]{$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Big\|6M\upalpha_{\mathscr{I}^-}\Big\|^2_{L^2(\mathscr{I}^-)}$\\[1mm] $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(\int^{v}_{-\infty}\upalpha_{\mathscr{I}^-} d\bar{v}\right)\right\|^2_{L^2(\mathscr{I}^-)}$} 
node[rotate=45,above]{$\mathscr{I}^-$}(Iright) ; \draw[dash dot dot] (Iright) -- node[yshift=10,xshift=-10]{$\qquad\qquad\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad\left\|(\partial_u)^3\upalpha_{\mathscr{I}^+}\right\|^2_{L^2(\mathscr{I}^+)}$} node[rotate=-45,below]{$\mathscr{I}^+$}(Itop) ; \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \draw[black] (Ileft) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \end{tikzpicture} \end{center} \end{changemargin} \begin{changemargin}{-1.4cm}{2cm} \begin{center} \begin{tikzpicture}[scale=0.6] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[yshift=20,xshift=25]{$\Big\|2M\left(2(2M\partial_v)+3(2M\partial_v)^2+(2M\partial_v)^3\right)2M\Omega^{-2}\underline\alpha\Big\|^2_{L^2(\overline{\mathscr{H}^+})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=45,below]{$\overline{\mathscr{H}^+}$} (Itop) ; \draw (Ileft) -- node[align=center][yshift=-10,xshift=10]{$\Big\|6M\partial_u\left(2M\int^{u}_{-\infty}d\bar{u}e^{\frac{1}{2M}(u-\bar{u})}\Omega^2\underline\alpha\right)\Big\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$\\[1mm] $+\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(2M\int^{u}_{-\infty}d\bar{u}e^{\frac{1}{2M}(u-\bar{u})}\Omega^2\underline\alpha\right)\right\|^2_{L^2(\overline{\mathscr{H}^-})}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad$} node[rotate=-45,above]{$\overline{\mathscr{H}^-}$} (Ibot) ; \draw[dash dot dot] (Ibot) -- 
node[yshift=-12,xshift=-12]{$\qquad\qquad\;\;\;\;\;\;\;\;\;\;\;\;\;\;\qquad\qquad\left\|(\partial_v)^3\underline\upalpha_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2$} node[rotate=45,above]{$\mathscr{I}^-$}(Iright) ; \draw[dash dot dot] (Iright) -- node[align=center][yshift=10,xshift=-20]{ $\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left\|(\mathring{\slashed{\Delta}}-2)(\mathring{\slashed{\Delta}}-4)\left(\int^{u}_{-\infty}\underline\upalpha_{\mathscr{I}^+} d\bar{u}\right)\right\|^2_{L^2(\mathscr{I}^+)}$ \\$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Big\|6M\underline\upalpha_{\mathscr{I}^+}\Big\|^2_{L^2(\mathscr{I}^+)}$} node[rotate=-45,below]{$\mathscr{I}^+$}(Itop) ; \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \draw[black] (Ileft) circle (3pt); \end{tikzpicture} \end{center} \end{changemargin} The maps ${}^{(\pm2)}\mathscr{F}^{\pm}$ lead to the Hilbert space isomorphisms \begin{align} \begin{split} &\mathscr{S}^{+2}: \mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}},\\ &\mathscr{S}^{-2}: \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}. \end{split} \end{align} \end{theorem*} \begin{remark*} The scattering maps of Theorem 2 answer the questions \ref{QI}, \ref{QII}, \ref{QIII} posed at the beginning of the introduction.
In particular, the issue of asymptotic completeness is answered in the sense that the spaces $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$ include all smooth, compactly supported Cauchy data for \bref{wave equation +}, \bref{wave equation -} as dense subspaces. \end{remark*} \begin{remark*}\label{introduction regular frame norm} As the Eddington--Finkelstein coordinate system degenerates at the bifurcation sphere $\mathcal{B}$, it is necessary to use a regular coordinate system, such as the Kruskal coordinates $U=e^{-\frac{u}{2M}}, V=e^{\frac{v}{2M}}$, for which $2M\partial_u=-U\partial_U$ and $2M\partial_v=V\partial_V$. In this coordinate system we see that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{AVBV}\sim V^{-2}\Omega^2\alpha \sim U^2\Omega^{-2}\alpha$ and $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{W}_{AUBU}\sim V^{2}\Omega^{-2}\underline\alpha \sim U^{-2}\Omega^{2}\underline\alpha$ extend regularly to the bifurcation sphere. The integrands defining $\mathcal{E}^{T,\pm2}_{\mathscr{H}^\pm}$ also extend regularly to the bifurcation sphere $\mathcal{B}$. For example, \begin{align}\label{introduction regular frame expression H-} &\left(-2(2M\partial_u)+3(2M\partial_u)^2-(2M\partial_u)^3\right)\Omega^{-2}\alpha=U\partial_U^3\, U^{2}\Omega^{-2}\alpha, \end{align} \begin{align}\label{introduction regular frame expression I-} \int^{\infty}_v e^{\frac{1}{2M}({v}-\bar{v})}\Omega^2\alpha\;d\bar{v}=2MV\int_{V}^{\infty} \overline{V}^{-2}\Omega^2\alpha\; d\overline{V}. \end{align} We take $L^2(\overline{\mathscr{H}^+})$ to be defined with respect to the measure $dv\sin\theta d\theta d\phi$, and we define $ L^2({\mathscr{I}^+})$ via the measure $du\sin\theta d\theta d\phi$.
Analogous statements apply to $\mathscr{I}^-, \overline{\mathscr{H}^-}$.\\ \indent The detailed statement of Theorem 2 is contained in \Cref{+2 future forward scattering,,+2 future backward scattering,,+2 past forward scattering,,scatteringthm+2} of \Cref{subsubsection 4.2.1 scattering for the +2 equation}, and \Cref{-2 future forward scattering,,-2 future backward scattering,,-2 past forward scattering,,scatteringthm-2} of \Cref{subsubsection 4.2.2 Scattering for the -2 equation}. \end{remark*} \subsubsection{Teukolsky--Starobinsky correspondence}\label{subsubsection 1.3.3 TS} \indent Finally, concerning the Teukolsky--Starobinsky correspondence relating $\alpha, \underline\alpha$, we may summarise our result as follows: \begin{theorem*}\label{Theorem 3} The constraints \bref{eq:227intro1}, \bref{eq:228intro1} can be used to define unitary Hilbert space isomorphisms: \begin{align} \mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^+},\qquad\qquad\mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}, \end{align} \begin{align} \mathcal{TS}=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}. 
\end{align} Applying $\mathcal{TS}$ to scattering data, one can associate to a solution to the $+2$ Teukolsky equation \bref{wave equation +} arising from smooth scattering data in $\mathcal{E}^{T,+2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ a unique solution $\underline\alpha$ of the $-2$ Teukolsky equation \bref{wave equation -} with smooth scattering data in $\mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ such that \bref{eq:227intro1}, \bref{eq:228intro1} are satisfied everywhere on the exterior region of Schwarzschild. \end{theorem*} The map $\mathcal{TS}_{\mathscr{I}^+}$ is realised by taking the limit of constraint \bref{eq:227intro1} near $\mathscr{I}^+$ and inverting either side of the constraint on smooth, compactly supported scattering data, which are by definition dense subsets of $\mathcal{E}^{T,\pm2}_{\mathscr{I}^+}$. The map $\mathcal{TS}_{\mathscr{H}^+}$ is obtained analogously by studying constraint \bref{eq:228intro1} near $\overline{\mathscr{H}^+}$. Note that in order to obtain a unique smooth radiation field $\upalpha_{\mathscr{H}^+}$ for the +2 Teukolsky equation \bref{wave equation +} on the event horizon starting from a radiation field $\underline\upalpha_{\mathscr{H}^+}$ for the $-2$ equation \bref{wave equation -}, it is necessary to specify $\underline\upalpha_{\mathscr{H}^+}$ on the entirety of $\overline{\mathscr{H}^+}$, and vice versa for $\mathscr{I}^+$. Thus the isomorphisms $\mathcal{TS}_{\mathscr{I}^+}$, $\mathcal{TS}_{\mathscr{H}^+}$ can only be defined on spaces of scattering data that determine solutions to \bref{wave equation +}, \bref{wave equation -} \textit{globally} on the Schwarzschild exterior.\\ \indent In particular, note that spacetimes of Robinson--Trautman type are excluded from our scattering theory, see \Cref{section 9 TS correspondence} and \Cref{Appendix A Robinson--Trautman}. 
The Robinson--Trautman spacetimes have the property that one of $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ or $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ is non-trivial while the other vanishes, and as such they would pose a counterexample to the Teukolsky--Starobinsky correspondence were it not properly formulated. We show that this possibility is eliminated when finite-energy scattering is considered globally on the entirety of the exterior of the Schwarzschild solution.\\ \indent The detailed statement of Theorem 3 is contained in \Cref{Theorem 3 detailed statement} of \Cref{subsection 4.3 the Teukolsky--Starobinsky identities}. See \Cref{section 9 TS correspondence} for the detailed treatment. \subsubsection{A preview of scattering for the full linearised Einstein equations}\label{subsubsection 1.3.4 corollary} Combined with Theorem 2, Theorem 3 allows us to bridge the scattering theory we build for the Teukolsky equations towards scattering for the full system of linearised Einstein equations in double null gauge, via the following corollary: \begin{corollary*}\label{Corollary 1} Given a smooth, compactly supported $\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ such that $\int_{-\infty}^\infty d\bar{v} \; \upalpha_{\mathscr{I}^-}=0$, and an $\underline\upalpha_{\mathscr{H}^-}$ such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}$ is smooth, compactly supported on $\overline{\mathscr{H}^-}$, there exists a unique smooth pair $(\alpha, \underline\alpha)$ on the exterior region of Schwarzschild, satisfying equations \bref{wave equation +}, \bref{wave equation -} respectively, where $\alpha$ realises $\upalpha_{\mathscr{I}^-}$ as its radiation field on ${\mathscr{I}^-}$, $\underline\alpha$ realises $\underline\upalpha_{\mathscr{H}^-}$ as its radiation field on $\overline{\mathscr{H}^-}$, such that constraints \bref{eq:227intro1} and \bref{eq:228intro1} are satisfied.
Moreover, $\alpha, \underline\alpha$ induce smooth radiation fields $\underline{\upalpha}_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ in $\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}, \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ respectively. This extends to a unitary Hilbert-space isomorphism: \begin{align} \mathscr{S}^{-2,+2}:\mathcal{E}^{T,+2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}\oplus\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}. \end{align} \end{corollary*} \begin{center} \begin{tikzpicture}[->,scale=0.7, arrow/.style={ color=black, draw=blue,thick, -latex, font=\fontsize{8}{8}\selectfont}, ] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate [label={$i^+$}] +(180:4) coordinate (Ileft) coordinate [label=180:{$\mathcal{B}\;$}] +(0:4) coordinate (Iright) coordinate [label=0:{$\;i^0$}] +(270:4) coordinate (Ibot) coordinate [label=-90:{$i^-$}] ; \draw[arrow] ($(Itop)+(-90:3.6cm)$) to [in=-25,out=90] ($(Itop)+(-135:2.5cm)$); \draw[arrow] ($(Itop)+(-90:3.6cm)$) to [in=205,out=90]($(Itop)+(-45:2.5cm)$); \draw[arrow] ($(Ibot)+(135:2.7cm)$) to [out=10,in=-90] ($(Ibot)+(90:3.6cm)$); \draw[arrow] ($(Ibot)+(45:2.7cm)$) to [out=170,in=-90] ($(Ibot)+(90:3.6cm)$); \draw (Ileft) -- node[yshift=4mm,xshift=-1mm]{$\upalpha_{\mathscr{H}^+}$} (Itop) ; \draw[dash dot dot] (Iright) -- node[yshift=4mm,xshift=1.mm]{$\underline\upalpha_{\mathscr{I}^+}$}(Itop) ; \node[draw] at ($(Itop)+(-90:4cm)$) {$(\alpha,\underline\alpha)$}; \draw[dash dot dot] (Iright) -- node[yshift=-4mm,xshift=1.mm]{$\upalpha_{\mathscr{I}^-}$}(Ibot) ; \draw (Ileft) -- node[yshift=-4mm,xshift=-1mm]{$\underline\upalpha_{\mathscr{H}^-}$} (Ibot) ; \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \draw[black] (Ileft) 
circle (3pt); \end{tikzpicture} \end{center} \Cref{Corollary 1} is stated again as \Cref{corollary to be proven} of \Cref{subsection 4.4 Corollary 1: mixed scattering}. The proof is contained in \Cref{subsection 9.4 mixed scattering}.\\ \indent To apply this result to scattering for the linearised Einstein equations, the strategy will be to start from data for the metric on $\mathscr{H}^-, \mathscr{I}^-$ (or $\mathscr{H}^+, \mathscr{I}^+$), obtain data for the shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}$ and hence $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}$ on $\mathscr{H}^+$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline{\chi}}}$ and hence $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ on $\mathscr{I}^+$, then use \Cref{Corollary 1} to obtain scattering data and solutions to \cref{wave equation +} and \cref{wave equation -}, and conclude by constructing the remaining quantities using the linearised Bianchi and null structure equations. This will be the subject of a forthcoming sequel to this paper \cite{M2050}.\\ \indent We can give a preview of the scattering results of the full system: assume we have a solution to the linearised Einstein equations defined on the whole of the exterior region (see \Cref{subsection 2.2 Linearised Einstein equations in double null gauge} for a full list of equations), such that $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ induce radiation fields $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{\mathscr{I}^-}\in\mathcal{E}^{T,+2}_{\mathscr{I}^-}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\upalpha}_{{\mathscr{H}^+}}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\upalpha}_{{\mathscr{H}^-}}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$. 
Using \bref{example2} and its counterpart in the 4-direction, we can assert that the radiation fields belonging to the linearised shears $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat\chi}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ must satisfy \begin{align}\label{boundedness shears} \begin{split} &\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\right\|_{L^2(\mathscr{I}^+)}^2+\left\|6M\partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\right\|_{L^2(\mathscr{I}^+)}^2\\ &\qquad\qquad +\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\right\|_{L^2(\overline{\mathscr{H}^+})}^2+\left\|6M\partial_v\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\right\|_{L^2(\overline{\mathscr{H}^+})}^2\\ &\qquad\qquad\qquad\qquad\qquad=\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2+\left\|6M\partial_v\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\right\|_{L^2(\mathscr{I}^-)}^2\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left\|\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\right\|_{L^2(\overline{\mathscr{H}^-})}^2+\left\|6M\partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\right\|_{L^2(\overline{\mathscr{H}^-})}^2. \end{split} \end{align} \indent The fact that time translation and angular momentum operators commute with $\Box_g$ means that we can project scattering data on individual azimuthal modes and consider solutions in frequency space. 
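The mechanism behind the reduction to fixed frequency can be sketched as follows (the constant $c_\ell$ below is our notation, introduced only for this illustration). On the tensorial harmonics of angular frequency $\ell$ supporting the linearised shears, the operator $\left(\mathring{\slashed{\Delta}}-2\right)\left(\mathring{\slashed{\Delta}}-4\right)$ acts by multiplication by a constant $c_\ell$, which is non-vanishing for $\ell\geq2$, while under the Fourier transform in time the operators $6M\partial_u, 6M\partial_v$ act by multiplication by $\pm 6iM\omega$ (up to sign conventions). At fixed $(\omega,m,\ell)$, each pair of terms on either side therefore combines into a common positive factor multiplying the corresponding $L^2_\omega$ norm,
\begin{align*}
\left(c_\ell^2+36M^2\omega^2\right)\left\|\,\cdot\,\right\|^2_{L^2_\omega},\qquad c_\ell^2+36M^2\omega^2>0\quad\text{for }\ell\geq2,
\end{align*}
and this factor may be divided out mode by mode.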
Since $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}$ are supported on $\ell\geq2$, and in view of the unitarity of \bref{boundedness shears}, we can translate \bref{boundedness shears} in terms of fixed frequency, fixed azimuthal mode solutions to the following statement: \begin{align} \Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+,\;\omega,m,\ell}\Big\|_{L^2_\omega }^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+,\;\omega,m,\ell}\Big\|^2_{L^2_\omega }\;=\;\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-,\;\omega,m,\ell}\Big\|_{L^2_\omega }^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-,\;\omega,m,\ell}\Big\|^2_{L^2_\omega }. \end{align} Resumming in $\ell_{m,\ell}^2$ and using Plancherel, we obtain the identity \begin{align}\label{conservation law} \Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{H}^+}\Big\|_{L^2(\overline{\mathscr{H}^+})}^2+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{I}^+}\Big\|_{L^2(\mathscr{I}^+)}^2\;=\; \Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\upchi}}_{\mathscr{H}^-}\Big\|^2_{L^2(\overline{\mathscr{H}^-})}+\Big\|\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\upchi}}_{\mathscr{I}^-}\Big\|^2_{L^2(\mathscr{I}^-)}. 
\end{align} The statement \bref{conservation law} above ties in with the work by Holzegel \cite{Holzegel_2016}, where a set of conservation laws is derived for the full system of linearised Einstein equations on the Schwarzschild exterior \bref{SchwMetric} (using purely physical-space methods).\\ \indent Note that in particular, for past scattering data that is vanishing on $\overline{\mathscr{H}^-}$, the identity \bref{conservation law} has the interpretation that the gravitational energy radiated to $\mathscr{I}^+$ is bounded \underline{with constant 1} by the incoming gravitational energy radiated from $\mathscr{I}^-$, i.e.~there is no superradiant amplification of reflected gravitational radiation on the Schwarzschild exterior. \subsection{Outline of the paper}\label{subsection 1.4 outline} This paper is organised as follows: We review the linearised Einstein equations in double null gauge around the Schwarzschild spacetime in \Cref{section 2 preliminaries}. In \Cref{TRW} we introduce the Teukolsky equations, the Regge--Wheeler equations and derive important identities connecting the equations. Detailed statements of the results of this work are presented in \Cref{section 4 main theorems}, and then the scattering theory of the Regge--Wheeler equations is studied in \Cref{section 5 scattering theory for RW}. We develop scattering for the Teukolsky equations by first working out the necessary estimates to understand the asymptotic behaviour in forward evolution for both equations in \Cref{section 6} and \Cref{section 7}. Backwards scattering for both equations is treated in \Cref{section 8 constructing the scattering maps}, followed by the study of the constraints \bref{eq:227intro1} and \bref{eq:228intro1} in \Cref{section 9 TS correspondence}. \Cref{Appendix A Robinson--Trautman} is concerned with Robinson--Trautman spacetimes, and \Cref{Appendix B Double null guage} is a brief review of the double null gauge.
\setcounter{tocdepth}{2} \subsection*{Acknowledgements} The author would like to express his gratitude to his supervisors Mihalis Dafermos, Claude Warnick and Malcolm J. Perry for introducing him to the fascinating area of scattering theory in general relativity, and for their unwavering support throughout the undertaking of this project. The author would like to thank Dejan Gajic, Leonhard Kehrberger, Rita Teixeira da Costa and Yakov Shlapentokh-Rothman for stimulating discussions and helpful remarks. The author acknowledges support by the EPSRC grant EP/L016516/1. \section{Preliminaries}\label{section 2 preliminaries} \subsection{The Schwarzschild exterior in a double null gauge}\label{subsection 2.1 schwarzschild in dng} \indent Denote by $\mathscr{M}$ the exterior of the maximally extended Schwarzschild spacetime. Using Kruskal coordinates, this is the manifold with corners \begin{align} \mathscr{M}=\{(U,V,\theta^A)\in(-\infty,0]\times [0,\infty)\times S^2\} \end{align} equipped with the metric \begin{align}\label{metric Kruskal} ds^2=-\frac{32M^3}{r(U,V)}e^{-\frac{r(U,V)}{2M}}dUdV+r(U,V)^2\gamma_{AB}d\theta^A d\theta^B. \end{align} The function $r(U,V)$ is determined by $-UV=\left(\frac{r}{2M}-1\right)e^{\frac{r}{2M}}$, $(\theta^A)$ is a coordinate system on $S^2$ and $\gamma_{AB}$ is the standard metric on the unit sphere $S^2$. The time-orientation of $\mathscr{M}$ is defined by the vector field $\partial_U+\partial_V$. The boundary of $\mathscr{M}$ consists of the two null hypersurfaces \begin{align} \mathscr{H}^+&=\{0\}\times(0,\infty)\times S^2,\\ \mathscr{H}^-&=(-\infty,0)\times \{0\}\times S^2, \end{align} and the 2-sphere $\mathcal{B}$ where $\mathscr{H}^+$ and $\mathscr{H}^-$ bifurcate: \begin{align} \mathcal{B}=\{U,V=0\}\cong S^2 . \end{align} We define $\overline{\mathscr{H}^+}=\mathscr{H}^+\cup \mathcal{B}$, $\overline{\mathscr{H}^-}=\mathscr{H}^-\cup \mathcal{B}$. 
\\ \indent The interior of $\mathscr{M}$ can be covered with the familiar Schwarzschild coordinates $(t,r,\theta^A)$ and the metric takes the form \bref{SchwMetric}, i.e. \begin{align} ds^2=-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2+r^2\gamma_{AB}d\theta^Ad\theta^B. \end{align} Let $\Omega^2=\left(1-\frac{2M}{r}\right)$. It will be convenient to work instead in Eddington--Finkelstein coordinates \begin{align}\label{EF null coordinates} u=\frac{1}{2}(t-r_*),\qquad\qquad\qquad v=\frac{1}{2}(t+r_*), \end{align} where $r_*$ is defined up to a constant by $\frac{dr_*}{dr}=\frac{1}{\Omega^2}$. The coordinates $(u,v,\theta^A)$ also define a double null foliation (see Appendix B) of the interior of $\mathscr{M}$ since the metric takes the form \begin{align} ds^2=-4\left(1-\frac{2M}{r}\right)dudv+r(u,v)^2(d\theta^2+\sin^2\theta d\phi^2). \end{align} In particular the null frame defined by the coordinates \bref{EF null coordinates} is given by (see Appendix B): \begin{align} e_3=\frac{1}{\Omega}\partial_u,\qquad\qquad e_4=\frac{1}{\Omega}\partial_v. \end{align} We may relate $U,V$ to $u,v$ after fixing the residual freedom in defining $t,r_*$ by \begin{align}\label{Kruskal} U=-e^{-\frac{u}{2M}},\qquad\qquad V=e^{\frac{v}{2M}}. \end{align} Note that the intersections of null hypersurfaces of constant $u,v$ are spheres with metric $\slashed{g}_{AB}:=r^2\gamma_{AB}$. We denote these spheres by $S^2_{u,v}$.\\ \indent The $(u,v)$-coordinate system degenerates on $\overline{\mathscr{H}^+}$ and $\overline{\mathscr{H}^-}$ where $u=\infty,v=-\infty$ respectively. To compensate for this we can use the Kruskal coordinates to introduce weighted quantities in the coordinates $(u,v,\theta^A)$ that are regular on $\mathscr{H}^\pm$.
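To illustrate how such weighted quantities arise (a sketch of a direct computation from \bref{Kruskal}, using $\Omega^2=\frac{2M}{r}\left(-UV\right)e^{-\frac{r}{2M}}$): from $\partial_u=-\frac{U}{2M}\partial_U$ and $\partial_v=\frac{V}{2M}\partial_V$ we find
\begin{align*}
\frac{1}{\Omega}e_3=\frac{1}{\Omega^2}\partial_u=\frac{r e^{\frac{r}{2M}}}{4M^2 V}\,\partial_U,\qquad\qquad \Omega e_4=\partial_v=\frac{V}{2M}\,\partial_V,
\end{align*}
both of which are manifestly regular on $\mathscr{H}^+=\{U=0,\,V>0\}$; the analogous computation with the roles of $U,V$ exchanged shows that $\frac{1}{\Omega}e_4, \Omega e_3$ are regular on $\mathscr{H}^-$.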
We note already at this stage that the regularity of $\partial_U,\partial_V$ on the event horizons implies that $\frac{1}{\Omega}e_3, \Omega e_4$ are regular on $\mathscr{H}^+$ and $\frac{1}{\Omega}e_4, \Omega e_3$ are regular on $\mathscr{H}^-$ (but not $\overline{\mathscr{H}^\pm}$, which include $\mathcal{B}$).\\ \indent We denote by $\mathscr{C}_{u^*}$ the outgoing null hypersurface of constant $u=u^*$, and similarly $\underline{\mathscr{C}}_{v^*}$ denotes the ingoing null hypersurface $v=v^*$; define $\mathscr{C}_{u^*}\cap[v_1,v_2]$ to be the subset of $\mathscr{C}_{u^*}$ for which $v\in[v_1,v_2]$, $\underline{\mathscr{C}}_v\cap[u_1,u_2]$ denotes the subset of $\underline{\mathscr{C}}_v$ for which $u\in[u_1,u_2]$. Let ${\Sigma}$ be the spacelike surface $\{t=0\}$ and let $\overline{\Sigma}=\Sigma\cup\mathcal{B}$ be the topological closure of $\Sigma$ in $\mathscr{M}$. $\overline{\Sigma}$ is a smooth Cauchy surface for $\mathscr{M}$ which connects $\mathcal{B}$ with "spacelike infinity"; in Kruskal coordinates it is given by $\{U+V=0\}$. We also work with a spacelike hypersurface $\Sigma^*$ intersecting $\mathscr{H}^+$ to the future of $\mathcal{B}$, defined as follows: let \begin{align} t^*=t+2M\log\left(\frac{r}{2M}-1\right). \end{align} The function $t^*$ can be extended to $\mathscr{H}^\pm$ to define a smooth function on all of $\mathscr{M}$, and we define $\Sigma^*$ by \begin{align} \Sigma^*=\{t^*=0\} \end{align} \noindent Note that $\Sigma^*$ intersects $\mathscr{H}^+$ at $v=0$ and asymptotes to spacelike infinity. Define $\mathscr{H}^+_{\geq 0}:=\mathscr{H}^+\cap J^+(\Sigma^*)$. We will occasionally use the notation $x:=1-\frac{1}{\Omega^2}$. We denote the spacetime region bounded by $\mathscr{C}_{u_0}\cap[v_0,v_1], \mathscr{C}_{u_1}\cap[v_0,v_1], \underline{\mathscr{C}}_{v_0}\cap[u_0,u_1], \underline{\mathscr{C}}_{v_1}\cap[u_0,u_1]$ by $\mathscr{D}^{u_1,v_1}_{u_0,v_0}$.
We also denote the spacetime region bounded by $\mathscr{C}_u,\underline{\mathscr{C}}_v, \Sigma^*$ by $\mathscr{D}^{u,v}_{\Sigma^*}$. \begin{center} \begin{tikzpicture}[scale=1] \node (I) at ( 0,0) {$\mathscr{D}^{u_1,v_1}_{u_0,v_0}$}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \path (I) +(90:2) coordinate (Ictop) +(-90:2) coordinate (Icbot) +(180:2) coordinate (Icleft) +(0:2) coordinate (Icright) ; \draw (Ileft) -- node[rotate=45,below] {$u=\infty$} node[rotate=45,above]{$\mathscr{H}^+$} (Itop) ; \draw (Ileft) -- node[rotate=-45,above] {$v=-\infty$} node[rotate=-45,below]{$\mathscr{H}^-$}(Ibot) ; \draw[dash dot dot] (Ibot) -- node[rotate=45,above] {$u=-\infty$} node[rotate=45,below]{$\mathscr{I}^-$}(Iright) ; \draw[dash dot dot] (Iright) -- node[rotate=-45,below] {$v=\infty$} node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ; \draw(Icleft) --node[rotate=45,above] {$\mathscr{C}_{u_1}\cap[v_0,v_1]$} (Ictop); \draw(Ictop) -- node[rotate=-45,above] {$\underline{\mathscr{C}}_{v_1}\cap[u_0,u_1]$}(Icright); \draw(Icright) -- node[rotate=45,below] {$\mathscr{C}_{u_0}\cap[v_0,v_1]$}(Icbot); \draw(Icbot) -- node[rotate=-45,below] {$\underline{\mathscr{C}}_{v_0}\cap[u_0,u_1]$}(Icleft); \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale=0.4] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[rotate=45,above]{$\mathscr{H}^+_{\geq0}$} (Itop) ; \draw (Ileft) --
(Ibot) ; \draw[dash dot dot] (Ibot) -- (Iright) ; \draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ; \draw ($(Ileft)+(45:1.2)$) to[out=-0, in=165, edge node={node [below] {$\Sigma^*$}}] ($(Iright)$); \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black]($(Ileft)+(45:1.2)$) circle (3pt); \end{tikzpicture}\hspace{2cm}\begin{tikzpicture}[scale=0.4] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[rotate=45,above]{$\mathscr{H}^+$} (Itop) ; \draw (Ileft) --(Ibot) ; \draw[dash dot dot] (Ibot) -- (Iright) ; \draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ; \draw ($(Ileft)$) to[out=0, in=180, edge node={node [above] {$\Sigma$}}] ($(Iright)$); \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[white] (Ileft) circle (3pt); \draw[black] (Ileft) circle (3pt); \end{tikzpicture}\hspace{2cm}\begin{tikzpicture}[scale=0.4] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(-90:4) coordinate (Ibot) coordinate[label=-90:$i^-$] +(180:4) coordinate (Ileft) coordinate[label=180:$\mathcal{B}$] +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[rotate=45,above]{$\overline{\mathscr{H}^+}$} (Itop) ; \draw (Ileft) --(Ibot) ; \draw[dash dot dot] (Ibot) -- (Iright) ; \draw[dash dot dot] (Iright) -- node[rotate=-45,above]{$\mathscr{I}^+$}(Itop) ; \draw ($(Ileft)$) to[out=0, in=180, edge node={node [above] 
{$\overline{\Sigma}$}}] ($(Iright)$); \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Ibot) circle (3pt); \draw[black] (Ibot) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \draw[black] (Ileft) circle (3pt); \end{tikzpicture} \end{center} \subsubsection*{Null infinity $\mathscr{I}^\pm$} We define the notion of null infinity by directly attaching it as a boundary to $\mathscr{M}$. Define $\mathscr{I}^+,\mathscr{I}^-$ to be the manifolds \begin{align} \mathscr{I}^+,\mathscr{I}^-:=\mathbb{R}\times S^2 \end{align} and define $\overline{\mathscr{M}}$ to be the extension \begin{align} \overline{\mathscr{M}}=\mathscr{M}\cup\mathscr{I}^+\cup\mathscr{I}^-. \end{align} For sufficiently large $R$ and any open set $\mathcal{O}\subset\mathbb{R}\times S^2$, declare the sets $\mathcal{O}^+_R=(R,\infty]\times\mathcal{O}$ to be open in $\overline{\mathscr{M}}$, identifying $\mathscr{I}^+$ with the points $(u,\infty,\theta,\phi)$. To the set $\mathcal{O}_R^+$ we assign the coordinate chart $(u,s,\theta,\phi)\in \mathbb{R}\times[0,1)\times S^2$ via the map \begin{align} (u,v,\theta,\phi)\longrightarrow(u,\frac{R}{v},\theta,\phi), \end{align} where $(u,v,\theta,\phi)$ are the Eddington--Finkelstein coordinates we defined earlier. The limit $\lim_{v\longrightarrow\infty} (u,v,\theta,\phi)$ exists and is unique, and we use it via the above charts to fix a coordinate system $(u,\theta,\phi)$ on $\mathscr{I}^+$. The same can be repeated to define an atlas attaching $\mathscr{I}^-$ as a boundary to $\overline{\mathscr{M}}$. \subsubsection{$S^2_{u,v}$-projected connection and angular derivatives}\label{D1D2} We will be working primarily with tensor fields that are everywhere tangential to the $S^2_{u,v}$ spheres foliating $\mathscr{M}$. 
By this we mean any tensor fields of type $(k,l)$, $\digamma\in \mathcal{T}^{(k,l)}\mathscr{M}$ on $\mathscr{M}$ such that for any point $q=(u,v,\theta^A)\in\mathscr{M}$ we have $\digamma|_q\in \mathcal{T}^{(k,l)}_{(\theta^A)}S^2_{u,v}$. (Note that a vector $X^A\in \mathcal{T}_{(\theta^A)}S^2_{u,v}$ is canonically identified with a vector $X^a\in\mathcal{T}_q\mathscr{M}$ via the inclusion map, whereas we make the identification of a 1-form $\eta_A\in\mathcal{T}^*_{(\theta^A)}S^2_{u,v}$ as an element in the cotangent bundle of $\mathscr{M}$ by declaring that $\eta(X)=0$ if $X$ is in the orthogonal complement of $\mathcal{T}S^2_{u,v}$ under the spacetime metric \bref{metric Kruskal}.) We will refer to such tensor fields as "$S^2_{u,v}$-tangent" tensor fields in the following. It will also be convenient to work with an "$S^2_{u,v}$ projected" version of the covariant derivative belonging to the Levi-Civita connection of the metric \bref{SchwMetric}. We define these notions as follows:\\ \indent We denote by $\slashed{\nabla}_A$ (or sometimes simply $\slashed{\nabla}$) the covariant derivative on $S^2_{u,v}$ with the metric $\slashed{g}_{AB}$. Note that $r\slashed{\nabla}=\slashed{\nabla}_{\mathbb{S}^2}$ which we also denote by $\mathring{\slashed{\nabla}}$.\\ \indent For an $S^2_{u,v}$-tangent 1-form $\xi$, define $\slashed{\mathcal{D}}_1 \xi$ to be the pair of functions \begin{align} \slashed{\mathcal{D}}_1{\xi}=(\slashed{\text{div}}\xi,\slashed{\text{curl}}\xi), \end{align} where $\slashed{\text{div}}\xi=\slashed{\nabla}^A \xi_A$ and $\slashed{\text{curl}}\xi=\slashed{\epsilon}^{AB} \slashed{\nabla}_A\xi_B$. For an $S^2_{u,v}$-tangent symmetric 2-tensor $\Xi_{AB}$ we define $\slashed{\mathcal{D}}_2 \Xi$ to be the 1-form given by \begin{align} (\slashed{\mathcal{D}}_2 \Xi)_A=(\slashed{\text{div}}\,\Xi)_A=\slashed{\nabla}^B \Xi_{BA}.
\end{align} \indent We define the operator $\slashed{\mathcal{D}}^*_1 $ to be the $L^2({S^2_{u,v}})$-dual to $\slashed{\mathcal{D}}_1$. For scalars $(f,g)$ the 1-form $\slashed{\mathcal{D}}^*_1(f,g)$ is given by \begin{align} \slashed{\mathcal{D}}^*_1(f,g)=-\slashed{\nabla}_A f +\epsilon_{AB}\slashed{\nabla}^B g. \end{align} \indent Similarly we denote by $\slashed{\mathcal{D}}^*_2$ the $L^2_{S^2_{u,v}}$-dual to $\slashed{\mathcal{D}}_2$. For an $S^2_{u,v}$-tangent 1-form $\xi$ this is given by \begin{align} (\slashed{\mathcal{D}}^*_2\xi)_{AB}=-\frac{1}{2}\left(\slashed{\nabla}_A \xi_B+\slashed{\nabla}_B\xi_A-\slashed{g}_{AB}\slashed{\text{div}}\xi\right). \end{align} We also use the notation \begin{align} \begin{split} \mathring{\slashed{\mathcal{D}}}_1:=r\slashed{\mathcal{D}}_1,\qquad\qquad\mathring{\slashed{\mathcal{D}}^*_1}:=r\slashed{\mathcal{D}}^*_1,\\ \mathring{\slashed{\mathcal{D}}}_2:=r\slashed{\mathcal{D}}_2,\qquad\qquad\mathring{\slashed{\mathcal{D}}^*_2}:=r\slashed{\mathcal{D}}^*_2. \end{split} \end{align} For example, if $\xi$ is a 1-form on $S^2_{u,v}$ then \begin{align} \mathring{\slashed{\mathcal{D}}^*_2}\xi=-\frac{1}{2}\left(\mathring{\slashed{\nabla}}_A \xi_B+\mathring{\slashed{\nabla}}_B\xi_A-\slashed{g}_{AB}\mathring{\slashed{\nabla}}_C\xi^C\right). \end{align} and so on. Let $\xi$ be an $S^2_{u,v}$-tangent tensor field. We denote by $D\xi$ and $\underline{D}\xi$ the projected Lie derivative of $\xi$ in the 3- and 4-directions respectively. In EF coordinates we have \begin{align} (D\xi)_{A_1 A_2...A_n}=\partial_u(\xi_{A_1 A_2 ... A_n})\qquad (\underline{D}\xi)_{A_1 A_2...A_n}=\partial_v(\xi_{A_1 A_2 ... A_n}) \end{align} Similarly, we define $\slashed{\nabla}_3 \xi$ and $\slashed{\nabla}_4 \xi$ to be the projections of the covariant derivatives $\nabla_3 \xi$ and $\nabla_4 \xi$ to $S^2_{u,v}$. 
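As a consistency check on these definitions, one can verify directly (a short computation, sketched here) that for scalars $(f,g)$
\begin{align*}
\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1(f,g)=\left(\slashed{\text{div}}\,\slashed{\mathcal{D}}^*_1(f,g)\,,\,\slashed{\text{curl}}\,\slashed{\mathcal{D}}^*_1(f,g)\right)=\left(-\slashed{\Delta}f,-\slashed{\Delta}g\right),
\end{align*}
since $\slashed{\nabla}^A\slashed{\epsilon}_{AB}\slashed{\nabla}^B g=0$ and $\slashed{\epsilon}^{AB}\slashed{\nabla}_A\slashed{\nabla}_B f=0$ by antisymmetry, while $\slashed{\epsilon}^{AB}\slashed{\epsilon}_{BC}=-\delta^A{}_C$. Multiplying by $-r^2$ gives $-r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1=\mathring{\slashed{\Delta}}$, consistent with the identities collected in \Cref{subsubsection 2.1.2 Elliptic estimates on S2} below.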
\subsubsection{Elliptic estimates on $S^2_{u,v}$}\label{subsubsection 2.1.2 Elliptic estimates on S2} For a $k$-covariant $S^2_{u,v}$-tangent tensor field $\theta$ on $\mathscr{M}$, define \begin{align} |\theta|_{S^2}=\sqrt{\gamma^{A_1B_1}\gamma^{A_2B_2}\cdot\cdot\cdot\gamma^{A_kB_k}\theta_{A_1...A_k}\theta_{B_1...B_k}},\qquad |\theta|=r^{-k}|\theta|_{S^2}. \end{align} The following is a summary of Section 4.4 of \cite{DHR16}. Given scalars $(f,g)$ we can define an $S^2_{u,v}$ 1-form by $\xi=r\slashed{\mathcal{D}}^*_1(f,g)$. In turn, given a 1-form $\xi$ we can define a symmetric traceless 2-tensor $\theta$ via $\theta=r\slashed{\mathcal{D}}^*_2\xi$. It turns out that these representations span the space of such $\xi$ and $\theta$: \begin{proposition} Let $\xi$ be an $S^2_{u,v}$-tangent 1-form. Then there exist scalars $f,g$ such that \begin{align} \xi=r\slashed{\mathcal{D}}^*_1(f,g). \end{align} Let $\Xi$ be an $S^2_{u,v}$-tangent symmetric traceless 2-tensor. Then there exist scalars $f,g$ such that \begin{align} \Xi=r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(f,g). \end{align} \end{proposition} Note that when considering the decomposition of $f,g$ into their spherical harmonic modes, the operation of acting by $\slashed{\mathcal{D}}^*_1$ annihilates their $\ell=0$ modes and the action of $\slashed{\mathcal{D}}^*_2$ annihilates their $\ell=1$ modes. Thus in the case of a 1-form $f,g$ can be taken to have vanishing $\ell=0$ modes, in which case $f,g$ are unique. Similarly, for a symmetric traceless $S^2_{u,v}$ 2-tensor $\Xi$ there exists a unique pair $f,g$ with vanishing $\ell=0,1$ modes such that $\Xi$ is given by the expression above.
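As a simple illustration of this mode annihilation (a sketch using the standard scalar spherical harmonics $Y_{\ell m}$, which are not otherwise needed here): taking $(f,g)=(Y_{00},0)$ gives
\begin{align*}
r\slashed{\mathcal{D}}^*_1(Y_{00},0)=-r\slashed{\nabla}Y_{00}=0,
\end{align*}
since $Y_{00}$ is constant, while for $\ell=1$ the gradient $\mathring{\slashed{\nabla}}Y_{1m}$ is a conformal Killing 1-form on the round sphere, so its trace-free symmetrised derivative vanishes and $\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(Y_{1m},0)=0$.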
\begin{remark} The operators $\slashed{\mathcal{D}}_1,\slashed{\mathcal{D}}_2,\slashed{\mathcal{D}}^*_1,\slashed{\mathcal{D}}^*_2$ defined in \Cref{D1D2} can be combined to give \begin{align} \begin{split} &-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2=\mathring{\slashed{\Delta}}-2\qquad\qquad-r^2\slashed{\mathcal{D}}^*_1\slashed{\mathcal{D}}_1=\mathring{\slashed{\Delta}}-1\\ &-2r^2\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2=\mathring{\slashed{\Delta}}+1\qquad\qquad -r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1=\mathring{\slashed{\Delta}}. \end{split} \end{align} The operator $\mathring{\slashed\Delta}$ is the Laplacian on the unit 2-sphere $S^2$. \end{remark} \begin{proposition} Let $\Xi$ be a smooth symmetric traceless $S^2_{u,v}$ 2-tensor. We have the following identities: \begin{align} \int_{S^2_{u,v}}\sin\theta d\theta d\phi\left[ |\slashed{\nabla}\Xi|^2+2K|\Xi|^2\right]=2\int_{S^2_{u,v}} \sin\theta d\theta d\phi |\slashed{\mathcal{D}}_2 \Xi|^2, \end{align} \begin{align} \int_{S^2_{u,v}}\sin\theta d\theta d\phi\left[\frac{1}{4}|\slashed{\Delta}\Xi|^2+K^2|\Xi|^2 +K|\slashed{\nabla}\Xi|^2\right]=\int_{S^2_{u,v}} \sin\theta d\theta d\phi |\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 \Xi|^2, \end{align} where $K=\frac{1}{r^2}$ is the Gaussian curvature of $S^2_{u,v}$. \end{proposition} We also note the following Poincar\'e inequality: \begin{proposition}\label{poincaresection} Let $\Xi$ be a smooth symmetric traceless $S^2_{u,v}$ 2-tensor, then we have \begin{align}\label{poincare} 2K\int_{S^2_{u,v}}\sin\theta d\theta d\phi|\Xi|^2\leq \int_{S^2_{u,v}} \sin\theta d\theta d\phi|\slashed{\nabla} \Xi|^2 \end{align} \end{proposition} \begin{remark} We will be using the notation \begin{align} \mathcal{A}_2:=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2=\mathring{\slashed{\Delta}}-2.
\end{align} \end{remark} \subsubsection{Asymptotics of $S^2_{u,v}$-tensor fields}\label{subsubsection 2.1.3 Asymptotics of S2 tensor fields} Let $\digamma$ be a $k$-covariant $S^2_{u,v}$-tangent tensor field on $\mathscr{M}$. We say that $\digamma$ converges to $F=F_{A_1A_2...A_k}(u,\theta^A)$ as $v\longrightarrow\infty$ if $r^{-k}\digamma\longrightarrow F$ in the norm $|\;\;|_{S^2}$. We may write \begin{align} \begin{split} \left|\frac{1}{r^k}\digamma(u,v,\theta^A)-F(u,\theta^A)\right|_{S^2}&=\left|\int_{v}^\infty d\bar{v} \frac{d}{d\bar{v}}\frac{1}{r^k}\digamma \right|_{S^2}\leq \int_{v}^\infty d\bar{v} \left|\frac{d}{d\bar{v}}\frac{1}{r^k}\digamma\right|_{S^2} \\&=\int_{v}^{\infty}d\bar{v}\left|r^k\frac{d}{d\bar{v}}\frac{1}{r^k}\digamma\right|=\int_{v}^\infty d\bar{v}|\Omega\slashed{\nabla}_4 \digamma|. \end{split} \end{align} Therefore, if $\Omega\slashed{\nabla}_4\digamma$ is integrable in $L^1_vL^2_{S^2_{u,v}}$ then $\digamma$ has a limit towards $\mathscr{I}^+$. It is easy to see that if $\{\digamma_n\}_{n=1}^\infty$ is a Cauchy sequence in $|\;\;|$ then $\digamma_n$ converges in the sense of this definition. The above extends to tensors of rank $(k,\ell)$, where $r^{-k}$ is replaced by $r^{-k+\ell}$. Similar considerations apply when taking the limit towards $\mathscr{I}^-$. In particular, for a symmetric tensor $\Psi$ of rank $(2,0)$, it will be simpler to work with $\Psi^{A}{}_B$. Note that $\Omega\slashed{\nabla}_4\Psi^A{}_B=\partial_v\Psi^A{}_B$, $\Omega\slashed{\nabla}_3\Psi^A{}_B=\partial_u \Psi^A{}_B$. Unless otherwise indicated, we work with $S^2_{u,v}$-tangent $(1,1)$-tensors throughout.
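As a simple application of this criterion (our own example, with $\delta>0$ an arbitrary constant): if $|\Omega\slashed{\nabla}_4\digamma|\lesssim r^{-1-\delta}$, then since $r\sim \bar{v}$ for large $\bar{v}$ along each cone of constant $u$ (as $\partial_v r=\Omega^2\longrightarrow1$),
\begin{align*}
\int_{v}^\infty d\bar{v}\,\left|\Omega\slashed{\nabla}_4\digamma\right|\lesssim\int_{v}^\infty \frac{d\bar{v}}{\bar{v}^{1+\delta}}\longrightarrow 0\qquad\text{as } v\longrightarrow\infty,
\end{align*}
so $\frac{1}{r^k}\digamma$ attains a limit at $\mathscr{I}^+$.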
\subsection{Linearised Einstein equations in a double null gauge}\label{subsection 2.2 Linearised Einstein equations in double null gauge} When linearising the Einstein equations \bref{EVE} against the Schwarzschild background in a double null gauge, the quantities governed by the resulting equations can be organised into a collection of $S^2_{u,v}$-tangent tensor fields: \begin{itemize} \item The linearised metric components \begin{align}\label{linearised metric} \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}\;, \end{align} \item the linearised connection coefficients \begin{align}\label{linearised connection} \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\underline\chi}}\;, \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{(\Omega \tr\chi)}\;, \;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{(\Omega \tr\underline\chi)}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\omega}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\omega}}\;,\; \end{align} \item the linearised curvature components \begin{align}\label{linearised curvature} \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}\;,\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}\;,\; \stackrel{\mbox{\scalebox{0.4}{(1)}}}{K}. \end{align} \end{itemize} See Appendix B and \cite{DHR16} for the details of linearising the vacuum Einstein equations \bref{EVE} in a double null gauge. 
We now state the linearised vacuum Einstein equations around the Schwarzschild black hole in a double null gauge: \begin{itemize} \item The equations governing the linearised metric components \bref{linearised metric}: \begin{align} \partial_v \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;=\;2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega\tr\chi})-2\;\slashed{div}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}&,\qquad\Omega\slashed{\nabla}_4 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;=\;2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}+2\slashed{\mathcal{D}}^*_2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b},\\ \partial_u \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sqrt{\slashed{g}}}\;=\;2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega\tr\underline\chi})&,\qquad \Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\slashed{g}}}\;=\;2\Omega\underline{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}.\\ \partial_u\stackrel{\mbox{\scalebox{0.4}{(1)}}}{b}\;=\;&2\Omega^2(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}-\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}),\\ \partial_v\left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right)\;=\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\omega},\qquad\qquad\partial_u\left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right)\;&=\;\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega},\qquad\qquad\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}_A+\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\eta}}_A\;=\;2\slashed{\nabla}_A \left(\frac{\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\Omega}}{\Omega}\right).\label{omega omegabar eta etabar} \end{align} \item The equations governing the linearised connection coefficients \bref{linearised connection}: \begin{equation} \label{start of full system} \Omega\slashed\nabla_4\; r\overone{\Omega tr\underline\chi}=2\Omega^2\left(\slashed{div}\; r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}+2r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho} 
-\frac{4M}{r^2}\frac{\stackrel{\mbox{\scalebox{0.45}{(1)}}}{\Omega}}{\Omega}\right)+\Omega^2\overone{\Omega tr\chi}, \end{equation} \begin{equation}\label{D3TrChiBar} \Omega\slashed\nabla_3\; r\overone{\Omega tr\chi}=2\Omega^2\left(\slashed{div}\; r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}+2r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}-\frac{4M}{r^2}\frac{\stackrel{\mbox{\scalebox{0.45}{(1)}}}{\Omega}}{\Omega}\right) -\Omega^2 \overone{\Omega tr\underline\chi}, \end{equation} \begin{equation}\label{D4TrChi} \Omega\slashed\nabla_4\frac{r^2}{\Omega^2}\overone{\Omega tr\chi}=4r\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega},\qquad\qquad \Omega\slashed\nabla_3\frac{r^2}{\Omega^2}\overone{\Omega tr\underline\chi}=-4r\overset{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega}, \end{equation} \begin{equation}\label{D4Chihat} \Omega\slashed\nabla_4\frac{r^2 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}{\Omega}=-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\qquad\qquad \Omega\slashed\nabla_3\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}}{\Omega}=-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}, \end{equation} \begin{equation}\label{D3Chihat} \Omega\slashed\nabla_3\; r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}=-2r\slashed{\mathcal{D}}^*_2 \Omega^2 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}-\Omega^2 \left(\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}\right), \end{equation} \begin{equation}\label{D4Chihatbar} \Omega\slashed\nabla_4\;r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}}=-2r\slashed{\mathcal{D}}^*_2\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}+\Omega^2\left(\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}\right), \end{equation} \begin{equation}\label{D3etabar} 
\Omega\slashed\nabla_3r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}=r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}-\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta},\qquad\qquad \Omega\slashed\nabla_4r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}=-r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}, \end{equation} \begin{equation}\label{D4etabar} \Omega\slashed\nabla_4 r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta}=2r^2\slashed\nabla_A\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega}+r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta},\qquad\qquad \Omega\slashed\nabla_3r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}=2r^2\slashed\nabla_A\underline{\overset{\mbox{\scalebox{0.4}{(1)}}}{\omega}}-r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}, \end{equation} \item The equations governing the curvature components \bref{linearised curvature}: \begin{equation}\label{Bianchi +2} \Omega\slashed\nabla_3 \;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}=-2r\slashed{\mathcal{D}}^*_2 \Omega^2 \Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+\frac{6M\Omega^2}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}},\qquad\quad \Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=2r\slashed{\mathcal{D}}^*_2\Omega^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}+\frac{6M\Omega^2}{r^2}\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat{\chi}}},\; \end{equation} \begin{equation}\label{Bianchi +1a} \Omega\slashed\nabla_4 \frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}}{\Omega}=r\slashed{div}\;r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\qquad\qquad \Omega\slashed\nabla_3\frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}}{\Omega}=-r\slashed{div}\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}, \end{equation} \begin{equation}\label{Bianchi +1b} 
\Omega\slashed\nabla_4r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}=r\slashed{\mathcal{D}}^*_1(r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho},r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma})+\frac{6M\Omega^2}{r}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\eta},\qquad\quad\Omega\slashed\nabla_3 r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}=r\slashed{\mathcal{D}}^*_1(-r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho},r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma})-\frac{6M\Omega^2}{r}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\eta}, \end{equation} \begin{equation}\label{Bianchi 0} \Omega\slashed\nabla_4\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}=r\slashed{div}\;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+3M \overone{\Omega tr\chi},\qquad\qquad \Omega\slashed\nabla_3\;r^3 \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\rho}=-r\slashed{div}\;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}+3M\overone{\Omega tr\underline\chi}, \end{equation} \begin{equation}\label{Bianchi 0*} \Omega\slashed\nabla_4\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}=-r\slashed{curl} \;r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta},\qquad\qquad \Omega\slashed\nabla_3\; r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\sigma}=-r\slashed{curl}\;r^2\Omega \stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}. \end{equation} \end{itemize} \begin{remark}\label{regular} The degeneration of the Eddington--Finkelstein (EF) frame near $\overline{\mathscr{H}^+}$ carries over to a degeneration of the quantities governed by equations \bref{start of full system}--\bref{Bianchi 0*}, as these quantities were derived via the EF frame (see Appendix B). By switching to a regular frame, e.g.~the Kruskal frame, it can be shown that these quantities extend regularly to $\overline{\mathscr{H}^+}$ when supplied with the appropriate weights in $U,V$.
In particular, note that \begin{align} \widetilde{\alpha}=V^{-2}\Omega^2\alpha,\qquad\underline{\widetilde{\alpha}}=U^2\Omega^{-2}\underline\alpha, \end{align} extend regularly to $\overline{\mathscr{H}^+}$, including $\mathcal{B}$. \end{remark} \section{The Teukolsky equations, the Teukolsky--Starobinsky identities and the Regge--Wheeler equations}\label{TRW} \subsection{The Teukolsky equations and their well-posedness}\label{Chandra1} Let $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ belong to a solution of the linearised Einstein equations \bref{start of full system}--\bref{Bianchi 0*}. It turns out that the linearised fields $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$, $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ obey decoupled second-order hyperbolic equations, the well-known Teukolsky equations.\\ \indent Multiply the first equation of \bref{Bianchi +2} by $\frac{r^4}{\Omega^4}$: \begin{equation} \frac{r^4}{\Omega^4}\Omega\slashed\nabla_3\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r\slashed{\mathcal{D}}^*_2 \frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}\beta}{\Omega}+6M\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}}{\Omega}. \end{equation} Now differentiate in the $\Omega e_4$ direction and multiply by $\frac{\Omega^2}{r^2}$ to obtain the \textbf{Spin +2 Teukolsky equation}: \begin{equation}\label{T+2} \frac{\Omega^2}{r^2}\Omega\slashed\nabla_4 \;\frac{r^4}{\Omega^4}\Omega \slashed\nabla_3\; r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha-\frac{6M}{r}r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha.
\end{equation} We note that: \begin{equation} \slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2=-\frac{1}{2}\slashed{\Delta}+\frac{1}{r^2},\qquad\qquad \Omega\slashed\nabla_4 \frac{r^2}{\Omega^2}=-\Omega\slashed\nabla_3 \frac{r^2}{\Omega^2}=r(x+2). \end{equation} We may rewrite the equation as: \begin{equation}\label{T+2d} -\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+r^2\slashed\Delta\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha-2r(x+2)\Omega\slashed\nabla_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+(3\Omega^2-5)r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=0. \end{equation} \indent An analogous procedure produces the \textbf{Spin }$\bm{-2}$\textbf{ Teukolsky equation} \begin{equation}\label{T-2} \frac{\Omega^2}{r^2}\Omega\slashed\nabla_3 \;\frac{r^4}{\Omega^4}\Omega \slashed\nabla_4\; r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-\frac{6M}{r}r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}, \end{equation} which we may rewrite as \begin{equation}\label{T-2d} -\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+r^2\slashed\Delta\;r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+2r(x+2)\Omega\slashed\nabla_4 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+(3\Omega^2-5)r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=0. \end{equation} We now state well-posedness theorems which are standard for linear second-order hyperbolic equations of the type that \cref{T+2}, \cref{T-2} fall under. Taking into account \Cref{regular}, we start with the future evolution of $\Omega^2\alpha$ and $\Omega^{-2}\underline\alpha$. 
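For the reader's convenience, we sketch the passage from \bref{T+2} to \bref{T+2d}; this uses only the two identities just recorded, together with the fact that $\Omega\slashed\nabla_3$ and $\Omega\slashed\nabla_4$ commute on the fields under consideration (a fact used freely below). Writing $F=r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ and noting $\Omega\slashed\nabla_4\frac{r^4}{\Omega^4}=2\frac{r^2}{\Omega^2}\cdot r(x+2)$, the left-hand side of \bref{T+2} expands as
\begin{align*}
\frac{\Omega^2}{r^2}\Omega\slashed\nabla_4\;\frac{r^4}{\Omega^4}\Omega\slashed\nabla_3 F=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4 F+2r(x+2)\,\Omega\slashed\nabla_3 F,
\end{align*}
while the right-hand side becomes
\begin{align*}
-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2F-\frac{6M}{r}F=r^2\slashed\Delta F-2F-\frac{6M}{r}F=r^2\slashed\Delta F+(3\Omega^2-5)F,
\end{align*}
since $3\Omega^2-5=-2-\frac{6M}{r}$. Rearranging yields \bref{T+2d}; the passage from \bref{T-2} to \bref{T-2d} is identical.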
\\ \indent Having derived the Teukolsky equations \bref{T+2}, \bref{T-2}, we can study these equations in isolation. Since the following theorems do not pertain to the linearised Einstein equations, we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$. \begin{proposition}\label{WP+2Sigma*} Prescribe on $\Sigma^*$ a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields $(\upalpha,\upalpha')$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^2\alpha$ that satisfies \bref{T+2} on $J^+(\Sigma^*)$, with $\Omega^2\alpha|_{\Sigma^*}=\upalpha, \slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*}=\upalpha'$. \end{proposition} \begin{proposition}\label{WP-2Sigma*} Prescribe on $\Sigma^*$ a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields $(\underline\upalpha,\underline\upalpha')$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^{-2}\underline\alpha$ that satisfies \bref{T-2} on $J^+(\Sigma^*)$, with $\Omega^{-2}\underline\alpha|_{\Sigma^*}=\underline\upalpha, \slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*}=\underline\upalpha'$. \end{proposition} The same applies replacing $\Sigma^*$ with any other $\mathscr{H}^+$-penetrating spacelike surface ending at $i^0$.\\ \indent The degeneration of the EF frame discussed in \Cref{regular} is inherited by \bref{T+2}, \bref{T-2}, and we must work with $\widetilde{\alpha}=V^{-2}\Omega^2\alpha, \widetilde{\underline\alpha}=U^2\Omega^{-2}\underline\alpha$ in order to study the Teukolsky equations with data on $\overline{\Sigma}$.
The weighted quantities $\widetilde{\alpha}, \widetilde{\underline\alpha}$ satisfy the following equations: \begin{align}\label{T+2B} \frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\widetilde{\alpha}+\frac{1}{M}(4-3\Omega^2)\Omega\slashed{\nabla}_3 r\widetilde{\alpha}-\frac{1}{r}(3\Omega^2-5)\widetilde{\alpha}-\slashed{\Delta}r\widetilde{\alpha}=0, \end{align} \begin{align}\label{T-2B} \frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\widetilde{\underline\alpha}-\frac{1}{M}(4-3\Omega^2)\Omega\slashed{\nabla}_4 r\widetilde{\underline\alpha}-\frac{1}{r}(3\Omega^2-5)\widetilde{\underline\alpha}-\slashed{\Delta}r\widetilde{\underline\alpha}=0. \end{align} Equations (\ref{T+2B}) and (\ref{T-2B}) do not degenerate near $\mathcal{B}$ and we can make the following well-posedness statements: \begin{proposition}\label{WP+2Sigmabar} Prescribe a pair of smooth symmetric traceless $S^2_{U,V}$ 2-tensor fields $(\widetilde{\upalpha},\widetilde{\upalpha}')$ on $\overline{\Sigma}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^2{\alpha}$ that satisfies (\ref{T+2}) on $ J^+(\overline{\Sigma})$ with $V^{-2}\Omega^2\alpha|_{\overline{\Sigma}}=\widetilde{\upalpha}$ and $\slashed{\nabla}_{n_{\overline{\Sigma}}}V^{-2}\Omega^2\alpha|_{\overline{\Sigma}}=\widetilde{\upalpha}'$. \end{proposition} \begin{proposition}\label{WP-2Sigmabar} Prescribe a pair of smooth symmetric traceless $S^2_{U,V}$ 2-tensor fields $(\widetilde{\underline\upalpha},\widetilde{\underline\upalpha}')$ on $\overline{\Sigma}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Omega^{-2}{\underline\alpha}$ that satisfies (\ref{T-2}) on $ J^+(\overline{\Sigma})$ with $U^{2}\Omega^{-2}\underline\alpha|_{\overline{\Sigma}}=\widetilde{\underline\upalpha}$ and $\slashed{\nabla}_{n_{\overline{\Sigma}}}U^{2}\Omega^{-2}\underline\alpha|_{\overline{\Sigma}}=\widetilde{\underline\upalpha}'$.
\end{proposition} Analogous statements to the above apply to past development from $\overline{\Sigma}$ with $U,\Omega^2$ switching places with $V,\Omega^{-2}$ respectively.\\ \indent In developing backwards scattering we will use the following well-posedness statement for the past development of a mixed initial-characteristic value problem: \begin{proposition}\label{WP+2backwards} Let $u_+<\infty, v_+<v_*<\infty$. Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v_*$ to $\mathscr{I}^+$ at $u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^-(\widetilde{\Sigma})\cap J^+(\overline\Sigma)$. Prescribe a pair of symmetric traceless $S^2_{u,v}$ 2-tensor fields: \begin{itemize} \item $\alpha_{{\mathscr{H}^+}}$ on ${\mathscr{H}^+}\cap\{v\leq v_+\}$ vanishing in a neighborhood of $\mathscr{H}^+\cap\{v=v_+\}$, such that $V^{-2}\alpha_{\overline{\mathscr{H}^+}}$ extends smoothly to $\mathcal{B}$, \item $\alpha_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\underline{\mathscr{C}}\cap\widetilde{\Sigma}$. \end{itemize} Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\alpha$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $V^{-2}\Omega^2\alpha|_{\overline{\mathscr{H}^+}}=V^{-2}\alpha_{\overline{\mathscr{H}^+}}$, $\alpha|_{\underline{\mathscr{C}}}=\alpha_{0,in}$ and $\left(\Omega^2\alpha|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Omega^2\alpha|_{\widetilde{\Sigma}}\right)=(0,0)$. \end{proposition} \begin{proposition}\label{WP-2backwards} Let $u_+<\infty, v_+<v_*<\infty$. Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v_+$ to $\mathscr{I}^+$ at $u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^+(\widetilde{\Sigma})\cap\{t\geq0\}$. 
Prescribe a pair of symmetric traceless $S^2_{u,v}$ 2-tensor fields: \begin{itemize} \item $\underline\alpha_{{\mathscr{H}^+}}$ on ${\mathscr{H}^+}\cap\{v<v_+\}$ vanishing in a neighborhood of $v_+$, such that $V^{2}\underline\alpha_{{\mathscr{H}^+}}$ extends smoothly to $\mathcal{B}$, \item $\underline\alpha_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\underline{\mathscr{C}}\cap\mathscr{H}^+$. \end{itemize} Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\underline\alpha$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $V^{2}\Omega^{-2}\underline\alpha|_{\overline{\mathscr{H}^+}}=V^{2}\underline\alpha_{\overline{\mathscr{H}^+}}$, $\underline\alpha|_{\underline{\mathscr{C}}}=\underline\alpha_{0,in}$ and $\left(\Omega^{-2}\underline\alpha|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Omega^{-2}\underline\alpha|_{\widetilde{\Sigma}}\right)=(0,0)$. 
\end{proposition} \begin{center} \begin{tikzpicture}[scale=0.7] \node (I) at ( 0,0) {}; \path (I) +(90:4) coordinate (Itop) coordinate[label=90:$i^+$] +(180:4) coordinate (Ileft) coordinate[label=180:$\mathcal{B}$] +(0:4) coordinate (Iright) coordinate[label=0:$i^0$] ; \draw (Ileft) -- node[yshift=1mm,above]{$v_+$} (Itop) ; \draw[dash dot dot] (Iright) -- node[yshift=1mm,above]{$u_+$}(Itop) ; \draw [line width=0.3mm]($(Ileft)$)--($(Ileft)+(45:3.5cm)$); \draw [line width=0.3mm]($(Iright)+(180:0.5cm)$)--node[below,xshift=-0.2cm,yshift=0.15cm]{$\underline{\mathscr{C}}\;$}($(Iright)+(135:3.2cm)+(180:0.8cm)$); \draw ($(Ileft)+(45:3.5cm)$) to[out=-25, in=205, edge node={node [below] {$\widetilde{\Sigma}$}}] ($(Iright)+(135:3.5cm)$); \draw ($(Ileft)$) to[out=0, in=180, edge node={node [below] {$\overline{\Sigma}$}}] ($(Iright)$); \filldraw[white] (Itop) circle (3pt); \draw[black] (Itop) circle (3pt); \filldraw[white] (Iright) circle (3pt); \draw[black] (Iright) circle (3pt); \filldraw[black] (Ileft) circle (3pt); \draw[black] (Ileft) circle (3pt); \end{tikzpicture} \end{center} We will also need \begin{proposition}\label{backwards wellposedness +2} Let $\tilde{\alpha}_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor on $\overline{\mathscr{H}^+}\cap J^-(\Sigma^*)$, $(\widetilde{\upalpha}_{\Sigma^*},\widetilde{\upalpha}_{\Sigma^*}')$ be a pair of smooth symmetric traceless $S^2_{\infty,v}$ 2-tensors on $\Sigma^*$. Then there exists a unique solution $\widetilde{\alpha}$ to \bref{T+2B} in $J^+(\overline{\Sigma})\cap\{t^*\leq 0\}$ such that $\widetilde{\alpha}|_{\overline{\mathscr{H}^+}}=\widetilde{\alpha}_{\overline{\mathscr{H}^+}}$, $(\widetilde{\alpha}|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\widetilde{\alpha}|_{\Sigma^*})=(\widetilde{\upalpha}_{\Sigma^*},\widetilde{\upalpha}_{\Sigma^*}')$. \end{proposition} \begin{proposition}\label{backwards wellposedness -2} An analogous statement to \Cref{backwards wellposedness +2} holds for \cref{T-2B}. 
\end{proposition} Analogous statements apply to the ``finite'' backwards scattering problem from the past of $\overline{\Sigma}$, with $U$ replacing $V$ and $\Omega^2$ switching places with $\Omega^{-2}$. \begin{remark}[\textbf{Time inversion}]\label{time inversion} Under the transformation $t\longrightarrow-t$ we have $u\longrightarrow -v$ and $v\longrightarrow -u$, and thus $\alpha(u,v,\theta^A)\longrightarrow\alpha(-v,-u,\theta^A)=:\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}(u,v,\theta^A)$ and $\underline\alpha(u,v,\theta^A)\longrightarrow\underline\alpha(-v,-u,\theta^A)=:\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}(u,v,\theta^A)$.\\ \indent It is clear that $\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}(u,v,\theta^A)$ satisfies the $-2$ Teukolsky equation, i.e.~the equation satisfied by $\underline\alpha$. Similarly, $\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}(u,v,\theta^A)$ satisfies the $+2$ Teukolsky equation, i.e.~the equation satisfied by $\alpha$. This observation means that the asymptotics of $\alpha$ towards the future are identical to those of $\underline\alpha$ towards the past: determining the asymptotics of both $\alpha$ and $\underline\alpha$ towards the future determines the asymptotics of either field in both the past and future directions. We will use this fact to obtain bijective scattering maps by studying only the forward evolution of the fields $\alpha,\underline\alpha$. In particular, this prescription suffices to obtain well-posedness statements for the equations (\ref{T-2}) and (\ref{T+2}) under past development.
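For completeness, we verify the first claim; since $r$ and $\Omega$ depend only on $v-u$, they are invariant under $(u,v)\mapsto(-v,-u)$. Writing $F(u,v)=r\Omega^2\alpha(u,v,\theta^A)$ and $\check{F}(u,v)=F(-v,-u)$, we have
\begin{align*}
\Omega\slashed{\nabla}_4\check{F}=-\left(\Omega\slashed{\nabla}_3F\right)(-v,-u),\qquad \Omega\slashed{\nabla}_3\check{F}=-\left(\Omega\slashed{\nabla}_4F\right)(-v,-u),\qquad \Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\check{F}=\left(\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4F\right)(-v,-u),
\end{align*}
so that, applying the operator of \bref{T-2d} to $\check{F}$,
\begin{align*}
-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4\check{F}+r^2\slashed\Delta\check{F}+2r(x+2)\Omega\slashed\nabla_4\check{F}+(3\Omega^2-5)\check{F}=\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4 F+r^2\slashed\Delta F-2r(x+2)\Omega\slashed\nabla_3 F+(3\Omega^2-5)F\right](-v,-u),
\end{align*}
which vanishes by \bref{T+2d}. The second claim is verified in the same way.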
\end{remark} \subsection{Derivation of the Teukolsky--Starobinsky identities}\label{derivation of the Teukolsky--Starobinsky identities} We now return to the full system \bref{start of full system}--\bref{Bianchi 0*} to derive the Teukolsky--Starobinsky identities \bref{eq:227intro1}, \bref{eq:228intro1}.\\ \indent Let $\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha$ belong to a solution of the linearised Einstein equations. The first equation of \bref{Bianchi +2} implies: \begin{align} \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}=-2r\slashed{\mathcal{D}}^*_2r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\beta}+6M\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}. \end{align} Using the second equation of \bref{Bianchi +1b} and \bref{D3Chihat} we obtain \begin{equation} \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\left(-r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\rho,r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\sigma\right)+6M(r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\hat{\chi}}-r\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}).
\end{equation} We now apply $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3$ to both sides and use equations \bref{Bianchi 0}, \bref{Bianchi 0*}, \bref{D3Chihat} and the second equation of \bref{D4Chihat} to deduce \begin{align} \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\left(\frac{r^4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta}}{\Omega}\right)+6M\left[r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}^*_1\left(\frac{r^2}{\Omega^2}\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{f}},0\right)+ r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-(3\Omega^2-1)\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}}{\Omega}-2r\slashed{\mathcal{D}}_2^*r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\eta\right]. \end{align} Now we apply $\Omega\slashed{\nabla}_3$ once again and use \bref{D3TrChiBar}, the second equation of \bref{D4Chihat} and the second equation of \bref{D4etabar}: \begin{align} \begin{split} \Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha&=-2r^3\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1(-r\slashed{\mathcal{D}}_2r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})+6M\Bigg[r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\left(-4r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega},0\right)-(3\Omega^2-1)r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}\\&\;\;+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+6M\frac{r^2}{\Omega^2}
\frac{r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline{\hat\chi}}}{\Omega}-(3\Omega^2-1)(-r^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})-2r\slashed{\mathcal{D}}^*_2(2r\slashed{\nabla}r\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\omega}-r^2\Omega\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\beta})\Bigg] \\&=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r^3\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+6M\frac{r^2}{\Omega^2}\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}. \end{split} \end{align} Finally, we have \begin{align}\label{eq:TS1} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}. \end{align} \indent An entirely analogous procedure starting from the equation for $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}$ in \bref{Bianchi +2} leads to \begin{align}\label{eq:TS2} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha}. \end{align} Equation \bref{eq:TS2} is the constraint \bref{eq:228intro1}. 
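We remark, as a consistency check, that the two identities are exchanged by the time inversion of \Cref{time inversion}: under $(u,v)\mapsto(-v,-u)$ each $\Omega\slashed{\nabla}_3$ acting in \bref{eq:TS1} turns into $-\Omega\slashed{\nabla}_4$ (and vice versa), while $r$, $\Omega$ and the angular operators are unchanged. The four derivatives on the left-hand side of \bref{eq:TS1} thus produce no overall sign, whereas the first-order term on the right-hand side changes sign, so that \bref{eq:TS1} becomes
\begin{align*}
\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3r\Omega^2\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2\, r\Omega^2\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}},
\end{align*}
which is \bref{eq:TS2} for the time-inverted solution, in which $\underline{\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}}$ plays the role of the spin $+2$ field and $\raisebox{\depth}{\scalebox{1}[-1]{$\alpha$}}$ that of the spin $-2$ field.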
\subsection{Physical-space Chandrasekhar transformations and the Regge--Wheeler equation}\label{Chandra} The Regge--Wheeler equation for a symmetric traceless $S^2_{u,v}$ 2-tensor $\Psi$ is given by \begin{align}\label{RW} \Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\Psi-\Omega^2\slashed{\Delta}\Psi+\frac{\Omega^2}{r^2}(3\Omega^2+1)\Psi=0. \end{align} \indent Suppose the field $\alpha$ satisfies the +2 Teukolsky equation. Define the following hierarchy of fields \begin{align}\label{hier+} \begin{split} &r^3\Omega \psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r\Omega^2\alpha,\\ &\Psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r^3\Omega \psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2 r\Omega^2\alpha. \end{split} \end{align} We have the following commutation relation: \begin{align}\label{commutation relation} \begin{split} \Bigg[&-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4-(k+xk')r\Omega\slashed\nabla_3+a\Omega^2+bx+c\Bigg]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3 \\&=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4-\left(k+2+x(k'+1)\right)r\Omega\slashed\nabla_3+(a+2k+2k')\Omega^2+bx+c-k-2k'\right]\\&+2M(a+2k+2k'), \end{split} \end{align} where $a,b,c,k,k'$ are integers.
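As a consistency check on \bref{commutation relation}, one may verify it directly in the simplest instance, acting on functions of $r$ alone (a sketch; the tensorial and angular terms play no role here). Recall that $\Omega\slashed\nabla_4 r=\Omega^2=-\Omega\slashed\nabla_3 r$, so that on such functions $\Omega\slashed\nabla_4=\Omega^2\partial_r$ and $\Omega\slashed\nabla_3=-\Omega^2\partial_r$, and note $(2+x)r\Omega^2=2r-6M$ (from $\Omega\slashed\nabla_4\frac{r^2}{\Omega^2}=r(x+2)$) and $r^2\frac{d\Omega^2}{dr}=2M$. Take $k=k'=b=c=0$ and $a=1$, and set $g=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3 f=-r^2f'$ for $f=f(r)$. Then
\begin{align*}
\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+\Omega^2\right]g&=r^2\Omega^2g''+2Mg'+\Omega^2g\\&=-r^4\Omega^2f'''-r^2(4r-6M)f''-(3r^2-2Mr)f',
\end{align*}
while
\begin{align*}
\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4-(2+x)r\Omega\slashed\nabla_3+\Omega^2\right]f&=-r^2\partial_r\left[r^2\Omega^2f''+(2r-4M)f'+\Omega^2f\right]\\&=-r^4\Omega^2f'''-r^2(4r-6M)f''-(3r^2-2Mr)f'-2Mf,
\end{align*}
and the two sides indeed differ by the correction term $2M(a+2k+2k')f=2Mf$.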
We commute the operator $\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2$ past the Regge--Wheeler operator: \begin{align*} \begin{split} \left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-3\Omega^2-1\right]\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2=\Bigg\{\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3&\Bigg[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-(2+x)r\Omega\slashed\nabla_3\\ &-3\Omega^2-1\Bigg]-6M\Bigg\}\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3 \end{split} \end{align*} \begin{align} \begin{split} &=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Bigg\{\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-(2+x)r\Omega\slashed\nabla_3-3\Omega^2-1\right]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3-6M\Bigg\} \\&=\left(\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\right)^2\Bigg\{\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+r^2\slashed\Delta-2(2+x)r\Omega\slashed\nabla_3+3\Omega^2-5\right]-6M+6M\Bigg\}.\label{commutator} \end{split} \end{align} This shows that if $\alpha$ satisfies the +2 Teukolsky equation then $\Psi$ satisfies the Regge--Wheeler equation (\ref{RW}).\\ \indent Analogously, with the following hierarchy of fields \begin{align}\label{hier-} \begin{split} &r^3\Omega \underline\psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha,\\ &\underline\Psi:=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega \underline\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2 r\Omega^2\underline\alpha, \end{split} \end{align} we have \begin{align}\label{commutation relation 2} \begin{split} \Bigg[&-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+(l+xl')r\Omega\slashed\nabla_4+a\Omega^2+bx+c\Bigg]\frac{r^2}{\Omega^2}\Omega\slashed\nabla_4
\\&=\frac{r^2}{\Omega^2}\Omega\slashed\nabla_4\left[-\frac{r^2}{\Omega^2}\Omega\slashed\nabla_3\Omega\slashed\nabla_4+\left(l+2+x(l'+1)\right)r\Omega\slashed\nabla_4+(a+2l+2l')\Omega^2+bx+c-l-2l'\right]\\&-2M(a+2l+2l'), \end{split} \end{align} where $a,b,c,l,l'$ are integers. Thus, if $\underline\alpha$ satisfies the $-2$ Teukolsky equation then $\underline\Psi$ also satisfies the Regge--Wheeler equation.\\ \indent We state a standard well-posedness result for (\ref{RW}): \begin{proposition}\label{RWwpCauchy} For any pair $(\uppsi,\uppsi')$ of smooth symmetric traceless $S^2_r$ 2-tensor fields on $\Sigma^*$, there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Psi$ which solves \cref{RW} in $ J^+(\Sigma^*)$ such that $\Psi|_{\Sigma^*}=\uppsi$ and $\slashed{\nabla}_{n_{\Sigma^*}} \Psi|_{\Sigma^*}=\uppsi'$. The same applies when data are posed on $\Sigma$ or $\overline{\Sigma}$. \end{proposition} In contrast to the Teukolsky equations \bref{T+2}, \bref{T-2}, the Regge--Wheeler equation \bref{RW} does not suffer from additional regularity issues near $\mathcal{B}$, as can be seen by rewriting \cref{RW} in Kruskal coordinates: \begin{align} \slashed{\nabla}_U\slashed{\nabla}_V\Psi-\mathring{\slashed{\Delta}}\Psi+\frac{3\Omega^2+1}{r^2}\Psi=0. \end{align} If $\Psi$ is related to a field $\alpha$ that satisfies \bref{T+2}, then it is related to $\widetilde{\alpha}$ by \begin{align} \Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha=\left(2Mr^2f(r)\slashed{\nabla}_U\right)^2r\widetilde{\alpha}. \end{align} \begin{proposition}\label{RWwpSigmabar} \Cref{RWwpCauchy} is valid replacing $\Sigma^*$ with $\overline{\Sigma}$ everywhere. \end{proposition} For backwards scattering we will need the following well-posedness statement: \begin{proposition}\label{RWwpBackwards} Let $u_+<\infty, v_+<v_*<\infty$.
Let $\widetilde{\Sigma}$ be a spacelike hypersurface connecting $\mathscr{H}^+$ at $v=v_+$ to $\mathscr{I}^+$ at $u=u_+$ and let $\underline{\mathscr{C}}=\underline{\mathscr{C}}_{v_*}\cap J^+(\widetilde{\Sigma})\cap\{t\geq0\}$. Prescribe a pair of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields: \begin{itemize} \item $\Psi_{{\mathscr{H}^+}}$ on ${\overline{\mathscr{H}^+}}\cap\{v<v_+\}$ vanishing in a neighborhood of $\widetilde{\Sigma}$, \item $\Psi_{0,in}$ on $\underline{\mathscr{C}}$ vanishing in a neighborhood of $\widetilde{\Sigma}$. \end{itemize} Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor $\Psi$ on $D^-\left(\overline{\mathscr{H}^+}\cup\widetilde{\Sigma}\cup\underline{\mathscr{C}}\right)\cap J^+(\overline{\Sigma})$ such that $\Psi|_{\overline{\mathscr{H}^+}}=\Psi_{{\mathscr{H}^+}}$, $\Psi|_{\underline{\mathscr{C}}}=\Psi_{0,in}$ and $\left(\Psi|_{\widetilde{\Sigma}},\slashed{\nabla}_{n_{\widetilde{\Sigma}}}\Psi|_{\widetilde{\Sigma}}\right)=(0,0)$. \end{proposition} We will also need \begin{proposition}\label{RWwp local statement near B} Let $(\uppsi,\uppsi')$ be smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields on $\Sigma^*$, $\uppsi_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor field on $\overline{\mathscr{H}^+}\cap\{t^*\leq0\}$. Then there exists a unique smooth symmetric traceless $S^2_{u,v}$ 2-tensor field $\Psi$ on $J^-(\Sigma^*)$ such that $\Psi|_{\overline{\mathscr{H}^+}\cap\{t^*\leq0\}}=\uppsi_{\mathscr{H}^+}, \left(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}\right)=(\uppsi,\uppsi')$. \end{proposition} \begin{remark}\label{time inversion of RW} Unlike the Teukolsky equations \bref{T+2}, \bref{T-2}, the Regge--Wheeler equation \bref{RW} is invariant under time inversion. If $\Psi(u,v)$ satisfies \bref{RW}, then $\raisebox{\depth}{\scalebox{1}[-1]{$\Psi$}}(u,v):=\Psi(-v,-u)$ also satisfies \bref{RW}. 
\end{remark} \subsection{Further constraints among $\alpha,\Psi$ and $\underline\alpha,\underline\Psi$}\label{constraint derivation} We can apply the same ideas as in \Cref{Chandra} to transform solutions of the Regge--Wheeler equation into solutions of the +2 Teukolsky equation. Let $\Psi$ satisfy \Cref{RW}. Then, using \bref{commutation relation 2}, one can show that \begin{align}\label{alpha to Psi} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi \end{align} satisfies \Cref{T+2}. \\ \indent Now suppose $\alpha$ satisfies \Cref{T+2} and $\Psi$ is the solution to \Cref{RW} related to $\alpha$ by \Cref{hier+}. We can evaluate the expression \bref{alpha to Psi} using \Cref{T+2}: we apply $\Omega\slashed{\nabla}_4$ and substitute using the $+2$ equation only (we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{}$): \begin{align*} \begin{split} \Omega\slashed{\nabla}_4\Psi&=\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha \\&=r(x+2)\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha \\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha \\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\frac{r^4}{\Omega^4}\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha \\&=\frac{3\Omega^2-1}{r}\Psi+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\left[\left(-\frac{\Omega^4}{r^4}r(x+2)\right)\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha\right] \\&\;\;+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha
\\&=\frac{3\Omega^2-1}{r}\Psi-\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\left[\frac{\Omega^2}{r^2}r(x+2)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha\right]+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}_N r\Omega^2\alpha \\&=-2(3\Omega^2-2)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha+\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}_Nr\Omega^2\alpha\\ \end{split} \end{align*} \begin{align}\label{eq:d4Psi} =-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6Mr\Omega^2\alpha-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha, \end{align} i.e., \begin{align} \begin{split} \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 \frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha-(3\Omega^2-1)\frac{r^4}{\Omega^4}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6M\frac{r^2}{\Omega^2} r\Omega^2\alpha. \end{split} \end{align} We act on both sides with $\Omega\slashed{\nabla}_4$ again: \begin{align} \begin{split} \Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi &=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[\frac{r^2}{\Omega^2}\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 r\Omega^2\alpha-\frac{6M}{r}r\Omega^2\alpha\right)\right]\qquad\qquad\qquad\qquad\qquad\\ &\;\;\;\;\;\;-6M\left[\frac{r^2}{\Omega^2}\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha+r(x+2)r\Omega^2\alpha\right] \\&\qquad-\left[-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2-\frac{6M}{r}\right]\left[\frac{r^2}{\Omega^2}(3\Omega^2-1)r\Omega^2\alpha\right] \\&=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[\frac{r^2}{\Omega^2}\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 
r\Omega^2\alpha-2r\Omega^2\alpha\right)\right]-6M\left[\frac{r^2}{\Omega^2}\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha\right]. \end{split} \end{align} We finally arrive at \begin{align}\label{eq:d4d4Psi} \begin{split} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi&=-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2\left[-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2 r\Omega^2\alpha-2r\Omega^2\alpha\right]-6M\left[\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha\right]. \end{split} \end{align} \indent We record the same for $\underline\Psi$: Using only the Teukolsky equation \bref{T-2} we obtain the analogue of \bref{eq:d4Psi} \begin{align}\label{eq:d3psibar} \Omega\slashed{\nabla}_3\underline\Psi=-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha+6Mr\Omega^2\underline\alpha-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha, \end{align} and the analogue of \bref{eq:d4d4Psi} \begin{align}\label{eq:d3d3psibar} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\underline\alpha+\left[-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2-2\right]\left(-2r^2\slashed{\mathcal{D}}_2^*\slashed{\mathcal{D}}_2r\Omega^2\underline\alpha\right). \end{align} In the remainder of this paper we focus exclusively on the Teukolsky equations \bref{T+2}, \bref{T-2}, the Teukolsky--Starobinsky identities \bref{eq:227}, \bref{eq:228} and the Regge--Wheeler equation \bref{RW}. 
In particular, we do not refer to the linearised Einstein equations \bref{start of full system}--\bref{Bianchi 0*} and as such, we drop the superscript $\stackrel{\mbox{\scalebox{0.4}{(1)}}}{{}}$.\\ \indent Throughout this paper we will distinguish between solutions arising from data on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$, and we subsequently construct separate scattering statements for each of these cases, in particular distinguishing between spaces of scattering states on $\mathscr{H}^+_{\geq0}, \mathscr{H}^\pm$ and $\overline{\mathscr{H}^\pm}$. It will be easiest to work with data on $\Sigma^*$ first; the results for the remaining cases then follow easily. \section{Main theorems}\label{section 4 main theorems} We define in this section the spaces of scattering states and provide a precise statement of the results. In what follows, $L^2$ spaces on $\mathscr{I}^\pm, \mathscr{H}^+_{\geq0}, \mathscr{H}^\pm,\overline{\mathscr{H}^\pm}$ are defined with respect to the measures $du\sin\theta d\theta d\phi$, $dv\sin\theta d\theta d\phi$ induced by the Eddington--Finkelstein coordinates. \begin{notation*} For a spherically symmetric submanifold $\mathcal{S}$ of $\overline{\mathscr{M}}$, denote by $\Gamma(\mathcal{S})$ the space of smooth symmetric traceless $S^2_{u,v}$ 2-tensor fields on $\mathcal{S}$. The space of such fields that are compactly supported is denoted by $\Gamma_c (\mathcal{S})$. We use the same notation for smooth fields on $\mathscr{I}^\pm, \mathscr{H}^\pm,\overline{\mathscr{H}^\pm}$. \end{notation*} \noindent In particular, note that $A\in\Gamma(\Sigma^*)$ means that $A$ is smooth up to and including $\Sigma^*\cap\mathscr{H}^+$. \subsection{Theorem 1: Scattering for the Regge--Wheeler equation}\label{subsection 4.1 Theorem 1} \begin{defin}\label{RWscatteringsigma} Let $(\uppsi,\uppsi')\in\Gamma_c (\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be Cauchy data on $\Sigma^*$ for \bref{RW}.
Define the space $\mathcal{E}^{T}_{\Sigma^*}$ to be the completion of $\Gamma_c (\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm \begin{align}\label{this22222} \|(\uppsi,\uppsi')\|^2_{\mathcal{E}^T_{\Sigma^*}}=\int_{\Sigma^*} dr\sin\theta d\theta d\phi\; (2-\Omega^2)|\slashed{\nabla}_{T^*}\Psi|^2+\Omega^2|\slashed{\nabla}_{R}\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2, \end{align} where $\Psi$ is smooth and satisfies $\Psi|_{\Sigma^*}=\uppsi, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}=\uppsi'$. Define the space $\mathcal{E}^T_{\Sigma}$ to be the completion of $\Gamma_{c}(\Sigma)\oplus\Gamma_c(\Sigma)$ under the norm \begin{align}\label{this2222} \|(\uppsi,\uppsi')\|^2_{\mathcal{E}^T_{\Sigma}}=\int_{\Sigma} dr\sin\theta d\theta d\phi \;|\slashed{\nabla}_{n_{\Sigma}} \Psi|^2+\Omega^2|\slashed{\nabla}_R\Psi|^2+|\slashed{\nabla}\Psi|^2+\frac{3\Omega^2+1}{r^2}|\Psi|^2. \end{align} The space $\mathcal{E}^{T}_{\overline{\Sigma}}$ and the norm $\|\;\|_{\mathcal{E}^T_{\overline{\Sigma}}}$ are similarly defined. \end{defin} \begin{remark}\label{RW enough to be in space} The kernel of $\|\;\;\|_{\mathcal{E}^T_{\Sigma^*}}$ has trivial intersection with $\Gamma(\Sigma^*)$. It suffices for a smooth data set $(\uppsi,\uppsi')$ to satisfy $\|(\uppsi,\uppsi')\|_{\mathcal{E}^T_{\Sigma^*}}<\infty$ to have $(\uppsi,\uppsi')\in\mathcal{E}^T_{\Sigma^*}$, so $\|\;\|_{\mathcal{E}^T_{\Sigma^*}}, \|\;\|_{\mathcal{E}^T_{\Sigma}}, \|\;\|_{\mathcal{E}^T_{\overline\Sigma}}$ define normed spaces that extend to Hilbert spaces upon completion. \end{remark} \begin{defin}\label{RW def of rad at H} Define the space $\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}$ to be the completion of $\Gamma_c (\mathscr{H}^+_{\geq 0})$ under the norm \begin{align}\label{RW def rad flux at H} ||\Psi||_{\mathcal{E}_{\mathscr{H}^+_{\geq 0}}^{T}}^2=\int_{\mathscr{H}^+_{\geq 0}}|\partial_v\Psi|^2\sin\theta d\theta d\phi dv.
\end{align} The spaces $\mathcal{E}^T_{\mathscr{H}^+}$, $\mathcal{E}^T_{\overline{\mathscr{H}^+}}$ are analogously defined. \end{defin} \begin{remark}\label{Subspace of L2} \begin{enumerate} \item The energy $\|\;\|_{\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}}$ indeed defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which thus extends to a Hilbert space $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}}$.\\ \item The space $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}$ can be realised as the set of $\Psi_{\mathscr{H}^+}\in L^2_{loc}(\mathscr{H}^+_{\geq0})$ such that \begin{itemize} \item $\Omega\slashed{\nabla}_4\Psi_{\mathscr{H}^+}\in L^2(\mathscr{H}^+_{\geq0})$, \item $\lim_{v\longrightarrow\infty} \|\Psi_{\mathscr{H}^+}\|_{L^2(S_{\infty,v}^2)}=0$. \end{itemize} Note that Hardy's inequality holds on elements of this space and we have \begin{align}\label{weighted L2 statement} \int_{\mathscr{H}^+_{\geq0}} dv \sin\theta d\theta d\phi \frac{|\Psi_{\mathscr{H}^+}|^2}{v^2+1}\lesssim\|\Psi_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}}<\infty. \end{align} \end{enumerate} \end{remark} \begin{defin} Define the space $\mathcal{E}^T_{\mathscr{I}^+}$ to be the completion of $\Gamma_c(\mathscr{I}^+)$ under the norm \begin{align} \|\Psi\|_{\mathcal{E}_{\mathscr{I}^+}^{T}}^2=\int_{\mathscr{I}^+}|\partial_u\Psi|^2\sin\theta d\theta d\phi du. \end{align} \end{defin} \begin{defin} Define the space $\mathcal{E}_{\mathscr{H}^-}^{T}$ to be the completion of $\Gamma_c (\mathscr{H}^-)$ under the norm \begin{align} ||\Psi||_{\mathcal{E}_{\mathscr{H}^-}^{T}}^2=\int_{\mathscr{H}^-}|\partial_u\Psi|^2\sin\theta d\theta d\phi du. \end{align} The space $\mathcal{E}^T_{\overline{\mathscr{H}^-}}$ is similarly defined.
\end{defin} \begin{defin} Define the space $\mathcal{E}_{\mathscr{I}^-}^T$ to be the completion of $\Gamma_c(\mathscr{I}^-)$ under the norm \begin{align} \|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^-}}^2=\int_{\mathscr{I}^-} |\partial_v \Psi|^2 dv\sin\theta d\theta d\phi. \end{align} \end{defin} \begin{remark} Similar statements to \Cref{Subspace of L2} apply to the norms $\|\;\;\|_{\mathcal{E}^T_{\mathscr{H}^\pm}}, \|\;\;\|_{\mathcal{E}^T_{\overline{\mathscr{H}^\pm}}}, \|\;\;\|_{\mathcal{E}^T_{\mathscr{I}^\pm}}$; they are positive-definite on smooth, compactly supported data on the respective regions of $\overline{\mathscr{M}}$, thus they define normed spaces which extend to Hilbert spaces $\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}, \mathcal{E}^T_{\mathscr{H}^\pm}, \mathcal{E}^T_{\overline{\mathscr{H}^\pm}}, \mathcal{E}^T_{\mathscr{I}^\pm}$ upon completion. Elements of these spaces can be identified with tensor fields in $L^2_{loc}$ on the respective regions, for which a similar statement to \bref{weighted L2 statement} applies. \end{remark} \begin{thm}\label{forwardRW} Let $(\uppsi,\uppsi')\in\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$. Then the corresponding unique solution $\Psi$ to \bref{RW} given by \Cref{RWwpCauchy} on $J^+(\Sigma^*)$ induces smooth radiation fields $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})\in \Gamma(\mathscr{H}^+_{\geq0})\oplus\Gamma(\mathscr{I}^+)$ as in definitions \ref{RW future rad field scri} and \ref{RWonH}, with $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ satisfying \begin{align} \left|\left|(\uppsi,\uppsi')\right|\right|_{\mathcal{E}^T_{\Sigma^*}}^2=\left|\left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\left|\left|\bm{\uppsi}_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}^2. \end{align} This extends to a map \begin{align} \mathscr{F}^+: \mathcal{E}^T_{\Sigma^*}\longrightarrow \mathcal{E}^T_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^T_{\mathscr{I}^+}.
\end{align} Analogously, forward evolution from smooth compactly supported data on $\Sigma$ or $\overline{\Sigma}$ extends to the maps \begin{align} \mathscr{F}^+:\mathcal{E}^T_\Sigma \longrightarrow \mathcal{E}^T_{\mathscr{{H}^+}} \oplus \mathcal{E}^T_{\mathscr{I}^+},\\ \mathscr{F}^+:\mathcal{E}^T_{\overline{\Sigma}} \longrightarrow \mathcal{E}^T_{\overline{\mathscr{{H}^+}}} \oplus \mathcal{E}^T_{\mathscr{I}^+}. \end{align} \end{thm} \begin{thm}\label{backwardRW} Let $\bm{\uppsi}_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \bm{\uppsi}_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$. Then there exists a unique smooth solution $\Psi$ to \cref{RW} in $J^+(\Sigma^*)$ such that \begin{align} \lim_{v\longrightarrow\infty} \Psi(u,v,\theta^A)=\bm{\uppsi}_{\mathscr{I}^+},\qquad\qquad \Psi\big|_{\mathscr{H}^+_{\geq0}}=\bm{\uppsi}_{\mathscr{H}^+}, \end{align} with $\left|\left|(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\right|\right|_{\mathcal{E}^T_{\Sigma^*}}^2=\left|\left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\left|\left|\bm{\uppsi}_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}^2$. This extends to a map \begin{align} \mathscr{B}^-: \mathcal{E}^T_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{{\Sigma^*}}, \end{align} which inverts the map $\mathscr{F}^+$ of \Cref{forwardRW}. Thus $\mathscr{F}^+, \mathscr{B}^-$ are unitary Hilbert space isomorphisms and \begin{align} \mathscr{B}^-\circ\mathscr{F}^+=\mathscr{F}^+\circ\mathscr{B}^-=Id. \end{align} Similar statements apply to produce maps \begin{align} \mathscr{B}^-: \mathcal{E}^T_{\mathscr{{H}^+}} \oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_\Sigma,\\ \mathscr{B}^-: \mathcal{E}^T_{\overline{\mathscr{{H}^+}}} \oplus \mathcal{E}^T_{\mathscr{I}^+} \longrightarrow \mathcal{E}^T_{\overline{\Sigma}}.
\end{align} \end{thm} \begin{thm}\label{RW isomorphisms} Analogously to \cref{forwardRW,backwardRW}, there exist bounded maps \begin{align} \mathscr{F}^-:\mathcal{E}^T_\Sigma\longrightarrow \mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-},\qquad\qquad\qquad \mathscr{B}^+:\mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_\Sigma, \end{align} \begin{align} \mathscr{F}^-:\mathcal{E}^T_{\overline{\Sigma}}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-},\qquad\qquad\qquad \mathscr{B}^+:\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\overline\Sigma}, \end{align} such that $\mathscr{F}^-\circ\mathscr{B}^+=\mathscr{B}^+\circ\mathscr{F}^-=Id$ on the respective domains. The maps \begin{align} \mathscr{S}=\mathscr{F}^+\circ\mathscr{B}^+:\mathcal{E}^T_{\mathscr{H}^-}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\mathscr{H}^+}\oplus\mathcal{E}^T_{\mathscr{I}^+},\\ \mathscr{S}=\mathscr{F}^+\circ\mathscr{B}^+:\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^T_{\mathscr{I}^-}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^T_{\mathscr{I}^+} \end{align} constitute unitary Hilbert space isomorphisms with inverses \begin{align} \mathscr{S}^-=\mathscr{F}^-\circ\mathscr{B}^-:\mathcal{E}^T_{\mathscr{H}^+}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{\mathscr{H}^-}\oplus\mathcal{E}^T_{\mathscr{I}^-},\\ \mathscr{S}^-=\mathscr{F}^-\circ\mathscr{B}^-:\mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^T_{\mathscr{I}^+}\longrightarrow \mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^T_{\mathscr{I}^-} \end{align} on the respective domains.
\end{thm} \begin{remark}\label{RW distinct spaces} We emphasise that the spaces $\mathcal{E}^T_{\Sigma}$ and $\mathcal{E}^T_{\overline{\Sigma}}$ are different and $\mathcal{E}^T_{\Sigma}\subsetneq\mathcal{E}^T_{\overline{\Sigma}}$. Similarly, $\mathcal{E}^T_{\mathscr{H}^+}\subsetneq\mathcal{E}^T_{\overline{\mathscr{H}^+}}$. Our prescription in distinguishing between these spaces is consistent in the sense that elements of $\mathcal{E}^T_{\Sigma}$ are mapped into $\mathcal{E}^T_{\mathscr{H}^+}$ and vice versa. Our point of view is that the spaces $\mathcal{E}^T_{\overline{\Sigma}}, \mathcal{E}^T_{\overline{\mathscr{H}}^\pm}$ are the natural spaces to consider, since in these spaces scattering data are not restricted to vanish at the bifurcation sphere $\mathcal{B}$. It is however useful to have the statements involving $\mathcal{E}^T_{{\Sigma}}, \mathcal{E}^T_{{\mathscr{H}}^\pm}$. In particular, solutions arising from past scattering data identically vanishing on $\mathscr{H}^-$ will lie in these spaces. \end{remark} \subsection{Theorem 2: Scattering for the Teukolsky equations of spins $\pm2$}\label{subsection 4.2 scattering for the teukolsky equations of spins +,-2} \subsubsection{Scattering for the +2 Teukolsky equation}\label{subsubsection 4.2.1 scattering for the +2 equation} \begin{defin}\label{+2 norm on Sigma} Let $(\upalpha,\upalpha')\in\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be Cauchy data for \bref{T+2} on $\Sigma^*$ giving rise to a solution $\alpha$. Define the space $\mathcal{E}^{T,+2}_{\Sigma^*}$ to be the completion of $\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm \begin{align} ||(\upalpha,\upalpha')||_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=||(\Psi,\slashed{\nabla}_{n_{\Sigma^*}}\Psi)||_{\mathcal{E}^{T}_{\Sigma^*}}^2, \end{align} where $\Psi$ is the weighted second derivative $\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha$ of $\alpha$.
The spaces $\mathcal{E}^{T,+2}_{\Sigma}$, $\mathcal{E}^{T,+2}_{\overline{\Sigma}}$ are similarly defined. \end{defin} We immediately note the following: \begin{proposition}\label{+2 norm on Sigma is coercive} $\|\;\|_{\mathcal{E}^{T,+2}_{\Sigma}}$ indeed defines a norm on $\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$. Similar statements hold for $\|\;\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}, \|\;\|_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}$. \end{proposition} \begin{proof} It suffices to check that $\|(\upalpha,\upalpha')\|_{\mathcal{E}^{T,+2}_{\Sigma}}=0$ for a smooth, compactly supported pair $(\upalpha,\upalpha')$ implies that $(\upalpha,\upalpha')=(0,0)$. Let $\alpha$, $\Psi$ be as in \Cref{+2 norm on Sigma}. It is clear that $\Psi=0$, and \bref{eq:d4d4Psi} implies \begin{align}\label{424242} \slashed{\nabla}_T r\Omega^2\alpha=\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)r\Omega^2\alpha. \end{align} \Cref{eq:d4Psi} implies that on $\Sigma$ \begin{align} \left(\mathcal{A}_2-2+\frac{6M}{r}\right)\left(\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)-\slashed{\nabla}_{R^*}\right)r\Omega^2\alpha-6M\frac{\Omega^2}{r^2}r\Omega^2\alpha=0. \end{align} Take $F=\left(\mathcal{A}_2-2+\frac{6M}{r}\right)r\Omega^2\alpha$, then the above says $\slashed{\nabla}_{R^*}F=\frac{1}{12M} \mathcal{A}_2\left(\mathcal{A}_2-2\right)F-12M\frac{\Omega^2}{r^2}r\Omega^2\alpha$. We integrate over the region $R_0<r<R$ on $\Sigma$: \begin{align} \begin{split} \|F\|^2_{S^2,r=R}=\|F\|^2_{S^2,r=R_0}+\int_{\Sigma\cap\{R_0<r<R\}} \frac{1}{6M}&\left\{|\mathcal{A}_2F|^2+2|\mathring{\slashed{\nabla}}F|^2+4|F|^2\right\}\\&+24M\frac{\Omega^2}{r^2}\left\{|\mathring{\slashed{\nabla}}r\Omega^2\alpha|^2+\left(4-\frac{6M}{r}\right)|r\Omega^2\alpha|^2\right\}. \end{split} \end{align} This implies $\|F\|^2_{S^2,r=R}\geq \|F\|^2_{S^2,r=R_0}$ (notice that the integral on the right hand side remains non-negative by Poincar\'e's inequality).
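To spell out the monotonicity step (a routine observation): every term in the bulk integral above is non-negative, the first brace manifestly so, and the second since $|\mathring{\slashed{\nabla}}r\Omega^2\alpha|^2\geq0$ and $4-\frac{6M}{r}\geq 1$ on $\Sigma$, where $r\geq 2M$; differentiating in $R$ therefore gives
\begin{align*}
\partial_R\|F\|^2_{S^2,r=R}\geq 0.
\end{align*}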
If the data are compactly supported then $F$ must vanish everywhere on $\Sigma$, and the vanishing of $F$ implies the vanishing of $\Omega^2\alpha$ for smooth $\alpha$ since the operator $\mathcal{A}_2-2+\frac{6M}{r}$ is uniformly elliptic on the set of symmetric, traceless 2-tensor fields on $S^2$. This in turn implies the vanishing of $\slashed{\nabla}_T\Omega^2\alpha$ by \bref{424242}, and hence $(\upalpha,\upalpha')=(0,0)$. We can repeat this argument for data on $\Sigma^*, \overline{\Sigma}$. \end{proof} \begin{defin} Define the space of future scattering states $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq 0}}$ on $\mathscr{H}^+_{\geq0}$ to be the completion of $\Gamma_c (\mathscr{H}^+_{\geq 0})$ under the norm \begin{align}\label{+2 scattering norm on H+} \begin{split} &\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\left\|6M\partial_v \left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\&+\int_{S^2}\sin\theta d\theta d\phi \left(\left|\mathring{\slashed{\Delta}}\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\right|^2+6\left|\mathring{\slashed{\nabla}}\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\right|^2+8\Big|\int_{\bar{v}=0}^\infty d\bar{v}\; e^{-\frac{\bar{v}}{2M}}A\Big|^2\right). \end{split} \end{align} Define the space $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$ to be the completion of $\Gamma_c(\mathscr{H}^+)$ under the norm \begin{align} \|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+)}+\left\|6M\partial_v\left(\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A\right)\right\|^2_{L^2(\mathscr{H}^+)}.
\end{align} Define the space $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ to be the completion of the space consisting of symmetric traceless $S^2_{\infty,v}$ 2-tensor fields $A$ on $\overline{\mathscr{H}^+}$ such that $V^{-2}A\in \Gamma_c \left(\overline{\mathscr{H}^+}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^+}$. \end{defin} \begin{remark}\label{+2 norm is norm on H+} Let $A\in\Gamma_c(\mathscr{H}_{\geq0}^+)$. If $\|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}=0$ then $\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}A=0$ for all $v$, which implies that $A$ must vanish if it is smooth. Thus $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}$ defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$. The same applies to $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$. \end{remark} \begin{defin} Define the space of future scattering states $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ on $\mathscr{I}^+$ to be the completion of $\Gamma_c (\mathscr{I}^+)$ under the norm \begin{align} \|A\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}=\left|\left|\partial_u^3A\right|\right|_{L^2(\mathscr{I}^+)}. \end{align} \end{defin} \begin{remark}\label{+2 norm is norm on scri+} The energy $\|\;\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}$ indeed defines a norm on $\Gamma_c(\mathscr{I}^+)$, which thus extends to a Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{I}^+}^{T,+2}}$. We can identify $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ as the set of $A\in L^2_{loc}(\mathscr{I}^+)$ satisfying \begin{itemize} \item $\partial_u^3 A\in L^2(\mathscr{I}^+)$, \item $\lim_{u\longrightarrow\infty} \|A\|_{L^2(S^2)}=0$.
\end{itemize} Hardy's inequality holds and we have on this subset \begin{align} \int_{\mathscr{I}^+} du \sin\theta d\theta d\phi \frac{|A|^2}{u^6+1}\lesssim\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}} <\infty. \end{align} \end{remark} \begin{defin}\label{+2 backwards scattering H} Define the space of past scattering states $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$ on $\mathscr{H}^-$ to be the completion of $\Gamma_c (\mathscr{H}^-)$ under the norm \begin{align}\label{this 2424} \|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}=\left|\left|2(2M\partial_u)A-3(2M\partial_u)^2A+(2M\partial_u)^3A\right|\right|_{L^2(\mathscr{H}^-)}. \end{align} Define the space $\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ to be the closure of the space consisting of symmetric traceless $S^2_{u,-\infty}$ 2-tensor fields $A$ on $\overline{\mathscr{H}^-}$ such that $U^2A\in \Gamma_c \left(\overline{\mathscr{H}^-}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^-}$. \end{defin} \begin{remark}\label{+2 norm is norm on H-} As mentioned in \Cref{introduction regular frame norm} of Section 1.3.2 of the introduction, the energy defined in \bref{this 2424} can be written using the Kruskal frame as \begin{align} \|A\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}=\|U^{1/2}\partial_U^3U^2A\|_{L^2_UL^2(S^2)}. \end{align} This defines a norm on $\Gamma_c(\mathscr{H}^-)$, which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$. It is possible to represent the elements of $\mathcal{E}^{T,+2}_{\mathscr{H}^-}$ as the set of $A\in L^2_{loc}(\mathscr{H}^-)$ satisfying \begin{itemize} \item $\partial_uA$, $\partial_u^2 A$, $\partial_u^3 A \in L^2(\mathscr{H}^-)$, \item $\lim_{u\longrightarrow -\infty} \|A\|_{L^2(S^2)}=0$. \end{itemize} Hardy's inequality holds on this space and we have \begin{align} \int_{\mathscr{H}^-} du \sin\theta d\theta d\phi \frac{|A|^2}{u^2+1}\lesssim\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}<\infty.
\end{align} \end{remark} \begin{defin}\label{+2 backwards scattering scri} Define the space of past scattering states $\mathcal{E}^{T,+2}_{\mathscr{I}^-}$ on $\mathscr{I}^-$ to be the completion of the space \begin{align} A\in\Gamma_c(\mathscr{I}^-): \int_{-\infty}^\infty dv\;A=0 \end{align} under the norm \begin{align} \|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}=&\int_{\mathscr{I}^-} d\bar{v}\sin\theta d\theta d\phi\left[ (6M)^2|A|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{\bar{v}}^\infty dv\; A\right| ^2\right]. \end{align} \end{defin} \begin{remark}\label{+2 norm is norm on scri-} Let $A\in\Gamma_c(\mathscr{I}^-)$. If $\|A\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}=0$ then $A=0$. Thus $\|\;\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}$ defines a norm on $\Gamma_c(\mathscr{I}^-)$ which then extends to the Hilbert space $\mathcal{E}^{T,+2}_{\mathscr{I}^-}$. \end{remark} \begin{thm}\label{+2 future forward scattering} Forward evolution under the $+2$ Teukolsky equation \bref{T+2} from smooth, compactly supported data $(\upalpha,\upalpha')$ on $\Sigma^*$ gives rise to smooth radiation fields $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\in \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ where \begin{enumerate} \item $\upalpha_{\mathscr{H}^+}=2M\Omega^{2}{\alpha}\big|_{\mathscr{H}^+_{\geq0}} \in \Gamma(\mathscr{H}^+_{\geq0})$, \item $\upalpha_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} r^5\alpha(u,v,\theta^A)$, with $\upalpha_{\mathscr{I}^+}\in \Gamma(\mathscr{I}^+)$, \end{enumerate} with $\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ satisfying \begin{align} \left|\left|(\upalpha,\upalpha')\right|\right|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}^2.
\end{align} This extends to a unitary map \begin{align} {}^{(+2)}\mathscr{F}^+: \mathcal{E}^{T,+2}_{\Sigma^*}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}. \end{align} The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, $(\upalpha,\upalpha')$ must be consistent with the well-posedness statement \Cref{WP+2Sigma*} and consequently we obtain that $V^{-2}\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$. \end{thm} \begin{thm}\label{+2 future backward scattering} Let $\upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \upalpha_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^+(\Sigma^*)$ which is smooth, such that \begin{align} \lim_{v\longrightarrow\infty} r^5\alpha(u,v,\theta^A)=\upalpha_{\mathscr{I}^+},\qquad\qquad \Omega^{2}\alpha\big|_{\mathscr{H}^+_{\geq0}}=\upalpha_{\mathscr{H}^+}, \end{align} with $(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\in \mathcal{E}^{T,+2}_{\Sigma^*} $ and $ \left\|(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\right\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\left\|\upalpha_{\mathscr{I}^+}\right\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}^2$. This extends to a unitary map \begin{align} {}^{(+2)}\mathscr{B}^-: \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{{\Sigma^*}}, \end{align} which inverts the map ${}^{(+2)}\mathscr{F}^+$ of \Cref{+2 future forward scattering}: \begin{align} {}^{(+2)}\mathscr{B}^-\circ{}^{(+2)}\mathscr{F}^+={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^-=Id.
\end{align} The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, we require that $V^{-2}\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$, and with that $(\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\alpha|_{\Sigma^*})$ is consistent with \Cref{WP+2Sigma*}. \end{thm} \begin{thm}\label{+2 past forward scattering} Evolution from $(\upalpha,\upalpha')\in\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$ to $J^-(\Sigma)$ gives rise to radiation fields on $\mathscr{H}^-,\mathscr{I}^-$ analogously to \Cref{+2 future forward scattering}, where the radiation fields are defined by \begin{align} \lim_{u\longrightarrow-\infty} r\alpha(u,v,\theta^A)=\upalpha_{\mathscr{I}^-},\qquad\qquad 2M\Omega^{-2}\alpha\big|_{\mathscr{H}^-}=\upalpha_{\mathscr{H}^-}. \end{align} This extends to a unitary map \begin{align} {}^{(+2)}\mathscr{F^-}: \mathcal{E}^{T,+2}_{\Sigma}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}, \end{align} with inverse ${}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\Sigma}$. The same conclusions apply when replacing $\Sigma$ with $\overline{\Sigma}$ and $\mathscr{H}^-$ with $\overline{\mathscr{H}^-}$. In this case, we require that $(U^{2}\Omega^{-2}\upalpha,U^{2}\Omega^{-2}\upalpha')$ are smooth up to and including $\mathcal{B}$, and consequently we obtain that $U^{2}\upalpha_{{\mathscr{H}^-}}\in \Gamma(\overline{\mathscr{H}^-})$.
\end{thm} \begin{thm}\label{scatteringthm+2} The maps \begin{align} {}^{(+2)}\mathscr{S}&={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+},\\ {}^{(+2)}\mathscr{S}&={}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+} \end{align} constitute unitary Hilbert space isomorphisms with inverses \begin{align} {}^{(+2)}\mathscr{S}^-={}^{(+2)}\mathscr{F}^-\circ{}^{(+2)}\mathscr{B}^-:\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\mathscr{H}^-}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\\ {}^{(+2)}\mathscr{S}^-={}^{(+2)}\mathscr{F}^-\circ{}^{(+2)}\mathscr{B}^-:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-} \end{align} on the respective domains. \end{thm} \subsubsection{Scattering for the $-2$ Teukolsky equation}\label{subsubsection 4.2.2 Scattering for the -2 equation} \begin{defin}\label{-2 norm on Sigma*} Let $(\underline\upalpha,\underline\upalpha')\in\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ be Cauchy data for \bref{T-2} on $\Sigma^*$ giving rise to a solution $\underline\alpha$.
Define the space $\mathcal{E}^{T,-2}_{\Sigma^*}$ to be the completion of $\Gamma_c(\Sigma^*)\oplus\Gamma_c(\Sigma^*)$ under the norm \begin{align}\label{equivnorm-2} ||(\underline\upalpha,\underline\upalpha')||_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=||(\underline\Psi,\slashed{\nabla}_{n_{\Sigma^*}}\underline\Psi)||_{\mathcal{E}^{T}_{\Sigma^*}}^2, \end{align} where $\underline\Psi$ is the weighted second derivative $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$ of $\underline\alpha$. The spaces $\mathcal{E}^{T,-2}_{\Sigma}$, $\mathcal{E}^{T,-2}_{\overline{\Sigma}}$ are similarly defined. \end{defin} \begin{proposition}\label{-2 norm on Sigma is coercive} $\|\;\|_{\mathcal{E}^{T,-2}_{\Sigma}}$ indeed defines a norm on $\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$. \end{proposition} \begin{proof} It suffices to check that $\|(\underline\upalpha,\underline\upalpha')\|_{\mathcal{E}^{T,-2}_{\Sigma}}=0$ implies $(\underline\upalpha,\underline\upalpha')=(0,0)$. Let $\underline\alpha$ and $\underline\Psi$ be as in \Cref{-2 norm on Sigma*}. It is clear that $\|(\underline\upalpha,\underline\upalpha')\|_{\mathcal{E}^{T,-2}_{\Sigma}}=0$ implies $\underline\Psi=0$. \Cref{eq:d3d3psibar} implies that \begin{align} \slashed{\nabla}_T r\Omega^2\underline\alpha=-\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)r\Omega^2\underline\alpha. \end{align} \Cref{eq:d3psibar} then gives us \begin{align}\label{this2323} \left[\mathcal{A}_2+2-\frac{6M}{r}\right]\left(\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)-\slashed{\nabla}_{R^*}\right)r\Omega^2\underline\alpha+6M\frac{\Omega^2}{r^2}r\Omega^2\underline\alpha=0. \end{align} Let $\underline{F}=\left(\mathring{\slashed{\Delta}}-\frac{6M}{r}\right)r\Omega^2\underline\alpha$, then \bref{this2323} above implies that $\slashed{\nabla}_{R^*}\underline{F}=\frac{1}{12M}\mathcal{A}_2(\mathcal{A}_2-2)\underline{F}$. The result follows similarly to \Cref{+2 norm on Sigma is coercive}.
\end{proof} \begin{defin} Define the space of future scattering states $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq 0}}$ on $\mathscr{H}^+_{\geq0}$ to be the completion of $\Gamma_c(\mathscr{H}^+_{\geq0})$ under the norm \begin{align} \|\underline{A}\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}=(2M)^2\left|\left|2(2M\partial_v)\underline{A}+3(2M\partial_v)^2\underline{A}+(2M\partial_v)^3\underline{A}\right|\right|_{L^2(\mathscr{H}^+_{\geq0})}. \end{align} The space $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$ is defined by the same norm taken over $\mathscr{H}^+$. Define $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ to be the closure of the space consisting of symmetric traceless $S^2_{\infty,v}$ 2-tensor fields $\underline{A}$ on $\overline{\mathscr{H}^+}$ such that $V^{2}\underline{A}\in \Gamma_c \left(\overline{\mathscr{H}^+}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^+}$. \end{defin} \begin{remark}\label{-2 norm is norm on H+} As with \Cref{+2 norm is norm on H-} on $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^-}}$, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}$ indeed defines a norm on $\Gamma_c(\mathscr{H}^+_{\geq0})$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$. It is possible to represent the elements of $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$ as the set of $\underline{A}\in L^2_{loc}(\mathscr{H}^+_{\geq0})$ satisfying \begin{itemize} \item $\partial_v \underline{A}$, $\partial_v^2 \underline{A}$, $\partial_v^3 \underline{A} \in L^2(\mathscr{H}^+_{\geq0})$, \item $\lim_{v\longrightarrow \infty} \|\underline{A}\|_{L^2(S^2)}=0$. \end{itemize} Hardy's inequality holds on this space and we have \begin{align} \int_{\mathscr{H}^+_{\geq0}} dv \sin\theta d\theta d\phi \frac{|\underline{A}|^2}{v^2+1}\lesssim\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}<\infty.
\end{align} Similar statements apply to $\mathcal{E}^{T,-2}_{{\mathscr{H}^+}}$, $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$. \end{remark} \begin{defin} Define the space of future scattering states $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ on $\mathscr{I}^+$ to be the completion of the space \begin{align} \left\{\underline{A}\in \Gamma_c (\mathscr{I}^+): \int_{-\infty}^{\infty} du\,\underline{A}=0\right\} \end{align} under the norm \begin{align}\label{-2 tricky norm at scri} \|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}=&\int_{\mathscr{I}^+} d{u}\sin\theta d\theta d\phi\left[ (6M)^2|\underline{A}|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{{u}}^\infty d\bar{u}\; \underline{A}\right| ^2\right]. \end{align} \end{defin} \begin{remark}\label{-2 norm on scri+} As with $\|\;\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}$ and \Cref{+2 norm is norm on scri-}, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}$ indeed defines a norm on $\Gamma_c({\mathscr{I}^+})$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}$. \end{remark} \begin{defin} Define the space $\mathcal{E}^{T,-2}_{\mathscr{H}^-}$ to be the completion of $\Gamma_c(\mathscr{H}^-)$ under the norm \begin{align} \|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^-}}=\left\|\mathcal{A}_2(\mathcal{A}_2-2)\left(\int^u_{-\infty} d\bar{u}\; e^{\frac{1}{2M}(u-\bar{u})}\underline{A}\right)\right\|^2_{L^2(\mathscr{H}^-)}+\left\|6M\partial_u\left( \int^u_{-\infty} d\bar{u}\; e^{\frac{1}{2M}(u-\bar{u})}\underline{A}\right)\right\|^2_{L^2(\mathscr{H}^-)}. \end{align} Define the space $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ to be the completion of the space consisting of symmetric traceless $S^2_{u,-\infty}$ 2-tensor fields $\underline{A}$ on $\overline{\mathscr{H}^-}$ such that $U^{-2}\underline{A}\in \Gamma_c \left(\overline{\mathscr{H}^-}\right)$, under the same norm above evaluated over $\overline{\mathscr{H}^-}$.
\end{defin} \begin{remark}\label{-2 norm is norm on H-} As with $\|\;\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ and \Cref{-2 norm is norm on H+}, the energy $\|\;\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}$ indeed defines a norm on $\Gamma_c(\overline{\mathscr{H}^-})$, which then extends to the Hilbert space $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$. \end{remark} \begin{defin}\label{-2 norm on scri-} Define the space of past scattering states $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ on $\mathscr{I}^-$ to be the completion of $\Gamma_c (\mathscr{I}^-)$ under the norm \begin{align} \|\underline{A}\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}=\left|\left|\partial_v^3\underline{A}\right|\right|_{L^2(\mathscr{I}^-)}. \end{align} \end{defin} \begin{remark}\label{-2 norm is norm on scri-} The energy $\|\;\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}$ indeed defines a norm on $\Gamma_c(\mathscr{I}^-)$, which thus extends to a Hilbert space $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ when completed under $\|\;\|_{\mathcal{E}_{\mathscr{I}^-}^{T,-2}}$. We can identify $\mathcal{E}^{T,-2}_{\mathscr{I}^-}$ as the subset $\underline{A}\in L^2_{loc}(\mathscr{I}^-)$ whose elements satisfy \begin{itemize} \item $\partial_v \underline{A}$, $\partial_v^2 \underline{A}$, $\partial_v^3 \underline{A}\in L^2(\mathscr{I}^-)$, \item $\lim_{v\longrightarrow-\infty} \|\underline{A}\|_{L^2(S^2)}=0$. \end{itemize} Hardy's inequality holds on this subset and we have \begin{align} \int_{\mathscr{I}^-} dv \sin\theta d\theta d\phi \frac{|\underline{A}|^2}{v^6+1}\lesssim\|\underline{A}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}} <\infty.
\end{align} \end{remark} \begin{thm}\label{-2 future forward scattering} Forward evolution under the $-2$ Teukolsky equation \bref{T-2} from smooth, compactly supported data $(\underline\upalpha,\underline\upalpha')$ on $\Sigma^*$ gives rise to smooth radiation fields $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\in \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ where \begin{enumerate} \item $\underline\upalpha_{\mathscr{H}^+}=2M\Omega^{-2}{\underline\alpha}\big|_{\mathscr{H}^+} \in \Gamma(\mathscr{H}^+)$, \item $\underline\upalpha_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} r\underline\alpha(v,u,\theta^A)$, with $\underline\upalpha_{\mathscr{I}^+}\in \Gamma(\mathscr{I}^+)$, \end{enumerate} with $\underline\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{H}^+}$ satisfying \begin{align} \left|\left|(\underline\upalpha,\underline\upalpha')\right|\right|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2. \end{align} This extends to a unitary map \begin{align} {}^{(-2)}\mathscr{F^+}: \mathcal{E}^{T,-2}_{\Sigma^*}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}. \end{align} The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$. In the latter case, $(\underline\upalpha,\underline\upalpha')$ must be consistent with the well-posedness statement $\Cref{WP-2Sigma*}$ and consequently we obtain that $V^{2}\underline\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$. 
\end{thm} \begin{thm}\label{-2 future backward scattering} Let $\underline\upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+), \underline\upalpha_{\mathscr{H}^+} \in \Gamma_c (\mathscr{H}^+_{\geq0})$ with $\int_{-\infty}^\infty d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$. Then there exists a unique smooth solution $\underline\alpha$ to \cref{T-2} in $J^+(\Sigma^*)$ such that \begin{align} \lim_{v\longrightarrow\infty} r\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^+},\qquad\qquad 2M\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+_{\geq0}}=\underline\upalpha_{\mathscr{H}^+}, \end{align} with $(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})\in \mathcal{E}^{T,-2}_{\Sigma^*} $ and $ \left|\left|(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})\right|\right|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2$. This extends to a unitary map \begin{align} {}^{(-2)}\mathscr{B}^-: \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{{\Sigma^*}}, \end{align} which inverts the map ${}^{(-2)}\mathscr{F}^+$ of \Cref{-2 future forward scattering}: \begin{align} {}^{(-2)}\mathscr{B}^-\circ{}^{(-2)}\mathscr{F}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^-=Id. \end{align} The same conclusions apply when replacing $\Sigma^*$ with $\Sigma$ and $\mathscr{H}^+_{\geq0}$ with $\mathscr{H}^+$, or when replacing with $\overline\Sigma$ and $\overline{\mathscr{H}^+}$.
In the latter case, we require that $V^2\underline\upalpha_{{\mathscr{H}^+}}\in \Gamma(\overline{\mathscr{H}^+})$, and with that $(\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\underline\alpha|_{\Sigma^*})$ is consistent with \Cref{WP-2Sigma*}. \end{thm} \begin{thm}\label{-2 past forward scattering} Evolution from $(\underline\upalpha,\underline\upalpha')\in\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)$ to $J^-(\Sigma)$ gives rise to radiation fields on $\mathscr{H}^-,\mathscr{I}^-$ analogously to \Cref{+2 future forward scattering}, where the radiation fields are defined by \begin{align} \lim_{u\longrightarrow-\infty} r^5\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^-},\qquad\qquad 2M\Omega^{2}\underline\alpha\big|_{\mathscr{H}^-}=\underline\upalpha_{\mathscr{H}^-}. \end{align} This extends to a unitary map \begin{align} {}^{(-2)}\mathscr{F^-}: \mathcal{E}^{T,-2}_{\Sigma}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-} \end{align} with inverse ${}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\Sigma}$. The same conclusions apply when replacing $\Sigma$ with $\overline{\Sigma}$ and $\mathscr{H}^-$ with $\overline{\mathscr{H}^-}$.
In this case, we require that $(U^{-2}\Omega^2\underline\alpha,U^{-2}\Omega^2\underline\alpha')$ are smooth up to and including $\mathcal{B}$, and consequently we obtain that $U^{-2}\underline\upalpha_{{\mathscr{H}^-}}\in \Gamma(\overline{\mathscr{H}^-})$. \end{thm} \begin{thm}\label{scatteringthm-2} The maps \begin{align} &{}^{(-2)}\mathscr{S}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+},\\ &{}^{(-2)}\mathscr{S}^+={}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^+:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+} \end{align} constitute unitary Hilbert space isomorphisms with inverses \begin{align} {}^{(-2)}\mathscr{S}^-={}^{(-2)}\mathscr{F}^-\circ{}^{(-2)}\mathscr{B}^-:\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^-}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-},\\ {}^{(-2)}\mathscr{S}^-={}^{(-2)}\mathscr{F}^-\circ{}^{(-2)}\mathscr{B}^-:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-} \end{align} on the respective domains. \end{thm} \begin{remark} We emphasise that the spaces $\mathcal{E}^{T,\pm2}_{\Sigma}$ and $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$ are different and $\mathcal{E}^{T,\pm2}_{\Sigma}\subsetneq\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}$. Similarly, $\mathcal{E}^{T,\pm2}_{\mathscr{H}^+}\subsetneq\mathcal{E}^{T,\pm2}_{\overline{\mathscr{H}^+}}$. Our prescription in distinguishing between these spaces is consistent in the sense that elements of $\mathcal{E}^{T,\pm2}_{\Sigma}$ are mapped into $\mathcal{E}^{T,\pm2}_{\mathscr{H}^+}$ and vice versa.
As mentioned for the Regge--Wheeler equation \bref{RW} in \Cref{RW distinct spaces}, our point of view is that the spaces $\mathcal{E}^{T,\pm2}_{\overline{\Sigma}}, \mathcal{E}^{T,\pm2}_{\overline{\mathscr{H}}^\pm}$ are the more natural spaces to consider, but as we make the distinction between these spaces, we additionally face the issue that the inclusion of the bifurcation sphere $\mathcal{B}$ in the domains of the scattering data requires studying both the equations \bref{T+2}, \bref{T-2} and their unknowns in a different frame near $\mathcal{B}$. \end{remark} \subsection{Theorem 3: The Teukolsky--Starobinsky correspondence}\label{subsection 4.3 the Teukolsky--Starobinsky identities} \begin{thm}\label{Theorem 3 detailed statement} Let $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$. There exists a unique $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)$ such that $\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^+}}}=\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}}$ and \begin{align}\label{constraint null infinity section 4} \partial_u^4\upalpha_{\mathscr{I}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}+6M\partial_u\Big]\underline{\upalpha}_{\mathscr{I}^+}. \end{align} An analogous statement applies starting from $\underline\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$ to obtain $\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)$ with $\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{{\mathscr{I}^+}}}=\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^+}}}$ satisfying \bref{constraint null infinity section 4}.\\ \indent Let $\underline\upalpha_{{\mathscr{H}^+}}$ be such that $V^2\underline\upalpha_{{\mathscr{H}^+}}\in\Gamma_c(\overline{\mathscr{H}^+})$.
There exists a unique $\upalpha_{\mathscr{H}^+}\in\Gamma(\overline{\mathscr{H}^+})$ such that $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ and \begin{align}\label{constraint horizon section 4} \partial_V^4 V^2\underline\upalpha_{\mathscr{H}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3V\partial_V-6\Big]V^{-2}\upalpha_{\mathscr{H}^+}. \end{align} An analogous statement applies starting from $\upalpha_{\mathscr{H}^+}$ such that $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$ to obtain $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\overline{\mathscr{H}^+})$ with $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ satisfying \bref{constraint horizon section 4}. \indent The statements above give rise to unitary Hilbert space isomorphisms \begin{align} \mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^+},\qquad\qquad\mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}, \end{align} which combine to give \begin{align} \mathcal{TS}^+=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}. \end{align} Let $\alpha$ be a solution to the $+2$ Teukolsky equation \bref{T+2} arising from scattering data $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$, $\upalpha_{\mathscr{H}^+}$ be such that $V^{-2}\upalpha_{\mathscr{H}^+}\in \Gamma_c(\overline{\mathscr{H}^+})$.
Using $\mathcal{TS}_{\mathscr{I}^+}, \mathcal{TS}_{\mathscr{H}^+}$ we can find a unique set of smooth scattering data $\underline\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{H}^+}$ on $\mathscr{I}^+, \mathscr{H}^+$ with $V^2\underline\upalpha_{\mathscr{H}^+}$ regular on $\overline{\mathscr{H}^+}$, giving rise to a solution $\underline\alpha$ to the $-2$ Teukolsky equation \bref{T-2} such that the constraints \begin{align} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3\alpha-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2{\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2{\underline\alpha}=0,\label{theorem constraint 1}\\ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3{\underline\alpha}-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\alpha+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\alpha=0\label{theorem constraint 2} \end{align} are satisfied by $\alpha, \underline\alpha$ on $\overline{\mathscr{M}}$. The data satisfy \begin{align}\label{unitarity} \|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2=\|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2,\qquad\qquad \|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}^2=\|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}^2. \end{align} \indent Analogously, let $\underline\alpha$ be a solution to the $-2$ Teukolsky equation \bref{T-2} arising from scattering data $\underline\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$, $\underline\upalpha_{\mathscr{H}^+}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in \Gamma_c(\overline{\mathscr{H}^+})$.
Then there exist unique smooth scattering data $\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{H}^+}$ on $\mathscr{I}^+, \mathscr{H}^+$ with $V^{-2}\upalpha_{\mathscr{H}^+}$ regular on $\overline{\mathscr{H}^+}$, giving rise to a solution $\alpha$ to the $+2$ Teukolsky equation \bref{T+2} such that $\alpha, \underline\alpha$ satisfy the constraints \bref{theorem constraint 1}, \bref{theorem constraint 2}.\\ \indent An analogous statement applies to scattering from $\mathscr{I}^-, \mathscr{H}^-$ and we have the isomorphism \begin{align} \mathcal{TS}^-=\mathcal{TS}_{\mathscr{H}^-}\oplus\mathcal{TS}_{\mathscr{I}^-}: \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}. \end{align} \end{thm} \subsection{Corollary 1: A mixed scattering statement for combined ($\alpha,\underline\alpha$)}\label{subsection 4.4 Corollary 1: mixed scattering} Importantly, we have the following corollary: \begin{corollary}\label{corollary to be proven} Let $\upalpha_{\mathscr{I}^-}\in\Gamma_c(\mathscr{I}^-)$ and $\underline\upalpha_{\mathscr{H}^-}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$. Then there exists a unique smooth pair $(\alpha, \underline\alpha)$ on $\mathscr{M}$, such that $\alpha$ solves \bref{T+2}, $\underline\alpha$ solves \bref{T-2}, $\alpha, \underline\alpha$ satisfy \bref{theorem constraint 1}, \bref{theorem constraint 2} and $\underline\alpha$ realises $\underline\upalpha_{\mathscr{H}^-}$ as its radiation field on $\overline{\mathscr{H}^-}$, $\alpha$ realises $\upalpha_{\mathscr{I}^-}$ as its radiation field on $\mathscr{I}^-$.
Moreover, the radiation fields of $\alpha$ and $\underline\alpha$ on $\overline{\mathscr{H}^+}, \mathscr{I}^+$ are such that \begin{align} \|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}^2\;+\;\|\underline\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}=\|\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,+2}_{{\mathscr{I}^-}}}^2\;+\;\|\underline\upalpha_{{\mathscr{H}^-}}\|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}. \end{align} This extends to a unitary Hilbert-space isomorphism \begin{align} \mathscr{S}^{-2,+2}:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}. \end{align} \end{corollary} \section{Scattering theory of the Regge--Wheeler equation}\label{section 5 scattering theory for RW} This section is devoted to proving \Cref{Theorem 1} in the introduction, whose detailed statement is contained in \Cref{forwardRW,backwardRW,RW isomorphisms}.\\ \indent We will first study in \Cref{subsection 5.2 subsection Radiation fields} the behaviour of future radiation fields belonging to solutions of the Regge--Wheeler equation \bref{RW} that arise from smooth, compactly supported data on $\Sigma^*$ using the estimates gathered in \Cref{subsection 5.1 Basic integrated boundedness and decay estimates}, and this will justify the definitions of radiation fields and spaces of scattering states made in \Cref{subsection 4.1 Theorem 1}. We will then prove \Cref{forwardRW} (in \Cref{subsection 5.3 the forwards scattering map}) and \Cref{backwardRW,RW isomorphisms} (in \Cref{subsubsection 5.4 the backwards scattering map}) for the case of data on $\Sigma^*$; most of what follows applies to $\Sigma$ and $\overline{\Sigma}$ unless otherwise stated.
\Cref{subsection 5.5 auxiliary results} contains additional results on backwards scattering that will become important later on in the study of the Teukolsky--Starobinsky identities in \Cref{section 9 TS correspondence}. \subsection{Basic integrated boundedness and decay estimates}\label{subsection 5.1 Basic integrated boundedness and decay estimates} Here we collect basic boundedness and decay results for (\ref{RW}) proven in \cite{DHR16}. In what follows $(\uppsi,\uppsi')$ is a smooth data set for \cref{RW} as in \Cref{RWwpCauchy}. \noindent $\bullet$ \emph{\textbf{Energy boundedness}} Let $X=T:=\Omega e_3+\Omega e_4$, multiply (\ref{RW}) by $\slashed{\nabla}_X\Psi$ and integrate by parts over $S^2$ to obtain \begin{align}\label{T derivative identity} \Omega\slashed{\nabla}_3\left[|\Omega\slashed{\nabla}_4\Psi|^2+\Omega^2|\slashed\nabla \Psi|^2+V|\Psi|^2\right]+\Omega\slashed{\nabla}_4\left[|\Omega\slashed{\nabla}_3\Psi|^2+\Omega^2|\slashed\nabla \Psi|^2+V|\Psi|^2\right]\stackrel{S^2}{\equiv}0. \end{align} For an outgoing null hypersurface $\mathscr{N}$ define \begin{align} F_{\mathscr{N}}^T[\Psi]:=\int_{\mathscr{N}}\sin\theta d\theta d\phi dv\left[|\Omega\slashed\nabla_4\Psi|^2+\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2\right]. \end{align} Similarly for an ingoing null hypersurface $\underline{\mathscr{N}}$ we define \begin{align} \underline{F}_{\underline{\mathscr{N}}}^T[\Psi]:=\int_{\underline{\mathscr{N}}}\sin\theta d\theta d\phi du\left[|\Omega\slashed\nabla_3\Psi|^2+\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2\right]. \end{align} Denote $F_{u}^T[\Psi](v_0,v)=F_{\mathscr{C}_{u}\cap\{\bar{v}\in[v_0,v]\}}^T[\Psi]$, $\underline{F}_{v}^T[\Psi](u_0,u)=\underline{F}_{\underline{\mathscr{C}}_{v}\cap\{\bar{u}\in[u_0,u]\}}^T[\Psi]$. Integrating \bref{T derivative identity} over the region $\mathscr{D}^{u,v}_{u_0,v_0}$ yields \begin{align} F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)= F^T_{u_0}[\Psi](v_0,v)+\underline{F}^T_{v_0}[\Psi](u_0,u).
\end{align} Similarly, integrating \bref{T derivative identity} over $J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)$ yields \begin{align} F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)=\mathbb{F}_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}[\Psi], \end{align} where $\mathbb{F}_{\Sigma^*}[\Psi]$ is given by \begin{align} \mathbb{F}_{\Sigma^*}[\Psi]= \int_{\Sigma^*}dr\sin\theta d\theta d\phi\; \left[(2-\Omega^2)|\slashed{\nabla}_{T^*}\Psi|^2+\Omega^2|\slashed{\nabla}_R\Psi|^2+|\slashed{\nabla}\Psi|^2+(3\Omega^2+1)\frac{|\Psi|^2}{r^2}\right], \end{align} with $\mathbb{F}_{\mathcal{U}}$ for a subset $\mathcal{U}\subset\Sigma^*$ defined analogously.\\ \indent Integrating \bref{T derivative identity} over $J^+(\Sigma)\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)$ instead yields a similar identity: \begin{align} F^T_u[\Psi](v_0,v)+\underline{F}^T_v[\Psi](u_0,u)=\mathbb{F}_{\Sigma\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}[\Psi], \end{align} with \begin{align} \mathbb{F}^T_{\Sigma}[\Psi]=\int_{\Sigma} \sin\theta dr d\theta d\phi \;\left[\frac{1}{\Omega^2}|\slashed{\nabla}_T\Psi|^2+\Omega^2|\slashed{\nabla}_R \Psi|^2+|\slashed{\nabla}\Psi|^2+(3\Omega^2+1)\frac{|\Psi|^2}{r^2}\right], \end{align} and similarly for $\overline{\Sigma}$.\\ \indent All of the energies defined here so far become degenerate at $\overline{\mathscr{H}^+}$. This can be remedied for energies defined over hypersurfaces that do not intersect the bifurcation sphere $\mathcal{B}$: we modify $X$ by a multiple of $\frac{1}{\Omega^2}T$ and repeat the procedure above as in \cite{DR08}, making crucial use of the positivity of the surface gravity of $\mathscr{H}^+$.
We then obtain the so-called ``redshift'' estimates: \begin{defin} Define the following nondegenerate energies \begin{align} F_{\mathscr{N}}[\Psi]=\int_{\mathscr{N}}\sin\theta d\theta d\phi dv \left[|\Omega\slashed\nabla_4\Psi|^2+|\slashed\nabla\Psi|^2+\frac{1}{r^2}|\Psi|^2\right], \end{align} \begin{align} \underline{F}_{\underline{\mathscr{N}}}[\Psi]=\int_{\underline{\mathscr{N}}}\sin\theta d\theta d\phi du\Omega^2 \left[|\Omega^{-1}\slashed\nabla_3\Psi|^2+|\slashed\nabla\Psi|^2+\frac{1}{r^2}|\Psi|^2\right], \end{align} \begin{align} \mathbb{F}_{\Sigma^*}[\Psi]=\int_{\Sigma^*} \sin\theta dr d\theta d\phi\left[|\slashed{\nabla}_{T^*}\Psi|^2+|\slashed{\nabla}_R\Psi|^2+\frac{1}{r^2}|\Psi|^2+|\slashed{\nabla}\Psi|^2\right], \end{align} and their higher order versions \begin{align} F_{\mathscr{N}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n}F_{\mathscr{N}}[T^i(r\slashed{\nabla})^\alpha \Psi], \end{align} \begin{align} \underline{F}_{\underline{\mathscr{N}}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n} \underline{F}_{\underline{\mathscr{N}}}[T^i(r\slashed{\nabla})^\alpha \Psi], \end{align} \begin{align} F^{n}_{\mathscr{N}}[\Psi]=\sum_{i+j+|\alpha|\leq n} F_{\mathscr{N}}[(\Omega^{-1}\slashed{\nabla}_3 )^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi], \end{align} \begin{align} \underline{F}^{n}_{\underline{\mathscr{N}}}[\Psi]=\sum_{i+j+|\alpha|\leq n} \underline{F}_{\underline{\mathscr{N}}}[(\Omega^{-1}\slashed{\nabla}_3 )^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi], \end{align} \begin{align} \mathbb{F}^{n,T,\slashed{\nabla}}_{\Sigma^*}[\Psi]=\sum_{i+|\alpha|\leq n} \mathbb{F}_{\Sigma^*}[T^i(r\slashed{\nabla})^\alpha \Psi], \end{align} \begin{align} \mathbb{F}^{n}_{\Sigma^*}[\Psi]=\sum_{i_1+i_2+|\alpha|\leq n}\mathbb{F}_{\Sigma^*}\left[\slashed{\nabla}_T^{i_1}\left(\Omega^{-1}\slashed{\nabla}_3\right)^{i_2}(r\slashed{\nabla})^\alpha\Psi\right].
\end{align} \end{defin} \begin{proposition}\label{RWredshift} Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}. Then we have \begin{align} F_u[\Psi](v_0,\infty)+\underline{F}_v[\Psi](u_0,\infty)\lesssim \mathbb{F}_{\Sigma^*}[\Psi]. \end{align} Similar statements hold for $F^{n,T,\slashed{\nabla}}_u[\Psi](v_0,v), \underline{F}^{n,T,\slashed{\nabla}}_v[\Psi](u_0,u), F^n_u[\Psi](v_0,v)$ and $\underline{F}^n_v[\Psi](u_0,u)$. \end{proposition} \noindent $\bullet$ \emph{\textbf{Integrated local energy decay}} We have the following Morawetz-type integrated decay estimate: \begin{proposition}\label{RWILED} Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}, let $\mathscr{D}_{\Sigma^*}^{u,v}= J^+(\Sigma^*)\cap J^-(\mathscr{C}_u\cup\underline{\mathscr{C}}_v)$ and define \begin{align}\label{RWILEDestimate} \begin{split} \mathbb{I}_{deg}^{u,v}[\Psi]= \int_{\mathscr{D}_{\Sigma^*}^{u,v}}d\bar{u}d\bar{v} \sin\theta &d\theta d\phi \Omega^2 \Bigg[\frac{1}{r^2}|\slashed{\nabla}_{R^*}\Psi|^2+\frac{1}{r^3}|\Psi|^2\\ &+\frac{1}{r}\left(1-\frac{3M}{r}\right)^2\left(|\slashed{\nabla}\Psi|^2+\frac{1}{r^2}|\Omega\slashed{\nabla}_4\Psi|^2+\frac{\Omega^2}{r^2}|\Omega^{-1}\slashed{\nabla}_3\Psi|^2\right)\Bigg]. \end{split} \end{align} Then we have \begin{align*} \begin{split} \mathbb{I}_{deg}^{u,v}[\Psi]\lesssim \mathbb{F}_{\Sigma^*}[\Psi]. \end{split} \end{align*} A similar statement holds for \begin{align} \mathbb{I}_{deg}^{u,v,n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n} \mathbb{I}_{deg}^{u,v}[T^i(r\slashed{\nabla})^\alpha \Psi] \end{align} and \begin{align} \mathbb{I}_{deg}^{u,v,n}[\Psi]=\sum_{i+j+|\alpha|\leq n}\mathbb{I}_{deg}^{u,v}[(\Omega^{-1}\slashed{\nabla}_3)^i(r\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi].
\end{align} \end{proposition} \noindent $\bullet$ \emph{\textbf{$r^p$-hierarchy of estimates near $\mathscr{I}^+$}} If we multiply (\ref{RW}) by $r^p\Omega^{-2k}\Omega\slashed{\nabla}_4\Psi$ and integrate by parts on $S^2$ we obtain the following identity: \begin{align} \begin{split} &\Omega\slashed\nabla_3\left[r^p\Omega^{-2k}|\Omega\slashed\nabla_4\Psi|^2\right]+\Omega\slashed\nabla_4\left[r^p\Omega^{-2k}(\Omega^2|\slashed\nabla\Psi|^2+V|\Psi|^2)\right]\\ &+r^{p-1}\Omega^{-2k}\Bigg\{(p+kx)|\Omega\slashed\nabla_4\Psi|^2-\left[\frac{4\Omega^2}{r^2}+V(p-3+x(k-1))\right]\Omega^2|\Psi|^2 \\&\qquad\qquad\qquad-(p-2+x(k-1))\Omega^4|\slashed\nabla\Psi|^2\Bigg\}\stackrel{S^2}{\equiv}0. \end{split} \end{align} We can ensure that the bulk term is non-negative by taking $p=0,k=0$ or $p=2, 1\leq k\leq2$ or $p\in(0,2)$ and restricting to large enough $r$. Integrating in a region $\mathscr{D}^{u,v}_{\Sigma^*}\cap \{r>R\}$ yields the following (after averaging in $R$ and using \Cref{RWILED} to deal with the timelike boundary term): \begin{proposition}\label{RWrp} Let $\Psi$ be a solution to (\ref{RW}) arising from data as in \Cref{RWwpCauchy}, and define \begin{align} {\mathbb{I}_p}_{u_0,v_0}^{u,v}[\Psi]=\int_{\mathscr{D}_{\Sigma^*}^{u,v}\cap\{r>R\}} dudv\sin\theta d\theta d\phi r^{p-1}\left[p|\Omega\slashed{\nabla}_4\Psi|^2+(2-p)|\slashed{\nabla}\Psi|^2+{r^{-2}}|\Psi|^2\right]. \end{align} Then we have, for $p\in [0,2]$, \begin{align}\label{RW rp estimate} \begin{split} \int_{\mathscr{C}_u\cap\{r>R\}} dv \sin\theta d\theta d\phi r^p |\Omega\slashed{\nabla}_4 \Psi|^2+ {\mathbb{I}_p}_{u_0,v_0}^{u,v}[\Psi]\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap\{r>R\}} r^p|\Omega\slashed{\nabla}_4\Psi|^2 dr\sin\theta d\theta d\phi.
\end{split} \end{align} A similar statement holds for \begin{align} {{\mathbb{I}_p}_{u_0,v_0}^{u,v}}^{n,T,\slashed{\nabla}}[\Psi]=\sum_{i+|\alpha|\leq n}{\mathbb{I}_p}_{u_0,v_0}^{u,v}[T^i (r\slashed{\nabla})^\alpha\Psi] \end{align} and for \begin{align} {{\mathbb{I}_p}_{u_0,v_0}^{u,v}}^{n,k}[\Psi]=\sum_{i+j+|\alpha|\leq n}{\mathbb{I}_p}_{u_0,v_0}^{u,v}[(\Omega^{-1}\slashed{\nabla}_3)^i(r^k\Omega\slashed{\nabla}_4)^j(r\slashed{\nabla})^\alpha\Psi] \end{align} if $0\leq k\leq2$. \end{proposition} We sketch how to establish higher order versions of the estimates of \Cref{RWrp}. Commuting with $r^h\Omega\slashed{\nabla}_4$ for $0\leq h \leq 2$ or $r\slashed{\nabla}$ produces terms with favourable signs, and we can close the argument by appealing to Hardy and Poincar\'e estimates. Consider for example $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi:=\Phi^{(1)}$, which satisfies \begin{align}\label{Phi 1 transport equation} \begin{split} \Omega\slashed{\nabla}_3 \Phi^{(1)}+\frac{3\Omega^2-1}{r}\Phi^{(1)}=\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi. \end{split} \end{align} Applying $\Omega\slashed{\nabla}_4$ and using \bref{RW} we obtain \begin{align}\label{Phi 1 wave equation} \Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4\Phi^{(1)}+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4\Phi^{(1)}-\frac{\Omega^2}{r^2}(3\Omega^2-5)\Phi^{(1)}-\Omega^2\slashed\Delta\Phi^{(1)}=-6M\frac{\Omega^2}{r^2}\Psi.
\end{align} We see that the new $\Omega\slashed{\nabla}_4\Phi^{(1)}$ term has a good sign, so that when we multiply by $r^p\Omega^{-2k}\Omega\slashed{\nabla}_4\Phi^{(1)}$, integrate by parts over $S^2$ and use Cauchy--Schwarz we get: \begin{align} \begin{split} &\Omega\slashed{\nabla}_3\left[r^p\Omega^{-2k}|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2\right]+\Omega\slashed{\nabla}_4\left[r^p\Omega^{-2k}\left(\Omega^2|{\slashed{\nabla}}\Phi^{(1)}|^2+(5-3\Omega^2)\frac{\Omega^2}{r^2}|\Phi^{(1)}|^2\right)\right]+{r^{p-1}}{\Omega^{-2(k-1)}}\times\\ &\Bigg\{(p+4+x(k+2)-\epsilon)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(p-2+x(k-1))\Omega^2|\slashed{\nabla}\Phi^{(1)}|^2+\left[\frac{6M}{r}+(5-3\Omega^2)(p-1+x(k-1))\right]\frac{\Omega^2}{r^2}|\Phi^{(1)}|^2\Bigg\}\\ &\stackrel{S^2}{\lesssim} r^{p-3}\Omega^{2(k-1)}|\Psi|^2, \end{split} \end{align} where $\epsilon>0$ is sufficiently small. Integrating over $\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}$ for large enough $R$ and using \Cref{RWrp} for $p\in[0,2]$ we get (using $d\omega=\sin\theta d\theta d\phi$): \begin{align} \begin{split} &\int_{{\mathscr{C}}_u\cap\{r>R\}}d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\;r^{p-1}\left[(p+4)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(2-p)|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right] \\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\; |\mathring{\slashed{\nabla}}\Phi^{(1)}|^2+2|\Phi^{(1)}|^2\lesssim \int_{\Sigma^*\cap\{r>R\}}drd\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{r=R}dtd\omega\; r^p\left[|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right]\\&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}}d\bar{u}d\bar{v}d\omega\;r^{p-3}|\Psi|^2. \end{split} \end{align} We control the second term by averaging in $R$ and appealing to \Cref{RWILED} commuted with $\Omega\slashed{\nabla}_4$, and we deal with the last term using the lower order estimate for $\Psi$ from \Cref{RWrp}.
Thus \begin{align}\label{RWrp k=1} \begin{split} &\int_{{\mathscr{C}}_u\cap\{r>R\}}d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\;r^{p-1}\left[(p+4)|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+(2-p)|\slashed{\nabla}\Phi^{(1)}|^2+r^{-2}|\Phi^{(1)}|^2\right] \\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\; |\mathring{\slashed{\nabla}}\Phi^{(1)}|^2+|\Phi^{(1)}|^2 \lesssim \int_{\Sigma^*\cap\{r>R\}}d\bar{v}d\omega\; r^p\left[|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+|\Omega\slashed{\nabla}_4\Psi|^2\right]+\mathbb{F}^1_{\Sigma^*}[\Psi]. \end{split} \end{align} We can do this again for $\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\Psi:=\Phi^{(2)}$ and get a similar estimate following the same steps: \begin{align}\label{RWrp k=2} \begin{split} &\int_{{\mathscr{C}}_u\cap\{r>R\}} d\bar{v}d\omega\; r^p|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}\cap\{r>R\}} d\bar{u}d\bar{v}d\omega\; r^{p-1}\left[(p+8)|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+(2-p)|\slashed{\nabla}\Phi^{(2)}|^2+r^{-2}|\Phi^{(2)}|^2\right] \\ &+\int_{\mathscr{I}^+\cap\{\bar{u}\in[u_0,u]\}}d\bar{u}d\omega\;|\mathring{\slashed{\nabla}}\Phi^{(2)}|^2-|\Phi^{(2)}|^2\lesssim \int_{\Sigma^*\cap\{r>R\}}d\bar{v}d\omega\; r^p\left[|\Omega\slashed{\nabla}_4\Phi^{(2)}|^2+|\Omega\slashed{\nabla}_4\Phi^{(1)}|^2+|\Omega\slashed{\nabla}_4\Psi|^2\right]+\mathbb{F}^2_{\Sigma^*}[\Psi]. \end{split} \end{align} Note that the integral on $\mathscr{I}^+$ on the left-hand side is positive by Poincar\'e's inequality. See \cite{AAG16a}, \cite{AAG16b}, \cite{Mos18} for more about this method, applied to the scalar wave equation. 
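For concreteness, the Poincar\'e step just invoked for the $\mathscr{I}^+$ integral in \bref{RWrp k=2} can be sketched as follows. We write the computation for a scalar function $f$ on $S^2$ for simplicity, and we assume (as is standard in this setting) that $f$ is supported on spherical harmonics with $\ell\geq2$; the notation $f_{\ell m}$, $Y_{\ell m}$ below is introduced only for this illustration.

```latex
% Expanding f = \sum_{\ell\geq2,m} f_{\ell m} Y_{\ell m} and using
% \mathring{\slashed{\Delta}} Y_{\ell m} = -\ell(\ell+1) Y_{\ell m},
% integration by parts on S^2 gives
\int_{S^2} |\mathring{\slashed{\nabla}} f|^2\, d\omega
  = \sum_{\ell\geq2,\,m} \ell(\ell+1)\, |f_{\ell m}|^2
  \geq 6 \sum_{\ell\geq2,\,m} |f_{\ell m}|^2
  = 6 \int_{S^2} |f|^2\, d\omega,
% so that
\int_{S^2} |\mathring{\slashed{\nabla}} f|^2 - |f|^2\, d\omega
  \;\geq\; 5 \int_{S^2} |f|^2\, d\omega \;\geq\; 0.
```

In particular, the flux term on $\mathscr{I}^+$ carries a good sign and can be dropped from the estimate when only the remaining terms are needed.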
\subsection{Radiation fields}\label{subsection 5.2 subsection Radiation fields} In this section we establish the properties of the future radiation fields of solutions that arise from smooth, compactly supported data on $\Sigma^*$. \subsubsection{Radiation on $\mathscr{H}^+$}\label{subsubsection 5.2.1 radiation on H+} \begin{defin}\label{RWonH} Let $\Psi$ be a solution to (\ref{RW}) arising from smooth data $(\uppsi,\uppsi')$ on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$ as in \Cref{RWwpCauchy}. The radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ is defined to be the restriction of $\Psi$ to $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}}^+$ respectively. \end{defin} \begin{remark} We will be using the same notation for the radiation field on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$. \end{remark} As a corollary to \Cref{RWwpCauchy} we have \begin{corollary} The radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ as in \Cref{RWonH} is smooth on $\mathscr{H}^+_{\geq0}$. The same applies to $(\Omega^{-1}\slashed{\nabla}_3)^k\Psi$ for arbitrary $k$. \end{corollary} The integrated local energy decay statement of \Cref{RWILED} gives a quick way to show slow decay for $\bm{\uppsi}_{\mathscr{H}^+}$ and its derivatives: \begin{proposition}\label{RWdecayfixedR} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, $\left|\Psi|_{\{r=R\}}\right|$ decays as $t\longrightarrow\infty$. \end{proposition} \begin{proof} Commuting \bref{RWILEDestimate} with $\mathcal{L}_T$ twice and using the redshift estimate of \Cref{RWredshift} give us for any $R<\infty$ \begin{align} \int_{v_0}^\infty d\bar{v}\;\left[ \underline{F}_{\underline{\mathscr{C}}_v\cap\{r<R\}}[\Psi]+ \underline{F}_{\underline{\mathscr{C}}_v\cap\{r<R\}}[\slashed{\nabla}_T\Psi]\right]<\infty. 
\end{align} This in turn implies energy decay in a neighborhood of $\mathscr{H}^+$: \begin{align*} \lim_{v\longrightarrow\infty} \underline{F}_v[\Psi](u_{R},\infty)=0, \end{align*} where $v-u_R=R^*$. Commuting with $\Omega^{-1} e_3$ and ${\mathcal{L}_{\Omega_i}}$ and using \Cref{RWredshift} again gives \begin{align*} \lim_{v\longrightarrow \infty} \sup_{u\in[u_R,\infty]} |\Psi|_{v}=0. \end{align*} \end{proof} \begin{remark} The preceding argument works to show that $(\Omega^{-1}\slashed{\nabla}_3)^k\Psi$ decays on any hypersurface $r=R$. See also Section 8.2 of \cite{DRSR14}. \end{remark} \begin{proposition} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, the energy flux on $\mathscr{H}^+$ is equal to \begin{align*} F^T_{\mathscr{H}^+}=\int_{\mathscr{H}^+} |\partial_v\Psi|^2 dv \sin\theta d\theta d\phi. \end{align*} \end{proposition} \begin{proof} This follows from the regularity of $\Psi$ and its angular derivatives on $\mathscr{H}^+$ together with energy conservation. \end{proof} \subsubsection{Radiation on $\mathscr{I}^+$}\label{subsubsection 5.2.2 radiation field on I+} An $r^p$-estimate like \Cref{RWrp} implies the existence of a radiation field on $\mathscr{I}^+$ as a ``soft'' corollary. \begin{proposition}\label{RWradscri} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, \begin{align} \bm{\uppsi}_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty}\Psi(u,v,\theta^A) \end{align} exists and belongs to $\Gamma(\mathscr{I}^+)$. Moreover, \begin{align}\label{RW limit of energy at null infinity} \lim_{v\longrightarrow \infty}\int_{\mathscr{C}_v\cap\{u\in[u_0,u_1]\}}dud\omega\; |\Omega\slashed{\nabla}_3\Psi|^2+\Omega^2|\slashed{\nabla}\Psi|^2+V|\Psi|^2=\int_{\mathscr{I}^+\cap \{u\in[u_0,u_1]\}}dud\omega\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2. \end{align} \end{proposition} \begin{proof} Let $r_2>r_1>8M$, fix $u$ and set $v(r_2,u)\equiv v_2, v(r_1,u)\equiv v_1$. 
The Sobolev embedding on the sphere $W^{3,1}(S^2)\hookrightarrow L^\infty(S^2)$ and the fundamental theorem of calculus give us: \begin{align}\label{first order} \begin{split} |\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi)|^2\leq& B\left[\sum_{|\gamma|\leq 3} \int_{S^2}d\omega\; |\slashed{\mathcal{L}}^\gamma_{S^2} (\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi))|\right]^2\\ &= B\left[\sum_{|\gamma|\leq 3} \int_{S^2}d\omega\int_{v_1}^{v_2}dv |\slashed{\mathcal{L}}^\gamma_{S^2} \Omega\slashed\nabla_4\Psi|\right]^2. \end{split} \end{align} Cauchy--Schwarz gives: \begin{align} |\Psi(u,v_2,\theta,\phi)-\Psi(u,v_1,\theta,\phi)|^2 \leq \frac{B}{r_1}\Bigg[\sum_{|\gamma|\leq 3} \int_{\mathscr{C}_u\cap\{v>v_1\}}dvd\omega\; r^2|\slashed{\mathcal{L}}^\gamma_{S^2}\Omega\slashed{\nabla}_4\Psi|^2\Bigg], \end{align} where $\slashed{\mathcal{L}}^\gamma_{S^2}=\mathcal{L}_{\Omega_1}^{\gamma_1}\mathcal{L}_{\Omega_2}^{\gamma_2}\mathcal{L}_{\Omega_3}^{\gamma_3}$ denotes Lie differentiation on $S^2$ with respect to its $so(3)$ algebra of Killing fields. This says that $\Psi(u,v,\theta,\phi)$ converges in $L^\infty(\mathscr{I}^+\cap\{u>u_0\})$ for some $u_0>-\infty$ as $v\longrightarrow\infty$. Using higher order $r^p$-estimates we can repeat this argument to show \begin{align}\label{second order} \left|\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi(u,v_2,\theta,\phi)-\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi(u,v_1,\theta,\phi)\right|^2\lesssim \frac{1}{r_1}\Bigg[\sum_{|\gamma|\leq 3} \int_{\mathscr{C}_u\cap\{v>v_1\}} \left|r^2\slashed{\mathcal{L}}^\gamma_{S^2}\Omega\slashed{\nabla}_4\left(r^2\Omega\slashed{\nabla}_4\right)\Psi\right|^2dv\sin \theta d\theta d\phi\Bigg]. \end{align} Commuting \bref{first order} with $T$ and $\Omega^i$ and using \bref{second order} gives that $\Psi|_{\mathscr{I}^+}=\lim_{v\longrightarrow \infty} \Psi(u,v,\theta,\phi)$ is differentiable on $\mathscr{I}^+$. 
We can repeat this argument with higher order $r^p$-estimates to find that $\bm{\uppsi}_{\mathscr{I}^+}$ is smooth and $\lim_{v\longrightarrow\infty}\Omega\slashed{\nabla}_3^i(r\slashed{\nabla})^\gamma \Psi=\partial_u^i\mathring{\slashed{\nabla}}{}^\gamma\bm{\uppsi}_{\mathscr{I}^+}$ for any index $i$ and multiindex $\gamma$. \Cref{RW limit of energy at null infinity} follows immediately. \end{proof} In the following, define $\Phi^{(k)}:=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^k\Psi$. \begin{corollary}\label{RW transverse derivatives converge} Under the assumptions of \Cref{RWradscri}, the limit \begin{align} \bm{\upphi}^{(k)}_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty}\Phi^{(k)}(u,v,\theta^A) \end{align} exists and defines an element of $\Gamma(\mathscr{I}^+)$. \end{corollary} \begin{proof} Let $R,u_0$ be such that $\Psi$ vanishes on $\mathscr{C}_{u}\cap\{r>R\}$ for $u\leq u_0$. We can integrate \cref{RW} from a point $(u_0,v,\theta^A)$ to $(u,v,\theta^A)$ where $r(u_0,v)>R$ to find \begin{align} \Phi^{(1)}(u,v,\theta^A)=\frac{r^2}{\Omega^2}(u,v)\int_{u_0}^u d\bar{u}\,\frac{\Omega^2}{r^2}\left[\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi\right]. \end{align} The right hand side converges as $v\longrightarrow\infty$ by \Cref{RWradscri} and Lebesgue's bounded convergence theorem. An inductive argument works to show the same for higher order derivatives. \end{proof} \begin{defin}\label{RW future rad field scri} Let $\Psi$ be a solution to \cref{RW} arising from smooth data of compact support on $\Sigma^*, \Sigma$ or $\overline{\Sigma}$. The future radiation field on $\mathscr{I}^+$ is defined to be the limit of $\Psi$ towards $\mathscr{I}^+$ \begin{align*} \bm{\uppsi}_{\mathscr{I}^+}(u,\theta,\phi)=\lim_{v\longrightarrow\infty}\Psi(u,v,\theta,\phi). \end{align*} \end{defin} \begin{remark} Note that a solution $\Psi$ arising from compactly supported data on $\overline{\Sigma}$ necessarily corresponds to compactly supported data on $\Sigma^*$. 
\end{remark} \noindent The $r^p$-estimates of \Cref{RWrp} further imply that $\bm{\uppsi}_{\mathscr{I}^+}$ decays as $u\longrightarrow\infty$: \begin{proposition}\label{RWdecayscri} Let $\Psi,\bm{\uppsi}_{\mathscr{I}^+}$ be as in \Cref{RWradscri}. Then $\bm{\uppsi}_{\mathscr{I}^+}$ decays along $\mathscr{I}^+$. \end{proposition} \begin{proof} The fundamental theorem of calculus, Cauchy--Schwarz and a Hardy estimate give us: \begin{align} \begin{split} \int_{S^2_{u,\infty}}|\bm{\uppsi}_{\mathscr{I}^+}|^2\leq&\int_{S^2_{u,v(r=R)}}|\Psi_{r=R}|^2+\int_{\mathscr{C}_u}\frac{1}{r^2}|\Psi|^2\times \int_{\mathscr{C}_u}r^2|\Omega\slashed{\nabla}_4\Psi|^2\\ \lesssim&\int_{S^2_{u,v(r=R)}}|\Psi_{r=R}|^2+\int_{\mathscr{C}_u}|\Omega\slashed{\nabla}_4\Psi|^2\times \int_{\mathscr{C}_u}r^2|\Omega\slashed{\nabla}_4\Psi|^2. \end{split} \end{align} \Cref{RWrp} applied to $\Psi$ and $\slashed{\nabla}_T\Psi$ implies the decay of $\int_{\mathscr{C}_u\cap\{r>R\}}|\Omega\slashed{\nabla}_4\Psi|^2$ and the boundedness of $\int_{\mathscr{C}_u\cap\{r>R\}}r^2|\Omega\slashed{\nabla}_4\Psi|^2$, and the result follows considering \Cref{RWdecayfixedR}. \end{proof} We can in fact compute $\bm{\upphi}_{\mathscr{I}^+}^{(k)}$ out of $\bm{\uppsi}_{\mathscr{I}^+}$ for $k=1,2$: \begin{corollary}\label{Phi 1 forward} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, we have \begin{align} \bm{\upphi}^{(1)}_{\mathscr{I}^+}(u,\theta^A)=-\int_u^\infty d\bar{u}\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A). \end{align} \end{corollary} \begin{proof} Let $-\infty<u_1<u_2<\infty$, $v$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$. 
We integrate \cref{Phi 1 transport equation} on $\underline{\mathscr{C}}_v$ between $u_1,u_2$ and use the fact that $\Phi^{(1)}$ has a finite limit $\bm{\upphi}^{(1)}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$ to get \begin{align} \bm{\upphi}^{(1)}_{\mathscr{I}^+}(u_1,\theta^A)-\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u_2,\theta^A)=-\int_{u_1}^{u_2}d\bar{u}\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A). \end{align} Since $\bm{\upphi}^{(1)}_{\mathscr{I}^+}$ is uniformly bounded, we have that $\left[\mathcal{A}_2-2\right]\bm{\uppsi}_{\mathscr{I}^+}$ is integrable over $\mathscr{I}^+$. The result follows since $\bm{\upphi}^{(1)}_{\mathscr{I}^+}(u,\theta^A)$ decays as $u\longrightarrow\infty$. \end{proof} \begin{lemma} If $\Psi$ satisfies \bref{RW} then \begin{align}\label{eq:191} \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=\left[\mathcal{A}_2(\mathcal{A}_2-2)-12M\slashed{\nabla}_T\right]\Psi. \end{align} \end{lemma} \begin{proof} Straightforward computation using \cref{RW}. \end{proof} Following the same steps as in the proof of \Cref{Phi 1 forward} we find \begin{corollary}\label{Phi 2 forward} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, $\bm{\upphi}^{(2)}_{\mathscr{I}^+}(u,\theta^A)$ satisfies \begin{align} \bm{\upphi}^{(2)}_{\mathscr{I}^+}(u,\theta^A)=\int_{u}^\infty\int_{u_1}^\infty du_1 du_2 \left[\mathcal{A}_2(\mathcal{A}_2-2)-12M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A). \end{align} \end{corollary} \begin{corollary} For a solution $\Psi$ to \cref{RW} arising from smooth data of compact support on $\Sigma^*$, the radiation field $\bm{\uppsi}_{\mathscr{I}^+}$ satisfies \begin{align} \int_{-\infty}^\infty du_1\, \bm{\uppsi}_{\mathscr{I}^+}(u_1,\theta^A)= \int_{-\infty}^\infty \int_{u_1}^\infty du_1 du_2\, \bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A)=0. 
\end{align} \end{corollary} \subsection{The forwards scattering map}\label{subsection 5.3 the forwards scattering map} This section combines the results of \Cref{subsection 5.2 subsection Radiation fields} above to prove \Cref{forwardRW}. \begin{proposition}\label{RWfcp} Solutions to (\ref{RW}) arising from smooth data on $\Sigma^*$ of compact support give rise to smooth radiation fields $\uppsi_{\mathscr{I}^+}\in\mathcal{E}_{\mathscr{I}^+}^{T}$ on $\mathscr{I}^+$ and $\uppsi_{\mathscr{H}^+}\in\mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}$ on $\mathscr{H}^+_{\geq0}$, such that \begin{align}\label{818181} ||\bm{\uppsi}_{\mathscr{I}^+}||_{\mathcal{E}^T_{\mathscr{I}^+}}^2+||\bm{\uppsi}_{\mathscr{H}^+}||_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}^2=||(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}) ||_{\mathcal{E}^T_{\Sigma^*}}^2 . \end{align} \end{proposition} \begin{proof} For data of compact support, Propositions \ref{RWwpCauchy} and \ref{RWradscri} give us the existence of smooth radiation fields $\bm{\uppsi}_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{H}^+}$, and by Propositions \ref{RWdecayfixedR}, \ref{RWdecayscri}, $\bm{\uppsi}_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$ and $\bm{\uppsi}_{\mathscr{H}^+}$ decays towards $\mathscr{H}^+$. Let $R$ be sufficiently large and let $v_+,u_+$ be such that $v_+-u_+=R^*$, $v_++u_+>0$. A $T$-energy estimate on the region bounded by $\Sigma^*$, $\mathscr{H}^+_{\geq0}\cap\{v\leq v_+\}$, $\mathscr{I}^+\cap\{u\leq u_+\}$ and $\mathscr{C}_{u_+}\cap\{r\geq R\}, \underline{\mathscr{C}}_{v_+}\cap\{r\leq R\}$ gives \begin{align} \underline{F}^T_{v_+}[\Psi](u_+,\infty)+F^T_{u_+}[\Psi](v_+,\infty)+ \int_{\mathscr{H}^+_{\geq0}\cap \{v\leq v_+\}}dvd\omega\;|\partial_v\Psi|^2+\int_{\mathscr{I}^+\cap\{u\leq u_+\}}dud\omega\;|\partial_u\Psi|^2=||\Psi||_{\mathcal{E}^T_{\Sigma^*}}^2. 
\end{align} The integrated local energy decay statement of \Cref{RWILED} commuted with $\slashed{\nabla}_T$, along with the estimate \bref{RW rp estimate} of \Cref{RWrp} for $p=1$ commuted with $\slashed{\nabla}_T$, imply that $\underline{F}^T_{v_+}[\Psi](u_+,\infty)+F^T_{u_+}[\Psi](v_+,\infty)$ decay as $u_+\longrightarrow\infty$. This gives us that $\bm{\uppsi}_{\mathscr{I}^+}\in\mathcal{E}^T_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{H}^+}\in\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}$ and that $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ satisfy \bref{818181}. \end{proof} \begin{corollary}\label{RWfcpSigma} Solutions to (\ref{RW}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T}$. Solutions to (\ref{RW}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T}$. \end{corollary} \begin{proof} The evolution of $\Psi$ on $ J^+({\Sigma^*})\cap J^-(\Sigma)$ can be handled locally. A $T$-energy estimate on $ J^+({\Sigma})\cap J^-(\Sigma^*)$ gives the result. An identical statement applies to $\overline{\Sigma}$. \end{proof} \Cref{RWfcp,,RWfcpSigma} allow us to define the forwards maps $\mathscr{F}^+$ from dense subspaces of $\mathcal{E}^{T}_{\Sigma^*}$, $\mathcal{E}^{T}_{\Sigma}$, $\mathcal{E}^{T}_{\overline{\Sigma}}$. \begin{defin} Let $(\uppsi,\uppsi')$ be a smooth data set to the Regge--Wheeler equation \bref{RW} on $\Sigma^*$ as in \Cref{RWwpCauchy}. 
Define the map $\mathscr{F}^+$ by \begin{align} \mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), (\uppsi,\uppsi')\longrightarrow (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}), \end{align} where $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})$ are as in the proof of \Cref{RWfcp}.\\ The map $\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$: \begin{align} \mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), (\uppsi,\uppsi')\longrightarrow (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}),\\ \mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), (\uppsi,\uppsi')\longrightarrow (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+}). \end{align} \end{defin} The map $\mathscr{F}^+$ uniquely extends to the forward scattering map of \Cref{RWforwardmap}: \begin{corollary} \label{RWforwardmap} The map defined by the forward evolution of data in $\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$ as in \Cref{RWfcp} uniquely extends to a map \begin{align} \mathscr{F}^{+}: \mathcal{E}^{T}_{\Sigma^*} \longrightarrow \mathcal{E}_{\mathscr{H}^+_{\geq0}}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T}, \end{align} which is bounded: \begin{align} ||(\uppsi,\uppsi')||_{\mathcal{E}^{T}_{\Sigma^*}}^2=||\bm{\uppsi}_{\mathscr{H}^+}||_{\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}}^2+||\bm{\uppsi}_{\mathscr{I}^+}||_{\mathcal{E}^{T}_{\mathscr{I}^+}}^2 . \end{align} We similarly obtain bounded maps \begin{align} \mathscr{F}^{+}: \mathcal{E}^{T}_{\Sigma} \longrightarrow \mathcal{E}_{\mathscr{H}^+}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T},\\ \mathscr{F}^{+}: \mathcal{E}^{T}_{\overline{\Sigma}} \longrightarrow \mathcal{E}_{\overline{\mathscr{H}^+}}^{T}\oplus \mathcal{E}_{\mathscr{I}^+}^{T}. 
\end{align} The map $\mathscr{F}^+$ is injective on $\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)$ and therefore extends to a unitary Hilbert-space isomorphism on its image. \end{corollary} \subsection{The backwards scattering map}\label{subsubsection 5.4 the backwards scattering map} This section contains the proof of \Cref{backwardRW,,RW isomorphisms}. We define backwards evolution from data on the event horizon and null infinity in \Cref{RWbackwardsexistence}, and this defines the map $\mathscr{B}^-$ which inverts $\mathscr{F}^+$. \Cref{RW isomorphisms} follows immediately by \Cref{time inversion of RW}.\\ \indent We begin by constructing a solution to the equation on $ J^-(\mathscr{I}^+\cup\mathscr{H}^+_{\geq0})$ out of compactly supported future scattering data. \begin{proposition}\label{RWbackwardsexistence} Let $\bm{\uppsi}_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})$ be supported on $v<v_+<\infty$ such that $\|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}<\infty$, $\bm{\uppsi}_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$ be supported on $u<u_+<\infty$ such that $\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}<\infty$. Then there exists a unique smooth $\Psi$ defined on $ J^+(\Sigma^*)$ that satisfies \cref{RW} and realises $\bm{\uppsi}_{\mathscr{I}^+}$, $\bm{\uppsi}_{\mathscr{H}^+}$ as its radiation fields. Moreover, $(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\in \mathcal{E}^T_{\Sigma^*}$. \end{proposition} \begin{proof} Assume $\bm{\uppsi}_{\mathscr{H}^+}$ is supported on $\{(v,\theta^A), v\in[v_-,v_+]\}\subset\mathscr{H}^+_{\geq0}$ and $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $[u_-,u_+]$, with $-\infty<u_-,u_+,v_-,v_+<\infty$. Let $\widetilde{\Sigma}$ be a spacelike surface connecting $\mathscr{H}^+$ at a finite $v_*>v_+$ to $\mathscr{I}^+$ at a finite $u_*>u_+$. 
Fix $\mathcal{R}_{\mathscr{I}^+}>3M$ and let $v^\infty$ be sufficiently large so that $\underline{\mathscr{C}}_{v^\infty}\cap [u_-,u_+]\subset J^+(\Sigma^*)$ and $r(u,v^\infty)>\mathcal{R}_{\mathscr{I}^+}$ for $u\in[u_-,u_+]$. Denote by $\mathscr{D}$ the region bounded by $\mathscr{H}^+_{\geq 0}\cap\{v\in[v_-,v_*]\}$, $\widetilde{\Sigma}$, $\underline{\mathscr{C}}_{v^\infty}$, $\Sigma^*$ and $\mathscr{C}_{u_-}$. We can find $\Psi$ that solves the ``finite'' backwards problem for \bref{RW} in $\mathscr{D}$ with the following data: \begin{itemize} \item $\bm{\uppsi}_{\mathscr{H}^+}$ on $\mathscr{H}^+\cap\{v\in[v_-,v_+]\}$, \item $(0,0)$ on $\widetilde{\Sigma}$, \item $\bm{\uppsi}_{\mathscr{I}^+}$ on $\underline{\mathscr{C}}_{v^\infty}$. \end{itemize} \noindent From \bref{RW} we derive \begin{align}\label{RW first transverse derivative in the 3 direction} \Omega\slashed{\nabla}_3\left[\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\right]+\frac{3\Omega^2-1}{r}\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2=-\Omega\slashed{\nabla}_4\left[|\mathring{\slashed{\nabla}}\Psi|^2+(3\Omega^2+1)|\Psi|^2\right]+\frac{6M\Omega^2}{r^2}|\Psi|^2. \end{align} Let $\tilde{v}<v^\infty$ be large enough that $r(u,\tilde{v})>\mathcal{R}_{\mathscr{I}^+}$ for $u\in[u_-,u_+]$. 
For $\tilde{v}\leq v<v^\infty$ integrate \bref{RW first transverse derivative in the 3 direction} in the region $\mathscr{D}_{v}=\mathscr{D}\cap J^+(\underline{\mathscr{C}}_v)$ with measure $dudvd\omega$ to derive \begin{align} \begin{split} \int_{\mathscr{C}_u\cap[v,v^\infty]}d\bar{v}d\omega\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\leq &\int_{u}^{u_+}d\bar{u}\int_{\mathscr{C}_{\bar{u}}\cap[v,v^\infty]}d\bar{v}d\omega\frac{2\Omega^2}{r}\frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4\Psi|^2\\&+\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{u_-}^{u_+}d\bar{u}\int_{S^2}d\omega|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|^2_{S^2}+4|\bm{\uppsi}_{\mathscr{I}^+}|^2_{S^2}. \end{split} \end{align} Applying Gr\"onwall's inequality to the above, and noting that $\int_u^{u_+}\frac{2\Omega^2}{r}d\bar{u}=2\log\frac{r(u,v)}{r(u_+,v)}$ since $\partial_u r=-\Omega^2$, gives \begin{align}\label{this} \int_{\mathscr{C}_u\cap[v,v^\infty]}d\bar{v}d\omega\; r^2|\Omega\slashed{\nabla}_4\Psi|^2\leq\frac{r(u,v)^2}{r(u_+,v)^2}\left[\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{[u_-,u_+]\times S^2}d\bar{u}d\omega\;|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+4|\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2\right]. \end{align} Using \bref{this} we can modify the argument of \Cref{RWradscri} to conclude that for $v>\tilde{v}$ \begin{align}\label{ptwise infinity} \begin{split} \left|\Psi|_{(u,v)}-\bm{\uppsi}_{\mathscr{I}^+}\right|\;\lesssim_{M,u_-,\mathcal{R}_{\mathscr{I}^+}} \frac{1}{v}\Bigg[\sum_{|\gamma|\leq2}\int_{[u_-,u_+]\times S^2}d\bar{u}d\omega\;&\left[|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+|\mathring{\slashed{\nabla}}\slashed{\mathcal{L}}^\gamma_{\Omega_i}\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2+|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|_{S^2}^2\right]\\&+\|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\Bigg]. 
\end{split} \end{align} Analogously, let $\tilde{u}$ be such that $\mathcal{R}_{\mathscr{H}^+}<r(\tilde{u},v)<3M$ for $v\in[v_-,v_+]$, where $\mathcal{R}_{\mathscr{H}^+}<3M$ is fixed. We can multiply the equation by $\frac{1}{\Omega^2}\Omega\slashed{\nabla}_3\Psi$ and integrate by parts over a region $\mathscr{D}_{u}=\mathscr{D}\cap J^+(\mathscr{C}_u)$ to get \begin{align} \begin{split} \int_{\underline{\mathscr{C}}_v\cap[u,\infty]}dud\omega&\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2+\int_{\mathscr{C}_u\cap[v,v_+]}dvd\omega\left[\frac{1}{r^2}|\slashed{\nabla}\Psi|^2+\frac{1}{r^2}|\Psi|^2\right]+\int_{\mathscr{D}_{u}}\Omega^2dudvd\omega\left[|\mathring{\slashed{\nabla}}\Psi|^2+|\Psi|^2\right]\\&\lesssim \int_{\mathscr{H}^+\cap[v,v_+]}dvd\omega\; \left[|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\bm{\uppsi}_{\mathscr{H}^+}|^2\right]+\int_{v}^{v_+}d\bar{v}\int_{\underline{\mathscr{C}}_{\bar{v}}\cap[u,\infty]}d\bar{u}d\omega\;\frac{2M}{r^2}\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2. \end{split} \end{align} Gr\"onwall's inequality implies, using $r\geq2M$ to bound $\int_{v}^{v_+}\frac{2M}{r^2}d\bar{v}\leq\frac{1}{2M}(v_+-v)$, \begin{align}\label{RW exponential backwards near H+} \begin{split} \int_{\underline{\mathscr{C}}_v\cap[u,\infty]}d\bar{u}d\omega\;\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3\Psi|^2&\lesssim e^{\frac{1}{2M}(v_+-v)}\left\{\int_{\mathscr{H}^+\cap[v,v_+]} \left[|\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\bm{\uppsi}_{\mathscr{H}^+}|^2\right]dvd\omega+\|\Psi\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\right\}. 
\end{split} \end{align} In turn, this implies pointwise control of $\Psi$ near $\mathscr{H}^+$: \begin{align}\label{ptwise horizon} |\Psi(u,v,\theta^A)&-\bm{\uppsi}_{\mathscr{H}^+}(v,\theta^A)|^2\lesssim \int_{u}^\infty e^{\frac{v-\bar{u}}{2M}}d\bar{u}\times \int_{\underline{\mathscr{C}}_v\cap[u,\infty]}dud\omega\sum_{|\gamma|\leq2}\frac{1}{\Omega^2}\left|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\Omega\slashed{\nabla}_3\Psi\right|^2\\&\lesssim_{M} r\Omega^2(u,v_+) \left[\sum_{|\gamma|\leq2}\int_{u_-}^{u_+}d\bar{u}\int_{S^2}d\omega\left[|\slashed{\mathcal{L}}_{\Omega_i}^\gamma\bm{\uppsi}_{\mathscr{I}^+}|^2+|\mathring{\slashed{\nabla}}\slashed{\mathcal{L}}^\gamma_{\Omega_i}\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2\right]+\|\slashed{\mathcal{L}}^\gamma_{\Omega_i}\Psi\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2\right]. \end{align} In the region $\mathscr{D}\setminus (\mathscr{D}_{\tilde{u}}\cap\mathscr{D}_{\tilde{v}})$, $r$ is bounded and energy conservation is sufficient to control $\Psi$ in $L^\infty$. In conclusion, we find that $\Psi$ is controlled in $L^\infty(\mathscr{D})$.\\ \indent Let $\{v_n^\infty\}_{n=0}^\infty$ be a monotonically increasing sequence tending to $\infty$ with $v_0^\infty=v^\infty$ and define $\mathscr{D}_n$ in terms of $v_n^\infty$ analogously to $\mathscr{D}$. Denote $\underline{\mathscr{C}}_n=\underline{\mathscr{C}}_{v_n^\infty}\cap\{u\in[u_-,u_+]\}$. We can repeat the above on the region $\mathscr{D}_n$ with data $\bm{\uppsi}_{\mathscr{I}^+}$ on $\underline{\mathscr{C}}_{n}$ to obtain a sequence $\{\Psi_n\}_{n=0}^\infty$. $\Psi_n$ is bounded uniformly in $n$ in the region $\mathscr{D}_k$ for any $k<n$ and we can show uniform boundedness of the derivatives by commuting $\slashed{\nabla}_T, \slashed{\nabla}_{\Omega_i}$ and using the equation to obtain higher order versions of the estimates above. 
By the Arzel\`a--Ascoli theorem we can extract a convergent subsequence in $C^k(\mathscr{D}_l)$ for any $k,l$ with a limit $\Psi$ that satisfies \bref{RW}. Note that this procedure can be used to uniquely define $\Psi$ everywhere on $ J^-(\widetilde{\Sigma})\cap J^+(\Sigma^*)$. Clearly, $\Psi|_{\mathscr{H}^+}=\bm{\uppsi}_{\mathscr{H}^+}$ and \bref{ptwise infinity} implies $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{I}^+}$ as $v\longrightarrow\infty$. Finally, a $T$-energy estimate implies that \begin{align}\label{subunitarity of B-} \|(\Psi|_{\Sigma^*}, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\|_{\mathcal{E}^T_{\Sigma^*}}^2\leq \|\bm{\uppsi}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2, \end{align} so $(\Psi|_{\Sigma^*}, \slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\in \mathcal{E}^T_{\Sigma^*}$. \end{proof} \begin{defin}\label{RW definition of B-} Let $\uppsi_{\mathscr{H}^+}, \uppsi_{\mathscr{I}^+}$ be as in \Cref{RWbackwardsexistence}. Define the map $\mathscr{B}^-$ by \begin{align} \mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\longrightarrow (\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*}), \end{align} where $\Psi$ is the solution to \bref{RW} arising from scattering data $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})$ as in \Cref{RWbackwardsexistence}. \end{defin} \begin{corollary}\label{B- inverts F+} The maps $\mathscr{F}^+$, $\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that $\mathscr{F}^+\circ\mathscr{B}^-=Id$, $\mathscr{B}^-\circ\mathscr{F}^+=Id$. \end{corollary} \begin{proof} We will prove the statement for the map defined on data on $\Sigma^*$. 
We already know that $\mathscr{F}^+$ is a unitary isomorphism and that $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]\subset\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$. Let $\uppsi_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})$, $\uppsi_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)$. \Cref{RWbackwardsexistence} yields a solution $\Psi$ on $J^+(\Sigma^*)$ to \cref{RW}. Since $\Psi$ realises $\uppsi_{\mathscr{I}^+}$, $\uppsi_{\mathscr{H}^+}$ as its radiation fields as in \Cref{RW future rad field scri,,RWonH} and since $\mathscr{B}^-(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\in\left[\Gamma(\Sigma^*)\times\Gamma(\Sigma^*)\right]\cap\mathcal{E}^T_{\Sigma^*}$ (see \Cref{RW enough to be in space}), we have that $\mathscr{F}^+\circ\mathscr{B}^-=Id$ on $\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)$, which is dense in $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$. Therefore, since $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]$ is complete, we have that $\mathscr{F}^+\left[\mathcal{E}^T_{\Sigma^*}\right]=\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$. The fact that $\mathscr{B}^-$ is bounded means that its unique extension to $\mathcal{E}^{T}_{\mathscr{H}^+_{\geq0}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}$ must be the inverse of $\mathscr{F}^+$ and we have that $\mathscr{B}^-\circ\mathscr{F}^+=Id_{\mathcal{E}^{T}_{\Sigma^*}}$. 
\end{proof} \begin{remark}\label{unitarity of B- is trivial} Note that the proof of \Cref{RWbackwardsexistence} only establishes the boundedness of $\mathscr{B}^-$, but showing that $\mathscr{B}^-$ inverts $\mathscr{F}^+$ as was done in \Cref{B- inverts F+} turns \bref{subunitarity of B-} into an equality: \begin{align}\label{unitarity of B- formula} \|\mathscr{B}^-(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\|_{\mathcal{E}^T_{\Sigma^*}}^2=\|\uppsi_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\uppsi_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T}_{\mathscr{I}^+}}. \end{align} \end{remark} Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{RWwp local statement near B}, \Cref{RWwpSigmabar} and $T$-energy conservation, we can immediately deduce the following: \begin{corollary} The map $\mathscr{B}^-$ can be defined on the following domains: \begin{align} \mathscr{B}^{-}:\mathcal{E}^{T}_{\mathscr{H}^+}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T}_{\Sigma},\\ \mathscr{B}^{-}:\mathcal{E}^{T}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T}_{\overline{\Sigma}}, \end{align} and we have \begin{align} \mathscr{F}^{+}\circ\mathscr{B}^{-}=Id_{\mathcal{E}^T_{\mathscr{H}^+}\oplus\;\mathcal{E}^T_{\mathscr{I}^+}},\qquad \mathscr{B}^{-}\circ\mathscr{F}^{+}=Id_{\mathcal{E}^T_{\Sigma}},\\ \mathscr{F}^{+}\circ\mathscr{B}^{-}=Id_{\mathcal{E}^T_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^T_{\mathscr{I}^+}},\qquad \mathscr{B}^{-}\circ\mathscr{F}^{+}=Id_{\mathcal{E}^T_{\overline{\Sigma}}}. 
\end{align} \end{corollary} We have just completed the proof of \Cref{backwardRW}.\\ \indent Since the Regge--Wheeler equation \bref{RW} is invariant under time inversion, the existence of the maps $\mathscr{F}^-, \mathscr{B}^+$ is immediate: \begin{proposition}\label{RW past scattering} Solutions to (\ref{RW}) arising from smooth data of compact support on $\Sigma$ (or $\overline{\Sigma}$) give rise to smooth radiation fields $\uppsi_{\mathscr{I}^-}\in\mathcal{E}_{\mathscr{I}^-}^{T}$ on $\mathscr{I}^-$ and $\uppsi_{\mathscr{H}^-}\in\mathcal{E}_{\mathscr{H}^-}^{T}$ (or $\mathcal{E}_{\overline{\mathscr{H}^-}}^{T}$) on $\mathscr{H}^-$ (or $\overline{\mathscr{H}^-}$), such that \begin{align}\label{919191} ||\bm{\uppsi}_{\mathscr{I}^-}||_{\mathcal{E}^T_{\mathscr{I}^-}}^2+||\bm{\uppsi}_{\mathscr{H}^-}||_{\mathcal{E}^T_{\mathscr{H}^-}}^2=||(\Psi|_{\Sigma},\slashed{\nabla}_{n_{\Sigma}}\Psi|_{\Sigma}) ||_{\mathcal{E}^T_{\Sigma}}^2.\\ ||\bm{\uppsi}_{\mathscr{I}^-}||_{\mathcal{E}^T_{\mathscr{I}^-}}^2+||\bm{\uppsi}_{\mathscr{H}^-}||_{\mathcal{E}^T_{\overline{\mathscr{H}^-}}}^2=||(\Psi|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\Psi|_{\overline{\Sigma}}) ||_{\mathcal{E}^T_{\overline{\Sigma}}}^2. \end{align} As in the case of $\mathscr{F}^+$, there exist Hilbert space isomorphisms \begin{align} \mathscr{F}^{-}:\mathcal{E}^{T}_{\Sigma}\longrightarrow\mathcal{E}^{T}_{\mathscr{H}^-}\oplus \mathcal{E}^{T}_{\mathscr{I}^-},\\ \mathscr{F}^{-}:\mathcal{E}^{T}_{\overline{\Sigma}}\longrightarrow\mathcal{E}^{T}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}. \end{align} Let $\bm{\uppsi}_{\mathscr{H}^-}\in\Gamma_c(\mathscr{H}^-)$ be supported on $u>u_+>-\infty$ such that $\|\bm{\uppsi}_{\mathscr{H}^-}\|_{\mathcal{E}^T_{\mathscr{H}^-}}<\infty$, $\bm{\uppsi}_{\mathscr{I}^-}\in\Gamma_c(\mathscr{I}^-)$ be supported on $v>v_+>-\infty$ such that $\|\bm{\uppsi}_{\mathscr{I}^-}\|_{\mathcal{E}^T_{\mathscr{I}^-}}<\infty$. 
Then there exists a unique smooth $\Psi$ defined on $ J^-(\Sigma)$ that satisfies \cref{RW} and realises $\bm{\uppsi}_{\mathscr{I}^-}$, $\bm{\uppsi}_{\mathscr{H}^-}$ as its radiation fields. Moreover, $(\Psi|_{\Sigma},\slashed{\nabla}_{n_{\Sigma}}\Psi|_{\Sigma})\in \mathcal{E}^T_{\Sigma}$ and \bref{919191} is satisfied. A similar statement applies in the case of compactly supported smooth scattering data on $\overline{\mathscr{H}^-}, \mathscr{I}^-$ mapping into $\mathcal{E}^T_{\overline{\Sigma}}$.\\ \indent Therefore, as in the case of $\mathscr{B}^-$, there exist Hilbert space isomorphisms \begin{align} \mathscr{B}^{+}:\mathcal{E}^{T}_{\mathscr{H}^-}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T}_{\Sigma},\\ \mathscr{B}^{+}:\mathcal{E}^{T}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T}_{\mathscr{I}^-}\longrightarrow \mathcal{E}^{T}_{\overline{\Sigma}}, \end{align} which satisfy \begin{align} \mathscr{F}^{-}\circ\mathscr{B}^{+}=Id_{\mathcal{E}^T_{\mathscr{H}^-}\oplus\;\mathcal{E}^T_{\mathscr{I}^-}},\qquad \mathscr{B}^{+}\circ\mathscr{F}^{-}=Id_{\mathcal{E}^T_{\Sigma}},\\ \mathscr{F}^{-}\circ\mathscr{B}^{+}=Id_{\mathcal{E}^T_{\overline{\mathscr{H}^-}}\oplus\;\mathcal{E}^T_{\mathscr{I}^-}},\qquad \mathscr{B}^{+}\circ\mathscr{F}^{-}=Id_{\mathcal{E}^T_{\overline{\Sigma}}}. \end{align} \end{proposition} With \Cref{RW past scattering}, \Cref{RW isomorphisms} is immediate. \begin{remark} It is possible to realise the map $\mathscr{S}$ by directly studying the future radiation fields on $\mathscr{I}^+$, $\overline{\mathscr{H}^+}$ of a solution to the Regge--Wheeler equation \bref{RW} arising all the way from past scattering data on $\mathscr{I}^-$, $\overline{\mathscr{H}^-}$, instead of obtaining it by formally composing $\mathscr{F}^+, \mathscr{B}^+$. The proof uses a subset of the ideas needed to prove \Cref{Corollary 1} of the introduction, so we will state the result here.
\end{remark} \begin{proposition} Given smooth, compactly supported past scattering data $(\uppsi_{\mathscr{H}^-},\uppsi_{\mathscr{I}^-})$ for the Regge--Wheeler equation \bref{RW}, there exists a unique solution $\Psi$ realising $\uppsi_{\mathscr{H}^-},\uppsi_{\mathscr{I}^-}$ as its radiation fields on $\overline{\mathscr{H}^-}, \mathscr{I}^-$ respectively. The solution $\Psi$ induces future radiation fields $(\uppsi_{\mathscr{H}^+},\uppsi_{\mathscr{I}^+})\in \mathcal{E}^{T}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T}_{{\mathscr{I}^+}}$ such that \begin{align} \|\uppsi_{{\mathscr{H}^-}}\|^2_{\mathcal{E}^{T}_{\overline{\mathscr{H}^-}}}+\|\uppsi_{{\mathscr{I}^-}}\|^2_{\mathcal{E}^{T}_{{\mathscr{I}^-}}}= \|\uppsi_{{\mathscr{H}^+}}\|^2_{\mathcal{E}^{T}_{\overline{\mathscr{H}^+}}}+\|\uppsi_{{\mathscr{I}^+}}\|^2_{\mathcal{E}^{T}_{{\mathscr{I}^+}}}. \end{align} The same result applies with scattering data restricted to $\mathcal{E}^{T}_{{\mathscr{H}^\pm}}$. \end{proposition} \subsection{Auxiliary results on backwards scattering}\label{subsection 5.5 auxiliary results} \subsubsection{Radiation fields of transverse null derivative near $\mathscr{I}^+$}\label{subsubsection 5.5.1 convergence of transverse null derivative} We can recover the formulae of \Cref{Phi 1 forward,Phi 2 forward} in backwards scattering from scattering data that is supported away from the future ends of $\mathscr{I}^+,\mathscr{H}^+$: \begin{corollary}\label{Phi 1 backwards} Let $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})$ be smooth, compactly supported scattering data for \cref{RW} with corresponding solution $\Psi$. Then \begin{align} \lim_{v\longrightarrow\infty}\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\Psi=\int^{u_+}_u d\bar{u}(\mathcal{A}_2-2)\bm{\uppsi}_{\mathscr{I}^+}.
\end{align} \end{corollary} \begin{proof} In a similar fashion to \Cref{Phi 1 forward}, we integrate \bref{RW first transverse derivative in the 3 direction} on a hypersurface $\underline{\mathscr{C}}_v$ from $u_+$ to $u$ to find \begin{align} \Phi^{(1)}=\frac{r^2}{\Omega^2}\int_u^{u_+}d\bar{u}\; \frac{\Omega^2}{r^2}\left[\mathring{\slashed{\Delta}}\Psi-(3\Omega^2+1)\Psi\right]. \end{align} Repeating the argument leading to \Cref{RW transverse derivatives converge} gives the result: \begin{align} \bm{\upphi}^{(1)}_{\mathscr{I}^+}= \lim_{v\longrightarrow\infty}\Phi^{(1)}=\int_u^{u_+}d\bar{u}\left(\mathcal{A}_2-2\right)\bm{\uppsi}_{\mathscr{I}^+}. \end{align} \end{proof} \Cref{Phi 2 forward} can also be recovered in backwards scattering for compactly supported data: \begin{corollary}\label{Phi 2 backwards} Let $\Psi$ be a solution to \cref{RW} arising from smooth, compactly supported scattering data $(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})$, then \begin{align} \begin{split} \lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\Psi&=\int_{u}^\infty\int_{u_1}^\infty du_1 du_2 \left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(u_2,\theta^A)\\ &=\int_u^{u_+}d\bar{u}(\bar{u}-u)\left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\bm{\uppsi}_{\mathscr{I}^+}(\bar{u},\theta^A). \end{split} \end{align} \end{corollary} Note that we do not need compact support in the direction of $u\longrightarrow-\infty$ on $\mathscr{I}^+$ for the above results to hold: \begin{corollary} \Cref{Phi 1 backwards,Phi 2 backwards} hold if $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $(-\infty,u_+]$, provided $\|\bm{\uppsi}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}<~\infty$.
\end{corollary} \subsubsection{Backwards $r^p$-estimates}\label{backwards rp estimates} It is possible to use energy conservation to develop $r$-weighted estimates in the backwards direction that are uniform in $u$, provided $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported in $u$. These estimates will help us show that $\mathscr{B}^-$ satisfies \bref{unitarity of B- formula} without reference to $\mathscr{F}^+$ or forwards scattering. We will also use them to show that $\Psi|_{\Sigma^*}\longrightarrow0$ towards $i^0$, and later to obtain similar statements for $\alpha,\underline\alpha$. These estimates first appeared in \cite{AAG19}.\\ \indent Let $u_-,u_+,v_-,v_+$ be as in the proof of \Cref{RWbackwardsexistence}, so that $\mathscr{C}_{u_+}\cap\{r>R\}$ is beyond the support of $\Psi$. Let $u<u_+$, then repeating the proof of \Cref{RWrp} in the region $\mathscr{D}_{u,v_+}^{u_+,\infty}$ for $p=1,2$ gives us (using $d\omega=\sin\theta d\theta d\phi$) \begin{align} \begin{split} \int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\; r|\Omega\slashed{\nabla}_4\Psi|^2\lesssim& \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)\\&+\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\; \left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right], \end{split} \end{align} \begin{align}\label{136} \int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; r^2(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\int_{\mathscr{D}_{u,v_+}^{u_+,\infty}} dudv d\omega \;r|\Omega\slashed{\nabla}_4\Psi|^2. 
\end{align} We estimate the bulk terms on the right hand side as follows: An energy estimate applied in $\mathscr{D}_{u,v_+}^{u_+,\infty}$ gives for all $u<u_+$: \begin{align}\label{backwards p=1} \int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;\left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right]\leq \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; |\partial_u\Psi|^2. \end{align} Integrating in $u$ gives \begin{align}\label{backwards p=1 integrated} \int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\;\left[|\Omega\slashed{\nabla}_4\Psi|^2+|\slashed{\nabla}\Psi|^2+V|\Psi|^2\right]&\leq \int_{u_-}^{u_+}du_1\int_{\mathscr{I}^+\cap\{u_2\in[u_1,u_+]\}}du_2d\omega\; |\partial_u\Psi|^2\\&=\int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\; (u_+-u)|\partial_u\Psi|^2, \end{align} knowing that $\slashed{\nabla}_3\Psi=0$ at $u=u_+,v>v_+$. Returning to the above we have \begin{align} \int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+(u_+-u)|\partial_u\Psi|^2. \end{align} Integrating once more in $u$ and substituting in (\ref{136}) gives us \begin{align}\label{RWbackwardsboundedness} \int_{\mathscr{C}_u\cap\{v>v_+\}}dvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;r(u_+-u)(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\frac{1}{2}(u-u_+)^2|\partial_u\Psi|^2. \end{align} We can integrate in $u$ once more: \begin{align}\label{RWbackwardsdecay} \int_{\mathscr{D}_{u,v_+}^{u_+,\infty}}dudvd\omega\;r^2|\Omega\slashed{\nabla}_4\Psi|^2\lesssim \int_{\mathscr{I}^+\cap\{u\in[u_-,u_+]\}}dud\omega\;\frac{1}{2}r(u-u_+)^2(|\slashed{\nabla}\Psi|^2+V|\Psi|^2)+\frac{1}{6}(u_+-u)^3|\partial_u\Psi|^2. 
\end{align} Note that all of the bulk integrals above could be done over $\mathscr{D}=\mathscr{D}_{u,v_+}^{u_+,\infty}\cup\{ J^-(\mathscr{C}_{u_-})\cap J^+(\Sigma^*)\}$ provided that $\partial_u\bm{\uppsi}_{\mathscr{I}^+}$ decays sufficiently fast, such that $\int_{-\infty}^u dud\omega\; |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2$ is integrable on $(-\infty,u_+]$. The first application will be to show that $\mathscr{B}^-$ is unitary: \begin{proposition}\label{RW unitary backwards} Let $\Psi$ arise from smooth scattering data $\bm{\uppsi}_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}\in \mathcal{E}^T_{\mathscr{H}^+}$ as in \Cref{RWbackwardsexistence}. Assume that $\bm{\uppsi}_{\mathscr{I}^+}$ is supported on $u\leq u_+<\infty$, $\bm{\uppsi}_{\mathscr{H}^+}$ is supported on $v \leq v_+ < \infty$, and that $\int_{-\infty}^u dud\omega |\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2$ is integrable on $(-\infty, u_+]$. Then \begin{align} \lim_{u\longrightarrow-\infty} F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]=0. \end{align} \end{proposition} \begin{proof} The energy estimate \begin{align} \mathbb{F}_{\Sigma^*}^T[\Psi\cdot\theta_u]+F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]= \|\bm{\uppsi}_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\bm{\uppsi}_{\mathscr{I}^+}\|^2_{\mathcal{E}^T_{\mathscr{I}^+}} \end{align} implies that $F^T_{\mathscr{C}_u\cap J^+(\Sigma^*)}[\Psi]$ decays monotonically as $u\longrightarrow-\infty$ (here $\theta_u$ is the characteristic function of the subset $\Sigma^*\setminus J^-(\mathscr{C}_u)$ of $\Sigma^*$). Combining this with \bref{backwards p=1 integrated} gives the result.
\end{proof} \begin{corollary}\label{RW unitary backwards corollary} Let $\Psi$ be as in \Cref{RW unitary backwards}, then \begin{align} \|(\Psi|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Psi|_{\Sigma^*})\|_{\mathcal{E}^T_{\Sigma^*}}^2=\|\bm{\uppsi}_{\mathscr{H}^+}\|^2_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}+\|\bm{\uppsi}_{\mathscr{I}^+}\|^2_{\mathcal{E}^T_{\mathscr{I}^+}}. \end{align} \end{corollary} \indent In the following, we show that if $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported on $\mathscr{I}^+$ then we have pointwise decay for $\Psi$ towards $i^0$: \begin{proposition}\label{RW backwards decay at sigma} Let $\Psi$ arise from scattering data $(\bm{\uppsi}_{\mathscr{I}^+},\bm{\uppsi}_{\mathscr{H}^+})\in\Gamma_c(\mathscr{I}^+)\times\Gamma_c(\mathscr{H}^+)$ as in \Cref{RWbackwardsexistence}, then $\Psi|_{\Sigma^*}\longrightarrow 0$ as $r\longrightarrow \infty$. \end{proposition} \begin{proof} For $R$ large enough, we can estimate \begin{align} \int_{S^2} \left|\Psi|_{\Sigma^*\cap\{r=R\}}-\bm{\uppsi}_{\mathscr{I}^+}\right|\lesssim \int_{v=\frac{1}{2}R^*}^{\infty} \int_{S^2}\sin\theta d{\bar{v}}d\theta d\phi|\Omega\slashed{\nabla}_4\Psi|\lesssim \frac{1}{\sqrt{R}}\left(\int_{\mathscr{C}_{-\frac{1}{2}R^*}\cap\{v>\frac{1}{2}R^*\}}r^2|\Omega\slashed{\nabla}_4\Psi|^2\right)^{\frac{1}{2}}. \end{align} The result follows noting that $\bm{\uppsi}_{\mathscr{I}^+}$ is compactly supported and that the integral on the right hand side is bounded according to (\ref{RWbackwardsboundedness}). \end{proof} \begin{proposition} Let $\Psi$ arise from the backwards evolution of scattering data $(\bm{\uppsi}_{\mathscr{I}^+},\bm{\uppsi}_{\mathscr{H}^+})$ in $\Gamma_c (\mathscr{I}^+)\times \Gamma_c (\mathscr{H}^+_{\geq0})$ as in \Cref{RWbackwardsexistence}, then \begin{align} \lim_{R\longrightarrow\infty} \int_{\underline{\mathscr{C}}_{v=\frac{1}{2}R^*} \cap J^+(\Sigma^*)} \Psi= \int_{\mathscr{I}^+} \bm{\uppsi}_{\mathscr{I}^+}.
\end{align} \end{proposition} \begin{proof} Assume the support of $\bm{\uppsi}_{\mathscr{I}^+}$ is in $\mathscr{I}^+\cap \{u\in[u_-,u_+]\}, -\infty<u_-<u_+<\infty$. Let $R$ be such that $u|_{t=0,r=R}=-\frac{1}{2}R^*<u_-$ and let $\tilde{v}=v(t=0,r=R)=\frac{1}{2}R^*$, $\tilde{u}>u_+$. We have \begin{align} \left|\int_{\underline{\mathscr{C}}_{v=\frac{1}{2}R^*} \cap J^+(\Sigma^*)} \Psi-\int_{\mathscr{I}^+} \bm{\uppsi}_{\mathscr{I}^+}\right|^2\leq \left[\int_{\mathscr{D}} |\Omega\slashed{\nabla}_4 \Psi|\right]^2\lesssim \frac{1}{{R}}\int_{\mathscr{D}} r^2|\Omega\slashed{\nabla}_4\Psi|^2, \end{align} where $\mathscr{D}= J^+(\Sigma^*\cap\{r\geq R\})\cap J^-(\mathscr{C}_{\tilde{u}})$. The result follows as (\ref{RWbackwardsdecay}) gives us that $\int_{\mathscr{D}} r^2|\Omega\slashed{\nabla}_4\Psi|^2<\infty$. \end{proof} \subsubsection{Backwards scattering for data of noncompact support}\label{subsubsection 5.5.3 backwards scattering data of noncompact support} Estimates \bref{ptwise infinity} and \bref{ptwise horizon} are uniform in the future cutoffs of $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ if the relevant fluxes on $\mathscr{I}^+, \mathscr{H}^+_{\geq0}$ are finite, in which case we can remove these cutoffs altogether and work with non-compactly supported scattering data. This follows by a simple modification of the argument leading to the limit $\Psi$ in the proof of \Cref{RWbackwardsexistence}.
\begin{proposition}\label{RW backwards noncompact} The results of \Cref{RWbackwardsexistence} hold when $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$ are not compactly supported, provided \begin{align} &\int_{[u_-,\infty)\times S^2} du\sin\theta d\theta d\phi \sum_{|\gamma|\leq2}| \slashed{\mathcal{L}}^\gamma_{S^2}\partial_u\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\bm{\uppsi}_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{I}^+}|^2 <\infty,\label{tthis_hypothesis_1}\\ &\int_{[v_-,\infty)\times S^2} dv\sin\theta d\theta d\phi\sum_{|\gamma|\leq2}| \slashed{\mathcal{L}}^\gamma_{S^2}\partial_v\bm{\uppsi}_{\mathscr{H}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\bm{\uppsi}_{\mathscr{H}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\bm{\uppsi}_{\mathscr{H}^+}|^2 <\infty.\label{thist_hypothesis_2} \end{align} Corollaries \ref{Phi 1 backwards} and \ref{Phi 2 backwards} also hold provided the fluxes of \bref{tthis_hypothesis_1}, \bref{thist_hypothesis_2} are finite with the sums running up to $|\gamma|\leq 4$. \end{proposition} \begin{proof} Let $R>3M$ be fixed, $\{u_{+,n}\}_{n=1}^\infty$ a monotonically increasing sequence and $\{v_{+,n}\}_{n=1}^\infty$ such that $v_{+,n}-u_{+,n}=R^*$. Let $\xi_n^u,\xi_n^v$ be smooth cutoff functions cutting off at $u_{+,n}$ and $v_{+,n}$ respectively. Using $\xi_n^u\bm{\uppsi}_{\mathscr{I}^+}, \xi_n^v\bm{\uppsi}_{\mathscr{H}^+}$ as scattering data, we can apply \Cref{RWbackwardsexistence} to obtain solutions $\Psi_n$ to \cref{RW}, each defined on $\mathscr{D}_n:=J^+(\Sigma^*)\cap\{\{u<u_{+,n}\}\cup\{v<v_{+,n}\}\}$. On $\mathscr{D}_k$, the sequence $\{\Psi_n\}$ for $n>k$ is bounded and equicontinuous, so repeating the argument of \Cref{RWbackwardsexistence} we can find a subsequence converging to $\Psi$ in the topology of compact convergence.
The estimate \bref{tthis_hypothesis_1} and the estimates \bref{ptwise horizon}, \bref{ptwise infinity} imply that $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$ and $\Psi\longrightarrow \bm{\uppsi}_{\mathscr{H}^+}$ towards $\mathscr{H}^+$. The solution $\Psi$ can be extended to the future by repeating the above argument for each $\mathscr{D}_k$ as $k\longrightarrow\infty$. The remaining statements follow by analogous arguments. \end{proof} \section{Future asymptotics of the +2 Teukolsky equation}\label{section 6} \Cref{section 6} is devoted to the study of future radiation fields induced by solutions to the $+2$ Teukolsky equation arising from smooth, compactly supported data on $\Sigma^*$, as was done for the Regge--Wheeler equation in \Cref{subsection 5.2 subsection Radiation fields}.\\ \indent We first gather the estimates we need in \Cref{T+2estimates}. We collect in \Cref{subsubsection 6.1.1 transport estimates} results from \cite{DHR16} estimating $\alpha$ from $\Psi$ defined via (\ref{hier+}) and the estimates of \Cref{subsection 5.1 Basic integrated boundedness and decay estimates} for $\Psi$. Building upon these estimates we then use the methods of \cite{DRrp} and \cite{AAG16a} to obtain $r$-weighted estimates for $\alpha, \psi$ in \Cref{rp+2}. We apply these results to study the future radiation fields and their fluxes in \Cref{+2 radiation}. \subsection{Integrated boundedness and decay for $\alpha$ via $\Psi$}\label{T+2estimates} We begin with the following basic proposition, already proven in \Cref{Chandra}: \begin{proposition}\label{+2 implies RW} Let $(\upalpha,\upalpha')$ be data on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$ giving rise to a solution $\alpha$ to \cref{T+2} as in \Cref{WP+2Sigma*} or \Cref{WP+2Sigmabar} respectively. Then $\Psi$ defined via \bref{hier+} out of the solution $\alpha$ on $ J^+(\Sigma^*)$, $ J^+(\Sigma)$ or $ J^+(\overline{\Sigma})$ satisfies \cref{RW}. 
\end{proposition} \subsubsection{Transport estimates for $\alpha$}\label{subsubsection 6.1.1 transport estimates} In what follows assume a small fixed $0<\epsilon<1/8$. \begin{proposition}\label{psiILED} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \footnote{All integrals on $\underline{\mathscr{C}}_v$ here are done with respect to the measure $\Omega^2\sin\theta dvd\theta d\phi$.} \begin{align}\label{locallabel1} \int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; r^{7-\epsilon}\Omega^4|\psi|^2 \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^{8-\epsilon}\Omega^2|\psi|^2. \end{align} \end{proposition} \begin{proof} Here we repeat the argument of Proposition 12.1.1 of \cite{DHR16}. Using the definition of $\psi$ in (\ref{hier+}) we can derive \begin{align} \partial_u \left[r^{6+n}\Omega^4|\psi|^2\right]+nr^{n+5}\Omega^4|\psi|^2=2r^{n-1}\frac{\Omega^2}{r^2}\Psi \cdot r^3\Omega\psi\leq \frac{1}{2}nr^{n+5}\Omega^4|\psi|^2+\frac{2}{n}r^{n-3}\Omega^2|\Psi|^2. \end{align} The result follows by integrating over $\mathscr{D}^{u,v}_{\Sigma^*}$ for $0<n<2$ and using \Cref{RWILED,RWrp}.
\end{proof} \begin{proposition}\label{alphaILED} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \begin{align}\label{locallabel2} \begin{split} \int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{6-\epsilon}\Omega^4|\alpha|^2&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; r^{5-\epsilon}\Omega^6|\alpha|^2 \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2, \end{split} \end{align} provided the right hand side is finite. \end{proposition} \begin{proof} Similar to \Cref{psiILED}. See Propositions 12.1.2, 12.2.6 and 12.2.7 of \cite{DHR16}. \end{proof} \begin{proposition}\label{ILED alpha 2nd angular} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \begin{align}\label{2ndderivativeofpsi} \begin{split} \int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\; r^{8-\epsilon}|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2&(r^3\Omega\psi)|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\; \frac{\Omega^2}{r^3}\left(1-\frac{3M}{r}\right)^2 |-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\psi)|^2 \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}dr d\omega\; r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2, \end{split} \end{align} provided the right hand side is finite.
\end{proposition} \begin{proof} Control of $\psi,\alpha$ as in \Cref{psiILED,alphaILED} allows us to directly control the flux of $-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\psi)$ on $\mathscr{C}_u$ using (\ref{eq:d4Psi}) and the flux bound of \Cref{RWrp}, while the spacetime integral can be controlled via \Cref{RWILED}. \end{proof} Commuting (\ref{hier+}) with $r\slashed{\mathcal{D}}_2$ and using the flux bound of the previous proposition allows us to obtain an integrated decay statement for $r\slashed{\mathcal{D}}_2\psi$: \begin{proposition} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \begin{align} \int_{\mathscr{D}^{u,v}_{\Sigma^*}} d\bar{u}d\bar{v} d\omega\; r^{7-\epsilon} \Omega^4|r\slashed{\mathcal{D}}_2\psi|^2 &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;r^{8-\epsilon}\Omega^2|r\slashed{\mathcal{D}}_2\psi|^2+r^{6-\epsilon}\Omega^4|\alpha|^2, \end{align} provided the right hand side is finite. \end{proposition} Finally, commuting the equation for $\psi$ in (\ref{hier+}) with $\slashed{\nabla}_{R^*}$ gives us control over the remaining derivative $\Omega\slashed{\nabla}_4\psi$ using the estimates for $\Psi$ and the nondegenerate control of $\slashed{\nabla}_{R^*} \psi$ in \Cref{RWILED}.
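\begin{remark} As a reading aid, we record the elementary absorption step common to the transport estimates above (restating the inequality used in the proof of \Cref{psiILED}): the coupling term is handled by the weighted Young's inequality $2ab\leq\frac{n}{2}a^2+\frac{2}{n}b^2$ with $a=r^{\frac{n+5}{2}}\Omega^2|\psi|$ and $b=r^{\frac{n-3}{2}}\Omega|\Psi|$, i.e. \begin{align*} 2r^{n+1}\Omega^3|\Psi||\psi|\leq\frac{n}{2}r^{n+5}\Omega^4|\psi|^2+\frac{2}{n}r^{n-3}\Omega^2|\Psi|^2, \end{align*} so that the first term on the right hand side is absorbed into the good bulk term $nr^{n+5}\Omega^4|\psi|^2$ of the transport identity, while the $\Psi$-term is controlled by \Cref{RWILED,RWrp}. \end{remark}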
We can optimise the weights near the event horizon and null infinity by commuting further with $\Omega^{-1}\slashed{\nabla}_3$ and $r\Omega\slashed{\nabla}_4$ respectively: \begin{proposition}\label{ILED psi higherorder} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \begin{align} \begin{split} \int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)} d\bar{v}d\omega\;r^{4-\epsilon} |\Omega\slashed{\nabla}_4(r^3\Omega&\psi)|^2 + \int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\;r^{7-\epsilon}\left[|\Omega^{-1}\slashed{\nabla}_3(\Omega\psi)|^2+|r\Omega\slashed{\nabla}_4\Omega\psi|^2\right]\\&\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}dr d\omega\;r^{8-\epsilon}\left[|\Omega\psi|^2+|\Omega^{-1}\slashed{\nabla}_3\psi|^2+|r\Omega\slashed{\nabla}_4\psi|^2\right], \end{split} \end{align} provided the right hand side is finite. \end{proposition} Similar estimates can be obtained for $\alpha$ by applying these ideas one more time to (\ref{hier+}), see section 12.3 of \cite{DHR16}.
\begin{proposition}\label{alphaILED higher order} Let $\alpha, \psi, \Psi$ be as in \bref{hier+} and \Cref{+2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$ \begin{align} \begin{split} &\int_{\mathscr{C}_{u}\cap J^+(\Sigma^*)\cap J^-(\underline{\mathscr{C}}_v)}d\bar{v}d\omega\;r^{6-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right]\\&+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}d\bar{u}d\bar{v}d\omega\;r^{5-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right] \\ &\lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;\Bigg\{r^{8-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega\psi|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega\psi|^2+|r\Omega\slashed{\nabla}_4\Omega\psi|^2\right]\\&+r^{6-\epsilon}\left[|r\slashed{\mathcal{D}}_2\Omega^2\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^2\alpha|^2+|r\Omega\slashed{\nabla}_4\Omega^2\alpha|^2\right]\Bigg\}, \end{split} \end{align} provided the right hand side is finite. \end{proposition} \subsubsection{An $r^p$-estimate for $\alpha,\psi$}\label{rp+2} The structure of the $+2$ Teukolsky equation allows us to apply the method of \cite{DRrp} and \cite{AAG16a} to \Cref{T+2} in the same way it was applied in \Cref{subsection 5.1 Basic integrated boundedness and decay estimates}.
\begin{proposition}\label{T+2rp} Let $\alpha$ be a solution to the +2 equation (\ref{T+2}), then for $p\in[0,2], u>u_0$ and $\mathscr{D}=\{(u,v,\theta,\phi): \bar{u}\in[u_0,u], r>R\}$ we have the following: \begin{align} \begin{split} \int_{\mathscr{C}_{u}\cap\{r>R\}}d\bar{v}d\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+\int_{\mathscr{D}}d\bar{u}d\bar{v}d\omega\; (p+8)r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+(2-p)r^{p-1}|\slashed{\nabla} r^5\Omega^{-2}\alpha|^2\\ \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*}drd\omega\;r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^2|\alpha|^2+\int_{\Sigma^*\cap\{r>R\}}drd\omega\; r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2. \end{split} \end{align} \end{proposition} \begin{proof} Rewrite the +2 equation in terms of $r^5\Omega^{-2}\alpha$: \begin{align}\label{+2 equation for radiation field} \Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3 r^5\Omega^{-2}\alpha+2\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha-\Omega^2\slashed{\Delta}r^5\Omega^{-2}\alpha-\frac{\Omega^2}{r^2}(15\Omega^2-13)r^5\Omega^{-2}\alpha=0. \end{align} Multiply by $r^p\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha$ and integrate by parts: \begin{align} \begin{split} &\Omega\slashed{\nabla}_3\left[r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2\right]+\Omega\slashed{\nabla}_4\left[r^p\Omega^2\left(|\slashed{\nabla} r^5\Omega^{-2}\alpha|^2-(15\Omega^2-13)\frac{1}{r^2}|r^5\Omega^{-2}\alpha|^2\right)\right]\\ &+\left\{4(3\Omega^2-1)+p\Omega^2\right\}r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2+\left[2-p-\frac{2M}{r}\right]r^{p-1}\left|\slashed{\nabla} r^5\Omega^{-2}\alpha\right|^2\\ &-\left[\frac{2M}{r}(30\Omega^2-13)+(2-p)(15\Omega^4-13\Omega^2)\right]r^{p-3}\Omega^2|r^5\Omega^{-2}\alpha|^2=0.
\end{split} \end{align} Integrating in $\mathscr{D}$, the Poincar\'e inequality (\ref{poincare}) ensures that the leading order terms in the $\mathscr{I}^+$ flux term are positive, and we similarly use (\ref{poincare}) to absorb the last term in the previous equation into the term containing the angular derivative. Finally we can deal with the $r=R$ flux term by averaging over $R$ and using the integrated decay statement of \Cref{alphaILED}. \end{proof} Similarly, we have \begin{proposition}\label{T+1rp} Let $\psi$ arise from $\alpha$ according to (\ref{hier+}), then we have \begin{align} \int_{\mathscr{C}_{u}\cap\{r>R\}}d\bar{v}d\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2+\int_{\mathscr{D}}d\bar{u}d\bar{v}d\omega\; (p+4)r^{p-1}|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2+(2-p)r^{p-1}|\slashed{\nabla} r^5\Omega^{-1}\psi|^2\\ \lesssim \mathbb{F}_{\Sigma^*}[\Psi]+\int_{\Sigma^*}drd\omega\;r^{8-\epsilon}\Omega^2|\psi|^2+r^{6-\epsilon}\Omega^2|\alpha|^2+ \int_{\Sigma^*\cap\{r>R\}}drd\omega\;r^p|\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi|^2. \end{align} \end{proposition} \begin{proof} Rewrite the definition of $\psi$ in terms of $r^5\Omega^{-1}\psi$ and differentiate via $\Omega\slashed{\nabla}_3$ to get \begin{align} \Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4 r^5\Omega^{-1}\psi-\Omega^2\slashed{\Delta} r^5\Omega^{-1}\psi+\frac{\Omega^2}{r^2}(3\Omega^2-5)r^5\Omega^{-1}\psi=-12M^2\frac{\Omega^4}{r^4} r^5\Omega^{-2}\alpha. \end{align} We repeat the argument employed in \Cref{T+2rp} using Cauchy--Schwarz to estimate the $\alpha$ term on the right hand side. \end{proof} \begin{remark}\label{transversealphapsi} We have similar statements to \Cref{T+2rp,T+1rp} for $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4$ derivatives of $r^5\Omega^{-1}\psi$ and $r^5\Omega^{-2}\alpha$.
\end{remark} \subsection{Future radiation fields and fluxes}\label{+2 radiation} In this section we define the notion of future radiation fields of solutions to the +2 Teukolsky equation \bref{T+2} and study some of their properties; in particular, we compute their $\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ fluxes when they belong to solutions of \bref{T+2} arising from smooth data of compact support. \subsubsection{Radiation on $\mathscr{H}^+$}\label{+2 radiation on H+} \begin{defin}\label{+2 radiation alpha definition H} Let $\alpha$ be a solution to (\ref{T+2}) arising from smooth data as in \Cref{WP+2Sigma*} or \Cref{WP+2Sigmabar}. The radiation field of $\alpha$ along $\mathscr{H}^+$, denoted $\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^2\alpha$ to $\mathscr{H}^+$. \end{defin} \begin{remark} We will use the same notation for the radiation field on $\mathscr{H}^+_{\geq0}, \mathscr{H}^+$ or $\overline{\mathscr{H}^+}$. \end{remark} As an easy consequence of the estimates of the previous section we have the following non-quantitative decay statements (all statements here apply to $\overline{\mathscr{H}^+}$): \begin{corollary}\label{psi+2ptwisedecay} For smooth data of compact support for the +2 Teukolsky equation on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\psi$ decays along any hypersurface $r=R$: \begin{align} \lim_{v\longrightarrow \infty} \left|\left|\Omega\psi\right|\right|_{L^2(S^2_{R})}=0. \end{align} \end{corollary} \begin{proof} \Cref{psiILED} applied to $\psi$ and $\slashed{\nabla}_T\psi$ implies \begin{align} \lim_{v\longrightarrow \infty} \int_{\underline{\mathscr{C}}_v\cap\{r\in[2M,R]\}} \Omega^2|\psi|^2 du \sin\theta d\theta d\phi =0. \end{align} Repeating this for $\Omega^{-1}\slashed{\nabla}_3 \Omega \psi$ using \Cref{ILED psi higherorder} gives the result.
\end{proof} The same works for $\alpha$ using Propositions \ref{alphaILED} and \ref{alphaILED higher order}: \begin{corollary}\label{alpha+2ptwisedecay} For smooth data of compact support on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\alpha$ decays along any hypersurface $r=R$: \begin{align} \lim_{v\longrightarrow \infty} \left|\left|\Omega^2\alpha\right|\right|_{L^2(S^2_{R})}=0. \end{align} \end{corollary} Commuting with the Lie derivative along angular Killing fields $\slashed{\mathcal{L}}_{\Omega_i}^\gamma$ for $|\gamma|\leq2$ gives \begin{corollary}\label{horizonpsidecay} For smooth data of compact support for the +2 Teukolsky equation on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$, $\Omega\psi|_{\mathscr{H}^+}$ and $\Omega^2\alpha|_{\mathscr{H}^+}$ decay towards $\mathscr{H}^+_+$. \end{corollary} \subsubsection{Radiation flux on $\mathscr{H}^+$}\label{+2 radiation flux on H+} Assume $\alpha$ satisfies \bref{T+2} and arises from smooth, compactly supported data on $\Sigma^*$. The regularity of $\Psi$ implies that on $\mathscr{H}^+$, the radiation flux in terms of $\Psi$ is given by \bref{RW def rad flux at H} \begin{align} \left\|\Psi\right\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2= \left\|\Omega\slashed{\nabla}_4\Psi\right\|^2_{L^2(\mathscr{H}^+)}. \end{align} Recall that if $\alpha$ satisfies the +2 Teukolsky equation \cref{T+2} then $\alpha, \Psi$ also satisfy (\ref{eq:d4Psi}) and (\ref{eq:d4d4Psi}): \begin{align}\label{psi out of alpha} \begin{split} \Omega\slashed{\nabla}_4 \Psi=\mathcal{A}_2 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha-6Mr\Omega^2\alpha-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3r\Omega^2\alpha, \end{split} \end{align} \begin{align}\label{Psi out of alpha} \begin{split} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \Psi&=\mathcal{A}_2(\mathcal{A}_2-2) r\Omega^2\alpha-6M\left(\Omega\slashed{\nabla}_3+\Omega\slashed{\nabla}_4\right)r\Omega^2\alpha.
\end{split} \end{align} We find the limits towards $\mathscr{H}^+$: the left hand side of (\ref{Psi out of alpha}) reads \begin{align} (\Omega\slashed{\nabla}_4)^2\Psi+\frac{3\Omega^2-1}{r}\Omega\slashed{\nabla}_4\Psi\longrightarrow \left[\partial_v-\frac{1}{2M}\right]\partial_v\bm{\uppsi}_{\mathscr{H}^+} \text{\;towards\;} \mathscr{H}^+. \end{align} Now the right hand side reads: \begin{align} \mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}, \end{align} so $\bm{\uppsi}_{\mathscr{H}^+}$ must be determined from the equation \begin{align}\label{equation for Psi out of alpha on H} \partial_v^2\bm{\uppsi}_{\mathscr{H}^+}-\frac{1}{2M}\partial_v\bm{\uppsi}_{\mathscr{H}^+}=\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}. \end{align} In Kruskal coordinates, this reads \begin{align} \begin{split} \frac{1}{(2M)^2}\partial_V^2\Psi&=\mathcal{A}_2(\mathcal{A}_2-2)V^{-2}\upalpha_{\mathscr{H}^+}-3V^{-1}\partial_V \upalpha_{\mathscr{H}^+}\\ &=\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]V^{-2}\upalpha_{\mathscr{H}^+}-3V\partial_V V^{-2}\upalpha_{\mathscr{H}^+}. \end{split} \end{align} With the condition that $\Psi,\Omega\slashed{\nabla}_4\Psi$ decay as $v\longrightarrow\infty$, we have \begin{align}\label{eq:197} -\frac{1}{(2M)^2}\partial_V\Psi=\int_V^\infty\left\{\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]\bar{V}^{-2}\upalpha_{\mathscr{H}^+}-3\bar{V}\partial_{\bar{V}} \bar{V}^{-2}\upalpha_{\mathscr{H}^+}\right\}d\bar{V}. \end{align} Integrating once more in $V$ and using the fact that $\upalpha_{\mathscr{H}^+}$ is compactly supported we get: \begin{align}\label{eq:198} \frac{1}{(2M)^2}\Psi=\int_V^\infty (\bar{V}-V)\left\{\left[\mathcal{A}_2(\mathcal{A}_2-2)-6\right]\bar{V}^{-2}\upalpha_{\mathscr{H}^+}-3\bar{V}\partial_{\bar{V}} \bar{V}^{-2}\upalpha_{\mathscr{H}^+}\right\}d\bar{V}.
\end{align} In Eddington--Finkelstein coordinates this reads \begin{lemma}\label{flux+2horizon} Let $\alpha$ be a solution to the +2 Teukolsky equation \bref{T+2} arising from data of compact support on $\mathscr{H}^+_{\geq0}$, and let $\Psi$ be the corresponding solution to the Regge--Wheeler equation arising from $\alpha$ via \bref{hier+}. Then the radiation field $\bm{\uppsi}_{\mathscr{H}^+}$ on $\mathscr{H}^+$ belonging to $\Psi$ is given by: \begin{align}\label{eq:199} \bm{\uppsi}_{\mathscr{H}^+}=2M\int_v^\infty \left[1-e^{\frac{1}{2M}(v-\bar{v})}\right]\left\{\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}\right\}d\bar{v}, \end{align} \begin{align}\label{eq:200} \partial_v \bm{\uppsi}_{\mathscr{H}^+}=\int^{\infty}_v e^{\frac{1}{2M}(v-\bar{v})}\{-\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}+6M\partial_v\upalpha_{\mathscr{H}^+}\} d\overline{v}. \end{align} \end{lemma} Equations \bref{eq:197}--\bref{eq:200} are the expressions for the radiation field and flux at $\mathscr{H}^+$ that we are able to compute directly out of data there. Note that this applies equally to radiation on $\mathscr{H}^+_{\geq0}$, $\mathscr{H}^+$ or $\overline{\mathscr{H}^+}$.\\ \indent Now let $F_{\mathscr{H}^+}=\int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}$, then $\partial_v F= \frac{1}{2M}F-\upalpha_{\mathscr{H}^+}$, which implies \begin{align} -\partial_v \bm{\uppsi}_{\mathscr{H}^+}=\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}-6M\partial_v F_{\mathscr{H}^+}. \end{align} Note that $F_{\mathscr{H}^+}$ decays towards the future end of $\mathscr{H}^+_{\geq0}$, since \begin{align} \lim_{v\longrightarrow\infty}F_{\mathscr{H}^+}=\lim_{v\longrightarrow\infty} \int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}=\lim_{v\longrightarrow\infty} 2M \upalpha_{\mathscr{H}^+}=0.
\end{align} Therefore, the $L^2(\mathscr{H}^+_{\geq0})$ norm of $\partial_v \bm{\uppsi}_{\mathscr{H}^+}$ is given by \begin{align}\label{+2 norm on H+ beyond B} \begin{split} \left\|\partial_v \bm{\uppsi}_{\mathscr{H}^+}\right\|_{L^2(\mathscr{H}^+_{\geq0})}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\&+\int_{\Sigma^*\cap\mathscr{H}^+}\sin\theta d\theta d\phi \left(\left|\mathring{\slashed{\Delta}}F|_{\Sigma^*\cap\mathscr{H}^+}\right|^2+6\left|\mathring{\slashed{\nabla}}F|_{\Sigma^*\cap\mathscr{H}^+}\right|^2+8\Big|F|_{\Sigma^*\cap\mathscr{H}^+}\Big|^2\right). \end{split} \end{align} Starting from initial data on $\Sigma$ or $\overline{\Sigma}$ and repeating the computation leading to \bref{+2 norm on H+ beyond B}, the boundary term drops out since we then have \begin{align} \lim_{v\longrightarrow-\infty}F_{\mathscr{H}^+}=\lim_{v\longrightarrow-\infty} \int_v^\infty e^{\frac{1}{2M}(v-\bar{v})} \upalpha_{\mathscr{H}^+}d\bar{v}=\lim_{v\longrightarrow-\infty} 2M \upalpha_{\mathscr{H}^+}=0. \end{align} Therefore we have \begin{align}\label{+2 norm on H+ up to B} \begin{split} \left\|\partial_v \bm{\uppsi}_{\mathscr{H}^+}\right\|_{L^2(\mathscr{H}^+)}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+)}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\mathscr{H}^+)}, \end{split} \end{align} and similarly \begin{align}\label{+2 norm on overline H+ up to B} \begin{split} \left\|\partial_v \bm{\uppsi}_{{\mathscr{H}^+}}\right\|_{L^2(\overline{\mathscr{H}^+})}^2=&\left\|\mathcal{A}_2(\mathcal{A}_2-2)F_{\mathscr{H}^+}\right\|^2_{L^2(\overline{\mathscr{H}^+})}+\left\|6M\partial_v F_{\mathscr{H}^+}\right\|^2_{L^2(\overline{\mathscr{H}^+})}.
\end{split} \end{align} \subsubsection{Radiation on $\mathscr{I}^+$}\label{+2 radiation on scri+} The estimates of \Cref{rp+2} lead us to define a radiation field for $\alpha$ the same way it is defined for $\Psi$: \begin{corollary}\label{psi+2scri1} For smooth data of compact support for $\alpha$ on $\Sigma$, $r^5\psi$ has a finite pointwise limit on $\mathscr{I}^+$ which defines a smooth field there. \end{corollary} \begin{proof} We follow step by step the argument of \Cref{RWradscri} and use the estimates of \Cref{T+1rp}. \end{proof} Similarly, using \Cref{T+2rp} we have \begin{corollary}\label{alpha+2scri} For smooth data of compact support for $\alpha$ on $\Sigma$, $r^5\alpha$ has a finite pointwise limit on $\mathscr{I}^+$ which defines a smooth field there. \end{corollary} For computational convenience we define \begin{defin}\label{+2 radiation alpha definition scri} For a solution $\alpha$ of (\ref{T+2}) arising from smooth data of compact support on $\Sigma^*$ as in \Cref{WP+2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in \Cref{WP+2Sigmabar}, the radiation field of $\alpha$ along $\mathscr{I}^+$ is defined to be the limit $\upalpha_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow\infty} r^5\Omega^{-2}\alpha(u,v,\theta^A)$.\\ \indent Let $\psi$ be as in \bref{hier+}. We define $\psi_{\mathscr{I}^+}$ to be the limit of $r^5\Omega^{-1}\psi$ as $v\longrightarrow\infty$.
\end{defin} Repeating the argument of \Cref{RWdecayscri} we have \begin{proposition}\label{T+1+2scridecay} For a solution $\alpha$ of (\ref{T+2}) arising from smooth data of compact support on $\Sigma^*$ as in \Cref{WP+2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in \Cref{WP+2Sigmabar}, the radiation fields $\upalpha_{\mathscr{I}^+}$, $\psi_{\mathscr{I}^+}$ and $\bm{\uppsi}_{\mathscr{I}^+}$ decay along $\mathscr{I}^+$ as $u\longrightarrow \infty$.\\ \end{proposition} \begin{remark}\label{psi+2scrialternative} We can appeal to an alternative argument that gives the existence of the limits of $r^5\psi$ and $r^5\alpha$ at $\mathscr{I}^+$ without resorting to the hierarchy of $r^p$-estimates as follows:\\ \indent Let $u\geq u_0$. From \Cref{RWradscri} we know that $\Psi$ induces a smooth radiation field $\bm{\uppsi}_{\mathscr{I}^+}$ on $\mathscr{I}^+$. For large enough $v$, \bref{hier+} gives \begin{align} r^5\Omega^{-1}\psi=\frac{r^2}{\Omega^2}\Big|_u\int_{u_0}^u \frac{\Omega^2}{r^2}\Psi d\bar{u}. \end{align} Therefore \begin{align} \begin{split} \Big|r^5\Omega^{-1}\psi\Big|_{(u,v)}&\leq \sup_{\bar{u}\in[u_0,u]}\left|\Psi|_{(\bar{u},v)}\right|\frac{r^2}{\Omega^2}\int_{u_0}^u\frac{\Omega^2}{r^2} d\bar{u}. \end{split} \end{align} Note that $ \frac{r^2}{\Omega^2}\int_{u_0}^u\frac{\Omega^2}{r^2}d\bar{u}$ is uniformly bounded in $v$ for finite $u_0,u$. Since $\Psi$ is also uniformly bounded in $v$ on $[u_0,u]$ we can conclude (say by Lebesgue's bounded convergence theorem) that the pointwise limit $ \lim_{v\longrightarrow\infty} r^5\psi$ exists for any fixed $u$. Note now that (\ref{hier+}) also implies \begin{align}\label{+2 Gronwall ingredient} \Omega\slashed{\nabla}_3 r^5\Omega^{-1}\psi+\frac{3\Omega^2-1}{r} r^5\Omega^{-1}\psi=\Psi. \end{align} Then we have \begin{align} \Big|r^5\Omega^{-1}\psi\Big|_{u,v}\leq \int_{u_0}^u d\bar{u} \left|\Psi\right|+\int_{u_0}^ud\bar{u}\left(\frac{3\Omega^2-1}{r}\right)\left|r^5\Omega^{-1}\psi\right|.
\end{align} We can apply Gr\"onwall's inequality to find: \begin{align}\label{backwards estimate +2 Gronwall} \Big|r^5\Omega^{-1}\psi\Big|_{u,v}\leq\int_{u_0}^ud\bar{u}\left|\Psi\right|\exp\left[\int_{u_0}^u \frac{3\Omega^2-1}{r} ds\right]\lesssim\left(\int_{u_0}^u d\bar{u}\left|\Psi\right|\right)\left(\frac{r(u_0,v)}{r(u,v)}\right)^2. \end{align} Thus $r^5\Omega^{-1}\psi$ is uniformly bounded in $v$ on $[u_0,u]$. Existence of the $\Omega\slashed{\nabla}_3$ derivatives of the limit of $r^5\psi$ is immediate. Repeating the argument for $r\slashed{\nabla} r^5\psi$ gives differentiability in the angular directions.\\ \indent The benefit of the preceding argument is that it allows for a characterisation of the radiation fields at null infinity that is local in $u$. \end{remark} \subsubsection{Radiation flux on $\mathscr{I}^+$}\label{+2 radiation flux on scri+} The radiation flux on $\mathscr{I}^+$ is easy enough to write down, as it is already in a form that can be computed from the radiation field $\upalpha_{\mathscr{I}^+}$ given the uniform convergence of $r^5\alpha$, $r^5\psi$ and $\Psi$ towards $\mathscr{I}^+$: \begin{align}\label{Psi out of alpha at scri} \begin{split} \bm{\uppsi}_{\mathscr{I}^+}&=(\partial_u)^2 \upalpha_{\mathscr{I}^+},\\ \partial_u\bm{\uppsi}_{\mathscr{I}^+}&=(\partial_u)^3\upalpha_{\mathscr{I}^+}.
\end{split} \end{align} \section{Future asymptotics of the $-2$ Teukolsky equation}\label{section 7} \Cref{section 7} is devoted to the study of future radiation fields induced by solutions to the $-2$ Teukolsky equation arising from smooth, compactly supported data on $\Sigma^*$, as was done for the $+2$ Teukolsky equation in \Cref{section 6} and for the Regge--Wheeler equation in \Cref{subsection 5.2 subsection Radiation fields}.\\ \indent We first gather the estimates we need in \Cref{subsection 7.1 integrated boundedness and decay estimates for -2}, where we collect results from \cite{DHR16} estimating $\underline\alpha$ from $\underline\Psi$ defined via (\ref{hier-}) and the estimates of \Cref{subsection 5.1 Basic integrated boundedness and decay estimates} for $\underline\Psi$. We apply these results to study the future radiation fields and their fluxes in \Cref{subsection 7.2 future radiation fields and fluxes}. The estimates of \cite{DHR16} collected in \Cref{subsection 7.1 integrated boundedness and decay estimates for -2} will be sufficient to construct and estimate the radiation fields on $\mathscr{H}^+$ and $\mathscr{I}^+$. \subsection{Integrated boundedness and decay for $\underline\alpha$ via $\underline\Psi$}\label{subsection 7.1 integrated boundedness and decay estimates for -2} We begin with the following basic proposition, already proven in \Cref{Chandra}: \begin{proposition}\label{-2 implies RW} Let $(\underline\upalpha,\underline\upalpha')$ be data for \cref{T-2} on $\Sigma^*$, $\Sigma$ or $\overline{\Sigma}$ as in \Cref{WP-2Sigma*,WP-2Sigmabar} respectively. Then $\underline\Psi$ defined out of the solution $\underline\alpha$ on $ J^+(\Sigma^*)$, $ J^+(\Sigma)$ or $ J^+(\overline{\Sigma})$ satisfies \cref{RW}.
\end{proposition} Throughout this section we focus on the case of data on $\Sigma^*$: \begin{proposition}\label{-2psiILED} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds: \begin{align} \begin{split} \int_{\mathscr{D}^{u,v}_{\Sigma^*}} \Omega^2 d\bar{u}d\bar{v}d\omega\;r^{4}\Omega^{-2} |\underline\psi|^2+&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;r^6\Omega^{-2}|\underline\psi|^2\\ &\lesssim \mathbb{F}_{\Sigma^*}[\underline\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\; r^6\Omega^{-2}|\underline\psi|^2. \end{split} \end{align} \end{proposition} \begin{proof} The definition of $\underline\psi$ (\ref{hier-}) and Cauchy--Schwarz imply \begin{align} \partial_v [r^{6}\Omega^{-2}|\underline\psi|^2]+M r^{4}\Omega^{-2}|\underline\psi|^2\leq \frac{1}{Mr^2}|\underline\Psi|^2. \end{align} The result follows by integrating over $\mathscr{D}^{u,v}_{\Sigma^*}$. \end{proof} \begin{proposition}\label{-2alphaILED} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds: \begin{align} \begin{split} \int_{\mathscr{D}^{u,v}_{\Sigma^*}} \Omega^2 d\bar{u}d\bar{v}d\omega\;\Omega^{-4}|\underline\alpha|^2+&\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;r^2\Omega^{-4}|\underline\alpha|^2\\&\lesssim \mathbb{F}_{\Sigma^*}[\underline\Psi]+\int_{\Sigma^*\cap J^-(\mathscr{C}_u)\cap J^-(\underline{\mathscr{C}}_v)}drd\omega\;r^6\Omega^{-2}|\underline\psi|^2+ r^2\Omega^{-4}|\underline\alpha|^2.
\end{split} \end{align} \end{proposition} \begin{proof} Similar to \Cref{-2psiILED}. See Propositions 12.1.2, 12.2.6 and 12.2.7 of \cite{DHR16}. \end{proof} \begin{proposition} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds: \begin{align}\label{2ndderivativeofpsibar} \begin{split} \int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;\left|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega^{-1}\underline\psi)\right|^2&+\int_{\mathscr{D}^{u,v}_{{\Sigma^*}}} d\bar{u}d\bar{v}d\omega\;\frac{\Omega^2}{r^3}\left(1-\frac{3M}{r}\right)^2 \left|-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2(r^3\Omega\underline\psi)\right|^2 \\ &\lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^6\Omega^{-2}\left|\underline\psi\right|^2+r^2\Omega^{-4}\left|\underline\alpha\right|^2. \end{split} \end{align} \end{proposition} \begin{proposition} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$ and for sufficiently small $\epsilon>0$, the following estimate holds: \begin{align} \int_{\mathscr{D}^{u,v}_{{\Sigma^*}}} \Omega^2 d\bar{u}d\bar{v}d\omega\; r^{5-\epsilon} \Omega^{-2}|r\slashed{\mathcal{D}}_2\underline\psi|^2 &\lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^{6-\epsilon}\Omega^{-2}\left[|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\underline\psi|^2\right]+r^{6-\epsilon}\Omega^{-4}|\underline\alpha|^2. \end{align} \end{proposition} \begin{proposition} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}.
Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds: \begin{align} \begin{split} \int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}&d\omega\;r^6|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+\int_{\mathscr{D}^{u,v}_{\Sigma^*}}\Omega^2 d\bar{u}d\bar{v}d\omega\; r^4\left[|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right]\\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^4\Omega^{-2}\left[|\underline\psi|^2+|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right]. \end{split} \end{align} \end{proposition} \begin{proposition} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds: \begin{align} \begin{split} &\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\;|r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2 r\Omega^{-2}\underline\alpha|^2+\int_{\mathscr{D}_{{\Sigma^*}}^{u,v}} \Omega^2 d\bar{u}d\bar{v}d\omega\;|r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2 \\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^4\Omega^{-2}\left[|\underline\psi|^2+|r\slashed{\mathcal{D}}_2\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2+|r\Omega\slashed{\nabla}_4(\Omega^{-1}\underline\psi)|^2\right]+\int_{{\Sigma^*}} drd\omega\;|r\Omega^{-2}\underline\alpha|^2.
\end{split} \end{align} \end{proposition} \begin{proposition} Let $\underline\alpha$ be a solution to (\ref{T-2}) and $\underline\Psi, \underline\psi$ be as in (\ref{hier-}) and \Cref{-2 implies RW}. Then for any $u$ and any $v>0$ such that $(u,v,\theta^A)\in J^+(\Sigma^*)$, the following estimate holds for sufficiently small $\epsilon>0$: \begin{align} \begin{split} &\int_{\underline{\mathscr{C}}_v\cap J^+(\Sigma^*)\cap J^-(\mathscr{C}_u)}\Omega^2 d\bar{u}d\omega\; \left[|r\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2r\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3 r\Omega^{-2}\underline\alpha|^2\right]\\&+\int_{\mathscr{D}_{\Sigma^*}^{u,v}}\Omega^2 d\bar{u}d\bar{v}d\omega\; \left[|\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3 \Omega^{-2}\underline\alpha|^2\right]\\& \lesssim \mathbb{F}_{{\Sigma^*}}[\underline\Psi]+\int_{{\Sigma^*}}drd\omega\;r^6\left[|\Omega^{-1}\underline\psi|^2+|r\slashed{\mathcal{D}}_2\Omega^{-1}\underline\psi|^2+|\Omega^{-1}\slashed{\nabla}_3(\Omega^{-1}\underline\psi)|^2\right]\\& +\int_{{\Sigma^*}}drd\omega\; r^2\left[|\Omega^{-2}\underline\alpha|^2+|r\slashed{\mathcal{D}}_2\Omega^{-2}\underline\alpha|^2+|\Omega^{-1}\slashed{\nabla}_3\Omega^{-2}\underline\alpha|^2\right]. \end{split} \end{align} \end{proposition} \subsection{Future radiation fields and fluxes}\label{subsection 7.2 future radiation fields and fluxes} In this section the notion of future radiation fields of solutions to the -2 Teukolsky equation \bref{T-2} is defined, and some of the properties of these radiation fields are studied, in particular obtaining their $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ fluxes when they belong to solutions of \bref{T-2} arising from smooth data of compact support. 
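\begin{remark} The horizon computations below repeatedly pass between Eddington--Finkelstein and Kruskal coordinates. As a sketch of the dictionary that is used (assuming the normalisation $V=e^{\frac{v}{2M}}$ on $\mathscr{H}^+$; the precise normalisation is the one fixed earlier in the paper), one has $\partial_v=\frac{V}{2M}\partial_V$ along $\mathscr{H}^+$, and hence for any smooth $f$ on $\mathscr{H}^+$
\begin{align}
\partial_v^2 f-\frac{1}{2M}\partial_v f=\frac{V}{(2M)^2}\partial_V\left(V\partial_V f\right)-\frac{V}{(2M)^2}\partial_V f=\frac{V^2}{(2M)^2}\partial_V^2 f,
\end{align}
which is the operator identity behind the Kruskal forms of the flux expressions on $\mathscr{H}^+$. \end{remark}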
\subsubsection{Radiation on $\mathscr{H}^+$}\label{-2 radiation on H+} \begin{defin}\label{-2 radiation alpha definition H} Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data as in \Cref{WP-2Sigma*}. The radiation field of $\underline\alpha$ along $\mathscr{H}^+_{\geq0}$, denoted $\underline\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^{-2}\underline\alpha$ to $\mathscr{H}^+_{\geq0}$. \end{defin} \begin{defin}\label{-2 radiation alpha definition open H} Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data which is compactly supported on $\Sigma$ according to \Cref{WP-2Sigmabar}. The radiation field of $\underline\alpha$ along $\mathscr{H}^+$, denoted $\underline\upalpha_{\mathscr{H}^+}$, is defined to be the restriction of $2M\Omega^{-2}\underline\alpha$ to $\mathscr{H}^+$. \end{defin} \begin{defin}\label{-2 radiation alpha definition overline H} Let $\underline\alpha$ be a solution to \cref{T-2} arising from smooth data as in \Cref{WP-2Sigmabar}. The radiation field of $\underline\alpha$ along $\overline{\mathscr{H}^+}$, denoted $\underline\upalpha_{{\mathscr{H}^+}}$, is defined by $V^2\underline\upalpha_{{\mathscr{H}^+}}=2MV^2\Omega^{-2}\underline\alpha|_{{\mathscr{H}^+}}$. \end{defin} \begin{remark} We will use the same notation for the radiation field on $\mathscr{H}^+_{\geq0}$, $\mathscr{H}^+$ or $\overline{\mathscr{H}^+}$. \end{remark} The following applies equally to radiation fields on $\mathscr{H}^+_{\geq0}$, $\mathscr{H}^+$ and $\overline{\mathscr{H}^+}$. \begin{proposition}\label{-2 radiation ptwise decay H} Assume $\underline\alpha$ arises from data which is supported away from $i^0$, then $\lim_{v\longrightarrow\infty}\underline{\bm{\uppsi}}_{\mathscr{H}^+}=0$.
\end{proposition} \begin{proof} The flux estimate of \Cref{-2psiILED} commuted with $\mathcal{L}_T$ implies \begin{align} \int_{v_0}^\infty d\bar{v}d\omega\; \left|\Omega^{-1}\underline\psi\right|^2+ \left|\slashed{\nabla}_T\Omega^{-1}\underline\psi\right|^2 <\infty. \end{align} This implies $||\Omega^{-1}\underline\psi||_{S^2_{\infty,v}}\longrightarrow0$ as $v\longrightarrow\infty$. A further Sobolev embedding on the sphere gives the result. \end{proof} Similarly, we have \begin{proposition} Assume $\underline\alpha$ arises from data which is supported away from $i^0$, then $\lim_{v\longrightarrow\infty}\underline\upalpha_{\mathscr{H}^+}=0$. \end{proposition} \subsubsection{Radiation flux on $\mathscr{H}^+$}\label{-2 radiation flux on H+} Now we can calculate the radiation energies in terms of $\underline\alpha$. We want to rewrite \begin{align} \Omega\slashed{\nabla}_4\underline\Psi=\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha \end{align} in terms of $\Omega^{-2}\underline\alpha$ and $\Omega^{-1}\underline\psi$. We have for $\underline\psi$ \begin{align} \begin{split} r^3\Omega^{-1}\underline\psi&=\frac{r^2}{\Omega^4}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha=\frac{r^2}{\Omega^4}\Omega\slashed{\nabla}_4 r\Omega^4 \Omega^{-2}\underline\alpha \\&=r^2(2-\Omega^2)\Omega^{-2}\underline\alpha+r^3\Omega\slashed{\nabla}_4 \Omega^{-2}\underline\alpha. \end{split} \end{align} We can write for $\underline\Psi$ \begin{align}\label{Psi H+} \begin{split} \underline\Psi&=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi=2M r^3\Omega^{-1}\underline\psi+r^2\Omega\slashed{\nabla}_4 r^3\Omega^{-1}\underline\psi \\&=2r^3\Omega^{-2}\underline\alpha+r^4(3+\Omega^2)\Omega\slashed{\nabla}_4\Omega^{-2}\underline\alpha+r^5(\Omega\slashed{\nabla}_4)^2\Omega^{-2}\underline\alpha. 
\end{split} \end{align} We can write for $\Omega\slashed{\nabla}_4\underline\Psi$ \begin{align}\label{nablav Psi H+} \begin{split} \Omega\slashed{\nabla}_4 \underline\Psi=&6r^2\Omega^2\Omega^{-2}\underline\alpha+r^3(2+13\Omega^2+3\Omega^4)\Omega\slashed{\nabla}_4\Omega^{-2}\underline\alpha \\&+3r^4(1+2\Omega^2)(\Omega\slashed{\nabla}_4)^2\Omega^{-2}\underline\alpha+r^5(\Omega\slashed{\nabla}_4)^3\Omega^{-2}\underline\alpha. \end{split} \end{align} At $\mathscr{H}^+$ \bref{Psi H+}, \bref{nablav Psi H+} become \begin{align}\label{-2 Psi out of alpha H+} \underline{\bm{\uppsi}}_{\mathscr{H}^+}=(2M)^2\left[2\underline\upalpha_{\mathscr{H}^+}+6M\partial_v\underline\upalpha_{\mathscr{H}^+}+(2M)^2\partial_v^2\underline\upalpha_{\mathscr{H}^+}\right], \end{align} \begin{align}\label{-2 expression on H is regular} \Omega\slashed{\nabla}_4\underline{\bm{\uppsi}}_{\mathscr{H}^+}=(2M)\left[4M\partial_v\underline\upalpha_{\mathscr{H}^+}+3(2M)^2\partial_v^2\underline\upalpha_{\mathscr{H}^+}+(2M)^3\partial_v^3\underline\upalpha_{\mathscr{H}^+}\right]. \end{align} \begin{remark} On $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, the norm $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}}$ is equal to \begin{align} \|A\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}}^2=\|2(2M\partial_v) A\|^2_{L^2(\mathscr{H}^+)}+\|3(2M\partial_v)^2 A\|^2_{L^2(\mathscr{H}^+)}+\|(2M\partial_v)^3 A\|^2_{L^2(\mathscr{H}^+)}, \end{align} while for $\|\;\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}$ we have \begin{align} \begin{split} \|A\|_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}^2&=\|2(2M\partial_v) A\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\|3(2M\partial_v)^2 A\|^2_{L^2(\mathscr{H}^+_{\geq0})}+\|(2M\partial_v)^3 A\|^2_{L^2(\mathscr{H}^+_{\geq0})}\\ &-6\|(2M\partial_v)A\|_{L^2(S^2_{\infty,0})}^2-3\|(2M\partial_v)^2A\|_{L^2(S^2_{\infty,0})}^2.
\end{split} \end{align} If the same computation for $\|\;\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}$ is done with terms expressed in the Eddington--Finkelstein coordinates, it produces boundary terms that are not regular near $\mathcal{B}$. The expression \bref{-2 expression on H is regular} for $\underline\Psi$ remains well-defined over $\mathscr{H}^+$ for data on $\overline{\Sigma}$ and has a finite limit at $\mathcal{B}$, as we can see by writing it in terms of the regular Kruskal coordinates: \begin{align} \|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}^2=\|V^{1/2}\partial_V^3 V^{-2}\underline\upalpha_{\mathscr{H}^+}\|_{L^2_VL^2(S^2_{\infty,v})}^2. \end{align} For smooth initial data on $\overline{\Sigma}$, \Cref{WP-2Sigmabar} guarantees the continuity of $V^2\Omega^{-2}\underline\alpha$ in a neighborhood of $\mathcal{B}$, and in the backwards direction we can show the same with \Cref{backwards wellposedness -2} and \Cref{WP-2Sigma*}. \end{remark} \subsubsection{Radiation on $\mathscr{I}^+$}\label{-2 radiation asymptotics on scri+} \begin{proposition}\label{-2psiscri} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi,\underline\Psi$ be as in (\ref{hier-}). Then $r^3\underline\psi$ has a uniform smooth limit towards $\mathscr{I}^+$. \end{proposition} \begin{proof} We can integrate the definition of $\underline\Psi$ in (\ref{hier-}) from $r=R$ towards $\mathscr{I}^+$: \begin{align}\label{above136 psi} r^3\Omega\underline\psi|_{u,v}=r^3\Omega\underline\psi|_{u,v(u,R)}+\int_{v(u,R)}^v d\bar{v}\,\frac{\Omega^2}{r^2} \underline\Psi.
\end{align} Note that Cauchy--Schwarz and Hardy's inequality applied to the integral term give \begin{align} \left[\int_{S^2}d\omega\int_{v(u,R)}^vd\bar{v} \left|\frac{\Omega^2}{r^2} \underline\Psi\right|\right]^2\leq \frac{1}{R} \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;\frac{\Omega^2}{r^2} \left|\underline\Psi\right|^2\lesssim\frac{1}{R}\int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\; |\Omega\slashed{\nabla}_4\underline\Psi|^2, \end{align} which is finite for data of compact support. We can repeat this estimate for $r\slashed{\nabla}\underline\Psi$ and conclude with a Sobolev embedding on the sphere that the integral on the right hand side of (\ref{above136 psi}) is bounded. The dominated convergence theorem gives the result. \Cref{RWrp} tells us that the convergence is uniform in $u$. Finally, we can repeat the argument having commuted with $\mathcal{L}_T, \mathcal{L}_{\Omega^i}$ to show that the limit is smooth. \end{proof} Similarly, \cref{eq:d3d3psibar} gives us \begin{proposition}\label{-2 radiation at scri} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $r\underline\alpha$ has a uniform smooth limit $\underline{\upalpha}_{\mathscr{I}^+}$ towards $\mathscr{I}^+$. \end{proposition} \begin{proof} We can again integrate the definition of $\underline\psi$ in (\ref{hier-}) from $r=R$ towards $\mathscr{I}^+$: \begin{align}\label{above136 alpha} r\Omega^2\underline\alpha|_{u,v}=r\Omega^2\underline\alpha|_{u,v(u,R)}+\int_{v(u,R)}^vd\bar{v} \frac{\Omega^2}{r^2}r^3\Omega\underline\psi. \end{align} Hardy's inequality gives us \begin{align} \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;\frac{\Omega^2}{r^2}\left|r^3\Omega\underline\psi\right|^2\lesssim \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}d\omega\;|\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2=\int_{\mathscr{C}_u\cap\{r>R\}} d\bar{v}d\omega\;\frac{\Omega^2}{r^2}|\underline\Psi|^2.
\end{align} We can conclude using the above and repeating the proof of \Cref{-2psiscri}. \end{proof} \begin{remark} In particular, $\slashed{\nabla}_T r\underline\alpha$ attains a limit towards $\mathscr{I}^+$ which is smooth and $\lim_{v\longrightarrow\infty} \slashed{\nabla}_T r\underline\alpha=\partial_u \underline\upalpha_{\mathscr{I}^+}$. \end{remark} \begin{remark} Instead of resorting to commutation with $\mathcal{L}_T, \mathcal{L}_{\Omega^i}$ directly, one could employ the hierarchy of (\ref{eq:d3psibar}) and (\ref{eq:d3d3psibar}) to estimate the derivatives of $\underline\psi$ and $\underline\alpha$ one by one with a smaller loss of derivatives, see \cite{DHR16}. \end{remark} \begin{defin}\label{-2 definition radiation at scri} For a solution $\underline\alpha$ of (\ref{T-2}) arising from smooth data of compact support on $\Sigma^*$ according to \Cref{WP-2Sigma*} or on $\Sigma, \overline{\Sigma}$ as in \Cref{WP-2Sigmabar}, the radiation field of $\underline\alpha$ along $\mathscr{I}^+$ is defined by $\underline\upalpha_{\mathscr{I}^+}(u,\theta^A)=\lim_{v\longrightarrow \infty} r\underline\alpha(u,v,\theta^A)$. \end{defin} \begin{proposition}\label{-2 psi ptwisedecay} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $\underline\psi|_{r=R}$ decays as $t\longrightarrow\infty$. \end{proposition} \begin{proof} The estimate of \Cref{-2psiILED} applied to $r<R$ for some fixed $R<\infty$, commuted with $T$ gives \begin{align} \lim_{v\longrightarrow \infty} \int_{\underline{\mathscr{C}}_v\cap\{2M<r<R\}}dud\omega\; \left|\Omega^{-1}\underline\psi\right|^2=0. \end{align} Commuting with $\Omega^{-1}\slashed{\nabla}_3$ gives the result. \end{proof} \begin{corollary}\label{-2 alpha ptwisedecay} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\Psi$ be as in (\ref{hier-}).
Then $\underline\alpha|_{r=R}$ decays as $t\longrightarrow\infty$. \end{corollary} \begin{proposition}\label{-2 psi radiation decay} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then $\underline{\psi}_{\mathscr{I}^+}:=\lim_{v\longrightarrow\infty}r^3\underline\psi$ decays towards the future end of $\mathscr{I}^+$. \end{proposition} \begin{proof} This follows from integrating (\ref{hier-}) between $r=R$ and $\mathscr{I}^+$: \begin{align} \left\|r^3\Omega\underline{\psi}\big|_{(u,v)}-\underline{{\psi}}_{\mathscr{I}^+}\big|_{u}\right\|_{L^2(S^2_R)}^2\lesssim \frac{1}{R}\int_{\mathscr{C}_u\cap\{r>R\}} |\Omega\slashed{\nabla}_4\underline\Psi|^2. \end{align} This decays as $u\longrightarrow\infty$ by energy conservation. \Cref{-2 psi ptwisedecay} gives the result. \end{proof} \begin{corollary}\label{-2 alpha radiation decay} Let $\underline\alpha$ be a solution to (\ref{T-2}) arising from smooth compactly supported data on $\Sigma^*$ and let $\underline\psi$ be as in (\ref{hier-}). Then the radiation field $\underline\upalpha_{\mathscr{I}^+}$ of \Cref{-2 definition radiation at scri} decays towards $\mathscr{I}^+_+$. \end{corollary} \subsubsection{Radiation flux on $\mathscr{I}^+$}\label{-2 radiation flux on scri+} We want to find the limit towards $\mathscr{I}^+$ of \begin{align}\label{eq:116} \Omega\slashed{\nabla}_3\underline\Psi=-(3\Omega^2-1)\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha+6Mr\Omega^2\underline\alpha-2r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}_2\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4r\Omega^2\underline\alpha. \end{align} As $\underline\psi$ is related to the transverse derivative of $\underline\alpha$ near $\mathscr{I}^+$, we want to express $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha$ in terms of quantities that can be constructed intrinsically on $\mathscr{I}^+$ from data.
We do this by integrating the Teukolsky equation: recall \cref{eq:d3d3psibar} \begin{align}\label{this22} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\underline\alpha+\mathcal{A}_2(\mathcal{A}_2-2)r\Omega^2\underline\alpha. \end{align} The results of the previous section give us the asymptotics: \begin{align} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\underline\Psi=(\Omega\slashed{\nabla}_3)^2\underline\Psi-\left(\frac{3\Omega^2-1}{r}\right)\Omega\slashed{\nabla}_3 \underline\Psi\longrightarrow (\partial_u)^2\underline\Psi \text{\;\;towards\;} \mathscr{I}^+. \end{align} The right hand side gives: \begin{align} 6M\partial_u \underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right) \underline\upalpha_{\mathscr{I}^+}, \end{align} whereas the left hand side becomes $\partial_u^2\underline{\bm{\uppsi}}_{\mathscr{I}^+}$. \bref{this22} then becomes at $\mathscr{I}^+$ \begin{align} \partial_u^2\underline{\bm{\uppsi}}_{\mathscr{I}^+}=6M\partial_u \underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right) \underline\upalpha_{\mathscr{I}^+}. \end{align} We can integrate along $\mathscr{I}^+$: \begin{align}\label{202} \partial_u \underline{\bm{\uppsi}}_{\mathscr{I}^+}\big|_u=\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}\big|_{u_0}-6M\underline\upalpha_{\mathscr{I}^+}|_{u_0}+6M\underline\upalpha_{\mathscr{I}^+}|_{u}+\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^u \underline\upalpha_{\mathscr{I}^+} d\bar{u}. \end{align} The fact that $\lim_{u\longrightarrow\infty} \partial_u \underline{\bm{\uppsi}}_{\mathscr{I}^+}=0=\lim_{u\longrightarrow\infty} \underline\upalpha_{\mathscr{I}^+}$ tells us that \begin{align} \mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^\infty \underline\upalpha_{\mathscr{I}^+}\, d\bar{u}=-\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}|_{u_0}+6M\underline\upalpha_{\mathscr{I}^+}|_{u_0}.
\end{align} For data of compact support on $\Sigma$, we can take $u_0$ such that the right hand side vanishes. Knowing that $\mathcal{A}_2, \mathcal{A}_2-2$ are uniformly elliptic, we must have \begin{align}\label{-2 mean is 0} \int_{u_0}^\infty \underline\upalpha_{\mathscr{I}^+}=0. \end{align} We can integrate (\ref{202}) once more to find a useful expression for $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ that can be computed from data on $\mathscr{I}^+$: \begin{align}\label{-2 Psi out of alpha on scri+} \underline{\bm{\uppsi}}_{\mathscr{I}^+}(u,\theta^A)=6M\int_{u_0}^u d\bar{u}\underline\upalpha_{\mathscr{I}^+}+\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_{u_0}^u d\bar{u}(u-\bar{u})\underline\upalpha_{\mathscr{I}^+}. \end{align} Again, seeing that $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$, we have (using Fubini's theorem for the first equality): \begin{align} \int_{u_0}^\infty\int_{u_1}^\infty du_1du_2\,\underline\upalpha_{\mathscr{I}^+}=\int_{u_0}^\infty d\bar{u}\,(\bar{u}-u_0)\,\underline\upalpha_{\mathscr{I}^+}=0. \end{align} We can rewrite $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$ and $\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}$: \begin{align}\label{formula for -2 RW in backwards direction} \underline{\bm{\uppsi}}_{\mathscr{I}^+}=-6M\int_u^{\infty}d\bar{u} \underline\upalpha_{\mathscr{I}^+}-\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_u^{\infty}d\bar{u}(u-\bar{u})\underline\upalpha_{\mathscr{I}^+}. \end{align} \begin{align}\label{formula for partialu -2 RW in backwards direction} \partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}=-\mathcal{A}_2\left(\mathcal{A}_2-2\right)\int_u^{\infty} d\bar{u} \underline\upalpha_{\mathscr{I}^+}+6M\underline\upalpha_{\mathscr{I}^+}|_u.
\end{align} Using \bref{-2 mean is 0}, we can recover \bref{-2 tricky norm at scri}: \begin{align} \|\partial_u\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{L^2(\mathscr{I}^+)}^2=\int_{\mathscr{I}^+} d{u}\sin\theta d\theta d\phi\left[ 36M^2|\underline{\upalpha}_{\mathscr{I}^+}|^2+\left|\mathcal{A}_2(\mathcal{A}_2-2)\int_{u}^\infty d\bar{u}\; \underline{\upalpha}_{\mathscr{I}^+}\right| ^2\right]. \end{align} \begin{remark}The fact that $\int_{-\infty}^\infty du_1\; \underline{\bm{\uppsi}}_{\mathscr{I}^+}=\int_{-\infty}^\infty\int_{u_1}^\infty du_1 du_2 \;\underline{\bm{\uppsi}}_{\mathscr{I}^+}=0$ implies \begin{align} \int_{-\infty}^\infty\int_{u_1}^\infty\int_{u_2}^\infty du_1du_2du_3\;\underline\upalpha_{\mathscr{I}^+}=\int_{u_0}^\infty\int_{u_1}^\infty\int_{u_2}^\infty\int_{u_3}^\infty du_1du_2du_3du_4\;\underline\upalpha_{\mathscr{I}^+}=0. \end{align} \end{remark} \section{Constructing the scattering maps for $\alpha, \underline\alpha$}\label{section 8 constructing the scattering maps} We gather the results of Sections \ref{section 6} and \ref{section 7} to finally construct the scattering theory for the Teukolsky equations \bref{T+2}, \bref{T-2}. \Cref{subsection 8.1 future scattering +2} is devoted to the +2 Teukolsky equation \bref{T+2}, where \Cref{subsubsection 8.1.1 forwards scattering +2} handles forwards scattering and \Cref{subsubsection 8.1.2 backwards scattering +2} handles backwards scattering. \Cref{subsection 8.2 future scattering -2} is devoted to the -2 Teukolsky equation \bref{T-2}, where \Cref{subsubsection 8.2.1 forwards scattering -2} handles forwards scattering and \Cref{subsubsection 8.2.2 backwards scattering -2} handles backwards scattering. Taking into account \Cref{time inversion}, results concerning scattering towards the past are immediate and they are collected in \Cref{subsection 8.3 past scattering +2-2}.
\subsection{Future scattering for $\alpha$}\label{subsection 8.1 future scattering +2} Forwards scattering for the +2 Teukolsky equation \bref{T+2} is worked out entirely analogously to the case of the Regge--Wheeler equation \bref{RW}, using the results of Section \ref{+2 radiation}.\\ \indent For backwards scattering, we make use of the transport equations \bref{hier+} and the backwards scattering theory of \Cref{subsection 5.2 subsection Radiation fields} for the Regge--Wheeler equation \bref{RW}, instead of directly appealing to a limiting argument that repeats the proof of \Cref{RWbackwardsexistence}. Throughout this process, the uniform $T$-energy estimates of $\Psi$ are vital in controlling the backwards evolution of $\alpha$, but we note here that it is possible to derive uniform, nondegenerate energy estimates for $\alpha$ near $\mathscr{H}^+$ in contrast with the case of $\Psi$. In this sense, $\alpha$ is ``red-shifted'' in the backwards direction. \subsubsection{Forwards scattering for $\alpha$}\label{subsubsection 8.1.1 forwards scattering +2} We put together the ingredients worked out in \Cref{+2 radiation} to construct the forwards scattering map. \begin{proof}[Proof of \Cref{+2 future forward scattering}] Let $\alpha$ be the solution to \cref{T+2} on $ J^+(\Sigma^*)$ arising out of a compactly supported data set $(\upalpha,\upalpha')$ on $\Sigma^*$ as in \Cref{WP+2Sigma*}. The radiation field $\upalpha_{\mathscr{H}^+}$ exists as in \Cref{+2 radiation alpha definition H}. \Cref{alpha+2ptwisedecay} applied for $R=2M$ says that $\upalpha_{\mathscr{H}^+}\longrightarrow 0$ towards $\mathscr{H}^+_+$. Let $\Psi$ be the solution to \cref{RW} associated to $\alpha$ via (\ref{hier+}). The fact that $(\Psi|_{\Sigma^*},\slashed{\nabla}_{T}\Psi|_{\Sigma^*})$ are compactly supported means that the results of \Cref{+2 radiation flux on H+} apply.
In particular, we find that \begin{align} \left|\int_{v}^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}\upalpha_{\mathscr{H}^+}(\bar{v},\theta^A)\right|\leq \frac{1}{2M} |\upalpha_{\mathscr{H}^+}(v,\theta^A)|, \end{align} and since $\|\uppsi_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+_{\geq0}}}<\infty$, this shows that $\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}<\infty$ and $\upalpha_{\mathscr{H}^+}\in \mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$. Similarly by \Cref{alpha+2scri}, $r^5\alpha$ has a pointwise limit as $v\longrightarrow \infty$ which induces a smooth $\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$. \Cref{T+1+2scridecay} implies that $\upalpha_{\mathscr{I}^+}$ decays towards $\mathscr{I}^+_+$. As $\bm{\uppsi}_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}$, we have that $\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$. \end{proof} \begin{corollary}\label{+2 future forward scattering Sigma Sigmabar} Solutions to (\ref{T+2}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,+2}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T,+2}$. Solutions to (\ref{T+2}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,+2}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T,+2}$. \end{corollary} \begin{proof} Identical to the proof of \Cref{RWfcpSigma} using \Cref{WP+2Sigmabar,backwards wellposedness +2}. \end{proof} The proof of \Cref{+2 future forward scattering} above and \Cref{+2 future forward scattering Sigma Sigmabar} allow us to define the forwards maps ${}^{(+2)}\mathscr{F}^+$ from dense subspaces of $\mathcal{E}^{T,+2}_{\Sigma^*}$, $\mathcal{E}^{T,+2}_{\Sigma}$, $\mathcal{E}^{T,+2}_{\overline{\Sigma}}$. \begin{defin} Let $(\upalpha,\upalpha')$ be a smooth data set of compact support to the +2 Teukolsky equation \bref{T+2} on $\Sigma^*$ as in \Cref{WP+2Sigma*}.
Define the map ${}^{(+2)}\mathscr{F}^+$ by \begin{align} {}^{(+2)}\mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longrightarrow (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}), \end{align} where $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ are as in the proof of \Cref{+2 future forward scattering}.\\ \indent Using \Cref{+2 future forward scattering Sigma Sigmabar}, the map ${}^{(+2)}\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$: \begin{align} {}^{(+2)}\mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longrightarrow (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}),\\ {}^{(+2)}\mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), (\upalpha,\upalpha')\longrightarrow (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+}). \end{align} \end{defin} \subsubsection{Backwards scattering for $\alpha$}\label{subsubsection 8.1.2 backwards scattering +2} Now we construct the inverse ${}^{(+2)}\mathscr{B}^-$ of \Cref{+2 future backward scattering} on a dense subspace of $\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}$. The existence of a solution to the +2 Teukolsky equation \bref{T+2} out of compactly supported scattering data on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$ is shown in \Cref{+2 backwards existence}. Showing that this solution defines an element of $\mathcal{E}^{T,+2}_{\Sigma^*}$ is done in \Cref{+2 backwards inclusion 7/2}. 
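Schematically, and in the notation of the proposition below, the backwards construction inverts the forwards hierarchy: scattering data for $\alpha$ first determine scattering data for the Regge--Wheeler equation \bref{RW}, a backwards Regge--Wheeler solution $\Psi$ is obtained, and the transport relations \bref{hier+} are then integrated from the future to recover $\psi$ and $\alpha$:
\begin{align*}
(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\longrightarrow(\bm{\uppsi}_{\mathscr{H}^+},\bm{\uppsi}_{\mathscr{I}^+})\longrightarrow\Psi\longrightarrow\psi\longrightarrow\alpha.
\end{align*}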
\begin{proposition}\label{+2 backwards existence} For $\upalpha_{\mathscr{H}^+}\in\Gamma_c(\mathscr{H}^+_{\geq0})\cap\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}$ supported on $\mathscr{H}^+_{\geq0}\cap\{v<v_+\}$ for $v_+<\infty$, $\upalpha_{\mathscr{I}^+}\in\Gamma_c(\mathscr{I}^+)\cap\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ supported on $\mathscr{I}^+\cap\{u<u_+\}$ for $u_+<\infty$, there exists a unique solution $\alpha$ to \bref{T+2} in $J^+(\Sigma^*)$ that realises $\upalpha_{\mathscr{H}^+}$ and $\upalpha_{\mathscr{I}^+}$ as its radiation fields on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$. \end{proposition} \begin{proof} Define \begin{align} {\psi}_{\mathscr{H}^+}&=\frac{1}{(2M)^3}\int_v^\infty d\bar{v}\; e^{\frac{1}{2M}(v-\bar{v})}(\mathcal{A}_2-3)\upalpha_{\mathscr{H}^+},\\ \bm{\uppsi}_{\mathscr{H}^+}&=2M\int_v^\infty d\bar{v} \left[e^{\frac{1}{2M}(v-\bar{v})}-1\right]\left\{\mathcal{A}_2\left[\mathcal{A}_2-2\right]\upalpha_{\mathscr{H}^+}-6M\partial_v\upalpha_{\mathscr{H}^+}\right\},\\ \psi_{\mathscr{I}^+}&=\partial_u\upalpha_{\mathscr{I}^+},\\ \bm{\uppsi}_{\mathscr{I}^+}&=\partial_u^2\upalpha_{\mathscr{I}^+}. \end{align} With scattering data $\bm{\uppsi}_{\mathscr{I}^+}, \bm{\uppsi}_{\mathscr{H}^+}$, there is a unique solution $\Psi$ to \cref{RW} on $J^+(\Sigma^*)$. Define $\psi, \alpha$ by \begin{align} r^3\Omega\psi=(2M)^3\psi_{\mathscr{H}^+}-\int_u^\infty\frac{\Omega^2}{r^2}\Psi d\bar{u},\qquad\qquad r\Omega^2\alpha=\upalpha_{\mathscr{H}^+}-\int_u^\infty r{\Omega^3}\psi d\bar{u}, \end{align} then $\psi, \alpha$ satisfy the transport relations \bref{hier+}: \begin{align} \Psi=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3 r^3\Omega\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha. \end{align} (Note that we are working with $(1,1)$-tensor fields throughout.)
The boundedness of $F_v^T[\Psi](u,\infty)$ implies that $r\Omega^2\alpha\longrightarrow\upalpha_{\mathscr{H}^+}$, $r^3\Omega\psi\longrightarrow(2M)^3{\psi}_{\mathscr{H}^+}$ as $u\longrightarrow\infty$. Since $\Psi$ satisfies \cref{RW}, the commutation relation \bref{commutation relation} implies \begin{align} \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\mathcal{T}^{+2}r\Omega^2\alpha=0, \end{align} where $\mathcal{T}^{+2}$ is the $+2$ Teukolsky operator. We have: \begin{align} \begin{split} \mathcal{T}^{+2}r\Omega^2\alpha&=\frac{3\Omega^2-1}{r}r^3\Omega\psi+\Omega\slashed{\nabla}_4 r^3\Omega\psi-\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\alpha,\\ \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha&=-(\mathcal{A}_2-3\Omega^2+1)r^3\Omega\psi-\Omega\slashed{\nabla}_4\Psi+6Mr\Omega^2\alpha. \end{split} \end{align} On $\mathscr{H}^+$ this evaluates to \begin{align} \mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}&=(2M)^3\left(\partial_v-\frac{1}{2M}\right)\psi_{\mathscr{H}^+}-(\mathcal{A}_2-3)\upalpha_{\mathscr{H}^+},\\ \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}&=-(2M)^3(\mathcal{A}_2+1)\psi_{\mathscr{H}^+}+6M\upalpha_{\mathscr{H}^+}-\partial_v\bm{\uppsi}_{\mathscr{H}^+}. \end{align} It is clear that with our construction of initial data, $\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}= \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\mathcal{T}^{+2}r\Omega^2\alpha|_{\mathscr{H}^+}=0$, therefore $\alpha$ satisfies $\mathcal{T}^{+2}r\Omega^2\alpha=0$. Note that as $\Psi(u,v)$ vanishes for $u>u_+,v>v_+$, the same applies to $\alpha,\psi$. Let $\mathcal{R}>3M$; we can estimate $\psi(u,v)$ for $r(u_+,v)>\mathcal{R}$ by \begin{align} |r^5\Omega\psi|\leq\int_u^{u_+}d\bar{u}\;\Omega^2|\Psi|+\int_u^{u_+}d\bar{u}\;\frac{2}{r}|r^5\Omega\psi|. \end{align} Gr\"onwall's inequality implies \begin{align} |r^5\Omega\psi|\lesssim \left(\frac{r(u,v)}{r(u_+,v)}\right)^2\int_u^{u_+}d\bar{u}\;|\Psi|.
\end{align} As $\Psi$ converges uniformly to $\bm{\uppsi}_{\mathscr{I}^+}$, this implies that $\partial_u r^5\Omega\psi$ converges uniformly to $\bm{\uppsi}_{\mathscr{I}^+}=\partial_u\psi_{\mathscr{I}^+}$, which in turn says that $r^5\Omega\psi$ converges to $\psi_{\mathscr{I}^+}$. An identical argument shows that $r^5\alpha$ converges to $\upalpha_{\mathscr{I}^+}$. \end{proof} In the following we explicitly show that $\alpha$ of \Cref{+2 backwards existence} defines a member of $\mathcal{E}^{T,+2}_{\Sigma^*}$: \begin{corollary}\label{+2 backwards inclusion 7/2} Let $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$ be as in \Cref{+2 backwards existence}. Let $\alpha$ be the solution to \cref{T+2} arising from $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$. Then $(\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*})\in\mathcal{E}^{T,+2}_{\Sigma^*}$. \end{corollary} \begin{proof} Let $\xi$ be a smooth cutoff function over $\mathbb{R}$ with $\xi(x)=1$ for $x\leq0$, $\xi(x)=0$ for $x\geq1$, such that all derivatives of $\xi$ are uniformly bounded. Let $\{R_n\}_{n=1}^\infty$ with $R_{1}$ large and $R_{n+1}=2R_n$ and define $\xi_n (r)=\xi\left(\frac{r-R_{n}}{R_{n+1}-R_n}\right)$. We want to show that the sequence $\alpha_n=\xi_n\alpha$ is such that $(\Omega^2\alpha_n,\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha_n)$ converges to $(\Omega^2\alpha,\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha)$ in $\mathcal{E}^{T,+2}_{\Sigma^*}$. Denoting by $\Psi_{n}=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha_n$ the field associated to $\alpha_n$ via \bref{hier+}, we calculate \begin{align} \begin{split} \Psi_n&=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2 r\Omega^2\alpha_n=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2\xi_n r\Omega^2\alpha\\ &=r^2(r^2\xi_n')'r\Omega^2\alpha-2r^2\xi_n'r^3\Omega\psi+\xi_n\Psi.
\end{split} \end{align} We know that $\xi_n\Psi\longrightarrow \Psi$ in $\mathcal{E}^{T}_{\Sigma^*}$ (see \Cref{RW enough to be in space}). Seeing that $r^2\xi_n'\sim r, r^2(r^2\xi_n')'\sim r^2$ on $[R_n,R_{n+1}]$, we can estimate the remainder via \begin{align} \begin{split} \|\Psi_n-\xi_n\Psi\|_{\mathcal{E}^T_{\Sigma^*}}^2\lesssim\int_{R_n}^{R_{n+1}} dr\sin\theta d\theta d\phi\;& \left[|r^3\Omega\psi|^2+|\mathring{\slashed{\nabla}}r^3\Omega\psi|^2+|r\Omega\slashed{\nabla}_4 r^3 \Omega\psi|^2\right]\\&+\left[|r^3\Omega\alpha|^2+|\mathring{\slashed{\nabla}}r^3\Omega\alpha|^2+|r\Omega\slashed{\nabla}_4 r^3 \Omega\alpha|^2\right]\\&+\left[\frac{1}{r^2}(|\Psi|^2+|\mathring{\slashed{\nabla}}\Psi|^2)+|\Omega\slashed{\nabla}_4\Psi|^2\right]. \end{split} \end{align} The result follows if we can show that $r^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}, r^{\frac{7}{2}}\Omega^2\alpha|_{\Sigma^*}, r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega\psi, r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega^2\alpha$ decay as $r\longrightarrow\infty$. Let $u<u'<u_-$ and take $r=r(u',v), R=r(u,v)$ and $(u,v,\theta^A):=(R,\theta^A)\in\Sigma^*$. We estimate $R^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}$ by integrating the definition of $\Psi$ (\ref{hier+}): \begin{align} \begin{split} \int_{S^2}R^{\frac{7}{2}}\Omega|\psi(R,\theta^A)|d\omega&\leq\sqrt{r}\int_u^{u'}d\bar{u}\int_{S^2}d\omega\; \frac{\Omega^2}{r^2}|\Psi|+\sqrt{R}r^{3}\Omega|\psi(u',v,\theta^A)|\\&\lesssim_{u'}\sqrt{r}\int_u^{u'}d\bar{u}\int_{S^2}d\omega\; \frac{\Omega^2}{r^2}|\Psi|+r^{\frac{7}{2}}\Omega|\psi(u',v,\theta^A)|\\&\lesssim_{u'}\sqrt{F^T_v[\Psi](u,u')}+r^{\frac{7}{2}}\Omega|\psi(u',v,\theta^A)|. \end{split} \end{align} We used Cauchy--Schwarz to get to the last step. 
The right hand side decays as $v\longrightarrow\infty$ since $F^T_v[\Psi](u,u')$ decays, $F^T_{u'}[\Psi](v,\infty)<\infty$ and $\bm{\uppsi}_{\mathscr{I}^+}$ vanishes for $u<u_-$, so that \begin{align} |r^{3}\Omega\psi(u',v,\theta^A)|_{L^2(S^2_{u',v})}\leq\int_v^\infty d\bar{v}\int_{S^2_{u',\bar{v}}} \frac{\Omega^2}{r^2}|\Psi|\leq\frac{1}{\sqrt{r(u',v)}}\sqrt{F^T_{u'}[\Psi](v,\infty)}, \end{align} and commuting with $\slashed{\mathcal{L}}_{S^2}^\gamma$ for $|\gamma|\leq 3$ gives that $R^{\frac{7}{2}}\Omega\psi|_{\Sigma^*}$ decays as $R\longrightarrow\infty$. This can be repeated to show the same for $R^{\frac{7}{2}}\Omega^2\alpha|_{\Sigma^*}$. Furthermore, we have \begin{align} \begin{split} \Omega\slashed{\nabla}_3 r\Omega\slashed{\nabla}_4 r^3\Omega\psi=-\frac{\Omega^2}{r} r\Omega\slashed{\nabla}_4 r^3\Omega\psi+(3\Omega^2-1)\frac{\Omega^2}{r^2}\Psi+\frac{\Omega^2}{r}\Omega\slashed{\nabla}_4 \Psi. \end{split} \end{align} We estimate \begin{align} \left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\right|\leq \left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi(u',v,\theta^A)\right|+\int_u^{u'}d\bar{u}\left[\frac{\Omega^2}{r}|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|+(3\Omega^2-1)\frac{\Omega^2}{r^2}|\Psi|+\frac{\Omega^2}{r}|\Omega\slashed{\nabla}_4\Psi|\right]. \end{align} Gr\"onwall's inequality implies \begin{align} \left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\right|\lesssim\frac{r(u',v)}{r(u,v)}\left[\left|r\Omega\slashed{\nabla}_4 r^3\Omega\psi(u',v,\theta^A)\right|+\frac{1}{\sqrt{R}}\sqrt{F^T_v[\Psi](u,u')}\right], \end{align} which in turn implies that $r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega\psi|_{\Sigma^*}\longrightarrow 0$ as $R\longrightarrow\infty$. The same can be repeated to show $r^{\frac{3}{2}}\Omega\slashed{\nabla}_4 r^3\Omega^2\alpha|_{\Sigma^*}\longrightarrow 0$ as $R\longrightarrow\infty$.
\end{proof} \begin{defin}\label{+2 definition of B-} Let $\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{I}^+}$ be as in \Cref{+2 backwards existence}. Define the map ${}^{(+2)}\mathscr{B}^-$ by \begin{align} {}^{(+2)}\mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\longrightarrow (\Omega^2\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^2\alpha|_{\Sigma^*}), \end{align} where $\alpha$ is the solution to \bref{T+2} arising from scattering data $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ as in \Cref{+2 backwards existence}. \end{defin} \begin{corollary}\label{+2B- inverts +2F+} The maps ${}^{(+2)}\mathscr{F}^+$, ${}^{(+2)}\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that ${}^{(+2)}\mathscr{F}^+\circ{}^{(+2)}\mathscr{B}^-=Id$, ${}^{(+2)}\mathscr{B}^-\circ{}^{(+2)}\mathscr{F}^+=Id$. \end{corollary} \begin{proof} Identical to the proof of \Cref{B- inverts F+}. \end{proof} \begin{remark}\label{unitarity of +2B- is trivial} As in the case of \Cref{unitarity of B- is trivial}, \Cref{+2B- inverts +2F+} implies \begin{align}\label{unitarity of +2B- formula} \|{}^{(+2)}\mathscr{B}^-(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\|_{\mathcal{E}^{T,+2}_{\Sigma^*}}^2=\|\upalpha_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T,+2}_{\mathscr{H}^+_{\geq0}}}+\|\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}. \end{align} As in the case of \Cref{RW unitary backwards}, we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to directly show \bref{unitarity of +2B- formula} without reference to the forwards map ${}^{(+2)}\mathscr{F}^+$. 
\end{remark} Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{WP+2Sigmabar}, \Cref{backwards wellposedness +2} and $T$-energy conservation, we can immediately deduce the following: \begin{corollary} The map ${}^{(+2)}\mathscr{B}^-$ can be defined on the following domains: \begin{align} {}^{(+2)}\mathscr{B}^{-}:\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\Sigma},\\ {}^{(+2)}\mathscr{B}^{-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,+2}_{\overline{\Sigma}}, \end{align} and we have \begin{align} {}^{(+2)}\mathscr{F}^{+}\circ{}^{(+2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}},\qquad {}^{(+2)}\mathscr{B}^{-}\circ{}^{(+2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,+2}_{\Sigma}},\\ {}^{(+2)}\mathscr{F}^{+}\circ{}^{(+2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}},\qquad {}^{(+2)}\mathscr{B}^{-}\circ{}^{(+2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}. \end{align} \end{corollary} This concludes the proof of \Cref{+2 future backward scattering}. 
\begin{remark}[A nondegenerate estimate near $\mathscr{H}^+$]\label{nondegenerate estimate near H+} Note that the transport hierarchy \bref{hier+} implies (integrating in the measure $du\sin\theta d\theta d\phi$) \begin{align} \begin{split} \int_{\underline{\mathscr{C}}_v\cap[u,\infty)} \frac{1}{\Omega^2} |\Omega\slashed{\nabla}_3 r^3\Omega\psi|^2&= \int_{\underline{\mathscr{C}}_v\cap[u,\infty)} \frac{\Omega^2}{r^2}|\Psi|^2\leq \underline{F}_v^T[\Psi](u,\infty),\\ \int_{\underline{\mathscr{C}}_v\cap[u,\infty)}\frac{1}{\Omega^2}|\Omega\slashed{\nabla}_3 r\Omega^2\alpha|^2&\lesssim \frac{1}{(2M)^2} \int_{\underline{\mathscr{C}}_v\cap[u,\infty)}\frac{1}{r^2}|\Omega\slashed{\nabla}_3 r^3\Omega\psi|^2\lesssim \Omega^2(u,v) \underline{F}_v^T[\Psi](u,\infty). \end{split} \end{align} These estimates hold uniformly in $v$, in contrast to \bref{RW exponential backwards near H+}. This can be traced to the sign of the first order term in \begin{align} \Omega\slashed{\nabla}_3\Omega\slashed{\nabla}_4 r\Omega^2\alpha+\frac{2(3\Omega^2-1)}{r}\Omega\slashed{\nabla}_3 r\Omega^2\alpha-\Omega^2\slashed{\Delta} r\Omega^2 \alpha+\frac{6M\Omega^2}{r^2} r\Omega^2\alpha=0 \end{align} for $r<3M$.\\ \indent Near $\mathscr{I}^+$ we can use \bref{+2 equation for radiation field} and follow the same steps leading to \bref{this} to derive for $R>\mathcal{R}_{\mathscr{I}^+}$: \begin{align}\label{this+2} \int_{\mathscr{C}_u\cap\{r>R\}}r^2|\Omega\slashed{\nabla}_4 r^5\Omega^{-2}\alpha|^2\lesssim_{u_-,M}\left[\|\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2 +\|\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}^2+\int_{\mathscr{I}^+\cap[u,u_+]}|\upalpha_{\mathscr{I}^+}|^2+|\mathring{\slashed{\nabla}}\upalpha_{\mathscr{I}^+}|^2\right].
\end{align} With these estimates we can conclude as for the Regge--Wheeler equation: \begin{corollary}\label{+2 noncompact} The results of \Cref{+2 backwards existence} hold when $\upalpha_{\mathscr{H}^+}$, $\upalpha_{\mathscr{I}^+}$ are not compactly supported, provided \begin{align}\label{noncompact estimate} \sum_{|\gamma|\leq2}\|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{H}^+}}^2+\|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}^2+\int_{\mathscr{I}^+}|\slashed{\mathcal{L}}^\gamma_{S^2}\upalpha_{\mathscr{I}^+}|^2+|\slashed{\mathcal{L}}^\gamma_{S^2}\mathring{\slashed{\nabla}}\upalpha_{\mathscr{I}^+}|^2<\infty. \end{align} \end{corollary} \end{remark} The results above can be extended to scattering from $\Sigma, \overline\Sigma$, since the region $J^+(\overline\Sigma)\cap J^-(\Sigma^*)$ can be handled locally with \Cref{WP+2Sigmabar} and \Cref{RWfcpSigma}. \begin{corollary} Let $\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{H}^+}$, $\upalpha_{\mathscr{I}^+}\in\Gamma (\mathscr{I}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}$, such that \bref{noncompact estimate} is satisfied. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^+({\Sigma})$ such that $\lim_{v\longrightarrow\infty}r^5\alpha=\upalpha_{\mathscr{I}^+}$, $2M\Omega^2\alpha\big|_{\mathscr{H}^+}=\upalpha_{\mathscr{H}^+}$. Moreover, $(\alpha\big|_{{\Sigma}},\slashed{\nabla}_{n_\Sigma}\alpha|_{{\Sigma}})\in \mathcal{E}^{T,+2}_{{\Sigma}}$ and \begin{align} \left\|\left(\alpha|_{{\Sigma}},\slashed{\nabla}_{n_\Sigma}\alpha|_{{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{{\mathscr{H}^+}}}. 
\end{align} \end{corollary} \begin{corollary} Let $\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}$ be such that $V^{-2}\upalpha_{\mathscr{H}^+}\in \Gamma({\overline{\mathscr{H}^+}})$ and let $\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^+}$. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^+(\overline{\Sigma})$ such that $\lim_{v\longrightarrow\infty}r^5\alpha=\upalpha_{\mathscr{I}^+}$, $2MV^{-2}\Omega^2\alpha\big|_{\mathscr{H}^+}=V^{-2}\upalpha_{\mathscr{H}^+}$. Moreover, $(\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$ and \begin{align} \left\|\left(\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^+}}+\left|\left|\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}}. \end{align} \end{corollary} \subsubsection{A pointwise estimate near $i^0$ in backwards scattering}\label{subsubsection 8.1.3 pointwise estimate near i0} As an aside, if $\upalpha_{\mathscr{I}^+}$ is compactly supported we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to obtain better decay for $\alpha,\psi$ towards $i^0$. We illustrate this point in what follows: \begin{proposition} Let $\alpha$ be the solution to (\ref{T+2}) arising from scattering data $\upalpha_{\mathscr{H}^+}\in \Gamma_c (\mathscr{H}^+_{\geq0}), \upalpha_{\mathscr{I}^+}\in \Gamma_c (\mathscr{I}^+_{\geq0})$ as in \Cref{+2 backwards existence}. Then $r^5\psi|_{\Sigma^*}, r^5\alpha|_{\Sigma^*}\longrightarrow 0$ as $r\longrightarrow\infty$. The same applies when $\Sigma^*$ is replaced by $\Sigma$ or $\overline\Sigma$.
\end{proposition} \begin{proof} Given that $\bm{\uppsi}_{\mathscr{I}^+}=\partial_u^2\upalpha_{\mathscr{I}^+}$ is compactly supported, we already know that $\Psi|_{\Sigma^*,r=R}\longrightarrow 0$ as $R\longrightarrow \infty$. We first work with $r^5\psi$, for which we can derive a similar estimate to (\ref{backwards estimate +2 Gronwall}): Let $u<u'<u_-$ and take $(u,v,\theta^A)\in\Sigma^*$, $v-u:=R^*$. Integrating \cref{+2 Gronwall ingredient} in $u$ on $\underline{\mathscr{C}}_v$ between $u,u'$, we obtain: \begin{align} \Big|r^5\Omega^{-1}\psi(u,v)-r^5\Omega^{-1}\psi(u',v)\Big|\leq\int_{u}^{u'}d\bar{u}\left|\Psi\right|\exp\left[\int_{u}^{u'} \frac{3\Omega^2-1}{r} d\bar{u}\right]\lesssim\left[\int_{u}^{u'}d\bar{u}\left|\Psi\right|\right]\left(\frac{r(u',v)}{r(u,v)}\right)^2. \end{align} We further compare $\int_{u}^{u'}\left|\Psi\right|d\bar{u}$ to $\int_{-\infty}^{u'} |\bm{\uppsi}_{\mathscr{I}^+}|$ via the backwards $r^p$-estimates of \Cref{backwards rp estimates}: \begin{align} \left|\int_{u}^{u'}du \left|\Psi\right|-\int_{-\infty}^{u'}du \left|\bm{\uppsi}_{\mathscr{I}^+}\right|\right|^2\leq\left[ \int_{\mathscr{D}}du dv|\Omega\slashed{\nabla}_4\Psi|\right]^2\leq \frac{1}{\sqrt{R}}\int_{\mathscr{D}}du dv \;r^2|\Omega\slashed{\nabla}_4\Psi|^2, \end{align} where $\mathscr{D}=J^+(\Sigma^*)\cap J^+(\underline{\mathscr{C}}_{v})\cap J^-(\mathscr{C}_{u'})$. As in \Cref{backwards rp estimates}, we can bound the last integral by the right hand side of (\ref{RWbackwardsdecay}). As $R\longrightarrow\infty$, $\int_{u}^{u'}du \left|\Psi\right|\longrightarrow \int_{-\infty}^{u'}du \left|\bm{\uppsi}_{\mathscr{I}^+}\right|=0$. Consequently $\left|r^5\Omega^{-1}\psi(u,v)-r^5\Omega^{-1}\psi(u',v)\right|$ decays as $R\longrightarrow\infty$ and \begin{align} \lim_{R\longrightarrow\infty} r^5\psi|_{\Sigma^*,r=R}=0.
\end{align} We can prove the same for $r^5\alpha|_{\Sigma,r=R}$ by repeating the above argument for $\int_{u_-}^{u_+} du (u-u_-)\Psi$ and noticing that $\int_{u_-}^{u_+} du (u-u_-)\bm{\uppsi}_{\mathscr{I}^+}$ also vanishes since $\bm{\uppsi}_{\mathscr{I}^+}$ is the second derivative of compactly supported fields on $\mathscr{I}^+$. \end{proof} \subsection{Future scattering for $\underline\alpha$}\label{subsection 8.2 future scattering -2} Forwards and backwards scattering for the $-2$ Teukolsky equation are worked out entirely analogously to the case of the $+2$ Teukolsky equation, using the scattering theory of the Regge--Wheeler equation and the results of \Cref{subsection 7.2 future radiation fields and fluxes}. In contrast to the $+2$ equation, the transport equation \bref{hier-} relating $\underline\alpha$ and $\underline\Psi$ is sufficient to obtain an estimate for the radiation field near $\mathscr{I}^+$ that is uniform in the future end of the support of $\underline\upalpha_{\mathscr{I}^+}$, while near $\mathscr{H}^+$, $\underline\alpha$ experiences an \textit{enhanced blueshift}, and it is necessary for scattering data to decay exponentially at a sufficiently fast rate towards the future in order to obtain a solution in backwards scattering that is smooth near $\mathscr{H}^+$. \subsubsection{Forwards scattering for $\underline\alpha$}\label{subsubsection 8.2.1 forwards scattering -2} We put together the ingredients worked out in \Cref{subsection 7.2 future radiation fields and fluxes} to construct the forwards scattering map. \begin{proof}[Proof of \Cref{-2 future forward scattering}] Let $\underline\alpha$ be the solution to \cref{T-2} on $J^+(\Sigma^*)$ arising out of a compactly supported data set $(\underline\upalpha,\underline\upalpha')$ on $\Sigma^*$ as in \Cref{WP-2Sigma*}. \Cref{WP-2Sigma*} guarantees the existence of the radiation field $\underline\upalpha_{\mathscr{H}^+}$ as in \Cref{-2 radiation alpha definition H}.
\Cref{-2 radiation ptwise decay H} says that $\underline\upalpha_{\mathscr{H}^+}\longrightarrow 0$ towards the future end of $\mathscr{H}^+$. Let $\underline\Psi$ be the solution to \cref{RW} associated to $\underline\alpha$ via (\ref{hier-}). The fact that $(\underline\Psi|_{\Sigma^*},\slashed{\nabla}_{T}\underline\Psi|_{\Sigma^*})$ are compactly supported means that the results of \Cref{-2 radiation flux on H+} apply and $\underline\upalpha_{\mathscr{H}^+}\in \mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$. Similarly, by \Cref{-2 radiation at scri}, $r\underline\alpha$ has a pointwise limit as $v\longrightarrow \infty$ which induces a smooth $\underline\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$. \Cref{-2 alpha radiation decay} implies that $\underline\upalpha_{\mathscr{I}^+}$ decays towards the future end of $\mathscr{I}^+$. As $\underline{\bm{\uppsi}}_{\mathscr{I}^+}\in \mathcal{E}^T_{\mathscr{I}^+}$, we have that \begin{align}\label{-2 term in L2 on scri+} \mathcal{A}_2(\mathcal{A}_2-2)\int_u^\infty d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}-6M\underline\upalpha_{\mathscr{I}^+}\in L^2(\mathscr{I}^+). \end{align} The fact that $\underline\alpha$ arises from data of compact support means that \bref{-2 mean is 0} applies. Evaluating the $L^2(\mathscr{I}^+)$ norm of the left hand side of \bref{-2 term in L2 on scri+}, this implies that $\underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. \end{proof} \begin{corollary}\label{-2 future forward scattering Sigma Sigmabar} Solutions to (\ref{T-2}) arising from data on ${\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,-2}$ and $\mathcal{E}_{{\mathscr{H}^+}}^{T,-2}$. Solutions to (\ref{T-2}) arising from data on $\overline{\Sigma}$ of compact support give rise to smooth radiation fields in $\mathcal{E}_{\mathscr{I}^+}^{T,-2}$ and $\mathcal{E}_{\overline{\mathscr{H}^+}}^{T,-2}$.
\end{corollary} \begin{proof} Identical to the proof of \Cref{RWfcpSigma} using \Cref{WP-2Sigmabar,,backwards wellposedness -2}. \end{proof} The proof of \Cref{-2 future forward scattering} above and \Cref{-2 future forward scattering Sigma Sigmabar} allow us to define the forwards maps ${}^{(-2)}\mathscr{F}^+$ on dense subspaces of $\mathcal{E}^{T,-2}_{\Sigma^*}$, $\mathcal{E}^{T,-2}_{\Sigma}$ and $\mathcal{E}^{T,-2}_{\overline{\Sigma}}$. \begin{defin} Let $(\underline\upalpha,\underline\upalpha')$ be a smooth data set of compact support for the $-2$ Teukolsky equation \bref{T-2} on $\Sigma^*$ as in \Cref{WP-2Sigma*}. Define the map ${}^{(-2)}\mathscr{F}^+$ by \begin{align} {}^{(-2)}\mathscr{F}^+:\Gamma_c(\Sigma^*)\times\Gamma_c(\Sigma^*)\longrightarrow \Gamma(\mathscr{H}^+_{\geq0})\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}), \end{align} where $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ are as in the proof of \Cref{-2 future forward scattering}.\\ \indent Using \Cref{-2 future forward scattering Sigma Sigmabar}, the map ${}^{(-2)}\mathscr{F}^+$ is defined analogously for data on $\Sigma, \overline{\Sigma}$: \begin{align} {}^{(-2)}\mathscr{F}^+:\Gamma_c(\Sigma)\times\Gamma_c(\Sigma)\longrightarrow \Gamma(\mathscr{H}^+)\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}),\\ {}^{(-2)}\mathscr{F}^+:\Gamma_c(\overline{\Sigma})\times\Gamma_c(\overline{\Sigma})\longrightarrow \Gamma(\overline{\mathscr{H}^+})\times\Gamma(\mathscr{I}^+), (\underline\upalpha,\underline\upalpha')\longrightarrow (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+}).
\end{align} \end{defin} \subsubsection{Backwards scattering for $\underline\alpha$}\label{subsubsection 8.2.2 backwards scattering -2} Now we construct the map ${}^{(-2)}\mathscr{B}^-$ of \Cref{-2 future backward scattering}, the inverse of ${}^{(-2)}\mathscr{F}^+$, on a dense subspace of $\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. The existence of a solution to \bref{T-2} out of compactly supported scattering data on $\mathscr{H}^+_{\geq0}, \mathscr{I}^+$ is shown in \Cref{-2 backwards existence}. Showing that this solution defines an element of $\mathcal{E}^{T,-2}_{\Sigma^*}$ is done in \Cref{-2 backwards inclusion 7/2}. \begin{proposition}\label{-2 backwards existence} For $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+_{\geq0})\cap\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}$ supported on $\mathscr{H}^+_{\geq0}\cap\{v<v_+\}$ for $v_+<\infty$, $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ supported on $\mathscr{I}^+\cap\{u<u_+\}$ for $u_+<\infty$, there exists a unique solution $\underline\alpha$ to \bref{T-2} in $J^+(\Sigma^*)$ that realises $\underline\upalpha_{\mathscr{H}^+}$ and $\underline\upalpha_{\mathscr{I}^+}$ as its radiation fields on $\mathscr{H}^+_{\geq0}$, $\mathscr{I}^+$ respectively. \end{proposition} \begin{remark} The fact that $\underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ automatically implies that $\int_{-\infty}^\infty d\bar{u}\; \underline\upalpha_{\mathscr{I}^+}=0$. \end{remark} \begin{proof} Let $\widetilde{\Sigma}$ be a spacelike surface connecting $\mathscr{H}^+$ at a finite $v_*>v_+$ to $\mathscr{I}^+$ at a finite $u_*>u_+$. Denote by $\mathscr{D}$ the region bounded by $\mathscr{H}^+_{\geq 0}\cap\{v<v_+\}$, $\widetilde{\Sigma}$, $ \mathscr{I}^+\cap[u_-,u_+]$, $\Sigma^*$ and $\mathscr{C}_{u_-}$ for $u_->-\infty$.
We define \begin{align} \underline{{\psi}}_{\mathscr{H}^+}&=\frac{2}{(2M)^2} \underline\upalpha_{\mathscr{H}^+} +\frac{1}{2M}\partial_v \underline\upalpha_{\mathscr{H}^+}, \label{backwards psibar H} \\ \underline{\bm{\uppsi}}_{\mathscr{H}^+}&=2(2M)^2\underline\upalpha_{\mathscr{H}^+}+2(2M)^3\partial_v \underline\upalpha_{\mathscr{H}^+}+(2M)^4\partial_v^2\underline\upalpha_{\mathscr{H}^+}, \label{backwards P sibar H}\\ \underline{{\psi}}_{\mathscr{I}^+}&=-\int_u^\infty d\bar{u}\; \mathcal{A}_2 \;\underline\upalpha_{\mathscr{I}^+},\label{backwards psibar I}\\ \underline{\bm{\uppsi}}_{\mathscr{I}^+}&=\int_u^\infty d\bar{u}\; (\bar{u}-u) \left[\mathcal{A}_2(\mathcal{A}_2-2)\underline\upalpha_{\mathscr{I}^+}+6M\partial_u \underline\upalpha_{\mathscr{I}^+}\right]. \label{backwards P sibar I} \end{align} We can find a unique solution $\underline\Psi$ to \bref{RW} with radiation fields $\underline{\bm{\uppsi}}_{\mathscr{I}^+}$, $\underline{\bm{\uppsi}}_{\mathscr{H}^+}$. Let \begin{align} r^3\Omega\underline\psi=(2M)^3\underline{{\psi}}_{\mathscr{I}^+}-\int_v^\infty d\bar{v}\;\frac{\Omega^2}{r^2}\underline\Psi,\qquad\qquad r\Omega^2\underline\alpha=\underline\upalpha_{\mathscr{I}^+}-\int_v^\infty d\bar{v}\; r\Omega^3\underline\psi. \end{align} Then $\underline\psi, \underline\alpha$ satisfy: \begin{align} \underline\Psi=\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha.
\end{align} Moreover, we can see that $\lim_{v\longrightarrow\infty}r^3\Omega\underline\psi(u,v,\theta^A)=\underline{{\psi}}_{\mathscr{I}^+}(u,\theta^A)$ uniformly in $u$, as \begin{align} \int_{S^2}|r^3\Omega\underline\psi-(2M)^3\underline{{\psi}}_{\mathscr{I}^+}|^2=\int_{S^2}\left[\int_v^\infty \frac{\Omega^2}{r^2}\underline\Psi d\bar{v}\right]^2\lesssim \frac{1}{r}F^T_u[\underline\Psi](v,\infty), \end{align} and similarly $\lim_{v\longrightarrow\infty}r\Omega^2\underline\alpha(u,v,\theta^A)=\underline\upalpha_{\mathscr{I}^+}(u,\theta^A)$ uniformly in $u$. We can repeat the same for $\slashed{\nabla}_T,\mathring{\slashed{\nabla}}$-derivatives of $r\Omega^2\underline\alpha, r^3\Omega\underline\psi$, which immediately implies that $\partial_u r^3\Omega\underline\psi\longrightarrow \partial_u \underline{{\psi}}_{\mathscr{I}^+}$, $\partial_u r\Omega^2\underline\alpha\longrightarrow \partial_u \underline\upalpha_{\mathscr{I}^+}$ as $v\longrightarrow\infty$. \\ The commutation relation \bref{commutation relation 2} implies \begin{align} \left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\mathcal{T}^{-2}r\Omega^2\underline\alpha=0. \end{align} We find $\mathcal{T}^{-2}r\Omega^2\underline\alpha$ and $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \mathcal{T}^{-2}r\Omega^2\underline\alpha$: \begin{align} \mathcal{T}^{-2}r\Omega^2\underline\alpha&=\Omega\slashed{\nabla}_3 r^3\Omega\underline\psi-\frac{3\Omega^2-1}{r}r^3\Omega\underline\psi-\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\underline\alpha,\\ \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\mathcal{T}^{-2}r\Omega^2\underline\alpha&=\Omega\slashed{\nabla}_3\underline\Psi-\left[\mathcal{A}_2-(3\Omega^2-1)\right]r^3\Omega\underline\psi-6M r\Omega^2\underline\alpha. 
\end{align} It is not hard to see from \bref{backwards psibar H}, \bref{backwards psibar I}, \bref{backwards P sibar H}, \bref{backwards P sibar I}, that in the limit $v\longrightarrow\infty$, $\mathcal{T}^{-2}r\Omega^2\underline\alpha$ and $\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4 \mathcal{T}^{-2}r\Omega^2\underline\alpha$ vanish. This implies that $\underline\alpha$ satisfies $\mathcal{T}^{-2}r\Omega^2{\underline\alpha}=0$ on $\mathscr{D}$. It is also clear that $\Omega^{-2}\underline\alpha|_{\mathscr{H}^+}=\underline\upalpha_{\mathscr{H}^+}$. Finally, we can repeat the above to extend $\underline\alpha$ to $J^+(\Sigma^*)\cap\{u\geq \tilde{u}\}$ for arbitrarily small $\tilde{u}$. \end{proof} \indent Note that energy conservation translates to the following $r$-weighted estimates that are uniform in $u$ as $u\longrightarrow -\infty$: \begin{align} \int_{{\mathscr{C}}_u} \frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2&\leq {F}^T_u[\underline\Psi](v,\infty),\label{334}\\ \int_{{\mathscr{C}}_u} \frac{r^2}{\Omega^2}|\Omega\slashed{\nabla}_4 r\Omega^2\underline\alpha|^2&\lesssim \int_{{\mathscr{C}}_u} |\Omega\slashed{\nabla}_4 r^3\Omega\underline\psi|^2 \lesssim \frac{1}{r^2}{F}^T_u[\underline\Psi](v,\infty).\label{335} \end{align} This can be traced to the good sign of the first order term in \cref{T-2} near $\mathscr{I}^+$ when evolving backwards, and similar estimates can in fact be derived directly from \cref{T-2}. We can deduce \begin{proposition}\label{-2 backwards inclusion 7/2} Let $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{-2 backwards existence}. Let $\underline\alpha$ be the corresponding solution to \cref{T-2}. 
Then we have that $(\Omega^{-2}\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*})\in\mathcal{E}^{T,-2}_{\Sigma^*}$. \end{proposition} \begin{proof} Using \bref{334}, \bref{335}, the argument of \Cref{+2 backwards inclusion 7/2} shows that $\lim_{r\longrightarrow\infty}\left|r^{\frac{7}{2}}\underline\psi|_{\Sigma^*}\right|=\lim_{r\longrightarrow\infty}\left|r^{\frac{7}{2}}\underline\alpha|_{\Sigma^*}\right|=0$; repeating the remainder of that proof then yields the result. \end{proof} \begin{defin}\label{-2 definition of B-} Let $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{-2 backwards existence}. Define the map ${}^{(-2)}\mathscr{B}^-$ by \begin{align} {}^{(-2)}\mathscr{B}^-:\Gamma_c(\mathscr{H}^+_{\geq0})\times\Gamma_c(\mathscr{I}^+)\longrightarrow\Gamma(\Sigma^*)\times\Gamma(\Sigma^*), (\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\longrightarrow (\Omega^{-2}\underline\alpha|_{\Sigma^*},\slashed{\nabla}_{n_{\Sigma^*}}\Omega^{-2}\underline\alpha|_{\Sigma^*}), \end{align} where $\underline\alpha$ is the solution to \bref{T-2} arising from scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ as in \Cref{-2 backwards existence}. \end{defin} \begin{corollary}\label{-2B- inverts -2F+} The maps ${}^{(-2)}\mathscr{F}^+$, ${}^{(-2)}\mathscr{B}^-$ extend uniquely to unitary Hilbert space isomorphisms on their respective domains, such that ${}^{(-2)}\mathscr{F}^+\circ{}^{(-2)}\mathscr{B}^-=Id$, ${}^{(-2)}\mathscr{B}^-\circ{}^{(-2)}\mathscr{F}^+=Id$.
\end{corollary} \begin{remark}\label{unitarity of -2B- is trivial} As in the case of \Cref{unitarity of B- is trivial,,unitarity of +2B- is trivial}, \Cref{-2B- inverts -2F+} implies \begin{align}\label{unitarity of -2B- formula} \|{}^{(-2)}\mathscr{B}^-(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})\|_{\mathcal{E}^{T,-2}_{\Sigma^*}}^2=\|\underline\upalpha_{\mathscr{H}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{H}^+_{\geq0}}}+\|\underline\upalpha_{\mathscr{I}^+}\|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}. \end{align} As in the case of \Cref{RW unitary backwards}, we can use the backwards $r^p$-estimates of \Cref{backwards rp estimates} to directly show \bref{unitarity of -2B- formula} without reference to the forwards map ${}^{(-2)}\mathscr{F}^+$. \end{remark} Since the region $J^+(\overline{\Sigma})\cap J^-(\Sigma^*)$ can be handled locally via \Cref{WP-2Sigmabar}, \Cref{backwards wellposedness -2} and $T$-energy conservation, we can immediately deduce the following: \begin{corollary} The map ${}^{(-2)}\mathscr{B}^-$ can be defined on the following domains: \begin{align} {}^{(-2)}\mathscr{B}^{-}:\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\Sigma},\\ {}^{(-2)}\mathscr{B}^{-}:\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\Sigma}}, \end{align} and we have \begin{align} {}^{(-2)}\mathscr{F}^{+}\circ{}^{(-2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}},\qquad {}^{(-2)}\mathscr{B}^{-}\circ{}^{(-2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,-2}_{\Sigma}},\\ {}^{(-2)}\mathscr{F}^{+}\circ{}^{(-2)}\mathscr{B}^{-}=Id_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}},\qquad {}^{(-2)}\mathscr{B}^{-}\circ{}^{(-2)}\mathscr{F}^{+}=Id_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}. 
\end{align} \end{corollary} This concludes the proof of \Cref{-2 future backward scattering}. \subsubsection{Non-compact future scattering data and the blueshift effect}\label{subsubsection 8.3 blueshift -2} \indent In contrast to \bref{334}, \bref{335} (and to the estimates of \Cref{nondegenerate estimate near H+}), estimates for $\Omega^{-2}\underline\alpha$ near $\mathscr{H}^+$ in the backwards direction suffer from an enhanced blueshift, which can be readily seen in the transport equations \bref{hier-}: \begin{align} \Omega\slashed{\nabla}_4 r^3\Omega^{-1}\underline\psi+\frac{2M}{r^2} r^3\Omega^{-1}\underline\psi=\frac{\underline\Psi}{r^2}. \end{align} For $r<\mathcal{R}_{\mathscr{H}^+}<3M$, we can derive \begin{align}\label{this 3} \begin{split} &\int_{S^2_{u,v}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\lesssim \underbrace{\int_{S^2_{u,v_+}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2}_{=0}\\ &+\frac{1}{M}\int_v^{v_+}d\bar{v}\int_{S^2_{u,\bar{v}}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+\frac{1}{(2M)^2}\int_{v}^{v_+}d\bar{v}\int_{S^2_{u,\bar{v}}}|\underline\Psi-\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2. \end{split} \end{align} Gr\"onwall's inequality and \bref{ptwise horizon} imply \begin{align} \begin{split} \int_{S^2_{u,v}}|r^3\Omega^{-1}\underline\psi - (2M)^3\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\lesssim_{v_+} e^{\frac{1}{M}(v_+-v)}\left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right].
\end{split} \end{align} The equation \begin{align} \Omega\slashed{\nabla}_4 r\Omega^{-2}\underline\alpha +\frac{4M}{r^2}r\Omega^{-2}\underline\alpha=r\Omega^{-1}\underline\psi \end{align} implies a similar estimate with a worse exponential factor: \begin{align} \begin{split} \int_{S^2_{u,v}}|r\Omega^{-2}\underline\alpha - 2M\underline\upalpha_{\mathscr{H}^+}|^2\lesssim_{v_+} e^{\frac{2}{M}(v_+-v)}\left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right]. \end{split} \end{align} \indent We can conclude that the statement of the backwards existence theorem continues to hold when scattering data are not compactly supported towards the future, but the resulting solution will not in general be smooth near $\mathscr{H}^+$ unless the data decay exponentially; this follows by applying the following lemma to \bref{this 3}: \begin{lemma} Let $f(v)>0$ and assume that \begin{align} f(v)\leq \Lambda \int_v^{v_+} f(\bar{v})\, d\bar{v} + e^{-Pv} \end{align} for all $v<v_+$. Then if $P>\Lambda$ we have \begin{align} f(v)< \frac{P}{P-\Lambda}e^{-Pv}. \end{align} \end{lemma} \begin{proof} Iterating the assumed inequality and estimating $\int_v^{v_+}e^{-P\bar{v}}d\bar{v}<P^{-1}e^{-Pv}$ at each step gives $f(v)<e^{-Pv}\sum_{n\geq0}\left(\Lambda/P\right)^n$, which is the claimed bound. \end{proof} With this, we see that if $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$, then we are guaranteed that \begin{align} \int_{S^2_{u,v}}|r\Omega^{-2}\underline\alpha - 2M\underline\upalpha_{\mathscr{H}^+}|^2\lesssim \left[\|\underline{\bm{\uppsi}}_{\mathscr{I}^+}\|_{\mathcal{E}^T_{\mathscr{I}^+}}^2+\|\underline{\bm{\uppsi}}_{\mathscr{H}^+}\|_{\mathcal{E}^T_{\mathscr{H}^+}}^2+\int_{\mathscr{H}^+\cap[v,v_+]}|\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2+|\mathring{\slashed{\nabla}}\underline{\bm{\uppsi}}_{\mathscr{H}^+}|^2\right].
\end{align} \begin{corollary}\label{-2 noncompact} Let $\underline\upalpha_{\mathscr{H}^+}$ be a smooth symmetric traceless $S^2_{\infty,v}$ 2-tensor field with domain $\mathscr{H}^+$, $\underline\upalpha_{\mathscr{I}^+}$ a smooth symmetric traceless $S^2_{u,\infty}$ 2-tensor field with domain $\mathscr{I}^+$. Then there exists a unique $\underline\alpha$ that is smooth on the interior of $J^+(\Sigma^*)$, satisfies \bref{T-2} and realises $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ as its radiation fields. If $\underline\upalpha_{\mathscr{H}^+}, \underline\upalpha_{\mathscr{I}^+}$ decay exponentially towards the future at a rate faster than $\frac{1}{M}$ then $\Omega^{-2}\underline\alpha$ is smooth up to and including $\mathscr{H}^+$. \end{corollary} Since the region $J^+(\overline\Sigma)\cap J^-(\Sigma^*)$ can be handled locally with \Cref{WP-2Sigmabar} and \Cref{RWfcpSigma}, the results above can be extended to scattering from $\Sigma, \overline\Sigma$. \begin{corollary} Let $\underline\upalpha_{\mathscr{H}^+}\in\Gamma(\mathscr{H}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}\in\Gamma (\mathscr{I}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. Assume $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^+({\Sigma})$ such that $\lim_{v\longrightarrow\infty}r\underline\alpha=\underline{\upalpha}_{\mathscr{I}^+}$, $2M\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+}=\underline\upalpha_{\mathscr{H}^+}$.
Moreover, $(\underline\alpha\big|_{{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{{\Sigma}})\in \mathcal{E}^{T,-2}_{{\Sigma}}$ and \begin{align} \left\|\left(\underline\alpha|_{{\Sigma}},\slashed{\nabla}_{n_{\Sigma}}\underline\alpha|_{{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{{\mathscr{H}^+}}}. \end{align} \end{corollary} \begin{corollary} Let $\underline\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in \Gamma({\overline{\mathscr{H}^+}})$ and let $\underline\upalpha_{\mathscr{I}^+}\in\Gamma(\mathscr{I}^+)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^+}$. Assume $\underline\upalpha_{\mathscr{H}^+}$, $\underline\upalpha_{\mathscr{I}^+}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^+(\overline{\Sigma})$ such that $\lim_{v\longrightarrow\infty}r\underline\alpha=\underline\upalpha_{\mathscr{I}^+}$, $V^{2}\Omega^{-2}\underline\alpha\big|_{\mathscr{H}^+}=V^{2}\underline\upalpha_{\mathscr{H}^+}$. Moreover, $(\underline\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ and \begin{align} \left\|\left(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\underline\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}+\left|\left|\underline\upalpha_{\mathscr{H}^+}\right|\right|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}.
\end{align} \end{corollary} \subsection{Past scattering for $\alpha, \underline\alpha$}\label{subsection 8.3 past scattering +2-2} Taking into account \Cref{time inversion}, \Cref{+2 past forward scattering,,-2 past forward scattering} are immediate. We state the results regarding scattering on $J^-(\overline{\Sigma})$. \begin{corollary}\label{past scattering of +2} Given smooth data of compact support $(\upalpha,\upalpha')\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$, there exists a unique solution $\alpha$ to the +2 Teukolsky equation \bref{T+2} on $J^-(\overline{\Sigma})$ that induces smooth radiation fields \begin{itemize} \item $\upalpha_{\mathscr{I}^-}\in \mathcal{E}^{T,+2}_{\mathscr{I}^-}$ given by $\upalpha_{\mathscr{I}^-}(v,\theta^A)=\lim_{u\longrightarrow -\infty} r\alpha(u,v,\theta^A)$, \item $\upalpha_{\mathscr{H}^-}\in \mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ given by $U^{2}\upalpha_{\mathscr{H}^-}=2MU^2\Omega^{-2}\alpha|_{\mathscr{H}^-}$, \end{itemize} such that \begin{align}\label{109091} \left\|\left(\alpha|_{\overline{\Sigma}},\slashed{\nabla}_T\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}=\left|\left|\upalpha_{\mathscr{I}^-}\right|\right|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}+\left|\left|\upalpha_{\mathscr{H}^-}\right|\right|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}}. \end{align} Let $\upalpha_{\mathscr{H}^-}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}$ be such that $U^{2}\upalpha_{\mathscr{H}^-}\in \Gamma({\overline{\mathscr{H}^-}})$ and let $\upalpha_{\mathscr{I}^-}\in\Gamma(\mathscr{I}^-)\cap\;\mathcal{E}^{T,+2}_{\mathscr{I}^-}$. Assume $\upalpha_{\mathscr{H}^-}$, $\upalpha_{\mathscr{I}^-}$ decay exponentially at a rate faster than $\frac{1}{M}$. Then there exists a unique solution $\alpha$ to \cref{T+2} in $J^-(\overline{\Sigma})$ such that $\lim_{u\longrightarrow-\infty}r\alpha=\upalpha_{\mathscr{I}^-}$, $2MU^{2}\Omega^{-2}\alpha\big|_{\mathscr{H}^-}=U^{2}\upalpha_{\mathscr{H}^-}$.
Moreover, $(\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,+2}_{\overline{\Sigma}}$ and \bref{109091} holds.\\ Therefore, as in the case of ${}^{(+2)}\mathscr{F}^+,{}^{(+2)}\mathscr{B}^-$, we can define the unitary isomorphisms \begin{align} {}^{(+2)}\mathscr{F}^-:\mathcal{E}^{T,+2}_{\overline\Sigma}\longrightarrow\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-},\qquad\qquad {}^{(+2)}\mathscr{B}^+:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,+2}_{\overline\Sigma}, \end{align} with \begin{align} {}^{(+2)}\mathscr{F}^-\circ {}^{(+2)}\mathscr{B}^+=Id_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus \mathcal{E}^{T,+2}_{\mathscr{I}^-}},\qquad\qquad{}^{(+2)}\mathscr{B}^+\circ{}^{(+2)}\mathscr{F}^-=Id_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}. \end{align} An identical statement holds with $\mathcal{E}^{T,+2}_{{\Sigma}}, \mathcal{E}^{T,+2}_{{\mathscr{H}^-}}$ instead. \end{corollary} \begin{corollary}\label{past scattering of -2} Given smooth data of compact support $(\underline\upalpha,\underline\upalpha')\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$, there exists a unique solution $\underline\alpha$ to the -2 Teukolsky equation \bref{T-2} on $J^-(\overline\Sigma)$ that induces radiation fields \begin{itemize} \item $\underline\upalpha_{\mathscr{I}^-}\in \mathcal{E}^{T,-2}_{\mathscr{I}^-}$ given by $\underline\upalpha_{\mathscr{I}^-}(v,\theta^A)=\lim_{u\longrightarrow -\infty} r^5\underline\alpha(u,v,\theta^A)$, \item $\underline\upalpha_{{\mathscr{H}^-}}\in \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ given by $U^{-2}\underline\upalpha_{\mathscr{H}^-}=2MU^{-2}\Omega^{2}\underline\alpha|_{\mathscr{H}^-}$,
\end{itemize} such that \begin{align}\label{190190190} \left\|\left(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}}\right)\right\|^2_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}=\left|\left|\underline\upalpha_{\mathscr{I}^-}\right|\right|^2_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}+\left|\left|\underline\upalpha_{\mathscr{H}^-}\right|\right|^2_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}. \end{align} Let $\underline\upalpha_{\mathscr{H}^-}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in \Gamma({\overline{\mathscr{H}^-}})$ and let $\underline\upalpha_{\mathscr{I}^-}\in\Gamma(\mathscr{I}^-)\cap\;\mathcal{E}^{T,-2}_{\mathscr{I}^-}$. Then there exists a unique solution $\underline\alpha$ to \cref{T-2} in $J^-(\overline{\Sigma})$ such that $\lim_{u\longrightarrow-\infty}r^5\underline\alpha=\underline\upalpha_{\mathscr{I}^-}$, $2MU^{-2}\Omega^2\underline\alpha\big|_{\mathscr{H}^-}=U^{-2}\underline\upalpha_{\mathscr{H}^-}$. Moreover, $(\underline\alpha\big|_{\overline{\Sigma}},\slashed{\nabla}_T\underline\alpha|_{\overline{\Sigma}})\in \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ and \bref{190190190} is satisfied. An identical statement holds with $\mathcal{E}^{T,-2}_{{\Sigma}}, \mathcal{E}^{T,-2}_{{\mathscr{H}^-}}$ instead. \end{corollary} Finally, note that using \Cref{past scattering of +2,,past scattering of -2}, the proofs of \Cref{scatteringthm+2} and \Cref{scatteringthm-2} are immediate. \numberwithin{lemma}{section} \numberwithin{proposition}{section} \numberwithin{corollary}{section} \numberwithin{remark}{section} \section{Teukolsky--Starobinsky Correspondence}\label{section 9 TS correspondence} We now turn to the proof of \Cref{Theorem 3} of the introduction, whose detailed statement is contained in \Cref{Theorem 3 detailed statement}. We start by stating in Section 9.1 some useful algebraic relations satisfied by the constraints \bref{eq:227intro1}, \bref{eq:228intro1}.
We then study the constraints on scattering data in Section 9.2 to construct the maps $\mathcal{TS}_{\mathscr{H}^\pm}, \mathcal{TS}_{\mathscr{I}^\pm}$, and in Section 9.3 we use the results of Section 9.1 and Section 9.2 to show that the constraints are propagated by solutions arising from scattering data consistent with the constraints, culminating in the proof of \Cref{Corollary 1} of the introduction in Section 9.4. \subsection{Some algebraic properties of the Teukolsky--Starobinsky identities}\label{subsection 9.1 algebraic properties of TS} \indent Let $\alpha$ be a solution to the $+2$ Teukolsky equation and let $\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^2r\Omega^2\alpha$; then the commutation relation \bref{commutation relation} implies that \begin{align}\label{TS fact 1} &\mathcal{T}^{-2}\left[ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Psi\right]=0. \end{align} Similarly, if $\underline\alpha$ satisfies the $-2$ Teukolsky equation and $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$, \bref{commutation relation 2} implies \begin{align}\label{TS fact 2} &\mathcal{T}^{+2}\left[ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\underline\Psi\right]=0.
\end{align} Note that were $(\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\alpha},\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha})$ to belong to a solution to the full system of equations (\ref{start of full system})-(\ref{Bianchi 0*}) then in fact we would have equations \bref{eq:TS1}, \bref{eq:TS2}: \begin{align} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\stackrel{\mbox{\scalebox{0.4}{(1)}}}\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\alpha}=0, \label{eq:227}\\ \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\stackrel{\mbox{\scalebox{0.4}{(1)}}}{\underline\Psi}-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]r\Omega^2\stackrel{\mbox{\scalebox{0.4}{(1)}}}\alpha=0.\label{eq:228} \end{align} \indent Combining \bref{TS fact 1} and \bref{TS fact 2} with the fact that $-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2,\slashed{\nabla}_T$ commute with both (\ref{T+2}) and (\ref{T-2}) leads to the following: denote by $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ the expression on the left hand side of (\ref{eq:227}) acting on $\alpha, \underline\alpha$, such that the constraint becomes \begin{align}\label{TS constraint -} \mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]:=\frac{1}{r^3}\Omega\slashed{\nabla}_3 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 
{\underline\alpha}-6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]{\underline\alpha}=0. \end{align} Similarly denote by $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$ the expression on the left hand side of (\ref{eq:228}) so that the constraint becomes \begin{align}\label{TS constraint +} \mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]:=\frac{1}{r^3}\Omega\slashed{\nabla}_4 \frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\underline\Psi-2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2 {\alpha}+6M\left[\Omega\slashed{\nabla}_4+\Omega\slashed{\nabla}_3\right]{\alpha}=0. \end{align} \begin{lemma}\label{propagation lemma} For $\alpha$ satisfying the $+2$ Teukolsky equation (\ref{T+2}) and $\underline\alpha$ satisfying the $-2$ equation (\ref{T-2}), $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$ also satisfies the $+2$ Teukolsky equation (\ref{T+2}) and $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ satisfies the $-2$ equation \bref{T-2}. \end{lemma} This implies that if we impose both constraints \bref{eq:227}, \bref{eq:228} on initial or scattering data for both the $+2$ and $-2$ Teukolsky equations, then the constraints will be propagated by the solutions in evolution.
More specifically, if we have scattering data for $\alpha, \underline\alpha$ such that the \textit{radiation fields} belonging to the quantities $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]$, $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]$ (in the sense of the definitions stated in \Cref{+2 radiation} and \Cref{subsection 7.2 future radiation fields and fluxes}) vanish, then we must have that $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$, $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ by \Cref{+2 future backward scattering} and \Cref{-2 future backward scattering}.\\ \indent We would like to know the extent to which data for $\alpha$, $\underline\alpha$ are constrained by \cref{TS constraint -} and \cref{TS constraint +}. Answering this for data on a Cauchy surface is complicated, but if we restrict to data consistent with the scattering theory developed so far in this paper, then we can alternatively attempt to address this question for scattering data on $\mathscr{I}^+, \mathscr{H}^+$.
This is the subject of the remainder of this section.\\ \indent To start with, we can show the following by a straightforward computation \begin{lemma}\label{not independent} For $\alpha$ satisfying the $+2$ Teukolsky equation (\ref{T+2}) and $\underline\alpha$ satisfying the $-2$ Teukolsky equation (\ref{T-2}) \begin{align}\label{ ts- to parabolic ts+} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_4\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^3 r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=-\left[2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2+12M\slashed{\nabla}_T\right]r\Omega^2\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha], \end{align} \begin{align}\label{ ts+ to parabolic ts-} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3\right)^3 r\Omega^2\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\left[2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2-12M\slashed{\nabla}_T\right]r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]. \end{align} In other terms, \begin{align} \mathop{\mathbb{TS}^+}\left[\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha],-\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\right]=0,\qquad\qquad\qquad\mathop{\mathbb{TS}^-}\left[-\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha],\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\right]=0, \end{align} regardless of whether or not the constraints $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0,\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ are satisfied. \end{lemma} \Cref{not independent} implies that \Cref{eq:227}, \Cref{eq:228} are not independent. 
We will use \Cref{not independent} in \Cref{subsection 9.3 propagating the identities} to show that imposing only one of the constraints on $\mathscr{I}^+$ and imposing only the other constraint on $\overline{\mathscr{H}^+}$ is enough to propagate the constraints on the solutions $\alpha, \underline\alpha$. \subsection{Inverting the identities on $\mathscr{I}^+, \overline{\mathscr{H}^+}$}\label{subsection 9.2 inverting the identities} \subsubsection*{Constraint \bref{eq:228} at $\mathscr{I}^+$} We know that there are dense subspaces of $\mathcal{E}^{T,+2}_{\overline{\Sigma}}, \mathcal{E}^{T,-2}_{\overline{\Sigma}}$ consisting of smooth data for \cref{T+2}, \cref{T-2} such that \begin{align} \lim_{v\longrightarrow\infty} r\Omega^2\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=\partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline\upalpha_{\mathscr{I}^+}, \end{align} so we consider \begin{align}\label{constraint null infinity} \partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline\upalpha_{\mathscr{I}^+}=0 \end{align} as a constraint on scattering data $\underline\upalpha_{\mathscr{I}^+}, \upalpha_{\mathscr{I}^+}$ at $\mathscr{I}^+$. We now show the following: if $\upalpha_{\mathscr{I}^+}$ is smooth and compactly supported, then there is a unique $\underline\upalpha_{\mathscr{I}^+}$ that decays towards $\mathscr{I}^+_\pm$ and satisfies \bref{constraint null infinity}: \begin{proposition}\label{alphabar out of alpha on scri} Let $\upalpha_{\mathscr{I}^+}\in \Gamma_c(\mathscr{I}^+)$.
Then there exists a unique smooth $\underline\upalpha_{\mathscr{I}^+}$ such that \begin{align}\label{scalarise this} \partial_u^4\upalpha_{\mathscr{I}^+}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}\underline\upalpha_{\mathscr{I}^+}-6M\partial_u\underline\upalpha_{\mathscr{I}^+}=0, \end{align} with $\underline\upalpha_{\mathscr{I}^+}\longrightarrow0$ as $u\longrightarrow \pm \infty$. \end{proposition} \begin{proof} To make sense of (\ref{scalarise this}) we scalarise it: we associate to $\underline\upalpha_{\mathscr{I}^+}$ scalar fields $(\underline{f},\underline{g})$ on $\mathscr{M}$ with vanishing $\ell=0,1$ modes such that $\underline\upalpha_{\mathscr{I}^+}=r^2 \slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})$. Similarly, we associate to $\upalpha_{\mathscr{I}^+}$ the two fields $(f,g)$ such that $\upalpha_{\mathscr{I}^+}=r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(f,g)$. Define further $F=\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3)^3f$ and $G=\frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_3)^3g$. In the absence of $\ell=0,1$ modes, $r^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1$ is injective and thus (\ref{scalarise this}) becomes: \begin{align}\label{eq:231} \begin{split} (F,G)&=2r^4\bar{\slashed{\mathcal{D}}_1}\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})+6M\Omega\slashed{\nabla}_3(\underline{f},\underline{g}) \\&=2r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1 (\underline{f},-\underline{g})+6M\Omega\slashed{\nabla}_3(\underline{f},\underline{g}).
\end{split} \end{align} Note that $r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1 = \frac{1}{2} r^2\slashed{\mathcal{D}}_1[-\mathring{\slashed{\Delta}}-1]\slashed{\mathcal{D}}^*_1$, while $r^2\slashed{\mathcal{D}}^*_1\slashed{\mathcal{D}}_1=-\mathring{\slashed{\Delta}}+1$ and $r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1=-\mathring{\slashed{\Delta}}$, so $r^4\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}_2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1=\frac{1}{2}r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1\left\{r^2\slashed{\mathcal{D}}_1\slashed{\mathcal{D}}^*_1-2\right\}=\frac{1}{2}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)$. Equations (\ref{eq:231}) become \begin{align}\label{eq:232} \partial_u\underline{f}-\frac{1}{6M}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\underline{f}=F, \end{align} \begin{align}\label{eq:233} \partial_u \underline{g}+\frac{1}{6M}\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\underline{g}=G. \end{align} Equations (\ref{eq:232}) and (\ref{eq:233}) are fourth-order parabolic equations which are well-behaved in opposite directions in time; a unique smooth solution exists for (\ref{eq:232}) when evolving in the direction of increasing $u$, whereas (\ref{eq:233}) admits a unique smooth solution in the direction of decreasing $u$. Therefore, assuming the boundary condition $\underline{f}\longrightarrow 0$ as $u\longrightarrow -\infty$, we have a unique solution $\underline{f}$ to (\ref{eq:232}), and this solution decays as $u\longrightarrow\infty$. Similarly, there is a unique smooth $\underline{g}$ solving (\ref{eq:233}) with $\underline{g}\longrightarrow0$ as $u\longrightarrow \pm \infty$. Thus there is a unique smooth $\underline\upalpha_{\mathscr{I}^+}$ that solves (\ref{scalarise this}) and decays towards $\mathscr{I}^+_\pm$.
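For the reader's orientation, the opposite directions of well-posedness can be read off mode by mode via a standard computation: on a spherical harmonic $Y_{\ell m}$ with $\ell\geq2$ (the $\ell=0,1$ modes vanish here) we have $-\mathring{\slashed{\Delta}}Y_{\ell m}=\ell(\ell+1)Y_{\ell m}$, and therefore
\begin{align}
\mathring{\slashed{\Delta}}\big(\mathring{\slashed{\Delta}}+2\big)Y_{\ell m}=\ell(\ell+1)\left[\ell(\ell+1)-2\right]Y_{\ell m}=(\ell-1)\ell(\ell+1)(\ell+2)\,Y_{\ell m}.
\end{align}
On each mode, (\ref{eq:232}) and (\ref{eq:233}) reduce to ODEs in $u$ whose homogeneous solutions are $e^{\pm(\ell-1)\ell(\ell+1)(\ell+2)u/6M}$, with opposite signs for the two equations; since these rates are unbounded as $\ell\longrightarrow\infty$, each equation behaves like a heat equation that is smoothing in one direction of $u$ and ill-posed in the other, and the two directions are opposite, as claimed above.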
\end{proof} \begin{corollary}\label{iterated integrals} Let $\upalpha_{\mathscr{I}^+},\underline\upalpha_{\mathscr{I}^+}$ be as in \Cref{alphabar out of alpha on scri}, then \begin{align}\label{-2 further constraint} \begin{split} \int_{-\infty}^\infty \underline\upalpha_{\mathscr{I}^+}\,du=0. \end{split} \end{align} \end{corollary} \begin{proof} \cref{scalarise this} and the decay of $\upalpha_{\mathscr{I}^+},\underline\upalpha_{\mathscr{I}^+}$ imply \begin{align}\label{energy r ts-} \partial_u^3\upalpha_{\mathscr{I}^+}=2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2\int_{-\infty}^{u}du\; \underline\upalpha_{\mathscr{I}^+}+6M\underline\upalpha_{\mathscr{I}^+}. \end{align} Taking $u\longrightarrow \infty$ gives $2r^4\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2\int_{-\infty}^{\infty}du\; \underline\upalpha_{\mathscr{I}^+}=0$, which implies $\int_{-\infty}^{\infty}du\; \underline\upalpha_{\mathscr{I}^+}=0$ by the injectivity of $\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1\overline{\slashed{\mathcal{D}}}_1\slashed{\mathcal{D}}_2$ in the absence of $\ell=0,1$ modes, as in \Cref{alphabar out of alpha on scri}. \end{proof} Conversely, we have the following proposition, which follows immediately by inspecting \bref{constraint null infinity}: \begin{proposition}\label{alpha out of alphabar on scri} Given $\underline\upalpha_{\mathscr{I}^+}\in \Gamma_c(\mathscr{I}^+)$, there exists a unique $\upalpha_{\mathscr{I}^+}$ that is smooth and supported away from $\mathscr{I}^+_-$, such that \bref{constraint null infinity} is satisfied by $\upalpha_{\mathscr{I}^+}, \underline\upalpha_{\mathscr{I}^+}$. Furthermore, if $\int_{-\infty}^\infty du\; \underline\upalpha_{\mathscr{I}^+}=0$ then $\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$.
\end{proposition}\label{TS-2 on I+} This completes the construction of the map $\mathcal{TS}_{\mathscr{I}^+}$: \begin{corollary}\label{TS scri +} \Cref{alphabar out of alpha on scri} defines the map \begin{align} \mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{I}^+}. \end{align} The map $\mathcal{TS}_{\mathscr{I}^+}$ is surjective onto a dense subspace of $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ by \Cref{alpha out of alphabar on scri}. Therefore it extends to a unitary Hilbert-space isomorphism. \end{corollary} \begin{remark} The argument leading to \cref{iterated integrals} can be used to show that \begin{align} \begin{split} \int_{-\infty}^\infty \int_{-\infty}^{u_2} &\underline\upalpha_{\mathscr{I}^+}\,du_1\, du_2=\int_{-\infty}^\infty\int_{-\infty}^{u_3}\int_{-\infty}^{u_2}\underline\upalpha_{\mathscr{I}^+}\,du_1\, du_2\, du_3\\ &=\int_{-\infty}^\infty\int_{-\infty}^{u_4}\int_{-\infty}^{u_3}\int_{-\infty}^{u_2}\underline\upalpha_{\mathscr{I}^+}\,du_1\, du_2\, du_3\, du_4=0. \end{split} \end{align} \end{remark} \subsubsection*{Constraint \bref{eq:227} at $\overline{\mathscr{H}^+}$} \indent Similar considerations apply to the constraint $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$, which in Kruskal coordinates takes the form \begin{align}\label{constraint horizon} \partial_V^4 V^2\underline\upalpha_{\mathscr{H}^+}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3V\partial_V-6\Big]V^{-2}\upalpha_{\mathscr{H}^+}.
\end{align} \begin{proposition}\label{alphabar out of alpha on H} Given $\upalpha_{\mathscr{H}^+}$ such that $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$, solving \bref{constraint horizon} as a transport equation for $V^2\underline\upalpha_{\mathscr{H}^+}$ with decay conditions towards $\mathscr{H}^+_+$: \begin{align} V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V^2 V^2\underline\upalpha_{\mathscr{H}^+}, \partial_V^3 V^2\underline\upalpha_{\mathscr{H}^+} \longrightarrow 0 \text{ as } V \longrightarrow \infty, \end{align} gives a unique solution such that $V^2\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$ and $\underline\upalpha_{\mathscr{H}^+}, \upalpha_{\mathscr{H}^+}$ satisfy \bref{constraint horizon}. \end{proposition} Conversely, we have the following: \begin{proposition}\label{alpha out of alphabar on H} Let $\underline\upalpha_{\mathscr{H}^+}$ be such that $V^{2}\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$, then there exists a unique $\upalpha_{\mathscr{H}^+}$ with $V^{-2}\upalpha_{\mathscr{H}^+}$ smooth, such that \bref{constraint horizon} is satisfied and $V^{-2}\upalpha_{\mathscr{H}^+}\longrightarrow 0$ as $V\longrightarrow \infty$. \end{proposition} \begin{proof} As in the proof of \Cref{alphabar out of alpha on scri}, we scalarise \bref{constraint horizon}: Let $V^2\underline\upalpha_{\mathscr{H}^+}=(2M)^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1(\underline{f},\underline{g})$, $V^{-2}\upalpha_{\mathscr{H}^+}=(2M)^2\slashed{\mathcal{D}}^*_2\slashed{\mathcal{D}}^*_1({f},g)$ and let $\underline{F}=-\partial_V^4 \underline{f}, \underline{G}=-\partial_V^4 \underline{g}$.
Then $f, g, \underline{F}, \underline{G}$ satisfy \begin{align} \underline{F}&=\left[3V\partial_V+6-\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]f,\label{eq:444}\\ \underline{G}&=\left[3V\partial_V+6+\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]g.\label{eq:555} \end{align} Equations \bref{eq:444}, \bref{eq:555} are degenerate at $V=0$. If $f,g$ satisfy \bref{eq:444} and \bref{eq:555} then at $V=0$ we must have \begin{align}\label{elliptic} \underline{F}|_{V=0}&=\left[6-\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]f|_{V=0},\\ \underline{G}|_{V=0}&=\left[6+\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)\right]g|_{V=0}. \end{align} The above are elliptic identities that determine $(f,g)|_{V=0}$ from $\underline{F}|_{V=0}, \underline{G}|_{V=0}$. Denote $(f_0,g_0):=(f,g)|_{V=0}$.\\ \indent As was done in the proof of \Cref{alphabar out of alpha on scri}, we evolve \bref{eq:444} and \bref{eq:555} in opposite directions in $V$. Working with \bref{eq:444} is straightforward: let $V_\infty$ lie beyond the support of $\underline{F}$, then there is a unique $f$ satisfying \bref{eq:444} with $f|_{V_\infty}=0$ and we set $f$ to vanish for $V>V_\infty$.\\ \indent To find a solution to \bref{eq:555}, note that for $V_0>0$, there is a unique $g$ that satisfies \bref{eq:555} on $V\geq V_0$ with $g|_{V_0}=g_0$. Multiply \bref{eq:555} by $g$ and integrate by parts to get: \begin{align} \frac{3}{2}\left[g(V)^2-g(V_0)^2\right]+\int_{V_0}^V\frac{1}{\widetilde{V}}\left[6g^2+g\,\mathring{\slashed{\Delta}}(\mathring{\slashed{\Delta}}+2)g\right]=\int_{V_0}^V\frac{1}{\widetilde{V}}\,g\cdot \underline{G}. \end{align} Poincar\'e's inequality and Cauchy--Schwarz imply: \begin{align}\label{estimate} g(V)^2+\int_{V_0}^V\frac{5}{\widetilde{V}}g^2\lesssim\int_{V_0}^V \underline{G}^2+g_0^2. \end{align} We obtain similar estimates for $\partial_V g$ by commuting \bref{eq:555} with $\partial_V$.
We can use \bref{estimate} commuted with $\partial_V, \mathring{\slashed{\nabla}}$ to conclude that, taking $V_0\longrightarrow 0$, we can find $g$ that satisfies \bref{eq:555} with $g|_{V=0}=g_0$. \end{proof} \begin{remark} Were we to apply the constraint \bref{eq:227} on a smaller portion of the future event horizon, we would need more data to specify $\underline\upalpha_{\mathscr{H}^+}$ completely. In considering the problem on the entirety of $\overline{\mathscr{H}^+}$ no such additional data is necessary, since \bref{elliptic} determines $(f,g)|_{\mathcal{B}}$ in terms of $\underline\upalpha_{\mathscr{H}^+}$. \end{remark} \begin{corollary}\label{TS+2 on H+} \Cref{alphabar out of alpha on H} defines the map \begin{align} \mathcal{TS}_{\mathscr{H}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\longrightarrow \mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}. \end{align} The map $\mathcal{TS}_{\mathscr{H}^+}$ is surjective onto a dense subspace of $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}$ by \Cref{alpha out of alphabar on H}. Therefore it extends to a unitary Hilbert-space isomorphism. \end{corollary}\label{TS on H-} We can analogously consider the constraints on $\overline{\mathscr{H}^-}, \mathscr{I}^-$. In light of \Cref{time inversion} we can immediately deduce the appropriate statements: \begin{corollary}\label{TS-2 on H-} Given $\upalpha_{\mathscr{H}^-}$ such that $U^2\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$, there exists a unique solution $\underline\upalpha_{\mathscr{H}^-}$ to the equation \begin{align}\label{TS-2 on H- equation} \partial_U^4 U^2\upalpha_{\mathscr{H}^-}=\Big[2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3U\partial_U-6\Big]U^{-2}\underline\upalpha_{\mathscr{H}^-}, \end{align} such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma(\overline{\mathscr{H}^-})$.
The solution $\underline\upalpha_{\mathscr{H}^-}(u,\theta^A)$ and its $\partial_U,\mathring{\slashed{\nabla}}$ derivatives decay exponentially as $u\longrightarrow-\infty$ at a rate $\frac{4}{M}$.\\ \indent Given $\underline\upalpha_{\mathscr{H}^-}$ such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$, there exists a unique solution $\upalpha_{\mathscr{H}^-}$ to \bref{TS-2 on H- equation} such that $U^2\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$.\\ \indent As in \Cref{TS+2 on H+}, we can combine the statements above to define a unitary Hilbert-space isomorphism via \bref{TS-2 on H- equation}: \begin{align} \mathcal{TS}_{\mathscr{H}^-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}. \end{align} \end{corollary} \begin{corollary}\label{TS on scri-} Let $\upalpha_{\mathscr{I}^-}\in \Gamma_c(\mathscr{I}^-)$. Then there exists a unique smooth $\underline\upalpha_{\mathscr{I}^-}$ such that \begin{align}\label{TS+2 on I-} \partial_v^4\upalpha_{\mathscr{I}^-}-2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}\underline\upalpha_{\mathscr{I}^-}-6M\partial_v\underline\upalpha_{\mathscr{I}^-}=0, \end{align} with $\underline\upalpha_{\mathscr{I}^-}\longrightarrow0$ as $v\longrightarrow \pm \infty$. The solution $\underline\upalpha_{\mathscr{I}^-}$ and its derivatives decay exponentially as $v\longrightarrow\pm\infty$. \\ \indent Given $\underline\upalpha_{\mathscr{I}^-}$, there exists a unique solution $\upalpha_{\mathscr{I}^-}$ to \bref{TS+2 on I-} that is supported away from the past end of $\mathscr{I}^-$. Moreover, $\int_{-\infty}^\infty d\bar{v}\;\upalpha_{\mathscr{I}^-}=0$.\\ \indent As in \Cref{TS scri +}, the statements above can be combined to define via \bref{TS+2 on I-} a unitary Hilbert space isomorphism: \begin{align} \mathcal{TS}_{\mathscr{I}^-}:\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\mathscr{I}^-}.
\end{align} \end{corollary} \begin{corollary}\label{Both TS+ and TS-} There exist Hilbert space isomorphisms \begin{align} &\mathcal{TS}^+:=\mathcal{TS}_{\mathscr{H}^+}\oplus\mathcal{TS}_{\mathscr{I}^+}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^+}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^+},\\ &\mathcal{TS}^-:=\mathcal{TS}_{\mathscr{H}^-}\oplus\mathcal{TS}_{\mathscr{I}^-}:\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,+2}_{\mathscr{I}^-}\longrightarrow\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\mathcal{E}^{T,-2}_{\mathscr{I}^-}. \end{align} \end{corollary} \subsection{Propagating the identities}\label{subsection 9.3 propagating the identities} \indent We can summarise the contents of the previous section as follows: given scattering data for either $\alpha$ or $\underline\alpha$ on $\mathscr{I}^+$ and $\overline{\mathscr{H}^+}$, there exists a unique scattering data set for the other that is consistent with \bref{constraint null infinity}, \bref{constraint horizon} and \cref{+2 noncompact,-2 noncompact}.\\ \indent For $\alpha$ and $\underline\alpha$ arising from scattering data related by \bref{constraint null infinity} and \bref{constraint horizon}, if we can verify that \begin{align} \lim_{v\longrightarrow\infty} r^5\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]&=0,\\ V^2\Omega^{-2}\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}&=0, \end{align} then \Cref{propagation lemma} together with \Cref{+2 future backward scattering}, \Cref{-2 future backward scattering} imply that $\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$ everywhere. \\ \indent Assume now future scattering data $(V^{-2}\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})\in \Gamma_c(\overline{\mathscr{H}^+})\times\Gamma_c(\mathscr{I}^+)$ for the $+2$ Teukolsky equation \cref{T+2}.
We can obtain $\underline\upalpha_{\mathscr{H}^+}$ that is supported away from $\mathscr{H}^+_+$ by solving \bref{constraint horizon} as a transport equation, and we can use \Cref{alphabar out of alpha on scri} to find a smooth $\underline\upalpha_{\mathscr{I}^+}$ that decays exponentially towards $\mathscr{I}^+_\pm$ at a rate faster than $\frac{4}{M}$. Therefore, there exists a unique solution $\underline\alpha$ that realises scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ with $V^2\Omega^{-2}\underline\alpha$ smooth everywhere on $J^+(\overline\Sigma)$ up to and including $\overline{\mathscr{H}^+}$. In particular, since $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}=0$, \cref{ ts- to parabolic ts+} implies \begin{align} \partial_V^4 \left\{\partial_U^4V^{-2}\upalpha_{\mathscr{H}^+}+\Big(2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}-3V\partial_V-6\Big)V^2\underline\upalpha_{\mathscr{H}^+}\right\}=0. \end{align} Since $V^{-2}\upalpha_{\mathscr{H}^+}, V^2\underline\upalpha_{\mathscr{H}^+}$ and their derivatives decay as $v\longrightarrow\infty$, we conclude that $V^2\Omega^{-2}\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]\Big|_{\overline{\mathscr{H}^+}}=0$.\\ \indent Towards $\mathscr{I}^+$, $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ decay at a sufficiently fast rate that we can use \Cref{-2 noncompact}, \Cref{RW backwards noncompact}, and \Cref{Phi 2 backwards} to deduce
\begin{align} \begin{split} \lim_{u\longrightarrow\infty}\lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi&=\lim_{u\longrightarrow\infty} \int_u^\infty (u-\bar{u})\left[\mathcal{A}_2(\mathcal{A}_2-2)-6M\partial_u\right]\underline{\bm{\uppsi}}_{\mathscr{I}^+}\\ &=\lim_{u\longrightarrow\infty}\int_u^\infty (u-\bar{u}) \left[\mathcal{A}_2^2(\mathcal{A}_2-2)^2-(6M\partial_u)^2\right]\underline\upalpha_{\mathscr{I}^+}=0. \end{split} \end{align} We also have \begin{align} \lim_{u\longrightarrow\infty}\lim_{v\longrightarrow\infty}\partial_u^i\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi=0. \end{align} for $0\leq i\leq3$. Taking the limit of \bref{ ts+ to parabolic ts-} as $v\longrightarrow\infty$ implies \begin{align} \partial_u^4\left[\lim_{v\longrightarrow\infty}\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2\underline\Psi-\Big(2\mathring{\fancydstar_2}\mathring{\fancydstar_1}\mathring{\overline{\fancyd_1}}\mathring{\fancyd_1}+6M\partial_u\Big)\underline\upalpha_{\mathscr{I}^+}\right]=0. \end{align} Altogether, we see that $\lim_{v\longrightarrow\infty} r^5\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=0$. We have shown \begin{proposition} Assume $\alpha$ is a solution to \cref{T+2} arising from smooth scattering data $(\upalpha_{\mathscr{H}^+},\upalpha_{\mathscr{I}^+})$ such that $\upalpha_{\mathscr{I}^+} \in \Gamma_c({\mathscr{I}^+})$, $V^{-2}\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$. There exists unique smooth scattering data $\underline\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}, \underline\upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,-2}_{\mathscr{I}^+}$ giving rise to a solution $\underline\alpha$ to \cref{T-2}. Moreover, $\alpha$ and $\underline\alpha$ satisfy $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ everywhere on $J^+(\overline\Sigma)$. 
\end{proposition} We can repeat the above arguments starting from smooth, compactly supported scattering data for the $-2$ equation to arrive at \begin{proposition}\label{proof of corollary} Assume $\underline\alpha$ is a solution to \cref{T-2} arising from smooth scattering data $(\underline\upalpha_{\mathscr{H}^+},\underline\upalpha_{\mathscr{I}^+})$ such that $\underline\upalpha_{\mathscr{I}^+} \in \Gamma_c({\mathscr{I}^+})$, $V^2\underline\upalpha_{\mathscr{H}^+}\in\Gamma_c(\overline{\mathscr{H}^+})$. There exists unique smooth scattering data $\upalpha_{\mathscr{H}^+}\in\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^+}}, \upalpha_{\mathscr{I}^+}\in\mathcal{E}^{T,+2}_{\mathscr{I}^+}$ giving rise to a solution $\alpha$ to \cref{T+2}. Moreover, $\alpha$ and $\underline\alpha$ satisfy $\mathop{\mathbb{TS}^+}[\alpha,\underline\alpha]=\mathop{\mathbb{TS}^-}[\alpha,\underline\alpha]=0$ everywhere on $J^+(\overline\Sigma)$. \end{proposition} This concludes the proof of \Cref{Theorem 3 detailed statement}, i.e.~\Cref{Theorem 3} of the introduction. \subsection{A mixed scattering theory: proof of Corollary 1}\label{subsection 9.4 mixed scattering} We are in a position to prove Corollary 1 of the introduction, i.e.~\Cref{corollary to be proven} of \Cref{subsection 4.4 Corollary 1: mixed scattering}: \begin{proof}[Proof of Corollary 1] We will construct the map $\mathscr{S}^{+2,-2}$ only in the forward direction on a dense subset of $\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}\oplus\;\mathcal{E}^{T,+2}_{\mathscr{I}^-}$. Let $\upalpha_{\mathscr{I}^-}\in \Gamma_c({\mathscr{I}^-})$ with $\int_{-\infty}^\infty d\bar{v}\;\upalpha_{\mathscr{I}^-}=0$, and let $\underline\upalpha_{\mathscr{H}^-}$ be such that $U^{-2}\underline\upalpha_{\mathscr{H}^-}\in\Gamma_c(\overline{\mathscr{H}^-})$.
The map $\mathcal{TS}^-$ of \Cref{Both TS+ and TS-} defines a scattering data set consisting of a smooth field $\underline\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ which is supported away from the past end of $\mathscr{I}^-$, and a smooth field $\upalpha_{\mathscr{H}^-}$ on $\overline{\mathscr{H}^-}$ which is supported away from the past end of $\overline{\mathscr{H}^-}$.\\ \indent The map ${}^{(+2)}\mathscr{B}^-$ of \Cref{+2 past forward scattering} gives rise to a smooth solution $\alpha$ on $J^-(\overline{\Sigma})$ such that \begin{align}\label{545454} \left\|\left(\alpha|_{\overline{\Sigma}}, \slashed{\nabla}_{n_{\overline\Sigma}}\alpha|_{\overline\Sigma}\right)\right\|_{\mathcal{E}^{T,+2}_{\overline{\Sigma}}}^2=\left\|\upalpha_{\mathscr{H}^-}\right\|^2_{\mathcal{E}^{T,+2}_{\overline{\mathscr{H}^-}}}+\left\|\upalpha_{\mathscr{I}^-}\right\|^2_{\mathcal{E}^{T,+2}_{\mathscr{I}^-}}, \end{align} and the map ${}^{(+2)}\mathscr{F}^{+}$ extends $\alpha$ to a smooth solution of \bref{T+2} on $J^+(\overline{\Sigma})$. Combining \bref{545454} with the fact that $\alpha|_{{\Sigma^*}}, \slashed{\nabla}_{n_{{\Sigma^*}}}\alpha|_{\Sigma^*}$ are smooth implies that the estimates of \Cref{psiILED,alphaILED,ILED psi higherorder,alphaILED higher order} apply, and we can apply \Cref{psi+2ptwisedecay,alpha+2ptwisedecay,horizonpsidecay} together with \Cref{WP+2Sigmabar} to conclude that $\alpha$ realises the image of ${}^{(+2)}\mathscr{F}^+$ on $\overline{\mathscr{H}^+}$ as its radiation field there.\\ \indent The scattering data set $(\underline\upalpha_{\mathscr{H}^-},\underline\upalpha_{\mathscr{I}^-})$ gives rise to a unique smooth solution $\underline\alpha$ according to \Cref{past scattering of -2}, which in particular realises $\underline\upalpha_{\mathscr{H}^-},\underline\upalpha_{\mathscr{I}^-}$ as its radiation fields on $\mathscr{H}^-$, $\mathscr{I}^-$ respectively.
The quantity $\underline\Psi=\left(\frac{r^2}{\Omega^2}\Omega\slashed{\nabla}_4\right)^2r\Omega^2\underline\alpha$ satisfies the Regge--Wheeler equation \bref{RW} and induces a radiation field on $\mathscr{I}^-$ that is given by $\uppsi_{\mathscr{I}^-}=\partial_v^2\underline\upalpha_{\mathscr{I}^-}$. Note that in particular, $\partial_v\uppsi_{\mathscr{I}^-}$ vanishes whenever $\underline\upalpha_{\mathscr{I}^-}$ vanishes on $\mathscr{I}^-$.\\ \indent Assume the support of $\underline\upalpha_{\mathscr{I}^-}$ on $\mathscr{I}^-$ in $v$ is contained in $[v_-,v_+]$. Since $\underline\alpha$ arises from scattering data of compact support, we can follow the steps leading to estimate \bref{this+2} taking into account \Cref{time inversion} to obtain the following: let $R$ be sufficiently large, then \begin{align}\label{this-2} \begin{split} \int_{\underline{\mathscr{C}}_v\cap\{r>R\}}d\bar{u}d\omega\; r^2|\Omega\slashed{\nabla}_3\underline\Psi|^2\lesssim_{v_+} R^2\Bigg[\|\underline\upalpha_{\mathscr{I}^-}&\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}^2+\|\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2\\&+\int_{[v_-,v_+]\times S^2}d\bar{v}d\omega\;|\mathring{\slashed{\nabla}}\partial_v^2{\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2+4|\partial_v^2{\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2\Bigg].
\end{split} \end{align} Let $v_1>v_+$, then we can use \bref{this-2} to show that $\sqrt{r}\,\Omega\slashed{\nabla}_4\underline\Psi|_{u,v}\longrightarrow 0$ as $u\longrightarrow-\infty$: \begin{align} \begin{split} |\Omega\slashed{\nabla}_4\underline\Psi|&\leq \int_{-\infty}^u d\bar{u} |\Omega\slashed{\nabla}_4\Omega\slashed{\nabla}_3\underline\Psi|\lesssim\int_{-\infty}^u d\bar{u}\frac{1}{r^2}|\mathring{\slashed{\Delta}}\underline\Psi+\underline\Psi|\lesssim \frac{1}{\sqrt{r(u,v)}}\sqrt{\int_{-\infty}^u d\bar{u}\frac{1}{r^2}\left[|\mathring{\slashed{\Delta}}\underline\Psi|^2+|\mathring{\slashed{\nabla}}\underline\Psi|^2+|\underline\Psi|^2\right]}\\& \lesssim \frac{1}{\sqrt{r(u,v)}}\sqrt{\sum_{|\gamma|\leq2} F^T_v[\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\Psi]}, \end{split} \end{align} where $\Omega^\gamma=\Omega_1^{\gamma_1}\Omega_2^{\gamma_2}\Omega_3^{\gamma_3}$ denotes Lie differentiation with respect to the $\mathfrak{so}(3)$ algebra of $S^2$ Killing fields. Now take $u_1<u_2, v_2>v_1$ such that $(u_2,v_1,\theta^A)\in J^-(\overline\Sigma)$ and $r(u_2,v_1)>R$. We can repeat the procedure leading to \Cref{RWrp} in the region $\mathscr{D}^{u_2,v_2}_{u_1,v_1}$ to get for $p\in[0,2]$: \begin{align}\label{747474} \begin{split} &\int_{\mathscr{C}_{u_2}\cap[v_1,v_2]} d\bar{v}\sin\theta d\theta d\phi \;r^p|\Omega\slashed{\nabla}_4\underline\Psi|^2+\int_{\underline{\mathscr{C}}_{v_2}\cap[u_1,u_2]}d\bar{u}\sin\theta d\theta d\phi\;r^p\left[|\slashed{\nabla}\underline\Psi|^2+\frac{1}{r^2}|\underline\Psi|^2\right]\\&+ \int_{\mathscr{D}^{u_2,v_2}_{u_1,v_1}}d\bar{u}d\bar{v}\sin\theta d\theta d\phi\;r^{p-1}\left[p|\Omega\slashed{\nabla}_4\underline\Psi|^2+(2-p)|\slashed{\nabla}\underline\Psi|^2+r^{p-3}|\underline\Psi|^2\right]\\&\lesssim\int_{\mathscr{C}_{u_1}\cap[v_1,v_2]} d\bar{v}\sin\theta d\theta d\phi\; r^p|\Omega\slashed{\nabla}_4\underline\Psi|^2+\int_{\underline{\mathscr{C}}_{v_1}\cap[u_1,u_2]}d\bar{u}\sin\theta d\theta d\phi\;r^p\left[|\slashed{\nabla}\underline\Psi|^2+\frac{1}{r^2}|\underline\Psi|^2\right].
\end{split} \end{align} Set $p=1$ in \bref{747474}. Keeping $v_1,v_2$ fixed and taking $u_1\longrightarrow-\infty$, the first term on the right hand side of \bref{747474} decays. The remaining term can be estimated using \bref{this-2} and Hardy's inequality, knowing that $\underline\Psi$ and its angular derivatives converge pointwise towards $\mathscr{I}^-$. In conclusion we have \begin{align}\label{rp estimate from past null infinity} \begin{split} &\int_{\mathscr{D}^{u_2,\infty}_{-\infty,v_1}}d\bar{u}d\bar{v}\sin\theta d\theta d\phi\;r^{p-1}\left[p|\Omega\slashed{\nabla}_4\underline\Psi|^2+(2-p)|\slashed{\nabla}\underline\Psi|^2+r^{p-3}|\underline\Psi|^2\right]\\ &\lesssim_{R}\sum_{|\gamma|\leq2} \Bigg[\|\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}^2+\|\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2+\int_{[v_-,v_+]\times S^2}d\bar{v}d\omega\;|\mathring{\slashed{\nabla}}\partial_v^2{\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2+4|\partial_v^2{\slashed{\mathcal{L}}_{\Omega^\gamma}\underline\upalpha}_{\mathscr{I}^-}|_{S^2}^2\Bigg]. \end{split} \end{align} We can extend the region $\mathscr{D}^{u_2,\infty}_{-\infty,v_1}$ to obtain \bref{rp estimate from past null infinity} over the region $\mathscr{D}^{\infty,\infty}_{-\infty,v_1}\cap\{r>R\}$. In view of the monotonicity of $F^T_u[\underline\Psi]$ restricted to $\{r>R\}$, this implies in particular that \begin{align} \lim_{u\longrightarrow\infty} \int_{\mathscr{C}_u\cap\{r>R\}}d\bar{v}\sin\theta d\theta d\phi \;\frac{\Omega^2}{r^2}|\underline\Psi|^2=0. \end{align} Now we show that $\underline\alpha$ induces a radiation field $\underline\upalpha_{\mathscr{I}^+}$ on $\mathscr{I}^+$ which is in $\mathcal{E}^{T,-2}_{\mathscr{I}^+}$.
First, note that energy conservation is sufficient to show that $\underline\alpha, \underline\psi$ attain radiation fields on $\mathscr{I}^+$: Fix $u$ and take $v_2>v_1$: \begin{align}\label{corollary to be proven, radiation field exists} |r^3\Omega\underline\psi(u,v_2,\theta^A)-r^3\Omega\underline\psi(u,v_1,\theta^A)|\leq\int_{v_1}^{v_2} d\bar{v} \frac{\Omega^2}{r^2}|\underline\Psi|\leq\frac{1}{\sqrt{r(u,v_1)}}\;\sqrt{\int_{v_1}^{v_2} d\bar{v} \frac{\Omega^2}{r^2}|\underline\Psi|^2}. \end{align} By commuting with angular derivatives and using a Sobolev estimate as in the proof of \Cref{RWradscri}, this shows that for any sequence $\{v_n\}$ with $v_n\longrightarrow\infty$ we have that $r^3\Omega\underline\psi(u,v_n,\theta^A)$ is a Cauchy sequence, and an identical argument yields the same for $\underline\alpha$. Denote the limit of $r^3\Omega\underline\psi$ towards $\mathscr{I}^+$ by $\underline\psi_{\mathscr{I}^+}$.\\ \indent Since $r^5\underline\psi$ converges near $\mathscr{I}^-$, estimate \bref{corollary to be proven, radiation field exists} can be easily modified to show that $\underline\psi_{\mathscr{I}^+}$ decays towards the past end of $\mathscr{I}^+$. As for the future end of $\mathscr{I}^+$, we repeat the estimate \bref{corollary to be proven, radiation field exists}, estimating $\underline\psi_{\mathscr{I}^+}$ in terms of $\underline\psi$ along a hypersurface $\{r=R\}$ for a fixed $R$. Since $\underline\alpha$ is smooth and $\|(\underline\alpha|_{\overline{\Sigma}},\slashed{\nabla}_{n_{\overline{\Sigma}}}\underline\alpha|_{\overline{\Sigma}})\|_{\mathcal{E}^{T,-2}_{\overline{\Sigma}}}<\infty$, the results of \Cref{-2 radiation on H+} apply and we can deduce that $\underline\psi|_{r=R}$ decays as $t\longrightarrow\infty$, which shows that $\underline\psi_{\mathscr{I}^+}$ decays towards the future end of $\mathscr{I}^+$.\\ \indent We now show that $\int_{-\infty}^\infty d\bar{u} \;\underline\upalpha_{\mathscr{I}^+}=0$.
Consider the $-2$ Teukolsky equation \bref{T-2}, which we write as follows: \begin{align} \frac{\Omega^2}{r^2}\Omega\slashed{\nabla}_3 r^5\Omega^{-1}\underline\psi=\left(\mathcal{A}_2-\frac{6M}{r}\right)r\Omega^2\underline\alpha. \end{align} Taking the limit towards $\mathscr{I}^+$ produces \begin{align}\label{858585} \partial_u \underline\psi_{\mathscr{I}^+}=\mathcal{A}_2\;\underline\upalpha_{\mathscr{I}^+}. \end{align} Integrating \bref{858585} in $u$ and using that $\underline\psi_{\mathscr{I}^+}$ decays towards both ends of $\mathscr{I}^+$, we conclude that $\int_{-\infty}^\infty d\bar{u}\;\underline\upalpha_{\mathscr{I}^+}=0$. With this we can also conclude that $\underline\upalpha_{\mathscr{I}^+}\in \mathcal{E}^{T,-2}_{\mathscr{I}^+}$ and that \begin{align} \|\underline\upalpha_{\mathscr{I}^+}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^+}}^2+ \|\underline\upalpha_{\mathscr{H}^+}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^+}}}^2= \|\underline\upalpha_{\mathscr{I}^-}\|_{\mathcal{E}^{T,-2}_{\mathscr{I}^-}}^2+\|\underline\upalpha_{\mathscr{H}^-}\|_{\mathcal{E}^{T,-2}_{\overline{\mathscr{H}^-}}}^2. \end{align} \end{proof} \begin{remark} The result above subsumes a restricted map to scattering data in $\mathcal{E}^{T,-2}_{\mathscr{H}^-}$, $\mathcal{E}^{T,-2}_{\mathscr{H}^+}$, which leads to an isomorphism \begin{align} \mathscr{S}^{+2,-2}:\mathcal{E}^{T,-2}_{\mathscr{I}^-}\oplus\mathcal{E}^{T,-2}_{\mathscr{H}^-}\longrightarrow \mathcal{E}^{T,-2}_{\mathscr{H}^+}\oplus \mathcal{E}^{T,-2}_{\mathscr{I}^+}. \end{align} \end{remark}
\section{Introduction and main results} Branching processes considered in this paper are motivated by the works of \cite{solomon:1975} and \cite{kesten:kozlov:spitzer:1975}, who analysed a nearest-neighbour random walk in random environment. This is a random walk $(X_k, k\in\mathbb{Z}^+)$ on $\mathbb{Z}$ defined in the following way. Consider a collection $(A_i, i \in \mathbb{Z})$ of i.i.d. $(0,1)$-valued random variables. Let $\mathcal{A}$ be the $\sigma$-algebra generated by $(A_i,i\in\mathbb{Z})$. Let $(X_k, k \in \mathbb{Z}^+)$ be a random walk in random environment, that is, a collection of $\mathbb{Z}$-valued random variables such that $X_0=0$ and, for $k\ge 0$, \alns{ \prob \bigl( X_{k+1} = X_k + 1 \mid \mathcal{A}, X_0 = i_0, \ldots, X_k = i_k \bigr) = A_{i_k} } and \alns{ \prob \bigl( X_{k+1} = X_k -1 \mid \mathcal{A}, X_0 = i_0, \ldots, X_k = i_k\bigr) = 1- A_{i_k} } for all $i_j \in \mathbb{Z}$, $0\le j\le k$. The collection $(A_i,i\in\mathbb{Z})$ is called a random environment. For this random walk, \cite{kesten:kozlov:spitzer:1975} studied the appropriately scaled limiting distribution of the hitting time $T_n=\inf\{k > 0: X_k =n\}$ of a state $n\in\mathbb{Z}$. Their analysis is based on the representation of $T_n$, $n>0$, in terms of the total number of particles up to the $n$th generation of a certain branching process in random environment with size-1 immigration at each generation step. In this model the offspring distribution in the $n$th generation is geometric with a random parameter $A_n$. In other words, let $(Z_n, n \ge 0)$ be a branching process in random environment with one immigrant at each step that starts from $Z_0 \equiv 0$.
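As a purely illustrative aside, the walk just described can be simulated in a few lines. The Beta$(2,1)$ law for the environment variables $A_i$ below is an arbitrary choice (not taken from the papers cited) satisfying $\exptn\log\frac{1-A}{A}=-1<0$, so by Solomon's criterion the walk is transient to the right and hitting times of positive states are a.s. finite.

```python
import random

def hitting_time(n, rng):
    """Simulate the nearest-neighbour walk X in a random environment until
    it first hits state n; returns (T_n, trajectory).  The environment
    (A_i, i in Z) is sampled lazily, one Beta(2,1) variable per site."""
    env = {}
    x, t, path = 0, 0, [0]
    while x != n:
        a = env.setdefault(x, rng.betavariate(2.0, 1.0))  # A_{X_t}
        x += 1 if rng.random() < a else -1                # up w.p. A, down w.p. 1-A
        t += 1
        path.append(x)
    return t, path

rng = random.Random(0)
T5, path = hitting_time(5, rng)
assert path[-1] == 5 and T5 >= 5 and (T5 - 5) % 2 == 0
```

Note that $T_n\ge n$ and $T_n-n$ is even on every realisation, since each step changes $X$ by $\pm 1$.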
Then the following representation holds: \aln{ Z_{n+1} = \sum_{i=1}^{Z_n + 1} B_{n+1,i}\label{Zn} } where, conditioned on $\mathcal{A}$, $(B_{n+1,i},i\ge 1)$ are independent copies of a geometric random variable $B_{n+1}$ with probability mass function \aln{ \prob (B_{n+1}=k) = A_n(1-A_n)^k \quad \mbox{ for all } k \ge 0,\ n\ge 0. \label{eq:prob:mass:func:Bn} } Following \cite{kesten:kozlov:spitzer:1975}, let $U_i^n$ denote the number of transitions of $(X_k,k \ge 0)$ from $i$ to $i-1$ within the time interval $[0,T_n)$, i.e., \alns{ U_i^n = \card \{ k < T_n : X_k = i, X_{k+1}= i-1\}, } where $\card(C)$ is the cardinality of the set $C$. It is easy to derive that \begin{equation} T_n = n + 2 \sum_{i = -\infty}^{\infty} U_i^n.\label{mainrepr} \end{equation} Note that $U_i^n = 0$ for all $i \ge n$ and $U:=\sum_{i \le 0} U_i^n < \infty$ a.s. if $X_k \to \infty$ a.s. as $k \to \infty$. It has been established in \cite{kesten:kozlov:spitzer:1975} that \begin{equation} \sum_{i=1}^n U_i^n\ \stackrel{d}{=}\ \sum_{l=0}^{n-1} Z_l.\label{disteq} \end{equation} \cite{kesten:kozlov:spitzer:1975} then analysed $T_n$ under the so-called ``Kesten assumptions'' on the environment: \aln{ \exptn \biggl( \log \frac{1-A}{A} \biggr) < 0\quad \mbox{ but }\ \exptn \biggl(\frac{1-A}{A} \biggr) \ge 1 \label{eq:assumption:solomon} } and there exists a unique positive solution $\kappa$ to the equation \aln{ \exptn \biggl(\biggl(\frac{1-A}{A}\biggr)^\kappa\biggr) \ =\ \exptn \bigg( \exp \bigg\{ \kappa \log \frac{1-A}{A} \bigg\} \bigg) \ =\ 1. \label{eq:assumption:kks} } In particular, the assumption \eqref{eq:assumption:kks} implies that the random variable $$ \xi:=\log\frac{1-A}{A} $$ has an exponentially decaying right tail.
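The identity \eqref{mainrepr} is purely path-wise: every unit of displacement beyond the $n$ net up-steps costs one down-step plus one compensating up-step. It can therefore be checked exactly on any simulated trajectory; the Beta$(2,1)$ environment law below is an illustrative assumption, not a choice made in the literature cited.

```python
import random
from collections import Counter

rng = random.Random(42)
n, env = 8, {}
x, T, down = 0, 0, Counter()   # down[i] counts transitions i -> i-1, i.e. U_i^n
while x != n:
    a = env.setdefault(x, rng.betavariate(2.0, 1.0))
    if rng.random() < a:
        x += 1
    else:
        down[x] += 1
        x -= 1
    T += 1

# T_n = n + 2 * sum_i U_i^n holds exactly, on every realisation
assert T == n + 2 * sum(down.values())
```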
It was shown in \cite{kesten:kozlov:spitzer:1975} that, under the assumptions \eqref{eq:assumption:solomon}--\eqref{eq:assumption:kks}, the distributions of appropriately scaled random variables $T_n$ and $\sum_{k=0}^{n-1} Z_k$ become close to each other and converge, as $n\to\infty$, to the distribution of a $\kappa$-stable random variable. The tail asymptotics for the branching process $Z_n$ under the assumptions \eqref{eq:assumption:solomon}--\eqref{eq:assumption:kks} were studied by \cite{DS2017} for all three regimes, subcritical, critical, and supercritical. The aim of our paper is to study the asymptotic behaviour of the branching process $Z_n$ under the complementary assumption that the distribution $F$ of the random variable $\xi$ is {\it long-tailed}, that is, $\overline{F}(x)>0$ for all $x$ and \begin{eqnarray}\label{assregvar} \overline F(x-y) &\sim& \overline F(x) \quad\mbox{as }x\to\infty, \end{eqnarray} for some (and therefore for all) fixed $y\not=0$. Here $\overline F(x)=1-F(x)$ is the tail distribution function and equivalence \eqref{assregvar} means that the ratio of the left- and right-hand sides tends to 1 as $x$ grows. In particular, \eqref{assregvar} implies that $F$ is {\it heavy-tailed}, i.e. ${\mathbb E} e^{c\xi}=\infty$ for all $c>0$. Given \eqref{assregvar}, the distribution $G$ defined by its tail as $\overline G(x)=\overline F(\log x)$, $x\ge 1$, is {\it slowly varying at infinity} and therefore {\it subexponential}, that is, \begin{eqnarray}\label{asssubexp} \overline{G*G}(x) &\sim& 2\overline G(x) \quad\mbox{as }x\to\infty, \end{eqnarray} see, e.g. Theorem 3.29 in \cite{FKZ}. A distribution $F$ with finite mean is called {\it strong subexponential} if \begin{eqnarray}\label{asssubexp.str} \int_0^x \overline F(x-y)\overline F(y)dy &\sim& 2\overline F(x)\int_0^\infty\overline F(y)dy \quad\mbox{as }x\to\infty. 
\end{eqnarray} Any strong subexponential distribution $F$ is subexponential, and its {\it integrated tail distribution} $F_I$ with the tail distribution function \begin{eqnarray*} \overline F_I(x) &=& \min\Bigl(1,\ \int_x^\infty\overline F(y)dy\Bigr) \end{eqnarray*} is subexponential too (see e.g. \cite[Theorem 3.27]{FKZ}). In what follows, we write $F_I(x,y]:=\overline F_I(x)-\overline F_I(y)$. We now state our first main result. \begin{thm}\label{thm:tail:fixed:generation:size} Under the assumption \eqref{assregvar}, \alns{ \prob(Z_1> m)\ \sim\ \overline F(\log m)\quad\mbox{as }m\to\infty. } If, in addition, the distribution $F$ is subexponential, then, for any fixed $n\ge 2$, \alns{ \prob(Z_n> m)\ \sim\ n\overline F(\log m)\quad\mbox{as }m\to\infty. } \end{thm} Theorem \ref{thm:tail:fixed:generation:size} shows that the tail of $Z_1$ is surprisingly heavy and becomes heavier with each generation. It should be underlined that this type of behaviour is a consequence of the environment only, and not of the branching mechanism, which is geometric. In contrast to the series of papers \cite{seneta}, \cite{darling}, \cite{schuh:barbour}, \cite{hong:zhang:2018}, we do not analyse convergence as $n\to\infty$, focusing instead on the tail behaviour of the distribution of $Z_n$ for each fixed $n$. Consider now a branching process with state-independent immigration satisfying the stability condition \begin{eqnarray}\label{stab.cond} -a\ :=\ \exptn \xi < 0\quad \mbox{where }\exptn |\xi|<\infty. \end{eqnarray} The classical Foster criterion implies that the distribution of $Z_n$ stabilises in time, i.e.\ the distribution of the Markov chain $Z_n$ converges to a unique limiting (stationary) distribution as $n$ grows. It follows from Theorem \ref{thm:tail:fixed:generation:size} that, for any $n$, the tail of the stationary distribution must be asymptotically heavier than $n\overline{F}(\log m)$, i.e.
$\prob(Z>m)/\overline{F}(\log m)\to\infty$ as $m\to\infty$, where $Z$ is sampled from the stationary distribution. The distribution tail asymptotics of $Z_n$ and $Z$ are specified in the following two results. The first result provides two asymptotic lower bounds, for finite and infinite time horizons, where the first bound is uniform for all generations. \begin{thm}\label{thm:tail:generation:size.lower} Assume that $A\le\widehat A$ a.s. for some constant $\widehat A<1$. Then the following lower bounds hold. {\rm (i)} If the distribution $F$ is long-tailed, then \begin{eqnarray}\label{eq:stat1} \prob(Z_n>m) &\ge& (a^{-1}+o(1)) F_I(\log m,\log m+na] \mbox{ as }m\to\infty\mbox{ uniformly for all }n\ge 1. \end{eqnarray} {\rm (ii)} If the integrated tail distribution $F_I$ is long-tailed and the stability condition \eqref{stab.cond} holds, then \begin{eqnarray}\label{eq:stat2} \prob(Z>m) &\ge& (a^{-1}+o(1)) \overline F_I(\log m)\quad\mbox{as }m\to\infty. \end{eqnarray} \end{thm} The next result presents conditions for existence of upper bounds that match the lower bounds of Theorem \ref{thm:tail:generation:size.lower}. \begin{thm}\label{thm:tail:generation:size.upper} Let the stability condition \eqref{stab.cond} hold and the distribution $F$ be such that \begin{eqnarray}\label{cond.sqrt} \overline F(m-\sqrt m)\sim\overline F(m)\quad \mbox{ and }\quad\overline F(m)e^{\sqrt m}\to\infty\quad\mbox{as }m\to\infty. \end{eqnarray} Then the following upper bounds hold. {\rm (i)} If the distribution $F$ is strong subexponential, then \begin{eqnarray}\label{eq:stat1.eq} \prob(Z_n>m) &\le& (a^{-1}+o(1)) F_I(\log m,\log m+na] \mbox{ as }m\to\infty \mbox{ uniformly for all }n\ge 1. \end{eqnarray} {\rm (ii)} If the integrated tail distribution $F_I$ is subexponential, then \begin{eqnarray}\label{eq:stat2.eq} \prob(Z>m) &\le& (a^{-1}+o(1)) \overline F_I(\log m)\quad\mbox{as }m\to\infty. 
\end{eqnarray} \end{thm} Distributions satisfying the first condition in \eqref{cond.sqrt} are called {\it square-root insensitive}, see e.g. \cite[Sect. 2.8]{FKZ}. Typical examples of distributions satisfying \eqref{cond.sqrt} are: any regularly varying distribution, the log-normal distribution and a Weibull distribution with parameter less than $1/2$. We do not know, how essential is the square-root insensitivity condition for the upper bounds in Theorem \ref{thm:tail:generation:size.upper} to hold. In the literature, there are various scenarios where extra randomness leads to appearance of further terms in the tail asymptotics due to the effects caused by the central limit theorem. Namely, for the Weibull distribution $\overline{F}(x) =\exp (-x^{\beta})$ with parameter $\beta\in[1/2,1)$, the number of extra terms appearing in the tail asymptotics depends on the interval $[n/(n+1), (n+1)/(n+2))$, $n=1$, $2$, \ldots\ the parameter $\beta$ belongs to -- see e.g. \cite{AKS1998} and \cite{FK2000} for the distributional tail asymptotics of the stationary queue length in a single-server queue or \cite{DKW2020} for the tail asymptotics of the stationary distribution in a Markov chain with asymptotically zero drift. However, we are not certain that similar arguments may be relevant to the model considered in the present paper. If the distribution $F$ satisfies all the conditions of Theorems \ref{thm:tail:generation:size.lower} and \ref{thm:tail:generation:size.upper}, then the corresponding lower and upper bounds match each other and we conclude the following tail asymptotics: \begin{eqnarray}\label{eq:stat1.eq.asy} \prob(Z_n>m) &\sim& a^{-1} F_I(\log m,\log m+na] \mbox{ as }m\to\infty\mbox{ uniformly for all }n\ge 1,\\ \label{eq:stat2.eq.asy} \prob(Z>m) &\sim& a^{-1} \overline F_I(\log m)\quad\mbox{as }m\to\infty. 
\end{eqnarray} These asymptotics may be intuitively interpreted as follows: $Z_n$ takes a large value if one of the $\xi$'s is sufficiently large, i.e.\ one of the success probabilities $A$'s is small. This phenomenon may be called {\it the principle of a single atypical environment} and formulated as follows. For any $c>1$ and $\varepsilon>0$ let us introduce the events \begin{eqnarray*} E_n^{(k)}(m) &=& \bigl\{Z_k\le c,\ \xi_k>\log m+(a+\varepsilon)(n-k),\\ &&\hspace{15mm}|S_{j,n-1}-(n-j)\exptn\xi|\le c+\varepsilon(n-j) \mbox{ for all }j\in[k+1,n-1]\bigr\},\quad k\le n-1, \end{eqnarray*} where $S_{j,n}:=\xi_j+\ldots+\xi_n$. The event $E_n^{(k)}(m)$ describes the trajectories on which the value of $Z_k$ is relatively small, the success probability $A_k$ is then close to zero (a single atypical environment occurs), and after time $k$ the environment follows the strong law of large numbers with drift $-a$. As stated in the next theorem, the union of all these events provides the most probable way for the large deviations of $Z_n$ to occur. \begin{thm}\label{thm:PSLE} Assume that the conditions of Theorems \ref{thm:tail:generation:size.lower} and \ref{thm:tail:generation:size.upper} hold. Then, for any fixed $\varepsilon>0$, \begin{eqnarray}\label{eq:PSLE} \lim_{c\to\infty}\lim_{m\to\infty} \inf_{n\ge 1}\prob\biggl(\bigcup_{k=0}^{n-1} E_n^{(k)}(m)\ \Big|\ Z_n>m\biggr) &=& 1. \end{eqnarray} \end{thm} Let us highlight a natural link between branching processes in random environment and stochastic difference equations.
It follows from the recurrence equation \begin{eqnarray*} \exptn(Z_n\mid\mathcal A,\ Z_{n-1}) &=& (Z_{n-1}+1)\exptn(B_n\mid\mathcal A)\\ &=& (Z_{n-1}+1) \biggl(\frac{1}{A_{n-1}}-1\biggr) \ =\ (Z_{n-1}+1)e^{\xi_{n-1}} \end{eqnarray*} that, for each $n$, the conditional expectation of $Z_n$, \begin{eqnarray}\label{perp} \exptn(Z_n\mid\mathcal A) &=& \sum_{k=0}^{n-1} e^{\sum_{l=k}^{n-1} \xi_l} \ =\ \sum_{k=0}^{n-1}e^{S_{k,n-1}}, \end{eqnarray} is distributed as a finite-time-horizon perpetuity, and its limit $\exptn(Z\mid\mathcal A)$ is distributed as the solution to the corresponding stochastic fixed-point equation. Their tail asymptotic behaviour in the heavy-tailed case is the same as given in \eqref{eq:stat1.eq.asy}--\eqref{eq:stat2.eq.asy}, that is, \begin{eqnarray}\label{perp1} \prob\bigl[\exptn(Z_n\mid\mathcal A)>m\bigr] &\sim& a^{-1} F_I(\log m,\log m+na] \mbox{ as }m\to\infty\mbox{ uniformly for all }n\ge 1,\\ \label{perp2} \prob\bigl[\exptn(Z\mid\mathcal A)>m\bigr] &\sim& a^{-1} \overline F_I(\log m)\quad\mbox{as }m\to\infty, \end{eqnarray} see \cite{Dyszewski} for \eqref{perp2} and \cite{Dima2020} for the general case. The remainder of the paper is dedicated to the proofs of the results above. We close the paper with Section \ref{sec:extensions}, which contains some discussion and possible extensions. \section{Finite time horizon tail asymptotics, proof of Theorem~\ref{thm:tail:fixed:generation:size}} \label{sec:n} We start with some useful representations. Firstly, \begin{eqnarray}\label{rep.Z.m.EA} \prob(Z_1>m) &=& \prob(B_1>m)\ =\ \exptn \bigl((1-A_0)^{m+1}\bigr). \end{eqnarray} Secondly, let us observe that the $k$-fold convolution of the geometric distribution is known in closed form: it is negative binomial, with probability mass function \begin{eqnarray*} \prob(B_1+\ldots+B_k=m\mid\mathcal{A}) &=& A^k(1-A)^m\frac{(m+1)\ldots(m+k-1)}{(k-1)!} \quad\mbox{for all }k\ge 2\mbox{ and }m\ge 0.
\end{eqnarray*} Therefore, for $k\ge 2$, \begin{eqnarray*} \prob(B_1+\ldots+B_k>m\mid\mathcal{A}) &=& (-1)^{k-1}\frac{A^k}{(k-1)!} \frac{{\rm d}^{k-1}}{{\rm d}A^{k-1}}\sum_{n=m+1}^\infty(1-A)^{n+k-1}\\ &=& (-1)^{k-1}\frac{A^k}{(k-1)!} \frac{{\rm d}^{k-1}}{{\rm d}A^{k-1}}\frac{(1-A)^{m+k}}{A}\\ &=& (-1)^{k-1}\frac{A^k}{(k-1)!} \sum_{j=0}^{k-1} {{k-1}\choose{j}} \frac{{\rm d}^j}{{\rm d}A^j}(1-A)^{m+k} \frac{{\rm d}^{k-1-j}}{{\rm d}A^{k-1-j}}\frac{1}{A}, \end{eqnarray*} which yields the following binomial representation that is convenient for further analysis, \begin{eqnarray} \prob(B_1+\ldots+B_k>m\mid\mathcal{A}) &=& A^k \sum_{j=0}^{k-1} {{m+k}\choose{j}} (1-A)^{m+k-j} \frac{1}{A^{k-j}}\nonumber\\ &=& \sum_{j=0}^{k-1} {{m+k}\choose{j}} A^j(1-A)^{m+k-j}.\label{reprenegbinomial} \end{eqnarray} The above representations allow us to prove two auxiliary results. \begin{lemma}\label{l:1.exp} Under the assumption \eqref{assregvar}, \begin{eqnarray}\label{exp.1-A.asy} \exptn \bigl((1-A)^m\bigr) &\sim& \overline F(\log m) \quad\mbox{as }m\to\infty. \end{eqnarray} \end{lemma} \begin{lemma}\label{l:jm.upper} Under the assumption \eqref{assregvar}, there exist $\gamma<\infty$ and $\varepsilon>0$ such that \begin{eqnarray*} \exptn A^j(1-A)^m &\le& \gamma\frac{j^jm^m}{(m+j)^{m+j}}\overline F(\log m-\log j) \quad\mbox{for all }m>1\mbox{ and }j\le\varepsilon m. \end{eqnarray*} In particular, for any fixed $j\ge 1$, \begin{eqnarray} \exptn A^j(1-A)^m &=& o(\overline F(\log m))\quad\mbox{as }m\to\infty. \label{osmalljgeq1} \end{eqnarray} \end{lemma} \begin{proof}[Proof of Lemma \ref{l:1.exp}] Since, for any fixed $\varepsilon>0$, $$ \exptn \bigl((1-A)^{m+1};\ A>\varepsilon\bigr)\ \le\ (1-\varepsilon)^{m+1} $$ is exponentially decreasing as $m\to\infty$, the asymptotic behaviour of the right-hand side in \eqref{rep.Z.m.EA} is determined by the tail behaviour of $A$ near $0$.
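This localisation of $\exptn(1-A)^m$ near $A=0$ can be illustrated numerically. The three-point law for $A$ below is a toy assumption chosen purely for illustration; the assertions checked are the exact exponential bound on $\{A>\varepsilon\}$ and the dominance of the region near $0$.

```python
# Toy environment: A takes three values; the smallest one drives E(1-A)^m.
support = [(0.001, 0.2), (0.3, 0.5), (0.6, 0.3)]   # (value a, probability p) -- illustrative
m, eps = 10_000, 0.01

total = sum(p * (1 - a) ** m for a, p in support)
small = sum(p * (1 - a) ** m for a, p in support if a <= eps)
tail  = sum(p * (1 - a) ** m for a, p in support if a > eps)

assert tail <= (1 - eps) ** m            # the exponential bound from the proof
assert small / total > 0.999             # the region {A near 0} dominates
```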
Notice that, for $0<a<b<1$, \begin{eqnarray}\label{startid} \prob\left(A\in(a, b]\right) &=& \prob\left(\log \frac{1-A}{A}\in \left[\log \frac{1-b}{b},\ \log \frac{1-a}{a}\right)\right) \nonumber\\ &=& \prob\bigl(\xi\in [\log(1/b-1),\ \log(1/a-1))\bigr). \end{eqnarray} Hence, for any fixed $c>0$, we have \begin{eqnarray*} \exptn (1-A)^m &\ge& \exptn [(1-A)^m;\ A\le c/m] \\ &\ge& (1-c/m)^m\prob( A\le c/m)\\ &=& (1-c/m)^m\overline F(\log(m/c-1)). \end{eqnarray*} It follows from the long-tailedness of the distribution $F$ of $\xi$ that the right-hand side above is asymptotically equivalent to $e^{-c}\overline F(\log m)$ as $m\to\infty$. Letting $c\downarrow 0$ we complete the proof of the lower bound \begin{eqnarray*} \exptn (1-A)^m &\ge& (1+o(1))\overline F(\log m)\quad\mbox{as }m\to\infty. \end{eqnarray*} To obtain the matching upper bound, let us consider the following decomposition, which is valid for every integer $K\in[1,[m/2]-1]$: \begin{eqnarray*} \lefteqn{\exptn (1-A)^m}\\ &=& \exptn\biggl[(1-A)^m;\ A\le \frac{K}{m}\biggr]+ \sum_{k=K}^{[m/2]-1} \exptn \biggl[(1-A)^m;\ A\in\biggl(\frac{k}{m},\frac{k+1}{m}\biggr]\biggr] +\exptn\biggl[(1-A)^m;\ A>\frac{[m/2]}{m}\biggr]\\ &\le& \prob\biggl(A\le \frac{K}{m}\biggr) +\sum_{k=K}^{[m/2]-1}\biggl(1-\frac{k}{m}\biggr)^m \prob\biggl(A\le\frac{k+1}{m}\biggr) +\biggl(1-\frac{[m/2]}{m}\biggr)^m\\ &\le& \overline F\biggl(\log\biggl(\frac{m}{K}-1\biggr)\biggr) +\sum_{k=K}^{[m/2]-1} e^{-k} \overline F\biggl(\log\biggl(\frac{m}{k+1}-1\biggr)\biggr) +\biggl(1-\frac{[m/2]}{m}\biggr)^m. \end{eqnarray*} Let us show that the series in the middle term in the last line is negligible for large values of $K$. Indeed, firstly, $$ \frac{m}{k+1}-1\ \ge\ \frac{1}{2}\frac{m}{k+1} \quad\mbox{for all }k\le \frac{m}{2}-1 $$ and hence \begin{eqnarray*} \sum_{k=K}^{[m/2]-1} e^{-k} \overline F\biggl(\log\biggl(\frac{m}{k+1}-1\biggr)\biggr) &\le& \sum_{k=K}^{[m/2]-1} e^{-k} \overline F(\log m-\log(k+1)-\log 2).
\end{eqnarray*} Since the distribution $F$ is assumed long-tailed, there exists a constant $\gamma<\infty$ such that $\overline F(x-y)\le\gamma e^y \overline F(x)$ for all $x$, $y>0$. Therefore, \begin{eqnarray}\label{inner.sum.1} \sum_{k=K}^{[m/2]-1} e^{-k} \overline F\biggl(\log\biggl(\frac{m}{k+1}-1\biggr)\biggr) &\le& \gamma\overline F(\log m)\sum_{k=K}^\infty e^{-k} e^{\log(k+1)+\log 2}\nonumber\\ &\le& \varepsilon(K)\overline F(\log m) \end{eqnarray} where \begin{eqnarray*} \varepsilon(K) &:=& \gamma\sum_{k=K}^\infty e^{-k} e^{\log(k+1)+\log 2}\ \to\ 0 \quad\mbox{as }K\to\infty. \end{eqnarray*} Hence we conclude that \begin{eqnarray*} \exptn (1-A)^m &\le& \overline F(\log(m/K-1)) +\varepsilon(K)\overline F(\log m)+O(1/2^m) \quad\mbox{as }m\to\infty. \end{eqnarray*} Due to the long-tailedness of $F$ this implies that, for any fixed $K$, \begin{eqnarray*} \exptn (1-A)^m &\le& (1+o(1))\overline F(\log m)+\varepsilon(K)\overline F(\log m) \quad\mbox{as }m\to\infty. \end{eqnarray*} Since $\varepsilon(K)\to 0$ as $K\to\infty$, the proof is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{l:jm.upper}] There exist $K\in\mathbb N$ and $\varepsilon_1>0$ such that the following inequalities hold \begin{eqnarray}\label{log.lin} \log(k+1) &\le& k/6\quad\mbox{for all }k\ge K \end{eqnarray} and \begin{eqnarray}\label{comp.e} \biggl(1-\frac{j}{m}\biggr)^m &\ge& \frac{1}{3^j} \quad\mbox{for all }m>K\mbox{ and }j\le\varepsilon_1 m. \end{eqnarray} Similar to the case $j=0$ considered in the proof of Lemma \ref{l:1.exp}, we make use of the following decomposition: \begin{eqnarray}\label{E1.E2} \exptn A^j(1-A)^m &=& \exptn\biggl[A^j(1-A)^m;\ A\le\frac{Kj}{3m}\biggr]+ \sum_{k=K}^{[3m/j]} \exptn\biggl[A^j(1-A)^m;\ A\in\biggl(k\frac{j}{3m},(k+1)\frac{j}{3m}\biggr]\biggr]\nonumber\\ &=:& E_1+E_2. \end{eqnarray} The maximum of the function $x^j(1-x)^m$ over the interval $[0,1]$ is attained at point $j/(m+j)$ and is equal to $j^jm^m/(m+j)^{m+j}$. 
Therefore, for some $\varepsilon=\varepsilon(K)\le\varepsilon_1$, \begin{eqnarray}\label{E1} E_1 &\le& \frac{j^jm^m}{(m+j)^{m+j}} \prob\biggl(A\le\frac{Kj}{3m}\biggr)\nonumber\\ &=& \frac{j^jm^m}{(m+j)^{m+j}} \overline F\biggl(\log\biggl(\frac{3m}{Kj}-1\biggr)\biggr)\nonumber\\ &\le& \gamma_1\frac{j^jm^m}{(m+j)^{m+j}} \overline F(\log m-\log j) \quad\mbox{for some }\gamma_1<\infty\mbox{ and all }j\le\varepsilon m, \end{eqnarray} owing to the long-tailedness of $F$. Further, the series on the right hand side of \eqref{E1.E2} admits the following upper bound \begin{eqnarray*} E_2 &\le& \sum_{k=K}^{[3m/j]} (k+1)^j\biggl(\frac{j}{3m}\biggr)^j \biggl(1-\frac{kj}{3m}\biggr)^m \prob\biggl(A\le(k+1)\frac{j}{3m}\biggr)\\ &\le& \biggl(\frac{j}{3m}\biggr)^j\ \sum_{k=K}^{[3m/j]} (k+1)^j e^{-kj/3} \overline F\bigl(\log\bigl(3m/((k+1)j)-1\bigr)\bigr) \end{eqnarray*} because $(1-kj/3m)^m\le e^{-kj/3}$. Let us now bound the latter series. It follows from the inequality \eqref{log.lin} that \begin{eqnarray*} (k+1)^j e^{-kj/3} &=& e^{j(\log(k+1)-k/3)} \ \le\ e^{-jk/6}\quad\mbox{for all }k\ge K. \end{eqnarray*} Then, using arguments similar to those in \eqref{inner.sum.1}, \begin{eqnarray} E_2 &\le& \biggl(\frac{j}{3m}\biggr)^j \sum_{k=K}^{[3m/j]} e^{-jk/6} \overline F\bigl(\log\bigl(3m/((k+1)j)-1\bigr)\bigr)\label{citedlater2}\\ &\le& \gamma_2\biggl(\frac{j}{3m}\biggr)^j\overline F(\log m-\log j) \quad\mbox{for some }\gamma_2<\infty,\nonumber \end{eqnarray} which implies the result due to the inequalities \eqref{E1} and \begin{eqnarray*} \frac{j^jm^m}{(m+j)^{m+j}}\ =\ \biggl(\frac{j}{m}\biggr)^j\biggl(1-\frac{j}{m+j}\biggr)^{m+j} &\ge& \biggl(\frac{j}{3m}\biggr)^j, \end{eqnarray*} the latter being guaranteed by \eqref{comp.e}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:tail:fixed:generation:size}] We prove the statement by induction on $n\ge 1$. The assertion for $n=1$ follows from the representation \eqref{rep.Z.m.EA} and Lemma \ref{l:1.exp}.
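The induction step below relies heavily on the representation \eqref{reprenegbinomial}, so it is worth sanity-checking it numerically against an explicit convolution of geometric mass functions; the values of $A$, $k$ and $m$ in this sketch are illustrative.

```python
from math import comb, isclose

def nb_tail_closed_form(A, k, m):
    """Tail P(B_1+...+B_k > m | A) via the binomial representation:
    P(Bin(m+k, A) <= k-1) = sum_{j<k} C(m+k, j) A^j (1-A)^(m+k-j)."""
    return sum(comb(m + k, j) * A**j * (1 - A) ** (m + k - j) for j in range(k))

def nb_tail_convolution(A, k, m, cutoff=800):
    """Same tail via an explicit k-fold convolution of Geom(A) mass
    functions, truncated at `cutoff` (the neglected mass underflows)."""
    geom = [A * (1 - A) ** i for i in range(cutoff)]
    pmf = [1.0]  # point mass at 0: the empty sum
    for _ in range(k):
        new = [0.0] * cutoff
        for i, p in enumerate(pmf):
            for j in range(cutoff - i):
                new[i + j] += p * geom[j]
        pmf = new
    return sum(pmf[m + 1:])

A, k, m = 0.4, 3, 25
assert isclose(nb_tail_closed_form(A, k, m), nb_tail_convolution(A, k, m), rel_tol=1e-9)
```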
Assume that the assertion of Theorem~\ref{thm:tail:fixed:generation:size} is valid for some $n\ge 1$. Let us show that it then holds for $n+1$. Our aim is to obtain the tail asymptotics of the distribution of \alns{ Z_{n+1}\ =\ \sum_{i=1}^{Z_n+1} B_{n+1,i}, } where $(B_{n+1,i}, i\ge 1)$ are independent copies of a geometric random variable $B_{n+1}$ with success probability $A_n$ (its probability mass function is specified in \eqref{eq:prob:mass:func:Bn}), independent of $Z_n$ conditioned on $\mathcal{A}$. Then the following representation holds \begin{eqnarray}\label{decomp} \prob(Z_{n+1}>m) &=& \sum_{k=0}^\infty \prob \bigg( \sum_{j=1}^{k+1} B_{n+1,j}>m, Z_n=k\bigg)\nonumber\\ &=& \sum_{k=0}^\infty \exptn \bigg[ \prob \bigg(\sum_{j=1}^{k+1} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n=k), \end{eqnarray} where we have conditioned on $\mathcal{A}$ and used the fact that $Z_n$ and $(B_{n+1,i}, i\ge 1)$ are independent conditioned on $\mathcal{A}$. We start with the proof of the upper bound. For that, let us split the summation in \eqref{decomp} into three parts, from $0$ to $K$, from $K+1$ to $\varepsilon m-1$ and from $\varepsilon m$ to $\infty$, where the integer $K$ is chosen large enough and the real $\varepsilon>0$ small enough. This splitting, together with the non-negativity of the $B$'s, implies that \begin{eqnarray*} \lefteqn{\prob(Z_{n+1}>m)} \\ &\le& \exptn \bigg[ \prob \bigg( \sum_{j=1}^K B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n<K)\\ &&\hspace{42mm}+ \sum_{k=K}^{\varepsilon m} \exptn \bigg[ \prob \bigg( \sum_{j=1}^{k+1} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n=k) + \prob(Z_n>\varepsilon m)\\ &\le& \exptn \bigg[ \prob \bigg( \sum_{j=1}^K B_{n+1,j}>m\Big| \mathcal{A} \bigg)\bigg] + \sum_{k=K}^{\varepsilon m}\exptn \bigg[ \prob \bigg( \sum_{j=1}^{k+1} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n=k) + \prob(Z_n>\varepsilon m).
\end{eqnarray*} By the induction hypothesis and long-tailedness of $F$, for any fixed $\varepsilon$, \begin{eqnarray*} \prob(Z_n>\varepsilon m) &\sim& n\overline F(\log(\varepsilon m)) \ \sim\ n\overline F(\log m)\quad\mbox{as }m\to\infty. \end{eqnarray*} It remains to show that, for any fixed $K$, \begin{eqnarray}\label{est.upper.44} \exptn \bigg[ \prob \bigg( \sum_{j=1}^K B_{n+1,j}>m\Big| \mathcal{A} \bigg)\bigg] &\sim& \overline F(\log m)\quad\mbox{as }m\to\infty, \end{eqnarray} and that, for any $\delta>0$, there exist a sufficiently large $K$ and a sufficiently small $\varepsilon>0$ such that \begin{eqnarray}\label{est.upper.4} \sum_{k=K}^{\varepsilon m}\exptn \bigg[ \prob \bigg( \sum_{j=1}^{k+1} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n=k) &\le& \delta\overline F(\log m) \quad\mbox{for all sufficiently large }m. \end{eqnarray} We start by proving \eqref{est.upper.4}. Let $\xi(A)$ be a Bernoulli random variable with success probability $A$ and $S_{m+k}(A)$ be the sum of $m+k$ independent copies of $\xi(A)$. It follows from the representation \eqref{reprenegbinomial} that \begin{eqnarray*} \prob(B_1+\ldots+B_k>m\mid\mathcal{A}) &=& \prob(S_{m+k}(A)\le k-1)\\ &\le& (\exptn (e^{-\beta\xi(A)}))^{m+k}e^{\beta(k-1)}\\ &=& (1-A+e^{-\beta}A)^{m+k}e^{\beta(k-1)}, \quad\mbox{for all }\beta>0. \end{eqnarray*} The minimal value of the right hand side is attained for $\beta$ such that $e^{-\beta}=\frac{(1-A)(k-1)}{A(m+1)}$, hence \begin{eqnarray*} \prob(B_1+\ldots+B_k>m\mid\mathcal{A}) &\le& \frac{(m+k)^{m+k}}{(m+1)^{m+1}(k-1)^{k-1}}A^{k-1}(1-A)^{m+1}. \end{eqnarray*} This allows us to conclude from Lemma \ref{l:jm.upper} that, for $k\le\varepsilon m$, \begin{eqnarray*} \prob(B_1+\ldots+B_k>m) &\le& \frac{(m+k)^{m+k}}{(m+1)^{m+1}(k-1)^{k-1}}\exptn A^{k-1}(1-A)^{m+1}\\ &\le& \gamma\overline F(\log(m+1)-\log(k-1)).
\end{eqnarray*} Therefore, \begin{eqnarray*} \sum_{k=K}^{\varepsilon m}\prob(B_1+\ldots+B_{k+1}>m)\prob(Z_n=k) &\le& \gamma\sum_{k=K}^{\varepsilon m} \overline F(\log(m+1)-\log k)\prob(Z_n=k). \end{eqnarray*} Representing $\prob(Z_n=k)$ as the difference $\prob(Z_n>k-1)-\prob(Z_n>k)$ and rearranging the sum on the right hand side we conclude that this sum is not greater than \begin{eqnarray*} \lefteqn{\overline F(\log(m+1)-\log K)\prob(Z_n>K-1)}\\ &&+ \sum_{k=K}^{\varepsilon m-1} \bigl( \overline F(\log(m+1)-\log (k+1))-\overline F(\log(m+1)-\log k)\bigr) \prob(Z_n>k). \end{eqnarray*} Then the induction hypothesis yields an upper bound, for some $\gamma_1<\infty$, \begin{eqnarray*} \lefteqn{\sum_{k=K}^{\varepsilon m}\prob(B_1+\ldots+B_{k+1}>m)\prob(Z_n=k)}\\ &\le& \gamma\overline F(\log(m+1)-\log K)\prob(Z_n>K-1)\\ &&+ \gamma_1\sum_{k=K}^{\varepsilon m-1} \bigl( \overline F(\log(m+1)-\log (k+1))-\overline F(\log(m+1)-\log k)\bigr) \overline F(\log k). \end{eqnarray*} Due to the long-tailedness of $F$, for any $\delta>0$ there exists a sufficiently large $K$ such that the first term on the right hand side is not greater than $\delta\overline F(\log m)$, for all sufficiently large $m$. After rearranging we conclude that the sum on the right hand side is not greater than \begin{eqnarray}\label{sum.2} \lefteqn{\overline F(\log(m+1)-\log (\varepsilon m)) \overline F(\log(\varepsilon m-1))}\nonumber\\ &&\hspace{30mm}+\sum_{k=K+1}^{\varepsilon m-1} \overline F(\log(m+1)-\log k) \bigl(\overline F(\log(k-1))-\overline F(\log k)\bigr). \end{eqnarray} Since $F$ is long-tailed, the first term here is asymptotically equivalent to \begin{eqnarray*} \overline F(\log(1/\varepsilon))\overline F(\log m) \quad\mbox{as }m\to\infty, \end{eqnarray*} so it is not greater than $\delta\overline F(\log m)$ for all sufficiently large $m$ provided $\overline F(\log(1/\varepsilon))\le\delta/2$. 
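As an aside, the optimised exponential bound on $\prob(B_1+\ldots+B_k>m\mid\mathcal{A})$ derived above can be checked numerically against the exact tail computed from \eqref{reprenegbinomial}; the values of $A$, $k$ and $m$ below are illustrative and satisfy $(1-A)(k-1)<A(m+1)$, so the minimising $\beta$ is positive.

```python
from math import comb

def exact_tail(A, k, m):
    """Exact P(B_1+...+B_k > m | A) = P(Bin(m+k, A) <= k-1)."""
    return sum(comb(m + k, j) * A**j * (1 - A) ** (m + k - j) for j in range(k))

def chernoff_bound(A, k, m):
    """The optimised exponential bound from the proof (valid when
    (1-A)*(k-1) < A*(m+1), i.e. when the optimal beta is positive)."""
    return ((m + k) ** (m + k) / ((m + 1) ** (m + 1) * (k - 1) ** (k - 1))
            * A ** (k - 1) * (1 - A) ** (m + 1))

A, k, m = 0.5, 5, 50
assert (1 - A) * (k - 1) < A * (m + 1)           # validity of the chosen beta
assert exact_tail(A, k, m) <= chernoff_bound(A, k, m)
```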
The sum in \eqref{sum.2} equals \begin{eqnarray*} \sum_{k=K+1}^{\varepsilon m-1} \overline G\biggl(\frac{m+1}{k}\biggr)G(k-1,k], \end{eqnarray*} where the distribution $G$ is defined via its tail as $\overline G(x)=\overline F(\log x)$, and can be bounded by the integral \begin{eqnarray*} \int_K^{\varepsilon m} \overline G(m/z)G(dz) &=& \prob(e^{\xi_1+\xi_2}>m;\ e^{\xi_2}\in(K,\varepsilon m])\\ &=& \prob(\xi_1+\xi_2>\log m;\ \xi_2\in(\log K,\ \log m-\log(1/\varepsilon)]). \end{eqnarray*} Since the distribution $F$ is assumed to be subexponential, we can choose a sufficiently large $K$ and a sufficiently small $\varepsilon>0$ such that the latter probability is not greater than $\delta\overline F(\log m)$ for all sufficiently large $m$, see \cite[Theorem 3.6]{FKZ}, which completes the proof of \eqref{est.upper.4}. To complete the proof of the upper bound it now suffices to show \eqref{est.upper.44}. This follows immediately from the representation \eqref{reprenegbinomial}, the asymptotics \eqref{osmalljgeq1} and Lemma \ref{l:1.exp}. We now obtain the matching lower bound. For that, let us split the sum in \eqref{decomp} into two parts, from $0$ to $cm$ and from $cm+1$ to $\infty$, where $c$ is a large constant which will be sent to infinity later. This splitting implies that \begin{eqnarray}\label{est} \prob(Z_{n+1}>m) &\ge& \sum_{k=0}^{cm} \exptn \big[ \prob(B_{n+1}>m\mid \mathcal{A})\big]\prob(Z_n=k) +\sum_{k=cm+1}^\infty \exptn \bigg[ \prob \bigg( \sum_{j=1}^{k+1} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n=k)\nonumber\\ &\ge& \exptn \big[ \prob(B_{n+1}>m\mid \mathcal{A})\big]\prob(Z_n\le cm) +\exptn \bigg[ \prob \bigg( \sum_{j=1}^{cm} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] \prob(Z_n>cm), \end{eqnarray} since all the $B$'s are non-negative. By Lemma \ref{l:1.exp}, \begin{eqnarray}\label{est.1} \exptn \big[ \prob(B_{n+1}>m\mid \mathcal{A})\big]\prob(Z_n\le cm) &\sim& \overline F(\log m)\quad\mbox{as }m\to\infty.
\end{eqnarray} Further, by the law of large numbers, \begin{eqnarray*} \prob \bigg( \sum_{j=1}^{cm} B_{n+1,j}>m\Big|\mathcal{A} \bigg) &\stackrel{\rm a.s.}\to& 1\quad\mbox{as }c\to\infty. \end{eqnarray*} Hence, the dominated convergence theorem allows us to conclude that \begin{eqnarray}\label{est.2} \exptn \bigg[ \prob \bigg( \sum_{j=1}^{cm} B_{n+1,j}>m\Big| \mathcal{A} \bigg) \bigg] &\to& 1\quad\mbox{as }c\to\infty. \end{eqnarray} Finally, by the induction hypothesis and long-tailedness of $F$, for any fixed $c$, \begin{eqnarray}\label{est.3} \prob(Z_n>cm) &\sim& n\overline F(\log(cm)) \ \sim\ n\overline F(\log m)\quad\mbox{as }m\to\infty. \end{eqnarray} Substituting \eqref{est.1}--\eqref{est.3} into \eqref{est} and letting $c\to\infty$ we conclude the induction step for the lower bound. \end{proof} \iffalse \subsection{Second proof of Theorem~\ref{thm:tail:fixed:generation:size}} We start from analysing the {\it asymptotic behaviour of $\prob(Z>mA/(1-A))$.} We prove this theorem under additional assumption that the distribution $F$ of $\xi=\log(1/A-1)$ is subexponential ($F\in \mathcal{S}$), that is, \begin{eqnarray*} \overline{F*F}(x) &\sim& 2\overline F(x)\quad\mbox{as }x\to\infty. \end{eqnarray*} Recall that then there exists a function $h$ such that $F$ is $h$-insensitive, i.e. $h(x)\uparrow\infty$ and \begin{align}\label{hsens} \overline{F}(x\pm h(x))\sim \overline{F}(x)\quad\mbox{as }x\to\infty. \end{align} We will use later the existence of a function $x\rightarrow g(x)$ increasing to infinity such that $g(x)=o(x)$ and \begin{equation}\label{secondcondition} e^{-\widetilde{c} g(x)}=o(\overline{F}(\log x))\quad \text{as $x\to\infty$} \end{equation} for any $\widetilde{c}>0$. Note that above condition is equivalent to the existence of a function $g$ increasing to infinity such that $g(x)=o(e^x)$ and \begin{equation}\label{secondcondition2} e^{-g(x)}=o(\overline{F}(x))\quad \text{as $x\to\infty$}. 
\end{equation} Such a function exists by Theorem 2.6 of \cite{FKZ} since $F\in \mathcal{S}$ is heavy-tailed. In most common situations we have \begin{equation}\label{s-c-b} e^{-e^{h(x)}}=o(\overline{F}(x)) \end{equation} for $h$ satisfying \eqref{hsens}. Indeed, consider the following examples. \begin{itemize} \item {\it Regularly varying distribution.} In this case $\overline{F}(x)=l(x)x^{-\alpha}$, for $x\ge 1$. Here \eqref{hsens} holds for any $h(x)=o(x)$. By taking $h(x)=x^{\beta}$ for some $\beta <1$ we get that the LHS in \eqref{s-c-b} is $\exp (-e^{x^{\beta}})$ which is $o(e^{-x})$ which, in turn, is clearly $o(l(x)x^{-\alpha})$; \item {\it Weibull-type distribution.} In this case we have $\overline{F}(x)=e^{-x^{\beta}}$ for $\beta \in (0,1)$. Here \eqref{hsens} holds for any $h(x)=o(x^{1-\beta})$. By taking $h(x)=x^{\gamma}$ for some $\gamma \in (0,1-\beta)$, we get that again $\exp (-e^{h(x)})=o(e^{-x})$ which is clearly $o(\overline{F}(x))$; \item {\it Log-normal-type distribution.} In this case, $\overline{F}(x)=e^{-c(\log x)^{\gamma}}$ for some $\gamma >1$. Then \eqref{hsens} holds for any $h(x) = o(x/\log x)$, and again taking $h(x)=x^{\gamma}$ for some $\gamma \in (0,1)$ leads to \eqref{s-c-b}; \item {\it Subexponential distribution with very light tail $\overline{F}(x)=e^{-x/(\log x)^{\gamma}}$ for $\gamma >0$.} \footnote{To be checked that it is subexponential. It might be that it is not!!! There is none known criterium that works in this case.} Then \eqref{hsens} holds if and only if $h(x) = o((\log x)^{\gamma})$ and \eqref{s-c-b} is equivalent to $x/(\log x)^{\gamma} = o (e^{h(x)})$ which, in turn, is equivalent to $\log x = o(h(x))$. So $\gamma$ should be bigger than $1$. On the other hand, if $\gamma \le 1$, then \eqref{s-c-b} and \eqref{hsens} cannot hold together. 
\end{itemize} \begin{lemma} Assume that $Z$ is independent of $A$ and its distribution satisfies \begin{align}\label{dfZ} \prob (Z>m) \sim l\overline{F} (\log m) \quad\mbox{as }m\to\infty, \end{align} for some fixed $l>0$. Then $$ \prob (Z>mA/(1-A)) \sim (l+1) \overline{F}(\log m)\quad\mbox{as }m\to\infty. $$ \end{lemma} \begin{proof} From the basic properties of subexponential distributions (see e.g. \cite{FKZ}) one can get that \begin{align*} \prob (Z>mA/(1-A)) = \prob (\log Z +\xi>\log m) \sim (l+1) \overline{F}(\log m) \end{align*} which completes the proof. \end{proof} Recall that, conditioned on $A$, the random variables $B_i^A$ are independent copies of $B^A$ with geometric distribution $\prob(B^A=k)=A(1-A)^k$ for $k=0$, $1$, \ldots We now prove the {\it equivalence} of $\prob(Z>mA/(1-A))$ and $\prob (\sum_{i=1}^{Z+1} B_i^A >m)$. In other words, knowing that $Z$ attains ``large values'', we show that all $B_i^A$ can be replaced by their common mean $1/A-1$. \begin{propn}\label{mainprop} Assume that $Z$ does not depend on $\{A, \{B_i^A\}\}$ and has distribution function $G$ satisfying \eqref{dfZ}. Then \begin{align}\label{EQU} \prob \Bigg(\sum_{i=1}^{Z+1} B_i^A >m\Bigg) \sim (l+1) \overline{F}(\log m)\quad\mbox{as }m\to\infty. \end{align} \end{propn} \begin{proof} We prove this proposition by identifying the proper lower and upper bounds. {\bf Lower Bound.} Let $\psi_i^A = {\mathbb I} (B_i^A\ge 1)$. Observe that then $\psi_i^A\le B_i^A$ a.s. For any $\varepsilon \in (0,1)$, we may find $C>0$ such that $\prob(A\ge C) \ge 1-\varepsilon$ and $\prob(A<C)=:\delta_C> 0$. We choose $R>C$ such that $\prob (A>R)\le\varepsilon$.
Since both $B^A\ge 0$ and $\psi^A$ are stochastically decreasing in $A$, we get \begin{align}\label{eq} \prob \bigg(\sum_{i=1}^{Z+1} B_i^A >m \bigg)& \ge \prob \bigg(\sum_{i=1}^{Z+1} B_i^A >m,\ A<C \bigg) + \prob \bigg(\sum_{i=1}^{Z+1} \psi_i^A >m, R\ge A\ge C \bigg)\nonumber\\ & \ge \prob( B_1^A>m, A<C) + \prob\bigg(\sum_{i=1}^{Z+1} \psi_i^R >m \bigg) \prob(R\ge A\ge C)\nonumber\\ & \equiv P_1(m)+P_2(m) \prob(R\ge A\ge C) \end{align} where $\prob(R\ge A\ge C)\ge 1-2\varepsilon$. Here \begin{align*} P_1(m) & = \prob( B_1^A>m) - \prob( B_1^A>m, A\ge C) \\ & \ge \prob( B_1^A>m) - \prob( B_1^C>m) \\ & \sim \overline{F}(\log m)\quad\mbox{as }m\to\infty, \end{align*} since $\prob( B_1^C>m) = (1-\delta_C)^{m+1}$ decays exponentially fast. Next, for any $\gamma >0$, \begin{align*} P_2(m) & \ge \prob \bigg(\sum_{i=1}^{Z+1} \psi_i^R >m, Z+1>m(1-R)(1+\gamma) \bigg)\\ &\ge \prob ((Z+1)(1-R)(1+\gamma)>m) - \prob \bigg(\sum_{i=1}^{m(1-R)(1+\gamma)} \psi_i^R \le m \bigg)\\ & \sim \prob ((Z+1)(1-R)(1+\gamma)>m)\\ & \sim l \overline{F}(\log (m/(1-R)(1+\gamma)))\\ & \sim l \overline{F}(\log m) \quad\mbox{as }m\to\infty \end{align*} since the subtrahend in the second line decays exponentially fast and since $F$ is long-tailed. Therefore, \begin{align*} \prob \bigg(\sum_{i=1}^{Z+1} B_i^A >m \bigg) \ge (1+o(1)) (1+l(1-2\varepsilon)) \overline{F}(\log m). \end{align*} Since $\varepsilon$ may be taken arbitrarily small, the lower bound in \eqref{EQU} follows. {\bf Upper Bound.} Let $E_m$ and $D_m$ be the events \begin{align*} E_m = \bigg\{Z+1>m\frac{A}{1-A}\bigg\} \quad \mbox{and} \quad D_m = \bigg\{ \sum_{i=1}^{Z+1} B_i >m\bigg\}. \end{align*} Let $g(x)$ be any function increasing to $\infty$ such that $g(x) = o(x)$ and \begin{equation}\label{firstcondition} \overline{F}(\log x -\log g(x)) \sim \overline{F}(\log x). \end{equation} Then \begin{align}\label{UB1} \prob (D_m) \le \prob(D_m\cap\overline{E}_{m/g(m)})+\prob(E_{m/g(m)}).
\end{align} Here, for any event $D$, we denote by $\overline{D}$ its complement. Clearly, \begin{align*} \prob(E_{m/g(m)})\sim \prob (E_m) \sim (l+1)\overline{F}(\log m), \end{align*} so it is left to show that the first term on the right hand side of \eqref{UB1}, call it $\widetilde{P}(m)$, is of order $o(\overline F(\log m))$. We use the classical exponential Chebyshev inequality (which is called sometimes ``Chernoff inequality''). Let $c\in (0,1)$ and let $\alpha = \alpha_A$ be defined by the formula $\alpha=\log (1+cA)$. Choose any $C\in (0,1/2)$ such that $e^{-x}\le 1-x/2$, for all $x\in [0,C]$. Then $A/(1-A)\le 2A$ on the event $A\le C$ and, for all $m$ sufficiently large, \begin{align*} \widetilde{P}(m) & \le \prob \left(\sum_{i=1}^{mA/g(m)(1-A)} B_i^A >m,\ A\le C, \ \frac{m}{g(m)}\frac{A}{1-A}\ge 1\right) + o(\overline{F}(\log m))\\ & \le \exptn\left( e^{-\alpha m} \left(\exptn_A e^{\alpha B_1^A}\right)^{mA/g(m)(1-A)};\ A\le C,\ \frac{m}{g(m)}\frac{A}{1-A}\ge 1\right)+o(\overline{F}(\log m)) \\ & = \exptn\left( e^{-\alpha m} \left(\frac{A}{1-e^{\alpha}(1-A)}\right)^{mA/g(m)(1-A)};\ A\le C,\ \frac{m}{g(m)}\frac{A}{1-A}\ge 1\right) +o(\overline{F}(\log m))\\ & \le \exptn \Bigg(\exp \bigg[-cmA/2 + \log\frac{1}{1-c}\cdot\frac{2mA}{g(m)}\bigg];\ A\ge \frac{g(m)}{2m}\Bigg)+o(\overline{F}(\log m))\\ & \le \exptn \bigg(e^{-cmA/4};\ A\ge \frac{g(m)}{2m}\bigg) + o(\overline{F}(\log m))\\ & \le e^{-cg(m)/8} + o(\overline{F}(\log m)). \end{align*} Now the assertion follows from the asymptotics \eqref{secondcondition}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:tail:fixed:generation:size}] The proof is immediate by induction based on Theorem \ref{thm:tail:fixed:generation:size} and Proposition \ref{mainprop}. 
\end{proof} \fi \section{Proof of the lower bound, Theorem~\ref{thm:tail:generation:size.lower}} \label{sec:lower} Note that, by the strong law of large numbers, for any fixed $\varepsilon>0$, \begin{eqnarray}\label{C_S} \inf_{n\ge 1}\prob(C_S(c,\varepsilon,k,n)\mbox{ for all }k\le n) &\to& 1\quad\mbox{as }c\to\infty, \end{eqnarray} where \begin{eqnarray*} C_S(c,\varepsilon,k,n) &:=& \{|S_{k,n}-(n-k+1)\exptn\xi|\le c+\varepsilon(n-k+1)\} \end{eqnarray*} and $S_{k,n}= \xi_k+\ldots+\xi_n$. We show that, under the long-tailedness condition \eqref{assregvar}, the most probable way for a big value of $Z_n$ to occur is due to atypical random environment when one of the following events occurs, $k\le n-1$: \begin{eqnarray*} C_A(k,n) &:=& \Bigl\{A_k\le \frac{c_1}{M(m,k,n)},\ C_S(c_2,\varepsilon,j,n-1)\mbox{ for all }j\in[k+1,n-1]\Bigr\}, \end{eqnarray*} where \begin{eqnarray*} M(m,k,n) &:=& m e^{\varepsilon(n-1-k)+c_2}\prod_{j=k+1}^{n-1}\frac{1}{a_{A_j}} \ =\ m e^{\varepsilon(n-1-k)+c_2-S_{k+1,n-1}}, \end{eqnarray*} $a_A:=\exptn \{B\mid A\}=1/A-1=e^\xi$, $c_1$, $c_2$, $\varepsilon>0$ are fixed, $c_2$ will be sent to infinity later on, while $c_1$ and $\varepsilon$ will be sent to $0$. Since $A$ is bounded by $\widehat A<1$, $a_A$ is bounded away from $0$ by $1/\widehat A-1$. Let us bound from below the probability of the union of events $C_A(k,n)$. We start with the following lower bound \begin{eqnarray}\label{lower.for.union} \prob\biggl(\bigcup_{k=0}^{n-1} C_A(k,n)\biggr) &\ge& \sum_{k=0}^{n-1}\prob(C_A(k,n)) -\sum_{k\not=l}\prob(C_A(k,n)\cap C_A(l,n)). 
\end{eqnarray} On the event $C_S(c_2,\varepsilon,k+1,n-1)$ we have \begin{eqnarray}\label{lln} a(n-1-k)\ \le\ \varepsilon(n-1-k)+c_2-S_{k+1,n-1} \ \le\ 2c_2+(2\varepsilon+a)(n-1-k) \end{eqnarray} and hence \begin{eqnarray*} \sum_{k=0}^{n-1}\prob(C_A(k,n)) &\ge& \sum_{k=0}^{n-1}\prob \biggl(A_k\le \frac{c_1}{m e^{2c_2+(2\varepsilon+a)(n-1-k)}},\ C_S(c_2,\varepsilon,j,n-1) \mbox{ for all }j\in[k+1,n-1]\biggr)\\ &=& \sum_{k=0}^{n-1}\prob \biggl(A_k\le \frac{c_1}{m e^{2c_2+(2\varepsilon+a)(n-1-k)}}\biggr) \prob\bigl(C_S(c_2,\varepsilon,j,n-1) \mbox{ for all }j\in[k+1,n-1]\bigr)\\ &\ge& \prob\bigl(C_S(c_2,\varepsilon,j,n-1)\mbox{ for all }j\in[1,n-1]\bigr) \sum_{k=0}^{n-1} \prob\biggl(A_k\le \frac{c_1}{m e^{2c_2+(2\varepsilon+a)(n-1-k)}}\biggr), \end{eqnarray*} and \begin{eqnarray*} \sum_{k\not=l}\prob(C_A(k,n)\cap C_A(l,n)) &\le& \sum_{k\not=l}\prob\biggl(A_k\le \frac{c_1}{m e^{a(n-1-k)}},\ A_l\le \frac{c_1}{m e^{a(n-1-l)}}\biggr)\\ &=& \sum_{k\not=l}\prob\biggl(A_k\le \frac{c_1}{m e^{a(n-1-k)}}\biggr) \prob\biggl(A_l\le \frac{c_1}{m e^{a(n-1-l)}}\biggr)\\ &\le& \biggl(\sum_{k=0}^{n-1} \prob\biggl(A_k\le \frac{c_1}{m e^{a(n-1-k)}}\biggr)\biggr)^2. \end{eqnarray*} As follows from \eqref{startid}, \begin{eqnarray*} \sum_{k=0}^{n-1} \prob\biggl(A_k\le \frac{c_1}{m e^{2c_2+(2\varepsilon+a)(n-1-k)}}\biggr) &=& \sum_{k=0}^{n-1} \prob\biggl(\xi\ge\log\biggl(\frac{m e^{2c_2+(2\varepsilon+a)k}}{c_1} -1\biggr)\biggr)\\ &\ge& \sum_{k=0}^{n-1} \overline F(\log m +2c_2+(2\varepsilon+a)k-\log c_1)\\ &\ge& \frac{1}{2\varepsilon+a} \int_{\log m+2c_2-\log c_1}^{\log m+2c_2-\log c_1+(2\varepsilon+a)n} \overline F(x)dx \end{eqnarray*} since the tail function $\overline F(x)$ is decreasing. 
Therefore, \begin{eqnarray*} \sum_{k=0}^{n-1} \prob\biggl(A_k\le \frac{c_1}{m e^{2c_2+(2\varepsilon+a)(n-1-k)}}\biggr) &\ge& \frac{1+o(1)}{2\varepsilon+a} \int_{\log m}^{\log m+(2\varepsilon+a)n} \overline F(x)dx \end{eqnarray*} as $m\to\infty$ uniformly for all $n\ge1$ because the distribution $F$ is long-tailed. Similarly, \begin{eqnarray*} \sum_{k=0}^{n-1} \prob\biggl(A_k\le \frac{c_1}{m e^{a(n-1-k)}}\biggr) &\le& \frac{1+o(1)}{a} \int_{\log m}^{\log m+na} \overline F(x)dx. \end{eqnarray*} Therefore, \begin{eqnarray*} \sum_{k=0}^{n-1}\prob(C_A(k,n)) &\ge& \frac{1+o(1)}{2\varepsilon+a} \int_{\log m}^{\log m+na} \overline F(x)dx \prob\bigl(C_S(c_2,\varepsilon,j,n-1)\mbox{ for all }j\in[1,n-1]\bigr), \end{eqnarray*} and \begin{eqnarray*} \sum_{k\not=l}\prob(C_A(k,n)\cap C_A(l,n)) &=& O\biggl(\int_{\log m}^{\log m+na} \overline F(x)dx\biggr)^2 \end{eqnarray*} as $m\to\infty$ uniformly for all $n\ge1$. Substituting these bounds into \eqref{lower.for.union} and applying \eqref{C_S}, for any fixed $\varepsilon>0$ we can conclude the following lower bound, \begin{eqnarray}\label{lower.for.union.final} \prob\biggl(\bigcup_{k=0}^{n-1} C_A(k,n)\biggr) &\ge& \frac{g(c_2)+o(1)}{2\varepsilon+a} \int_{\log m}^{\log m+na} \overline F(x)dx \end{eqnarray} as $m\to\infty$ uniformly for all $n\ge1$, where $g(c_2)\to 1$ as $c_2\to\infty$. As above, conditioning on $\mathcal A$ yields \begin{eqnarray}\label{Zn.A.C} \prob(Z_n>m) &=& \exptn[\prob(Z_n>m\mid \mathcal A)]\nonumber\\ &\ge& \exptn[\prob(Z_n>m\mid \mathcal A);\ C_A(n)], \end{eqnarray} where $C_A(n):=\bigcup_{k=0}^{n-1} C_A(k,n)$. Then, owing to \eqref{lower.for.union.final}, for the proof of \eqref{eq:stat1} it suffices to show that \begin{eqnarray}\label{Zn.A.C.1} \liminf_{m\to\infty}\inf_{C_A(n)}\prob(Z_n>m\mid \mathcal A) &\ge& e^{-c_1} \quad\mbox{uniformly for all }n\ge 1. \end{eqnarray} Hence we are left with the proof of \eqref{Zn.A.C.1}. 
Since the event $C_A(n)$ is the union of events $C_A(k,n)$, $k\le n-1$, the probability of the event \begin{eqnarray*} C_B(k,n) &:=& \biggl\{B_{k+1,1}>m e^{c_2+\varepsilon(n-1-k)}\prod_{j=k+1}^{n-1}\frac{1}{a_{A_j}}\biggr\}, \end{eqnarray*} conditionally on $C_A(n)$, possesses the following asymptotic lower bound \begin{eqnarray*} \prob(C_B(k,n)\mid C_A(n)) &\ge& (1-A)^{m e^{c_2+\varepsilon(n-1-k)}\prod_{j=k+1}^{n-1}\frac{1}{a_{A_j}}}\\ &\ge& \biggl(1-\frac{c_1}{m e^{c_2+\varepsilon(n-1-k)}\prod_{j=k+1}^{n-1}\frac{1}{a_{A_j}}} \biggr)^{m e^{c_2+\varepsilon(n-1-k)}\prod_{j=k+1}^{n-1}\frac{1}{a_{A_j}}}\\ &\to& e^{-c_1}\quad\mbox{as }m\to\infty. \end{eqnarray*} Therefore, it only remains to show that \begin{eqnarray}\label{l.b.ind.conv} \inf_{C_A(k,n)}\prob(Z_n>m\mid C_B(k,n),\ \mathcal A) &\to& 1 \end{eqnarray} as $m\to\infty$ uniformly for all $k\le n-1$ and $n\ge 1$. To prove this convergence, let us note that, conditioned on $\mathcal A$, \begin{eqnarray*} \prob\bigl[Z_j\le la_{A_{j-1}}e^{-\varepsilon} \big\mid Z_{j-1}=l,\mathcal A\bigr] &=& \prob\bigl[B_{j,1}+\ldots+B_{j,l+1}\le la_{A_{j-1}}e^{-\varepsilon} \big\mid \mathcal A\bigr]\\ &\le& \prob\biggl[\frac{B_{j,1}}{a_{A_{j-1}}}+\ldots+\frac{B_{j,l}}{a_{A_{j-1}}} \le le^{-\varepsilon} \bigg\mid \mathcal A\biggr]\\ &=& \prob\biggl[\biggl(e^{-\varepsilon/2}-\frac{B_{j,1}}{a_{A_{j-1}}}\biggr) +\ldots+\biggl(e^{-\varepsilon/2}-\frac{B_{j,l}}{a_{A_{j-1}}}\biggr) \ge l(e^{-\varepsilon/2}-e^{-\varepsilon}) \bigg\mid \mathcal A\biggr]\\ &\le& \prob\biggl[\biggl(e^{-\varepsilon/2}-\frac{B_{j,1}}{a_{A_{j-1}}}\biggr) +\ldots+\biggl(e^{-\varepsilon/2}-\frac{B_{j,l}}{a_{A_{j-1}}}\biggr) \ge l e^{-\varepsilon}\varepsilon/2 \bigg\mid \mathcal A\biggr]. 
\end{eqnarray*} Applying the exponential Markov inequality, we obtain the following upper bound, for all $\lambda>0$, \begin{eqnarray*} \prob\bigl[Z_j\le l a_{A_{j-1}}e^{-\varepsilon} \big\mid Z_{j-1}=l,\mathcal A\bigr] &\le& e^{-l\lambda e^{-\varepsilon}\varepsilon/2} \exptn e^{\lambda\bigl(\bigl(e^{-\varepsilon/2}-\frac{B_{j,1}}{a_{A_{j-1}}}\bigr) +\ldots+\bigl(e^{-\varepsilon/2}-\frac{B_{j,l}}{a_{A_{j-1}}}\bigr)\bigr)}. \end{eqnarray*} Since \begin{eqnarray*} \exptn\Bigl[e^{\lambda\bigl(e^{-\varepsilon/2}-\frac{B}{a_A}\bigr)} \Big\mid A\Bigr] &=& e^{\lambda(1-\varepsilon)}\frac{A}{1-(1-A)e^{-\lambda\frac{A}{1-A}}}\\ &=& e^{\frac{\lambda}{1-A}-\lambda\varepsilon} \frac{A}{e^{\lambda\frac{A}{1-A}}-(1-A)} \ \le\ e^{\frac{\lambda}{1-A}-\lambda\varepsilon} \frac{1}{\frac{\lambda}{1-A}+1} \end{eqnarray*} and $A$ is bounded away from $1$, there exists a sufficiently small $\lambda_0>0$ such that \begin{eqnarray*} \exptn\Bigl[e^{\lambda_0\bigl(e^{-\varepsilon/2}-\frac{B}{a_A}\bigr)} \Big\mid A\Bigr] &\le& 1\quad\mbox{for all }A\in(0,\widehat{A}). \end{eqnarray*} Therefore, \begin{eqnarray*} \prob\bigl[Z_j\le l a_{A_{j-1}}e^{-\varepsilon} \big\mid Z_{j-1}=l,\ \mathcal A\bigr] &\le& e^{-l\delta}\quad\mbox{where }\delta=\lambda_0e^{-\varepsilon}\varepsilon/2>0. \end{eqnarray*} which, due to monotonicity property of the branching process $Z_n$, implies that \begin{eqnarray*} \prob\bigl[Z_j\le l a_{A_{j-1}}e^{-\varepsilon} \big\mid Z_{j-1}\ge l,\ \mathcal A\bigr] &\le& e^{-l\delta}. \end{eqnarray*} Then the induction arguments lead to the following upper bound \begin{eqnarray*} \prob\biggl[Z_n\le l e^{-\varepsilon(n-1-k)}\prod_{i=k+1}^{n-1} a_{A_i} \bigg\mid Z_{k+1}\ge l,\ \mathcal A\biggr] &\le& \sum_{j=k+1}^{n-1} e^{-l\delta e^{-\varepsilon(j-1-k)}\prod_{i=k+1}^{j-1} a_{A_i}}. 
\end{eqnarray*} We take \begin{eqnarray*} l &=& m e^{c_2+\varepsilon(n-1-k)}\prod_{i=k+1}^{n-1}\frac{1}{a_{A_i}} \end{eqnarray*} to conclude that \begin{eqnarray*} \prob(Z_n>m \mid C_B(k,n),\ \mathcal A) &\ge& 1-\sum_{j=k+1}^{n-1} e^{-m\delta e^{c_2+\varepsilon(n-1-j)}\prod_{i=j+1}^{n-1} a_{A_i}}. \end{eqnarray*} Due to the representation \begin{eqnarray*} \log e^{c_2}\prod_{i=j}^{n-1}\frac{A_i}{1-A_i} &=& c_2+\sum_{i=j}^{n-1}\log\frac{A_i}{1-A_i} \ =\ c_2-\sum_{i=j}^{n-1}\xi_i, \end{eqnarray*} we get \begin{eqnarray*} \prob(Z_n>m\mid C_B(k,n),\ \mathcal A) &\ge& 1-\sum_{j=k+1}^{n-1} e^{-m\delta e^{\varepsilon(n-1-j)}}, \end{eqnarray*} for any sequence of $\xi$'s such that \begin{eqnarray*} c_2-\sum_{i=j}^{n-1}\xi_i &\ge& 0\quad\mbox{for all }j\in[k,n -1], \end{eqnarray*} which is the case on $C_S(c_2,\varepsilon,k,n-1)$ and hence on $C_A(k,n)$, as follows from the first inequality in \eqref{lln} for all $\varepsilon\in(0,-\exptn\xi)$. So, we have shown \eqref{l.b.ind.conv}, and the proof of the first lower bound in Theorem \ref{thm:tail:generation:size.lower} is complete. The lower limit for the stationary distribution follows similar arguments if we start with an analogue of \eqref{Zn.A.C}, \begin{eqnarray}\label{Zn.A.C.infty} \prob(Z>m) &=& \lim_{n\to\infty}\prob(Z_n>m)\nonumber\\ &\ge& \lim_{n\to\infty}\exptn[\prob(Z_n>m\mid \mathcal A);\ C_A(n)]. \end{eqnarray} Then, similar to \eqref{lower.for.union.final}, we may use the fact that $F_I$ is long-tailed to conclude that \begin{eqnarray}\label{lower.for.union.final.infty} \lim_{n\to\infty}\prob(C_A(n)) &\ge& \frac{g(c_2)+o(1)}{2\varepsilon+a} \overline F_I(\log m) \quad\mbox{as }m\to\infty, \end{eqnarray} which together with \eqref{Zn.A.C.1} justifies the lower bound for the stationary tail distribution. 
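The object of study throughout is the branching process with unit immigration, $Z_{n+1}=\sum_{i=1}^{Z_n+1}B_{n+1,i}$, where, given the environment value, each $B$ is geometric on $\{0,1,\ldots\}$ with success probability $A$, so $\exptn\{B\mid A\}=1/A-1$. As an illustrative sanity check of this mechanism (not part of the proof), the exact mean recursion $\exptn Z_{n+1}=(\exptn Z_n+1)\,\exptn[1/A-1]$ can be verified by Monte Carlo; the environment law $A\sim\mathrm{Uniform}(0.2,0.8)$ below is an assumption made purely for illustration.

```python
import numpy as np

def simulate_Z(n, trials, rng):
    """Monte Carlo for Z_{k+1} = sum_{i=1}^{Z_k + 1} B_{k+1,i}, where, given a
    fresh environment A_k ~ Uniform(0.2, 0.8) each generation (an illustrative
    assumption), each B is geometric on {0,1,...} with success probability A_k."""
    Z = np.zeros(trials, dtype=np.int64)
    for _ in range(n):
        A = rng.uniform(0.2, 0.8, size=trials)
        # the sum of (Z+1) iid geometric(A) counts of failures is negative binomial
        Z = rng.negative_binomial(Z + 1, A)
    return Z

rng = np.random.default_rng(0)
Z5 = simulate_Z(5, 20_000, rng)

# exact mean recursion: E Z_{k+1} = (E Z_k + 1) * E[1/A - 1]
mean_a = (np.log(0.8) - np.log(0.2)) / 0.6 - 1.0
m = 0.0
for _ in range(5):
    m = (m + 1.0) * mean_a
print(m, Z5.mean())
```

The empirical mean of $Z_5$ over many trials agrees with the deterministic recursion up to Monte Carlo error.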
\section{Proof of the upper bound, Theorem~\ref{thm:tail:generation:size.upper}} \label{sec:upper} Let $W_n$ be a branching process without immigration, that is, $W_0=1$ and \begin{eqnarray*} W_{n+1} &=& \sum_{i=1}^{W_n}B_{n+1,i}\quad\mbox{for }n\ge 0. \end{eqnarray*} Let $W_n^{(1)}$ be the number of particles in $Z_n$ generated by the immigrant arriving at time $1$, $W_n^{(2)}$ be the number of particles in $Z_n$ generated by the immigrant arriving at time $2$, and so on. All these processes become extinct in finite time and are independent when conditioned on the environment $\mathcal A$. In addition, $W_n^{(k)}$ has the same distribution as $W_{n-k}$ given the same success probabilities. By the definition of $Z_n$, \begin{eqnarray*} Z_n &=& W_n^{(1)}+W_n^{(2)}+\ldots+W_n^{(n)}, \end{eqnarray*} and hence, for any fixed $\varepsilon>0$, \begin{eqnarray*} \prob(Z_n>m) &\le& \prob\bigl(W_n^{(k)}>me^{-\varepsilon(n-k)}(1-e^{-\varepsilon}) \mbox{ for some }k\le n\bigr)\\ &=& \exptn\bigl[\prob\bigl(W_n^{(k)}>me^{-\varepsilon(n-k)}(1-e^{-\varepsilon}) \mbox{ for some }k\le n\mid\mathcal A\bigr)\bigr]. \end{eqnarray*} Splitting the area of integration into two parts, we get the following upper bound \begin{eqnarray}\label{upperboundW} \prob(Z_n>m) &\le& \prob(S_{k,n-1}>\log m-\sqrt{\log m}-2\varepsilon(n-k) \mbox{ for some }k\in[0,n-1])\nonumber\\ && + \exptn\bigl[\prob\bigl(W_n^{(k)}>me^{-\varepsilon(n-k)}(1-e^{-\varepsilon}) \mbox{ for some }k\le n\mid\mathcal A\bigr);\nonumber\\ &&\hspace{20mm}S_{k,n-1}\le\log m-\sqrt{\log m}-2\varepsilon(n-k) \mbox{ for all }k\in[0,n-1]\bigr].
\end{eqnarray} Using \eqref{stab.cond} and strong subexponentiality of $F$ we conclude that \begin{eqnarray}\label{upper.1} \lefteqn{\prob(S_{k,n-1}+2\varepsilon(n-k)>\log m-\sqrt{\log m} \mbox{ for some }k\in[0,n-1])}\nonumber\\ &&\hspace{40mm}\sim\ \frac{1}{a-2\varepsilon} \int_{\log m-\sqrt{\log m}}^{\log m-\sqrt{\log m}+n(a-2\varepsilon)} \overline F(x)dx \end{eqnarray} as $m\to\infty$ uniformly for all $n$, see \cite{Dima2002} and also \cite{FKZ}, Theorem 5.3. Further, by the Markov inequality, \begin{eqnarray*} \prob\bigl(W_n^{(k)}>me^{-\varepsilon(n-k)}(1-e^{-\varepsilon})\mid\mathcal A\bigr) &\le& \frac{\exptn(W_n^{(k)}\mid\mathcal A)}{me^{-\varepsilon(n-k)}(1-e^{-\varepsilon})}\\ &=& \frac{e^{S_{k,n-1}}}{me^{-\varepsilon(n-k)}(1-e^{-\varepsilon})}. \end{eqnarray*} Hence, on the event $\{S_{k,n-1}\le\log m-\sqrt{\log m}-2\varepsilon(n-k) \mbox{ for all }k\in[0,n-1]\}$ we have \begin{eqnarray*} \prob\bigl(W_n^{(k)}>me^{-\varepsilon(n-k)}(1-e^{-\varepsilon})\mid\mathcal A\bigr) &\le& \frac{e^{-\varepsilon(n-k)}}{e^{\sqrt{\log m}}(1-e^{-\varepsilon})}, \end{eqnarray*} which implies that \begin{eqnarray}\label{upper.2} \lefteqn{\exptn\bigl[\prob\bigl(W_n^{(k)}>m(1-\varepsilon)^{n-k}\varepsilon \mbox{ for some }k\le n\mid\mathcal A\bigr);}\nonumber\\ &&\hspace{40mm}S_{k,n-1}\le\log m-\sqrt{\log m}-2\varepsilon(n-k) \mbox{ for all }k\in[0,n-1]\bigr] \nonumber\\ &&\hspace{20mm}\le\ \frac{1}{e^{\sqrt{\log m}}(1-e^{-\varepsilon})} \sum_{k=0}^\infty e^{-\varepsilon(n-k)}\nonumber\\ &&\hspace{20mm}=\ \frac{1}{e^{\sqrt{\log m}}(1-e^{-\varepsilon})^2}. \end{eqnarray} Substituting \eqref{upper.1} and \eqref{upper.2} into \eqref{upperboundW}, we deduce that, uniformly for all $n\ge 1$, \begin{eqnarray*} \prob(Z_n>m) &\le& \frac{1+o(1)}{a-2\varepsilon} \int_{\log m-\sqrt{\log m}}^{\log m-\sqrt{\log m}+na} \overline F(x)dx + \frac{1}{e^{\sqrt{\log m}}(1-e^{-\varepsilon})^2}. 
\end{eqnarray*} By the condition \eqref{cond.sqrt}, $\overline F(\log m-\sqrt{\log m})\sim\overline F(\log m)$ and $\overline F(\log m)e^{\sqrt{\log m}}\to\infty$ as $m\to\infty$, hence \begin{eqnarray*} \prob(Z_n>m) &\le& \frac{1+o(1)}{a-2\varepsilon} \int_{\log m}^{\log m+na} \overline F(x)dx, \end{eqnarray*} uniformly for all $n\ge 1$. Due to the arbitrary choice of $\varepsilon>0$, the proof of the upper bound \eqref{eq:stat1.eq} is complete. The above arguments can be streamlined if we make use of the link \eqref{perp} to stochastic difference equations. Indeed, conditioning on the environment leads to \begin{eqnarray*} \prob(Z_n>m) &=& \exptn\bigl[\prob(Z_n>m\mid\mathcal A)\bigr]\\ &\le& \prob\bigl[\exptn(Z_n\mid\mathcal A)>me^{-\sqrt{\log m}}\bigr]\\ && + \exptn\bigl[\prob\bigl(Z_n>m\mid\mathcal A\bigr); \ \exptn(Z_n\mid\mathcal A)\le me^{-\sqrt{\log m}}\bigr]. \end{eqnarray*} For the first term on the right hand side we apply the asymptotics \eqref{perp1}. To estimate the second term, we can apply the Markov inequality to get \begin{eqnarray*} \prob\bigl(Z_n>m\mid\mathcal A\bigr) &\le& \frac{\exptn(Z_n\mid\mathcal A)}{m}\\ &\le& \frac{me^{-\sqrt{\log m}}}{m}\ =\ e^{-\sqrt{\log m}} \end{eqnarray*} on the event $\exptn(Z_n\mid\mathcal A)\le me^{-\sqrt{\log m}}$, which completes the proof. The proof of the stationary upper bound \eqref{eq:stat2.eq} follows similar arguments, starting with the upper bound \begin{eqnarray*} \prob(Z>m) &=& \lim_{n\to\infty}\prob(Z_n>m)\\ &\le& \lim_{n\to\infty}\prob\bigl[\exptn(Z_n\mid\mathcal A)>me^{-\sqrt{\log m}}\bigr]\\ && + \lim_{n\to\infty}\exptn\bigl[\prob\bigl(Z_n>m\mid\mathcal A\bigr); \ \exptn(Z_n\mid\mathcal A)\le me^{-\sqrt{\log m}}\bigr] \end{eqnarray*} and further using the asymptotics \eqref{perp2} instead of \eqref{perp1}, which is valid due to subexponentiality of the integrated tail distribution $F_I$. The proof of Theorem \ref{thm:tail:generation:size.upper} is complete.
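The two requirements extracted from \eqref{cond.sqrt} above, namely $\overline F(\log m-\sqrt{\log m})\sim\overline F(\log m)$ together with $\overline F(\log m)e^{\sqrt{\log m}}\to\infty$, are easy to check numerically for a concrete heavy tail; the Pareto-type choice $\overline F(x)=(1+x)^{-2}$ below is an illustrative assumption, not taken from the text.

```python
import math

def bar_F(x):
    # illustrative regularly varying tail, \bar F(x) = (1+x)^{-2}
    return (1.0 + x) ** -2

ratios, log_prods = [], []
for x in (1e2, 1e4, 1e6):                                   # x plays the role of log m
    ratios.append(bar_F(x - math.sqrt(x)) / bar_F(x))       # should tend to 1
    log_prods.append(math.log(bar_F(x)) + math.sqrt(x))     # log(\bar F(x) e^{sqrt x}) -> +infinity
print(ratios, log_prods)
```

Working directly with $x=\log m$ avoids forming the astronomically large $m=e^x$; the ratio approaches $1$ while the logarithm of the product grows without bound.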
\section{Proof of the principle of a single atypical environment, Theorem~\ref{thm:PSLE}} \label{sec:PSLE} As follows from the arguments presented in Section \ref{sec:lower}, for any fixed $c$ and $\varepsilon>0$, \begin{eqnarray*} \prob\biggl(\bigcup_{k=0}^{n-1} E_n^{(k)}(m)\biggr) &\sim& \frac{1}{a+\varepsilon} \int_{\log m}^{\log m+(a+\varepsilon)n} \overline F(x)dx\\ &\ge& \frac{1}{a+\varepsilon} \int_{\log m}^{\log m+an} \overline F(x)dx \end{eqnarray*} and the event presented on the left hand side implies $Z_n>m$ with high probability, that is, \begin{eqnarray*} \prob\biggl(Z_n>m\ \Big|\ \bigcup_{k=0}^{n-1} E_n^{(k)}(m)\biggr) &\to& 1 \quad\mbox{as }m\to\infty\mbox{ uniformly for all }n. \end{eqnarray*} Then the equality \begin{eqnarray*} \prob\biggl(\bigcup_{k=0}^{n-1} E_n^{(k)}(m)\ \Big|\ Z_n>m\biggr) &=& \prob\biggl(Z_n>m\ \Big|\ \bigcup_{k=0}^{n-1} E_n^{(k)}(m)\biggr) \frac{\prob\Bigl(\bigcup_{k=0}^{n-1} E_n^{(k)}(m)\Bigr)}{\prob(Z_n>m)} \end{eqnarray*} and Theorem \ref{thm:tail:generation:size.upper} imply that \begin{eqnarray*} \lim_{m\to\infty}\inf_{n} \prob\biggl(\bigcup_{k=0}^{n-1} E_n^{(k)}(m)\ \Big|\ Z_n>m\biggr) &\ge& \frac{a}{a+\varepsilon}. \end{eqnarray*} Letting $\varepsilon\downarrow 0$ concludes the proof. \section{Related models} \label{sec:extensions} The techniques developed in this paper may be applied to analysing a variety of similar models. We mention here a few of them. {\bf Random-size immigration.} One may replace size-1 immigration by a {\it random-size-im\-migra\-tion} where random sizes are i.i.d. and independent of everything else, with a common light-tailed distribution (or, more generally, the sizes may be stochastically bounded by a random variable with a light-tailed distribution). 
A branching process $\{\widehat{Z}_n, n\ge 0\}$ with {\it state-dependent size-1 immigration} is a particular case here: an immigrant arrives only when the previous generation produces no offspring: \begin{align*} {\widehat Z}_{n+1}= \sum_{i=1}^{\max (1,{\widehat Z}_n)} B_{n+1,i}, \quad n\ge 0. \end{align*} Clearly, $\widehat{Z}_n \le Z_n$ a.s., for any $n$. Moreover, one can show that, for each $n$, the lower bounds for $\prob(Z_n>m)$ and $\prob(\widehat{Z}_n>m)$ are asymptotically equivalent. Then, in particular, the statement of Theorem \ref{thm:tail:fixed:generation:size} stays valid with $\widehat{Z}_n$ in place of $Z_n$. {\bf Continuous-space analogue.} Instead of the recursion \eqref{Zn}, one may consider a ``continuous-space'' recursion of the form \begin{align*} Z_{n+1} = Y_{n+1} + \int_0^{Z_n} dB_{n+1}(t) \end{align*} where the $B_n$ are subordinators with a light-tailed L\'evy measure (that depends on random parameters) and $\{Y_n\}$ are i.i.d. ``innovations'' with a light-tailed distribution. A similar problem for a branching process with immigration, but without random environment, has been studied in the recent paper \cite{FM}.
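The a.s. domination $\widehat{Z}_n\le Z_n$ can be seen through the obvious coupling: run both processes on the same environment and the same offspring draws, letting $\widehat{Z}$ consume the first $\max(1,\widehat{Z}_n)\le Z_n+1$ of the $Z_n+1$ summands. A minimal sketch (the environment law is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
Z, Zhat = 0, 0          # both processes started empty
dominated = True
for n in range(30):
    A = rng.uniform(0.2, 0.8)             # illustrative environment law
    # B geometric on {0,1,...} with success probability A;
    # Generator.geometric counts trials up to the first success, so subtract 1
    B = rng.geometric(A, size=Z + 1) - 1
    Zhat = int(B[: max(1, Zhat)].sum())   # first max(1, Zhat) <= Z+1 shared summands
    Z = int(B.sum())                      # all Z+1 summands
    dominated = dominated and (Zhat <= Z)
print(Z, Zhat, dominated)
```

By induction, $\max(1,\widehat{Z}_n)\le Z_n+1$ at every step, so the domination holds along the whole coupled trajectory.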
\section{Introduction} In order to accurately describe noise-induced phenomena in spatially-extended systems, it is important to add fluctuations to continuum models that respect some underlying structure, like a Hamiltonian and the sampling of the Gibbs/Boltzmann/canonical distribution. Guaranteeing this kind of fluctuation-dissipation relation (a.k.a.~detailed balance) is not always obvious, especially in condensed matter physics, for which accurate phenomenological models are not always built from first principles. One example is the Landau-Lifshitz-Gilbert equation describing a single magnetic spin, which requires multiplicative noise (thereby creating an effective electric field) rather than additive noise to ensure sampling of the Gibbs distribution, see \cite{kohn2005magnetic}. In effect, the noise is projected onto the surface of the sphere representing the configuration space of the constant-magnitude spin vector. Another example is the regularization of stochastic partial differential equations (SPDEs) by correlating the noise in space. While the corresponding dynamics occur at regularity scales that allow the evolution to be treated via now well-understood methods for stochastic paths in the PDE setting, see for instance \cite{de1999stochastic,da2014stochastic}, entirely different distributions from their uncorrelated-noise counterparts may be sampled. Although white-noise solutions to SPDEs in situations with much less regularity can be understood with the introduction of regularity structures by Hairer in \cite{Hairer:2014hd}, there are still dimensional restrictions, even in the case where the deterministic part is parabolic and hence strongly coercive, see for instance the recent work \cite{bruned2019geometric} on stochastic harmonic map heat flows.
Our goal in this work is to combine the two considerations above related to sampling and regularization, deriving an SPDE model for a spatially-extended magnetic spin system with spatially ``colored'' noise designed to sample an invariant Gibbs measure. We derive such a continuum model designed to sample an invariant Gibbs measure from a microscopic Metropolis Hastings (MH) algorithm. The MH algorithm \cite{hastings1970monte,metropolis1953equation} allows the random walk dynamics to be separated from the Hamiltonian structure in the invariant measure: a simple random-walk proposal, $\tilde X_i = X_i^n + \varepsilon w_i^n$ with $i=1\dots N$ indexing space and the $w_i^n$ independent normally distributed random variables, will sample the Gibbs measure \begin{equation}\label{eq:Gibbs} \mu(\vec X) = Z^{-1} e^{-\beta H(\vec X)}, \end{equation} where $Z$ is the partition function and $\beta^{-1}=k_BT$, if an accept probability of $$ \alpha = 1\wedge e^{-\beta ( H(\tilde{\vec X}) - H(\vec X^n))} $$ where $a\wedge b = \min(a,b)$ is used for arbitrary bounded Hamiltonian $H$ (i.e.~$\vec X^{n+1} = \tilde{\vec X}$ with probability $\alpha$ and $\vec X^n$ otherwise). The stochastic differential equation (SDE) $$ d\vec X = - \nabla H dt + \sqrt{2\beta^{-1}} d\vec W $$ also samples the invariant measure \eqref{eq:Gibbs}. Furthermore the MH dynamics converge to the SDE dynamics in the limit as the proposal size $\varepsilon\to 0$. Thus the limiting MH dynamics can be used to construct SDE models that preserve the invariant measure \eqref{eq:Gibbs} in more complex situations. 
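As a concrete illustration of the scheme just described (a minimal sketch, not the paper's spin model): random-walk Metropolis-Hastings with the accept rule $\alpha = 1\wedge e^{-\beta(H(\tilde X)-H(X^n))}$, applied to the quadratic $H(x)=x^2/2$, for which the Gibbs measure \eqref{eq:Gibbs} is Gaussian with variance $\beta^{-1}$.

```python
import numpy as np

def mh_gibbs(H, beta, eps, n_steps, rng, x0=0.0):
    """Random-walk Metropolis-Hastings: propose x + eps*w and accept with
    probability min(1, exp(-beta*(H(x_new) - H(x)))), which samples
    the Gibbs density proportional to exp(-beta * H)."""
    x = x0
    samples = np.empty(n_steps)
    for n in range(n_steps):
        x_new = x + eps * rng.standard_normal()
        # an acceptance ratio > 1 always passes the uniform test below
        if rng.random() < np.exp(-beta * (H(x_new) - H(x))):
            x = x_new
        samples[n] = x
    return samples

rng = np.random.default_rng(1)
beta = 2.0
s = mh_gibbs(lambda x: 0.5 * x * x, beta, eps=1.0, n_steps=200_000, rng=rng)
var = s[50_000:].var()   # Gibbs measure here is N(0, 1/beta), so variance ~ 0.5
print(var)
```

After discarding a burn-in, the empirical variance of the chain matches the Gibbs variance $\beta^{-1}$ up to Monte Carlo error.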
For example, if the random walk proposal is changed to $\tilde{\vec X} = \vec X^n + \varepsilon B \vec w^n$ for a constant matrix $B$, then the SDE \begin{equation}\begin{aligned}\label{eq:SDEgeneric} d\vec X & = - BB^T \nabla H dt + \sqrt{2\beta^{-1}} B d\vec W \end{aligned}\end{equation} also samples the invariant measure \eqref{eq:Gibbs} (but not for every non-constant $B(\vec X)$, c.f.~\cite{PhysRevE.76.011123}). In Appendix \ref{App:FP_Addative} we confirm this by direct substitution into the (constant $B$) Fokker-Planck equation \begin{equation}\begin{aligned} \label{eq:constB_FP} \partial_t \rho(x,t) &= \sum_{i=1}^{3N} \partial_i \left[ ( BB^T \nabla H )_i \rho(x,t)\right] \\ & + \beta^{-1} \sum_{i,j =1}^{3N} (BB^T)_{ij} \partial_i \partial_j \rho(x,t) . \end{aligned}\end{equation} Equation \eqref{eq:SDEgeneric}, with symmetric, non-negative definite covariance matrix $BB^T$, has spatially-correlated noise and still samples the Gibbs distribution \eqref{eq:Gibbs}. A continuum limit of the SDE \eqref{eq:SDEgeneric} exists if the Hamiltonian $H$ and covariance matrix $BB^T$ are appropriately scaled with system size $N$. In this work, we consider a system of $N$ spins (with periodic boundary conditions), or vectors on $\mathbb{S}^m$ for some $m \geq 1$, thereby introducing a confining geometry, and investigate how this interacts with spatially-correlated ``colored'' noise, deriving an appropriately regularized stochastic partial differential equation that still samples an invariant measure of the form \eqref{eq:Gibbs}. The spatially correlated noise coupled to the geometric constraint will result in a proposal of the form $\tilde{\vec X} = \vec X^n + \varepsilon B (\vec X^n) \vec w^n$, where unfortunately {\it the colored noise proposal is no longer symmetric}.
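The constant-$B$ claim around \eqref{eq:SDEgeneric} admits a quick numerical check (a sketch with illustrative choices of $H$, $B$, and $\beta$, not from the text): since the proposal $\varepsilon B\vec w^n$ remains symmetric for constant $B$, the plain MH accept rule leaves \eqref{eq:Gibbs} invariant, so for a quadratic $H(\vec x)=\frac12\vec x^T A\vec x$ the empirical covariance should approach $(\beta A)^{-1}$ whatever $B$ is.

```python
import numpy as np

# Illustrative (assumed) choices of H(x) = 0.5 x^T A x, constant B, and beta;
# the Gibbs covariance is (beta*A)^{-1} regardless of B.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.0], [0.8, 0.6]])   # BB^T != identity: correlated proposal
beta = 1.0

def H(x):
    return 0.5 * x @ A @ x

rng = np.random.default_rng(2)
x = np.zeros(2)
samples = np.empty((200_000, 2))
eps = 0.7
for n in range(samples.shape[0]):
    x_new = x + eps * B @ rng.standard_normal(2)   # correlated but still symmetric
    if rng.random() < np.exp(-beta * (H(x_new) - H(x))):
        x = x_new
    samples[n] = x

emp_cov = np.cov(samples[50_000:].T)
target_cov = np.linalg.inv(beta * A)
print(emp_cov)
print(target_cov)
```

The choice of $B$ changes the mixing speed of the chain but not the stationary covariance.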
However, we prove that the MH dynamics can still be approximated by an SDE system similar to that of \eqref{eq:SDEgeneric}, and that for a canonical choice of the matrix $B$ related to the geometry, the SDE system still samples the correct invariant measure. For ease of exposition and physical importance, we will restrict ourselves to $m =2$ and work only with spins defined as vectors on $\mathbb{S}^2$. We build on our recent work \cite{gao2018limiting}, which showed that on a general torus in any dimension, $\mathbb{T}^d$, the MH dynamics for a system of spatially-uncorrelated ``white'' noise driven spins with confining geometry converged to the dynamics of an SDE system as $\varepsilon$, the proposal size, went to zero. We also considered the $N\to\infty$ limit of the dynamics while quenching the noise ($\beta =N^\gamma$ for $\gamma$ sufficiently large) to arrive at the harmonic map heat flow equation \begin{equation}\label{eq:whitePDE} \partial_t \sigma(x,t) = - \sigma \times ( \sigma \times \Delta \sigma). \end{equation} This could also be referred to as the overdamped Landau-Lifshitz-Gilbert (LLG) equation. Quenching the noise was essential in the derivation due to the known convergence issues with stochastic partial differential equations (SPDEs) in spatial dimensions greater than one (c.f.~\cite{ryser2012well}). The convergence from the SDE model to a PDE model also relied on the regularity of the harmonic map heat flow equation, which can fail for $\mathbb{T}^d \to \mathbb{S}^m$ in finite time for dimensions $d > 2$ due to bubbling singularities, see \cite{struwe1988evolution,guo1993landau}. To derive a regularized SPDE limit ($\beta$ constant with $N\to\infty$), we begin with a random walk for the MH algorithm that projects the now spatially-correlated Gaussian noise onto the tangent plane of the underlying geometry.
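A small but useful observation about \eqref{eq:whitePDE}: the right-hand side $-\sigma\times(\sigma\times\Delta\sigma)$ is, pointwise, a tangent vector, since $\sigma\cdot(\sigma\times v)=0$ for any $v$, which is why the flow is compatible with the constraint $|\sigma|=1$. This elementary identity, which underlies every projection used below, can be checked directly (an illustrative sketch with hand-picked test vectors):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def flow_rhs(sigma, lap):
    """Right-hand side of the harmonic map heat flow: -sigma x (sigma x lap)."""
    return [-c for c in cross(sigma, cross(sigma, lap))]

if __name__ == "__main__":
    sigma = [0.0, 0.0, 1.0]   # a unit spin
    lap = [1.0, 2.0, 3.0]     # stand-in value for the Laplacian at a point
    rhs = flow_rhs(sigma, lap)
    print(dot(sigma, rhs))    # 0.0: the flow direction is tangent to the sphere
    d = dot(sigma, lap)
    print([v - d*s for v, s in zip(lap, sigma)])  # matches rhs: -σ×(σ×v) = v - (σ·v)σ
```

The second print illustrates the vector identity $-\sigma\times(\sigma\times v) = v - (\sigma\cdot v)\sigma$, i.e.~the cross-cross product is exactly the orthogonal projection $\textrm{P}^\perp_\sigma$ used below.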
After taking the proposal size $\varepsilon\to0$ and arriving at a system of SDEs, we find that unlike the white noise case, the choice of $\sigma \times$ as the projection is crucial for sampling the desired distribution \eqref{eq:Gibbs}. Therefore, the regularized non-local SPDE that samples the Gibbs distribution is \begin{align} \label{eq:spde} \partial_t \sigma(x,t) & = -\sigma(x,t) \times \int_{\mathbb{T}^d} C(x-y) (\sigma \times \Delta \sigma) (y,t) dy \nonumber \\ & + \sqrt{2\beta^{-1}} \sigma(x,t) \times \eta^C(x,t), \end{align} where $C$ is a non-local operator, to be described below in a variety of cases, that encodes the covariance structure of the colored noise, and $\eta^C(x,t)$ is colored-in-space, white-in-time noise (i.e. $\mathbb{E}[ \eta^C(x,t)\eta^C(y,s)] = C(x-y)\delta(t-s)$), interpreted in the Stratonovich sense. \subsection{Prior Work} Having an appropriately regularized stochastic limit is important for studying thermal effects in ferromagnets, such as magnetization reversal~\cite{Wernsdorfer:1997eb,Coffey:2012ez}. Existing field models continue to use spatially-uncorrelated white noise in the stochastic LLG equation so as to maintain the equilibrium distribution, proposing for example weak formulations of the solutions and numerical finite element schemes (c.f.~Ch.~2 of \cite{banas2014stochastic}). Equation \eqref{eq:spde} is in contrast to regularizing the LLG equation by changing the energy functional to include a term to control the modulus of continuity~\cite{Chugreeva:2018iz}. It also complements other works that derive equations to preserve the equilibrium distribution, such as in the case of inhomogeneous magnitude of magnetic spins~\cite{Nishino:2015gv}, for temporally-colored noise but for finitely many spins~\cite{Atxitia:2009io}, and for the stochastic Landau-Lifshitz-Bloch equation~\cite{Evans:2012ex}.
More generally, physical models with confining geometries are natural generalizations of the SPDE limits derived using colored noise for unconstrained random walks in \cite{MPS} and more recently in \cite{Kuntz:2019fh}. See also \cite{PhysRevE.60.6343}, which focuses on quantum field theories but also discusses the effective action of a generic SPDE system through the tools of fluctuation-dissipation and invariant measures, with examples including reaction-diffusion-decay systems, KPZ (noisy Burgers), and purely dissipative SPDEs. Since our approach starts from the MH algorithm, it is worth pointing out that the MH algorithm itself is widely used in particle statistics and sampling algorithms, see for instance \cite{newman1999monte,binder1993monte,landau2014guide,batrouni2004metastable,maccari2016numerical}. It also arises when adopting the Bayesian approach to inverse problems and signal processing~\cite{stuart2010inverse,hairer2011signal}. This has led to the study of optimal scalings for the unconstrained random walk MH algorithm and diffusion limits for certain forms of probability distributions~\cite{roberts1997weak,breyer2000metropolis,MPS,jourdain2014optimal,jourdain2015optimal,Kuntz:2019fh}. Specifically, for product measures in \cite{roberts1997weak} and the Gibbs distribution of a lattice model in \cite{breyer2000metropolis}, weak convergence to Langevin diffusions has been shown by comparing generator functions. The pioneering work \cite{MPS}, based in part upon earlier works on sampling \cite{hairer2005analysis,hairer2007analysis}, extended this type of result to non-product form measures and demonstrated weak convergence to an SPDE. Subsequent works \cite{jourdain2014optimal,jourdain2015optimal,Kuntz:2019fh} consider scaling limits of systems started away from their equilibrium distributions.
Building on our previous work \cite{gao2018limiting}, which studied the limiting dynamics of a geometric MH process with white noise in the proposal, we fill a gap in the above results by showing strong convergence of trajectories started far from equilibrium to a non-local SPDE in a geometric setting, with the underlying dynamics of the process designed to sample a (non-product form) invariant measure using colored noise with a given covariance structure. Similar to \cite{MPS}, we derive a drift term that is implicitly driven by a non-local diffusion operator. In the context of random walks, this is related to fractional diffusion operators, but here we are interested in the effects of colored noise on the geometric evolution. \subsection{Outline of Results} The remainder of the paper is as follows. In section~\ref{sec:notation} we lay out the vector notation we adopt for the paper. In section~\ref{sec:white_noise} we review our results from \cite{gao2018limiting}, pointing out a few interesting facts that will be in contrast to the colored noise case. We extend these results to the case of colored noise in section \ref{sec:colored_noise}, outlining the derivation of the limiting SDE system from the MH dynamics in section \ref{sec:MHtoSDE} (details of the proof in Appendix \ref{app:SDEderivation}), discussing the correct projection of the noise onto the tangent plane of the underlying geometry to ensure the SDE system samples the desired distribution \eqref{eq:Gibbs} in section \ref{sec:cross}, proving that the invariant measure of the MH dynamics converges to this same invariant measure in section \ref{sec:IM}, and discussing the Fourier representation of the non-local SPDE \eqref{eq:spde} in section \ref{sec:PSDE}, with an outline of the well-posedness in Appendix \ref{A:LWP} when the noise is trace class.
We support our trajectory-wise convergence results with direct numerical simulations in section \ref{sec:num} as well as illuminate the differences between the choice of two different projections of the noise onto the tangent plane of the underlying geometry. We give concluding remarks in section \ref{sec:conclusions}. \section{Notation\label{sec:notation}} We present our results for the case of one periodic spatial dimension, $\mathbb{T}$, and spins that live on $\mathbb{S}^2$, although this can be extended to other dimensions for both the periodic domain and the spherical target. It becomes convenient to adopt different notation in different contexts, which we summarize here. The torus with unit length is subdivided with $x_i = (i-1)/N$ for $i=1\dots N$ with a spin located at each $x_i$. We take $ {\boldsymbol \sigma} _i^n$ for $i=1\dots N$ as the collection of the $N$ spins of the MH dynamics at time-step $n$, each a 3-dimensional vector, with components \begin{equation} {\boldsymbol \sigma} _i^n = \left< \sigma _{i,x}^n, \sigma _{i,y}^n, \sigma _{i,z}^n \right> \end{equation} satisfying $( \sigma _{i,x}^n)^2 + ( \sigma _{i,y}^n)^2+ ( \sigma _{i,z}^n)^2 = 1$ for each $i=1\dots N$ and each integer $n\ge 0$. The $3N$-dimensional vector \begin{equation} \sig^n = \left< \sigma _{1,x}^n \dots \sigma _{N,x}^n\; \sigma _{1,y}^n \dots \sigma _{N,y}^n\; \sigma _{1,z}^n \dots \sigma _{N,z}^n \right> \end{equation} contains all the components of all the spins. We similarly define: $\sigp{n}$, $ {\boldsymbol {\tilde \sigma}} _i^n$ and $ {\tilde \sigma} _{i,q}^n$ $q\in\{x,y,z\}$ for the MH proposal at time-step $n$; $\w^n$, $\wi_i^n\in\mathbb{R}^3$, and $\wiq_{i,q}^n$ for the independent standard Gaussian random variables used to generate the proposal at time-step $n$; $\s(t)$, $\si_i(t)$, $\siq_{i,q}(t)$ for the solution to the limiting SDE system at time $t$. 
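Since the component-major ordering of $\sig^n$ (all $x$ components, then all $y$, then all $z$) determines the block structure of every matrix below, a small sketch of the flattening convention may help (illustrative code, not part of the paper's analysis):

```python
def flatten(spins):
    """spins: list of N vectors [sx, sy, sz] -> 3N vector, component-major order."""
    return [s[q] for q in range(3) for s in spins]

def unflatten(vec):
    """Inverse of flatten: 3N vector -> list of N vectors [sx, sy, sz]."""
    n = len(vec) // 3
    return [[vec[i], vec[n + i], vec[2*n + i]] for i in range(n)]

if __name__ == "__main__":
    spins = [[1, 2, 3], [4, 5, 6]]
    print(flatten(spins))  # [1, 4, 2, 5, 3, 6]: x components first, then y, then z
```

With this convention, an $N\times N$ spatial covariance acting identically on each component appears three times along the block diagonal of the corresponding $3N\times 3N$ matrix.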
Since the noise will be correlated in each component, it will also be useful to represent it as $$ \w^n = \left< \wq_x^n \; \wq_y^n \; \wq_z^n \right> $$ with each $N\times 1$ vector $$ \wq_q^n = \left< \wiq_{1,q}\dots \wiq_{N,q} \right> \textrm{ for } q\in\{x,y,z\}. $$ \section{White Noise} \label{sec:white_noise} Here we present an overview of our previous work, \cite{gao2018limiting}, pointing out a few interesting facts that will be in contrast to the colored noise case. We remind the reader that though we limit our discussion here to the cases $d=1$, $m=2$ for ease of exposition, all results here extend to $d\geq1$, $m \geq 1$ with small modifications. To arrive at an appropriate continuum limit, we begin with the standard Metropolis Hastings algorithm using independent Gaussian (``white'') noise to propose a new state. The proposed new configuration of the $N$ spins $ {\boldsymbol {\tilde \sigma}} _i^n$, $i=1,2,\dots N$ requires picking a random direction in the tangent plane, moving along that direction, and projecting back onto the sphere, \begin{equation}\label{eq:proposal} {\boldsymbol {\tilde \sigma}} ^{n}_i = \frac{ {\boldsymbol \sigma} _i^n + \epsilon \nui_i^W }{ \| {\boldsymbol \sigma} _i^n + \epsilon \nui_i^W \| } \end{equation} where $\nui_i^W = \textrm{P}^\perp_{ {\boldsymbol \sigma} _i^n} (\wi_i^n)$ is the projection of the three-dimensional normal random vector $\wi_i^n$ into the tangent plane of $ {\boldsymbol \sigma} _i^n$, $\textrm{P}^\perp_x( y) = y-(x\cdot y)x $, or in matrix form $(I - xx^T)y$.
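One proposal step \eqref{eq:proposal} is easy to transcribe: project the noise into the tangent plane, move, and renormalize back onto $\mathbb{S}^2$. A sketch (stdlib Python; the spin and noise values below are arbitrary illustrations):

```python
import math

def project_tangent(sigma, w):
    """P_perp_sigma(w) = w - (sigma . w) sigma, i.e. (I - sigma sigma^T) w."""
    d = sum(s * x for s, x in zip(sigma, w))
    return [x - d * s for s, x in zip(sigma, w)]

def propose(sigma, w, eps):
    """White-noise MH proposal: step in the tangent plane, renormalize to the sphere."""
    nu = project_tangent(sigma, w)
    y = [s + eps * v for s, v in zip(sigma, nu)]
    norm = math.sqrt(sum(c * c for c in y))
    return [c / norm for c in y]

if __name__ == "__main__":
    sigma = [0.0, 0.0, 1.0]
    w = [0.3, -0.4, 0.25]
    new = propose(sigma, w, 0.1)
    print(sum(c * c for c in new))  # ≈ 1: the proposal stays on S²
```

The projected vector is orthogonal to $\sigma$ by construction, and the renormalization keeps the proposal exactly on the sphere for any finite $\epsilon$.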
Defining the Hamiltonian \begin{equation} \label{eq:H} H(\sig) = \tfrac{1}{N} \sum_{i=1}^N \tfrac{N^2}{2} \| {\boldsymbol \sigma} _{i+1} - {\boldsymbol \sigma} _i \|^2 \end{equation} with $ {\boldsymbol \sigma} _{N+1} = {\boldsymbol \sigma} _1$ for periodic boundary conditions, the accept probability \begin{equation}\label{eq:alpha} \alpha = 1\wedge e^{-\beta ( H(\sigp{n}) - H(\sig^n))} \end{equation} ensures sampling of the Gibbs distribution \eqref{eq:Gibbs}, where $\sigp{n}$ and $\sig^n$ are the $3N$-vectors of the proposal components and current spin components, respectively. Symmetry in the proposal is crucial for \eqref{eq:alpha} to be the correct accept probability to sample the Gibbs distribution. We return to this point in Sec.~\ref{sec:colored_noise}, where symmetry is lost when $\nui_i^W$ in the proposal is replaced with its correlated noise version. By taking the lowest order term in $\varepsilon$ of the mean and added noise, the MH step is approximately equivalent to the Euler-like step $$ {\boldsymbol \sigma} _i^{n+1} - {\boldsymbol \sigma} _i^n \approx -\tfrac12 \beta \varepsilon^2 \textrm{P}^\perp_{ {\boldsymbol \sigma} _i^n}\left( \frac{\partial H}{\partial {\boldsymbol \sigma} _{i}^n}\right) - \varepsilon^2 {\boldsymbol \sigma} _i^n + \varepsilon \textrm{P}^\perp_{ {\boldsymbol \sigma} _i^n} (\wi_i^n). $$ Our previous work showed the trajectory-wise convergence as $\epsilon\to 0$ of the MH dynamics to the corresponding It\^o SDE \begin{equation}\label{eq:whiteSDE} \mathrm{d}\si_i = \left[ \textrm{P}^\perp_{\si_i}(\Delta_N\si_i) - \tfrac{2N}{\beta} \si_i \right] \dt + \sqrt{2\beta^{-1}N} \textrm P^\perp_{\si_i}( \dWi_i ) \end{equation} under the time rescaling $\delta t = \beta \varepsilon^2/2N$, where $\Wi_i$ are 3-dimensional Brownian motions and \[ \Delta_N {\boldsymbol \sigma} _i = N^2( {\boldsymbol \sigma} _{i+1}-2 {\boldsymbol \sigma} _i + {\boldsymbol \sigma} _{i-1}) \] is the discretized Laplace operator.
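A direct transcription of \eqref{eq:H} and the discretized Laplacian (a sketch with illustrative configurations; periodic indexing is handled with modular arithmetic):

```python
def hamiltonian(spins):
    """H(sigma) = (1/N) sum_i (N^2/2) ||sigma_{i+1} - sigma_i||^2, periodic in i."""
    n = len(spins)
    total = 0.0
    for i in range(n):
        d = [a - b for a, b in zip(spins[(i + 1) % n], spins[i])]
        total += 0.5 * n * n * sum(c * c for c in d)
    return total / n

def laplacian(spins, i):
    """Delta_N sigma_i = N^2 (sigma_{i+1} - 2 sigma_i + sigma_{i-1}), periodic."""
    n = len(spins)
    return [n * n * (spins[(i + 1) % n][q] - 2 * spins[i][q] + spins[(i - 1) % n][q])
            for q in range(3)]

if __name__ == "__main__":
    aligned = [[0.0, 0.0, 1.0]] * 4
    print(hamiltonian(aligned))            # 0.0 for an aligned configuration
    print(laplacian(aligned, 0))           # [0.0, 0.0, 0.0]
    print(hamiltonian([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))  # N=2 example: 4.0
```

An aligned configuration is a ground state ($H=0$, vanishing Laplacian), while the two-spin example can be checked by hand: each of the two periodic bonds contributes $\tfrac{N^2}{2}\| {\boldsymbol \sigma} _{i+1}- {\boldsymbol \sigma} _i\|^2 = 4$, giving $H = 4$ after the $1/N$ prefactor.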
In the case of white noise, we point out that other projection operators could be used in place of $\textrm P^\perp_x(y)$ above. The only requirement in the MH algorithm is that the white noise is projected onto the tangent plane of $ {\boldsymbol \sigma} _i^n$. Two other natural choices would be $ {\boldsymbol \sigma} _i^n \times \wi_i^n$ and $- {\boldsymbol \sigma} _i^n \times ( {\boldsymbol \sigma} _i^n \times \wi_i^n)$, the latter being equivalent to $\textrm P^\perp_{ {\boldsymbol \sigma} _i^n} (\wi_i^n)$ defined above; both produce white noise in the tangent plane. We will observe in Sec.~\ref{sec:cross} that this freedom is strongly tied to the white noise setting and that care must be taken when moving to the colored noise case. To see the equivalence of the two natural projection choices of the cross and cross-cross product in the white-noise case, we show that the limiting SDE systems for the MH dynamics produce the exact same Fokker-Planck equation in either case, so using either is justified. Define the $3N\times 1$ vector of independent noises as \begin{equation}\label{eq:dWvec} \dW = \left< \dWq_{x}\; \dWq_{y} \; \dWq_{z} \right> \end{equation} with each $N\times 1$ vector $$ \dWq_q = \left< \dWiq_{1,q}\dots \dWiq_{N,q} \right> \; \textrm{for } q\in\{x,y,z\} $$ so that the $3N$ (It\^o) equations analogous to \eqref{eq:whiteSDE} are \begin{equation}\begin{aligned}\label{eq:ItoWhite} \mathrm{d}\s = PP^T \Delta_N\s \;\dt - \frac{2N}{\beta} \s \;\dt + \sqrt{2\beta^{-1}N} P \dW. \end{aligned}\end{equation} We consider two choices for the block-defined projection matrix $P$ next. Note that both of these projection matrices contribute the same factor $-2\s$ to the It\^o correction term, $-2N\beta^{-1}\s$, in the above SDE.
For the single-spin projection $ {\boldsymbol \sigma} _i \times \dWi_i$, the block-defined projection matrix is \begin{equation}\label{eq:Pcross} P_1 = \begin{pmatrix} 0 & -Z & Y \\ Z & 0 & -X \\ -Y & X & 0 \end{pmatrix} \end{equation} and the block-defined projection matrix for $- {\boldsymbol \sigma} _i \times ( {\boldsymbol \sigma} _i \times \dWi_i)$ is \begin{equation}\label{eq:Pcrosscross} P_2 = \begin{pmatrix} I-X^2 & -XY & -XZ \\ -XY & I-Y^2 & -YZ \\ -XZ & -YZ & I-Z^2 \end{pmatrix} \end{equation} where each of the $N \times N$ block matrices $X$, $Y$, and $Z$ is the diagonal matrix \[ Q = \begin{pmatrix} \sigma _{1,q} & 0 & \dots & 0 \\ 0 & \sigma _{2,q} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sigma _{N,q} \end{pmatrix} \] for $Q\in\{X,Y,Z\}$ with corresponding $q\in\{x,y,z\}$. The Fokker-Planck equation for \eqref{eq:ItoWhite} is \begin{equation}\begin{aligned} \partial_t \rho(\s,t) &= -\sum_{i=1}^{3N} \partial_i \left[ (PP^T \Delta_N \s )_i \rho(\s,t)\right] \\ & + \frac{2N}{\beta}\sum_{i=1}^{3N} \partial_i \left[ \si_i \rho(\s,t) \right]\\ & + \frac{N}{\beta} \sum_{i,j =1}^{3N} \partial_i \partial_j \left[ (PP^T)_{ij} \rho(\s,t) \right]. \end{aligned}\end{equation} Notice that this equation depends only on $PP^T$, which is identical for both $P_1$ and $P_2$, \begin{equation}\label{eq:equivalent} P_1 P_1^T = P_2 P_2^T = P_2, \end{equation} after using that $ \sigma _{i,x}^2 + \sigma _{i,y}^2 + \sigma _{i,z}^2 =1$ for each $i$. Thus, both projections produce statistically equivalent trajectories in the white noise setting, and direct substitution can verify that \eqref{eq:Gibbs} is an invariant measure for both (see Appendix \ref{App:FP_Mult}). The key point when taking colored noise instead of white noise, as we will see in Sec.~\ref{sec:cross}, is that {\em the covariance matrix for the noise and the projection matrix do not commute} and therefore $PP^T$ does not appear isolated in the colored noise Fokker-Planck equation.
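The identity \eqref{eq:equivalent} is easy to confirm numerically. The sketch below (stdlib Python, with a few illustrative unit spins) assembles $P_1$ and $P_2$ from their diagonal blocks and compares $P_1P_1^T$, $P_2P_2^T$, and $P_2$ entrywise:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def assemble(blocks):
    """Stack a 3x3 grid of N x N blocks into one 3N x 3N matrix."""
    n = len(blocks[0][0])
    return [sum((blocks[bi][bj][i] for bj in range(3)), [])
            for bi in range(3) for i in range(n)]

def projection_matrices(spins):
    """P1 (cross product) and P2 (cross-cross product) for a list of unit spins."""
    n = len(spins)
    def diag(q):
        return [[spins[i][q] if i == j else 0.0 for j in range(n)] for i in range(n)]
    X, Y, Z = diag(0), diag(1), diag(2)
    O = [[0.0] * n for _ in range(n)]
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    def neg(A):
        return [[-a for a in row] for row in A]
    def sub(A, B):
        return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    P1 = assemble([[O, neg(Z), Y], [Z, O, neg(X)], [neg(Y), X, O]])
    P2 = assemble([[sub(I, matmul(X, X)), neg(matmul(X, Y)), neg(matmul(X, Z))],
                   [neg(matmul(X, Y)), sub(I, matmul(Y, Y)), neg(matmul(Y, Z))],
                   [neg(matmul(X, Z)), neg(matmul(Y, Z)), sub(I, matmul(Z, Z))]])
    return P1, P2

if __name__ == "__main__":
    spins = [[0.6, 0.8, 0.0], [0.0, 0.6, 0.8], [1.0, 0.0, 0.0]]  # unit vectors
    P1, P2 = projection_matrices(spins)
    T = lambda A: [list(r) for r in zip(*A)]
    P1P1T, P2P2T = matmul(P1, T(P1)), matmul(P2, T(P2))
    err1 = max(abs(a - b) for ra, rb in zip(P1P1T, P2P2T) for a, b in zip(ra, rb))
    err2 = max(abs(a - b) for ra, rb in zip(P1P1T, P2) for a, b in zip(ra, rb))
    print(err1, err2)  # both ≈ 0, confirming P1 P1ᵀ = P2 P2ᵀ = P2
```

The check only succeeds because the spins are unit vectors; the constraint $ \sigma _{i,x}^2+ \sigma _{i,y}^2+ \sigma _{i,z}^2=1$ is exactly what collapses the two products onto $P_2$.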
In the colored case, by contrast, the two projection matrices $P_1$ and $P_2$ produce statistically different ensembles. We also point out that the It\^o correction term in \eqref{eq:ItoWhite}, $-2N\beta^{-1}\s$, is completely independent of the choice of projection, the Stratonovich form of \eqref{eq:ItoWhite} being \begin{equation}\begin{aligned}\label{eq:whiteSDEstrat} \mathrm{d} \s = PP^T \Delta_N\s \;\dt + \sqrt{2\beta^{-1}N} P \circ \dW. \end{aligned}\end{equation} This fact remains true in the case of colored noise: the It\^o correction term depends only on the covariance matrix of the noise and not on the choice of projection (see Sec.~\ref{sec:colored_noise}, with details in Appendix \ref{append:ito_correction}). Our previous work also considered the continuum limit of the SDE system \eqref{eq:whiteSDE}. Defining a lattice spacing $\delta x = N^{-1}$ and taking a scaling of $\beta = N^{\gamma}$ for $\gamma$ sufficiently large to quench the noise (numerical simulations verified convergence for $\beta \sim N^{3/2}$), we showed convergence to the (local, deterministic) PDE \eqref{eq:whitePDE} under some regularity assumptions on the solution to the harmonic map heat flow equation. This convergence holds regardless of the number of spatial dimensions considered, provided we assume regularity of the solution to the corresponding harmonic map heat flow with domain $\mathbb{T}^d, d>2$. As mentioned in the Introduction, the regularity of the solution for $d > 2$ is a delicate issue when considering the fixed-$\beta$ continuum limit to an SPDE, and one may not be guaranteed convergence even in the case of white noise. \section{Colored Noise} \label{sec:colored_noise} Using spatially-correlated noise in the proposal of the MH algorithm, which leads to regularized SPDEs in the continuum limit, intuitively accounts for the fact that at smaller atomic scales the true physical system cannot be further subdivided into infinitely small units with independent fluctuations.
A natural way to introduce correlations in the noise that decay with distance is to ``color'' the noise, requiring the power in the Fourier representation to decay with frequency. In the discrete setting, to form various covariance matrices satisfying our periodic boundary conditions, we use a periodic Fourier basis with power in each frequency mode that decays with rate $\kappa$. We again remind the reader that for ease of exposition we have set $d=1$ in this section, but extending to higher dimensions is just a matter of using higher dimensional discrete Fourier transform machinery. However, in subsection \ref{sec:PSDE} below about SPDE limits, we will state the limiting equations for general dimension $d$. Specifically we decompose an $N\times N$ covariance matrix \begin{subequations}\label{Cbar} \begin{equation} \bar{C}_N = \phi \bar{D}^2 \phi^T \end{equation} with diagonal matrix $\bar{D}_{jj} = \lambda_j = d_j^{-\kappa}$ with frequencies $d_j$ defined as \begin{equation}\label{Dbar} d_j = \left\{ \begin{array}{ll} 1 & j=1 \\ 2\pi (j-1) & 2\le j \le \tfrac{N}{2} + 1 \\ 2\pi(j-\tfrac{N}{2} - 1) & \tfrac{N}{2} + 2 \le j \le N \end{array}\right. \end{equation} and the matrix of Fourier eigenvectors given by \begin{equation}\label{Phi} \phi_{ij} = \left\{ \begin{array}{ll} 1 & j=1 \\ \sqrt{2} \cos(\frac{1}{N} d_j (i-1) ) & 2\le j \le \tfrac{N}{2} \\ \cos(\frac{1}{N} d_j (i-1) ) & j = \tfrac{N}{2} + 1\\ \sqrt{2} \sin( \frac{1}{N} d_j (i-1) ) & \tfrac{N}{2}+2 \le j \le N \end{array}\right. \end{equation} \end{subequations} with $ \sum_i \phi_{ij}^2 = N $ for each $j$. With this scaling, the eigenvectors $\phi_{ij}$ converge to the discrete set of Fourier functions $1$, $\sqrt{2}\cos(2\pi x), \sqrt{2}\cos(4\pi x), \dots $ and $\sqrt{2}\sin(2\pi x), \sqrt{2}\sin(4\pi x)\dots $ as $N\to\infty$, forming an orthonormal set, with inner product of two functions defined as $\int_0^1 f(x) g(x) dx $. Also due to this scaling $\Tr(\bar{C}_N) = N \sum_{j=1}^N \lambda_j^2$. 
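The construction \eqref{Cbar}--\eqref{Phi} is straightforward to transcribe and sanity-check in code (a sketch for even $N$, stdlib Python): each column of $\phi$ has squared norm $N$, $\kappa=0$ gives $\bar C_N = N I$, and $\Tr(\bar C_N) = N\sum_j \lambda_j^2$.

```python
import math

def frequencies(n):
    """The d_j of the decomposition, for even n (1-indexed j mapped to a 0-indexed list)."""
    d = [1.0]
    d += [2 * math.pi * (j - 1) for j in range(2, n // 2 + 2)]
    d += [2 * math.pi * (j - n // 2 - 1) for j in range(n // 2 + 2, n + 1)]
    return d

def fourier_matrix(n):
    """The matrix of scaled Fourier eigenvectors phi_{ij} (constant, cosines, sines)."""
    d = frequencies(n)
    phi = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            t = d[j - 1] * (i - 1) / n
            if j == 1:
                phi[i - 1][j - 1] = 1.0
            elif j <= n // 2:
                phi[i - 1][j - 1] = math.sqrt(2) * math.cos(t)
            elif j == n // 2 + 1:
                phi[i - 1][j - 1] = math.cos(t)   # Nyquist mode
            else:
                phi[i - 1][j - 1] = math.sqrt(2) * math.sin(t)
    return phi

def covariance(n, kappa):
    """C_bar = phi D^2 phi^T with eigenvalues lambda_j^2 = d_j^{-2 kappa}."""
    d = frequencies(n)
    lam2 = [dj ** (-2 * kappa) for dj in d]
    phi = fourier_matrix(n)
    return [[sum(phi[i][j] * lam2[j] * phi[k][j] for j in range(n)) for k in range(n)]
            for i in range(n)]

if __name__ == "__main__":
    C0 = covariance(8, 0.0)
    print(C0[0][0], C0[0][1])  # ≈ 8 and ≈ 0: kappa = 0 recovers N·I ("white" noise)
    C1 = covariance(8, 1.0)
    print(sum(C1[i][i] for i in range(8)))  # trace = N * sum_j d_j^{-2}
```

Increasing $\kappa$ concentrates the spectrum in the low-frequency modes, which is exactly the broadening of $\bar C_N$ about its diagonal described in the text.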
Note that taking $\kappa=0$ creates equal power in all modes, reducing $\bar{C}_N$ to a diagonal matrix with $N$ on the diagonal representing uncorrelated ``white'' noise. Increasing $\kappa$ increases the length scale of the covariance, broadening $\bar{C}_N$ which is peaked along the diagonal. For use in the MH algorithm, at time step $n$, we form three vectors $\bar{C}_N^{1/2} \wq_q^n$ for $q\in\{x,y,z\}$ with the $\wq_q^n$ a set of vectors of independent uncorrelated standard Gaussian random variables. The vectors $\bar{C}_N^{1/2} \wq_q^n$ are independent for different $q$ but spatially-correlated with covariance matrices given by $\bar{C}_N$. This correlated noise is projected into the tangent plane of the corresponding spin, defining $$ \nui_i^n = P^\perp_{ {\boldsymbol \sigma} _i^n} ( (\bar{C}_N^{1/2}\wq^n_x)_i, (\bar{C}_N^{1/2}\wq^n_y)_i, (\bar{C}_N^{1/2}\wq^n_z)_i ). $$ The analogous proposal to \eqref{eq:proposal} is \begin{equation}\label{eq:proposalcolor} {\boldsymbol {\tilde \sigma}} _i^n = \frac{ {\boldsymbol \sigma} _i^n + \varepsilon \nui_i^n }{ \| {\boldsymbol \sigma} _i^n + \varepsilon \nui_i^n \| }. \end{equation} The first thing to note is that using $\nui_i^n$ in place of $\nui^W_i$ in the proposal creates a non-symmetric proposal and therefore using the accept probability \eqref{eq:alpha} no longer guarantees sampling of the Gibbs distribution \eqref{eq:Gibbs}. In particular, since our noise is now spatially correlated but our projections are completely local, the probability of undoing a particular rotation is not equal to the probability of doing it. In the white noise case, the tangent vector $\tilde{\nui}_i^W$ to get $ {\boldsymbol \sigma} _i^n$ back from the proposal $ {\boldsymbol {\tilde \sigma}} _i^n$ is unique and has the same magnitude as $\nui_i^W$. 
Then $\mathbb{P} ( {\boldsymbol \sigma} _i^n | {\boldsymbol {\tilde \sigma}} _i^n ) = \mathbb{P} ( {\boldsymbol {\tilde \sigma}} _i^n | {\boldsymbol \sigma} _i^n )$ and, since the tangent vectors $\nui_i^W$ are independent for different spins $i$, the entire proposal in the white noise case is symmetric, \[ \mathbb{P} (\sig^n | \sigp{n} ) = \mathbb{P} (\sigp{n} | \sig^n ). \] In the colored noise case, the tangent vectors are correlated and the above symmetry no longer holds. However, as the sphere is locally close to flat, intuitively the projected tangent vectors from $ {\boldsymbol \sigma} _i^n$ and back from the proposal $ {\boldsymbol {\tilde \sigma}} _i^n$ should be almost symmetric, though how this asymmetry manifests in the limit of $\epsilon\to 0$ depends on the projection chosen. For the cross-product projection corresponding to $P_1$, the non-symmetric terms appear at higher orders in $\varepsilon$, and we conjecture that they vanish when taking limits of the (now only approximately correct) MH algorithm similar to those we took previously. We revisit this conjecture in Sec.~\ref{sec:IM}. We discuss this limit of $\varepsilon\to0$, arriving at the (It\^o) SDE \begin{equation} \label{eq:colorSDE_ito} \begin{aligned} \mathrm{d} \s =& P_1 \frac{C_{N}}{N} P_1^T \Delta_N\s \;\dt - 2\beta^{-1} \frac{\textrm{Tr}( \bar{C}_N )}{N} \s \;\dt \\ & + \sqrt{2\beta^{-1}} P_1 C_{N}^{1/2} \dW \end{aligned} \end{equation} next in Sec.~\ref{sec:MHtoSDE}, with details appearing in Appendix \ref{app:SDEderivation}. Then in Sec.~\ref{sec:cross} we discuss why the $P_1$ projection matrix, corresponding to $\sigma \times$, has been selected.
In Appendix \ref{App:FP_Mult} we verify that the Gibbs distribution is the invariant measure of \eqref{eq:colorSDE_ito} by considering the Fokker-Planck equation for the equivalent Stratonovich SDE \begin{equation}\label{eq:colorSDE} \begin{aligned} \mathrm{ d} \s =& P_1 \frac{C_{N}}{N} P_1^T \Delta_N\s \;\dt + \sqrt{2\beta^{-1}} P_1 C_{N}^{1/2} \circ \dW. \end{aligned}\end{equation} Notice that for the case of uncorrelated noise, $\kappa=0$, the matrix $C_N$ reduces to a diagonal matrix with $N$ on the diagonal. The SDE \eqref{eq:colorSDE_ito} therefore reduces to the white noise SDE \eqref{eq:whiteSDE}, as $\frac1N C_N$ reduces to the identity matrix, $\frac1N \Tr(\bar{C}_N) = N$, and $C^{1/2}_N \dW = \sqrt{N} \dW$. \subsection{Limiting Dynamics of Metropolis Hastings\label{sec:MHtoSDE}} The idea behind the convergence is to equate one Metropolis Hastings step to one Euler-Maruyama numerical integration step of the It\^o SDE \eqref{eq:colorSDE_ito}. Following \cite{gao2018limiting,MPS}, we consider the leading order in proposal size $\varepsilon$ terms for the drift and diffusion of one MH step. At various points we drop higher order terms that are random variables capable of taking arbitrarily large values, but only with small probability. To ensure a true asymptotic convergence, we bound the average pathwise error between the MH and SDE trajectories themselves, not the probability distribution governed by a master equation; thus we obtain a strong, trajectory-wise, convergence result. In what follows, we heuristically explain obtaining the leading order terms for the drift and the diffusion. Expectations, $\mathbb{E}{\cdot}$, are conditioned on knowing the current MH spin configuration, $\sig^n$. The details of properly bounding the error between the piece-wise interpolated MH trajectory and the SDE trajectory are left to Appendix \ref{app:SDEderivation}.
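The drift computation below leans on a Gaussian identity (Lemma 2.4 of \cite{MPS}, restated below as Lemma~\ref{lm:mps}): for $z\sim\mathcal N(0,1)$ and real $a,b$, $\mathbb{E}[z(1\wedge e^{az+b})] = a e^{a^2/2+b}\,\Phi(-b/|a|-|a|)$. A quick Monte Carlo sanity check of this identity (illustrative values of $a$ and $b$, stdlib Python):

```python
import math
import random

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lemma_rhs(a, b):
    """Closed form: a * exp(a^2/2 + b) * Phi(-b/|a| - |a|)."""
    return a * math.exp(a * a / 2.0 + b) * Phi(-b / abs(a) - abs(a))

def lemma_lhs_mc(a, b, m, rng):
    """Monte Carlo estimate of E[ z (1 ∧ e^{a z + b}) ] for z ~ N(0,1)."""
    total = 0.0
    for _ in range(m):
        z = rng.gauss(0.0, 1.0)
        total += z * min(1.0, math.exp(a * z + b))
    return total / m

if __name__ == "__main__":
    rng = random.Random(5)
    a, b = 0.5, 0.3
    print(lemma_lhs_mc(a, b, 400_000, rng), lemma_rhs(a, b))  # agree to ~1e-2
```

With $a$ proportional to $\varepsilon$ this expectation is $O(\varepsilon)$, which is precisely how the accept rule feeds the gradient of $H$ into the drift below.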
The drift term of the SDE comes from the expectation of one MH step, \begin{equation}\label{eq:expDrift}\begin{aligned} & \mathbb{E}{ \sig^{n+1} - \sig^n } =\\ &\hspace{1cm} \mathbb{E} { (\sigp{n} - \sig^n) \left(1 \wedge e^{-\beta \big( H(\sigp{n}) - H(\sig^n) \big) }\right) }, \end{aligned}\end{equation} where the elements of the proposal $\sigp{n}$ are each given by \eqref{eq:proposalcolor}. Expanding this proposal for small $\varepsilon$, we obtain \begin{equation}\label{eq:expansion} {\boldsymbol {\tilde \sigma}} _i^n - {\boldsymbol \sigma} _i^n \approx \varepsilon \nui_i^n - \frac 12 \varepsilon^2 \|\nui_i^n\|^2 {\boldsymbol \sigma} _i^n. \end{equation} We evaluate the expectation in \eqref{eq:expDrift} for the first term on the right-hand side of \eqref{eq:expansion} first, then the second term. For the expectation over the first term in the expansion \eqref{eq:expansion}, we have $$ \sigp{n} -\sig^n \approx \varepsilon P C_N^{1/2} \w^n $$ and proceed to compute $$ \mathbb{E}{ \varepsilon P C_N^{1/2} \w^n \left(1 \wedge e^{-\beta \big( H(\sigp{n}) - H(\sig^n) \big) }\right) } $$ using the first order expansion of $H(\sigp{n}) - H(\sig^n)$, which is \begin{equation}\label{deltaH} \delta H \approx \varepsilon (\nabla H)^T P C_N^{1/2} \w^n. \end{equation} The first order term in the expansion of $\left(1 \wedge e^{-\beta \delta H }\right)$ is one, resulting in the expectation of $\w^n$, which is zero. The next order term comes from Lemma 2.4 of \cite{MPS}, which we state here for convenience. \begin{lemma}[\cite{MPS}] \label{lm:mps} For $z \sim \mathcal{N}(0,1)$, \begin{equation*} \mathbb{E}{z \left( 1 \wedge e^{az+b} \right) } = a e^{\frac{a^2}{2} + b} \Phi \left( - \frac{b}{|a|} - |a| \right) \end{equation*} for any real constants $a,b$, where $\Phi(\cdot)$ is the CDF of the standard normal random variable.
\end{lemma} We apply this lemma to the expectation for a single component of $\w^n$ and its corresponding coefficient in $\delta H$, and then take the expectation over the remaining components of $\w^n$, with further approximations detailed in Appendix \ref{App:Drift}. The result is the same as taking $1\wedge e^{-\beta\delta H}\approx 1 - \beta\delta H$ when $\delta H <0$ and 1 otherwise, while also assuming $\delta H$ is mean zero so that each case happens with probability 1/2. Thus, $$ \mathbb{E}{ \w^n \left(1 \wedge e^{-\beta \big( H(\sigp{n}) - H(\sig^n) \big) }\right) } \approx \mathbb{E}{ \w^n \left( -\frac{\beta}{2} \delta H \right) } $$ and the only expectation that remains is $\mathbb{E}{ \w^n( \w^n)^T } = I$, the identity matrix. Therefore, \begin{equation}\label{w_lemma2} \mathbb{E}{ \w^n \left(1 \wedge e^{-\beta \delta H} \right) } \approx - \varepsilon \frac{\beta}{2} ((\nabla H)^T P C_N^{1/2} )^T \end{equation} and \begin{equation}\label{eq:2} \mathbb{E}{ \varepsilon P C_N^{1/2} \w^n \left(1 \wedge e^{-\beta \delta H} \right) } \approx- \varepsilon^2 \frac{\beta}{2} PC_NP^T \nabla H. \end{equation} Returning to \eqref{eq:expDrift}, we consider the second term in the expansion \eqref{eq:expansion}, and compute $$ \mathbb{E}{\frac 12 \varepsilon^2 \|\nui_i^n\|^2 {\boldsymbol \sigma} _i^n \left(1 \wedge e^{-\beta \delta H }\right)}. $$ Here, it is convenient to take $\nui_i^n=P^\perp_{ {\boldsymbol \sigma} _i^n} \ui_i$ and write the three components of $\ui_i$ in terms of the decomposition of the matrix $\bar{C}_N$ defined in \eqref{Cbar} as \begin{equation}\label{eq:u} u_{i,q} = \sum_{j=1}^N \lambda_j \phi_{ij} \wiq_{j,q}^n \qquad \textrm{for } q\in \{x,y,z\}. \end{equation} Unlike above, the first term in the expansion of $\left(1 \wedge e^{-\beta \delta H }\right)$ gives a non-zero expectation, which is $$ \mathbb{E}{\frac 12 \varepsilon^2 \|\nui_i^n\|^2 {\boldsymbol \sigma} _i^n } = \varepsilon^2 {\boldsymbol \sigma} _i^n \sum_{j=1}^N \lambda_j^2 \phi_{ij}^2 . 
$$ We further notice that $\sum_{j=1}^N \lambda_j^2 \phi_{ij}^2$ is equivalent to $\frac{1}{N} \textrm{Tr}( \bar{C}_N)$ for each $i$, as a result of the Fourier basis chosen to represent $\bar{C}_N$. Therefore, \begin{equation}\begin{aligned} \label{ito} \mathbb{E}{-\frac 12 \varepsilon^2 \|\nui_i^n\|^2 {\boldsymbol \sigma} _i^n } = -\varepsilon^2 \frac{1}{N}\textrm{Tr}( \bar{C}_N) {\boldsymbol \sigma} _i^n. \end{aligned}\end{equation} In vector form, combining the above with \eqref{eq:2}, we have that \eqref{eq:expDrift} to leading order in $\varepsilon$ is \begin{equation}\label{eq:4} \mathbb{E}{\sig^{n+1} - \sig^n} \approx -\varepsilon^2 \frac{\beta}{2} PC_N P^T \nabla H - \varepsilon^2 \frac 1N\textrm{Tr}( \bar{C}_N) \sig^n. \end{equation} The diffusion part of the SDE is the leading-order in $\varepsilon$ term of the mean-zero noise, \begin{equation}\label{eq:diff_approx} \sig^{n+1} - \sig^n - \mathbb{E}{ \sig^{n+1} - \sig^n} \approx \varepsilon PC_N^{1/2} \w^n. \end{equation} Recall from above that the expectation of $PC_N^{1/2} \w^n$ was zero; this is the leading order noise term. Combining with the drift, we have that one step of the MH algorithm to leading order is \begin{equation}\label{eq:SDEstep} \begin{aligned} \sig^{n+1} - \sig^n \approx &-\varepsilon^2 \frac{\beta}{2} P C_N P^T \nabla H \\ & - \varepsilon^2 \frac{1}{N}\textrm{Tr}( \bar{C}_N )\sig^n + \varepsilon PC_N^{1/2} \w^n. \end{aligned}\end{equation} Defining a rescaling of time as $\delta t = \varepsilon^2 \beta/ 2 $, the above is \begin{equation}\label{eq:SDEstepEM} \begin{aligned} \sig^{n+1} &- \sig^n \approx \delta t P \frac{1}{N} C_N P^T \Delta_N \sig^n \\ & - \delta t \frac{2}{N \beta} \textrm{Tr}( \bar{C}_N )\sig^n + \sqrt{\frac{2\delta t}{\beta}} PC_N^{1/2} \w^n, \end{aligned}\end{equation} where we have used that for the Hamiltonian \eqref{eq:H}, \begin{equation}\label{gradHdef} \nabla H = -\tfrac1N \Delta_N \sig^n . 
\end{equation} Equation \eqref{eq:SDEstepEM} is one step of the Euler-Maruyama method for the Stratonovich SDE \eqref{eq:colorSDE}. The trajectory-wise convergence of the MH dynamics to the solution of \eqref{eq:colorSDE} is summarized in the following statement and proved in Appendix \ref{App:Existence}. \begin{theorem} \label{thm:MHtoSDE} Define the piecewise constant interpolation of the MH dynamics as $\sig(t)$, \begin{equation} \sig(t) = \sig^n \quad n \delta t \le t < (n+1) \delta t, \end{equation} where $\delta t = \frac{\beta \varepsilon^2}{2}$ is the timestep size of the MH dynamics, and let $\s(t)$ be the solution to the SDE system \eqref{eq:colorSDE} with initial conditions $\s(0) = \sig(0)$ and $\| {\boldsymbol \sigma} _i(0) \| =1$, $1 \le i \le N$. If the proposal noise in the MH step is generated by the same $3N$ Wiener processes in \eqref{eq:colorSDE} as $$ \varepsilon \wi_i^n = \sqrt{2 \beta^{-1}} \left[ \Wi_i((n+1)\delta t) -\Wi_i(n \delta t) \right], $$ for $i=1\dots N$, then we have the following strong convergence result: \begin{equation}\label{eq:36} \mathbb{E}{ \sup_{0 \le \tau \le T} \| \s(\tau) - \sig(\tau)\|^2 } \le c_1 \sqrt{\delta t} \exp(c_2 T) \end{equation} for any $T \in (0,\infty)$, where $c_1$ and $c_2$ are functions of $N,\beta,T,\Tr(C_N)$ and independent of the choice of $\delta t$. \end{theorem} This convergence result holds regardless of the projection matrix $P$. What remains to be determined is whether the SDE \eqref{eq:colorSDE} has the Gibbs distribution \eqref{eq:Gibbs} as its invariant measure. \subsection{Choosing a Projection\label{sec:cross}} Having established convergence of the MH dynamics, we are left to show that the system of SDEs \eqref{eq:colorSDE} has the Gibbs distribution \eqref{eq:Gibbs} as its invariant measure. This SDE is in the form of \eqref{eq:SDEgeneric} with the matrix $B=PC_N^{1/2}$ being non-constant.
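The failure of the covariance and projection matrices to commute can be made concrete numerically. The sketch below (stdlib Python; two illustrative spins and a simple one-parameter spatial correlation of our own choosing, not the Fourier-built $\bar C_N$) assembles $P_1$, $P_2$, and a block-diagonal colored covariance, and compares $P_1 C_N P_1^T$ with $P_2 C_N P_2^T$:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def assemble(blocks):
    """Stack a 3x3 grid of N x N blocks into one 3N x 3N matrix."""
    n = len(blocks[0][0])
    return [sum((blocks[bi][bj][i] for bj in range(3)), [])
            for bi in range(3) for i in range(n)]

def projection_matrices(spins):
    """P1 (cross product) and P2 (cross-cross product) for a list of unit spins."""
    n = len(spins)
    def diag(q):
        return [[spins[i][q] if i == j else 0.0 for j in range(n)] for i in range(n)]
    X, Y, Z = diag(0), diag(1), diag(2)
    O = [[0.0] * n for _ in range(n)]
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    def neg(A):
        return [[-a for a in row] for row in A]
    def sub(A, B):
        return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    P1 = assemble([[O, neg(Z), Y], [Z, O, neg(X)], [neg(Y), X, O]])
    P2 = assemble([[sub(I, matmul(X, X)), neg(matmul(X, Y)), neg(matmul(X, Z))],
                   [neg(matmul(X, Y)), sub(I, matmul(Y, Y)), neg(matmul(Y, Z))],
                   [neg(matmul(X, Z)), neg(matmul(Y, Z)), sub(I, matmul(Z, Z))]])
    return P1, P2

def colored_cov(c, n=2):
    """Block-diagonal 3N x 3N covariance: spatial correlation c within each component."""
    Cbar = [[1.0 if i == j else c for j in range(n)] for i in range(n)]
    O = [[0.0] * n for _ in range(n)]
    return assemble([[Cbar, O, O], [O, Cbar, O], [O, O, Cbar]])

def max_diff(P1, P2, C):
    """Largest entrywise difference between P1 C P1ᵀ and P2 C P2ᵀ."""
    A = matmul(matmul(P1, C), [list(r) for r in zip(*P1)])
    B = matmul(matmul(P2, C), [list(r) for r in zip(*P2)])
    return max(abs(x - y) for ra, rb in zip(A, B) for x, y in zip(ra, rb))

if __name__ == "__main__":
    P1, P2 = projection_matrices([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
    print(max_diff(P1, P2, colored_cov(0.0)))  # ≈ 0: uncorrelated noise, projections agree
    print(max_diff(P1, P2, colored_cov(0.5)))  # > 0: correlated noise, they differ
```

Already for two spins, any nonzero spatial correlation separates the two choices, which is exactly why the choice of projection matters below.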
For generic non-constant $B$ in \eqref{eq:SDEgeneric}, the Gibbs distribution \eqref{eq:Gibbs} is no longer an invariant measure; however, in a few special cases it is. For example, in the case of the white noise SDE \eqref{eq:whiteSDE} using either $ {\boldsymbol \sigma} _i \times \dWi_i$ or $- {\boldsymbol \sigma} _i \times ( {\boldsymbol \sigma} _i \times \dWi_i)$, so that $B=P_1$ or $B=P_2$, the Gibbs distribution is invariant, as we show by direct computation in Appendix \ref{App:FP_Mult}. However, when considering colored noise, only the projection of the form $ {\boldsymbol \sigma} _i \times (\cdot)$ corresponding to $B=P_1 C_N^{1/2}$, and not $B=P_2 C_N^{1/2}$, has \eqref{eq:Gibbs} as an invariant measure, as we also show in Appendix \ref{App:FP_Mult}. Unlike in the white noise case, the colored noise matrix and the projection matrices do not commute, so $P_1 C_N P_1^T \ne P_2 C_N P_2^T$, and the two projections of the noise into the tangent plane produce statistically different trajectories. We explore this idea further numerically in Sec.~\ref{sec:num}, showing that the cross-cross projection samples a distribution further and further from the Gibbs distribution as the noise becomes more correlated. \subsection{Convergence of the Invariant Measure\label{sec:IM}} In this section, we justify a statement made earlier in Sec.~\ref{sec:colored_noise}, that the non-symmetric terms in the MH proposal \eqref{eq:proposalcolor} appear at higher orders in the proposal size $\varepsilon$. In particular, we show that the invariant measure of the MH dynamics with colored noise in the proposal and cross-product projection is close to the desired invariant Gibbs distribution, converging to it in the $\varepsilon \to 0$ limit. We apply similar ideas to those of \cite{mattingly2010convergence}, which considers invariant measures of numerical approximations of SDE solutions.
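As a quick concreteness check on the two projections of Sec.~\ref{sec:cross}, the following sketch verifies that both $P_1 v = \si\times v$ and $P_2 v = -\si\times(\si\times v)$ map a vector into the tangent plane at $\si$, that $P_2$ coincides with the orthogonal projector $I-\si\si^T$ on unit spins (the BAC-CAB identity), and that the two projected vectors nevertheless differ:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(3)
s /= np.linalg.norm(s)               # a unit spin
v = rng.standard_normal(3)           # an arbitrary noise vector

p1 = np.cross(s, v)                  # cross projection, P1 v
p2 = -np.cross(s, np.cross(s, v))    # cross-cross projection, P2 v

assert abs(p1 @ s) < 1e-12           # both land in the tangent plane at s
assert abs(p2 @ s) < 1e-12
assert np.allclose(p2, v - (s @ v) * s)   # P2 = I - s s^T on unit spins
assert not np.allclose(p1, p2)            # but P1 v is P2 v rotated by 90 degrees
```

Both projected vectors have the same length ($|\si\times v|$ equals the norm of the tangential part of $v$); the statistical difference between the two dynamics enters only once the noise is colored, since then $P_1 C_N P_1^T \ne P_2 C_N P_2^T$.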
We start with Dynkin's Formula over one timestep of the SDE, and then replace the integral over the SDE solution with the MH solution, bounding the difference. Summing over multiple timesteps and noticing a telescoping series, we show the long-time average over the MH solution converges to the average over the invariant measure of the SDE, which is the Gibbs distribution. Therefore, as in \cite{mattingly2010convergence}, we find that the difference between the invariant measures is the same order of magnitude as the error between the MH dynamics and the solution to the SDE \eqref{eq:colorSDE} on a finite time interval, given by \eqref{eq:36}. Our goal is to show that the long-time average of a $C^\infty$ test function $\varphi$ $$ \lim_{n\to\infty} \mathbb{E}[]{ \frac 1n \sum_{k=0}^{n-1} \varphi( \sig^k) } $$ where $ \sig^k $ is the $k$-th MH step with the inaccurate accept rate \eqref{eq:alpha} and any projection to form $\nui_i^n$ in \eqref{eq:proposalcolor}, converges to the stationary average $\bar{\varphi}$ with respect to the invariant measure $\mu$ of the SDE \eqref{eq:colorSDE} with corresponding projection, \begin{equation}\label{barvar} \bar{\varphi} = \int \varphi(\sig) \mu( \sig) \mathrm{d} \sig. \end{equation} We build on the fact that the MH algorithm has a unique stationary distribution, that is not the Gibbs distribution, and that the SDE has a unique stationary measure $\mu$ because the generator $\mathcal{L}$ of the SDE \eqref{eq:colorSDE} is hypoelliptic; its second order term is \begin{align*} & \sum_{i,j} ( PC_N P^T )_{ij}\partial_i \partial_j = \\ & \qquad (P^T D)^T C_N (P^T D) - \sum_{i,j} \partial_i (PC_N P^T)_{ij} \partial_j, \end{align*} where $D$ is the diagonal matrix with $\bar{D}$ repeated 3 times along the diagonal and the system of vector fields $(P^T D)^T C_N (P^T D)$ covers $\mathbb{T}(\mathbb{S}^2)^N$ as in \cite{banas2014stochastic} and the second term on the right hand side is first order. 
In the special case that the cross-product projection matrix $P_1$ in \eqref{eq:Pcross} is used, then the SDE has the known invariant measure of the Gibbs measure $\mu$ in \eqref{eq:Gibbs}. Our argument will therefore show that the MH algorithm with cross-product projection samples a distribution that converges to the Gibbs measure as the proposal size $\varepsilon\to0$. We start with Dynkin's Formula \cite{oksendal2003stochastic} for the SDE \eqref{eq:colorSDE}, with generator $\mathcal{L}$, over a time-step $\delta t$, \begin{equation}\label{C1}\begin{aligned} \mathbb{E}[]{\psi(\s ((k+1)\delta t))} - &\mathbb{E}[]{\psi(\s( k\delta t))} \\ & = \mathbb{E}[]{ \int_{k\delta t}^{(k+1)\delta t} \mathcal{L} \psi(\s(t)) \dt } . \end{aligned}\end{equation} Consider that $\psi$ solves a Poisson equation for $C^\infty$ test function $\varphi$, \begin{equation}\label{C2} \mathcal{L}\psi = \varphi - \bar{\varphi} \end{equation} where the stationary average $\bar\varphi$ is defined in \eqref{barvar}. Using \eqref{C2} in the right-hand-side of \eqref{C1}, we have that \begin{equation}\label{C5}\begin{aligned} \mathbb{E} \big[ \psi(\s ((k+1)&\delta t)) \big] -\mathbb{E}[]{\psi(\s( k\delta t))} \\ & = \mathbb{E}[]{ \int_{k\delta t}^{(k+1)\delta t} \varphi(\s(t)) \dt } - \bar{\varphi} \delta t. \end{aligned}\end{equation} The integral term can be bounded by \begin{equation}\label{C6} \left| \mathbb{E}[]{ \int_{k\delta t}^{(k+1)\delta t} \varphi(\s(t)) \dt - \varphi(\s(k\delta t)) \delta t } \right| \le c \delta t^2 \end{equation} for some constant $c$ independent of $\delta t$ by Riemann sum approximations of integrals. 
From Theorem \ref{thm:MHtoSDE}, the difference between the SDE solution $\s(k\delta t)$ and the Metropolis step $\sig^k$ is bounded by \[ \mathbb{E}[]{ \left\| \sig^k - \s(k\delta t) \right \| } \le c_3 \delta t^{1/4}, \] and therefore for smooth test functions \begin{equation}\label{C8} \left| \mathbb{E}[]{\varphi( \s(k\delta t) ) } - \mathbb{E}[]{\varphi(\sig^k) } \right| \le c_4 \delta t^{1/4}. \end{equation} Using bounds \eqref{C6} and then \eqref{C8}, we have that \eqref{C5} can be written as \begin{align*} & \mathbb{E}[]{\psi(\s ((k+1)\delta t))} - \mathbb{E}[]{\psi(\s( k\delta t))} = \\ &\hspace{1cm} \mathbb{E}[]{\varphi(\sig^k)} \delta t - \bar{\varphi} \delta t + e_1 \end{align*} where $|e_1| \le c \delta t \delta t ^{1/4}$ for some constant $c$ independent of $\delta t$. Re-arranging and dividing by $\delta t$, we have that \begin{equation*} \mathbb{E}[]{\varphi(\sig^k)} - \bar{\varphi} = \frac{1}{\delta t} \mathbb{E}[]{\psi(\s ((k+1)\delta t))-\psi( \s(k\delta t))} + e_2 \end{equation*} where $|e_2| \le c \delta t ^{1/4}$ for some constant $c$ independent of $\delta t$. Summing over $n$ values of $k$ and dividing by $n$, we have that \begin{equation}\begin{aligned} \frac{1}{n}&\sum_{k=0}^{n-1} \mathbb{E}[]{\varphi(\sig^k)} - \bar{\varphi} \\ & = \frac{1}{n\delta t} \sum_{k=0}^{n-1} \mathbb{E}[]{\psi(\s ((k+1)\delta t))-\psi( \s(k\delta t))} + e_2 \end{aligned}\end{equation} which has a telescoping sum on the right-hand side. By defining $T = n\delta t$, the above is equivalent to \begin{equation} \frac{1}{n}\sum_{k=0}^{n-1} \mathbb{E}[]{\varphi(\sig^k)} - \bar{\varphi} = \frac{1}{T} \mathbb{E}[]{\psi(\s (T))-\psi(\s( 0))} + e_2 . \end{equation} Recall that $\psi$ is the unique solution to the Poisson equation \eqref{C2}; it is therefore smooth because $\varphi$ is smooth.
Indeed, the theory of hypoelliptic operators is precisely such that $\mathcal{L} u \in C^\infty$ implies $u \in C^\infty$, see \cite{hormander1967hypoelliptic} or \cite{hormander2015analysis}, Chapter $XI$. Since we are operating on a compact space overall, $\psi$ is thus bounded and the convergence result follows. Thus, the $1/T$ term goes to zero as $T\to\infty$ ($n\to\infty$). We therefore conclude that the MH long-time average converges to the stationary average $\bar{\varphi}$ with respect to the SDE invariant measure as $\delta t\to 0$ ($\varepsilon\to 0$) with order $\delta t^{1/4}$ and as $n\to\infty$, and the following convergence result holds: \begin{theorem} \label{thm:IM} Define $\sig^n$ as the $n^{\textrm{th}}$ step of the MH dynamics with the colored noise proposal \eqref{eq:proposalcolor} (with either the cross- or cross-cross-projection) and accept rate \eqref{eq:alpha}, and let $\mu(\s)$ be the invariant measure of the corresponding SDE \eqref{eq:colorSDE} with the same projection. Then $$ \left| \frac{1}{n}\sum_{k=0}^{n-1} \mathbb{E}[]{\varphi(\sig^k)} - \int \varphi(\s) \mu(\s) \mathrm{d} \s \; \right| \le \frac{c_1}{n \delta t} + c_2 \delta t ^ {1/4} $$ for time step $\delta t = \varepsilon^2 \beta / 2$ and constants $c_1$ and $c_2$ independent of $n$ and $\delta t$. \end{theorem} \begin{remark} A nearly identical argument can be used to show that the invariant measure for the SDE with the $P_2$ projection will converge to that of the SDE with the $P_1$ projection as the colored noise converges to white noise; thus both SDEs sample the Gibbs measure in this limit. In other words, as $\kappa \to 0$ the covariance matrix $C \to I$ in a uniform sense in our definition of \eqref{Cbar} as an operator on $\ell^2$ (and hence smooth) functions on $(\mathbb{S}^2)^N$.
\end{remark} \subsection{A New Non-local SPDE Limit\label{sec:PSDE}} In this section, we discuss the extension of the non-local SPDE \eqref{eq:spde} to the case $\mathbb{T}^d \to \mathbb{S}^2$ with $d>2$ obtained by taking the limit as $N\to\infty$ (with $\beta$ constant) of the SDE \eqref{eq:colorSDE} and remark briefly on properties of the corresponding solutions. In particular, formally taking the limit of \eqref{eq:colorSDE}, we arrive at a non-local stochastic version of the harmonic map heat flow equation given by \begin{equation}\begin{aligned} \label{eqn:spde} d \sigma = &\left( -\sigma \times (M_\kappa (D)) (\sigma \times \Delta \sigma) \right) dt \\ &+ \sigma \times ( \mathcal{F}^{-1} (m(k))^{- \kappa} \circ dW (k)), \end{aligned}\end{equation} where we let $m( k) = 2 \pi k$ if $|k| \neq 0$, and $m( k) = 1$ for $k= 0$, $\mathcal{F}$ is the Fourier transform on $\mathbb{T}^d$, $M_\kappa (D)$ is the Fourier multiplier such that \[ M_\kappa (D) f = \mathcal{F}^{-1} (|m(k)|)^{-2 \kappa} \mathcal{F} f \] and $dW(k)$ are a set of independent standard Gaussian noises for each corresponding Fourier mode in frequency space. Note that many other forms of the covariance structure could easily work here, such as $m(k) = \langle k \rangle = \sqrt{1 + |2 \pi k|^2}$. Also note that we can write \[ ( ( I - \Delta)^{-\kappa} f)(x) = \int K_\kappa (x,y) f(y) dy \] with the integral kernel given by \[ K_\kappa (x,y) = \frac{1}{(2 \pi)^d} \sum_{k \in \mathbb{Z}^d} e^{-i 2 \pi k \cdot (x-y)} \langle k \rangle^{-2\kappa}. \] Then, if $\kappa$ is chosen such that $M_\kappa (D)$ is trace class with a weight relating to the regularity required ($\int K(x,y) dx , \int K(x,y) dy < \infty$ as well as integrals of derivatives of $K$), we can use canonical results on stochastic PDEs coupled with existence arguments for quasilinear heat equations.
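The Fourier multiplier $M_\kappa(D)$ can be applied spectrally on a periodic grid. A minimal sketch for $d=1$, following the convention $m(0)=1$ above (grid size and test profile are illustrative choices):

```python
import numpy as np

def M_kappa(f, kappa):
    """Apply the Fourier multiplier M_kappa(D): multiply Fourier mode k of f by
    |m(k)|^(-2*kappa), with m(k) = 2*pi*k for k != 0 and m(0) = 1, on an n-point
    periodic grid discretizing T^1."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on T^1
    m = np.where(k == 0, 1.0, 2.0 * np.pi * np.abs(k))
    return np.real(np.fft.ifft(m ** (-2.0 * kappa) * np.fft.fft(f)))
```

For $\kappa=0$ this is the identity (white noise covariance), and a pure mode $\cos(2\pi x)$ is simply scaled by $(2\pi)^{-2\kappa}$, which is the eigenvalue decay behind the trace-class condition.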
We will follow somewhat the ideas in \cite{de1999stochastic,gess2016stability} for stochastic PDEs with multiplicative noise (mostly in the context of motivating the It\^o formulation in the former and for using energy estimates to handle degenerate SPDE models in the latter). For the key energy estimates on the deterministic piece, we cite the general theory of well-posedness for quasilinear heat equations developed in \cite[Chapter $15$]{Tay3}. For possible extensions to non-trace class covariance structure, see the recent work of \cite{bruned2019geometric} where a renormalization is proposed. It will be a topic of further work to explore the place of our colored noise model within this context. Using the regularity of the colored noise, we provide a brief outline of existence for solutions to \eqref{eqn:spde} in Appendix \ref{A:LWP}. However, as the results are fairly standard with sufficiently regular noise, we proceed with a detailed numerical study of convergence of the Metropolis-Hastings model and dynamics. \begin{figure*} \includegraphics[width=\textwidth]{Figure1} \caption{(a) Dynamics of the MH algorithm and SDE \eqref{eq:colorSDE} at the indicated values of time, $t$ (recall the relationship between SDE time-step and MH proposal size: $\delta t = \beta \varepsilon^2/2$). Parameters: number of spins $N=32$, inverse temperature $\beta = 10$, noise correlation parameter $\kappa = 1$, time step size $\delta t = 10^{-5}$. (b) Strong order of convergence with respect to time step size for the error between the MH algorithm and SDE \eqref{eq:colorSDE}, both using the cross projection matrix \eqref{eq:Pcross}. The solid black line with slope 1/2 indicates the order is approximately that given in Theorem \ref{thm:MHtoSDE}. (c) The same as (b) but using the cross-cross projection matrix \eqref{eq:Pcrosscross}. (b) and (c) parameters: $N=16$, $\beta=5$, error calculated at time $T=0.05$, and averaged over 400 simulations.
} \label{fig:mh_sde} \end{figure*} \section{Numerical Results} \label{sec:num} In this section, we perform numerical simulations to support our convergence results and demonstrate the discussed differences when using different projections. All simulations are of maps from the one-dimensional periodic lattice $\mathbb{T}^1$ to the unit sphere $\mathbb{S}^2$. The MH dynamics are simulated as explained in Sec.~\ref{sec:colored_noise}. To numerically solve the SDE \eqref{eq:colorSDE_ito}, written in It\^o form, we use the stochastic Euler method combined with a normalizing step to project the spins back onto the sphere after each time step. We start by showing a trajectory-wise comparison in Figure \ref{fig:mh_sde}(a) of the MH dynamics and the SDE dynamics generated utilizing the same random noise for the proposal in the MH as for the diffusion term in the SDE. Each spin is plotted on the same sphere, with lines connecting nearest neighbors. Figures \ref{fig:mh_sde}(b) and (c) show the strong order of convergence for the error between the MH algorithm and the SDE \eqref{eq:colorSDE} with respect to the time step size $\delta t$, for which the equivalent MH proposal size is $\varepsilon = \sqrt{2 \delta t / \beta} $. The error is calculated at fixed time $T$ as \begin{equation} \mathbb{E}\left[ \frac{1}{N}\sum_{i=1}^N \| {\boldsymbol \sigma} _i^n - \si_i(n\delta t) \|^2 \;\right], \end{equation} where the expectation is taken over multiple realizations. The numerical convergence order is approximately $\frac 12$, supporting Theorem \ref{thm:MHtoSDE} as a tight bound on the error regardless of choosing the cross projection matrix \eqref{eq:Pcross} or the cross-cross projection matrix \eqref{eq:Pcrosscross}. Next we show the effect of the different projection matrices on the invariant measure of the SDE system \eqref{eq:colorSDE}.
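The integrator just described can be sketched in a few lines. This is a minimal sketch in the It\^o form, assuming (as in the MH sketch above) that the covariance acts spatially per Cartesian component, that $\Delta_N$ is the positive graph Laplacian scaled by $N^2$, and using the cross projection $P_1$; the exact block conventions for $C_N$ are assumptions:

```python
import numpy as np

def sde_step(sigma, Cbar, beta, dt, rng):
    """One step of the stochastic Euler method for the Ito form of the
    colored-noise SDE, followed by renormalization back onto the sphere."""
    N = sigma.shape[0]
    lap = N * N * (2.0 * sigma - np.roll(sigma, -1, axis=0)
                   - np.roll(sigma, 1, axis=0))          # Delta_N sigma
    # drift -P (1/N) C P^T Delta_N sigma, with P = P1 and P1^T v = -sigma x v
    drift = -np.cross(sigma, (Cbar @ (-np.cross(sigma, lap))) / N)
    ito = -(2.0 / (N * beta)) * np.trace(Cbar) * sigma   # Ito correction term
    noise = np.cross(sigma, np.linalg.cholesky(Cbar) @ rng.standard_normal((N, 3)))
    new = sigma + dt * (drift + ito) + np.sqrt(2.0 * dt / beta) * noise
    return new / np.linalg.norm(new, axis=1, keepdims=True)  # normalizing step
```

Driving this step and the MH proposal with the same Gaussian increments reproduces the kind of trajectory-wise comparison shown in Fig.~\ref{fig:mh_sde}(a).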
Since the desired invariant measure is high-dimensional, we instead plot the empirical cumulative distribution function (cdf) of the energy over time. Figure \ref{fig:distribution}(a) shows that for the case of white noise, $\kappa = 0$, utilizing either the cross projection matrix \eqref{eq:Pcross} or the cross-cross projection matrix \eqref{eq:Pcrosscross} results in indistinguishable invariant distributions of the energy; both versions have the Gibbs distribution as an invariant measure. However, when coloring the noise by increasing $\kappa$, it is only the cross projection matrix \eqref{eq:Pcross} that maintains an energy distribution indistinguishable from the white noise case. Figure \ref{fig:distribution}(b) supports that the colored noise SDE \eqref{eq:colorSDE} with the cross projection matrix \eqref{eq:Pcross} is ergodic with respect to the correct Gibbs distribution, despite being the limit of our inexact MH scheme in Sec.~\ref{sec:colored_noise}. Figure \ref{fig:distribution}(c) shows that the SDE system with the cross-cross projection matrix \eqref{eq:Pcrosscross} has lower energy on average as the correlations in the colored noise increase with increasing $\kappa$. \begin{figure}[ht!] \includegraphics[width=0.42\textwidth]{Figure2} \caption{ (a) Energy distribution in equilibrium for the indicated values of $\beta$ when ``white noise'' $\kappa=0$ is used. The solid lines are for the cross-product projection, the yellow dashed lines are for the cross-cross-product projection. They agree entirely; Gibbs is being sampled in all cases. (b) For the cross-product, at the indicated values of $\kappa$, the same distribution is being sampled at each of the three values of $\beta=5,10$ and $20$. (c) For the cross-cross-product, different distributions are being sampled for different values of $\kappa$, consistent with Gibbs not being the invariant measure of the SDE when $\kappa\ne 0$. Parameters: $N=16$, $\delta t = 10^{-4}$, and $10^5$ time points.
} \label{fig:distribution} \end{figure} To further illuminate this interaction of the projection matrix and the correlated noise, we look at how each term in the SDE affects the energy of the system in equilibrium. The energy, $H$ given by \eqref{eq:H}, evolves according to the It\^o SDE \begin{equation}\begin{aligned}\label{eq:dE_color} dH =& \frac1N \sum_{i=1}^N N^2 (\si_{i+1}-\si_i) \cdot ( d\si_{i+1} - d\si_i ) \\ & + N\beta^{-1} \Tr( C_N^{1/2}P^T A P C_{N}^{1/2} )dt, \end{aligned}\end{equation} where $A$ is the tri-diagonal matrix with 2 on the diagonal and $-1$ on the sub- and super-diagonals (taking into account periodicity), $d\si_i$ is given by \eqref{eq:colorSDE_ito}, and $\si_{N+1}=\si_{1}$. Note that since $\Tr(XY) =\Tr(YX)$ for two $n\times n$ matrices $X$ and $Y$, the trace term \begin{align*} & \Tr(PC_N^{1/2}C_N^{1/2}P^TA) = \Tr(PP^T C_N A) \\ & = \Tr(PP^T\phi \bar{D}^2 \phi^T A) = \Tr(PP^T \phi \phi^T \bar{D}^2 A) \\ & = \Tr(P \bar{D}^2 A) = 4 N \sum_{i=1}^N \lambda_i^2 \end{align*} is a constant independent of the choice of projection matrix. We therefore ignore this term and proceed to decompose $d\si_i$ given by \eqref{eq:colorSDE_ito} over one $\delta t$ time-step of numerical integration as \begin{equation}\label{eq:ds_pq} \si_i^{n+1}-\si_i^n = \boldsymbol p_i^n \delta t - 2\beta^{-1}\frac{\Tr(\bar{C}_N)}{N} \si_i^n \delta t + \boldsymbol q_i^n \sqrt{\delta t}, \end{equation} where we define \[ \vec p^{\;n} = P \frac{1}{N} C_N P^T \Delta_N \vec s^{\;n} \textrm{ and } \vec q^{\;n} = PC_N^{1/2}\vec w^{\;n} \] as well as take $\si^n_{N+1} = \si^n_1$, $ \boldsymbol p^n_{N+1} = \boldsymbol p^n_{1} $, and $ \boldsymbol q^n_{N+1} = \boldsymbol q^n_{1} $ for the periodic boundary conditions. The trace term in \eqref{eq:ds_pq} is also of a form independent of the choice of projection matrix.
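The decomposition \eqref{eq:ds_pq} can be computed directly; a minimal sketch under the same assumed conventions as the integrator sketch above (spatial covariance per component, positive graph Laplacian, cross projection $P_1$), whose two outputs are tangent to the spheres by construction:

```python
import numpy as np

def pq_terms(sigma, Cbar, rng):
    """The drift vector p = P (1/N) C_N P^T Delta_N s and the noise vector
    q = P C_N^{1/2} w of eq. (ds_pq), with the cross projection P = P1."""
    N = sigma.shape[0]
    lap = N * N * (2.0 * sigma - np.roll(sigma, -1, axis=0)
                   - np.roll(sigma, 1, axis=0))          # Delta_N s
    p = np.cross(sigma, (Cbar @ (-np.cross(sigma, lap))) / N)
    q = np.cross(sigma, np.linalg.cholesky(Cbar) @ rng.standard_normal((N, 3)))
    return p, q
```

Accumulating the inner products $(\si_{i+1}-\si_i)\cdot(\boldsymbol p_{i+1}-\boldsymbol p_i)$ and the analogous $\boldsymbol q$ terms over a simulation gives the per-step energy contributions studied below.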
Therefore, to illuminate the interaction of the projection matrix and the correlated noise, we consider only the contributions to \eqref{eq:dE_color}, the change in energy, given by the $\vec p^{\;n}$ and $ \vec q^{\;n}$ terms over each time-step of the numerical integration of the SDE, calculated as \begin{equation}\begin{aligned}\label{deltaE_drift} \delta H^n_{\textrm{drift}} &= \sum_{i=1}^N (\si^n_{i+1}-\si^n_{i})\cdot ( \boldsymbol p^n_{i+1} - \boldsymbol p^n_i) \delta t \end{aligned}\end{equation} and \begin{equation}\begin{aligned}\label{deltaE_noise} \delta H^n_{\textrm{noise}} &= N \sqrt{\frac{2}{\beta}} \sum_{i=1}^N (\si^n_{i+1}-\si^n_{i})\cdot ( \boldsymbol q^n_{i+1} - \boldsymbol q^n_i) \sqrt{\delta t } . \end{aligned}\end{equation} In Fig.~\ref{fig:deltaE} we plot the distribution of $\delta H^n_{\textrm{drift}} $ and $\delta H^n_{\textrm{noise}}$ for both the $P_1$ (cross-product) and $P_2$ (cross-cross-product) projections over the course of one simulation using each of the indicated values of $\kappa$ to form $\bar{C}_N$. We see that as $\kappa$ increases, the differences between these distributions increase, consistent with Fig.~\ref{fig:distribution}(c) showing more deviation from the Gibbs distribution with increasing $\kappa$. This difference is more pronounced in the deterministic drift contribution to the energy, $\delta H^n_{\textrm{drift}}$, than in the diffusion contribution, $\delta H^n_{\textrm{noise}}$. This suggests that the random-walk nature of the dynamics remains relatively unaffected by the choice of projection, while the cross-cross projection produces long tails toward lower values of $\delta H^n_{\textrm{drift}}$, possibly explaining the shift to lower average energies seen in Fig.~\ref{fig:distribution}(c). \begin{figure*}[t] \includegraphics[width=\textwidth]{Figure3} \caption{Comparison of the effect on the energy for the cross and the cross-cross projection matrix in SDE \eqref{eq:colorSDE}.
(a) The empirical histogram of $\delta H^n_{\textrm{drift}}$ from \eqref{deltaE_drift} taken at each time point of one simulation of the SDE in equilibrium for the indicated value of $\kappa$. (b) Same as (a) but for $\delta H^n_{\textrm{noise}}$ from \eqref{deltaE_noise}. Parameters: $N=16$, $\delta t = 10^{-4}$, $\beta = 10$, and $10^5$ time points.} \label{fig:deltaE} \end{figure*} In Fig.~\ref{fig:sdeconvplot} we verify convergence of the SDE system to the SPDE \eqref{eq:spde}. First, in Fig.~\ref{fig:sdeconvplot}(a), for just the deterministic drift part of this system, we show convergence of the finite difference ODE approximation of the non-local PDE ($\beta^{-1}=0$). We compute the error at fixed time $T$ between each coarser scale, $N=N_c$, and the finest scale, $N=N_f$, as \begin{equation} \frac{1}{N_c}\sum_{i=1}^{N_c} \| \si_i^{\textrm{Coarse}}(T) - \si_{1 + (i-1)\frac{N_f}{N_c}}^{\textrm{Fine}}(T) \|^2 . \end{equation} Then, in Fig.~\ref{fig:sdeconvplot}(b) we show the strong convergence of the SDE, taking the expectation of the above error over realizations. Note the convergence rate even for the white noise case of $\kappa=0$, which is not guaranteed if more than one spatial dimension of this SPDE were considered, due to the potential breakdown of regularity of the deterministic solution in that case. The deterministic convergence of order 4 is twice that of the noisy system, which is approximately order $2$. \begin{figure}[ht!] \includegraphics[width = 0.4\textwidth]{Figure4} \caption{(a) Convergence plot for the deterministic finite difference approximation of the non-local PDE \eqref{eq:colorSDE} with $\beta^{-1}=0$. (b) Convergence plot for the stochastic finite difference approximation of the non-local SPDE \eqref{eq:colorSDE} with $\beta=5$, averaged over 100 simulations. In both panels, dynamics are simulated until $T=0.125$ with $\delta t = \tfrac12 \delta x^2$. } \label{fig:sdeconvplot} \end{figure} \begin{figure*}[ht!]
\includegraphics[width=\textwidth]{Figure5} \caption{(a) Dynamics of the non-local PDE (Eq.~\ref{eq:spde} with $\beta^{-1}=0$) at the indicated values of time, $t$, with the original timescale and (b) with rescaled time $\tilde{t} = (2\pi)^{2 \kappa} t$ for $\kappa \in \{0,0.5,1,1.5 \}$. (c) Evolution of the energy with the original timescale and (d) with rescaled time. Legend applies to all panels. Parameters: number of spins $N=32$, original time step size $\delta t = 10^{-5}$. } \label{fig:PDEdynamics} \end{figure*} Last, we look at some of the behavior of the new non-local (deterministic) PDE. In Fig.~\ref{fig:PDEdynamics}(a) we show the evolution toward equilibrium of the spins for different values of $\kappa$, highlighting the different time scales. By considering the covariance operator as a fractional Laplacian, acting similarly to the harmonic map heat flow equation, we conjecture that the time rescaling is related to the diffusion time scaling of the underlying non-local heat equation \[ u_t = M_\kappa (D) \Delta u, \] which decays to its equilibrium on the time scale governed by $e^{-\lambda_1^{2(1-\kappa)}t}$ for $\lambda_1$ the first non-trivial eigenvalue of the Laplacian on $\mathbb{T}^d$. The non-local form of the operator we consider here does not immediately present a leading order linear operator of this form as occurs in the cross-cross projection; however, we will see that this time scale still arises in Figure~\ref{fig:PDEdynamics}. Figures~\ref{fig:PDEdynamics}(c) and (d) also show the effect of this time rescaling when looking at the evolution of the energy of the system. \section{Conclusions/Discussions\label{sec:conclusions}} We establish here a new stochastic partial differential equation as the limit of a set of sampling algorithms where the proposal is taken with spatially correlated colored noise, thereby deriving a mesoscopic model of fluctuations for spin systems in a principled way.
The geometric nature of our system means that the nonlocal form of the drift arises in a manner that we have not seen before in the literature. In order to ensure that the system samples the desired Gibbs measure, we have to be careful with the manner in which we project the noise into the geometric setting. Specifically, we show that using the cross-product projection samples the Gibbs measure, while the cross-cross-product projection samples an invariant measure that is shifted to lower energy than the Gibbs measure. This shift increases as the correlation length-scale of the noise is increased, and is shown numerically to be related to the deterministic effect on the energy of the system rather than to fluctuations in the energy. In addition to finding convergence rates, numerical simulations are also used to show that the nonlocal drift term of the new SPDE exhibits the same time-scales for relaxation to equilibrium as a fractional Laplacian, thereby acting similarly to the harmonic map heat flow equation. Future work will involve considering other geometries beyond the sphere and performing a more careful analysis of the resulting SDE/SPDE systems following, for instance, the recent developments on geometric renormalization tools in \cite{bruned2019geometric}.
\section{Introduction} \label{sec:nordstr-euler-syst} The purpose of the work presented here is to prove global existence and uniqueness of classical solutions, and their asymptotic behavior, for a semi-linear wave equation with damping terms. This wave equation arises in the context of the nonlinear Nordström theory of gravity, which we shall describe in what follows. The first fully relativistic, consistent theory of gravitation was a scalar theory developed by Nordström \cite{nordstrom13:_zur_theor_gravit_stand_rela }, where the gravitational field is described by a nonlinear hyperbolic equation for the scalar field $\phi$. Although the theory is not in agreement with observations, it provides, due to its nonlinearity, some interesting mathematical challenges. Surprisingly, this theory has never been mathematically investigated, although its linear version coupled to the Euler equations has been studied by Speck \cite{Speck_0 } and coupled to the Vlasov equation by Calogero \cite{calogero03:_sphe } and others~\cite{Felix_Antonio_Calogero_201 }, \cite{Fajman_Jeremie_Jacques-202 }, \cite{Wang_202 } and \cite{Calogero_Rein_200 }. We follow here the geometric reformulation provided by Einstein-Fokker \cite{einstein14:_nords_gravit_stand_diffe } and will use the Euler equations as a matter model. See also Straumann~\cite[Chap.~2.]{STRA } for a modern presentation of that theory. The basic idea of this theory is that the physical metric $ g_{\alpha\beta}$ is related to the Minkowski metric $\eta_{\alpha\beta}$ by the following conformal transformation: \begin{equation} \label{eq:Euler-Nordstrom:1} g_{\alpha\beta}=\phi^2\eta_{\alpha\beta}, \end{equation} where $\eta_{\alpha\beta}=\text{diag}(-1,1,1,1)$.
The matter is described by an energy-momentum tensor, which in the case of a perfect fluid takes the form \begin{equation} \label{eq:section1-intro:6} T^{\alpha\beta}= \left( \epsilon + p \right) u^{\alpha} u^{\beta}+ p g^{\alpha\beta}, \end{equation} where $\epsilon$ denotes the energy density, $p$ the pressure and $u^{\alpha}$ the unit timelike velocity vector, which satisfies \begin{equation} \label{eq:section1-intro:7} g_{\alpha\beta} u^{\alpha} u^{\beta}=-1. \end{equation} The field equations, as proposed by Einstein and Fokker, take the following form \begin{equation} \label{eq:Euler-Nordstrom:5} R = T, \end{equation} where we set the relevant constants to one; the Ricci scalar is denoted by $R$ and the trace of the energy-momentum tensor by $T=g_{\alpha\beta}T^{\alpha\beta}$. Using equation \eqref{eq:Euler-Nordstrom:1}, the Ricci scalar takes the form \begin{equation} \label{eq:section1-intro:2} R = -6 \frac{\Box\phi}{\phi^3}, \qquad\Box \eqdef\eta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}, \end{equation} while the Euler equations take the form \begin{equation} \label{eq:Euler-Nordstrom:6} \nabla_{\alpha} T^{\alpha\beta}=0, \end{equation} where $ \nabla_{\alpha}$ is the covariant derivative associated with $g_{\alpha\beta}$. Combining equations \eqref{eq:Euler-Nordstrom:5}, \eqref{eq:section1-intro:2} and \eqref{eq:Euler-Nordstrom:6}, the Euler-Nordström system takes the following form \begin{subequations} \begin{align} \label{eq:Euler-Nordstrom:7} & \Box\phi =- \frac{1}{6} T\phi^{3} \\ \label{eq:section1-intro:4} & \nabla_{\alpha} T^{\alpha\beta} =0.
\end{align} \end{subequations} \begin{rem}[Different form of the field equation] \label{rem:section1-intro:1} We want to point out that it is possible to consider a slightly different conformal transformation (see for example \cite{calogero03:_sphe } or~\cite{Speck_0 }), namely \begin{equation} \label{eq:section1-intro:1} g_{\alpha\beta}=e^{2\psi}\eta_{\alpha\beta}, \end{equation} which leads to an equivalent nonlinear wave equation \begin{equation} \label{eq:section1-intro:3} \Box\psi + \left( \nabla\psi \right)^2 =-\frac{1}{6} e^{2\psi}T. \end{equation} \end{rem} \subsection{The field equations with cosmological constant and the background solutions} \label{sec:fields-equations} In what follows we modify the field equation \eqref{eq:Euler-Nordstrom:7} by adding a term which corresponds to the cosmological constant $\Lambda$ in General Relativity, in the following way: \begin{equation} \label{eq:Nordstrom:3} \Box\phi =- \frac{1}{6} T\phi^{3} -\Lambda \phi. \end{equation} This choice is motivated by the properties of explicit homogeneous and isotropic solutions, which are very similar to those of the Euler-Einstein system (see e.g.~\cite[Chap.~V]{Choquet-Bruhat_0 }, \cite[Chap.~10]{Rendall_boo }) and of the Euler-Poisson system (\cite{Brauer_Rendall_Reula_9 }), which we will discuss below. We denote an isotropic and homogeneous vacuum background solution by $\mathring{\phi}$, and for convenience we set $\varkappa^2=\Lambda>0$. Homogeneity implies that the function $\mathring \phi$ depends only on $t$, while the fact that the solution describes vacuum leads to the conclusion that $T\equiv 0$. Therefore equation \eqref{eq:Nordstrom:3} reduces to \begin{equation} \label{eq:section1-intro:1B} -\frac{d^2}{dt^2}\mathring{\phi}=-\varkappa^2\mathring{\phi}. \end{equation} This differential equation has a general solution of the form $\mathring{\phi}=Ae^{\varkappa t}+Be^{-\varkappa t}$.
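The background equation and its exponential solution can be verified numerically; a small sketch checking by centered finite differences that $\mathring{\phi}(t)=e^{\varkappa t}$ satisfies $\frac{d^2}{dt^2}\mathring{\phi}=\varkappa^2\mathring{\phi}$ (the value of $\varkappa$ and the evaluation points are illustrative):

```python
import math

def background_residual(kappa, t, h=1e-4):
    """Residual of d^2 phi / dt^2 = kappa^2 * phi for phi(t) = exp(kappa * t),
    with the second derivative approximated by a centered difference."""
    phi = lambda s: math.exp(kappa * s)
    second = (phi(t + h) - 2.0 * phi(t) + phi(t - h)) / (h * h)
    return second - kappa ** 2 * phi(t)
```

The residual is $O(h^2)$, so it vanishes as the step size is refined, consistent with $e^{\varkappa t}$ (and, by the same check, $e^{-\varkappa t}$) solving the background equation exactly.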
Since we want our solution to behave similarly to the so-called flat de Sitter solution in general relativity (see for example \cite[Chap.~V]{Choquet-Bruhat_0 }), namely, that $\mathring{\phi}$ and $\frac{d}{dt}\mathring{\phi}$ are positive, we choose \begin{equation} \label{eq:section1-intro:2B} \mathring{\phi}(t)= \mathord{e^{\varkappa t}} \end{equation} as the background solution. Considering also the part $Be^{-\varkappa t}$ would complicate the analysis but should not change the global behavior of the solutions; that is why we neglect this term. We now study small deviations from the background solution $\mathring \phi$. So we make the following Ansatz \begin{equation} \label{eq:phi} \phi=\mathring{\phi}+\Psi=\mathord{e^{\varkappa t}} +\Psi, \end{equation} where $\Psi$ denotes the deviation from the background. Then $\Psi$ satisfies the following equation \begin{equation} \square \phi=\square(\mathord{e^{\varkappa t}}+\Psi)=-\varkappa^2 \mathord{e^{\varkappa t}}+\square \Psi=-\frac{1}{6} T(\mathord{e^{\varkappa t}} +\Psi)^3-\varkappa^2(\mathord{e^{\varkappa t}} +\Psi). \end{equation} Thus $\Psi$ satisfies the initial value problem \begin{equation} \label{eq:psi} \left\{ \begin{array}{l} \square \Psi =-\frac{1}{6}T(\mathord{e^{\varkappa t}}+\Psi)^3-\varkappa^2\Psi\\ \Psi(0,x)=\Psi_0(x), \ \partial_t\Psi(0,x)=\Psi_1(x) \end{array}\right.. \end{equation} Our goals are: \begin{enumerate}[label=\alph*.] \item \label{item:section1-intro:2} To show global existence of classical solutions for equation \eqref{eq:psi}, assuming a small source term $T$ and small initial data. \item \label{item:section1-intro:1} To show that for large $t$, the metric $\phi^2\eta_{\alpha\beta}$ approaches asymptotically the background metric $e^{2\varkappa t}\eta_{\alpha\beta}$, in the following sense, \begin{equation} \label{eq:section1-intro:3B} \lim_{t\to\infty}\frac{\phi(t,x)}{\mathring \phi(t)}=\lim_{t\to\infty}\frac{e^{\varkappa t}+\Psi(t,x)}{e^{\varkappa t}}\approx 1.
\end{equation} \end{enumerate} Note that if $\Psi$ is small, then $(\mathord{e^{\varkappa t}} +\Psi)^3\sim e^{3\varkappa t}$, and this term grows very rapidly and might prevent the solution from existing for all time. So in order to achieve the desired asymptotic behavior of $\Psi$, expressed by equation \eqref{eq:section1-intro:3B}, we multiply $\phi$ by $\mathord{e^{-\varkappa t}}$; then from equality \eqref{eq:phi} we conclude that $\mathord{e^{-\varkappa t}} \phi=1+\mathord{e^{-\varkappa t}}\Psi $, and therefore we set \begin{equation} \Omega\overset{\mbox{\tiny{def}}}{=} \mathord{e^{-\varkappa t}} \Psi. \end{equation} The resulting equation for $\Omega$ takes the form \begin{align*} \partial_t \Omega &= \partial_t (e^{-\varkappa t} \Psi)=e^{-\varkappa t} \partial_t\Psi-\varkappa e^{-\varkappa t} \Psi=e^{-\varkappa t} \partial_t\Psi-\varkappa \Omega,\\ \partial_t^2 \Omega &= e^{-\varkappa t} \partial_t^2\Psi-2\varkappa e^{-\varkappa t}\partial_t\Psi+\varkappa^2e^{-\varkappa t}\Psi= e^{-\varkappa t}\partial_t^2\Psi-2\varkappa(\partial_t \Omega+\varkappa \Omega)+\varkappa^2\Omega\\ & = e^{-\varkappa t}\partial_t^2\Psi-2\varkappa \partial_t \Omega -\varkappa^2\Omega, \end{align*} or \begin{equation} -e^{-\varkappa t}\partial_t^2\Psi=-\partial_t^2\Omega-2\varkappa \partial_t \Omega -\varkappa^2\Omega. \end{equation} Thus we have obtained \begin{equation} e^{-\varkappa t} \square \Psi=\square \Omega-2\varkappa\partial_t \Omega-\varkappa^2\Omega=-\frac{1}{6} Te^{-\varkappa t}(e^{\varkappa t}+\Psi)^3-\varkappa^2\Omega, \end{equation} or \begin{equation} \label{eq:Omega} \square \Omega-2\varkappa\partial_t \Omega=-\frac{1}{6} T(t,x)e^{2\varkappa t}(1+\Omega)^3. \end{equation} We wish to show the existence of global classical solutions of equation \eqref{eq:Omega} demanding a small source term $T(t,x)$.
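The chain of identities above can be checked numerically. The following sketch (an illustration only; the test function $\Psi$ and the value of $\varkappa$ are arbitrary choices) verifies the key identity $e^{-\varkappa t}\partial_t^2\Psi=\partial_t^2\Omega+2\varkappa\partial_t\Omega+\varkappa^2\Omega$ for $\Omega=e^{-\varkappa t}\Psi$ by finite differences:

```python
import numpy as np

kappa = 0.7

def Psi(t):                       # an arbitrary smooth test function
    return np.sin(2.0 * t) * np.exp(0.3 * t)

def Omega(t):                     # the rescaled variable Omega = e^{-kappa t} Psi
    return np.exp(-kappa * t) * Psi(t)

def d1(f, t, h=1e-4):             # central first difference
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):             # central second difference
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

t = np.linspace(0.0, 3.0, 301)
lhs = np.exp(-kappa * t) * d2(Psi, t)
rhs = d2(Omega, t) + 2.0 * kappa * d1(Omega, t) + kappa**2 * Omega(t)
err = np.max(np.abs(lhs - rhs))
print(err)  # the two sides agree up to the finite-difference error
```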
On the one hand, the term $e^{2\varkappa t }$ seems to hamper the proof of the desired global existence, but on the other hand, we have obtained a good dissipative term of the form $-2\varkappa\partial_t\Omega$. This is why we perform the transformation $\widetilde T= \widetilde g_{\alpha\beta}\widetilde T^{\alpha\beta}= e^{3\varkappa t} T$, which also implies that the right-hand side of the wave equation \eqref{eq:Omega} takes the form $-\frac{1}{6}e^{-\varkappa t} \widetilde T(1+\Omega)^3$. If $\Omega$ remains bounded, then the right-hand side will tend to zero. That is why we finally consider the following system \begin{subequations} \begin{align} \label{eq:section2-field:1} - &\partial_t^2 \Omega-2\varkappa \partial_t \Omega +\Delta \Omega=-\mathord{e^{-\varkappa t}} a(t,x)(1+\Omega)^3\\ \label{eq:section2-field:2} & (\Omega(0,x),\partial_t \Omega(0,x))=(f(x),g(x)), \end{align} \end{subequations} where we have denoted $\frac{1}{6}\widetilde T$ by $a(t,x)$. \begin{rem}[The scaling and the Euler equations] The above scaling of the trace of the energy-momentum tensor will change the Euler equations. That is why this scaling has to be taken into account for the coupled Euler-Nordström system, which we want to treat in a forthcoming paper. Moreover, it turns out that we also need to scale the metric and the velocity as follows: $\widetilde g_{\alpha\beta}=e^{-2\varkappa t} g_{\alpha\beta}=e^{-2\varkappa t}\phi^2 \eta_{\alpha\beta}$ and $\widetilde{u}^\alpha =\mathord{e^{\varkappa t}} u^\alpha$, which is compatible with the scaling $\widetilde T= e^{3\varkappa t} T$. \end{rem} In what follows we will not consider the Euler-Nordström system but instead consider the fluid as a given source of the field equations, and therefore we will consider the right-hand side of equation \eqref{eq:Euler-Nordstrom:5} as a given function of $(t,x)$.
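A small numerical experiment (an illustration only; the damping constant $\varkappa$ and the constant source $a$ below are arbitrary small values) shows the dissipative mechanism at work for the spatially homogeneous mode of the system: the damping term $-2\varkappa\partial_t\Omega$ together with the decaying factor $e^{-\varkappa t}$ keeps $\Omega$ bounded.

```python
import math

# Spatially homogeneous version of the final system (the zero Fourier mode):
#   Omega'' + 2*kappa*Omega' = e^{-kappa t} * a * (1 + Omega)^3.
# Illustration only: kappa and the constant source a are arbitrary small values.
kappa, a = 0.5, 0.05
T, n = 40.0, 200000
h = T / n
y, v, t = 0.0, 0.0, 0.0        # Omega(0) = Omega'(0) = 0
sup_y = 0.0
for _ in range(n):             # simple explicit Euler time stepping
    acc = math.exp(-kappa * t) * a * (1.0 + y) ** 3 - 2.0 * kappa * v
    y += h * v
    v += h * acc
    t += h
    sup_y = max(sup_y, abs(y))
print(sup_y)  # Omega stays bounded (and small) on [0, T]
```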
A similar setting was considered by H.~Friedrich for the Einstein vacuum equations with positive cosmological constant, for which he proved global existence of classical solutions for small initial data \cite{Friedrich:198 }. We point out that we require the deviation $e^{\varkappa t}\Omega=\Psi$ to be spatially periodic, and that is why we study the Cauchy problem \eqref{eq:section2-field:1}--\eqref{eq:section2-field:2} in the Sobolev spaces $H^m(\setT^3)$. We shall decompose this space into two orthogonal components, namely, $H^m(\setT^3)=\setR \oplus \mathbullet H^m(\setT^3)$, where the second component consists of all functions with zero mean over the torus $\setT^3$. The reason for this decomposition is that the homogeneous space $\mathbullet H^{m}(\setT^3)$ possesses some convenient features for our energy estimates and seems best suited for our setting. However, there is a technical difficulty in using these spaces, namely the presence of the nonlinear term $(1+\Omega)^3$, which cannot belong to the homogeneous spaces. We solve this problem by performing a projection of our variables into a part that belongs to these spaces and another part that satisfies an ordinary differential equation; we refer to section \ref{sec:math-prel} for details. As we will see in section \ref{sec:math-prel}, these spaces possess some nice features, such as Proposition \ref{prop:9}, that simplify the energy estimates which we shall use for proving our results. Having set up the problem, we outline the structure of our paper and summarize our main results. In section \ref{sec:math-prel}, we introduce the necessary mathematical tools, such as homogeneous and non-homogeneous Sobolev spaces on the torus $\setT^3$. Using Fourier series, in section \ref{sec:semi-linear-wave}, we obtain, for small initial data and a small source term, global existence and uniqueness of solutions in the $ H^m(\setT^3)$ spaces (see Theorem \ref{thm:1}).
We then turn, in Section \ref{symm-hyper-const}, to the theory of symmetric hyperbolic systems. We write the wave equation in a slightly unorthodox way as a symmetric hyperbolic system (see system \eqref{eq:symm:1}) and then prove global existence, uniqueness and asymptotic decay for a small source term, but not necessarily small initial data (see~Theorem~\ref{thr:section5-sh:1}). The reason we consider the semi-linear wave equation \eqref{eq:section2-field:1} in the framework of the theory of symmetric hyperbolic systems is that in the future we want to consider the coupled Euler-Nordström system, and we already know that the Euler equations can be cast into that form (see \cite{BK }). Finally, in Section \ref{sec:blow-solut-large}, we show that if the source term is not small, then the corresponding solutions blow up in finite time. It turns out, however, that for the proof of the blow-up result we need that $(1+\Omega(t,x))\geq 0$, which seems natural if the initial data are positive. However, positivity of the initial data alone is not sufficient; additional conditions are needed, which also result in a more elaborate proof. That has been taken care of in the last section. \section{Mathematical Preliminaries} \label{sec:math-prel} \subsection{Sobolev spaces on the torus $\setT^3$} \label{sec:non-homog-sobol} We consider solutions on the torus $\mathbb T^3$ using Sobolev spaces $H^m$, where $m$ is a nonnegative integer (see e.g.~\cite[Chap~3.1]{taylor9 }, \cite[Chap.~5.10]{robinson200 }). It is natural to represent functions on the torus by Fourier series and their norms by Fourier coefficients.
For a function $f$, its Fourier series is given by \begin{equation} \label{eq:fs:1} f(x)=\sum_{ k\in \setZ^3} \widehat f_{ k}e^{ix\cdot k}, \end{equation} where \begin{equation} \label{eq:fs:2} \widehat f_{ k}=\frac{1}{(2\pi)^3}\int_{\mathbb T^3}f(x)e^{-ix\cdot k } dx, \end{equation} $x\cdot k=x_1k_1+x_2k_2+x_3k_3$, $x\in \setT^3$, and $ k\in \setZ^3$. The $H^m$ norm is given by \begin{equation} \label{eq:norm:1} \|f\|^2_{H^m(\mathbb T^3)}=\|f\|^2_{H^m}= |\widehat f_0|^2+ \sum_{0\neq k\in\setZ^3}| k|^{2m}|\widehat f_{ k}|^2. \end{equation} The homogeneous Sobolev spaces $\mathbullet H^m$ are defined by the semi-norm \begin{equation} \label{eq:norm:2} \|f\|^2_{\mathbullet H^m(\mathbb T^3)}=\|f\|^2_{\mathbullet H^m}= \sum_{0\neq k\in\setZ^3}| k|^{2m}|\widehat f_{ k}|^2. \end{equation} We decompose the Sobolev space $H^m$ into two orthogonal components \begin{equation} \label{eq:norm:3} H^m=\setR \oplus \mathbullet H^m. \end{equation} A function $f\in H^m$ belongs to $\mathbullet H^m$ if and only if it has zero mean, that is, \begin{equation} \label{eq:fs:3} \frac{1}{(2\pi)^3}\int_{\mathbb T^3}f(x)dx=0. \end{equation} By Parseval's identity, \begin{equation*} \|f\|^2_{L^2(\mathbb T^3)}=\sum_{k\in\setZ^3}|\widehat f_k|^2, \end{equation*} and since $\widehat{(\partial^\alpha f)}_k =(ik)^\alpha \widehat{f}_k$, the following norm equivalence holds: \begin{equation*} \|f\|^2_{ H^m( \setT^3)}\simeq \|f\|^2_{L^2(\mathbb T^3)}+ \sum_{|\alpha |= m} \|\partial^\alpha f\|_{L^2(\setT^3)}^2. \end{equation*} We also introduce an inner product: for two vector-valued real functions $U$ and $V$, we set \begin{equation} \label{eq:norm:4} \langle U, V\rangle_m=\widehat U_0\cdot \widehat V_0+ \sum_{0\neq k\in \setZ^3}|k|^{2m}\left(\widehat U_k\cdot \overline{\widehat V_k}\right). \end{equation} The following proposition, which is a certain version of Wirtinger's inequality \cite{Dym_P.Mckean_8 }, is a simple consequence of the representation of the homogeneous norm \eqref{eq:norm:2}.
\begin{prop}[Estimate for the gradient] \label{prop:9} Let $m\geq0$ be an integer, let $u\in \mathbullet H^{m+1}$, and set $ \partial_x u=(\partial_1 u,\partial_2u,\partial_3 u)^{\intercal}$. Then the following holds \begin{equation*} \label{eq:norm:36} \left\|{u}\right\|_{\mathbullet H^{m+1}}=\|\partial_x u\|_{\mathbullet H^{m}}. \end{equation*} \end{prop} \begin{proof} Since $|(\widehat{\partial_x u})_k|^2=|k|^2\ |\widehat u_k|^2$, we obtain by the representation \eqref{eq:norm:2} of the norm that \begin{equation} \|\partial_x u\|_{\mathbullet H^{m}}^2=\sum_{0\neq k\in\setZ^3} |k|^{2m} |(\widehat{\partial_x u})_k|^2 =\sum_{0\neq k\in\setZ^3} |k|^{2m} |k|^2 |\widehat u_k|^2=\left\|u\right\|_{\mathbullet H^{m+1}}^2, \end{equation} which proves the proposition. \end{proof} \subsection{Calculus in the Sobolev spaces on the torus $\setT^3$} \label{sec:calculus-sobol} We recall that the known properties of Sobolev spaces defined over $\setR^n$, such as multiplication, embedding and Moser-type estimates, also hold for Sobolev spaces defined over the torus $\setT^n$, see e.g.~\cite[Chap.~13]{taylor97 }. \begin{prop}[A Nonlinear estimate] \label{prop:1} Let $m>\frac{3}{2}$ and $a \in H^{m}$. Then there is a constant $C(A)$, depending only on $A$ and on the constants of multiplication and embedding, such that \begin{equation} \label{eq:estimate:3} \left\|a(1+u)^3 \right\|_{H^{m}}, \left\|a(1+u)^3 \right\|_{L^\infty}\leq C(A) \|a\|_{H^{m}} \end{equation} for all $u\in H^m$ with $\|u\|_{H^m}\leq A$. \end{prop} \begin{proof} By the multiplication property (\cite[Proposition 3.7,~Chap.~13]{taylor97 }), there is a constant $C$ such that \begin{equation} \label{eq:section3-fourier:1} \begin{split} \left\|a(1+u)^3\right\|_{H^{m}} & \leq C \left\|a\right\|_{H^{m}} \left\|(1+u)^3\right\|_{H^{m}} \leq C \left\|a\right\|_{H^{m}} \left\|(1+u)\right\|_{H^{m}}^3 \\ & \leq C \left\|a\right\|_{H^{m}} \left(1+\|u\|_{H^m}\right)^3 \leq C \left\|a\right\|_{H^{m}} \left(1+A\right)^3.
\end{split} \end{equation} Using the embedding $\|u\|_{L^\infty}\leq C \|u\|_{H^m}$, we see that \eqref{eq:estimate:3} holds. \end{proof} \subsection{Estimate of symmetric hyperbolic system} We shall also need the following property of solutions to semi-linear symmetric hyperbolic systems. Consider a symmetric hyperbolic system \begin{equation} \label{eq:sh:1} \partial_t U=\sum_{j=1}^3A^j(t,x)\partial_j U +F(t,x,U), \end{equation} where the matrices $A^j(t,x)$ are symmetric and $F(t,x,U)$ is a smooth function of $U $. The next proposition provides a uniform modulus of continuity for the difference $U(t,\cdot)-U_0(\cdot) $ in the $H^{m-1} $ norm. \begin{prop}[Modulus of continuity] \label{prop:6} Let $m>\frac{5}{2}$, $A^j \in L^\infty([0,T];H^m )$ for some positive $T$ and $F(t,x,0)\in L^\infty([0,T];H^m )$. Assume that $U(t)\in C([0,T];H^m )\cap C^1([0,T];H^{m-1})$ is the solution to system \eqref{eq:sh:1} with initial data $U_0\in H^m$. Then there is a constant $C(\|U_0\|_{H^m})$ such that \begin{equation} \|U(t)-U_0\|_{ H^{m-1}}\leq C({\left\|{U_0}\right\|_{H^m}})t^{\frac{1}{m}}\quad \text{for}\ \ 0<t<T. \end{equation} \end{prop} \begin{rem} We know from the existence theory for quasilinear symmetric hyperbolic systems that the solution $U$ belongs to a certain ball around $U_0$ in the $H^m$ space (see e.g. \cite{KATO}, \cite{rauch12:_hyper}). So we may assume that $\|U(t)\|_{H^m}\leq \|U_0\|_{H^m}+R$ for some positive $R$ and $t\in[0,T]$. The same phenomenon appears also for quasi-linear wave equations (see e.g. \cite[Theorem 6.4.11]{Hormander_1997}). \end{rem} \begin{proof} Let $ t<T$, then \begin{equation} U(t,x)-U_0(x)=\int_{0}^t\partial_tU(\tau,x) d\tau. \end{equation} By the Cauchy--Schwarz inequality, it follows that \begin{equation} \Big| U(t,x)-U_0(x)\Big|^2\leq t\int_{0}^t|\partial_tU(\tau,x)|^2 d\tau.
\end{equation} Hence we conclude that \begin{equation} \label{eq:mod:4} \| U(t,x)-U_0(x)\|_{L^2}^2 \leq t\int_{0}^t\int |\partial_tU(\tau,x)|^2dx d \tau \leq t^2\left\|\partial_t U\right\|_{L^\infty([0,T];L^2)}^2. \end{equation} Since $U$ satisfies system \eqref{eq:sh:1}, we obtain \begin{equation} \label{eq:sh:2} \begin{split} \|\partial_t U(t)\|_{L^2} &\leq \|\partial_t U(t)\|_{H^{m-1}} \leq \sum_{j=1}^3 \|A^j(t,x)\partial_j U(t)\|_{H^{m-1}}+ \|F(t,x,U(t))\|_{H^{m-1}}\\ & \leq C\sum_{j=1}^3 \|A^j(t,\cdot)\|_{H^m} \|U(t)\|_{H^{m}} + C(\|U(t)\|_{L^\infty}) \|U(t)\|_{H^m}+ \|F(t,\cdot,0)\|_{H^m}. \end{split} \end{equation} Here we used the multiplication property and Moser's third estimate, see e.g. \cite[Theorem~6.4.1]{rauch12:_hype }, \cite[Proposition 3.9,~Chap.~13]{taylor97}. Thus it follows from the remark that \begin{equation} \sup_{[0,T]}\|\partial_t U(t)\|_{L^2}\leq C(\|U_0\|_{H^m}). \end{equation} We now apply the interpolation estimate $\|u\|_{H^r}\leq \|u\|_{H^{m}}^{\frac{r}{m}}\|u\|_{L^2}^{1-\frac{r}{m}}$ for $0<r< m$ (see e.g.~\cite[Prop.~1.52]{Bahouri_2011}) and inequality \eqref{eq:mod:4}; then \begin{equation} \label{eq:estimate:9} \begin{split} \|U(t)-U_0\|_{ H^{m-1}}\leq \|U(t)-U_0\|_{ H^{m}}^{\frac{m-1}{m}} \|U(t)-U_0\|_{L^2}^{\frac{1}{m}} \leq \|U(t)-U_0\|_{H^m}^{\frac{m-1}{m}} C_0(\|U_0\|_{H^m})t^{\frac{1}{m}}. \end{split} \end{equation} Since $\|U(t)\|_{H^m}\leq \|U_0\|_{H^m}+R$, that completes the proof. \end{proof} \subsection{Gronwall inequality} We shall use the following version of Gronwall's inequality (see e.~g.~\cite{Bahouri_2011}). \begin{lem}[Gronwall's inequality] \label{lem:Gronwall} Let $g$ be a $C^1$ function and let $f$ and $A$ be continuous functions on the interval $[t_0, T]$. Suppose that for $t\in [t_0,T]$ the function $g$ obeys \begin{equation} \label{eq:section2-preliminaries:1} \frac{1}{2}\frac{d}{dt}g^2(t)\leq A(t)g^2(t)+f(t)g(t).
\end{equation} Then for $t\in [t_0,T]$ we have \begin{equation} \label{eq:section2-preliminaries:2} g(t)\leq e^{\int_{t_0}^t A(\tau)d\tau} g(t_0)+\int_{t_0}^t e^{\int_{\tau}^tA(s)ds} f(\tau)d\tau. \end{equation} \end{lem} \section{The Cauchy problem for a semi-linear wave equation using Fourier series} \label{sec:semi-linear-wave} In this section we shall investigate the Cauchy problem \eqref{eq:section2-field:1}--\eqref{eq:section2-field:2}; however, for convenience, we multiply the wave equation by $-1$ and denote the unknown by $u$ instead of $\Omega$, which results in the following semi-linear wave equation \begin{subequations} \begin{align} \label{eq:wave:1} &\partial_t^2 u+2\varkappa \partial_t u -\Delta u=\mathord{e^{-\varkappa t}} a(t,x)(1+u)^3\\ \label{eq:wave:2} & (u(0,x),\partial_t u(0,x))=(f(x),g(x)). \end{align} \end{subequations} Here $\varkappa$ is a positive constant, while $a(t,x)$ is a smooth function as we discussed in Section \ref{sec:fields-equations}. We are interested in proving the global existence of classical solutions to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} for small initial data and $a(t,x)$. We also note that the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} has some similarities with the Cauchy problem of the damped semi-linear wave equation \begin{subequations} \begin{align} \label{eq:section3-fourier:7} & \partial_t^2 u+2\varkappa \partial_t u -\Delta u=|u|^p, \qquad (t,x)\in \setR_+\times \setR^3\\ \label{eq:section3-fourier:6} & (u(0,x),\partial_t u(0,x))=(f(x),g(x)), \end{align} \end{subequations} for which it is known that for $2\leq p\leq 3$ there exist global solutions for small initial data; for further details we refer to \cite{Ebert_Reissig_1 }, \cite{Todorova_Yordanov_0 } and the references therein. However, we did not find in the literature any results concerning the Cauchy problem \eqref{eq:section3-fourier:7}--\eqref{eq:section3-fourier:6} on the torus.
There is, however, another difference between the two Cauchy problems \eqref{eq:section3-fourier:7}--\eqref{eq:section3-fourier:6} and \eqref{eq:wave:1}--\eqref{eq:wave:2}. In equation \eqref{eq:wave:1}, the function $a(t,x)$ is essential, in the sense that global existence depends on the smallness of this function, while the structure of the nonlinear term is of less importance. \subsection{Local existence} \label{sec:local-existence} Before presenting our results concerning the global existence of classical solutions, we shall discuss the question of local existence and uniqueness for the initial value problem \eqref{eq:wave:1}--\eqref{eq:wave:2}. There are well-known local existence and uniqueness theorems for quasilinear wave equations of the form \begin{equation} \label{eq:section3-fourier2} g^{\alpha\beta}(u,u')\partial_\alpha\partial_\beta u=F(u), \end{equation} where $g^{\alpha\beta}(u,u')$ has a Lorentzian signature and $u'=\partial_\alpha u$, $\alpha=0,1,2,3$; see for example \cite[Theorem 6.4.11]{Hormander_1997}, \cite[Theorem 4.1]{Sogge_95} and \cite[Theorem 5.1]{Shatah-Struwe-98}. These references treat the initial value problems in the Sobolev space $H^m(\setR^3)$ and under the condition $F(0)=0$. We consider solutions of equation \eqref{eq:wave:1} that belong to Sobolev spaces on the torus $\setT^3$, $H^m(\setT^3)$, and we observe that the right-hand side of \eqref{eq:section3-fourier2} does not satisfy the condition $F(0)=0$. Nevertheless, the above existence results can be applied to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} for the following reasons. \begin{enumerate}[label=\arabic*.] \item The energy estimates are an indispensable tool for proving local existence for the linearized equation. The energy estimates rely on the formula for integration by parts $\int u\partial_{x_j} v dx=-\int \partial_{x_j} u v dx$, which holds for periodic functions and for rapidly decreasing functions in $\setR^n$.
That is why the energy estimates in the above references hold in the Sobolev spaces $H^m(\setT^3)$ as well. \item The Moser-type inequality, the second important tool, states that $\|F(u)-F(0)\|_{H^m}\leq C \|u\|_{H^m}$ for a sufficiently smooth function $F$. This nonlinear estimate is valid both for $u\in H^m(\setR^3)$ and for $u\in H^m(\setT^3)$. When the equations are considered on $\setR^3$, the requirement $F(0)=0$ is needed since the constant function does not belong to the Sobolev space $H^m(\setR^3)$. However, the situation is different on the torus. Here, obviously, the constant function belongs to the space $H^m(\setT^3)$. \end{enumerate} So we conclude that, with some minor modifications of \cite[Theorem 6.4.11]{Hormander_1997}, the following result on local existence and uniqueness holds. \begin{thm}[Local existence] \label{thr:section3-fourier:1} Let $m>\frac{5}{2}$, $a(t,\cdot)\in L^\infty([0,\infty); H^m(\setT^3))$, $ f\in H^{m+1}(\setT^3)$ and $ g\in H^{m}(\setT^3)$. Then there exists a positive $T$ and a unique solution $u$ to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} such that \begin{equation*} u\in L^\infty([0,T]; H^{m+1}(\setT^3))\cap C^{0,1}([0,T]; H^{m}(\setT^3)), \end{equation*} where $C^{0,1}$ denotes the space of Lipschitz continuous functions. \end{thm} \subsection{Global existence} \label{subsec:homogeneous} Once the existence and uniqueness of local solutions have been established (by Theorem \ref{thr:section3-fourier:1}), we turn now to the question of whether global solutions to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} exist. Again, the energy estimates are the main tool for treating this problem. These energy estimates are different from the ones that have been used for local existence. Our method consists in expanding the solution into Fourier series, which allows us to solve the corresponding ordinary differential equations for the Fourier coefficients and to use the norm \eqref{eq:norm:1} to derive the desired estimates.
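The mode-wise strategy can be illustrated numerically (an illustration only; the mode, the damping constant and the data below are arbitrary choices): for a single Fourier mode $k\neq 0$ the homogeneous part of equation \eqref{eq:wave:1} reduces to a damped oscillator whose explicit solution decays like $e^{-\varkappa t}$, and the formula can be checked against a direct numerical integration.

```python
import numpy as np

# Single-mode illustration (arbitrary mode and data): the homogeneous part of
# the k-th Fourier mode ODE is the damped oscillator
#   y'' + 2*kappa*y' + |k|^2 y = 0,
# with explicit solution
#   e^{-kappa t} [ f cos(w t) + (g + kappa*f)/w sin(w t) ],
# where w = sqrt(|k|^2 - kappa^2); kappa < 1 <= |k| keeps w real, which is
# the reason for restricting kappa to (0,1).
kappa, k2 = 0.5, 4.0           # |k|^2 = 4, e.g. for the mode k = (2,0,0)
w = np.sqrt(k2 - kappa**2)
f, g = 1.0, -0.3               # mode data (hat f_k, hat g_k)

def exact(t):
    return np.exp(-kappa * t) * (f * np.cos(w * t)
                                 + (g + kappa * f) / w * np.sin(w * t))

def acc(y, v):                 # y'' = -2*kappa*y' - |k|^2 y
    return -2.0 * kappa * v - k2 * y

# classical RK4 integration of the equivalent first-order system
T, n = 5.0, 50000
h = T / n
y, v = f, g
for _ in range(n):
    k1y, k1v = v, acc(y, v)
    k2y, k2v = v + 0.5 * h * k1v, acc(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
    k3y, k3v = v + 0.5 * h * k2v, acc(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
    k4y, k4v = v + h * k3v, acc(y + h * k3y, v + h * k3v)
    y += h / 6.0 * (k1y + 2 * k2y + 2 * k3y + k4y)
    v += h / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)

err = abs(y - exact(T))
print(err)  # the explicit formula and the numerical integration agree
```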
Our main result in this section is the following theorem. \begin{thm}[Global existence of classical solutions for small data] \label{thm:1} Let $m> \frac{5}{2}$, $f\in H^{m+1}$, $g\in H^m$, and $a\in C([0,\infty); H^{m})$. Then there is a suitable constant $\epsilon$ such that if the following holds \begin{equation*} \|f\|_{ H^{m+1}}, \|g\|_{ H^{m}}, \sup_{[0,\infty)}\|a(t,\cdot)\|_{ H^{m}}<\epsilon, \end{equation*} then the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} has a unique global solution of the form \begin{equation} \label{eq:section3-fourier:4} u\in C([0,\infty); H^{m+1}). \end{equation} Moreover, there exists a positive constant $C_1$ such that \begin{equation} \label{eq:limit-zero} \|u(t,\cdot)\|_{ H^{m+1}}\leq {C_1}. \end{equation} \end{thm} \subsection{Proof of Theorem \ref{thm:1}} \label{sec:outline-proof} The main points and ideas of the proof of Theorem \ref{thm:1} can be described as follows: \begin{enumerate}[label=\arabic*.] \item Obtain an energy estimate for the linearized equation. \item Use the Banach fixed point theorem for the linearized equation. \end{enumerate} We start with the energy estimates for the linearized system of equation \eqref{eq:wave:1}. For any function $v\in H^{m}$ we set \begin{equation} \label{eq:section3-fourier:9} F(t,x)\eqdef a(t,x)\left( 1+v \right)^{3} \end{equation} and consider the following linear initial value problem \begin{subequations} \begin{align} \label{eq:wave:7} &\partial_t^2 u+2\varkappa \partial_t u -\Delta u= \mathord{e^{-\varkappa t}} F(t,x)\\ \label{eq:wave:8} & (u(0,x),\partial_t u(0,x))=(f(x),g(x)). \end{align} \end{subequations} We now consider the Fourier coefficients \begin{equation*} \widehat u_k(t)=\frac{1}{(2\pi)^3}\int_{\setT^3} u(t,x)e^{-ix\cdot k}dx, \quad k\in \setZ^3, \end{equation*} and the coefficients of the other data of the Cauchy problem \eqref{eq:wave:7}--\eqref{eq:wave:8} as well.
We obtain, by this procedure, for each $ k\in \setZ^3$ an ordinary differential equation \begin{subequations} \begin{align} \label{eq:fs:4} & \widehat u^{\prime\prime}_{ k}(t)+2\varkappa \widehat u'_{ k}(t)+| k|^2\widehat u_{ k}(t)=\mathord{e^{-\varkappa t}} \widehat F_{ k}(t)\\ \label{eq:fs:5} & \widehat u_{ k}(0)=\widehat f_{ k}, \quad \widehat u'_{ k}(0)=\widehat g_{ k}. \end{align} \end{subequations} We can solve \eqref{eq:fs:4}--\eqref{eq:fs:5} explicitly; however, since the structure of the solutions depends on $\varkappa$, and in order to work with similar formulas for all $k\neq 0$, we restrict $\varkappa$ to the interval $(0,1)$. We present the energy estimates in the following proposition. \begin{prop}[Energy estimate for the linearized wave equation] \label{prop:3.2} Let $ m\geq 0$ and $0<\varkappa<1$, and assume $F\in C([0,\infty); H^m)$, $f\in H^{m+1}$, and $g \in H^m$. Then there exists a unique solution $u\in C([0,\infty); H^{m+1})$ to equation \eqref{eq:wave:7} with initial data \eqref{eq:wave:8}, and moreover, it obeys \begin{equation} \label{eq:estimate-linear} \begin{split} \|u(t,\cdot)\|^2_{H^{m+1}} & \leq {2e^{-2\varkappa t}}\left\{(1+2\varkappa^2)(1+t^2)\|f\|^2_{\mathbullet H^{m+1}}+2(1+t^2) \|g\|^2_{\mathbullet H^{m}}+t(1+t^2)\int_0^t \|F(\tau,\cdot)\|^2_{\mathbullet H^{m}} d\tau\right\} \\ & + \widehat f_{0}^2+\widehat g^2_{0}\left(\frac{1-e^{-2\varkappa t}}{2\varkappa}\right)^2+\frac{1}{4} \sup_{\tau\in [0,t]}|\widehat F_{ 0}(\tau)|^2\left( \frac{1-e^{-\varkappa t}}{\varkappa}\right)^4.
\end{split} \end{equation} \end{prop} \begin{proof} For each $k\neq 0$ the solution of the initial value problem of the ordinary differential equations \eqref{eq:fs:4}--\eqref{eq:fs:5} is then given by \begin{equation} \label{eq:fs:7} \begin{split} \widehat u_{ k}(t) & =\mathord{e^{-\varkappa t}} \left\{ \widehat f_{ k} \cos\left(\sqrt{|k|^2-\varkappa^2}\, t\right)+\frac{\widehat g_{ k}+\varkappa \widehat f_{ k}}{\sqrt{|k|^2-\varkappa^2}}\, \sin\left(\sqrt{|k|^2-\varkappa^2}\, t\right)\right\} \\ & +\frac{1}{\sqrt{|k|^2-\varkappa^2}}\int_0^t e^{-\varkappa(t-\tau)}\sin\left(\sqrt{|k|^2-\varkappa^2}\,(t-\tau)\right)e^{ -\varkappa \tau}\widehat F_{ k}(\tau)d\tau\\ & = \mathord{e^{-\varkappa t}} \left\{ \widehat f_{ k} \cos\left(\sqrt{|k|^2-\varkappa^2}\,t\right)+\frac{\widehat g_{ k}+\varkappa \widehat f_{ k}}{\sqrt{|k|^2-\varkappa^2}}\sin\left(\sqrt{|k|^2-\varkappa^2} t\right)\right. \\ & +\left. \frac{1}{\sqrt{|k|^2-\varkappa^2}}\int_0^t\sin\left(\sqrt{|k|^2-\varkappa^2} (t-\tau)\right)\widehat F_{ k}(\tau)d\tau \right\}, \end{split} \end{equation} and for $ k=0$, \begin{equation} \label{eq:u-0} \widehat u_{ 0}(t)=\widehat f_{ 0}+\widehat g_{0}\left(\frac{1-e^{-2\varkappa t}}{2\varkappa}\right)+\frac{1}{2\varkappa}\int_0^t\left(1-e^{ -2\varkappa(t-\tau) } \right)e^{-\varkappa \tau} \widehat F_{ 0}(\tau) d\tau. \end{equation} We shall now estimate $\|u\|_{H^{m+1}}^2$ by the formula \eqref{eq:norm:1}. For $k\neq 0$, we conclude from equality \eqref{eq:fs:7} and the trivial inequality $(a+b+c)^2\leq 2(a^2+b^2+c^2)$ that \begin{equation*} \begin{split} & |k|^{2(m+1)}|\widehat u_{ k}(t)|^2 \leq 2e^{-2\varkappa t}|k|^{2(m+1)}\left\{|\widehat f_k|^2\left(\cos\left(\sqrt{|k|^2-\varkappa^2}\, t\right)\right)^2 \right. \\ + & \left. 
\left|\widehat g_k+\varkappa \widehat f_k\right|^2\left(\frac{\sin\left(\sqrt{|k|^2-\varkappa^2}\, t\right)}{\sqrt{|k|^2-\varkappa^2}}\right)^2+ \left(\int_0^t \frac{\sin\left(\sqrt{|k|^2-\varkappa^2}\, (t -\tau)\right)}{\sqrt{|k|^2-\varkappa^2}} \widehat{F_k}(\tau)d\tau\right)^2 \right\}\\ & = I_k +II_k+III_k. \end{split} \end{equation*} The first term is easy to estimate, and we obtain that \begin{equation} \label{eq:e-1} I_k\leq 2e^{-2\varkappa t}|k|^{2(m+1)}|\widehat f_k|^2. \end{equation} For the second and third terms we use the inequality $\sqrt{1+\xi^2}|\sin(\xi t)|\leq \xi\sqrt{1+t^2}$, with $\xi=\sqrt{|k|^2-\varkappa^2}$, which implies \begin{equation} \frac{\left|\sin\left(\sqrt{|k|^2-\varkappa^2}\, t\right)\right|}{\sqrt{|k|^2-\varkappa^2}}\leq\sqrt{\frac{1+t^2}{1+\xi^2}}=\sqrt{\frac{ 1+t^2}{|k|^2+1-\varkappa^2}}\leq \frac{\sqrt{1+t^2}}{|k|}. \end{equation} Hence \begin{equation} \label{eq:e-2} II_k\leq 2 e^{-2\varkappa t}|k|^{2m}|\widehat g_k +\varkappa \widehat f_k|^2(1+t^2)\leq 2 e^{-2\varkappa t}|k|^{2m}\left(2|\widehat g_k|^2 +2\varkappa^2 |\widehat f_k|^2\right)(1+t^2) \end{equation} and \begin{equation} \label{eq:e-3} III_k\leq 2 e^{-2\varkappa t}|k|^{2m} t\int_0^t (1+(t-\tau)^2)|\widehat F_k(\tau)|^2d\tau\leq 2 e^{-2\varkappa t}|k|^{2m} t(1+t^2)\int_0^t |\widehat F_k(\tau)|^2d\tau. \end{equation} We now turn to the zero mode \eqref{eq:u-0}, and we start with its integral term, \begin{equation} \label{eq:u-0:2} \bigg|\frac{1}{2\varkappa}\int_0^t\left(1-e^{-2\varkappa(t-\tau)}\right) e^{-\varkappa \tau} \widehat F_{ 0}(\tau) d\tau \bigg|\leq \frac{1}{2}\left(\frac{1-e^{-\varkappa t}}{\varkappa}\right)^2\sup_{[0,t]}| \widehat F_0(\tau)|. \end{equation} This leads to \begin{equation} \label{eq:zero:1} |\widehat u_0(t)|^2\leq 2\left\{ \widehat f_{ 0}^2+ \widehat g^2_{0}\left(\frac{1-e^{-2\varkappa t}}{2\varkappa}\right)^2+\frac{1}{4}\left(\frac{1-e^{-\varkappa t}}{\varkappa}\right)^4\left(\sup_{[0,t]}| \widehat F_0(\tau)|\right)^2\right\}.
\end{equation} Summing up the inequalities \eqref{eq:e-1}, \eqref{eq:e-2}, \eqref{eq:e-3} and \eqref{eq:zero:1} implies that inequality \eqref{eq:estimate-linear} holds, and this completes the proof of Proposition \ref{prop:3.2}. \end{proof} We now turn to the main result of this section, namely the proof of Theorem \ref{thm:1}. \begin{proof}[Proof of Theorem \ref{thm:1} by a fixed point argument] Based on the energy estimate \eqref{eq:estimate-linear} of the solution to the linear Cauchy problem \eqref{eq:wave:7}--\eqref{eq:wave:8}, we shall show the existence of classical solutions to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} in the interval $[0,\infty)$ in the Sobolev space $ H^{m+1}$ for $m> \frac{5}{2}$, under the assumption that the initial data, as well as $a(t,x)$, are sufficiently small. In order to achieve this, we define a linear operator \begin{equation} \label{eq:linear-operator} \mathscr{L}: C([0,\infty); H^{m+1})\to C([0,\infty); H^{m+1}), \end{equation} as follows. Let $u=\mathscr L (v)$ be the solution to the linear equation \begin{subequations} \begin{align} \label{eq:wave:12} &\partial_t^2 u+2\varkappa \partial_t u -\Delta u=\mathord{e^{-\varkappa t}} a(t,x)(1+v)^3\\ \label{eq:wave:13} & (u(0,x),\partial_t u(0,x))=(f(x),g(x)). \end{align} \end{subequations} Next, for $R>0$ we define a bounded set $B_R\subset C([0,\infty); H^{m+1})$ as follows \begin{equation} \label{eq:ball} B_R =\{v(t,\cdot)\in C([0,\infty); H^{m+1}): \sup_{[0,\infty)}\|v(t,\cdot)\|_{H^{m+1}}\leq R,\ v(0,x)=f(x), \partial_t v(0,x)=g(x)\}. \end{equation} Obviously, the ball $B_R$ is a closed set in the Banach space $C([0,\infty); H^{m+1})$, and that is why we can apply the Banach fixed point theorem to the operator $\mathscr{L}$, which will enable us to prove the existence of global solutions.
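The smallness mechanism behind the fixed-point argument can be seen in a toy scalar analogue (an illustration only, not the operator $\mathscr L$ itself): the map $v\mapsto \epsilon(1+v)^3$ mimics how the solution operator feeds the nonlinearity, multiplied by the small source, back into the iteration; for small $\epsilon$ it maps a ball into itself and its iterates converge.

```python
# Toy scalar analogue (illustration only) of the fixed-point argument:
# the map  L(v) = eps * (1 + v)^3  mimics how the solution operator feeds
# the nonlinearity (1+v)^3, multiplied by the small source, back into v.
# For small eps, L maps the ball {|v| <= R} into itself and is a contraction
# there (|L'(v)| = 3*eps*(1+v)^2 < 1), so the iterates converge.
eps = 0.05          # plays the role of sup_t ||a(t,.)||_{H^m}
R = 0.5             # radius of the ball B_R

v = 0.0
history = [v]
for _ in range(60):
    v = eps * (1.0 + v) ** 3
    history.append(v)

in_ball = all(abs(x) <= R for x in history)
final_residual = abs(history[-1] - eps * (1.0 + history[-1]) ** 3)
print(in_ball, final_residual)  # iterates stay in the ball and converge
```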
In order to apply the Banach fixed point theorem we need to show: \begin{enumerate}[label=\alph*)] \item \label{item:section3-fourier:2} $\mathscr L:B_R\to B_R$, that is, $\mathscr L$ maps the ball into itself. \item \label{item:section3-fourier:3} $\mathscr L:B_R\to B_R$ is a contraction. \end{enumerate} We start with \ref{item:section3-fourier:2}: We shall use the energy estimate provided by Proposition \ref{prop:3.2}. So we set \begin{align*} M_1 & =\max\{2e^{-2\varkappa t}(1+2\varkappa^2)(1+t^2):t\geq 0\};\\ M_2 & =\max\{4e^{-2\varkappa t}\left(1+t^2\right):t\geq 0\};\\ M_3 & =\max\{2e^{-2\varkappa t}t^2\left(1+t^2\right):t\geq 0\}. \end{align*} Using standard calculus in $H^m(\setT^3)$, there is a constant $C(R) $ such that \begin{equation} \|(1+v)^3\|_{L^\infty}\leq C_e \|(1+v)^3\|_{H^m}\leq C(R) \end{equation} for any $ v\in B_R$; here $C_e$ is the constant of the embedding $H^m \hookrightarrow L^\infty$. We first estimate the integral term and $\widehat F_0$ on the right-hand side of \eqref{eq:estimate-linear}. So \begin{equation} \int_0^t\|F(\tau,\cdot)\|_{\mathbullet H^m}^2d\tau\leq t\sup_{[0,t]}\|a(\tau,\cdot)(1+v(\tau,\cdot))^3\|_{\mathbullet H^m}^2\leq t C_m^2 \sup_{[0,\infty)}\|a(\tau,\cdot)\|_{\mathbullet H^m}^2C^2(R), \end{equation} where $C_m$ is the constant of the multiplication in the Sobolev space $H^m$. Now, \begin{equation} \label{eq:est-0} \widehat F_0(\tau)=\frac{1}{(2\pi)^3}\int_{\setT^3}a(\tau,x)(1+v(\tau,x))^3dx. \end{equation} By Jensen's inequality (see e.~g.~\cite[Ch.~2]{Lieb-Loss}), \begin{equation} \begin{split} | \widehat F_0(\tau)|^2 &\leq \frac{1}{(2\pi)^3}\int_{\setT^3}a^2(\tau,x)(1+v(\tau, x))^6dx\leq \|(1+v(\tau, \cdot))^3\|_{L^\infty}^2\|a(\tau,\cdot)\|_{L^2}^2 \\ & \leq C^2(R) \|a(\tau,\cdot)\|_{H^m}^2.
\end{split} \end{equation} Now, by Proposition \ref{prop:3.2}, inequality \eqref{eq:estimate-linear}, $u=\mathscr L (v)$ satisfies the inequality \begin{equation} \begin{split} \|u(t,\cdot)\|_{H^{m+1}}^2 & \leq M_1\|f\|_{\mathbullet H^{m+1}}^2+ M_2\|g\|_{\mathbullet H^{m}}^2+M_3C_m^2 C^2(R) \sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}^2 \\ & + \widehat f_0^2+ \frac{\widehat g_0^2}{4\varkappa^2}+\frac{1}{4\varkappa^4} C^2(R) \sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}^2 \leq R^2, \end{split} \end{equation} if \begin{equation} \label{eq:cond-1} \|f\|_{H^{m+1}}^2\leq \frac{ R^2}{4\max\{M_1,1\}}, \end{equation} \begin{equation} \label{eq:cond-2} \|g\|_{H^{m}}^2\leq \frac{ R^2}{4\max\{M_2,\frac{1}{4\varkappa^2}\}} \end{equation} and \begin{equation} \label{eq:cond-3} \sup_{[0,\infty)}\|a(t,\cdot)\|_{H^{m}}^2\leq \frac{R^2}{2C_m^2C^2(R)}\frac{1}{4\max\{M_3,\frac{1}{4\varkappa^4}\}}. \end{equation} Thus, $\mathscr L$ maps the ball into itself provided that \eqref{eq:cond-1}, \eqref{eq:cond-2} and \eqref{eq:cond-3} hold. \ref{item:section3-fourier:3} {Contraction:} Let $w=\mathscr L(v_1)-\mathscr L(v_2)$, then $w$ satisfies \begin{align} \label{eq:wave:14} &\partial_t^2 w+2\varkappa \partial_t w -\Delta w=\mathord{e^{-\varkappa t}} a(t,x)\left((1+v_1)^3-(1+v_2)^3\right),\\ \label{eq:wave:15} & (w(0,x),\partial_t w(0,x))=(0,0). \end{align} By the energy estimate \eqref{eq:estimate-linear}, we obtain that \begin{equation} \begin{split} \|w\|_{H^{m+1}}^2 & \leq 2 e^{-2\varkappa t}t^2(1+t^2)\sup_{[0,t]}\|a(\tau,\cdot)\left((1+v_1(\tau, \cdot))^3-(1+v_2(\tau, \cdot))^3\right)\|_{H^m}^2 \\ &+ \frac{1}{4\varkappa^4}\sup_{[0,t]} |\widehat F_0(\tau)|^2.
\end{split} \end{equation} Note that \begin{equation*} \left((1+v_1)^3-(1+v_2)^3\right)=(v_1-v_2)\left(3+3(v_1+v_2)+ (v_1^2+v_1v_2+v_2^2)\right), \end{equation*} and that, similarly to equation \eqref{eq:est-0}, we obtain \begin{equation} |\widehat F_0(\tau)|^2\leq \|a(\tau,\cdot)\|_{H^m}^2\|(v_1-v_2)(\tau,\cdot)\left(3+3(v_1+v_2)+ (v_1^2+v_1v_2+v_2^2)\right)(\tau,\cdot)\|_{L^\infty}^2. \end{equation} So by the embedding $H^m\hookrightarrow L^\infty$, the multiplication property of $H^m$ and the fact that $v_1,v_2\in B_R$, there exists a constant $K(R)$ such that \begin{equation} \|w(t,\cdot)\|_{H^{m+1}}^2\leq \max\{M_3,\frac{1}{4\varkappa^2}\}K^2(R) \sup_{[0,\infty)}\|(v_1-v_2)(t,\cdot)\|_{H^m}^2\sup_{[0,\infty)}\|a(t, \cdot)\|_{H^m}^2 \end{equation} holds. Thus, the operator $\mathscr L:B_R\to B_R$ is a contraction provided that \begin{equation} \label{eq:cond-4} \sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}^2\leq \frac{1}{2}\frac{1}{ \max\{M_3,\frac{1}{4\varkappa^2}\}K^2(R)}. \end{equation} So let $\epsilon$ be the minimum of the upper bounds \eqref{eq:cond-1}--\eqref{eq:cond-4}; then the existence of a unique global solution follows from the application of the Banach fixed point theorem. The solution belongs to the ball $B_R$, and therefore inequality \eqref{eq:limit-zero} holds. That completes the proof of Theorem \ref{thm:1}. \end{proof} \section{The wave equation as a modified symmetric hyperbolic system} \label{symm-hyper-const} In this section, we investigate the questions of global existence and asymptotic decay of classical solutions to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} by using the theory of symmetric hyperbolic systems and the corresponding energy estimates. Since the relativistic Euler equations can be written as a symmetric hyperbolic system (see~\cite{BK}), this will enable us, in the future, to couple the semi-linear equation \eqref{eq:wave:1} to the Euler equations \eqref{eq:Euler-Nordstrom:6}.
It is a well-known fact that wave equations can be cast into symmetric hyperbolic form. It turns out, however, that we need a modification of this standard procedure, which we will outline in the next subsection. With this new system at hand, we are able to prove results similar to those in Section \ref{sec:semi-linear-wave}. There are, however, some important differences between the results of the two sections which we have to point out. We do not require the initial data to be small, and we can even drop the term $e^{-\varkappa t}$ and yet obtain global existence. However, since we rely, to a certain extent, on properties of the $\mathbullet H^{m}$ spaces, and since the right-hand side of the wave equation \eqref{eq:section2-field:1} contains the term $\left( 1+u \right)^{3}\not\in \mathbullet H^{m}$, we will perform a projection of the wave equation that allows us to obtain a system consisting of an ordinary differential equation and a modified wave equation whose right-hand side does belong to $\mathbullet H^{m}$. \subsection{The projection of the wave equation} \label{sec:proj-wave-equat} Based on our observations made in Section \ref{sec:non-homog-sobol} on the norms of the spaces $H^{m}$ and $\mathbullet H^m$, and in particular the orthogonal decomposition \eqref{eq:norm:3} of $H^m$, we define the orthogonal projection $P_0 : H^m \to \setR$ by \begin{equation} \label{eq:section4-sh6} \widehat{u}_0=P_0(u). \end{equation} We denote by $u_h$ the complementary projection, that is, \begin{equation} \label{eq:section4-sh7} u_h=\left( \Id-P_0 \right)u. \end{equation} Since $u_h$ belongs to $\mathbullet H^m$, its norm is given by the formula \begin{equation} \label{eq:section4-sh8} \left\Vert u_h \right\Vert_{H^m}^2 =\sum_{k\neq0} \left\vert k \right\vert^{2m} \left\vert \widehat{u}_{k} \right\vert^2, \end{equation} and obviously \( \left\langle \widehat{u}_0,u_{h} \right\rangle_m=0\) holds, where the inner product is given by equation \eqref{eq:norm:4}.
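Before applying the projections, it may help to record why $P_0$ reduces the wave operator to an ordinary differential operator in time. A short verification, stated up to the normalization constant used for the Fourier coefficients:

```latex
P_0\left(\Delta u\right)=\frac{1}{(2\pi)^3}\int_{\setT^3}\Delta u \,dx=0,
\qquad
P_0\left(\partial_t^2 u+2\varkappa\,\partial_t u\right)
  =\widehat u_0^{\prime\prime}+2\varkappa\,\widehat u_0^{\prime},
```

since the integral of $\Delta u$ over the closed manifold $\setT^3$ vanishes by the divergence theorem, and $P_0$ acts only on the spatial variable and hence commutes with $\partial_t$.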
We apply now the projections $P_0$ and $\Id-P_0$ to the wave equation \eqref{eq:section2-field:1}, that is, \begin{align} \label{eq:section4-sh10} P_0 \left\{ \partial_t^2 u +2\varkappa \partial_t u -\Delta u \right\} &= P_0 \left\{ \mathord{e^{-\varkappa t}} a(t,x)(1+u)^3 \right\} \end{align} and \begin{equation} \label{eq:section4-sh11} \left( \Id-P_0 \right)\left\{ \partial_t^2 u +2\varkappa \partial_t u -\Delta u \right\}= \left( \Id -P_0 \right) \left\{ \mathord{e^{-\varkappa t}} a(t,x)(1+u)^3 \right\}. \end{equation} These projections result in the following system \begin{subequations} \begin{align} \label{eq:section4-sh12} & \widehat{u}_0^{\prime\prime} + 2\varkappa \widehat{u}_0^{\prime} = e^{-\varkappa t}\widehat F_0 \\ \label{eq:section4-sh32} & \partial_t^2 u_h +2\varkappa \partial_t u_h -\Delta u_h = \mathord{e^{-\varkappa t}} \left( a(t,x)(1+u)^3 -\widehat{F}_{0} \right), \end{align} \end{subequations} where \begin{equation} \widehat F_0= P_0\left(a(t,x) \left( 1+u \right)^3\right)= \frac{1}{(2\pi)^3}\int_{\setT^3} a(t,x) \left( 1+u \right)^3dx. \end{equation} \subsection{A semi-linear wave equation written as a symmetric hyperbolic system} \label{sec:semi-linear-wave-1} The most common way to write the wave equation as a symmetric hyperbolic system is to consider either the vector-valued function \begin{equation*} V= \begin{pmatrix} \partial_t u\\ \partial_x u \end{pmatrix} \qquad\mbox{or} \qquad V= \begin{pmatrix} \partial_t u\\ \partial_x u\\ u \end{pmatrix} \end{equation*} as an unknown (here $\partial_x u\eqdef (\partial_1 u,\partial_2 u,\partial_3 u)^{\intercal}$). However, in both cases, for a system with damping terms, the resulting energy estimates are not suitable for showing global existence. We therefore introduce a different unknown by setting \begin{equation} \label{eq:section5-sh:5} V\eqdef \begin{pmatrix} \partial_tu_h +\varkappa u_h \\ \partial_x u_h \end{pmatrix}.
\end{equation} Then equation \eqref{eq:section4-sh32} can be written as a symmetric hyperbolic system as follows: \begin{equation} \label{eq:symm:1} \begin{split} \partial_t V=\sum_{k=1}^3 B^k\partial_k V-\varkappa V+\varkappa^2 \begin{pmatrix} u_{h} \\ 0 \\ 0\\ 0 \end{pmatrix} +\mathord{e^{-\varkappa t}} \begin{pmatrix} a(t,x) (1+u)^3 \\ 0 \\ 0 \\ 0 \end{pmatrix} - e^{-\varkappa t} \begin{pmatrix} \widehat{F_{0}} \\ 0 \\ 0 \\ 0 \end{pmatrix}, \end{split} \end{equation} where the $B^k$ are constant symmetric matrices, \begin{equation*} B^k=\begin{pmatrix} 0 & \delta_1^k & \delta_2^k & \delta_3^k \\ \delta_1^k & 0 & 0 & 0\\ \delta_2^k & 0 & 0 & 0\\ \delta_3^k & 0 & 0 & 0 \end{pmatrix}. \end{equation*} \subsection{Energy estimates} \label{sec:energy-estimate} \begin{defn}[The energy functional] The energy functional for the unknown $V$ given by equation \eqref{eq:section5-sh:5} is \begin{equation} \label{eq:energy:1} E(t)\eqdef \langle V(t),V(t)\rangle_m=\|\partial_t u_h+\varkappa u_h\|_{H^m}^2+\|\partial_x u_h\|_{ H^m}^2. \end{equation} \end{defn} \begin{rem}[About the definition of the energy] \label{rem:section4-sh:1} It might look surprising to define the energy as a scalar product in $H^{m}$ while the vector $V$ as defined in \eqref{eq:section5-sh:5} only contains terms that belong to $\mathbullet H^{m}$. We use this notation since the energy estimates contain terms that belong to $H^m$.
\end{rem} Proceeding in the usual way, suppose $V$ satisfies \eqref{eq:symm:1}; then differentiating the energy with respect to time yields \begin{equation*} \begin{split} \frac{1}{2}\frac{d}{dt} E(t) & = \langle \partial_t V(t),V(t)\rangle_m =\sum_{k=1}^3 \langle B^k\partial_k V, V\rangle_m -\varkappa \langle V,V \rangle_m+\varkappa^2 \langle u_h, \partial_t u_h+\varkappa u_h\rangle_m\\ & + \mathord{e^{-\varkappa t}} \langle a(t,\cdot)(1+u)^3,\partial_t u_h+\varkappa u_h\rangle_m - \mathord{e^{-\varkappa t}} \langle \widehat F_0,\partial_t u_h+\varkappa u_h \rangle_m\\ & = -\varkappa \| V\|_{H^m}^2+ \varkappa^2 \langle u_h, \partial_t u_h+\varkappa u_h\rangle_m + \mathord{e^{-\varkappa t}} \langle a(t,\cdot)(1+u)^3,\partial_t u_h+\varkappa u_h\rangle_m, \end{split} \end{equation*} where we used that $\mathbullet H^{m}\perp\setR $, and that, since the $B^k$ are symmetric and constant, integration by parts gives $\langle B^k\partial_k V, V\rangle_m =0$. By the Cauchy-Schwarz inequality, we obtain \begin{equation*} |\langle u_h, \partial_t u_h+\varkappa u_h\rangle_m|\leq \| u_h\|_{ H^m} \| \partial_t u_h+\varkappa u_h\|_{ H^m} \leq \| u_h\|_{ H^m}\| V\|_{ H^m} \end{equation*} and \begin{equation*} |\langle a(1+u)^3,\partial_t u_h+\varkappa u_h\rangle_m| \leq \Vert a \left( 1+u \right)^{3}\Vert_{H^{m}}\| V\|_{ H^m}, \end{equation*} which allows us to conclude, using the definition of the energy \eqref{eq:energy:1}, that \begin{equation} \label{eq:energy:2} \frac{1}{2} \frac{d}{dt} E(t)\leq -\varkappa E(t) + \left\{ \varkappa^2 \left\Vert u_h(t) \right\Vert_{H^m} + e^{-\varkappa t}\left\Vert a(t,\cdot)\left( 1+u(t) \right)^{3} \right\Vert_{H^{m}} \right\} \sqrt{E(t)}. \end{equation} We now apply Gronwall's inequality, Lemma \ref{lem:Gronwall}, on the interval $[t_0,t]$ with $A(t)=-\varkappa$, and obtain \begin{equation} \label{eq:Gronwall:2} \begin{split} \sqrt{E(t)} & \leq e^{-\varkappa(t-t_0)}\sqrt{E(t_0)} + \varkappa^2\int_{t_0}^te^{-\varkappa(t-s)} \left\Vert
u_h(s) \right\Vert_{H^{m}}ds \\ & + \int_{t_0}^te^{-\varkappa (t-s)}e^{-\varkappa s} \left\Vert a(s,\cdot) \left( 1+u(s) \right)^3 \right\Vert_{H^m}ds. \end{split} \end{equation} \begin{rem}[The role of $u_{h}$ in the a-priori estimates] \label{rem:section4-sh:2} We observe from the decomposition $1+u=1+\widehat u_0+u_h$ that, for a fixed $\widehat u_0$, equation \eqref{eq:section4-sh32} is not coupled to \eqref{eq:section4-sh12} and contains only the unknown $u_h$. Obviously, $u_h$ alone is not a solution to the system \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32}, but it enables us to obtain important \textit{a-priori} estimates for the solution. \end{rem} We are now in a position to apply this energy estimate to show global existence by a bootstrap argument, which is done in the next section. \subsection{Global existence by a bootstrap argument} \label{sec:glob-exist-bootstr} In this section, we take the initial data in the homogeneous space, \begin{equation} \label{eq:section5-sh:33} \left\{\begin{array}{ll} u(0,x)=f(x), & \partial_t u(0,x)=g(x),\\ f\in \mathbullet H^{m+1}, & g\in \mathbullet H^m. \end{array}\right. \end{equation} This is not essential for the proof, but it makes it somewhat simpler. The following theorem is the main result of this section. \begin{thm}[Global existence and decay of solutions] \label{thr:section5-sh:1} Let $0<\varkappa<1$ and $m> \frac{5}{2}$, let the initial data be as specified by \eqref{eq:section5-sh:33}, and let $a\in C([0,\infty);H^{m})$. There exists a suitable constant $\varepsilon$ such that if \begin{equation} \label{eq:small:1} \sup_{[0,\infty)}\|a(t,\cdot)\|_{ H^{m}}<\varepsilon, \end{equation} then system \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32}, or equivalently equation \eqref{eq:wave:1}, with initial data given by \eqref{eq:section5-sh:33} has a unique solution \begin{equation} \label{eq:section4-sh:10} u\in C([0,\infty); H^{m+1}).
\end{equation} Moreover, \begin{equation} \label{eq:decay} \lim_{t\to\infty}\|u(t)\|_{H^{m+1}}\leq \widetilde \epsilon, \end{equation} where $\widetilde \epsilon$ depends on the smallness condition \eqref{eq:small:1}. \end{thm} \begin{rem}[Comparison with Theorem \ref{thm:1}] \label{rem:section4-sh:3} We emphasize that, contrary to Theorem \ref{thm:1} in Section \ref{subsec:homogeneous}, the initial data are not required to be small. \end{rem} \begin{rem}[About the asymptotic behavior of the metric] \label{rem:section4-sh:5} Recall that the physical metric has the form $ g_{\alpha\beta}=\phi^2\eta_{\alpha\beta}$, where $\eta_{\alpha\beta} $ denotes the Minkowski metric. In Section \ref{sec:fields-equations} we concluded that the background metric has the form $e^{2\varkappa t} \eta_{\alpha\beta}$. We shall now use the asymptotic estimate \eqref{eq:decay} of the global solutions to compare the asymptotics of the physical metric with that of the background metric. We recall that $\phi=\mathord{e^{\varkappa t}}(1+u)$, where $u$ is the solution to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2}; therefore we can conclude that the asymptotic behavior of the metric is described by the following expression: \begin{equation} ( 1-\widetilde \epsilon)^2\leq \lim_{t\to\infty}\frac{g_{\alpha\beta}(t,x)}{e^{2\varkappa t}\eta_{\alpha\beta} }= \lim_{t\to\infty}(1+u(t,x))^2\leq (1+\widetilde \epsilon)^2. \end{equation} \end{rem} The proof of Theorem \ref{thr:section5-sh:1} is based on the following propositions, which we present together with their corresponding proofs. We recall that by the existence theorem, Theorem \ref{thr:section3-fourier:1}, the solution of the system \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32} exists in a certain time interval $[0,T]$. \begin{prop}[A priori estimates] \label{prop:2} Let $ 0<\varkappa<1$, $1<\beta<\frac{1}{\varkappa}$ and set $\alpha= \|V(0)\|_{H^m}$.
Assume the solution $u=\widehat u_0+u_h$ to \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32} with initial data \eqref{eq:section5-sh:33} exists for $t\in [0,T]$. If $\mathord{\left\|{a(t,\cdot)}\right\|_{ H^m}}$ is sufficiently small, then there exists a $T^{+}$, $0<T^{+}\leq T$, such that \begin{equation} \label{eq:section4-sh:2} \sup_{[0,T^{+}]}\mathord{\left\|{u(t)}\right\|_{ H^m}}\leq \alpha\beta \end{equation} and \begin{equation} \label{eq:section4-sh:1} E(T^{+})\leq E(0). \end{equation} \end{prop} \begin{proof} We start with the proof of inequality \eqref{eq:section4-sh:2}. Recall that although we have written $\alpha= \|V(0)\|_{H^m}$, the initial data are in the homogeneous Sobolev space, and therefore by Proposition \ref{prop:9} and \eqref{eq:energy:1} we obtain \begin{equation} \|u(0)\|_{H^{m+1}}=\|f\|_{H^{m+1}}=\|\partial_x f\|_{H^{m}}\leq \|V(0)\|_{H^m}=\alpha. \end{equation} Hence, since $ \beta > 1$, it follows from the existence theorem, Theorem \ref{thr:section3-fourier:1}, and the continuity property of the corresponding solutions that there exists $0<T^+\leq T$ such that \begin{equation} \label{eq:section4-sh4} \sup_{[0,T^+]}\| u(t)\|_{ H^m}\leq \alpha\beta. \end{equation} We now turn to inequality \eqref{eq:section4-sh:1}. For $t\in [0,T^+]$ we observe, using inequality \eqref{eq:Gronwall:2}, that \begin{equation} \label{eq:Gronwall:1} \begin{split} \sqrt{E(t)} & \leq e^{-\varkappa t}\sqrt{E(0)} + \varkappa^2\int_{0}^te^{-\varkappa (t-s)} \alpha\beta ds + \int_{0}^te^{-\varkappa (t-s)}e^{-\varkappa s} \left\Vert a(s,\cdot) \left( 1+u(s) \right)^3 \right\Vert_{H^m} ds\\ & \leq e^{-\varkappa t}\sqrt{E(0)} +\varkappa \left(1 -e^{-\varkappa t}\right)\alpha\beta+ te^{-\varkappa t}\sup_{[0,t]}\left\Vert a(s,\cdot) \left( 1+u(s) \right)^3 \right\Vert_{H^m}.
\end{split} \end{equation} A simple algebraic manipulation (multiplying by $e^{\varkappa t}$ and recalling that $\sqrt{E(0)}=\alpha$) shows that $E(t)\leq E(0)$ holds if \begin{equation} \varkappa \left(e^{\varkappa t}-1\right)\alpha\beta+ t\sup_{[0,t]}\left\Vert a(s,\cdot) \left(1+u(s) \right)^3 \right\Vert_{H^m}\leq \left(e^{\varkappa t}-1\right)\sqrt{E(0)}, \end{equation} or equivalently \begin{equation} \label{eq:section4-sh:7} t\sup_{[0,t]}\left\Vert a(s,\cdot) \left(1+u(s) \right)^3 \right\Vert_{H^m}\leq \left(e^{\varkappa t}-1\right)\alpha\left(1-\varkappa\beta\right). \end{equation} Since for $s\in [0,T^+]$ we have $\|u(s)\|_{H^m}\leq \alpha\beta\leq 2\alpha \beta$, we can apply Proposition \ref{prop:1} with $A=2\alpha \beta$, which results in \begin{equation} \label{eq:energy:3} \left\Vert a(s,\cdot) \left(1+u(s) \right)^3 \right\Vert_{H^m}\leq C(2\alpha\beta)\|a(s,\cdot)\|_{H^m}. \end{equation} We now set \begin{equation} \label{eq:energy:4} \epsilon_0=\frac{\varkappa\alpha(1-\varkappa\beta)}{C(2\alpha\beta)}. \end{equation} Since $\beta<\frac{1}{\varkappa}$, we have $1-\varkappa\beta>0$ and hence $\epsilon_0>0$; therefore we can demand the smallness condition \begin{equation} \label{eq:energy:5} \sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}\leq \epsilon_0. \end{equation} We now let $t=T^+$ in inequality \eqref{eq:section4-sh:7}; then by inequality \eqref{eq:energy:3} and condition \eqref{eq:energy:5}, with \eqref{eq:energy:4}, we conclude that \begin{equation} \label{eq:section4-sh3} \begin{split} T^+\sup_{[0,T^+]}\left\Vert a(s,\cdot) \left(1+u(s) \right)^3 \right\Vert_{H^m} & \leq T^+ C(2\alpha\beta)\sup_{[0,T^+]}\|a(s,\cdot)\|_{H^m}\leq T^+ C(2\alpha\beta)\epsilon_0 \\ & = \varkappa T^+\alpha(1-\varkappa\beta)\leq (e^{\varkappa T^+}-1)\alpha(1-\varkappa\beta) \end{split} \end{equation} holds, where in the last step we used the elementary inequality $x\leq e^{x}-1$. Consequently \eqref{eq:section4-sh3} implies inequality \eqref{eq:section4-sh:7}. This proves \eqref{eq:section4-sh:1} and completes the proof of the proposition.
\end{proof} Based on Proposition \ref{prop:2} we define \begin{defn}[Definition of $T^\ast$] \begin{equation} \label{eq:section4-sh:4} T^\ast=\sup\left\{T: \sup_{[0,T]}\mathord{\left\|{u(t)}\right\|_{H^m}}\leq \alpha\beta \ \text{and} \ E(T)\leq E(0)\right\}. \end{equation} \end{defn} The following proposition plays a central role in proving Theorem \ref{thr:section5-sh:1}. \begin{prop}[$T^{\ast}$ is not finite] \label{prop:10} Under the assumptions of Proposition \ref{prop:2}, we obtain \begin{equation*} T^\ast=\infty. \end{equation*} \end{prop} It is important to note that we need two conditions in the definition of $T^{\ast}$, as Proposition \ref{prop:2} already suggests. The role of these two conditions will become clearer after we finish the proof, and we will come back to this point. \textsc{Sketch of the proof:} The proof of this proposition is rather long and, as we said, crucial for Theorem \ref{thr:section5-sh:1}; that is why we sketch its structure here. We prove Proposition \ref{prop:10} by a contradiction argument; in other words, we assume that $T^\ast$ is finite, and then we show that both conditions of \eqref{eq:section4-sh:4} hold in a larger interval. The first step of the proof deals with the extension of the solution for $t>T^\ast$. In the second step, using the inequality \eqref{eq:section4-sh:1}, we show that there exists a $T^{\ddagger} >T^\ast$ such that \begin{equation} \label{eq:section4-sh:3} \sup_{[0,T^\ddagger]}\mathord{\left\|{u(t)}\right\|_{H^m}}\leq\alpha\beta. \end{equation} With this inequality proven, we are then able, in the third step, to show that \begin{equation} \label{eq:section4-sh14} E(T^\ddagger)\leq E(0). \end{equation} The validity of these inequalities on the interval $[0,T^{\ddagger}]$ contradicts the definition of $T^{\ast}$. Therefore we conclude that $T^\ast=\infty$.
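Schematically, the three steps of the sketch combine into one chain of implications (a compressed restatement in the notation of \eqref{eq:section4-sh:4}):

```latex
T^\ast<\infty
\;\Longrightarrow\;
\exists\,T^\ddagger>T^\ast:\;
\sup_{[0,T^\ddagger]}\left\|u(t)\right\|_{H^m}\leq\alpha\beta
\ \text{and}\ E(T^\ddagger)\leq E(0)
\;\Longrightarrow\;
T^\ast\geq T^\ddagger>T^\ast,
```

which is impossible, and hence $T^\ast=\infty$.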
\begin{proof}[Proof of Proposition \ref{prop:10}] \quad \begin{enumerate}[label=\textsc{Step \arabic*.},wide,labelwidth=!,labelindent=0pt] \item\label{item:section4-sh:3} We need to extend the solution beyond $T^{\ast}$, so let $\widetilde u$ be the solution to equation \eqref{eq:wave:1} with initial data $\widetilde u(T^\ast,x)=\widetilde f(x)$ and $ \partial_t \widetilde u(T^\ast,x)=\widetilde g(x)$, where $\widetilde f(x) =u(T^\ast,x)$ and $\widetilde g(x)=\partial_t u(T^\ast,x)$. We will show that \begin{equation} \label{eq:outline-of-proof-prop5:1} u(t)\in H^{m+1} \quad \mbox{for}\quad t\in [0,T^\ast], \end{equation} which implies $ \widetilde f\in H^{m+1}$, a fact that is needed in order to apply the existence theorem, Theorem \ref{thr:section3-fourier:1}. To prove \eqref{eq:outline-of-proof-prop5:1} we apply Proposition \ref{prop:9} to $ u_{h} $, which allows us to conclude \begin{equation} \label{eq:section4-sh35} \|u(t)\|_{H^{m+1}}^2=|\widehat u_0(t)|^2+\|u_h(t)\|_{H^{m+1}}^2=|\widehat u_0(t)|^2+\|\partial_x u_h(t)\|_{H^{m}}^2. \end{equation} Hence, since $\partial_x u_h(t)$ is a component of $V(t)\in H^m$, we see by equation \eqref{eq:section4-sh35} that $u(t)\in H^{m+1}$. Consequently, by the existence theorem, Theorem \ref{thr:section3-fourier:1}, there exists a $T_1>T^\ast$ such that $\widetilde u(t)$ exists for $t\in [T^\ast,T_1]$. \item\label{item:section4-sh:4} We turn now to the proof of inequality \eqref{eq:section4-sh:3}. Since $\|\widetilde u(T^\ast)\|_{H^m}\leq \alpha\beta$, there exists a $T_{2}$, $T^\ast<T_2\leq T_1$, such that \begin{equation} \label{eq:outline-of-proof-prop54} \sup_{ [T^{\ast},T_{2}]}\|\widetilde u(\tau)\|_{H^m}\leq 2\alpha\beta \end{equation} holds.
We now set \begin{equation} \label{eq:tilde} \widetilde V=\begin{pmatrix} \partial_t \widetilde u_h+\varkappa \widetilde u_h\\ \partial_x \widetilde u_h \end{pmatrix}; \end{equation} then $(\widehat{\widetilde u}_0,\widetilde u_h)$ solves system \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32} with the initial data \begin{subequations} \begin{align} & \widehat{\widetilde u}_0(T^\ast)=\widehat{\widetilde f}_0, \quad \partial_t\widehat{\widetilde u}_0(T^\ast)=\widehat{\widetilde g}_0 \\ & \widetilde u_h(T^\ast,x)=\widetilde f_h(x), \quad \partial_t\widetilde u_h(T^\ast,x)=\widetilde g_h(x). \end{align} \end{subequations} We shall estimate each component of $(\widehat{\widetilde u}_0,\widetilde u_h)$ separately. We take $\epsilon_1>0$ such that $\beta -\epsilon_1>1$, and we will prove below the following two inequalities: \begin{equation} \label{eq:outline-of-proof-prop53} |\widehat{\widetilde u}_0(t)|\leq \frac{\alpha \beta\epsilon_1}{2}, \qquad t\in [T^\ast,T_4], \end{equation} and \begin{equation} \label{eq:outline-of-proof-prop51} \sup_{[T^\ast,T_5]} \|\widetilde u_h(t)\|_{H^{m}}\leq \alpha(\beta-\epsilon_1). \end{equation} Here $T_4,T_5\in (T^\ast,T_2]$. Setting $T^{\ddagger}=\min\{T_4,T_5\}$ and combining \eqref{eq:outline-of-proof-prop53} and \eqref{eq:outline-of-proof-prop51}, we obtain \begin{equation} \label{eq:section4-sh:5} \sup_{[T^\ast,T^{\ddagger}]} \|\widetilde u(t)\|_{H^m}^2\leq \sup_{[T^\ast,T^{\ddagger}]}|\widehat{\widetilde u}_0(t)|^2+ \sup_{[T^\ast,T^{\ddagger}]}\|\widetilde u_h(t)\|_{H^m}^2\leq\frac{(\alpha\beta\epsilon_1)^2}{4} +(\alpha(\beta-\epsilon_1))^2\leq (\alpha\beta)^2, \end{equation} which proves \eqref{eq:section4-sh:3}. The last inequality holds provided that $\epsilon_1 \leq \frac{8}{\beta+4}$. We start with the estimate of $\widehat{\widetilde u}_0(t)$ for $t\in [T^\ast,T_2]$.
Since it satisfies the initial value problem \eqref{eq:fs:4}--\eqref{eq:fs:5} with $k=0$, its solution is given by \begin{equation} \label{eq:section4-sh:6} \widehat{\widetilde u}_0(t) = \widehat{\widetilde f}_0+\widehat{\widetilde g}_0\left(\frac{1-e^{-2\varkappa(t-T^\ast)}}{2\varkappa}\right) + \frac{1}{2\varkappa}\int_{T^\ast}^t\left(1-e^{ -2\varkappa(t-\tau) } \right)e^{-\varkappa \tau} \widehat F_{ 0}(\tau) d\tau, \end{equation} where \begin{equation} \widehat F_{ 0}(\tau)=\frac{1}{(2\pi)^3}\int_{\setT^3}a(\tau,x)(1+\widetilde u(\tau,x))^3 dx. \end{equation} Since $\widehat{\widetilde f}_0=\widehat u_0(T^\ast)$ and the initial data $f,g\in \mathbullet H^{m}$, we conclude by equation \eqref{eq:u-0} and estimate \eqref{eq:u-0:2} that \begin{equation} \label{eq:estimate:7} | \widehat{\widetilde f}_0|\leq \frac{1}{2}\left(\frac{1-e^{-\varkappa T^{\ast}}}{\varkappa}\right)^2\sup_{[0,T^\ast]}|\widehat F_0(t)| \end{equation} holds. By Proposition \ref{prop:1}, we obtain \begin{equation} \label{eq:estimate:8} |\widehat F_0(t)|\leq \sup_{[0,T^\ast]}\|a(t,\cdot)\|_{L^\infty}C(2\alpha\beta). \end{equation} Hence, if we require that \begin{equation} \label{eq:cond:1} \sup_{[0,\infty)}\|a(t,\cdot)\|_{L^\infty}\leq \frac{2\alpha\beta\varkappa^2\epsilon_1}{6 C(2\alpha\beta)}, \end{equation} then we conclude that \begin{equation} \label{eq:estimate:1} | \widehat{\widetilde f}_0|\leq \frac{\alpha\beta\epsilon_1}{6} \end{equation} holds. Before we proceed, we remark that the smallness condition \eqref{eq:cond:1} is given in terms of the $L^\infty$ norm, but it can easily be reformulated in terms of the $H^m$ norm by the Sobolev embedding theorem.
Next, since $\displaystyle\lim_{t\to T^\ast}\left(\frac{1-e^{-2\varkappa(t-T^\ast)}}{2\varkappa}\right)=0$, there exists a $T_3$, with $T^\ast<T_3\leq T_2$, such that \begin{equation} \label{eq:estimate:2} \left|\widehat{\widetilde g}_0\right| \left(\frac{1-e^{-2\varkappa(t-T^\ast)}}{2\varkappa}\right)\leq \frac{\alpha\beta\epsilon_1}{6}, \quad t\in [T^\ast,T_3]. \end{equation} For the third term on the right-hand side of equation \eqref{eq:section4-sh:6}, we use inequality \eqref{eq:outline-of-proof-prop54}, and then by an argument similar to the one used to estimate the term $| \widehat{\widetilde f}_0|$, we obtain \begin{equation} \label{eq:estimate:6} \left| \frac{1}{2\varkappa}\int_{T^\ast}^t\left(1-e^{ -2\varkappa(t-\tau) } \right)e^{-\varkappa \tau} \widehat F_{ 0}(\tau) d\tau\right|\leq \frac{1}{2\varkappa^2}C(2\alpha\beta)\sup_{[0,\infty)}\|a(t,\cdot)\|_{L^\infty} \leq \frac{\alpha\beta \epsilon_1}{6}, \end{equation} provided that the condition \eqref{eq:cond:1} is satisfied. Letting $T_4=\min\{T_2,T_3\}$, we conclude from inequalities \eqref{eq:estimate:1}, \eqref{eq:estimate:2} and \eqref{eq:estimate:6} that \eqref{eq:outline-of-proof-prop53} holds. We now turn to prove inequality \eqref{eq:outline-of-proof-prop51}: For a fixed $\widehat{\widetilde u}_0(t)$, the unknown $\widetilde V(t)$, as defined in equation \eqref{eq:tilde}, satisfies the symmetric hyperbolic system \eqref{eq:symm:1}, and that is why we can apply Proposition \ref{prop:6} with initial data $V(T^\ast,x)$. We first observe by Proposition \ref{prop:9} that \begin{equation} \label{eq:outline-of-proof-prop55} \begin{split} \mathord{\left\|{\widetilde u_h(t)}\right\|_{H^m}} &\leq \|\partial_x \widetilde u_h(t)\|_{H^{m-1}}\leq \|\widetilde V(t)\|_{H^{m-1}} \leq \| \widetilde V(t) -\widetilde V(T^\ast)\|_{H^{m-1}}+ \| \widetilde V(T^\ast)\|_{H^{m-1}} \end{split} \end{equation} holds.
By the definition of $T^\ast$, we can conclude that $E(T^\ast)\leq E(0)$ holds, which then implies the inequality $\|V(T^\ast,\cdot)\|_{H^m} =\sqrt{ E(T^\ast)}\leq \sqrt{E(0)}=\alpha$. We can now apply Proposition \ref{prop:6} to $\| \widetilde V(t) -\widetilde V(T^\ast)\|_{H^{m-1}}$ with $C_0$ depending on $\alpha$, combine it with inequality \eqref{eq:outline-of-proof-prop55}, and then obtain \begin{equation} \label{eq:section4-sh:13} \mathord{\left\|{\widetilde u_h(t)}\right\|_{H^m}} \leq C_0(\alpha)(t-T^\ast)^{\frac{1}{m}}+ \alpha\leq \alpha(\beta-\epsilon_1), \end{equation} provided that $t-T^\ast\leq \left(\frac{\alpha(\beta-\epsilon_1-1)}{C_0(\alpha)}\right)^m$. Thus \eqref{eq:outline-of-proof-prop51} holds with $T_5=T^\ast+\left(\frac{\alpha(\beta-\epsilon_1-1)}{C_0(\alpha)}\right)^m$. \item\label{item:section4-sh:5} It remains to show that $E(t)\leq E(0)$ for $t\in [T^\ast,T^{\ddagger}]$, where $T^\ddagger=\min\{T_4,T_5\}$. We will first establish the inequality \begin{equation} \label{eq:section4-sh:8} \sqrt{E(t)} \leq e^{-\varkappa(t-T^\ast)}\sqrt{E(T^\ast)}+ \varkappa\left(1-e^{-\varkappa(t-T^\ast)}\right)\alpha\beta+ e^{-\varkappa t}(t-T^\ast)\sup_{[T^\ast,t]}\left\Vert a(\tau,\cdot) \left( 1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m}. \end{equation} Using the energy estimate \eqref{eq:Gronwall:2} we observe that \begin{equation} \label{eq:Gronwall:8} \begin{split} \sqrt{E(t)} & \leq e^{-\varkappa(t-T^\ast)}\sqrt{E(T^\ast)} + \varkappa^2\int_{T^\ast}^te^{-\varkappa(t-\tau)} \left\Vert \widetilde u_h(\tau) \right\Vert_{H^{m}}d\tau \\ & + \int_{T^\ast}^te^{-\varkappa (t-\tau)}e^{-\varkappa \tau} \left\Vert a(\tau,\cdot) \left( 1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m} d\tau. \end{split} \end{equation} Since the inequality $\|\widetilde u_h(t)\|_{H^m}\leq \alpha\beta$ holds on the interval $[T^\ast,T^\ddagger]$, inequality \eqref{eq:section4-sh:8} follows by inserting this bound into the energy estimate \eqref{eq:Gronwall:8}.
From this inequality, we observe that \begin{math} \sqrt{E(t)}\leq \sqrt{E(0)} \end{math} holds if \begin{equation} \begin{split} & \varkappa\left(e^{\varkappa(t-T^\ast)}-1\right)\alpha\beta+ e^{-\varkappa t}e^{\varkappa(t-T^\ast)}(t-T^\ast)\sup_{[T^\ast,t]}\left\Vert a(\tau,\cdot) \left(1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m}\\ \leq & e^{\varkappa(t-T^\ast)}\sqrt{E(0)}-\sqrt{E(T^\ast)}=\sqrt{E(0)}\left(e^{ \varkappa(t-T^\ast) }-1\right)+\sqrt{E(0)}-\sqrt{E(T^\ast)}. \end{split} \end{equation} But we already know that $E(T^\ast)\leq E(0)$ holds, by the definition of $T^\ast$. Therefore it suffices to show that \begin{equation} \label{eq:outline-of-proof-prop512} \varkappa\left(e^{\varkappa(t-T^\ast)}-1\right)\alpha\beta+ e^{-\varkappa T^\ast}(t-T^\ast)\sup_{[T^\ast,t]}\left\Vert a(\tau,\cdot) \left(1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m} \leq \sqrt{E(0)}\left(e^{\varkappa(t-T^\ast)}-1\right). \end{equation} Note that, since $\varkappa$ is strictly positive, $e^{-\varkappa T^\ast}<1$, and therefore we can drop the factor $e^{-\varkappa T^\ast}$. So it is enough to show that \begin{equation} \label{eq:section4-sh:9} (t-T^\ast)\sup_{[T^\ast,t]}\left\Vert a(\tau,\cdot) \left(1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m} \leq \left(e^{\varkappa(t-T^\ast)}-1\right)\alpha(1-\beta\varkappa). \end{equation} We now let $t=T^{\ddagger}$ and proceed as in the proof of Proposition \ref{prop:2}.
Under the smallness condition \eqref{eq:energy:5} on $a(t,\cdot)$, where $\epsilon_0 $ is given by \eqref{eq:energy:4}, we conclude that \begin{equation} \label{eq:outline-of-proof-prop510} \begin{split} (T^{\ddagger}-T^\ast)\sup_{[T^\ast,T^{\ddagger}]}\left\Vert a(\tau,\cdot) \left(1+\widetilde u(\tau) \right)^3 \right\Vert_{H^m} &\leq(T^{\ddagger}-T^\ast)C(2\alpha\beta)\sup_{[0,\infty)}\|a(\tau,\cdot)\|_{H^m }\\ \leq \varkappa (T^{\ddagger}-T^\ast)\alpha(1-\beta\varkappa) &\leq\left(e^{\varkappa(T^{\ddagger}-T^\ast)}-1\right)\alpha(1-\beta\varkappa) \end{split} \end{equation} holds. We observe that the inequalities in expression \eqref{eq:outline-of-proof-prop510} imply inequality \eqref{eq:section4-sh:9}, and thus we have proved inequality \eqref{eq:section4-sh14}. Taking into account the conditions on $a(t,\cdot)$, namely \eqref{eq:energy:5} and \eqref{eq:cond:1} respectively, we conclude that there exists a positive $\epsilon$ depending on $\epsilon_0$ and $\epsilon_{1}$ such that if $\sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}\leq \epsilon$, then the inequalities $\sup_{[0,T^{\ddagger}]}\|u(t)\|_{H^m}\leq \alpha \beta$ and $E(T^\ddagger)\leq E(0)$ hold. It is important to note that the condition \eqref{eq:cond:1} can also be formulated in terms of the $H^m $ norm. Therefore, both conditions hold on the larger time interval $[0,T^{\ddagger}]$, which contradicts the definition of $T^\ast$. This shows that the assumption $T^\ast<\infty$ is false and completes the proof of Proposition \ref{prop:10}. \end{enumerate} \end{proof} \begin{rem}[About the definition of $T^\ast$] We come back to the question of why we had two conditions in the definition of $T^{\ast}$. One motivation was Proposition \ref{prop:2}; however, there is an important difference between the proofs of Proposition \ref{prop:2} and Proposition \ref{prop:10}.
While we proved \begin{equation} \sup_{[0,T]}\mathord{\left\|{u(t)}\right\|_{H^m}}\leq \alpha\beta \end{equation} in Proposition \ref{prop:2} by a simple continuity argument, we needed the condition $E(T^{\ddagger})\leq E(0)$ to prove the corresponding inequality in Proposition \ref{prop:10}. In other words, the two conditions are appropriately interconnected. \end{rem} \vspace{-0.35cm} We turn now to the proof of Theorem \ref{thr:section5-sh:1}. \vspace{-0.35cm} \begin{proof}[Proof of Theorem \ref{thr:section5-sh:1}] The global existence and regularity essentially follow from Propositions \ref{prop:2} and \ref{prop:10}. The asymptotic behavior \eqref{eq:decay} will be proven by considering $\left\vert \widehat{u}_{0} \right\vert$ and $\left\Vert u_{h} \right\Vert_{H^{m+1}}$ separately. We start with the proof of \eqref{eq:section4-sh:10}. By the existence theorem, Theorem \ref{thr:section3-fourier:1}, the solution to the initial value problem \eqref{eq:wave:1}--\eqref{eq:wave:2} exists in a certain time interval $[0,T]$. Consequently the system \eqref{eq:section4-sh12}--\eqref{eq:section4-sh32} has a solution in the time interval $[0,T]$. We can then apply Propositions \ref{prop:2} and \ref{prop:10}, which provide the existence of a global solution $u\in C([0,\infty);H^m)$. By Proposition \ref{prop:9} we obtain \begin{equation} \label{eq:outline-of-proof-theorem3:1} \|u(t)\|_{H^{m+1}}^2= |\widehat u_0(t)|^2+\|u_h(t)\|_{H^{m+1}}^2= |\widehat u_0(t)|^2+\|\partial_x u_h(t)\|_{H^{m}}^2. \end{equation} Hence, since $V\in C([0,\infty);H^m)$, it follows that $u\in C([0,\infty);H^{m+1})$. We now turn to the proof of the asymptotic behavior of the global solution as described by equation \eqref{eq:decay}. The idea is again to use the decomposition $u=\widehat{u}_{0}+u_{h}$ and to show that $\lim_{t\to\infty} \|u_h(t)\|_{H^{m+1}}=0$. So we set \begin{equation} \label{eq:outline-of-proof-theorem3:4} \mu\eqdef\limsup_{t\to\infty}\|u_h(t)\|_{H^{m+1}}.
\end{equation} Then for a given $\epsilon >0$, there exists a $t_0$ such that $ \sup_{[t_0,\infty)} \|u_h(t)\|_{H^{m+1}}\leq \mu+\epsilon $. Using the energy estimate \eqref{eq:Gronwall:2} for $t>t_0$ and Proposition \ref{prop:1} with $A=\mu+\epsilon$, we obtain \begin{equation} \label{eq:Gronwall:13} \begin{split} \sqrt{E(t)}&\leq e^{-\varkappa(t-t_0)}\sqrt{E(t_0)}+ \varkappa\left(1-e^{-\varkappa(t-t_0)}\right)\sup_{[t_0,t]} \| u_h(\tau)\|_{\mathbullet H^m} \\ & \quad + \mathord{e^{-\varkappa t}} (t-t_0)\sup_{[t_0,t]}\left\{\|a(\tau,\cdot) (1+u(\tau))^3\|_{H^m}\right\} \\ & \leq e^{-\varkappa(t-t_0)}\sqrt{E(t_0)}+ \varkappa\left(1-e^{-\varkappa(t-t_0)}\right)(\mu+\epsilon)\\ &\quad + \mathord{e^{-\varkappa t}} (t-t_0)C(\mu+\epsilon)\sup_{[0,\infty)}\|a(t,\cdot)\|_{H^m}. \end{split} \end{equation} We conclude from inequality \eqref{eq:Gronwall:13} that \begin{equation} \label{eq:asymp:1} \limsup_{t\to\infty}\sqrt{E(t)}\leq \varkappa(\mu+\epsilon) \end{equation} holds. On the other hand, using the fact that \begin{math} \|u_h(t)\|_{H^{m+1}}=\|\partial_x u_h(t)\|_{H^m}\leq \sqrt{E(t)} \end{math} we obtain \begin{equation} \label{sec:step-4} \mu\leq \limsup_{t\to\infty}{\sqrt{E(t)}}\leq \varkappa(\mu+\epsilon). \end{equation} We may assume that $\mu$ is strictly positive since otherwise there is nothing to be proven. Since $\varkappa$ is strictly smaller than $1$, we can choose $\epsilon$ to be $\epsilon=\left( 1-\varkappa \right)\mu>0$. Then, from inequality \eqref{sec:step-4}, we obtain \begin{equation} \label{eq:section4-sh:15} \mu\leq \varkappa \left( \mu+\epsilon \right)=\varkappa\mu + \varkappa \left( 1-\varkappa \right)\mu<\varkappa\mu+ \left( 1-\varkappa \right)\mu=\mu, \end{equation} which implies that $\mu=0$ holds. 
Recalling definition \eqref{eq:outline-of-proof-theorem3:4}, we conclude that \begin{equation} \label{eq:section4-sh:12} \lim_{t\to\infty}\|u(t)\|_{H^{m+1}}^2=\lim_{t\to\infty}|\widehat u_0(t)|^2+\lim_{t\to\infty} \|u_h(t)\|_{H^{m+1}}^2=\lim_{t\to\infty}|\widehat u_0(t)|^2 \end{equation} holds, and hence it remains only to estimate the limit of the term $|\widehat u_0(t)|$. Since the initial data satisfy $f,g\in \mathbullet H^{m}$, we can express $\widehat{u}_0(t)$ explicitly by formula \eqref{eq:u-0} and observe that \begin{equation} \widehat u_{ 0}(t)=\frac{1}{2\varkappa}\int_0^t\left(1-e^{ -2\varkappa(t-\tau) } \right)e^{-\varkappa \tau} \widehat F_{ 0}(\tau) d\tau \end{equation} holds. Using a procedure similar to the one that led to inequalities \eqref{eq:estimate:7} and \eqref{eq:estimate:8}, we conclude that \begin{equation} |\widehat u_0(t)|\leq \frac{2}{2\varkappa^2} \sup_{[0,\infty)}\|a(t,\cdot)\|_{L^\infty}C(2\alpha\beta). \end{equation} Thus, by the smallness condition \eqref{eq:cond:1}, we obtain \begin{equation} |\widehat u_0(t)|\leq\frac{\alpha\beta\epsilon_1}{3} \end{equation} and observe that inequality \eqref{eq:decay} holds with $\widetilde \epsilon=\frac{\alpha\beta\epsilon_1}{3}$. \end{proof} \end{document} \section{Blowup of solutions even for small initial data} \label{sec:blow-solut-large} In the previous sections, \ref{subsec:homogeneous} and \ref{sec:glob-exist-bootstr}, we proved the global existence (and uniqueness) of classical solutions in the Sobolev spaces $H^m$. It is important to emphasize that the smallness of $a(t,x)$ played an essential role in the proof. We therefore drop the smallness assumption on $a(t,x)$ and investigate the consequences. Our main result can be stated as follows.
\begin{thm}[Blowup in finite time] \label{thm:blow-up} Let $u$ be the solution to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2} in the interval $[0,T)$, where $0<T\leq \infty$, and assume the following conditions: \begin{equation} \label{eq:section3A-blowup:8} 0<a_{0}\leq a(t,x), \qquad \forall t\geq 0, \end{equation} \begin{equation} \label{eq:blowup:3} 1+f(x)>0, \quad \Delta f(x) \geq 0, \qquad x\in \setT^3, \end{equation} \begin{equation} \varkappa(1+f(x))+g(x)\geq 0, \end{equation} \begin{equation} \label{eq:section3A-blowup:7} \widehat g_{0}> 0 \end{equation} and \begin{equation} \label{eq:section3A-blowup:9} \widehat g_0^2-\frac{a_0}{2}(1+\widehat f_0)^4\leq 0. \end{equation} Then for sufficiently large $a_0$, $T$ is finite, and moreover the following holds: \begin{equation} \label{eq:section6-blowup:1} \lim_{t\uparrow T}\|u(t,\cdot)\|_{H^{m+1}}=\infty. \end{equation} \end{thm} We recall that $\widehat f_0$ and $\widehat g_0$ are the zero Fourier coefficients of $f$ and $g$ respectively. Also, note that condition \eqref{eq:blowup:3} implies that $1+\widehat f_0>0$. \begin{rem} \label{rem:section5-blowup:1} \hfill \begin{enumerate}[label=\arabic*.] \item Note that for large $a(t,x)$, blow-up occurs in finite time even if the initial data are small. \item We can actually neglect condition \eqref{eq:section3A-blowup:9}, since it most likely holds when $a_0$ is large. \item We proved the blow up under the assumptions that $\Delta f(x)\geq 0$ and $\widehat g_0>0$, where $\partial_t u(0,x)=g(x) $. Our conjecture is that the blow up holds even without those restrictions.
\end{enumerate} \end{rem} \textsc{Proof Sketch:} By the local existence theorem, Theorem \ref{thr:section3-fourier:1}, there exists a regular unique solution $u$ to the Cauchy problem, \eqref{eq:wave:1}--\eqref{eq:wave:2}, namely $u(t,\cdot)\in L^\infty([0,T]; H^{m+1}(\setT^3)\cap C^{0,1}([0,T]; H^{m}(\setT^3)$ We adopt an idea of Yagdjian \cite{Yagdjian_05} that was used for a different wave equation, and we set \begin{equation} \label{eq:blow:1} F(t)\upperrel{\rm def}{=}\frac{1}{(2\pi)^3}\int_{\setT^3} u(t,x)dx=\widehat u_{ 0}(t) \end{equation} and we shall derive differential inequality \eqref{eq:blow:6-a} for $F$. For that purpose, we need to apply Jensen's inequality to the right hand side of equation \eqref{eq:blow:3}. In order to do so, we have to ensure that the term $1+u(t,x)$ is nonnegative in the existence interval $[0,T)$ and we prove it in Section \ref{sec:positivity}. Finally, we use Lemma \ref{lem:4} below, which states that a function that satisfies differential inequality \eqref{eq:blow:6-a} blows up in finite time. Since \begin{equation} \label{eq:blowup:9} \|u(t)\|_{H^{m+1}}^2=|F(t)|^2+\sum_{0\neq k\in\setZ^3}|k|^{2(m+1)} | \widehat u_k(t)|^2\geq |F(t)|^2 \end{equation} holds, the blow up of $F$ implies the blow up of the solution to the Cauchy problem \eqref{eq:wave:1}--\eqref{eq:wave:2}. \begin{proof}[Proof of Theorem \ref{thm:blow-up}] We start to derive a differential inequality for $F$ that is defined by equation \eqref{eq:blow:1}. First, note that \begin{equation} F'(t)=\frac{1}{(2\pi)^3}\int_{\setT^3} \partial_t u(t,x)dx =\widehat u_{ 0}'(t) \quad \text{ and}\quad F''(t)=\frac{1}{(2\pi)^3}\int_{\setT^3} \partial_{tt} u(t,x)dx. \end{equation} It is a well known fact that the integral of the Laplacian over the $\setT^3$ is zero, or in other words, \begin{equation} \label{eq:section5-blowup1} \int_{\setT^3} \Delta u(t,x)dx =0. 
\end{equation} One way to prove equation \eqref{eq:section5-blowup1} is to expand $u$ in its Fourier series \eqref{eq:fs:1} and to observe that the zero Fourier coefficient of $\Delta u$ vanishes (another possibility to prove equation \eqref{eq:section5-blowup1} is to use Gauss' theorem). So we obtain \begin{equation} \label{eq:blow:3} F''+2\varkappa F'=\frac{1}{(2\pi)^3}\int_{\setT^3}\left(\partial_{tt} u+2\varkappa \partial_t u-\Delta u\right)dx =\mathord{e^{-\varkappa t}}\frac{1}{(2\pi)^3} \int_{\setT^3}a(t,x)(1+u(t,x))^3dx. \end{equation} Our idea is to estimate the right hand side of equation \eqref{eq:blow:3} by Jensen's inequality (see e.~g.~\cite[Ch.~2]{Lieb-Loss}), with the convex function $s^3$, where $s=\sqrt[3]{a(t,x)}(1+u(t,x))$. The function $s^3$ is convex only for $s\geq 0$. Recall that $0<a(t,x)$ holds by assumption \eqref{eq:section3A-blowup:8}. The proof of the positivity of $1+u(t,x)$ is more involved, and we refer to Theorem \ref{thm:positivity} in Section \ref{sec:positivity}. Applying Jensen's inequality as outlined above, we obtain \begin{equation} \label{eq:section5-blowup3} \begin{split} & \frac{1}{(2\pi)^3}\int_{\setT^3} a(t,x)\left(1+u(t,x)\right)^3 dx \geq \left( \frac{1}{(2\pi)^3}\int_{\setT^3} a^{1/3}(t,x)\left(1+u(t,x)\right)dx\right)^3\\ & \geq a_0 \left(\frac{1}{(2\pi)^3}\int_{\setT^3} \left(1+u(t,x)\right)dx\right)^3=a_0\left(1+F(t)\right)^3. \end{split} \end{equation} Hence, we have obtained the differential inequality \begin{subequations} \begin{align} \label{eq:blow:6-a} & F''+2\varkappa F'\geq e^{-\varkappa t} a_0\left(1+F\right)^3 \\ \label{eq:blow:6-b} & F(0)=\widehat f_0, \quad \ F'(0)=\widehat g_0 \end{align} \end{subequations} for $t\in [0,T)$. We now use Lemma \ref{lem:4} below, which states that $F$ blows up in finite time; by inequality \eqref{eq:blowup:9}, the solution $u(t,x)$ then blows up as well. \end{proof} It remains to state and to prove Lemma \ref{lem:4}.
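As a numerical illustration only (not part of the proof), one can integrate the borderline case of inequality \eqref{eq:blow:6-a}, taken as an equality, and observe finite-time blow-up for large $a_0$ despite the decaying factor $e^{-\varkappa t}$. The parameter values $\varkappa=1$, $a_0=1000$, $\widehat f_0=0$, $\widehat g_0=1$ in the sketch below are our own illustrative choices, consistent with the assumptions of Theorem \ref{thm:blow-up}.

```python
# Numerical sanity check (illustration only, not part of the proof):
# integrate the borderline ODE  F'' + 2*kappa*F' = exp(-kappa*t)*a0*(1+F)**3
# with F(0) = f0_hat, F'(0) = g0_hat, and watch F exceed a large threshold
# in finite time.  All parameter values here are illustrative assumptions.
import math

kappa, a0 = 1.0, 1000.0
f0_hat, g0_hat = 0.0, 1.0   # satisfy 1 + f0_hat > 0 and g0_hat > 0

def F_second(t, F, Fp):
    """F'' obtained by solving the ODE for the second derivative."""
    return math.exp(-kappa * t) * a0 * (1.0 + F) ** 3 - 2.0 * kappa * Fp

# explicit Euler with a small step; stop once F exceeds a large threshold
t, F, Fp = 0.0, f0_hat, g0_hat
dt, threshold, t_max = 1e-5, 1e6, 10.0
blowup_time = None           # numerical proxy for the blow-up time
while t < t_max:
    F, Fp = F + dt * Fp, Fp + dt * F_second(t, F, Fp)
    t += dt
    if F > threshold:
        blowup_time = t
        break

print("numerical blow-up time (illustrative):", blowup_time)
```

The point is only qualitative: the exponentially decaying source does not prevent the solution of the borderline equation from leaving any bounded set in finite time.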
\begin{lem}[Blow up for the associated differential inequality] \label{lem:4} Let $F$ satisfy the differential inequality \eqref{eq:blow:6-a} in the interval $[0,T)$, where $0<T\leq \infty$, with the initial data \eqref{eq:blow:6-b}. Suppose assumptions \eqref{eq:blowup:3}--\eqref{eq:section3A-blowup:9} of Theorem \ref{thm:blow-up} hold. Then, for sufficiently large $a_0$, $T$ is finite, and moreover \begin{equation} \label{eq:blow:6} \lim_{t \uparrow T}F(t)=\infty. \end{equation} \end{lem} Note that the differential inequality \eqref{eq:blow:6-a} contains the exponentially decaying factor $e^{-\varkappa t}$, so we want to know whether this factor prevents the blowup in finite time. It turns out, however, that this is not the case. \textsc{Proof sketch:} The proof consists of three main steps. In the first step, we show that $F'(t)$ is a positive function in the interval of existence $[0,T)$. In the second step, we make a change of variables in order to transform the differential inequality \eqref{eq:blow:6-a} into an inequality without a first-order term. This, together with the positivity of $F'$, will enable us in the third step to integrate the inequalities and to estimate the time of the blow up. \begin{proof}[Proof of Lemma \ref{lem:4}] \hfill \begin{enumerate}[label=\textbf{Step \arabic*},wide, itemsep=4ex, labelwidth=!, labelindent=0pt] \item \label{item:section5-blowup:1} We claim that $F^\prime(t)>0$ holds in the existence interval $[0,T)$. To see this, we set \begin{equation} \label{eq:section5-blowup:1} T^\ast=\sup\{T_1: F'(t)>0 \quad \text{for}\ \ t\in [0,T_1), 0\leq T_1\leq T\}. \end{equation} By the assumptions of Lemma \ref{lem:4}, $F^\prime(0)=\widehat g_0>0$, hence $T^\ast>0$. We now assume by contradiction that $T^\ast<T$; then, by the continuity of $F^\prime(t)$, we conclude that $F^\prime(T^\ast)=0$.
Recall that by assumption \eqref{eq:blowup:3}, $1+F(0)=1+\widehat f_0>0$, hence $1+F(T^\ast)\geq 1+F(0)>0$ and consequently \begin{equation} \label{eq:section5-blowup:3} F^{\prime\prime}(T^{\ast}) = F^{\prime\prime}(T^{\ast}) +2\varkappa F'(T^{\ast})\geq e^{-\varkappa T^{\ast}} a_0\left(1+F(T^{\ast})\right)^3>0. \end{equation} This implies that $F$ attains a strict local minimum at time $T^\ast$, which is impossible since $F$ is increasing on $[0,T^\ast)$. Therefore we conclude that $T^\ast =T$. \item\label{item:section5-blowup:2} We start with a change of variables of the form $\tau=\omega(t)$ and define a function $G(\tau)$ such that $F(t)=G(\omega(t))$. Then $F^\prime (t)= \frac{ dG}{d \tau}(\omega(t))\omega'(t)$ and $F^{\prime\prime} (t)= \frac{ d^2G}{d \tau^2}(\omega(t))(\omega'(t))^2+ \frac{ dG}{d \tau}(\omega(t))\omega^{\prime\prime}(t)$, which leads to \begin{equation} \label{eq:blow:2} F^{\prime\prime}+2\varkappa F^\prime= \frac{d^2 G}{d\tau^2}\left(\omega^\prime \right)^2 +\frac{d G}{d\tau}\left(\omega^{\prime\prime}+2\varkappa\omega^\prime\right). \end{equation} Now we choose $\omega $ such that it satisfies the equation $\omega^{\prime\prime}+2\varkappa\omega^\prime=0$, and in order to obtain a one-to-one transformation for $t >0$ we require that $\omega'>0$. It is straightforward to calculate its general solution \begin{equation*} \omega(t)=C_1e^{-2\varkappa t}+C_0, \end{equation*} for which we choose $C_1=-1 $ and $ C_0=2$, so that \begin{equation} \label{eq:section3A-blowup:5} \tau=\omega(t)=-e^{-2\varkappa t}+2. \end{equation} Note that $\omega $ maps $[0,\infty) $ onto $[1,2)$ in a one-to-one manner.
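The elementary properties of $\omega$ used here can also be verified symbolically. The following sketch (assuming the \texttt{sympy} library is available) checks the ODE $\omega''+2\varkappa\omega'=0$, the endpoints of the range $[1,2)$, and the coefficient identity $e^{-\varkappa t}/(\omega'(t))^2=e^{3\varkappa t}/(4\varkappa^2)$ that enters the next step of the proof.

```python
# Symbolic verification (sympy assumed available) of the change of variables
# omega(t) = 2 - exp(-2*kappa*t) used in Step 2.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
kappa = sp.symbols('kappa', positive=True)

omega = 2 - sp.exp(-2 * kappa * t)
omega_prime = sp.diff(omega, t)          # 2*kappa*exp(-2*kappa*t) > 0

# omega solves omega'' + 2*kappa*omega' = 0
ode_residual = sp.simplify(sp.diff(omega, t, 2) + 2 * kappa * omega_prime)

# omega(0) = 1 and omega(t) -> 2 as t -> infinity,
# so omega maps [0, infinity) one-to-one onto [1, 2)
left_end = omega.subs(t, 0)
right_end = sp.limit(omega, t, sp.oo)

# the coefficient identity behind inequality (eq:blow:9)
factor = sp.simplify(sp.exp(-kappa * t) / omega_prime**2
                     - sp.exp(3 * kappa * t) / (4 * kappa**2))

print(ode_residual, left_end, right_end, factor)
```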
Taking into account equation \eqref{eq:blow:2}, inequality \eqref{eq:blow:6-a} is equivalent to \begin{equation} \label{eq:blow:8a} \frac{d^2 G}{d\tau^2}\left(\omega^\prime\right)^2\geq e^{-\varkappa t} a_0(1+G)^3 \end{equation} or \begin{equation} \label{eq:blow:9} \frac{d^2 G}{d\tau^2}\geq \frac{e^{-\varkappa t}}{(2\varkappa e^{-2\varkappa t})^2} a_0(1+G)^3=\frac{e^{3\varkappa t}}{4\varkappa^2}a_0(1+G)^3 \geq\frac{ a_0}{4\varkappa^2}(1+G)^3. \end{equation} To simplify the notation, we write $\frac{d G}{d\tau}=G^\prime$ and $G^{\prime\prime}=\frac{d^2 G}{d\tau^2}$; then $G$ satisfies the initial value inequality \begin{subequations} \begin{align} \label{eq:blow:5} & G''\geq \frac{a_0}{4\varkappa^2}(1+G)^3 \\ \label{eq:blow:4} & G(1)=\widehat f_0, \quad G'(1)=\frac{\widehat g_0}{2\varkappa}. \end{align} \end{subequations} We are now in a position to show the blow up of $G$ at some $1<\tau_0<2$; consequently, $F$ will blow up at $t_0=\omega^{-1}(\tau_0)$. \vskip 5mm \item\label{item:section5-blowup:3} We will show that if $a_0$ is sufficiently large, then there exists $\tau_0<2$ such that \begin{equation} \label{eq:blow:7} \lim_{\tau \uparrow \tau_0} G(\tau)=\infty. \end{equation} Note that $F'>0$ holds in the existence interval, as was proven in \ref{item:section5-blowup:1}, and since $F'=G'\frac{d\omega}{d t}$ and $\omega^\prime>0$, we can conclude that $G'>0$ holds. Thus we can multiply inequality \eqref{eq:blow:5} by $G'$ and obtain \begin{equation*} G'' G'\geq \frac{a_0}{4\varkappa^2}(1+G)^3 G'. \end{equation*} Integrating both sides of this inequality from $1 $ to $\tau$, we conclude that \begin{equation*} \frac{1}{2}\left(\left(G'(\tau)\right)^2-\left(G'(1)\right)^2\right)\geq \frac{a_0}{16 \varkappa^2}\left(\left(1+G(\tau)\right)^4-\left(1+G(1)\right)^4\right).
\end{equation*} Taking into account the initial values \eqref{eq:blow:4}, we observe that \begin{equation} \begin{split} \label{eq:blow:14} \left(G'(\tau)\right)^2 &\geq \frac{a_0}{8\varkappa^2}\left(1+G(\tau)\right)^4+\left(\frac{\widehat g_0}{2\varkappa }\right)^2 -\frac{a_0}{8\varkappa^2}\left(1+\widehat f_0\right)^4\\ & =\frac{a_0}{8\varkappa^2} \left\{\left(1+G(\tau)\right)^4 - \left\{ \left(1+\widehat f_0\right)^4-\frac{2\widehat g_0^2}{a_0}\right\}\right\}. \end{split} \end{equation} The expression $(1+\widehat f_0)^4-\frac{2\widehat g_0^2}{a_0}$ is nonnegative by assumption \eqref{eq:section3A-blowup:9}, and in order to simplify the calculations we set \begin{equation*} \lambda^4\eqdef(1+\widehat f_0)^4-\frac{2\widehat g_0^2}{a_0}. \end{equation*} Now, since $G'(\tau)>0$, \begin{equation} (1+G(\tau))^4\geq (1+G(1))^4=(1+\widehat f_0)^4\geq \lambda^4, \end{equation} hence \begin{equation} G'(\tau)\geq \frac{\sqrt{a_0}}{\sqrt{8}\varkappa}\left\{ \left(1+G(\tau)\right)^4-\lambda^4\right\}^{\frac{1}{2}}\geq \frac{\sqrt{a_0}}{\sqrt{8}\varkappa}\left\{ \left(1+G(\tau)\right)^2-\lambda^2\right\}, \end{equation} or \begin{equation} \label{eq:blow:12} \frac{\sqrt{8}\varkappa}{\sqrt{a_0}}\dfrac{G'(\tau)}{ \left(1+G(\tau)\right)^2-\lambda^2}=\frac{\sqrt{8}\varkappa}{\sqrt{a_0}}\frac{ G'(\tau)}{2\lambda}\left(\dfrac{1}{ \left(1+G(\tau)\right)-\lambda}-\dfrac{1}{ \left(1+G(\tau)\right)+\lambda}\right) \geq 1. \end{equation} Integration of both sides of inequality \eqref{eq:blow:12} results in \begin{equation} \frac{\sqrt{2}\varkappa}{\lambda\sqrt{a_0}}\left\{\ln\left(\frac{ 1+G(\tau)-\lambda}{ 1+G(\tau)+\lambda}\right)-\ln\left(\frac{ 1+\widehat f_0-\lambda}{ 1+\widehat f_0+\lambda}\right)\right\}\geq \tau-1, \end{equation} or \begin{equation} \label{eq:blow:11} \ln\left(\frac{ 1+G(\tau)-\lambda}{ 1+G(\tau)+\lambda}\right)\geq \frac{\lambda\sqrt{a_0}}{\sqrt{2}\varkappa}(\tau-1)+\ln\left(\frac{ 1+\widehat f_0-\lambda}{ 1+\widehat f_0+\lambda}\right).
\end{equation} Note that, by inequality \eqref{eq:section3A-blowup:7}, $1+\widehat f_0-\lambda>0$, so inequality \eqref{eq:blow:11} is well defined. We set \begin{equation} \beta\eqdef\frac{1+\widehat f_0-{\lambda}}{1+\widehat f_0+{\lambda} }, \end{equation} then inequality \eqref{eq:blow:11} is equivalent to \begin{equation} \frac{1+G(\tau)-\lambda}{ 1+G(\tau)+\lambda}\geq e^{\frac{\lambda\sqrt{a_0}}{\sqrt{2}\varkappa}(\tau-1)}\beta, \end{equation} and from this inequality we get that \begin{equation} \label{eq:blow:10} {1+G(\tau)}\geq \dfrac{\lambda \left( e^{\frac{\lambda\sqrt{a_0}}{\sqrt{2}\varkappa}(\tau-1)}\beta+1\right)}{1-\beta e^{\frac{\lambda\sqrt{a_0}}{\sqrt{2}\varkappa}(\tau-1)}}. \end{equation} The right hand side of inequality \eqref{eq:blow:10} blows up at the time $\tau_0$ for which $\ln(1/\beta)= \frac{\lambda\sqrt{a_0}}{\sqrt{2}\varkappa}(\tau_0-1)$, or \begin{equation} \tau_0=\frac{\sqrt{2}\varkappa}{\lambda\sqrt{a_0}}\ln\left(\frac{1}{\beta} \right)+1. \end{equation} Thus blow up will occur at time $\tau_0$; however, in order for the blow-up time to be finite in $t$, we have to ensure that $ \tau_0<2$, that is, \begin{equation} \label{eq:blow:8} \frac{\sqrt{2}\varkappa}{\lambda\sqrt{a_0}}\ln\left(\frac{1}{\beta} \right)<1. \end{equation} So we now estimate this expression. Recall that \begin{equation} \lambda^4= (1+\widehat f_0)^4-\frac{2\widehat g_0^2}{a_0}=(1+\widehat f_0)^4\left(1-\frac{2\widehat g_0^2}{(1+\widehat f_0)^4a_0}\right). \end{equation} We set \begin{equation} \label{eq:blow:13} z=\frac{2\widehat g_0^2}{(1+\widehat f_0)^4a_0} \end{equation} and note that $z$ becomes smaller as $a_0$ grows. Hence \begin{equation} \lambda =(1+\widehat f_0)(1-z)^{\frac{1}{4}}=(1+\widehat f_0)\left(1-\frac{1}{4}z+o(z)\right), \end{equation} and \begin{equation} \frac{1}{\beta} =\frac{(1+\widehat f_0)+{\lambda}}{(1+\widehat f_0)-{\lambda} }=\frac{1+(1-z)^{\frac{1}{4}}}{1-(1-z)^{\frac{1}{4}}} =\frac{2-\frac{z}{4}+o(z)}{\frac{z}{4}+o(z)}=\frac{8}{z}-1+o(z).
\end{equation} We now express $a_0$ in terms of $z$ through relation \eqref{eq:blow:13}; then \begin{equation} \label{eq:blow:21} \frac{\sqrt{2}\varkappa}{\lambda\sqrt{a_0}}\ln\left(\frac{1}{\beta} \right)=\frac{\varkappa (1+\widehat f_0)\sqrt{z}}{(1-z)^{\frac{1}{4}}\widehat g_0}\left( \ln\left(\frac{8}{z}-1+o(z)\right)\right) \to 0, \quad \text{as} \ z\to 0. \end{equation} Consequently, inequality \eqref{eq:blow:8} holds for $a_0$ sufficiently large, and there exists $\tau_0<2$ such that equation \eqref{eq:blow:7} is true. \end{enumerate} \end{proof} \end{document} \section{Positivity of the scalar field function} \label{sec:positivity} The proof of the blowup of classical solutions presented in Section \ref{sec:blow-solut-large} depends on Jensen's inequality. This inequality states that \begin{equation} \label{eq:section6-positivity:1} \Phi\left(\int f d \mu\right)\leq \int \Phi\left(f \right) d \mu \end{equation} holds whenever $\Phi$ is a convex function and $\mu$ is a probability measure. In our case, the function $\Phi(s)=s^3$ is convex only if $ s\geq 0$. Hence, in order to apply \eqref{eq:section6-positivity:1}, we need to show that $ 1+u(t,x)\geq 0$. Since the proof of this inequality is somewhat lengthy and requires additional tools, we have moved it to a separate section. Another important reason for requiring the positivity $1+u(t,x)\geq 0$ is the following. The metric of the spacetime is given by $g_{\alpha\beta}=\phi^2\eta_{\alpha\beta}$, where $\phi(t,x)=e^{\varkappa t}(1+u(t,x))$. Hence, if $1+u(t,x)=0$, then the metric vanishes, and the solution has no physical meaning in that case. We recall that $u $ satisfies the initial value problem \eqref{eq:wave:1}--\eqref{eq:wave:2}, and the known existence theorem for semi-linear wave equations, \cite[Theorem 6.4.11]{Hormander_1997}, assures the existence and uniqueness of a $C^2 $ solution in a certain time interval $[0, T)$.
Our aim is to show that $ 1+u(t,x)\geq 0$ provided that $a(t,x)>0 $ and the initial data $ 1+u(0,x)=1+f(x)$ are positive. However, the condition $1+f(x)>0$ is not sufficient, and further conditions are needed. The question of which additional conditions to impose has also been discussed in other publications in which the solution of the linearized equation is required to be positive; see, for example, \cite{Caffarelli_Friedman_86} and the celebrated paper by John \cite{John_79}. We now present the main result of this section. Since our major interest here is the positivity, we assume in this section that the initial data $ f $ and $ g $ are sufficiently smooth. \begin{thm}[The positivity of $1+u(t,x)$] \label{thm:positivity} Assume $ u(t,x)$ is the unique $C^2 $ solution of the initial value problem \eqref{eq:wave:1}--\eqref{eq:wave:2} for $ t\in [0,T)$, $x\in \setT^3$ and for some positive $T$. Moreover, assume that \begin{align} \label{eq:section6-positivity:6} a(t,x)&>0 \\ \label{eq:positivity:5} 1+f(x)&>0, \quad g(x) \geq 0, \quad x\in \setT^3\\ \label{eq:positivity:6} \Delta f(x)&\geq 0, \quad x\in \setT^3. \end{align} Then \begin{equation} \label{eq:section6-positivity:4} 1+u(t,x)>0 \quad \text{ for } \ (t,x)\in [0,T)\times \setT^3. \end{equation} \end{thm} The proof of Theorem \ref{thm:positivity} is based on the application of Kirchhoff's formula for linear wave equations in $\setR^3$ \cite[\S 5]{john86:_partial}, and this is why we have to linearize equation \eqref{eq:wave:1}. We first transform equation \eqref{eq:wave:1}, which contains the damping term, into an appropriate form by setting $\phi(t,x)=e^{\varkappa t}(1+ u(t,x))$.
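The equivalence between the damped equation for $u$ and the field equation for $\phi$ under this substitution can be double-checked symbolically. The sketch below (assuming the \texttt{sympy} library is available) carries out the computation in one space dimension; the calculation is identical term by term on $\setT^3$, since the substitution does not involve the space variable.

```python
# Symbolic check (sympy assumed available): phi = exp(kappa*t)*(1+u) turns
#   u_tt + 2*kappa*u_t - u_xx = exp(-kappa*t)*a*(1+u)**3
# into
#   phi_tt - phi_xx = exp(-3*kappa*t)*a*phi**3 + kappa**2*phi.
# One space dimension is used for brevity.
import sympy as sp

t, x = sp.symbols('t x')
kappa = sp.symbols('kappa', positive=True)
u = sp.Function('u')(t, x)
a = sp.Function('a')(t, x)

phi = sp.exp(kappa * t) * (1 + u)

# residual of the field equation, before using the equation for u
field_residual = (sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
                  - sp.exp(-3 * kappa * t) * a * phi**3 - kappa**2 * phi)

# substitute u_tt from the damped wave equation
u_tt = (-2 * kappa * sp.diff(u, t) + sp.diff(u, x, 2)
        + sp.exp(-kappa * t) * a * (1 + u)**3)
residual = sp.simplify(field_residual.subs(sp.diff(u, t, 2), u_tt))

print(residual)
```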
We recall that $\phi $ satisfies the field equation \eqref{eq:Nordstrom:3} with the cosmological constant $\varkappa^2$; thus the initial value problem \eqref{eq:wave:1}--\eqref{eq:wave:2} is equivalent to \begin{subequations} \begin{align} \label{eq:positivity:1} \phi_{t t}-\Delta \phi&=e^{-3 \varkappa t} a(t, x) \phi^{3}+\varkappa^{2} \phi \\ \label{eq:positivity:18} \phi(0, x)&=1+f(x) \\ \label{eq:section6-positivity:2} \phi_{t}(0, x)&=h(x), \end{align} \end{subequations} where \begin{equation} h(x)\upperrel{\text{\rm def}}{=}\varkappa(1+f(x))+g(x). \end{equation} The linearization of \eqref{eq:positivity:1}, \eqref{eq:positivity:18} and \eqref{eq:section6-positivity:2} results in the system \begin{subequations} \begin{align} \label{eq:positivity:2} v_{tt} - \Delta v&=P(t, x) \\ \label{eq:positivity:22} v(0, x)&=1+f(x)\\ \label{eq:section6-positivity:3} v_{t}(0, x)&=h(x), \end{align} \end{subequations} where $P(t,x)$ denotes the linearization of the nonlinear right hand side of \eqref{eq:positivity:1}, whose precise form is of no importance. By Kirchhoff's formula (see e.g.~\cite[\S 5]{John_79}), the solution of the above system is given by \begin{equation} \label{eq:positivity:4} \begin{aligned} v(t, x)=& \frac{t}{4 \pi} \int_{|\xi|=1} h(x+t \xi) d \omega_\xi+\frac{\partial}{\partial t}\left(\frac{t}{4 \pi} \int_{|\xi|=1}(1+f(x+t\xi)) d \omega_\xi\right)\\ &+\frac{1}{4 \pi} \int_{0}^{t}(t-s)\left(\int_{|\xi|=1} P(s, x+(t-s) \xi) d \omega_\xi \right) d s, \end{aligned} \end{equation} where $ d \omega_\xi$ is the Lebesgue measure on the unit sphere $\mathbb{S}^2$. \begin{rem} In this paper, however, we deal with spatially periodic solutions, while Kirchhoff's formula \eqref{eq:positivity:4} provides solutions in $\setR^3$.
But if the initial data \eqref{eq:positivity:22} and \eqref{eq:section6-positivity:3} and the right hand side of \eqref{eq:positivity:2} are periodic, then it follows from \eqref{eq:positivity:4} that $v(t,x) $ is also a periodic function of the space variable $x$. \end{rem} \textsc{Proof sketch:} Our proof strategy can be described as follows: \begin{enumerate}[label=\arabic*.] \item We first show that the solution to the homogeneous initial value problem \eqref{eq:positivity:2}--\eqref{eq:section6-positivity:3} is positive if conditions \eqref{eq:positivity:5} and \eqref{eq:positivity:6} are met. \item We then construct a monotone sequence, $0<\phi_n\leq \phi_{n+1}$, by an iteration of the linearized equation. \item We show that whenever $\phi(t,x)>0$, then \begin{equation} \label{eq:skech:1} \phi_n(t,x)\leq \phi(t,x). \end{equation} We then use \eqref{eq:skech:1} to show that $\phi(t,x)>0 $ in the entire existence interval. \item \label{item:section6-positivity:3} Finally, the positivity of $\phi$ implies that $1+u(t,x)=e^{-\varkappa t}\phi(t,x)>0$. \end{enumerate} \subsection*{The iteration scheme} We denote the right hand side of \eqref{eq:positivity:1} by $G(\phi)$, that is, \begin{equation*} G(\phi)=G(\phi, t, x)=e^{-3 \varkappa t} a(t, x) \phi^{3}+\varkappa^{2} \phi. \end{equation*} Note that $G(0)=0$ and \begin{equation*} \frac{\partial}{\partial \phi} G(\phi)=3 e^{-3 \varkappa t} a(t, x) \phi^{2}+\varkappa^{2}. \end{equation*} Hence, if $a(t,x)>0 $, then $ G $ is an increasing function of $\phi$ and non-negative for $\phi\geq 0 $. We now let $\phi_0(t,x)=0$ and set $\phi_{n+1}$ to be the solution to the linear equation \begin{equation} \label{eq:positivity:8} \begin{cases} & \left(\phi_{n+1}\right)_{t t}-\Delta \phi_{n+1} =G\left(\phi_{n}\right) \\ & \phi_{n+1}(0, x) =1+f(x) \\ & (\phi_{n+1})_{t}(0, x) =h(x) \end{cases}.
\end{equation} \subsubsection*{\textsc{Step 1: The free wave equation.}} \label{sec:step-1:-free} The first step of the iteration consists of showing that the free wave equation \begin{equation} \label{eq:positivity:30} \begin{cases} & \left(\phi_{1}\right)_{t t}-\Delta \phi_{1} =G\left(0\right)=0 \\ & \phi_{1}(0, x) =1+f(x) \\ & (\phi_{1})_{t}(0, x) =h(x) \end{cases} \end{equation} has a positive solution. \begin{prop} \label{prop:6.1} Under conditions \eqref{eq:positivity:5} and \eqref{eq:positivity:6}, it follows that $\phi_1(t,x)>0$. \end{prop} \begin{proof} Taking the time derivative in Kirchhoff's formula \eqref{eq:positivity:4}, we observe that \begin{equation} \label{eq:section6-positivity:5} \phi_1(t, x) =\frac{t}{4 \pi} \int_{|\xi|=1} \left(h(x+t \xi)+ \xi\cdot \nabla f(x+t\xi)\right) d \omega_\xi+\frac{1}{4 \pi} \int_{|\xi|=1}(1+f(x+ t\xi) )d \omega_\xi. \end{equation} Applying now the Gauss divergence theorem to the term $ \xi\cdot \nabla f(x+t\xi)$, we obtain \begin{equation} \phi_1(t, x) =\frac{t}{4 \pi} \int_{|\xi|=1} \left(h(x+t \xi)\right) d \omega_\xi + \frac{t^2}{4 \pi} \int_{|\xi|\leq1} \Delta f(x+t \xi) d \xi +\frac{1}{4 \pi} \int_{|\xi|=1}(1+f(x+ t\xi) )d \omega_\xi. \end{equation} Hence conditions \eqref{eq:positivity:5} and \eqref{eq:positivity:6} imply that $\phi_1(t,x)>0$. \end{proof} \subsubsection*{\textsc{Step 2: Monotonicity.}} \label{sec:textscst-2:-monot} \begin{prop} \label{prop:monoton} Assume conditions \eqref{eq:section6-positivity:6}, \eqref{eq:positivity:5} and \eqref{eq:positivity:6} are satisfied. Then the sequence $\{\phi_n\}$ defined by \eqref{eq:positivity:8} is monotone; in other words, \begin{equation*} \phi_n(t,x)\leq \phi_{n+1}(t,x) \end{equation*} holds. \end{prop} \begin{proof} The proof proceeds by induction. We have already proved that $\phi_1(t,x)>\phi_0(t,x)\equiv 0$ holds. Assume $\phi_{n-1}\leq \phi_n$; then $G(\phi_{n-1})\leq G(\phi_n)$, since both functions are nonnegative and $G $ is increasing.
Hence, by Kirchhoff's formula \eqref{eq:positivity:4}, we observe that \begin{equation} \begin{aligned} \phi_{n+1}(t, x)=& \frac{t}{4 \pi} \int_{|\xi|=1} h(x+t \xi) d \omega_\xi+\frac{\partial}{\partial t}\left(\frac{t}{4 \pi} \int_{|\xi|=1}(1+f(x+t\xi)) d \omega_\xi\right)\\ &+\frac{1}{4 \pi} \int_{0}^{t}(t-s)\left(\int_{|\xi|=1} G(\phi_{n})(s, x+(t-s) \xi) d \omega_\xi \right) d s \\ & \geq \frac{t}{4 \pi} \int_{|\xi|=1} h(x+t \xi) d \omega_\xi+\frac{\partial}{\partial t}\left(\frac{t}{4 \pi} \int_{|\xi|=1}(1+f(x+t\xi)) d \omega_\xi\right)\\ &+\frac{1}{4 \pi} \int_{0}^{t}(t-s)\left(\int_{|\xi|=1} G(\phi_{n-1})(s, x+(t-s) \xi) d \omega_\xi \right) d s \\ & =\phi_n(t,x), \end{aligned} \end{equation} which finishes the proof. \end{proof} \textsc{Step 3:} It remains to show that $\phi(t,x)>0$ holds. \begin{prop} \label{porp:positive:2} Assume $f(x)$, $h(x) $ and $a(t,x)$ are periodic functions that satisfy the assumptions of Proposition \ref{prop:monoton}. Let $\phi $ be a $C^2$ solution to the initial value problem \eqref{eq:positivity:1}--\eqref{eq:section6-positivity:2} in the interval $[0,T)$. Then \begin{equation} \phi(t,x)>0, \quad t\in [0,T) \ \text {and all}\ x\in\setT^3. \end{equation} \end{prop} \begin{proof} We set \begin{equation} \label{eq:positivity:20} T^{*}=\sup \left\{T_{1}: \phi(t, x)>0 \ \text{ for } t \in\left[0, T_{1}\right) \text{ and all } x \in \setT^{3}\right\}. \end{equation} Since $1+f(x)>0$, by continuity, $0<T^\ast \leq T$. The proof consists essentially of two steps. In the first one we show that $\phi_n(t,x)\leq \phi(t,x)$ for $t\in [0,T^\ast)$, and in the second one we show that $T^\ast=T$. We prove the first step, again, by induction. The definition \eqref{eq:positivity:20} of $T^\ast$ implies that $0=\phi_0(t,x)<\phi(t,x) $ for $ t\in [0,T^\ast)$.
Now, assume $\phi_{n-1}(t,x)\leq \phi(t,x)$ for $ t\in [0,T^\ast)$; then, in a similar way to the proof of the monotonicity, Proposition \ref{prop:monoton}, we obtain that \begin{equation} \begin{aligned} \phi_{n}(t, x)=& \frac{t}{4 \pi} \int_{|\xi|=1} h(x+t \xi) d \omega_\xi+\frac{\partial}{\partial t}\left(\frac{t}{4 \pi} \int_{|\xi|=1}(1+f(x+t\xi)) d \omega_\xi\right)\\ &+\frac{1}{4 \pi} \int_{0}^{t}(t-s)\left(\int_{|\xi|=1} G(\phi_{n-1})(s, x+(t-s) \xi) d \omega_\xi \right) d s \\ & \leq \frac{t}{4 \pi} \int_{|\xi|=1} h(x+t \xi) d \omega_\xi+\frac{\partial}{\partial t}\left(\frac{t}{4 \pi} \int_{|\xi|=1}(1+f(x+t\xi)) d \omega_\xi\right)\\ &+\frac{1}{4 \pi} \int_{0}^{t}(t-s)\left(\int_{|\xi|=1} G(\phi)(s, x+(t-s) \xi) d \omega_\xi \right) d s \\ & =\phi(t,x). \end{aligned} \end{equation} The last equality follows from the fact that $\phi$ satisfies the initial value problem \eqref{eq:positivity:1}--\eqref{eq:section6-positivity:2}. We turn now to the second step and claim that $T^\ast=T$. If not, then $T^\ast<T $, and we will derive a contradiction. First, we note that there exists an $x_0\in\setT^3$ such that $\phi(T^\ast,x_0)\leq 0$. Otherwise, by continuity, there would exist $T^{\ast\ast}>T^\ast$ such that $\phi(t,x)>0$ for $t\in [0,T^{\ast\ast})$ and all $x\in\setT^3$, which obviously contradicts the definition of $T^\ast$ in \eqref{eq:positivity:20}. Now, using monotonicity and the first step, we conclude that \begin{equation} \label{eq:positivity:13} 0 <\lim _{t \rightarrow T^{*-}} \phi_{n}\left(t, x_{0}\right) \leq \lim _{t \rightarrow T^{*-}} \phi\left(t, x_{0}\right)=\phi\left(T^{*}, x_{0}\right) \leq 0 \end{equation} holds, which is the desired contradiction. \end{proof} Step \ref{item:section6-positivity:3} is obvious, and that completes the proof of Theorem \ref{thm:positivity}. \end{document}
\section{Introduction} \label{sec:intro} Predicting flow and transport processes in the subsurface is challenging, as the heterogeneous subsurface structure is usually not known. Heterogeneity can cause a broad distribution of transport time scales with short times for advective transport along fast paths and very long times for diffusive and advective transport in the zones with very low to zero flow velocity \citep{BS1997,Jardineetal99, Haggerty2000}. Depending on the subsurface structure, the full range of time scales can be important for scalar transport. Although the larger fraction of the mass might be transported fast, a substantial fraction can experience very large transport times, which might be crucial for applications such as contaminant remediation or recovery of substances. The range of transport time scales causes challenges for predictions. If the subsurface structure is known, numerical solutions of the transport equation in the domain can be derived. The computational effort is, however, very high, as the resolution of all relevant time and spatial scales is required. If the structure is not known, statistical approaches might be used, which increases the computational burden even more. Upscaled transport models are derived in order to allow for efficient predictions, where the detailed resolution is not required, but the effects of the non-resolved processes are captured in effective transport mechanisms. The derivation of upscaled transport equations has been pursued in the frameworks of volume averaging~\citep{brenner:book,Whitaker:book}, homogenization theory~\citep{Hornung:book}, and stochastic averaging~\citep{NEUMAN1993,CBH2002}, which can yield local or spatio-temporal non-local upscaled transport equations that typically rely on closure approximations. Such closure approximations often rely on the assumption of weak heterogeneity, or on the assumption that average transport is Fickian. 
Mobile-immobile mass transfer (MIM), matrix-diffusion and multi-rate mass transfer (MRMT) approaches derived for solute transport in highly heterogeneous media conceptualize the medium as a primary continuum and a suite of secondary continua \citep{Haggerty1995, Carrera1998}. The fastest domain carries the main transport, while the mass exchange with the other continua is described by a source term. The source term is formulated as a convolution of the concentration in the fast domain and a memory function. The memory function encodes the mass transfer processes between mobile and immobile domains. An overview of the terminology of mobile-immobile, multirate mass transfer and, in general, memory function models can be found in~\cite{Ginn2017}. The continuous time random walk (CTRW) approach for transport in highly heterogeneous media naturally accounts for broad distributions of transport time scales over characteristic length scales inherent to the medium structure~\citep{Berkowitz2006}. The information about small scale mass transfer and medium structure is contained in the transition time distribution. The phenomenology of mobile-immobile and CTRW approaches is similar in that both account for broad distributions of mass transfer time scales. In fact, the mathematical equivalence between the frameworks has been shown in the literature~\citep{Dentz2003, Schumeretal2003, BensonMeer2009, Russian2016, Comolli2016}. A crucial point for an upscaled model is predictability. A model is useful for applications if its parameters can be identified independently of specific settings. They should either be predictable from knowledge about material properties and specific transport parameters, or they should be transferable, meaning that parameters fitted from experimental observations in one setting remain valid in other settings.
Comparative studies of the predictive capabilities of different upscaling approaches and large scale models can be found in \cite{Frippiat2008},~\cite{Neuman2008}, \cite{Fiori2015}, \cite{Lu2018} and~\cite{Pedretti2018}. The parameterization of mobile-immobile models for the case that transport in the slow domains is dominantly diffusive has been studied in the past. There is a good understanding of the memory function and how parameters can be estimated based on diffusion coefficients and the geometry of the heterogeneous medium (or fractured medium) \citep{maloszewski1985,Carrera1998,Zhang2014,GMDC2008}. Oftentimes, both slow advection and diffusion are lumped into empirical memory functions based on parametric models like truncated power laws~\citep{Willmann2008}. It is not clear, however, whether slow advection can be represented in such a framework, and what the form and parameterization of the memory function would be. In general, both advective and diffusive transport are relevant for the scalar transport in the slow zones of a heterogeneous medium. The formulation of mobile-immobile mass transfer models in general requires a method to parameterize the memory functions for a combination of diffusive and advective transport. As mentioned above, in the MRMT framework the effects of advection and diffusion have been accounted for by phenomenological memory functions~\citep{Willmann2008}, and similarly, in the CTRW approach, the combined effect of diffusion and advection on solute travel times has been quantified by single-parametric transition time distributions~\citep{Berkowitz2000}. Volume averaging has been used as a systematic way to quantify transport and advective-diffusive mass transfer in bimodal media~\citep{Chastanet2008, Golfier2011, Davit2012}, which, however, typically leads to more or less complex closure problems.
Closure approximations can be based on weak heterogeneity~\citep{Golfier2011}, or on the introduction of time-dependent mass transfer coefficients~\citep{Chastanet2008}. The impact of advective mass transfer between slow and fast medium portions can be systematically assessed by studying purely advective transport in highly heterogeneous media. Thus, in order to investigate the mechanisms of advective trapping in heterogeneous porous media, we focus here on structures characterized by a background-inclusion pattern. The simplest model for such a structure is a 2D medium with circular inclusions. \citet{Eames1999} studied advective transport in a background-inclusion field with a bimodal conductivity distribution. These authors consider regular as well as random arrangements of the inclusions. They demonstrate that the macrodispersion coefficient derived for transport in such media with permeable inclusions diverges in the limit of vanishing inclusion/matrix permeability ratio. If, on the other hand, the transport coefficient is calculated for impermeable inclusions, a finite macrodispersion coefficient is obtained. This observation indicates that the concept of hydrodynamic dispersion is not adequate to describe transport in the case of very low permeability ratios. \cite{Rubin1995} develops perturbation theory expressions for time-dependent dispersion coefficients in bimodal media. \cite{Dagan2003} and \cite{Fiori2003} study time-dependent apparent dispersion in a similar bimodal setup as \cite{Eames1999} using a Lagrangian approach combined with a self-consistent effective medium approximation. \cite{Fiori2006}, \cite{Fiori2007} and \cite{tyukhovactrw2016} study transport in composite media characterized by Gaussian and non-Gaussian distributions of the logarithm of hydraulic conductivity.
\cite{Fiori2006} and \cite{Fiori2007} derive semi-analytical expressions for particle travel times in order to map the conductivity distribution on solute breakthrough curves. \cite{tyukhovactrw2016} use a kinematic relationship to relate the advection time over a single inclusion to its conductivity as the basis for a CTRW model to predict solute breakthrough curves. While these approaches provide the methodology to construct upscaled expressions for solute breakthrough curves, they do not provide evolution equations for the average solute concentrations. \cite{Silliman1987}, \cite{Murphy1997} and \cite{Levy2003Measurement} observed non-Fickian behaviors for the breakthrough in tank experiments characterized by low conductivity inclusions embedded in a sandy matrix. \cite{Berkowitz2000} modeled the tailing behaviors observed in these experiments using a CTRW approach, whose parameters were estimated from the observed breakthrough curves. \cite{Ginn2001} use a stochastic-convective streamtube approach to model aerobic biodegradation in a column experiment with a bimodal medium structure. \citet{Zinn2004} carried out tank experiments with a bimodal medium structure and derived an upscaled model to describe the observed breakthrough curves. For advection-dominated transport in background and inclusions, the authors use a streamtube model; for diffusion-dominated transport in the inclusions, a matrix diffusion model. As shown in our paper, in the case of randomly distributed inclusions, the streamtube model breaks down for large scale advective transport because individual streamlines sample a random number of inclusions that can be characterized by a Poisson distribution. In this paper we derive an upscaled model for advective transport in a bimodal 2D medium with randomly placed circular inclusions.
The methodology is based on a Lagrangian approach that allows us to identify and quantify the stochastic rules of advective particle motion in disordered media. Similar approaches have been used in previous works for the analysis and upscaling of pore-scale transport~\citep{morales2017, Puyguiraud2019a} and for transport in multi-Gaussian hydraulic conductivity fields~\citep{Hakoun2019} and fractured media~\citep{Hyman2019}. Here, we use a Lagrangian approach to gain understanding of the stochastic principles of transport in random composite media through the analysis of advective trapping events in low conductivity inclusions, and of the distribution of flow speeds sampled between them. This analysis facilitates the formulation of upscaled transport as a multi-trapping model. This is considered a first step towards a mobile-immobile mass transfer model of transport in highly heterogeneous media that includes advection and diffusion in the whole domain and that allows for parameter predictions based on a given structure. In Section II we introduce the flow and transport model used. In Section III we discuss the transport behavior of three types of media: a single inclusion, a periodic packing of inclusions and a random packing of inclusions. In Section IV we present the upscaled model derived for the random packing, and we give some conclusions in Section V. \section{Flow and transport model}\label{sec:flowtpt} \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig1a.pdf} \includegraphics[width=.9\textwidth]{./Fig1b.pdf} \caption{Flow and transport domain and streamlines of the Darcy flow $\mathbf q({\mathbf x})$ for (top) regular and (bottom) random packing, considering no flow boundary conditions at the top and bottom boundaries. Streamlines that cross at least one inclusion are green. Red streamlines do not go through any inclusion. In the regular medium streamlines either cross all the inclusions in a horizontal row or none of them.
In the random medium almost all streamlines cross at least one inclusion. \label{fig:rnd}} \end{figure} We consider flow and transport in a 2D medium characterized by a binary distribution of hydraulic permeabilities, where the material with high permeability $K_m$ is connected (background material or matrix), while the material with low permeability $K_i$ is disconnected (inclusions). For simplicity we assume that the inclusions have a circular shape of radius $r_{0}$ and can be regularly or randomly arranged (see Figure~\ref{fig:rnd}). Packings are characterized by the domain size $L_{x} \times L_{y}$, the inclusion radius, and the covered volume fraction. In regular packings the inclusions needed to cover the desired area are placed on a uniform equispaced grid inside the domain (Figure~\ref{fig:rnd} top). Random packings are generated by drawing the center coordinates from a uniform distribution and discarding inclusions that overlap previously placed ones. The algorithm stops when the desired volume fraction is covered. This method generates arrangements with an exponential distribution of distances between inclusions (Figure~\ref{fig:rnd} bottom). \subsection{Flow} We consider steady state flow through the medium described by the Darcy equation \begin{align} \label{eq:flow} \mathbf q({\mathbf x}) = - K({\mathbf x}) \nabla h({\mathbf x}), \end{align} where $K({\mathbf x})$ is the hydraulic conductivity, $h$ is the piezometric head, and $\mathbf{q}$ is the Darcy velocity. Both medium and fluid are assumed to be incompressible, which implies that $\nabla \cdot \mathbf q({\mathbf x}) = 0$. A constant flow rate $q_0$ is imposed on the left domain boundary and a constant head on the right one, so that the mean flow direction is along the $x$ axis. \subsubsection{Flow distribution} We briefly discuss here the flow distribution between the matrix and the inclusions, which will give us some information to analyze the transport in the following sections.
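The random packing procedure described above amounts to rejection sampling of inclusion centers. The following minimal sketch is our own illustrative reconstruction (function and parameter names are assumptions, not from the original study); note that very high target fractions approach the random sequential adsorption jamming limit, where such a loop may fail to terminate:

```python
import math
import random

def random_packing(Lx, Ly, r0, chi_target, seed=0):
    """Place non-overlapping circular inclusions of radius r0 by rejection
    sampling: draw candidate centers uniformly and discard those that
    overlap previously placed inclusions, until the target volume
    fraction chi_target is covered."""
    rng = random.Random(seed)
    # number of inclusions needed to cover the target volume fraction
    n_target = int(chi_target * Lx * Ly / (math.pi * r0**2))
    centers = []
    while len(centers) < n_target:
        x = rng.uniform(r0, Lx - r0)
        y = rng.uniform(r0, Ly - r0)
        # accept only if the candidate overlaps no existing inclusion
        if all((x - cx)**2 + (y - cy)**2 >= (2 * r0)**2 for cx, cy in centers):
            centers.append((x, y))
    return centers
```

Because accepted centers are otherwise uniform, the gaps between neighboring inclusions along the flow direction come out approximately exponentially distributed, consistent with the construction described above.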
For a single inclusion embedded in an infinite matrix, flow inside the inclusion is uniform and aligned with the mean flow direction. The ratio between the flow velocity in the inclusion and the undisturbed background flow velocity is given by~\citep{Wheatcraft1985} \begin{align} \label{eq:qin} \frac{q_{in}}{q_0} = \frac{2 \kappa}{1+\kappa}, \end{align} where $\kappa = K_i/K_m$ is the conductivity ratio. For the composite media under consideration here, flow inside the inclusions is in general not perfectly uniform and is not aligned with the mean flow direction, as shown in Figure~\ref{fig:rnd}. To estimate the background flow velocity for the small conductivity ratios under consideration, flow through the inclusions may be disregarded compared to the flow through the matrix. Thus, we can approximate the average flow velocity in the background as \begin{align}\label{eq:vm-chi} q_m = \frac{q_0}{1 - \chi}, \end{align} where $\chi$ denotes the volume fraction of the inclusions. \subsection{Transport} We consider purely advective transport, which is governed by the following Liouville equation for the concentration $c({\mathbf x},t)$ \begin{align} \label{eq:adv} \frac{\partial c({\mathbf x},t)}{\partial t} + \mathbf v({\mathbf x}) \cdot \nabla c({\mathbf x},t) = 0, \end{align} where $\mathbf v({\mathbf x}) = \mathbf q({\mathbf x}) / \phi$. For simplicity, the porosity $\phi$ is assumed to be constant in this work. Note that this is generally not true, particularly for geological media. However, porosity in different materials varies typically between $0.05$ and $0.4$, and this variation is much lower than that of hydraulic conductivity, which may vary over several orders of magnitude between different materials \citep{Bear1972}. Solute is initially uniformly distributed along a line, $c({\mathbf x},t = 0) = c_0 \delta(x)$. The transport problem is solved in a Lagrangian framework.
The equation of motion for the position ${\mathbf x}(t;\mathbf a)$ of a fluid particle is \begin{align} \frac{d {\mathbf x}(t;{\mathbf a})}{dt} = \mathbf v[{\mathbf x}(t;{\mathbf a})], \end{align} with ${\mathbf x}(t=0;{\mathbf a}) = {\mathbf a}$. The distribution of initial positions is $\rho({\mathbf a}) = \delta(a_1)$. In the following, transport will be analyzed in terms of the arrival time distribution of particles at increasing distances from the inlet. For a medium with impermeable inclusions, macrotransport can be described by the advection dispersion equation (ADE) \begin{align} \label{eq:ADE} \frac{\partial \overline c(x,t)}{\partial t} + v_a \frac{\partial \overline c(x,t)}{\partial x} - D_{a} \frac{\partial^2 \overline c(x,t)}{\partial x^2} = 0, \end{align} where $v_a$ is the apparent velocity and $D_{a}$ the apparent dispersion coefficient. For the condition of a low density of inclusions, i.e. $\chi \ll 1$, \citet{Eames1999} report \begin{align}\label{eq:EamesImpervious} D_{a} = 0.74 \chi v_{0} r_{0} \end{align} for impermeable inclusions and \begin{align}\label{eq:EamesLimitZero} D_{a} = \frac{8}{3\upi\kappa} \chi v_{0} r_{0} \end{align} in the limit of $\kappa \to 0$. The distribution of arrival times at a position $x_{c}$ for an instantaneous injection into the flux at $x = 0$ is given by~\cite{Ogata1961} and \cite{KreftZuber1978} \begin{align} \label{eq:f} f(t,x_{c}) = \frac{x_{c} \exp\left[- (x_{c}-v_{a} t)^2/4 D_{a} t \right]}{\sqrt{4 \upi D_{a} t^3}}. \end{align} For the complementary cumulative arrival time distribution, we obtain accordingly \begin{align} \label{eq:ade_analytic} F(t,x_{c}) = \int\limits_t^\infty dt' f(t',x_{c}) = 1-\frac{1}{2} \left[\text{erfc}\left(\frac{x_{c} - v_{a} t}{\sqrt{4 D_{a} t}}\right) + \exp\left(\frac{x_{c} v_{a}}{D_{a}}\right)\text{erfc}\left(\frac{x_{c}+v_{a}t}{\sqrt{4 D_{a} t}}\right)\right]. \end{align} We use these solutions in the following as references for the observed arrival time distributions.
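The reference solutions \eqref{eq:f} and \eqref{eq:ade_analytic} are straightforward to evaluate numerically. The following sketch is our own illustrative code, not part of the original study (function names are ours); it uses only the standard library:

```python
import math

def f_arrival(t, xc, va, Da):
    """First-passage time density for an instantaneous flux injection:
    f(t, x_c) = x_c exp(-(x_c - v_a t)^2 / (4 D_a t)) / sqrt(4 pi D_a t^3)."""
    return (xc * math.exp(-(xc - va * t)**2 / (4.0 * Da * t))
            / math.sqrt(4.0 * math.pi * Da * t**3))

def F_arrival(t, xc, va, Da):
    """Complementary cumulative arrival time distribution F(t, x_c):
    the fraction of particles that have not yet reached x_c at time t.
    Note: exp(x_c v_a / D_a) may overflow for very large Peclet numbers."""
    s = math.sqrt(4.0 * Da * t)
    return 1.0 - 0.5 * (math.erfc((xc - va * t) / s)
                        + math.exp(xc * va / Da) * math.erfc((xc + va * t) / s))
```

As a sanity check, $-\partial F/\partial t = f$ by construction, which can be verified with a central finite difference.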
Furthermore, we estimate the apparent velocity $v_{a}$ and apparent dispersion coefficient $D_{a}$ from the mean $m_{b}$ and variance $\sigma_{b}^{2}$ of the breakthrough time by using the Fickian relations \begin{align} \label{eq:vD} v_{a} = \frac{x_{c}}{m_{b}}, && D_{a} = \frac{v_{a}^{3}\sigma_{b}^{2}}{2 x_{c}}. \end{align} \subsection{Numerical model}\label{sec:Numerics} The rectangular domain of size $L_{x} \times L_{y}$ is discretized using square cells of side $\Delta = r_{0}/30$. This discretization ensures that the circular shape of the inclusions is well reproduced. To avoid boundary effects, the horizontal dimension is extended by a buffer length $\lceil 4r_{0} \rceil$ equally distributed between the left and right boundaries. Steady state flow \eqref{eq:flow} is solved using a two-point flux finite volume scheme. Uniform velocity $v_{0}$ is prescribed on the left boundary and head on the right boundary. The top and bottom boundaries are periodic. Velocity is calculated on the cell sides. The advection equation \eqref{eq:adv} is integrated using the semi-analytical method of Pollock~\citep{Pollock1988}. At the beginning of the simulation $N_{p} = 10^{6}$ particles are uniformly distributed along the left boundary. The buffer between the boundary and the first inclusions ensures that the flow is uniform and the streamlines are parallel at the inlet. Therefore the uniform distribution of particles is equivalent to a flux-weighted injection. The simulation runs until all particles leave the domain. Streamlines and, equivalently, particle trajectories are illustrated in Figure~\ref{fig:rnd}. Results are reported in dimensionless units. We choose as characteristic length the side $\ell_{c}$ of the unit cell of a regular arrangement with inclusions of radius $r_{0}$ that covers a volume fraction $\chi$ (Figure~\ref{fig:inclusion}). That is, $\ell_{c}= r_{0}\sqrt{\upi/\chi}$.
The characteristic time is $\tau_{c} = \ell_{c}/v_{0}$, so that a dimensionless time of one is required to traverse the unit cell at the prescribed velocity. The time needed to travel through the buffer area is subtracted from the results. \section{Transport behavior} We study the transport behavior in media with random arrangements of inclusions. Transport is characterized by the travel time of the particles in terms of the breakthrough curve or, equivalently, by the complementary cumulative breakthrough curve at control planes. We also analyze the distribution of trapping events (i.e., the number of inclusions that a particle is transported through before arriving at the control plane) and the velocity distribution inside the inclusions. The velocity in the background material does not vary much. However, the tortuosity of the flow paths leads to an enhanced spreading of the particles, as discussed for macrodispersion. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig2.pdf} \caption{Sketch of the setup for a unit cell of size $\ell_{c}$ containing a single inclusion of radius $r_{0}$. Only particles in the segment $b$ enter the inclusion.} \label{fig:inclusion} \end{figure} \subsection{Single inclusion} We consider first the case of a medium in which there is only one inclusion of low permeability (see Figure~\ref{fig:inclusion}). We analyze the residence time of the particles within the inclusion and its relation with the breakthrough curve. \subsubsection{Residence times} The residence time distribution in a single inclusion in an infinite domain is obtained by purely geometrical considerations as follows. The flow field within an isolated single inclusion is constant. Since the streamlines inside the inclusion are parallel, the particles that go through it are uniformly distributed over the vertical diameter. This means that the vertical particle position is uniformly distributed in $\left[-r_{0},r_{0}\right]$.
The position $h$ of a particle on the vertical diameter of the inclusion is $h = r_{0} \sin \alpha$. Therefore, we obtain the angle $\alpha$ at which the particle entered the inclusion as \begin{align} \alpha(h/r_{0}) = \arcsin{(h/r_{0})}. \end{align} From this we obtain the angular distribution that corresponds to the uniform particle distribution as \begin{align} p_\alpha(\alpha) = \cos{\alpha}. \end{align} The length of the chord traversed by the particle is $l_{i}(\alpha) = 2 r_{0} \cos{\alpha}$, whose distribution $p_{l_{i}}(l_{i})$ is obtained from $p_\alpha(\alpha)$ as \begin{align} p_{l_{i}}(l_{i}) = p_\alpha[\arccos{(l_{i}/2r_{0})}] \left.\left|\frac{d l_{i}(\alpha)}{d\alpha}\right|^{-1}\right|_{\alpha = \arccos{(l_{i}/2r_{0})}}. \end{align} Thus, \begin{align} \label{eq:p_li} p_{l_{i}}(l_{i}) = \frac{l_{i}}{4 r_{0}^{2}} \frac{1}{\sqrt{1 - (l_{i}/2r_{0})^2}}, \end{align} and the distribution of transition times $t = l_{i}/v_{in}$ is given by \begin{align} \label{eq:1incl-res-times} \psi(t) = \frac{t}{\tau_{in}^2} \frac{1}{\sqrt{1 - (t / \tau_{in})^2}}, \end{align} where $\tau_{in} = 2r_{0}/v_{in}$ is the maximum advection time across the inclusion. The comparison between~\eqref{eq:1incl-res-times} and residence times obtained numerically is shown in Figure~\ref{fig:1incl-res-times}. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig3.pdf} \caption{Comparison between the simulated (dots) and analytical (solid line, \eqref{eq:1incl-res-times}) residence time distribution of particles traveling through a single inclusion ($\chi =0.01$, $\kappa = 0.1$).}\label{fig:1incl-res-times} \end{figure} \subsubsection{Fraction of particles traversing the inclusion} The fraction of particles entering over a length $\ell_c$ that traverses the inclusion is obtained from flux conservation.
The size $b$ of the streamtube in the matrix passing through the inclusion is obtained from \begin{align} 2 r_{0} v_{in} = b v_m. \end{align} Thus, the fraction of the flux that goes through the inclusion can be written as \begin{align} \label{eq:a0} a_0 = \frac{b}{\ell_{c}} = \frac{2 r_0 v_{in}}{\ell_{c}v_m} = \frac{4 r_0}{\ell_c} \frac{\kappa}{1 + \kappa}, \end{align} where we used expression~\eqref{eq:qin}. \subsubsection{Breakthrough curves} The breakthrough and the complementary cumulative breakthrough curves for the single inclusion are shown in Figure~\ref{fig:1incl-cbtc}. The first arrival occurs at $t=1$, which corresponds to the time needed to go through the unit cell. Some of the streamlines are bent by the presence of the low permeability inclusion, causing the peak to widen. The rest of the curve reflects the effect of the low permeability inclusion, with a breakthrough curve (Figure~\ref{fig:1incl-cbtc}~right) that follows the residence time distribution calculated above. \begin{figure} \centering \includegraphics[width=.45\textwidth]{./Fig4a.pdf} \includegraphics[width=.45\textwidth]{./Fig4b.pdf} \caption{Breakthrough curve (a) and complementary cumulative breakthrough curve (b) for a system containing one inclusion ($\chi =0.01$, $\kappa = 0.1$) measured at a distance $\ell_{c}$ from the inlet. The curves show an early arrival of particles that only travel through the matrix and a long tail formed by the particles traversing the inclusion at different heights.}\label{fig:1incl-cbtc} \end{figure} Transport through a single inclusion can be conceptualized as a streamtube model with two types of streamtubes. In one of them, a fraction $a_{0}$~\eqref{eq:a0} of the particles is transported through the inclusion, while in the other one particles are transported only through the matrix. This conceptual model can be extended to regular packings whose unit cell contains only one inclusion.
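The single-inclusion results above lend themselves to a quick numerical check. The sketch below is our own illustrative code (not from the original study): it evaluates the velocity ratio \eqref{eq:qin} and the flux fraction \eqref{eq:a0}, and draws residence times by sampling the uniform entry height on the vertical diameter:

```python
import math
import random

def inclusion_velocity(v0, kappa):
    """Velocity inside an isolated inclusion, v_in = 2 kappa v0 / (1 + kappa)."""
    return 2.0 * kappa * v0 / (1.0 + kappa)

def flux_fraction(r0, lc, kappa):
    """Fraction a0 of particles per unit cell of size lc that traverse
    the inclusion: a0 = (4 r0 / lc) * kappa / (1 + kappa)."""
    return 4.0 * r0 / lc * kappa / (1.0 + kappa)

def sample_residence_times(r0, v_in, n, seed=0):
    """Monte Carlo residence times: the entry height h is uniform on
    [-r0, r0], the chord length is 2 sqrt(r0^2 - h^2), and t = chord / v_in."""
    rng = random.Random(seed)
    times = []
    for _ in range(n):
        h = rng.uniform(-r0, r0)
        times.append(2.0 * math.sqrt(r0**2 - h**2) / v_in)
    return times
```

The sample mean of the residence times should approach the analytic mean of \eqref{eq:1incl-res-times}, which is $\upi \tau_{in}/4$ with $\tau_{in} = 2 r_{0}/v_{in}$, and no sampled time can exceed $\tau_{in}$.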
In this case particles either travel through all the inclusions in the streamtube or through none of them (see Figure~\ref{fig:rnd}). The travel times inside each streamtube are distributed due to the tortuosity of the streamlines. For regular packings the streamlines differ from the single inclusion case because of the finite size of the unit cell, which enforces a straight streamline at its boundary. Based on the conceptual model of two streamtubes, and considering that the inclusions are much less permeable than the background, the breakthrough curve is characterized by two distinct pulses caused by transport in the streamtubes without and with inclusions (Figure~\ref{fig:1incl-cbtc}). Note that the transition is continuous, as the outermost streamlines of the two streamtubes coincide. For short media, the periodic medium could be a useful approximation to predict breakthrough curves. \citet{Zinn2004} carried out experiments of solute transport in two-dimensional glass bead packs, where circular inclusions were randomly placed into a less permeable background. They used the streamtube approach to predict breakthrough curves for the advection-dominated case. As the approximate solution is based on one single inclusion, periodicity is inherently assumed. In the measured breakthrough curves, the double breakthrough behavior is very clear, and their streamtube approach was shown to reproduce the breakthrough curves well (see also next subsection). \subsection{Random packings}\label{sec:rp} We consider now random packings of inclusions generated as explained in Section~\ref{sec:flowtpt}. First we consider media of different sizes ($3 \le L_{x}/\ell_{c} \le 500$; $1 \le L_{y}/\ell_{c} \leq 105$) and covered volume fractions ($0.1 \le \chi \leq 0.55$), in which we study the velocity distribution in the matrix and inclusions, the trapping events experienced by particles and the trapping time distribution.
\begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig5.pdf} \caption{Mean velocity distribution inside the inclusions (symbols) for media with varying volume fraction and inclusion size at constant $\kappa = 0.01$. The solid lines show log-normal fits to all data with the same $\chi$. The base case geometry is $L_{x}=49.7\ell_{c}$, $L_{y}=2.5\ell_{c}$, and $\chi=0.1$. The rest of the cases keep the same domain proportions. }\label{fig:vel-dist} \end{figure} Then we study the behavior of breakthrough curves. First we explore further the streamtube model using the geometry of~\citet{Zinn2004}. Next we consider two scenarios, a long medium ($387\ell_{c} \times 3.8\ell_{c}$) in which transport is analyzed as the distance from the inlet increases, and a wide medium ($84\ell_{c} \times 28\ell_{c}$) in which the effect of the length of the line along which solute is injected is studied. \subsubsection{Velocity distribution\label{sec:vpdf}} The velocity inside isolated regularly arranged inclusions is approximately constant under low-density conditions, i.e., for $\chi \ll 1$. For increasing $\chi$ in random packings this is in general not the case, and flow velocities vary within and between inclusions. We characterize the inclusions by their mean velocities, and study their distributions $p_v(v)$ as a function of volume fraction $\chi$. Figure~\ref{fig:vel-dist} shows the distributions of inclusion velocities for different volume fractions and inclusion sizes. We observe that the distribution of mean velocities can be well approximated by a log-normal distribution. Consistent with \eqref{eq:qin} and \eqref{eq:vm-chi}, the mean velocity of the distribution is independent of the inclusion size and depends only on the volume fraction $\chi$ for constant $\kappa$. The distribution becomes narrower with decreasing $\chi$.
In fact, in the limit of low density of inclusions, $p_v(v)$ should converge to the Delta distribution $p_v(v) = \delta(v-v_{in})$, where $v_{in}$ is the constant velocity in a single isolated inclusion. The velocity in the matrix (Figure~\ref{fig:vel-mat}) increases with the volume fraction $\chi$ according to relation \eqref{eq:vm-chi} until the covered volume fraction becomes so high that the percolation threshold is approached and the hypothesis that flow through the inclusions is negligible compared to the flow through the matrix is no longer valid. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig6.pdf} \caption{Mean velocity in the matrix versus volume fraction occupied by the inclusions. The solid line corresponds to the solution~\eqref{eq:vm-chi} for isolated inclusions. Dot colors correspond to the medium length.}\label{fig:vel-mat} \end{figure} \subsubsection{Trapping events} The number of trapping events experienced by a particle in random media is not restricted to two values, as it is in regular packings. As the inclusions are randomly distributed in space, the distance between them is approximately exponentially distributed; in other words, the number of inclusions that may be encountered within a given distance follows a Poisson process~\citep{Feller1968}. In fact, we find that the statistics of the number $n_{tr}$ of trapping events within a travel distance $\ell$ can be described by the Poisson distribution \begin{align}\label{eq:trap-poisson} p(n_{tr},\ell) = \frac{e^{-k\ell}\left(k \ell\right)^{n_{tr}}}{n_{tr}!}. \end{align} An example of the trapping events distributions is shown in Figure~\ref{fig:trap-events-Lx}. It can be seen that the distribution of trapping events evolves as the particles sample the medium. At short distances from the inlet the distribution is narrow, suggesting that for a small medium the streamtube approximation could be sufficient to explain transport.
As the distance from the inlet increases, the distribution widens and the probability of not being trapped decreases. At a sufficient travel distance all particles experience at least one trapping event and the distribution converges to a Poisson distribution. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig7.pdf} \caption{Distribution of the number of trapping events (symbols) at different distances $x_{c}$ from the inlet for an arrangement of inclusions ($L_{x}=387\ell_{c}$, $L_{y}=3.87\ell_{c}$, $\kappa = 0.01$, $\chi=0.3$; same as in Figure \ref{fig:rndLx}). The solid lines are fits to a Poisson distribution.}\label{fig:trap-events-Lx} \end{figure} The trapping rate $k$, that is, the number of trapping events per traveled distance, which characterizes the Poisson distribution, depends on the geometry of the arrangement. To assess this dependence we performed a series of simulations varying the medium geometry (radius, length, width, and area covered by the inclusions). The average distance between the inclusions $d$ was computed with the following algorithm. First, we take the lines between all pairs of inclusions' centers that do not intersect another inclusion. Then, for every pair of lines that intersect, the shorter one is kept. Finally, the average length of the remaining lines is calculated. As shown in Figure~\ref{fig:trap-freq}, the trapping rate is inversely proportional to the average distance between the inclusions $d$, expressed in terms of the unit cell size $\ell_{c}$. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig8.pdf} \caption{Trapping rate (parameter of the Poisson distribution) versus mean distance between inclusions. Point colors correspond to the covered volume fraction $\chi$.}\label{fig:trap-freq} \end{figure} \subsubsection{Distribution of trapping times} The trapping time distribution is obtained numerically from the residence time distribution in a single inclusion \eqref{eq:1incl-res-times}.
The distribution of trapping times in the following is denoted by $\psi_{f}(t)$. It can be constructed from the distribution $\psi(t|v)$ of trapping times for a given inclusion velocity, and the distribution $p_v(v)$ of velocities as \begin{align} \label{eq:traptimesdist} \psi_f(t) = \int\limits_0^\infty d v \, p_v(v) \psi(t|v). \end{align} Figure~\ref{fig:trap-times} compares the distribution of trapping times obtained from the direct numerical simulations and the model~\eqref{eq:traptimesdist}. \begin{figure} \centering \includegraphics[width=.9\textwidth]{./Fig9.pdf} \caption{Comparison between the theoretical trapping time distribution given by~\eqref{eq:traptimesdist} (black line) and the numerical results for media with $L_{y}=3.87\ell_{c}$, $\kappa = 0.01$, $\chi=0.3$, and different lengths (symbols).}\label{fig:trap-times} \end{figure} \subsubsection{Breakthrough curves} We consider first a medium based on the inclusion geometry of \citet{Zinn2004}. This medium has $\chi = 0.37$, $L_{x}= 10.8\ell_{c}$, and $L_{y} = 5.4\ell_{c}$. We consider an intermediate permeability ratio scenario with $\kappa= 0.01$. The breakthrough curve (Figure~\ref{fig:zinn-btc}) is affected by the random arrangement of inclusions. However, we can distinguish the contributions of particles that experience different numbers of trapping events. Given the small size of the domain and the low number of inclusions, particles experience only a few trapping events and most of them travel through the domain without entering any inclusion. Based on this phenomenology, \citet{Zinn2004} used a streamtube approach in order to model the breakthrough curves observed in their experiment. Their approach identified a streamtube passing only through the matrix and a second streamtube that passes through a constant number of inclusions.
This approach is not valid in a large medium characterized by a random arrangement of inclusions because streamlines may pass through random numbers of inclusions, as discussed below. \begin{figure} \centering \includegraphics[width=.49\textwidth]{./Fig10a.pdf} \includegraphics[width=.49\textwidth]{./Fig10b.pdf} \caption{Breakthrough curve (a) and complementary cumulative breakthrough curve (b) for a random medium with a geometry as in \citet{Zinn2004} ($L_{x}= 10.8\ell_{c}$, $L_{y} = 5.4\ell_{c}$, $\kappa= 0.01$, and $\chi = 0.37$). The dashed lines correspond to the analytical solution~\eqref{eq:ade_analytic} where $D_{a}$ and $v_{a}$ are obtained from the mean and variance of the breakthrough time.}\label{fig:zinn-btc} \end{figure} Next we consider a long and narrow medium ($387\ell_{c} \times 3.8\ell_{c}$), where particles can travel through a larger number of inclusions. As shown in Figure~\ref{fig:rndLx} the shape of the breakthrough curves depends on the traveled distance, that is, the amount of medium heterogeneity sampled. The curves become smoother as the distance from the inlet increases. For short distances (Figure~\ref{fig:rndLx}~a and d) the first part of the curve is dominated by the dispersion between the streamlines along the fast paths, and the tail of the curve by the streamlines going through the inclusions, as in the case of the geometry of \citet{Zinn2004}. For a sufficiently long distance from the inlet (Figure~\ref{fig:rndLx}~b,~c, e, f), the shapes of the breakthrough curves suggest that the peak and tail behavior can be modeled by an effective hydrodynamic dispersion coefficient. The apparent center of mass velocity and dispersion coefficients are obtained from the breakthrough data according to~\eqref{eq:vD}. Their values are given in Table~\ref{tab:velDisp}. The average velocity fluctuates little, and is close or equal to the velocity set by the flow boundary condition.
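A possible moment-matching recipe for the apparent parameters is sketched below on synthetic breakthrough times. It assumes the standard Fickian relations $\langle t\rangle = x_c/v_a$ and $\sigma_t^2 = 2 D_a x_c / v_a^3$; the paper's~\eqref{eq:vD} is not reproduced here, so these particular relations are an assumption, and the Gaussian arrival times are generated only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic breakthrough times at a control plane x_c, drawn from a
# Gaussian for illustration (an actual curve would come from the
# particle-tracking simulations).
x_c = 100.0
v_true, D_true = 1.0, 3.0
t_mean = x_c / v_true
t_std = np.sqrt(2.0 * D_true * x_c / v_true**3)
times = rng.normal(t_mean, t_std, 50000)

# Moment matching against the Fickian solution: the mean arrival time
# gives the apparent velocity, the variance the apparent dispersion
# coefficient.
v_a = x_c / times.mean()
D_a = times.var() * v_a**3 / (2.0 * x_c)
```

Applied at successive control planes, this recipe produces tables of $v_a$ and $D_a$ of the kind reported in Table~\ref{tab:velDisp}.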
The dispersion coefficient is variable and evolves with distance from the inlet plane. The corresponding Fickian solutions~\eqref{eq:f} and~\eqref{eq:ade_analytic} provide good descriptions of the breakthrough curves at large distances ($x_{c} > 300 \ell_{c}$, Fig.~\ref{fig:rndLx}c and f) from the inlet plane. However, the dispersion coefficients differ from the ones obtained by~\citet{Eames1999} for impermeable inclusions \eqref{eq:EamesImpervious}, $D_{a}= 0.069$, and in the limit $\kappa \to 0$ \eqref{eq:EamesLimitZero}, $D_{a} = 7.87$. As pointed out by~\citet{Eames1999}, their expressions are valid in the low density of inclusions limit of $\chi \ll 1$, which is not the case for the volume fractions under consideration here. The fact that the inclusion velocities are distributed, as discussed in Section~\ref{sec:vpdf}, is a manifestation of the interaction between inclusions; that is, they cannot be considered isolated. In summary, while the Fickian solution fits the peaks and part of the tails at intermediate and large distances from the inlet plane ($x_c > 50 \ell_c$), it fails to reproduce the sharp cut-offs at early and late times, and completely fails to reproduce the breakthrough curves at short distances ($x_c < 10 \ell_c$). Furthermore, the apparent dispersion coefficients fitted to the data evolve with distance from the inlet plane, which cannot be accommodated by a standard Fickian model based on constant transport parameters.
\begin{figure} \centering \includegraphics[width=.45\textwidth]{./Fig11a.pdf} \includegraphics[width=.45\textwidth]{./Fig11b.pdf} \includegraphics[width=.45\textwidth]{./Fig11c.pdf} \includegraphics[width=.45\textwidth]{./Fig11d.pdf} \includegraphics[width=.45\textwidth]{./Fig11e.pdf} \includegraphics[width=.45\textwidth]{./Fig11f.pdf} \caption{Breakthrough curves (a--c) and complementary cumulative breakthrough curves (d--f) at different distances from the inlet for a random arrangement of inclusions ($L_{x}=387\ell_{c}$, $L_{y}=3.87\ell_{c}$, $\kappa = 0.01$, $\chi=0.3$). The dashed lines correspond to the analytical solution~\eqref{eq:ade_analytic} where $D_{a}$ and $v_{a}$ are given in Table~\ref{tab:velDisp}. Note that the cases $ x_{c} = \ell_{c}, 5\ell_{c}$ are not modelled. }\label{fig:rndLx} \end{figure} \begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{llllllll} $x_{c}$ & 10 & 50 & 100& 200 & 300 & 387 \\ $v_{a}$ &~1 &~1 &~~1 &~~0.98 &~~0.96 &~~0.96\\ $D_{a}$ &~2.59 &~3.48 &~~3.87&~~4.42&~~2.86 &~~3.28\\ \end{tabular} \end{center} \caption{Values of velocity $v_{a}$ and dispersion coefficient $D_{a}$ are determined from the mean and variance of the corresponding breakthrough times. For comparison the values predicted by \citet{Eames1999} are $D_{a}= 0.0686$ for impermeable inclusions and $D_{a} = 7.87$ for $\kappa \ll 1$.} \label{tab:velDisp} \end{table} In order to study the impact of the width of the initial particle distribution on heterogeneity sampling we performed another series of simulations in a wide medium ($84\ell_{c} \times 28\ell_{c}$) in which we considered injection lines of increasing length (from $0.28\ell_{c}$ to the whole width of the medium; centred at $L_{y}/2$) and computed the breakthrough curves at different distances from the inlet. 
The breakthrough curves (Figure~\ref{fig:InjSize}) show that injection lines of small length (Figure~\ref{fig:InjSize}a,b with injection lines of length $0.28\ell_{c}$ and $1\ell_{c}$ respectively) do not sample enough of the medium variability even at the maximum travelled distance simulated. The curves have distinct peaks/bumps, whose number increases with the distance as the number of trapping events experienced by the particles grows. This behavior is similar to the streamtube behavior observed for the long medium (Figure~\ref{fig:rndLx}) and in the geometry of \citet{Zinn2004} (Figure~\ref{fig:zinn-btc}). For injection lines of length above $5\ell_{c}$ (Figure~\ref{fig:InjSize}c,d), the medium properties are better sampled and the shapes of the curves are more similar between injections. The shape and number of peaks/bumps is less dependent on the travelled distance and only the tail of the curve changes. Note that the medium is not long enough to observe the asymptotic behavior of Figure~\ref{fig:rndLx}d. \begin{figure} \centering \includegraphics[width=.45\textwidth]{./Fig12a.pdf} \includegraphics[width=.45\textwidth]{./Fig12b.pdf} \includegraphics[width=.45\textwidth]{./Fig12c.pdf} \includegraphics[width=.45\textwidth]{./Fig12d.pdf} \includegraphics[width=.45\textwidth]{./Fig12e.pdf} \includegraphics[width=.45\textwidth]{./Fig12f.pdf} \caption{Breakthrough curves at different distances from the inlet for injection lines of increasing length (a, d $0.28\ell_{c}$, b, e $5\ell_{c}$, and c, f $28\ell_{c}$) for a medium with $L_{x} = 84\ell_{c}$, $L_{y} = 28\ell_{c}$, $\kappa = 0.01$, and $\chi=0.2$.}\label{fig:InjSize} \end{figure} \section{Upscaled transport model} We derive an upscaled model for transport through random packings. Unlike regular packings, in which streamtubes traverse either the same number of inclusions or none of them, in random packings streamtubes can sample a random number of inclusions.
That is, one cannot distinguish only two kinds of streamtubes; instead, one has a set of streamtubes, each of which is characterized by a different random number of inclusions. The analysis of Section~\ref{sec:rp} has shown that the number of trapping events, that is, the number of inclusions a particle crosses along a trajectory, can be represented by the Poisson distribution~\eqref{eq:trap-poisson} characterized by the trapping rate $k$. Based on these observations, we can now quantify the upscaled particle motion using a continuous time random walk (CTRW) framework~\citep{Berkowitz2006, noetinger2016}. To this end, we consider advective-dispersive particle transitions in the mobile matrix \begin{subequations} \label{eq:ctrw} \begin{align} \label{ctrw} dx(s) = v_{m} ds +\sqrt{2D_m ds} \xi(s), \end{align} where $s$ denotes the mobile time spent outside the inclusions, $v_{m}$ is the mean velocity in the matrix, $D_{m}$ is the longitudinal dispersion coefficient, and $\xi(s)$ is a Gaussian white noise characterized by zero mean and unit variance. In the model, $v_{m}$ can be estimated from the covered area $\chi$ (Figure~\ref{fig:vel-mat}) and $D_{m}$ is taken equal to the dispersion coefficient for impermeable inclusions \eqref{eq:EamesImpervious}. During the mobile time $s$ particles encounter $n_{s}$ inclusions, where $n_{s}$ is distributed according to~\eqref{eq:trap-poisson}. The clock time $t(s)$ after the mobile time $s$ has passed is given by \begin{align} \label{eq:time} t(s) = s + \sum\limits_{i = 1}^{n_{s}} \tau_{i} \end{align} where $n_{s}$ is Poisson distributed with mean value $\langle n_{s} \rangle = k v_{m} s$. The trapping times $\tau_{i}$ are defined by (see Appendix~\ref{app:upscaling}) \begin{align} \tau_{i} = \frac{\ell_{i}}{v_{i}} - \frac{\ell_{i}}{v_{m}}, \end{align} where the distance $\ell_{i}$ is the secant of the circular inclusion at the height where the particle enters the inclusion (Figure~\ref{fig:inclusion}).
It is distributed according to~\eqref{eq:p_li}. The velocity $v_{i}$ in the inclusion is assumed to be constant and lognormally distributed (see Section~\ref{sec:vpdf} and Figure~\ref{fig:vel-dist}). The trapping time denotes the time a particle spends in the inclusion minus the time it would take to traverse the inclusion with the mean velocity $v_{m}$. Thus it quantifies the net impact of the inclusion. The medium is considered ergodic if each particle samples the same distribution $\psi_{f}(t)$ of trapping times as it moves through the medium. This property is clearly not fulfilled for a periodic medium, and in random media it depends on the medium and injection line lengths. According to the above, the clock time $t(s)$ is a compound Poisson process~\citep{Feller1968,Margolin:et:al:2003,BensonMeer2009,Comolli2016}. Thus, its distribution $\psi(t)$ can be written in Laplace space as (see also Appendix~\ref{app:upscaling}) \begin{align} \label{eq:psi} \psi^{\ast}(\lambda|s) = \exp\left(- \lambda s - k s v_m \left[1 - \psi^{\ast}_{f}(\lambda) \right] \right), \end{align} \end{subequations} where Laplace transformed quantities are marked by an asterisk, and $\lambda$ denotes the Laplace variable. Equations~\eqref{ctrw}--\eqref{eq:psi} constitute an upscaled CTRW model combined with a multi-trapping approach. In the following, we discuss the equivalent formulation in terms of a time non-local partial differential equation that describes advective mobile-immobile mass transfer.
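A minimal Monte Carlo sketch of the clock time~\eqref{eq:time} is given below. All parameter values are illustrative assumptions; the mobile transition is reduced to pure advection (the dispersive term $D_m$ of~\eqref{ctrw} is omitted for brevity, so the mobile time to the control plane is simply $s = x_c/v_m$), entry heights are taken uniform over the inclusion diameter, and the inclusion velocities are drawn from a lognormal distribution as in Section~\ref{sec:vpdf}.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters (assumed values, not fitted to the paper's data).
R, v_m, k = 1.0, 1.0, 0.05
mu, sigma = -2.0, 0.5                 # lognormal inclusion-velocity params
x_c = 200.0                           # control plane distance
n_particles = 20000

# Purely advective mobile travel: mobile time to the control plane.
s = x_c / v_m

# Number of trapping events per particle: Poisson with mean k * v_m * s.
n_s = rng.poisson(k * v_m * s, n_particles)

# Clock time t(s) = s + sum of trapping times tau_i, with
# tau_i = ell_i / v_i - ell_i / v_m (net delay relative to the matrix).
t = np.full(n_particles, s)
for i in range(n_particles):
    if n_s[i] == 0:
        continue
    y = rng.uniform(-R, R, n_s[i])            # entry heights (assumed)
    ell = 2.0 * np.sqrt(R**2 - y**2)          # secant lengths
    v_i = rng.lognormal(mu, sigma, n_s[i])    # inclusion velocities
    t[i] += np.sum(ell / v_i - ell / v_m)

mean_events = n_s.mean()                      # should be ~ k * v_m * s
```

A histogram of `t` is then a sketch of the breakthrough curve predicted by the upscaled model at the control plane.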
In Appendix~\ref{app:upscaling}, we derive for the mobile, that is, non-trapped, solute concentration $c_m(x,t)$ the governing equation \begin{subequations} \label{eq:mrmt-all} \begin{align} \label{eq:mrmt} \frac{\partial c_{m}(x,t)}{\partial t} + \frac{\partial}{\partial t} \int\limits_{0}^{t} dt' \varphi(t - t') \gamma c_{m}(x,t') + v_{m} \frac{\partial c_{m}(x,t)}{\partial x} - D_{m} \frac{\partial^{2} c_{m}(x,t)}{\partial x^{2}} = 0, \end{align} where the trapping rate is given by $\gamma = k v_m$. The memory function $\varphi(t)$ is given explicitly in terms of the advective trapping time distribution $\psi_{f}(t)$ as \begin{align} \label{eq:phi} \varphi(t) = \int\limits_{t}^{\infty} dt' \psi_{f}(t'). \end{align} The trapping time distribution $\psi_{f}(t)$, defined in Eq.~\eqref{eq:traptimesdist}, is determined by the inclusion size and the flow velocities within the inclusions. That is, it is fully quantified in terms of the microscopic advective trapping mechanisms. The memory function~\eqref{eq:phi} denotes the probability that the trapping time is larger than the time $t$. Thus, we can define the immobile concentration $c_{im}(x,t)$ as \begin{align} \label{eq:cimm} c_{im}(x,t) = \int\limits_{0}^{t} dt' \varphi(t - t') \gamma c_m(x,t'). \end{align} \end{subequations} This equation reads as follows. The immobile concentration is equal to the probability that a particle gets trapped in the immobile region at any time $t' < t$ times the probability that the trapping time is larger than $t - t'$. Note that in the special case of a single advection time scale $\tau_a$, that is, for $\psi_f(t) = \delta(t - \tau_a)$, the memory function~\eqref{eq:phi} reduces to a step function as considered in~\cite{Ginn2017}.
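Numerically, the memory function~\eqref{eq:phi} is the survival function of the trapping time and can be obtained from any tabulated $\psi_f(t)$ by quadrature. The sketch below uses an exponential density, chosen only because its survival function has a closed form that serves as a check; in practice the paper's actual $\psi_f(t)$ would be substituted.

```python
import numpy as np

# Grid and an assumed trapping-time density psi_f (exponential, purely so
# that the survival function e^{-lam t} is available for comparison).
lam = 0.5
t = np.linspace(0.0, 40.0, 4001)
psi_f = lam * np.exp(-lam * t)

# Memory function phi(t) = integral_t^inf psi_f(t') dt': integrate the
# density on the grid by the trapezoid rule and subtract from total mass.
dt = t[1] - t[0]
cdf = np.concatenate(([0.0],
                      np.cumsum(0.5 * (psi_f[1:] + psi_f[:-1]) * dt)))
phi = 1.0 - cdf          # tail mass beyond the grid (~e^-20) is neglected

phi_exact = np.exp(-lam * t)
max_err = np.max(np.abs(phi - phi_exact))
```

The resulting table of $\varphi(t)$ can be used directly in a discretized convolution to evaluate the immobile concentration~\eqref{eq:cimm}.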
The upscaled model defined by~\eqref{eq:mrmt}--\eqref{eq:cimm} is equal in form to memory function formulations of (multirate) mobile-immobile mass transfer~\citep{Haggerty1995, Carrera1998, Schumeretal2003, Dentz2003, Ginn2017}, and defines the immobile concentration in terms of the memory function $\varphi(t)$ as expressed by~\eqref{eq:cimm}. The memory function here quantifies advective mass transfer between high and low conductivity regions, and is fully defined in terms of the advective trapping mechanisms, the inclusion size and the velocity distribution. The formulation~\eqref{eq:mrmt}--\eqref{eq:cimm} of the upscaled model in terms of the non-local partial differential equation can be considered as an advective mobile-immobile mass transfer model. \begin{figure} \centering \includegraphics[width=.45\textwidth]{./Fig13a.pdf} \includegraphics[width=.45\textwidth]{./Fig13b.pdf} \includegraphics[width=.45\textwidth]{./Fig13c.pdf} \includegraphics[width=.45\textwidth]{./Fig13d.pdf} \includegraphics[width=.45\textwidth]{./Fig13e.pdf} \includegraphics[width=.45\textwidth]{./Fig13f.pdf} \caption{Comparison of the breakthrough curves (a--c) and complementary cumulative breakthrough curves (d--f) of the CTRW model \eqref{eq:ctrw} (solid lines) to the numerical simulations (dots) for a random arrangement of inclusions with $L_{x}=387 \ell_c$, $L_{y}=3.8 \ell_c$, $\kappa = 0.01$, and $\chi=0.3$. The curves are measured at increasing distances from the inlet. The CTRW model uses the parameters (velocity in the matrix, mean number of trapping events) measured at the outlet. The mean number of trapping events is rescaled for the intermediate distances.}\label{fig:CTRW-5100} \end{figure} Before we apply this model to the data of the direct numerical simulations, some comments on its assumptions are in order. The basis of the model is the assumption of ergodicity of the underlying disorder in the following sense.
First, the CTRW samples the number $n_{s}$ of trapping events from a Poisson distribution. This implies that the inclusion pattern is random and fluctuates on a characteristic length scale. All particles sample from the same Poisson distribution; that is, all particles must have access to the same statistics as they move through the medium, which requires the spatial pattern to be stationary. The same holds for the distribution of trapping times, which are sampled as independent identically distributed random variables. In the following, we analyze the breakthrough curves in the light of these remarks. Figure \ref{fig:CTRW-5100} compares the results for the breakthrough curves of the direct numerical simulations to the prediction of the CTRW for a narrow medium of $L_y = 3.8 \ell_{c}$ at different distances from the inlet. This medium has on average 2.5 inclusions per vertical cross section. For this case, we do not expect the upscaled model to provide a good prediction at short distances because the ergodicity conditions discussed above do not apply. All the particles in the direct numerical simulation initially experience the same or similar disorder; that is, they are not statistically independent. In fact, transport can be interpreted as occurring in streamtubes as discussed above. Only with distance from the inlet do particles start sampling the medium structure and heterogeneity, that is, the number of times they pass through an inclusion and the trapping times they experience. Remarkably, owing to the random nature of the medium, sampling is sufficiently efficient that the upscaled CTRW model reproduces the primary peak of first arrival and the secondary peaks that correspond to different numbers of trapping events. That is, particles sample along single streamlines a representative part of the medium statistics.
\begin{figure} \centering \includegraphics[width=.49\textwidth]{./Fig14a.pdf} \includegraphics[width=.49\textwidth]{./Fig14b.pdf} \caption{Comparison of the breakthrough curves (a) and complementary cumulative breakthrough curves (b) of the CTRW model \eqref{eq:ctrw} (solid lines) to the numerical simulations (dots) for a random arrangement of inclusions with $L_{x} = 84\ell_{c}$, $L_{y} = 28\ell_{c}$, $\kappa = 0.01$, and $\chi=0.2$ for an injection line of size equal to the domain height. The curves are measured at different distances from the inlet.}\label{fig:CTRW-Ly} \end{figure} The breakthrough curves shown in Figure \ref{fig:CTRW-Ly} are measured in a medium whose lateral extension comprises $28 \ell_c$; that is, the particles injected over the medium cross section sample from the start a representative part of the medium heterogeneity, both in terms of the spatial structure and in terms of the trapping time statistics. Thus, the upscaled CTRW model predicts the direct simulation data already at short distances from the inlet. It provides good predictions for the first arrival, and also, as in Figure~\ref{fig:CTRW-5100}, for the secondary peaks. \section{Conclusions} We studied in this paper the advective transport of solutes in an idealized heterogeneous porous medium consisting of a homogeneous background material with low permeability circular inclusions. In such media the distribution of flow, and of solute if injected uniformly, between the matrix and the low permeability inclusions is given by the permeability ratio provided that the inclusion density is not very high. While the velocity in the matrix depends on the fraction of the area covered by the inclusions, the mean velocity in the inclusions follows a lognormal distribution. Transport is characterized by breakthrough curves whose shape evolves as the medium is sampled.
At short distance from the injection inlet, or when the injection length is smaller than the domain width, the breakthrough curve has a wavy shape that reflects the trapping of particles at the inclusions. This curve can be interpreted as transport through streamtubes with different velocities. As the distance, or injection length, grows, the properties of the medium are better sampled and the curve becomes smoother. Particles arrive gradually at the control plane, which reflects the tortuosity of the streamlines that go through the matrix. The better sampling of the velocity distribution in the inclusions also makes the tail of the curve smoother. These features are common to the behavior of the ADE \eqref{eq:ADE}. However, the shape of the curves cannot be predicted with a macrodispersion coefficient. The ADE overestimates the concentration at early times and underestimates it at late times. Unrealistic values of the dispersion coefficient are obtained from the variance of the breakthrough times (see Table~\ref{tab:velDisp}). This is particularly accentuated when the medium is undersampled. The problem with the representation of variable travel times as macrodispersion can be illustrated with the results of \citet{Eames1999}. In their analysis, the macrodispersion coefficient derived for a finite permeability ratio $\kappa$ diverges in the limit $\kappa \to 0$. However, if the inclusions are impermeable from the beginning, the macrodispersion coefficient is finite. The latter captures the effect of tortuous streamlines in the background material. If the inclusions are considered permeable, the macrodispersion coefficient also captures the trapping in the inclusions. As long as the inclusions are permeable, there is a probability that solute gets into an inclusion. Once inside an inclusion, the solute is slowed down, which leads to a tailing of the breakthrough curve. The lower the permeability ratio, the stronger the tailing due to the longer delay time.
Therefore, in the limit of a ratio of zero, the tail becomes infinitely long. The fact that the probability to get into an inclusion also goes to zero does not counteract the infinitely long trapping time. As the macrodispersion coefficient is obtained from the total solute distribution in the domain, retention in an inclusion for an infinite time leads to a diverging macrodispersion coefficient. If the inclusions are impermeable from the beginning, there is no transport through inclusions that could cause tailing, and therefore the macrodispersion coefficient is finite. This behavior is inherent to assuming an advection-dispersion equation for the upscaled model. We developed an upscaled transport model using a Lagrangian framework. The main assumption of the model is that the medium structure is ergodic. Therefore the model performance improves as the particles sample a larger part of the medium heterogeneity. This is the case either when the particles travel a long distance from the inlet or when the injection length is long, so that the medium properties are explored even at short distance from the inlet. The CTRW model developed has solid predictive capabilities because it is parameterized by measurable medium properties: the trapping rate (the number of trapping events per distance), which follows a Poisson distribution and is inversely proportional to the mean separation between inclusions; the velocity in the matrix, which is well approximated by a function of the area covered by the inclusions (Figure~\ref{fig:vel-mat}); and the velocity distribution inside the inclusions. The mean velocity inside the inclusions follows a log-normal distribution, which needs further investigation. So does the Poisson distribution of trapping events, which constitutes an important part of the upscaled model with possible applications to more general scenarios.
Furthermore, we assumed constant porosity and considered a 2D scenario. We anticipate that for 3D geometries and variable porosity the trapping rate and velocity distribution may change, but that otherwise the derived model remains valid. The upscaled model was also formulated as an equivalent mobile-immobile memory function model. The memory function is determined by the trapping time distribution and, for the same reasons as outlined above for the CTRW model, is predictable from information about the parameters and structure of the porous medium. The memory function in our setting describes the tailing of breakthrough curves due to advective transport through circular inclusions with low permeability. Generalizing it to media with inclusions with a distribution of permeability values or sizes is straightforward if appropriate models for the velocity distribution inside the inclusions can be formulated. Purely advective transport was considered here as a limiting case of advective-diffusive transport. The other limiting case, purely diffusive transport inside the inclusions, has been studied, and mobile-immobile models are well established for it. As a next step, it would be necessary to consider the combined effect of advective and diffusive transport inside the inclusions, and to derive predictive mobile-immobile memory function models based on the models for pure advection and for pure diffusion. \section*{Acknowledgements} Data used for producing the figures can be downloaded from digital.csic.es (https://digital.csic.es/handle/10261/216991) and by solving the respective equations. The authors thank Prof. Timothy R. Ginn and two anonymous reviewers for their comments on the paper. J.J.H. and M.D. acknowledge the support of the Spanish Ministry of Science and Innovation (project CEX2018-000794-S and project HydroPore PID2019-106887GB-C31). J.J.H.
acknowledges the support of the European Social Fund and the Spanish Ministry of Science, Innovation and Universities through the ``Ram\'on y Cajal'' fellowship (RYC-2017-22300). The authors thank Pierre Uszes for providing the code to measure the average distance between inclusions. \section*{Declaration of Interests} The authors report no conflict of interest.
\section{Introduction} Named Entity Recognition (NER) is a vital part of information extraction. It aims to locate and classify the named entities in unstructured text. The entity categories are usually person, location and organization names, etc. Kazakh is an agglutinative language with complex morphological word structures. Each root/stem in the language can produce hundreds or thousands of new words, which leads to a severe data sparsity problem when automatically identifying entities. In order to tackle this problem, Tolegen et al. (2016) \cite{Tolegen:16} gave a systematic study of Kazakh NER using conditional random fields (CRF). More specifically, the authors assembled and annotated the Kazakh NER corpus (KNC), and proposed a set of named entity features together with an exploration of their effects. To achieve a state-of-the-art result for Kazakh NER comparable with that of other languages, the authors manually designed feature templates, which in practice is a labor-intensive process and requires a lot of expertise. With the intention of alleviating such task-specific feature engineering, there has been increasing interest in using deep learning to solve the NER task for many languages. However, the effectiveness of deep learning for Kazakh NER is still unexplored. One of the aims of this work is to use deep learning for Kazakh NER to avoid task-specific feature engineering and to achieve a new state-of-the-art result. As in similar studies \cite{Collobert:2011}, neural networks (NNs) produce strong results for English and other languages by using distributed word representations. However, using only surface word representations in deep learning may not be enough to reach state-of-the-art results for under-resourced morphologically complex languages (MCLs). The main reason is that deep learning approaches are data hungry; their performance is strongly correlated with the amount of available training data.
In this paper, we introduce three types of representations for MCL, including word, root and entity tag embeddings. With the purpose of discovering how the above embeddings contribute to model performance independently, we use a simple NN as the baseline for this investigation. We also improve this basic model from two perspectives. One is to apply a tensor transformation layer to extract multi-dimensional interactions among those representations. The other is to map each entity tag into a vector representation. The results show that the use of root embeddings leads to a significant improvement of the models in terms of test results. Our NNs reached good outcomes by transferring intermediate representations learned on large unlabeled data. We compare the NNs with the existing CRF-based NER system for Kazakh \cite{Tolegen:16} and with the bidirectional-LSTM-CRF \cite{N16-1030} that is considered the state-of-the-art in NER. Our NNs outperform the state-of-the-art, and the results indicate that the proposed NNs can potentially be applied to other morphologically complex languages. The rest of the paper is organized as follows: Section 2 reviews existing work. Section 3 describes the named entity features used in this work. Section 4 describes the details of the neural networks. Section 5 reports the experimental results, and the paper is concluded in Section 6 with future work. \section{Related Work} Named Entity Recognition has been studied for several decades, not only for English \cite{Chieu:2003:NER,Klein:2003:NER,konvens:17_tkachenko12o}, but also for other MCLs, including Kazakh \cite{Tolegen:16} and Turkish \cite{Yeniterzi:2011:EMT,DBLP:conf/coling/SekerE12}. For instance, Chieu and Ng (2003) \cite{Chieu:2003:NER} presented a maximum entropy approach for NER in English and German, where the authors used both local and global features to enhance their models and achieved good performance.
In order to explore the flexibilities of four diverse classifiers (hidden Markov model, maximum entropy, transformation-based learning, robust linear classifier) for NER, the work of \cite{Florian:2003:NER} showed that a combined system of these models under different conditions could reduce the F1-score error by a factor of 15 to 21\% on an English data-set. Since the maximum entropy approach suffers from the label bias problem \cite{Lafferty:2001}, researchers turned to the CRF model \cite{McCallum:2003:NER} and presented CRF-based NER systems with a number of external features. Such supervised NER systems are extremely sensitive to the selection of an appropriate feature set. In the work of \cite{konvens:17_tkachenko12o}, the authors explored various combinations of a set of features (local and non-local knowledge features) and compared their impact on recognition performance for English. Using a CRF with an optimized feature template, they obtained a 91.02\% F1-score on the CoNLL 2003 \cite{TjongKimSang:2003} data-set. For Turkish, Yeniterzi (2011) \cite{Yeniterzi:2011:EMT} analyzed the effect of morphological features using a CRF enhanced with several syntactic and contextual features; the model achieved an 88.94\% F1-score on Turkish test data. In the same direction, Seker and Eryigit (2012) \cite{DBLP:conf/coling/SekerE12} presented a CRF-based NER system with their own feature set; their final model achieved the highest F1-score (92\%). For Kazakh, Tolegen et al. (2016) \cite{Tolegen:16} annotated a Kazakh NER corpus (KNC) and carefully analyzed the effect of morphological features (6 features) and word type features (4 features) using a CRF. Their results showed that the model could be improved significantly by using morphological features; the final CRF-based NER system achieved an 89.81\% F1-score on Kazakh test data. In this work, we use this CRF-based NER system as one baseline and compare it to our deep learning models.
Recently, deep learning models including biLSTMs have obtained significant success on various natural language processing tasks, such as POS tagging \cite{wieting-EtAl:2016:EMNLP2016,ling-EtAl:2015:EMNLP2,toleu-tolegen-makazhanov:2017:Short,8880244}, NER \cite{Chieu:2003:NER,kuru-can-yuret:2016:COLING}, machine translation \cite{DBLP:journals/corr/BahdanauCB14,He:NIPS:2016:6469}, word segmentation \cite{kuru-can-yuret:2016:COLING} and other fields like speech recognition \cite{10.1080/23311916.2020.1727168,Graves06connectionisttemporal,orken2018,articleOrken:2018}. As the state-of-the-art in NER, the study \cite{N16-1030} explored various neural architectures, including language independent character-based biLSTM-CRF models. These types of models achieved F1-scores of 81.74\%, 85.75\% and 90.94\% on German, Dutch and English, respectively. Our models have several differences compared to other state-of-the-art models. One difference is that we introduce root embeddings to tackle the data sparsity caused by MCL. The decoding part (referred to as the CRF layer in the literature \cite{N16-1030,P16-1101,W18-5605}) is combined into the NNs using tag embeddings. The word, root and tag embeddings are then efficiently incorporated and computed by the NNs in the same vector space, which allows us to extract higher-level vector features. \begin{table} \caption{The entity features; for more details see Tolegen et al.
\cite{Tolegen:16}}\label{table:statistics} \centering \begin{tabular}{clcl} \hline \multicolumn{2}{c}{Morphological features} & \multicolumn{2}{c}{Word type features} \\ \hline \multicolumn{2}{c}{Root} & \multicolumn{2}{c}{Case feature} \\ \multicolumn{2}{c}{Part of speech} & \multicolumn{2}{c}{Start of the sentence} \\ \multicolumn{2}{c}{Inflectional suffixes} & \multicolumn{2}{c}{Latin spelling words} \\ \multicolumn{2}{c}{Derivational suffixes} & \multicolumn{2}{c}{Acronym} \\ \multicolumn{2}{c}{Proper noun} & \multicolumn{2}{c}{-} \\ \multicolumn{2}{c}{Kazakh Name suffixes} & \multicolumn{2}{c}{-} \\ \hline \end{tabular} \end{table} \section{Named Entity Features} NER models are often enhanced with named entity features. In this work, with the purpose of making a fair comparison, we utilize the same entity features proposed by Tolegen et al. (2016) \cite{Tolegen:16}. The entity features are given in Table 1 in two categories: morphological and word type information. Morphological features are extracted using the morphological tagger of our implementation. We use a single value (1 or 0) for each feature according to whether the word has that feature or not. Each word in the corpus thus has an entity feature vector that is fed into the NNs together with the word, root and tag embeddings. \section{The Neural Networks} In this section, we describe our NNs for MCL NER. Unlike other NNs for English or similar languages, we introduce three types of representations: word, root and tag embeddings. In order to explore the effects of root and tag embeddings separately and clearly, our first model is a general deep neural network (DNN), which was first proposed by Bengio et al. (2003) \cite{Bengio:2003} for probabilistic language modeling and re-introduced by Collobert et al. (2011) \cite{Collobert:2011} for multiple NLP tasks. The DNN is also a standard model for sequence labeling tasks and serves as a strong baseline.
The second model extends the DNN by adding a tensor layer. The tensor layer can be viewed as a non-linear transformation that extracts higher-dimensional interactions from the input. The architecture of our NN is shown in Figure 1. The first layer is a lookup table layer which extracts features for each word. Here, the features are a window of words, the root ($S_i$) and the previous tag embedding ($t_{i-1}$). The concatenation of these feature vectors is fed into the next several layers for feature extraction. The next layer is the tensor layer, and the remaining layers are standard NN layers. The NN layers are trained by backpropagation; the details of the NNs are given in the following sections. \begin{figure}[!ht] \centering \includegraphics[width=8cm]{ArchitectureFF.pdf} \caption{The architecture of the Neural Network.} \end{figure} \subsection{Mapping words and tags into feature vectors} The NNs have two dictionaries\footnote{\scriptsize {The dictionaries are extracted from the training data after some pre-processing, namely lowercasing and word stemming. Words outside a dictionary are replaced by a single special symbol}.}: one for roots and another for words. For simplicity, we will use one notation for both dictionaries in the following descriptions. Let $\mathcal{D}$ be the finite dictionary; each word $x_i \in \mathcal{D}$ is represented as a $d$-dimensional vector $M_{x_i} \in \mathbb{R}^{1 \times d}$, where $d$ is the word vector size (a hyper-parameter). All word representations of $\mathcal{D}$ are stored in an embedding matrix $M \in \mathbb{R}^{d \times |\mathcal{D}|}$, where $|\mathcal{D}|$ is the size of the dictionary.
Each word $x_i \in \mathcal{D}$ corresponds to an index $k_i$, which is the column index of the embedding matrix; the corresponding word embedding is retrieved by the lookup table layer $LT_M(\cdot)$: \begin{equation} \begin{aligned} LT_M(k_i) = M_{x_i} \end{aligned} \label{eq:lookup} \end{equation} Similar to word embeddings, we introduce tag embeddings $L \in \mathbb{R}^{d \times |\mathcal{T}|}$, where $d$ is the vector size and $\mathcal{T}$ is the tag set. The lookup table layer can be seen as a simple projection layer in which the word embeddings of the context and the tag embedding of the previous word are retrieved by a lookup operation. To use these features effectively, we use a sliding window approach\footnote{\scriptsize {The words exceeding the sentence boundaries are mapped to one of two special symbols, namely the ``start" and ``end" symbols.}}. More precisely, for each word $x_i \in X$, a window of word embeddings is given by the lookup table layer: \begin{equation} \begin{aligned} f_{\theta}^{1}(x_i) = \begin{bmatrix} M_{x_{i-\frac {w}{2}}} \dots M_{x_i} \dots M_{x_{i+\frac {w}{2}}} , S_{i} , t_{i-1} \\ \end{bmatrix} \end{aligned} \label{eq:window} \end{equation} where $f_{\theta}^{1}(x_i) \in \mathbb{R}^{1 \times (w+2)d}$ is the concatenation of $w$ word feature vectors, the root embedding and the previous tag embedding; $w$ is the window size (a hyper-parameter), $t_{i-1} \in \mathbb{R}^{1 \times d}$ is the previous tag embedding and $S_{i}$ is the embedding of the current root. These embedding matrices are initialized with small random numbers and trained by back-propagation. \subsection{Tensor Layer} In order to capture more interactions between roots, surface words, tags and entity features, we extend the DNN to a tensor neural network. We use a 3-way tensor $ \mathrm{T} \in \mathbb{R}^{h_2 \times h_1 \times h_1}$, where $h_1$ is the size of the previous layer and $h_2$ is the size of the tensor layer. We define the output of a tensor product $h$ via the following vectorized notation.
\begin{equation} \begin{aligned} h = g(e^T \mathrm{T} e + W^3e + b^3) \end{aligned} \label{eq:tensor} \end{equation} where $e \in \mathbb{R}^{h_1} $ is the output of the previous layer, $W^3 \in \mathbb{R}^{h_2 \times h_1} $, $h \in \mathbb{R}^{h_2}$ and $g$ is a non-linear activation function. Maintaining the full tensor directly leads to a parameter explosion. Here, we use a tensor factorization approach~\cite{pei2014maxmargin} that factorizes each tensor slice as the product of two low-rank matrices, which yields the factorized tensor function: \begin{equation} \begin{aligned} h = g(e^\mathrm{T} P^{[i]} Q^{[i]}e + W^3e + b^3) \end{aligned} \label{eq:tensorfact} \end{equation} where $P^{[i]} \in \mathbb{R}^{h_1 \times r}$ and $Q^{[i]} \in \mathbb{R}^{r \times h_1}$ are two low-rank matrices and $r$ is the number of factors (a hyper-parameter). \subsection{Tag inference} In NER, there are strong dependencies between the named entity tags in a sentence. In order to capture the tag transitions, we use a transition score $A_{ij}$~\cite{Collobert:2011,Zheng:2013} for jumping from one tag $i \in \mathcal{T}$ to another tag $j \in \mathcal{T}$ and an initial score $A_{0i}$ for starting from the $i^{th}$ tag. For an input sentence $X$ with a tag sequence $Y$, a sentence-level score can be calculated as the sum of the transition scores and the outputs of the NNs: \begin{equation} \begin{aligned} s(X,Y,\theta) = \sum_{i=1}^{N}(A_{t_{i-1},t_i} + f_{\theta}(t_i|i)) \end{aligned} \label{eq:score} \end{equation} where $f_{\theta}(t_i|i)$ is the score output by the network for tag $t_{i}$ at the $i^{th}$ word. It should be noted that this model calculates the tag transition scores independently of the NNs. One possible way of combining the tag transitions and the neural network outputs is to feed the previous tag embedding to the NNs.
The output of the NNs then calculates a transition score given the previous tag embedding, and the sentence-level score can be written as follows: \begin{equation} \begin{aligned} s(X,Y,\theta) = \sum_{i=1}^{N}f_{\theta}(t_i|i,t_{i-1}) \end{aligned} \label{eq:score2} \end{equation} At inference time, for a sentence $X$, we can find the best tag path $Y^*$ by maximizing the sentence score. The Viterbi algorithm can be used for this inference. \section{Experiments} We conducted several experiments to evaluate our NNs. One is to explore, independently, the effects of the word, root and tag embeddings and of the tensor layer on the MCL NER task. Another is to show the results of our models when using pre-trained root and word embeddings. The last is to compare our models with the state-of-the-art, including the character embedding-based biLSTM-CRF \cite{N16-1030}. \subsection{Data-set} In the experiments we used the data from \cite{Tur:2003:SIE:973762.973766} for Turkish and the Kazakh NER corpus (KNC) from \cite{Tolegen:16}. Both corpora were divided into training (80\%), development (10\%) and test (10\%) sets. The development set is used for choosing the hyper-parameters and for model selection. We adopted the IOB tagging scheme~\cite{TjongKimSang:2002} for all experiments and used the standard conlleval evaluation script\footnote{\scriptsize {www.cnts.ua.ac.be/conll2000/chunking/conlleval.txt}} to report F-score, precision and recall values. \begin{table}[] \caption{Corpus statistics.} \begin{tabular}{l|lllll|lllll} \hline & \multicolumn{5}{c|}{Kazakh} & \multicolumn{5}{c}{Turkish} \\ \hline & \#sent. & \#token & \#LOC & \#ORG & \#PER & \#sent. & \#token & \#LOC & \#ORG & \#PER \\ \hline train & 14457 & 215448 & 5870 & 2065 & 3424 & 22050 & 397062 & 9387 & 7389 & 13080 \\ dev.
& 1807 & 27277 & 785 & 247 & 413 & 2756 & 48990 & 1171 & 869 & 1690 \\ test & 1807 & 27145 & 731 & 247 & 452 & 2756 & 46785 & 1157 & 925 & 1521 \\ \hline \end{tabular} \end{table} \subsection{Model setup} A set of experiments was conducted to choose the hyper-parameters, which were tuned on the development set. The initial learning rate of AdaGrad is set to 0.01 and the regularization is fixed to $10^{-4}$. Generally, the number of hidden units has a limited impact on the performance as long as it is large enough. The window size $w$ was set to 3, the word, root and tag embedding sizes were set to 50, and the number of hidden units was 300 for the NNs; for the NNs with a tensor layer, it was set to 50 and the factor size was set to 3. After finding the best hyper-parameters, we trained the final models for all NNs. After each epoch over the training set, we measured the accuracy of the model on the development set, chose the final model that obtained the highest performance on the development set, and then used the test set to evaluate the selected model. We applied several pre-processing steps to the corpora, namely token and sentence segmentation and lowercasing of surface words; the roots were kept in their original forms. \subsection{Results} We evaluate the following model variations in the experiments: i) a baseline neural network, \textit{NN}, which contains a discrete tag transition; ii) \textit{NN+root}, a model that uses root embeddings and the discrete tag transition; iii) \textit{NN+root+tag}, a model in which the discrete tag transition of NN is replaced by named entity tag embeddings; iv) \textit{NN+root+tensor}, a tensor layer-based model with discrete tag transition; v) models with \textit{+feat}, which use the named entity features. \begin{table}[] \caption{Results of the NNs for Kazakh and Turkish (F1-score, \%).
Here \textit{root} and \textit{tag} indicate root and tag embeddings; \textit{tensor} means tensor layer; \textit{feat} denotes entity feature vector; \textit{Kaz} - Kazakh and \textit{Tur} - Turkish; \textit{Ov} - Overall.} \begin{tabular}{c|c|c|c|l|lcllcc|cccllcll} \hline L. & \# & Models & \multicolumn{8}{c|}{Development set} & \multicolumn{8}{c}{Test set} \\ \hline \multirow{8}{*}{Kaz} & & & \multicolumn{3}{c}{LOC} & \multicolumn{3}{c}{ORG} & PER & Ov & LOC & ORG & \multicolumn{3}{c}{PER} & \multicolumn{3}{c}{Ov} \\ \cline{2-19} & 1 & NN & \multicolumn{3}{c}{86.69} & \multicolumn{3}{c}{68.95} & 68.57 & 78.66 & 86.32 & 69.51 & \multicolumn{3}{c}{64.78} & \multicolumn{3}{c}{76.89} \\ & 2 & NN+root & \multicolumn{3}{c}{87.48} & \multicolumn{3}{c}{70.23} & 75.66 & 81.20 & 87.74 & 72.53 & \multicolumn{3}{c}{\textbf{75.25}} & \multicolumn{3}{c}{\textbf{81.36}} \\ & 3 & NN+root+tag & \multicolumn{3}{c}{88.85} & \multicolumn{3}{c}{67.69} & 79.68 & 82.81 & 87.65 & 73.75 & \multicolumn{3}{c}{76.13} & \multicolumn{3}{c}{\textbf{81.86}} \\ & 4 & NN+root+tensor & \multicolumn{3}{c}{89.56} & \multicolumn{3}{c}{72.54} & 81.07 & 84.22 & 88.51 & \textbf{75.79} & \multicolumn{3}{c}{77.32} & \multicolumn{3}{c}{\textbf{82.83}} \\ \cline{2-19} & 5 & NN+root+feat & \multicolumn{3}{c}{93.48} & \multicolumn{3}{c}{78.35} & 91.59 & 90.40 & 92.48 & 78.90 & \multicolumn{3}{c}{90.75} & \multicolumn{3}{c}{89.54} \\ & 6 & NN+root+tensor+feat & \multicolumn{3}{c}{93.78} & \multicolumn{3}{c}{81.48} & 90.91 & 90.87 & 92.22 & \textbf{81.57} & \multicolumn{3}{c}{91.27} & \multicolumn{3}{c}{90.11} \\ & 7 & NN+root+tag+tensor+feat & \multicolumn{3}{c}{93.65} & \multicolumn{3}{c}{81.28} & 92.42 & 91.27 & \textbf{92.96} & 78.89 & \multicolumn{3}{c}{\textbf{91.70}} & \multicolumn{3}{c}{\textbf{90.28}} \\ \hline \multirow{7}{*}{Tur} & 8 & NN & \multicolumn{3}{c|}{85.06} & \multicolumn{3}{c}{74.70} & 81.11 & 80.86 & 83.17 & 76.26 & \multicolumn{3}{c}{80.55} & \multicolumn{3}{c}{80.29} \\ & 9 & NN+root & 
\multicolumn{3}{c}{87.38} & \multicolumn{3}{c}{77.13} & 84.78 & 83.78 & 85.78 & 78.66 & \multicolumn{3}{c}{84.03} & \multicolumn{3}{c}{\textbf{83.17}} \\ & 10 & NN+root+tag & \multicolumn{3}{c}{90.70} & \multicolumn{3}{c}{84.93} & 86.67 & 87.53 & \textbf{90.02} & \textbf{86.14} & \multicolumn{3}{c}{85.95} & \multicolumn{3}{c}{\textbf{87.31}} \\ & 11 & NN+root+tensor & \multicolumn{3}{c}{92.43} & \multicolumn{3}{c}{86.45} & 89.63 & 89.78 & 90.50 & 87.14 & \multicolumn{3}{c}{\textbf{90.00}} & \multicolumn{3}{c}{\textbf{89.42}} \\ \cline{2-19} & 12 & NN+root+feat & \multicolumn{3}{c}{91.54} & \multicolumn{3}{c}{89.04} & 91.62 & 91.01 & 90.27 & 89.50 & \multicolumn{3}{c}{91.95} & \multicolumn{3}{c}{90.78} \\ & 13 & NN+root+tensor+feat & \multicolumn{3}{c|}{93.60} & \multicolumn{3}{c}{88.88} & 92.23 & 91.88 & 92.05 & 89.35 & \multicolumn{3}{c}{92.01} & \multicolumn{3}{c}{91.34} \\ & 14 & NN+root+tag+tensor+feat & \multicolumn{3}{c}{91.77} & \multicolumn{3}{c}{89.72} & 92.23 & 91.44 & 92.80 & 88.45 & \multicolumn{3}{c}{91.91} & \multicolumn{3}{c}{\textbf{91.39}} \\ \hline \end{tabular} \end{table} Table 3 summarizes the results for Kazakh and Turkish. Rows 1-4 and 8-11 compare the root embedding, tag embedding and tensor layer independently. Rows 5-7 and 12-14 show the effect of the entity features. As shown, when only surface word forms are used, \textit{NN} gives a 76.89\% overall F1-score for Kazakh. \textit{NN} gives low F1-scores of 64.78\% and 69.51\% for PER and ORG, respectively. There are two main reasons for this: i) there are fewer person and organization names than location names (Table 2), and ii) compared to other entities, organization names are much longer and contain words that are ambiguous with person names\footnote{This often happens when the organization name is given after someone's name.}. For Turkish, \textit{NN} yields 80.29\% overall F1.
Rows 2 and 9 show that \textit{NN+root} improves significantly on all measures after using the root embedding: there are improvements of 4.47\% and 2.88\% in overall F1 for Kazakh and Turkish compared to \textit{NN}. More precisely, using the root embedding, \textit{NN+root} gives improvements of 10.47\%, 3.02\% and 1.42\% for the Kazakh PER, ORG and LOC entities, respectively. The results for Turkish follow the same pattern. Rows 3 and 10 show the effect of replacing the discrete tag transition with named entity tag embeddings. We observe that \textit{NN+root+tag} yields overall F1-scores of 81.86\% and 87.31\% for Kazakh and Turkish. Compared to \textit{NN+root}, the model with entity tag embeddings shows a significant improvement for Turkish, with 4.14\% in overall F1. For both languages, the model performances are boosted by the tensor transformation, which shows that the tensor layer can capture more interactions between root and word vectors. Using the entity features, \textit{NN+root+feat} gives a significant improvement for Kazakh (from 81.36\% to 89.54\%) and Turkish (from 83.17\% to 90.78\%). The best result for Kazakh is a 90.28\% F1-score, obtained by using the tensor transformation with tag embeddings and entity features. We compare our NNs with the existing CRF-based NER system~\cite{Tolegen:16} and other state-of-the-art models. According to recent studies on NER \cite{N16-1030,P16-1101,W18-5605}, the current cutting-edge deep learning model for sequence labeling is the bi-directional LSTM with a CRF layer. On the one hand, we trained such a state-of-the-art NER model for Kazakh for comparison. On the other hand, it is also worth seeing how well a character-based model performs for agglutinative languages, since character-based approaches seem well suited to the agglutinative nature of these languages and can serve as a stronger baseline than the CRF.
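The Viterbi decoding used for finding the best tag path $Y^*$ (shared by the CRF layer of the biLSTM baselines and by our discrete-transition models) can be sketched in a few lines of NumPy; the toy emission and transition scores below are illustrative, not learned values:

```python
import numpy as np

def viterbi_decode(emissions, transitions, initial):
    """Best tag path maximizing s(X,Y) = sum_i (A[t_{i-1}, t_i] + f(t_i | i))."""
    n, T = emissions.shape
    score = initial + emissions[0]          # best score of paths ending in each tag
    back = np.zeros((n, T), dtype=int)      # backpointers
    for i in range(1, n):
        cand = score[:, None] + transitions + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):           # follow backpointers
        path.append(int(back[i, path[-1]]))
    return list(reversed(path)), float(score.max())

# toy example: two tags; transitions strongly discourage repeating a tag
emissions = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 0.0]])
transitions = np.array([[-5.0, 0.0], [0.0, -5.0]])
best_path, best_score = viterbi_decode(emissions, transitions, np.zeros(2))
# best_path alternates tags: [0, 1, 0]
```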
For the biLSTM-based models, we set the hyper-parameters to be comparable with those of the models that yield state-of-the-art results for English \cite{N16-1030,P16-1101}. The word and character embedding sizes are set to $300$ and $100$, respectively. The number of hidden units of the LSTM is set to $300$ for both the character and word levels. The dropout rate is set to 0.5 and the Adam update rule is used for learning the model parameters. It should be noted that entities in Kazakh always start with a capital letter, and the data sets used for the biLSTM-based models were not lowercased, which could have a positive effect on recognition. For a fair comparison, the following NER models were trained on the same training, development and test sets. Table 4 shows the comparison of our NNs with the state-of-the-art for Kazakh. \begin{table}[] \caption{Comparison of our NNs and state-of-the-art} \centering \begin{tabular}{c|cccc} \hline Models & LOC & ORG & PER & Overall \\ \hline CRF \cite{Tolegen:16} & 91.71 & \textbf{83.40} & 90.06 & \textbf{89.81} \\ biLSTM+dropout & 85.84 & 68.91 & 72.75 & 78.76 \\ biLSTM-CRF+dropout & 86.52 & 69.57 & 75.79 & 80.28 \\ biLSTM-CRF+Characters+dropout & 90.43 & 76.10 & 85.88 & \textbf{86.45} \\ \hline NN+root+feat & 92.48 & 78.90 & 90.75 & 89.54 \\ NN+root+tensor+feat & 92.22 & 81.57 & 91.27 & 90.11 \\ NN+root+tag+tensor+feat & \textbf{92.96} & 78.89 & 91.70 & \textbf{90.28} \\ \hline NN+root+feat* & 91.74 & 81.00 & 90.99 & 89.70 \\ NN+root+tensor+feat* & 92.91 & 81.76 & 91.09 & 90.40 \\ NN+root+tag+tensor+feat* & 91.33 & \textbf{81.88} & \textbf{92.00} & \textbf{90.49} \\ \hline \end{tabular} \end{table} The CRF-based system~\cite{Tolegen:16} achieved an F1-score of 89.81\% using all features with its well-designed feature template. The biLSTM-CRF with character embeddings yields an 86.45\% F1-score, which is better than the result of the model without characters.
A significant improvement of about 6\% in overall F1-score was thus gained by using character embeddings, indicating that character-based models fit the nature of MCLs. We initialized the root and word embeddings with pre-trained embeddings. The skip-gram model of \emph{word2vec}\footnote{\scriptsize {https://code.google.com/p/word2vec/}.}~\cite{Mikolov:13} was used to train root and word vectors on large collections of Kazakh news articles and Wikipedia texts\footnote{\scriptsize {In order to reduce the dictionary sizes of roots and surface words, we applied some pre-processing, namely lowercasing and word stemming using a morphological analyzer and disambiguator.}}. Table 4 also shows the results after pre-training the root and word embeddings, marked with the symbol *. As shown, the pre-trained root and word representations have a minor effect on the overall F1-score of the NN models. For organization names in particular, the pre-trained embeddings have positive effects: the \textit{NN+root+feat*} and \textit{NN+root+tag+tensor+feat*} models achieve around 2\% improvement in organization F1-score compared to the models without pre-trained embeddings (the former from 78.90\% to 81.00\% and the latter from 78.89\% to 81.88\%). Overall, our NNs outperform the CRF-based system and the other state-of-the-art model (biLSTM-CRF+Characters+dropout), and the best NN yields an F1 of 90.49\%, a new state-of-the-art for Kazakh NER. To show the effect of the word embeddings after model training, we calculated the ten nearest neighbors of a few randomly chosen query words (first row of Table 5), with distances measured by cosine similarity. As given in Table 5, the nearest neighbors in the three columns are related to their named entity labels: location, person and organization names are listed in the first, second and third columns, respectively.
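A nearest-neighbor inspection of this kind can be reproduced as follows; the toy vocabulary and two-dimensional vectors are illustrative placeholders, not our trained embeddings:

```python
import numpy as np

def nearest_neighbors(query, words, E, k=10):
    """Rank dictionary words by cosine similarity to the query's embedding."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    sims = En @ En[words.index(query)]                 # cosine similarities
    order = np.argsort(-sims)
    return [words[i] for i in order if words[i] != query][:k]

# toy embeddings: 'Kiev' is deliberately placed close to 'Kazakhstan'
words = ["Kazakhstan", "Kiev", "Meirambek"]
E = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
neighbors = nearest_neighbors("Kazakhstan", words, E, k=2)
# neighbors == ["Kiev", "Meirambek"]
```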
Compared to the CRF, which uses discrete features, the NNs project roots and words into a vector space, which can group similar words by meaning, and apply non-linear transformations to extract higher-level features. In this way, the NNs may reduce the effects of the data sparsity problem of MCLs. \begin{table}[h] \begin{center} \caption{ \label{font-table} Example words in Kazakh and their 10 closest neighbors. Here, we used the Latin alphabet to write Kazakh words for convenience.} \begin{tabular}{c|c|c} \hline \emph{ Kazakhstan} (Location) & \emph{Meirambek} (Person)& \emph{KazMunayGas} (Organization) \\ \hline Kiev & Oteshev & Nurmukasan \\ Sheshenstandagy & Klinton & TsesnaBank \\ Kyzylorda & Shokievtin & Euroodaktyn\\ Angliada & Dagradorzh & Atletikony \\ Burabai & Tarantinonyn & Bayern \\ Iran & Nikliochenko & Euroodakka\\ Singapore & Luis & CenterCredittin \\ Neva & Monhes & Juventus \\ London & Fernades & Aldaraspan \\ Romania & Fog & Liverpool \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusions} We presented several neural networks for NER of MCLs. The key aspects of our models for MCL are the use of different embeddings and layers, namely i) root embeddings, ii) entity tag embeddings and iii) the tensor layer. The effects of these aspects were investigated individually. The use of root embeddings leads to significant improvements on MCL NER, and the other two also give positive effects. For Kazakh, the proposed NNs outperform the CRF-based NER system and other state-of-the-art models, including the character-based biLSTM-CRF model. The comparisons showed that character embeddings are vital to MCL NER. The experimental results indicate that the proposed NNs can potentially be applied to other morphologically complex languages. \section*{Acknowledgments} The work was funded by the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan under the grant AP09259324. \bibliographystyle{splncs04}
\section{Introduction} The Kardar--Parisi--Zhang (KPZ) universality class of stochastic growth models in one dimension is described by a stochastically growing interface parameterized by a height function. For general initial conditions, the one-point distribution of the large time limit can be written in terms of a variational problem. The ingredients are the (scaled) initial condition and the Airy$_2$ process, $\mathcal A_2$. The latter arises as the limiting interface process when the macroscopic geometry of the interface in the law of large numbers is curved. It was discovered in the work of Pr\"ahofer and Spohn~\cite{PS02} and described by its finite-dimensional distributions. Soon after, Johansson showed weak convergence of the discrete polynuclear growth model to the Airy$_2$ process~\cite{Jo03b}. In the same paper, a first variational formula appeared (see Corollary~1.3 of~\cite{Jo03b}) \begin{equation}\label{eq1} F_{\rm GOE}(2^{2/3}s)=\P\left(\sup_{t\in\mathbb R}(\mathcal A_2(t)-t^2)\leq s\right) \end{equation} where $\mathcal A_2$ is the Airy$_2$ process and $F_{\rm GOE}$ is the GOE Tracy--Widom distribution function discovered in random matrix theory~\cite{TW96}. Formula~\eqref{eq1} corresponds to the flat initial condition, as $F_{\rm GOE}$ is the limiting distribution of the correspondingly rescaled interface. Later, variational formulas describing the one-point distributions for some special initial conditions appeared in several papers, see for instance~\cite{QR13B,BL13,QR13,QR16}. The first study of a large class of initial conditions, including random initial conditions, is the paper of Corwin, Liu and Wang~\cite{CLW16}.
In a last passage percolation model they showed the convergence of the one-point distribution to a probability distribution expressed by the variational formula \begin{equation}\label{eq2} \P\left(\sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\le s\right) \end{equation} where $h_0$ is the scaling limit of the initial height profile. Shortly after, Remenik and Quastel in~\cite{QR16} asked and answered the question of how much discrepancy from the perfectly flat initial condition is allowed while still seeing the GOE Tracy--Widom distribution for the KPZ equation. In their paper the variational representation plays an important role. The variational formula approach has proved useful since it allows one to go beyond the use of exact formulas and to show, for instance, a universal limiting distribution for a flat but tilted profile~\cite{FO17}. Building on~\cite{CLW16}, Chhita, Ferrari and Spohn derived a variational formula which describes the limiting distribution for random initial conditions which scale to a Brownian motion, with the result~\cite{CFS16} \begin{equation}\label{defFsigma} F^{(\sigma)}(s)=\P\left(\sup_{t\in\mathbb R}\left\{\sqrt2\sigma B(t)+\mathcal A_2(t)-t^2\right\}\le s\right) \end{equation} where $B$ is a standard two-sided Brownian motion independent of the Airy$_2$ process $\mathcal A_2$. This distribution has two special cases which can be analyzed using exact formulas: $\sigma=0$ is the flat case, in which it reduces to \eqref{eq1}, whereas $\sigma=1$ corresponds to the stationary initial condition for the model, so that $F^{(1)}(s)$ is the Baik--Rains distribution~\cite{BR00}. The characterization through a variational formula is tightly related to the question of universality. In the framework of this paper, the key universal ingredient is the Airy$_2$ process, which is a projection of a more general space-time random process.
The study of this process started with the discovery of the KPZ fixed point by Matetski, Quastel and Remenik~\cite{MQR17}, for further properties see~\cite{Pim17,Pim19,CHH19}, and continued with the description of the full space-time process called the Airy sheet or directed landscape by Dauvergne, Ortmann and Vir{\'a}g~\cite{DOV18}, see also~\cite{NQR20}. Deducing concrete information from a variational formula is, however, not always an easy task. For example, given \eqref{defFsigma}, it is not clear what the tails of the distribution are. They have long been known only in the cases $\sigma=0$ and $\sigma=1$, because these distributions had other representations, see e.g.~\cite{PS00}. Meerson and Schmidt considered the $F^{(\sigma)}$ distribution in~\cite{MS17} and deduced the correct right tail behavior by a physically motivated but non-rigorous method. They found that $\ln(1-F^{(\sigma)}(s))\sim -\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}$ for $s\gg 1$. They also performed large scale simulations on the exclusion process confirming their finding. In this paper we give a rigorous proof of the asymptotics and extend the results of~\cite{MS17} by obtaining upper and lower bounds on the prefactor in front of the stretched exponential decay, see Theorem~\ref{thm:Fsigmatail}. The upper tail distribution is governed by the maximal value of $\sqrt2\sigma B(t)-t^2$. This fact holds already for non-random initial conditions. For instance, in the case of \eqref{eq1}, the tail behaviour of $1-F_{\rm GOE}(2^{2/3}s)$ matches that of $1-F_{\rm GUE}(s)$ up to the exponential scale, as the maximal value of $-t^2$ is attained at $t=0$. The same was shown for another simple function $h_0$ in \eqref{eq2} as noticed in~\cite{Vai20}. To make this point explicit, Theorem~\ref{thm:generalcurve} gives the tail decay for a generic non-random initial condition.
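As a quick consistency check (a remark, not part of the proofs), the coefficient $\frac43(1+3\sigma^4)^{-1/2}$ of~\cite{MS17} interpolates between the two classically known tail exponents:

```latex
% sigma = 0 (flat): by \eqref{eq1} and the GOE right tail,
%   1 - F^{(0)}(s) = 1 - F_{\rm GOE}(2^{2/3}s)
%     \approx e^{-\frac23 (2^{2/3}s)^{3/2}} = e^{-\frac43 s^{3/2}},
% in agreement with \frac43 (1+3\cdot 0^4)^{-1/2} = \frac43.
% sigma = 1 (stationary, Baik--Rains): \frac43 (1+3)^{-1/2} = \frac23.
\frac43\frac{1}{\sqrt{1+3\sigma^4}}\bigg|_{\sigma=0}=\frac43,
\qquad
\frac43\frac{1}{\sqrt{1+3\sigma^4}}\bigg|_{\sigma=1}=\frac23.
```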
One important ingredient for the proof of Theorems~\ref{thm:Fsigmatail} and~\ref{thm:generalcurve} is the observation that, for all $\delta>0$, the tail distribution of \begin{equation} \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-\delta t^2\right)>s\right) \end{equation} is, on the exponential scale, independent of $\delta$, see Theorem~\ref{thm:Airyparabola} for a detailed statement. For a given realization of the Brownian motion, if the supremum in \eqref{defFsigma} is taken over a finite interval instead of $\mathbb R$, the distribution we are considering also has a Fredholm determinant expression with a kernel depending on the Brownian motion~\cite{CQR11}. This representation is, however, not directly applicable when taking the limit as the finite interval approaches $\mathbb R$. For this purpose, it is better to use the kernel given in terms of hitting times, first noticed in~\cite{QR16}. That representation works well provided that the function $h_0$ stays below a parabola with prefactor $3/4$, while in our application we need to get close to $1$. The explicit kernel representation has some intrinsic technical issues, as confirmed also in the simplest case of a hat-shaped $h_0$~\cite{Vai20}. Our method is mainly probabilistic and avoids the computation of a correlation kernel and its asymptotic analysis. The second issue that we had to deal with is that the density of the maximum of a Brownian motion with parabolic drift, studied first by Groeneboom~\cite{Gro89}, see also~\cite{Gro10} for explicit formulas, contains a term with a linear combination of the Airy functions $\operatorname{Ai}$ and $\operatorname{Bi}$. The leading term, however, comes from a subtle cancellation and does not follow from the naive asymptotics of the Airy functions.
Fortunately, we could avoid this issue by using an integral representation discovered by Janson, Louchard and Martin-L\"of in~\cite{JLML10}, which we carefully analyzed asymptotically, see Proposition~\ref{prop:Groeneboom}. The paper is organized as follows. We state the main results in the rest of the introduction. We first prove Theorem~\ref{thm:Airyparabola} on the upper tail of the supremum of the Airy$_2$ process minus a parabola with arbitrary coefficient in Section~\ref{s:airy}. Then Section~\ref{s:groeneboom} is about the asymptotic of the supremum of the Brownian motion minus a parabola. Section~\ref{s:Fsigmatail} proves Theorem~\ref{thm:Fsigmatail} on the right tail of the limiting distribution $F^{(\sigma)}$ for Brownian initial conditions. The proof of Theorem~\ref{thm:generalcurve} about the case of general deterministic initial conditions is given in Section~\ref{s:deterministic}. \paragraph{Acknowledgments.} The work of P.L.~Ferrari was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy -- GZ 2047/1, projekt-id 390685813 and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Projektnummer 211504053 -- SFB 1060. The work of B.~Vet\H o was supported by the NKFI (National Research, Development and Innovation Office) grants PD123994 and FK123962, by the Bolyai Research Scholarship of the Hungarian Academy of Sciences and by the \'UNKP--20--5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. \paragraph{Main results} \begin{theorem}\label{thm:Fsigmatail} Let $\sigma>0$ be fixed. 
For $s$ large enough, the right tail of $F^{(\sigma)}(s)$ satisfies \begin{equation}\label{Fsigmatail} C_1\, s^{-3/4} e^{-\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}}\leq 1-F^{(\sigma)}(s)\leq C_2 \, s^{3/4} \ln(s)\, e^{-\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}} \end{equation} for some constants $C_1,C_2$ independent of $s$. \end{theorem} The $s^{-3/4}$ behavior of the prefactor in the lower bound seems to be the correct one. Indeed, for $\sigma=0$ the prefactor is $s^{-3/4}/(4\sqrt{2\pi})$ (see \eqref{eq1.8} below) and for $\sigma=1$ it is given by\footnote{For $\sigma=0$, the prefactor is obtained from equations (1) and (25), (26) of~\cite{BBdF08}, except that in (26) there is a typo, namely $x^{-3/2}$ should be $x^{-3/4}$. It can also easily be obtained from the Fredholm determinant representation of $F_{\rm GOE}$ in~\cite{FS05b}. For $\sigma=1$ the distribution is the Baik--Rains distribution, given in Definition 2 of~\cite{BR00}. The prefactor easily follows using (2.3), (2.6) of~\cite{BR00}, as well as (26) of~\cite{BBdF08}.} $s^{-3/4}/\sqrt{\pi}$. In Propositions~\ref{prop:Fsigmalower} and~\ref{prop:Fsigmaupper} we give expressions for the $\sigma$-dependence of $C_1$ and $C_2$. As could be expected from the definition \eqref{defFsigma} of the $F^{(\sigma)}$ distribution, the upper tail behaviour of $F^{(\sigma)}$ is related to the tail of the maximum of the Airy$_2$ process minus a parabola. The variational formula \eqref{eq1} was proved in~\cite{Jo03b}. In the next result we generalize \cite{Jo03b} in the sense that we compute the tail decay for the supremum with a parabola whose coefficient can be anywhere between $0$ and $1$. \begin{theorem}\label{thm:Airyparabola} Let $(\mathcal A_2(t))_{t\in\mathbb R}$ denote the Airy$_2$ process.
There is a constant $C>0$ such that for all $c\in(0,1)$ \begin{equation}\label{Airyparabola} 1-F_{\rm GUE}(s)\leq \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right) \le C\frac{\ln(s/(1-c))}{s^{3/4}\sqrt{1-c}}e^{-\frac43s^{3/2}} \end{equation} holds as $s\to\infty$ where the constant $C$ is independent of $s$. \end{theorem} This result might be compared to what has been proven for the KPZ equation with narrow wedge initial condition at finite but large time. In that case, it is known that the decay is exponential with $s^{3/2}$ power, but without a more precise information on the coefficient, see Proposition~4.2 of~\cite{CGH19}. The lower bound in \eqref{Airyparabola} is obvious by taking $t=0$ instead of the supremum. Its asymptotic expansion follows from (1) and (25) of~\cite{BBdF08}, namely \begin{equation}\label{eqGUEtail} 1-F_{\rm GUE}(s)=\frac{1}{16\pi s^{3/2}}e^{-\frac43 s^{3/2}}(1+\O(s^{-3/2})) \end{equation} as $s\to\infty$. We remark that for $c\leq 0$ the upper bound in \eqref{Airyparabola} is trivial as well, since \begin{equation}\label{eq1.8} \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right) \leq \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-t^2\right)>s\right)\sim \frac{1}{4\sqrt{2\pi} s^{3/4}}e^{-\frac43 s^{3/2}} \end{equation} where we used \eqref{eq1} and the $x\to\infty$ tail asymptotic \begin{equation}\label{eqGOEtail} 1-F_{\rm GOE}(x)\sim \frac{e^{-\frac23x^{3/2}}}{4\sqrt\pi x^{3/4}}. \end{equation} \begin{theorem}\label{thm:generalcurve} Let $h_0:\mathbb R\to\mathbb R$ be a function satisfying $h_0(t)\leq A+(1-\varepsilon) t^2$ for all $t\in\mathbb R$, for some constants $A\in\mathbb R$ and $\varepsilon>0$. Let $\kappa(h_0)=\sup_{t\in\mathbb R} \{h_0(t)-t^2\}$ and let $M>0$ be large enough so that $h_0(t)\leq \kappa(h_0)+(1-\frac\varepsilon2) t^2$ for all $|t|\geq M$. 
Then there are positive real constants $C_1$ and $C_2$ which depend neither on the function $h_0$ nor on $s$, such that for $s$ large enough \begin{equation}\label{generalcurvebound} C_1 \frac{e^{-\frac43 (s-\kappa(h_0))^{3/2}}}{(s-\kappa(h_0))^{3/2}}\leq \P\left(\sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\geq s\right)\leq C_2 M \frac{e^{-\frac43 (s-\kappa(h_0))^{3/2}}}{(s-\kappa(h_0))^{1/4}}. \end{equation} \end{theorem} \section{Supremum of the Airy$_2$ process minus a parabola}\label{s:airy} The aim of this section is to prove Theorem~\ref{thm:Airyparabola} about the upper tail behaviour of the Airy$_2$ process minus a parabola with arbitrary coefficient. The first ingredient is a simple bound on the supremum of the Airy$_2$ process over a finite interval. \begin{lemma}\label{lemma:Airyfinite} There is an explicit constant $C$ such that for all $a>0$ \begin{equation}\label{Airyfinite} \P\bigg(\sup_{t\in[0,a]}\mathcal A_2(t)>s\bigg)\le C\,\frac{e^{-\frac43(s-a^2)^{3/2}}}{(s-a^2)^{3/4}} \end{equation} holds if $s$ is large enough. From this we get the bound \begin{equation}\label{Airyfinite2} \P\bigg(\sup_{t\in[0,a]}\mathcal A_2(t)>s\bigg)\le C'\,\frac{a}{s^{1/4}}e^{-\frac43 s^{3/2}} \end{equation} for large $s$ with some other constant $C'$. \end{lemma} \begin{proof} The probability on the left-hand side of \eqref{Airyfinite} can be upper bounded as \begin{equation}\label{Airyfinitebound}\begin{aligned} \P\bigg(\sup_{t\in[0,a]}\mathcal A_2(t)>s\bigg)&\le\P\bigg(\sup_{t\in[0,a]}\left(\mathcal A_2(t)-t^2\right)>s-a^2\bigg)\\ &\le\P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-t^2\right)>s-a^2\right)\\ &=1-F_{\text{GOE}}\left(2^{2/3}\left(s-a^2\right)\right) \end{aligned}\end{equation} where we extended the range of $t$ in the second inequality and used \eqref{eq1} in the last equality above, which is the result of~\cite{Jo03b}.
The proof of the first inequality \eqref{Airyfinite} with $C=1/(4\sqrt{2\pi})$ is completed by applying the upper tail behaviour of the $F_{\rm GOE}$ distribution \eqref{eqGOEtail}. For \eqref{Airyfinite2}, we divide the interval $[0,a]$ into pieces of length $1/\sqrt{s}$ and by the union bound we get \begin{equation}\label{eq2.3}\begin{aligned} \P\bigg(\sup_{t\in[0,a]}\mathcal A_2(t)>s\bigg) &\le\sum_{k=1}^{a\sqrt{s}}\P\bigg(\sup_{t\in[(k-1)/\sqrt{s},k/\sqrt{s}]}\mathcal A_2(t)>s\bigg)\\ &\le C\, a \sqrt{s} \frac{e^{-\frac43 (s-1/s)^{3/2}}}{(s-1/s)^{3/4}}\\ &\leq C'\, \frac{a}{s^{1/4}}e^{-\frac43 s^{3/2}} \end{aligned}\end{equation} for some constant $C'$ where we applied stationarity of the Airy$_2$ process and \eqref{Airyfinite} in the second inequality and the bound $-(s-1/s)^{3/2}\leq -s^{3/2}+2$ for all $s\geq 1$ in the third inequality above. \end{proof} With this lemma we are ready to prove Theorem~\ref{thm:Airyparabola}. \begin{proof}[Proof of Theorem~\ref{thm:Airyparabola}] To bound the probability that the Airy$_2$ process exceeds the parabola somewhere, let $x_0=0<x_1<x_2<\dots$ be an increasing sequence forming a partition of $\mathbb R_+$, to be specified later.
Then we have by symmetry and the union bound \begin{equation}\label{Airyparabolabound}\begin{aligned} \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right) &\le2\P\left(\cup_{k=0}^\infty\left\{\exists t\in[x_k,x_{k+1}]:\mathcal A_2(t)>s+(1-c)t^2\right\}\right)\\ &\le2\sum_{k=0}^\infty\P\left(\exists t\in[x_k,x_{k+1}]:\mathcal A_2(t)>s+(1-c)t^2\right)\\ &\le2\sum_{k=0}^\infty\P\left(\exists t\in[x_k,x_{k+1}]:\mathcal A_2(t)>s+(1-c)x_k^2\right)\\ &\le2C\sum_{k=0}^\infty\frac{e^{-\frac43\left(s+(1-c)x_k^2-(x_{k+1}-x_k)^2\right)^{3/2}}} {\left(s+(1-c)x_k^2-(x_{k+1}-x_k)^2\right)^{3/4}} \end{aligned}\end{equation} where we decreased the barrier which $\mathcal A_2(t)$ has to reach in $[x_k,x_{k+1}]$ in the third inequality, while in the fourth inequality we used Lemma~\ref{lemma:Airyfinite} and the translation invariance of the Airy$_2$ process. The $k=0$ term in the sum on the right-hand side of \eqref{Airyparabolabound} is $e^{-\frac43(s-x_1^2)^{3/2}}/(s-x_1^2)^{3/4}$, hence if we choose $x_1=1/\sqrt s$, then it is still bounded by a constant multiplied by $e^{-\frac43s^{3/2}}/s^{3/4}$. The rest of the sequence is chosen so that it satisfies \begin{equation}\label{recurrence} (x_{k+1}-x_k)^2=\frac14(1-c)x_k^2 \end{equation} for $k=1,2,\dots$ which is the geometric choice $x_{k+1}=\gamma x_k$ with $\gamma=1+\frac{\sqrt{1-c}}2$.
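For completeness, one can check directly that this geometric sequence satisfies \eqref{recurrence} (a one-line verification added for the reader's convenience):

```latex
% Substituting x_{k+1} = \gamma x_k with \gamma = 1 + \frac{\sqrt{1-c}}{2}:
\begin{equation*}
(x_{k+1}-x_k)^2=(\gamma-1)^2x_k^2
=\left(\frac{\sqrt{1-c}}{2}\right)^{2}x_k^2
=\frac14(1-c)\,x_k^2.
\end{equation*}
```

In particular, $x_k=\gamma^{k-1}x_1=\gamma^{k-1}/\sqrt{s}$ for $k\geq1$, which is the form used in the estimates below.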
With this sequence, the right-hand side of \eqref{Airyparabolabound} can be bounded as \begin{equation}\label{Airyparabolabound2}\begin{aligned} \P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right) &\le2C\frac{e^{-\frac43(s-\frac1{s})^{3/2}}}{(s-\frac1s)^{3/4}}+2C\sum_{k=1}^\infty \frac{e^{-\frac43\left(s+\frac34(1-c)x_k^2\right)^{3/2}}}{s^{3/4}}\\ &\le2C'\frac{e^{-\frac43s^{3/2}}}{s^{3/4}}+2C'\frac{e^{-\frac43s^{3/2}}}{s^{3/4}} \sum_{k=1}^\infty e^{-\frac43\left(\frac34(1-c)\gamma^{2(k-1)} s^{-1}\right)^{3/2}} \end{aligned}\end{equation} where we used the inequality $(a+b)^{3/2}\ge a^{3/2}+b^{3/2}$, valid for $a,b\ge0$, in the last step. The sum on the right-hand side of \eqref{Airyparabolabound2} can be upper bounded using Lemma~\ref{lem2.3} below with $\alpha=\gamma^3$ and $\beta=\sqrt{3}(1-c)^{3/2}/(2s^{3/2})$ as \begin{equation}\label{sumbound} \sum_{k=1}^\infty e^{-\frac{\sqrt{3}(1-c)^{3/2}}{2s^{3/2}}\gamma^{3(k-1)}} \le\frac{\ln\left(1+\frac{2\gamma^3s^{3/2}}{\sqrt{3}(1-c)^{3/2}}\right)}{3\ln\gamma} \le\widetilde C\,\frac{\ln(s/(1-c))}{\sqrt{1-c}} \end{equation} for $s$ large enough with some $\widetilde C$ which does not depend on $c$. In the last inequality above we used that $\ln\gamma\sim \frac12\sqrt{1-c}$ as $c\to1$. The inequalities \eqref{Airyparabolabound2} and \eqref{sumbound} together prove \eqref{Airyparabola}. \end{proof} \begin{lemma}\label{lem2.3} Let $\alpha>1$ and $\beta>0$. Then \begin{equation} \sum_{k=0}^\infty e^{-\beta\alpha^k}\le \frac{\ln(1+\alpha/\beta)}{\ln\alpha}. \end{equation} \end{lemma} \begin{proof} We start by bounding the sum by an integral as \begin{equation}\label{eq2.9} \sum_{k=0}^\infty e^{-\beta\alpha^k}\leq \int_{-1}^\infty \d x\, e^{-\beta \alpha^x}.
\end{equation} The change of variables $w=\beta\alpha^x$ gives $\frac{\d w}{\d x}=w \ln\alpha$ so that the right-hand side of \eqref{eq2.9} can be written as \begin{equation} \int_{-1}^\infty \d x\, e^{-\beta \alpha^x}= \frac{1}{\ln\alpha} \int_{\beta/\alpha}^\infty \d w\,\frac{e^{-w}}{w} =:\frac{1}{\ln\alpha}E_1(\beta/\alpha). \end{equation} Equation (5.1.20) of~\cite{AS84} provides a bound on the exponential integral function $E_1$, namely $E_1(x)\leq e^{-x}\ln(1+1/x)$ for $x>0$. Thus \begin{equation} \sum_{k=0}^\infty e^{-\beta\alpha^k} \leq \frac{1}{\ln\alpha} e^{-\beta/\alpha}\ln(1+\alpha/\beta) \end{equation} which gives the claimed bound since $e^{-\beta/\alpha}\le1$ for $\alpha,\beta>0$. \end{proof} \section{Supremum of Brownian motion minus a parabola}\label{s:groeneboom} \begin{proposition}\label{prop:Groeneboom} Let $G(x)=\P\left(\max_{t\in\mathbb R}(B(t)-\tfrac12 t^2)\geq x\right)$ where $B(t)$ is a standard two-sided Brownian motion. Then, as $x\to\infty$ we have \begin{equation}\label{eqGroenExpansion} G(x)= 3^{-1/2} e^{-\frac43 \sqrt{\frac23}\,x^{3/2}} (1+\O(x^{-1/4})) \end{equation} as well as \begin{equation}\label{eqGroenExpansionB} -\frac{\d}{\d x}G(x)= \frac{2\sqrt{2}}{3} e^{-\frac43 \sqrt{\frac23}\,x^{3/2}} \sqrt{x} (1+\O(x^{-1/4})). \end{equation} Consequently, for any $c>0$ the density of the random variable $\max_{t\in\mathbb R}(B(t)-ct^2)$ satisfies \begin{equation}\label{eqGroenExpansionC} f_c(x):=-\frac{\d}{\d x}G\left((2c)^{1/3}x\right)=\frac43\sqrt c\,e^{-\frac43 \sqrt{\frac43c}\,x^{3/2}} \sqrt{x} (1+\O(x^{-1/4})) \end{equation} as $x\to\infty$. \end{proposition} The proof is given below.
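As a side remark (not needed for the proofs), the bound of Lemma~\ref{lem2.3} is easy to check numerically. The following short sketch, with arbitrarily chosen sample values of $\alpha$ and $\beta$, compares a truncated version of the series with the bound $\ln(1+\alpha/\beta)/\ln\alpha$:

```python
import math

def lhs(alpha, beta, terms=200):
    # Truncated version of sum_{k=0}^infty exp(-beta * alpha^k);
    # for alpha > 1 the neglected tail is astronomically small.
    total = 0.0
    for k in range(terms):
        exponent = beta * alpha**k
        if exponent > 700.0:  # exp(-x) underflows to 0 here anyway
            break
        total += math.exp(-exponent)
    return total

def rhs(alpha, beta):
    # The bound of the lemma: ln(1 + alpha/beta) / ln(alpha).
    return math.log(1.0 + alpha / beta) / math.log(alpha)

# Sample values: alpha plays the role of gamma^3 > 1 and beta that of
# sqrt(3)(1-c)^{3/2}/(2 s^{3/2}), which is small for large s.
for alpha, beta in [(2.0, 0.1), (1.5, 1e-3), (8.0, 1e-6)]:
    assert lhs(alpha, beta) <= rhs(alpha, beta)
```

The parameters mimic the application in the proof of Theorem~\ref{thm:Airyparabola}, where $\alpha=\gamma^3>1$ and $\beta$ is of order $(1-c)^{3/2}s^{-3/2}$, hence small.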
The distribution function $G(x)$ is written as a contour integral in Lemma~3.5 of~\cite{JLML10} as follows: \begin{equation}\label{Gcontint} G(x)=\frac{1}{2\mathrm i}\int_\gamma\d z\, \frac{\operatorname{Hi}(z)}{\operatorname{Ai}(z)}\operatorname{Ai}(z+2^{1/3}x) \end{equation} where $\gamma$ is a path passing to the right of all zeroes of the Airy function $\operatorname{Ai}$ from $-\mathrm i\infty$ to $\mathrm i\infty$. The function $\operatorname{Hi}$ is defined by (see (10.4.44) of~\cite{AS84}) \begin{equation}\label{defHi} \operatorname{Hi}(z)=\pi^{-1}\int_0^\infty \d t\, e^{-t^3/3+z t}. \end{equation} In~\cite{JLML10}, the contour $\gamma$ in \eqref{Gcontint} was chosen to come from $e^{-\mathrm i\theta}\infty$ and arrive at $e^{\mathrm i\theta}\infty$ with $\theta$ slightly larger than $\pi/2$. The reason why this contour can be deformed to the vertical one is the following. Lemma~A.1 of~\cite{JLML10} was used to argue for convergence of the integral \eqref{Gcontint}: they showed the decay of the ratio of the two Airy $\operatorname{Ai}$ functions, together with a bound on $\operatorname{Hi}$ for contours with angle more than $\pi/2$. The bound in Lemma~\ref{lemma:Hibound} is enough to get the convergence also for vertical contours. For the proof of Proposition~\ref{prop:Groeneboom}, the asymptotics of the Airy and $\operatorname{Hi}$ functions will be needed. By (10.4.90) of~\cite{AS84}, for $x$ real, \begin{equation}\label{eq3.6} \operatorname{Hi}(x)\sim \pi^{-1/2}x^{-1/4}e^{\frac23 x^{3/2}}\textrm{ as }x\to\infty. \end{equation} For our purposes, we also need the asymptotic behavior of $\operatorname{Hi}(z)$ for complex-valued $z$ close to $2^{1/3} x/3$ which we state below and prove later in this section. \begin{lemma}\label{LemmaHi} Let $z$ be such that $|\arg(z)|<\pi/3$. Then for large $z$ we have the asymptotic behavior \begin{equation}\label{Hiasymptotic} \operatorname{Hi}(z)\, e^{-\frac23 z^{3/2}} = \pi^{-1/2} z^{-1/4}+\O(|z|^{-1/2}).
\end{equation} \end{lemma} \begin{lemma}\label{lemma:Hibound} Let $\theta\in[\pi/2,3\pi/2]$ and $x\in\mathbb R$. Then for all $y\geq 0$, \begin{equation}\label{Hibound} \left|\operatorname{Hi}\left(x+e^{\mathrm i\theta}y\right)\right|\le\operatorname{Hi}(x). \end{equation} \end{lemma} \begin{proof} By the definition \eqref{defHi}, \begin{equation}\label{Hibound2} \operatorname{Hi}\left(x+e^{\mathrm i\theta}y\right)=\pi^{-1}\int_0^\infty \d t\, e^{-t^3/3}e^{xt}e^{e^{\mathrm i\theta}y t} \end{equation} where $|e^{e^{\mathrm i\theta}y t}|\le1$ for all $\theta\in[\pi/2,3\pi/2]$ and $t\ge0$. Since $e^{-t^3/3}e^{xt}$ is positive, the absolute value of the integral in \eqref{Hibound2} can be upper bounded by the integral of $e^{-t^3/3}e^{xt}$ which yields \eqref{Hibound}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:Groeneboom}] Let us first prove \eqref{eqGroenExpansion}. In order to estimate $G(x)$ for large $x$, we use the integral representation \eqref{Gcontint}. The integration contour is chosen to be vertical $\gamma=\frac{2^{1/3}}{3}x+\mathrm i\mathbb R$. If $x>0$, then for $z\in\gamma$, we have $\arg(z)\in[-\pi/2,\pi/2]$. Hence we can use the asymptotics of the Airy function \begin{equation}\label{Airyasymp} \operatorname{Ai}(z)=\tfrac12 \pi^{-1/2} z^{-1/4} e^{-\frac23 z^{3/2}}(1+\O(1/z)) \textrm{ for }|\arg(z)|<\pi, \end{equation} see (10.4.59) of~\cite{AS84}. Let us parameterize the path $\gamma$ as $z=\frac{2^{1/3}}3x+\mathrm i 2^{1/3} x v$, $v\in\mathbb R$. \emph{Contribution for $|v|>1/3$.} Using Lemmas~\ref{lemma:Hibound} and~\ref{LemmaHi} for the $\operatorname{Hi}$ function and the asymptotics \eqref{Airyasymp} of the Airy functions, the contribution for $|v|>1/3$ is bounded by \begin{equation}\label{eq3.11} C x^{3/4}\int_{1/3}^\infty \d v\, e^{-x^{3/2}g(v)},\quad g(v)=\frac{2\sqrt{2}}{9\sqrt{3}}\Re[(4+3 \mathrm i v)^{3/2}-(1+3 \mathrm i v)^{3/2}-1] \end{equation} for some constant $C$.
Notice that for $a\geq 0$, \begin{equation} \frac{\d \Re((a+\mathrm i v)^{3/2})}{\d v}=-\frac32 \Im\sqrt{a+\mathrm i v} \end{equation} which is an increasing function of $a$. This implies that $g(v)$ is monotone increasing in $v\geq 0$ with $g(v)\sim\sqrt{v}$ as $v\to\infty$. We get that the leading behavior of the integral is bounded by $C' e^{-x^{3/2} g(1/3)}$ with $g(1/3)-g(0)>0$, for some other constant $C'$. Thus this contribution is vanishing with respect to the leading one computed below. \emph{Contribution for $|v|\leq 1/3$.} For the leading contribution, we use the asymptotic expansion of Lemma~\ref{LemmaHi} and \eqref{Airyasymp}, with the result that the contribution to $G(x)$ coming from the integral over $|v|\leq 1/3$ is given by \begin{multline} 2^{-2/3}x\int_{|v|\leq 1/3} \d v\, \frac{\operatorname{Hi}\left(\frac{2^{1/3}}3x+\I2^{1/3}xv\right)}{\operatorname{Ai}\left(\frac{2^{1/3}}3x+\I2^{1/3}xv\right)} \operatorname{Ai}\left(\frac{4\cdot2^{1/3}}3x+\I2^{1/3}xv\right)\\ =\frac{x^{3/4}}{\sqrt{\pi}2^{3/4}} \int_{|v|\leq 1/3} \d v\,\frac{e^{ x^{3/2} h(v)}}{(4/3+\mathrm i v)^{1/4}}(1+\O(1/x^{1/4})) \end{multline} with $h(v)=\frac43 \sqrt{2} ((1/3+\mathrm i v)^{3/2}-(4/3+\mathrm i v)^{3/2}/2)$. Further, one notices that $\Re(h(v))$ is strictly increasing for $v<0$ and strictly decreasing for $v>0$ with a quadratic approximation for small $v$ given by \begin{equation} h(v)= -\frac{4\sqrt{2}}{3\sqrt{3}}-\frac{3\sqrt{3}}{4\sqrt{2}}v^2+\O(v^3). \end{equation} Then, using standard steep descent analysis, we get \eqref{eqGroenExpansion}. The proof of \eqref{eqGroenExpansionB} is similar. The only difference is that we need to replace the asymptotic expansion of $\operatorname{Ai}(z+2^{1/3}x)$ with the one of $2^{1/3}\operatorname{Ai}'(z+2^{1/3}x)$. By (10.4.61) of~\cite{AS84}, we have $\operatorname{Ai}'(z)=-\frac12\pi^{-1/2}z^{1/4}e^{-2 z^{3/2}/3}(1+\O(1/z))$. Thus in the asymptotic analysis we need to replace $z^{-1/4}$ with $-2^{1/3}z^{1/4}$.
This gives the claimed result. By replacing $t$ by $(2c)^{-2/3}t$ and using Brownian rescaling, \begin{equation} \P\left(\max_{t\in\mathbb R}\left(B(t)-c t^2\right)\geq x\right) =\P\left(\max_{t\in\mathbb R}\left(B(t)-\tfrac12 t^2\right)\geq x(2c)^{1/3}\right) =G\left((2c)^{1/3}x\right), \end{equation} hence \eqref{eqGroenExpansionC} follows from \eqref{eqGroenExpansionB} by substitution. \end{proof} Now we prove the claimed asymptotic expansion of $\operatorname{Hi}$. \begin{proof}[Proof of Lemma~\ref{LemmaHi}] By symmetry with respect to the real axis ($z\mapsto \bar z$), we may consider $z$ with $\arg(z)\in [0,\pi/3)$. We parameterize $z=r e^{\mathrm i \theta}$ with $r>0$ and $\theta\in [0,\pi/3)$. We introduce the change of variables $t=\sqrt{z}+u$ which yields \begin{equation} -\frac13 t^3+z t = \frac23 z^{3/2}-\frac13 u^3-\sqrt{z} u^2 \end{equation} and we get \begin{equation}\label{eqHiRepr} \operatorname{Hi}(z)\, e^{-2 z^{3/2}/3} = \pi^{-1} \int_{-\sqrt{z}}^\infty \d u\, e^{-u^3/3-\sqrt{z} u^2}. \end{equation} For large values of $|z|$, the leading contribution comes from a neighborhood of $0$. Consider the integration contour $\Gamma=\Gamma_1\vee\Gamma_2$ where $\Gamma_1=\{-\sqrt{z}+\mathrm i y,0\leq y\leq \Im(\sqrt{z})\}$ and $\Gamma_2=\{x, -\Re(\sqrt{z})\leq x<\infty\}$. Consider first the contribution on the contour $\Gamma_1$. Let $f(u)=\Re(-u^3/3-\sqrt{z} u^2)$. Then we have \begin{equation} f(-\sqrt{z})=-\frac23 \Re(z^{3/2})=-\frac23 r^{3/2}\cos(3\theta/2)<0 = f(0). \end{equation} Setting $u=-\sqrt{z}+\mathrm i y=-\sqrt{r}\cos(\theta/2)-\mathrm i\sqrt{r}\sin(\theta/2)+\mathrm i y$, we obtain \begin{equation} \Re(-u^3/3-\sqrt{z}u^2) = {\rm const} -r\sin(\theta)\, y, \end{equation} which is decreasing in $y$. Thus the contribution of the integral of \eqref{eqHiRepr} over $\Gamma_1$ can be bounded by the maximum of the integrand times the length of the contour, that is, by $\pi^{-1} \Im(\sqrt{z}) e^{f(-\sqrt{z})}\leq C\sqrt{r}\, e^{-\frac23 r^{3/2}\cos(3\theta/2)}$.
Next we focus on the contribution over $\Gamma_2$. We have, for all $u\geq -\sqrt{r}\cos(\theta/2)=-\Re(\sqrt z)$, the bound \begin{equation} \Re(-u^3/3-\sqrt{z}u^2)=-\tfrac13u^3-\sqrt{r}\cos(\theta/2) u^2\leq -\tfrac23 \sqrt{r}\cos(\theta/2) u^2. \end{equation} For any $\delta>0$, which can be chosen later as a function of $r$, the tails of the Gaussian integral give \begin{equation} \left|\pi^{-1} \int_{\Gamma_2\setminus \{|u|\leq \delta\}} \d u\, e^{-u^3/3-\sqrt{z} u^2}\right|\leq C e^{-\tfrac23 \sqrt{r}\cos(\theta/2) \delta^2}. \end{equation} Next, the local contribution is close to the integral with only the quadratic term. Indeed, using $|e^x-1|\leq |x| e^{|x|}$, we have \begin{multline} \left|\pi^{-1} \int_{\{|u|\leq \delta\}} \d u\, e^{-u^3/3-\sqrt{z} u^2}-\pi^{-1} \int_{\{|u|\leq \delta\}} \d u\, e^{-\sqrt{z} u^2}\right|\\ \leq \pi^{-1} \int_{\{|u|\leq \delta\}} \d u\left|e^{-u^3/3-\sqrt{z} u^2}\right| \frac{|u|^3}{3}=\O(\delta^3). \end{multline} Extending the integration in the Gaussian integral to $\mathbb R$, we only make a small error, namely \begin{equation} \left|\pi^{-1} \int_{\{|u|\leq \delta\}} \d u\, e^{-\sqrt{z} u^2} - \pi^{-1} \int_{\mathbb R} \d u\, e^{-\sqrt{z} u^2}\right|\leq \O(e^{-\delta^2 \sqrt{r}\cos(\theta/2)}). \end{equation} Finally, the Gaussian integral can be computed explicitly as \begin{equation} \pi^{-1} \int_{\mathbb R} \d u\, e^{-\sqrt{z} u^2} = \pi^{-1/2} z^{-1/4}. \end{equation} Combining all these bounds, we get \begin{equation} \operatorname{Hi}(z)\, e^{-\frac23 z^{3/2}}=\pi^{-1/2} z^{-1/4} +\O(\delta^3,\sqrt{r} e^{-\frac23r^{3/2}\cos(3\theta/2)}, e^{-\frac23 \sqrt{r}\cos(\theta/2)\delta^2}). \end{equation} Now, since $\theta\in [0,\pi/3)$, we have $\cos(3\theta/2),\cos(\theta/2)\in (0,1]$, both bounded away from $0$ for $\theta$ in any closed subinterval of $[0,\pi/3)$. By choosing $\delta=r^{-1/6}$, we get \eqref{Hiasymptotic}.
\end{proof} \section{Tail bounds for random initial conditions}\label{s:Fsigmatail} In this section we prove Theorem~\ref{thm:Fsigmatail} about the upper tail decay of the $F^{(\sigma)}(s)$ distribution, which follows by combining Propositions~\ref{prop:Fsigmalower} and~\ref{prop:Fsigmaupper} below. Fix $c\in(0,1)$ and let \begin{equation}\label{deftauM} \tau_c=\arg\max\left(\sqrt2\sigma B(t)-ct^2\right),\quad M_c=\max_{t\in\mathbb R}\left(\sqrt2\sigma B(t)-ct^2\right)=\sqrt2\sigma B(\tau_c)-c\tau_c^2 \end{equation} be the position and the value of the maximum of the two-sided Brownian motion $\sqrt2\sigma B(t)$ with diffusion coefficient $2\sigma^2$ where $B(t)$ is a standard one. Note that \begin{equation} M_c=\max_{t\in\mathbb R}\left(\sqrt2\sigma B(t)-ct^2\right) =\max_{u\in\mathbb R}\left(\sqrt2\sigma B\left(\frac u{2\sigma^2}\right)-\frac{cu^2}{4\sigma^4}\right) \stackrel{\d}{=}\max_{u\in\mathbb R}\left(B(u)-\frac c{4\sigma^4}u^2\right) \end{equation} where the second equality follows by the change of variables $t=u/(2\sigma^2)$ and the third one by Brownian scaling. As a consequence, the random variable $M_c$ has density $f_{\frac c{4\sigma^4}}(x)$ where $f_c$ was defined in \eqref{eqGroenExpansionC}. \paragraph{Lower bound.} The idea of the lower bound is to use the inequality \begin{equation} \sup_{t\in \mathbb R} \big\{\sqrt{2}\sigma B(t)+\mathcal A_2(t)-t^2\big\}\geq \sqrt{2}\sigma B(t_0)+\mathcal A_2(t_0)-t_0^2 \end{equation} which holds for any choice of $t_0\in\mathbb R$. Furthermore, $\P\left(\sqrt{2}\sigma B(t_0)+\mathcal A_2(t_0)-t_0^2>s\right)$ will be the largest, that is, we get the best lower bound if we take a time $t_0$ where $\sqrt{2}\sigma B(t_0)+\mathcal A_2(t_0)-t_0^2$ is the largest. As the Airy$_2$ process is stationary and independent of $B$, it does not make any difference for $\mathcal A_2(t_0)$ which time is chosen. Thus the idea is to choose $t_0$ to be the random time $\tau_1$ which maximizes $\sqrt{2}\sigma B(t)-t^2$. 
\begin{proposition}\label{prop:Fsigmalower} For all $\sigma>0$, there is a constant $C_1$ independent of $s$ such that \begin{equation} 1-F^{(\sigma)}(s) \geq C_1 \sigma^2 (1+3\sigma^4)^{1/4} s^{-3/4}e^{-\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}} \end{equation} holds for $s\gg \max\{\sigma^{-4},\sigma^4\}$. \end{proposition} \begin{proof} The upper tail of the $F^{(\sigma)}$ distribution can be rewritten as \begin{equation}\label{Fsigmaconditional} 1-F^{(\sigma)}(s)=\mathbf E\left(\P\left(\sup_{t\in\mathbb R}\left(\sqrt2\sigma B(t)+\mathcal A_2(t)-t^2\right)>s\,\Big|\,\tau_1\right)\right) \end{equation} by conditioning on the value of the time $\tau_1$. The conditional probability on the right-hand side of \eqref{Fsigmaconditional} can be lower bounded by replacing the supremum of $\sqrt2\sigma B(t)+\mathcal A_2(t)-t^2$ with its value at $t=\tau_1$ to get \begin{equation}\label{Fsigmalower}\begin{aligned} 1-F^{(\sigma)}(s)&\ge\mathbf E\left(\P\left(\sqrt2\sigma B(\tau_1)+\mathcal A_2(\tau_1)-\tau_1^2>s\,\big|\,\tau_1\right)\right)\\ &=\mathbf E\left(\P\left(\mathcal A_2(\tau_1)>s-M_1\,|\,\tau_1\right)\right)\\ &=\mathbf E\left(1-F_{\text{GUE}}(s-M_1)\right) \end{aligned}\end{equation} where the first equality follows by the definition \eqref{deftauM} of $M_1$ and by rearranging. In the second equality, we used that the Airy$_2$ process is stationary with GUE Tracy--Widom distribution at any position independently of $\tau_1$.
By using Proposition~\ref{prop:Groeneboom} about the asymptotics of the density of $M_1$ and the tail decay of the GUE Tracy--Widom distribution, see \eqref{eqGUEtail}, one gets that the right-hand side of \eqref{Fsigmalower} can be lower bounded by \begin{equation}\label{1-FGUEintegral}\begin{aligned} &\mathbf E\left(1-F_{\text{GUE}}(s-M_1)\right)\\ &\qquad\ge\frac43\sqrt{\frac1{4\sigma^4}}\frac1{16\pi}\int_0^s\d m\,e^{-\frac43\sqrt{\frac43\frac1{4\sigma^4}}m^{3/2}} e^{-\frac43(s-m)^{3/2}}\frac{\sqrt{m}}{(s-m)^{3/2}}\left(1+R(s,m)\right)\\ &\qquad=\frac1{24\pi\sigma^2}\int_0^1\d \mu\,e^{-\frac43\sqrt{\frac43\frac1{4\sigma^4}}s^{3/2}\mu^{3/2}} e^{-\frac43s^{3/2}(1-\mu)^{3/2}}\frac{\sqrt{\mu}}{(1-\mu)^{3/2}}\left(1+R(s,s \mu)\right) \end{aligned}\end{equation} with the change of variables $m=s\mu$. Here $R(s,m)=\O((s-m)^{-3/2},m^{-1/4})$ and $R(s,s\mu)=\O(s^{-3/2}(1-\mu)^{-3/2},s^{-1/4}\mu^{-1/4})$ is meant as $s\to\infty$ with $\mu\in (0,1)$. Let $g(\mu)=-\frac43\sqrt{\frac43\frac1{4\sigma^4}}\mu^{3/2}-\frac43(1-\mu)^{3/2}$. One can compute that \begin{equation}\label{mu0} g'(\mu)=0 \textrm{ for }\mu=\mu_0=\frac{3\sigma^4}{1+3\sigma^4} \end{equation} as well as \begin{equation} g''(\mu)<0\text{ for all }\mu\in [0,1]. \end{equation} In particular, Taylor expansion gives $g(\mu)=g(\mu_0)-\alpha (\mu-\mu_0)^2+\O((\mu-\mu_0)^3)$ with $g(\mu_0)=-\frac43\frac1{\sqrt{1+3\sigma^4}}$ and $\alpha=(1+3\sigma^4)^{3/2}/(6\sigma^4)$. The main contribution of the integral on the right-hand side of \eqref{1-FGUEintegral} comes from the regime $|\mu-\mu_0|\sim\frac1{s^{3/4}\sqrt{|g''(\mu_0)|}}=\frac{\sqrt3\sigma^2}{(1+3\sigma^4)^{3/4}}\frac1{s^{3/4}}$. We assume that $s\gg \max\{\sigma^4,\sigma^{-4}\}$ which can be written equivalently as $s^{-1/4}\ll\sigma\ll s^{1/4}$. Next we show that the error terms in $R$ in \eqref{1-FGUEintegral} are small in the regime of the main contribution.
If $\sigma\to0$ as $s\to\infty$, then $\mu_0\sim3\sigma^4$ by \eqref{mu0} and for the regime which we consider $|\mu-\mu_0|\sim\frac{\sqrt3\sigma^2}{s^{3/4}}=o(\sigma^4)$ holds as long as $\sigma\gg s^{-1/4}$. Hence $\mu s\to\infty$ and $R\to0$ in the regime of the main contribution. If $\sigma\to\infty$ with $s\to\infty$, then $1-\mu_0\sim\frac1{3\sigma^4}$ and the width of the regime considered is $\sim\frac1{3^{1/4}\sigma s^{3/4}}=o(\sigma^{-4})$ provided that $\sigma\ll s^{1/4}$. Furthermore, $(1-\mu)s\to\infty$ and $R\to0$ in the regime which gives the main contribution. The error $R$ also goes to $0$ in the regime above if $\sigma$ remains bounded away from $0$ and infinity. In the regime of $\mu$ that we consider, the higher order terms of the expansion are controlled by the quadratic term for all $s\gg \min\{\sigma^4,\sigma^{-4}\}$. Thus the quadratic approximation leads to the lower bound \begin{equation}\begin{aligned} \mathbf E\left(1-F_{\text{GUE}}(s-M_1)\right)&\ge\frac1{24\sqrt\pi\sigma^2}\frac{\sqrt{\mu_0}}{(1-\mu_0)^{3/2}\sqrt\alpha s^{3/4}} e^{g(\mu_0) s^{3/2}}(1+\O(s^{-1/4}))\\ &=\frac{\sigma^2(1+3\sigma^4)^{1/4}}{4\sqrt{2\pi}}s^{-3/4}e^{-\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}}(1+\O(s^{-1/4})). \end{aligned}\end{equation} \end{proof} \paragraph{Upper bound.} The strategy for obtaining the upper bound is different. We noticed that the tail distribution of $\sup_{t\in\mathbb R}(\mathcal A_2(t)-(1-c)t^2)$ is, on the exponential scale, independent of $c$ provided that $c<1$. This implies that the tail distribution will be determined mostly by the tail of $M_c=\sup_{t\in\mathbb R}(\sqrt{2}\sigma B(t)-c t^2)$. The proof of the upper bound goes by conditioning on the value of $M_c$ and bounding $\sqrt{2}\sigma B(t)-c t^2$ by $M_c$ from above.
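Before turning to the upper bound, we note that the stationary-point computation above is elementary to verify numerically. The following sketch (illustrative only; the values of $\sigma$ are arbitrary) checks the identity \eqref{mu0} for $\mu_0$ and the value $g(\mu_0)=-\frac43(1+3\sigma^4)^{-1/2}$:

```python
import math

def g(mu, sigma):
    # Exponent from the Laplace analysis of the lower bound:
    # g(mu) = -(4/3) sqrt(1/(3 sigma^4)) mu^{3/2} - (4/3) (1-mu)^{3/2}.
    return (-(4.0 / 3.0) * math.sqrt(1.0 / (3.0 * sigma**4)) * mu**1.5
            - (4.0 / 3.0) * (1.0 - mu)**1.5)

for sigma in (0.5, 1.0, 2.0):
    mu0 = 3 * sigma**4 / (1 + 3 * sigma**4)  # claimed stationary point
    value = -(4.0 / 3.0) / math.sqrt(1 + 3 * sigma**4)  # claimed g(mu0)
    assert abs(g(mu0, sigma) - value) < 1e-12
    # mu0 is a maximum: nearby points give a strictly smaller exponent
    assert g(mu0 - 1e-3, sigma) < g(mu0, sigma) > g(mu0 + 1e-3, sigma)
```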
\begin{proposition}\label{prop:Fsigmaupper} For all $\sigma>0$, there is a constant $C_2$ independent of $s$ such that \begin{equation}\label{Fsigmaupper} 1-F^{(\sigma)}(s) \leq C_2 \sigma^6(1+3\sigma^4)^{-2} s^{3/4}\ln(s)\,e^{-\frac43\frac1{\sqrt{1+3\sigma^4}}s^{3/2}} \end{equation} holds for $s\gg \max\{\sigma^{-4},\sigma^4\}$. \end{proposition} \begin{proof} To get an upper bound, one can write the event \begin{multline} \left\{\sup_{t\in\mathbb R}\left(\sqrt2\sigma B(t)+\mathcal A_2(t)-t^2\right)>s\right\}\\ =\left\{\exists t\in\mathbb R:\left(\sqrt2\sigma B(t)-ct^2\right)+\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right\} \end{multline} for any $c\in(0,1)$. Since the maximum of the first term on the right-hand side is $M_c$, it holds for any $t\in\mathbb R$ that $\sqrt2\sigma B(t)-ct^2\le M_c$, and one can bound the upper tail of $F^{(\sigma)}$ as \begin{equation}\label{Fsigmaupperbound}\begin{aligned} 1-F^{(\sigma)}(s)&\le\P\left(\exists t\in\mathbb R:M_c+\left(\mathcal A_2(t)-(1-c)t^2\right)>s\right)\\ &=\mathbf E\left(\P\left(\sup_{t\in\mathbb R}\left(\mathcal A_2(t)-(1-c)t^2\right)>s-M_c\,\Big|\,M_c\right)\right) \end{aligned}\end{equation} where the last equality follows by conditioning and rearrangement. Now the right-hand side of \eqref{Fsigmaupperbound} can be bounded by an integral using Proposition~\ref{prop:Groeneboom} about the density of $M_c$ and by Theorem~\ref{thm:Airyparabola}. Hence we get \begin{equation}\begin{aligned} &1-F^{(\sigma)}(s)\\ &\quad\le C\int_0^s\d m\,e^{-\frac43\sqrt{\frac43\frac c{4\sigma^4}}m^{3/2}}e^{-\frac43(s-m)^{3/2}} \frac{\sqrt{m}\ln((s-m)/(1-c))}{(s-m)^{3/4}\sqrt{1-c}} \left(1+\O(m^{-1/4})\right)\\ &\quad\leq C \int_0^1\d \mu\,e^{-\frac43\sqrt{\frac43\frac c{4\sigma^4}}\mu^{3/2}s^{3/2}}e^{-\frac43(1-\mu)^{3/2}s^{3/2}}\, \frac{s^{3/4}\sqrt{\mu}\ln((s(1-\mu))/(1-c))}{(1-\mu)^{3/4}\sqrt{1-c}}\\ &\qquad\times\left(1+\O\Big(\frac{1}{(s\mu)^{1/4}}\Big)\right).
\end{aligned}\end{equation} As for the lower bound, we need to have $s\gg \max\{\sigma^{-4},\sigma^4\}$ to apply the approximations. Very similarly to \eqref{1-FGUEintegral}, one gets that the exponent is maximal for $\mu=\mu_0=\frac{3\sigma^4}{c+3\sigma^4}$. We also have \begin{equation} -\frac43\sqrt{\frac43\frac c{4\sigma^4}}\mu^{3/2}-\frac43(1-\mu)^{3/2} =-\frac43\frac{\sqrt{c}}{\sqrt{c+3\sigma^4}}-\frac{(c+3\sigma^4)^{3/2}}{6\sigma^4\sqrt{c}} (\mu-\mu_0)^2+\O\left((\mu-\mu_0)^3\right). \end{equation} This gives \begin{equation}\label{eq4.16} 1-F^{(\sigma)}(s)\le C'\frac{\sigma^4}{\sqrt{c+3\sigma^4}\sqrt{c}} \frac{\ln(s/(1-c))}{\sqrt{1-c}}e^{-\frac43\frac{\sqrt{c}}{\sqrt{c+3\sigma^4}}s^{3/2}}(1+\O(s^{-1/4})) \end{equation} for some constant $C'$ which does not depend on $c$ and $\sigma$. Finally, since \begin{equation} \frac{\sqrt{c}}{\sqrt{c+3\sigma^4}}= \frac{1}{\sqrt{1+3\sigma^4}}-\frac{3\sigma^4}{2(1+3\sigma^4)^{3/2}}(1-c)+\O((1-c)^2), \end{equation} we choose $1-c=\tilde c s^{-3/2}$. With the choice $\tilde c=\frac14 (1+3\sigma^4)^{3/2}/\sigma^4$, together with \eqref{eq4.16} we obtain \begin{equation} 1-F^{(\sigma)}(s)\le C'' \sigma^6(1+3\sigma^4)^{-2} s^{3/4}\ln(s)\,e^{-\frac43\frac{1}{\sqrt{1+3\sigma^4}}s^{3/2}} \end{equation} for some other constant $C''$ independent of $\sigma,s$. \end{proof} \section{Tail bounds for deterministic initial profile}\label{s:deterministic} In this section we prove Theorem~\ref{thm:generalcurve} confirming the heuristics that the leading contribution for the right tail decay comes from the position where the function $h_0(t)-t^2$ is maximal. \begin{proof}[Proof of Theorem~\ref{thm:generalcurve}] Let $\tau\in\mathbb R$ be a time such that $\kappa(h_0)=\sup_{t\in\mathbb R} \{h_0(t)-t^2\}=h_0(\tau)-\tau^2$. 
For the lower bound in \eqref{generalcurvebound} note that \begin{equation} \sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\ge h_0(\tau)+\mathcal A_2(\tau)-\tau^2=\kappa(h_0)+\mathcal A_2(\tau) \end{equation} by the definition of the time $\tau$. Hence \begin{equation} \P\left(\sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\geq s\right) \geq\P\left(\kappa(h_0)+\mathcal A_2(\tau)\geq s\right)=1-F_{\rm GUE}(s-\kappa(h_0)). \end{equation} This inequality together with the asymptotic \eqref{eqGUEtail} leads to the lower bound in \eqref{generalcurvebound}. Now we consider the upper bound. The function $h_0(t)-t^2$ is bounded from above by $\kappa(h_0)$ for all times $t\in\mathbb R$ and it is bounded from above by $\kappa(h_0)-\frac\varepsilon2 t^2$ for $|t|>M$. Therefore \begin{equation}\label{eq2.4}\begin{aligned} &\P\left(\sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\geq s\right)\\ &\leq \P\bigg(\sup_{|t|\leq M}\left\{\kappa(h_0)+\mathcal A_2(t)\right\}\geq s\bigg)+\P\bigg(\sup_{|t|>M}\left\{\kappa(h_0)+\mathcal A_2(t)-\frac\varepsilon2 t^2\right\}\geq s\bigg)\\ &\leq \P\bigg(\sup_{|t|\leq M}\mathcal A_2(t)\geq s-\kappa(h_0)\bigg)+\P\bigg(\sup_{t\in\mathbb R}\left\{\mathcal A_2(t)-\frac\varepsilon2 t^2\right\}\geq s-\kappa(h_0)\bigg). \end{aligned}\end{equation} The first term is bounded using Lemma~\ref{lemma:Airyfinite}. The second term is bounded using Theorem~\ref{thm:Airyparabola}. Altogether we get \begin{multline} \P\left(\sup_{t\in\mathbb R}\left\{h_0(t)+\mathcal A_2(t)-t^2\right\}\geq s\right)\\ \le C'\frac{2M}{(s-\kappa(h_0))^{1/4}}e^{-\frac43 (s-\kappa(h_0))^{3/2}} +C\frac{\ln[2(s-\kappa(h_0))/\varepsilon]}{(s-\kappa(h_0))^{3/4}\sqrt{\varepsilon/2}}e^{-\frac43(s-\kappa(h_0))^{3/2}}. \end{multline} Since $\varepsilon$ is fixed, for large $s$ the second term is smaller than the first one, which completes the proof. \end{proof}
\section{INTRODUCTION} \IEEEPARstart{T}{he} exponential growth of connected devices and emergence of the Internet-of-Everything (IoE), enabling ubiquitous connectivity among billions of people and machines, have been the major driving forces towards the evolution of wireless technologies, aiming to support a plethora of new services, including enhanced mobile broadband and ultra-reliable and low-latency communications. While the demand for new IoE services, e.g., extended reality, autonomous driving and tactile Internet continues to grow, it is necessary for future wireless networks to deliver high reliability, low latency, and very high data rates. In this context, the notion of visible light communications (VLC) has emerged as a promising wireless technology for massive connectivity of users with high data rates. To realize VLC, only a simple and inexpensive modification of the existing lighting infrastructure is required \cite{Komine2004,Elgala2011,Ghassem2019,Ndjiongue}. The key attractive features of VLC include, but are not limited to, security, high degree of spatial reuse, and immunity to electromagnetic interference \cite{Grobe2013}. The advancement in solid-state lighting has introduced light-emitting diodes (LEDs) as energy-efficient light sources, which are envisioned to dominate the next generation of lighting infrastructure. One of the interesting features of LEDs is their ability to rapidly switch between different light intensities in a way that is not perceptible to human eyes. This enables them to be the main technology for VLC systems. The key principle of VLC is to use the light emitted from the LEDs to perform data transmission through intensity modulation and direct detection (IM/DD), without affecting the LEDs' main illumination function. The huge unregulated spectrum of visible light allows VLC to offload data traffic from radio-frequency (RF)/microwave systems while providing high data rates.
VLC uses the 400 THz to 789 THz visible light spectrum, which is characterized by low penetration through objects, secure communications, and high quality-of-service (QoS) in interference-free small-cell designs \cite{Karuna2015,Mirami2015,IEEE2018}. Fig. \ref{Fig:VLC1} shows the VLC spectrum band and examples of its use in healthcare, office environments, transportation, and smart cities. The structure of the survey is illustrated \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{Fig1.eps} \caption{VLC spectrum and examples of its use.} \label{Fig:VLC1} \end{figure*} \begin{table*}[t] \scriptsize \centering \caption{List of Acronyms and Abbreviations.} \begin{tabular}[l] {|p{2.5cm}|p{6cm}|p{2.5cm}|p{5cm}|} \hline \textbf{Abbreviation}&\textbf{Definition}&\textbf{Abbreviation}&\textbf{Definition} \\ \hline \hline ADT & Angle Diversity Transmitter & MU & Multi-User \\ \hline AO & Alternating Optimization & MUD & Multi-User Detection \\ \hline BD & Block Diagonalization & MUI & Multi-User Interference \\ \hline BER & Bit Error Rate & NOMA & Non-Orthogonal Multiple Access \\ \hline BC & Broadcast Channel & OCDMA & Optical Code-Division Multiple Access \\ \hline CDMA & Code-Division Multiple Access & OFDM & Orthogonal Frequency-Division Multiplexing \\ \hline C-NOMA & Code NOMA & OFDMA & Orthogonal Frequency-Division Multiple Access \\ \hline CoMP & Coordinated Multi-Point & OMA & Orthogonal Multiple Access \\ \hline CSI & Channel State Information & OOK & On-Off Keying \\ \hline CSIT & CSI at Transmitter & P-NOMA & Power NOMA \\ \hline DC & Direct Current & PD & Photo Detector \\ \hline DD & Direct Detection & QoS & Quality-of-Service \\ \hline FoV & Field of View & RGB & Red, Green and Blue \\ \hline ICI & Inter-Channel Interference & RSMA & Rate-Splitting Multiple Access \\ \hline IM & Intensity Modulation & SC & Superposition Coding \\ \hline ISI & Inter-Symbol Interference & SDMA & Space-Division Multiple Access \\ \hline LED & Light Emitting Diode & SIC & Successive
Interference Cancellation \\ \hline LTE & Long-Term Evolution & SINR & Signal-to-Interference-Plus-Noise Ratio \\ \hline LoS & Line-of-Sight & SNR & Signal-to-Noise Ratio \\ \hline MA & Multiple Access & TDMA & Time-Division Multiple Access \\ \hline MAC & Media Access Control & VLC & Visible Light Communication \\ \hline MIMO & Multiple-Input Multiple-Output & WSMSE & Weighted Sum Mean Squared Error \\ \hline MISO & Multiple-Input Single-Output & WSR & Weighted Sum Rate \\ \hline MMSE & Minimum Mean-Square Error & ZF & Zero-Forcing \\ \hline MSE & Mean-Square Error & ZF-DPC & Zero-Forcing Dirty-Paper Coding \\ \hline \end{tabular} \label{Table1} \end{table*} \begin{figure}[ht] \centerline{\includegraphics[width=3.5in]{chart.eps}} \caption{Organization of the paper.} \label{Fig:chart} \end{figure} in Fig. \ref{Fig:chart}. \subsection{Motivation and Contribution} Despite its advantages, VLC suffers from several drawbacks that limit its performance. For example, the limited modulation bandwidth and peak optical power of LEDs are considered the main obstacles towards realizing the full potential of VLC systems \cite{Jovicic2013}. Therefore, several studies have been carried out to enhance the spectral efficiency of VLC systems. In particular, two research directions have been identified: in the first, researchers have focused on the design of dedicated VLC analog hardware and digital signal processing techniques; in the second, on enhancing the spectral efficiency through the development of different optical-based modulation and coding schemes, adaptive modulation, equalization, VLC cooperative communications, orthogonal and non-orthogonal multiple access (OMA/NOMA) schemes, and multiple-input multiple-output (MIMO) techniques \cite{Pathak2015}.
In the context of VLC, several optical OMA schemes have been proposed, including time-division multiple access (TDMA), orthogonal frequency-division multiple access (OFDMA) \cite{Armstrong2009}, and optical code-division multiple access (OCDMA) \cite{Salehi1989}. These schemes rely on assigning orthogonal resources to different users. For example, in TDMA, different users are allocated different time slots for communication, while in OFDMA, different users are assigned different orthogonal frequency sub-carriers. In OCDMA, users communicate at the same time and frequency, which is achieved through the use of different orthogonal optical codes. In contrast, space-division multiple access (SDMA) exploits the spatial separation between users to grant each of them the full time and frequency resources. Meanwhile, NOMA has recently been introduced as a spectrum-efficient multiple access (MA) scheme that allows different users to share the same time and frequency resources, leading to an enhanced spectral efficiency \cite{Ding2014}. NOMA is realized either by assigning different power levels to different users (known as power (P)-NOMA) or by allocating different spreading sequences (called code (C)-NOMA). Resource allocation in NOMA is determined according to different criteria, such as link quality, user fairness, targeted individual and sum rates, and users' QoS requirements. In the same context, rate-splitting multiple access (RSMA) has recently emerged as a potentially robust and generalized MA scheme for future wireless systems, which is able to accommodate different users in a heterogeneous environment. In particular, recent research results have shown that RSMA in MIMO-based RF systems outperforms other common MA schemes, such as NOMA and SDMA, in terms of spectral efficiency \cite{Mao2018}. \textcolor{black}{The performance gain of RSMA comes from the fact that the transmitted signal of each user is divided into one or several common parts and a private part.
All common parts are multiplexed and encoded into a single common stream (or several common streams) intended for all users (or for subsets of users). On the other hand, the private parts are encoded separately into multiple private streams, which are then superimposed with the common stream(s). The resulting super-symbol is then transmitted to all users over the VLC downlink. At each user, the common streams are decoded first in order to obtain the common parts intended for that user, utilizing iterative successive interference cancellation (SIC). Subsequently, the private part is decoded while treating the other users' private parts as noise. The NOMA scheme can be obtained from RSMA by treating some users' signals as common parts and the remaining ones as private parts. On the other hand, SDMA can be realized from RSMA by using only the private parts to encode users' messages.} The split of the messages into common and private parts enables RSMA to provide robust performance across different network loads and user deployments. In VLC systems, MIMO channels are highly correlated in practice, which inevitably degrades the performance of linear precoding schemes. This has motivated the investigation of different receiver structures and precoding schemes in order to mitigate the effect of channel correlation in MIMO VLC systems. Several articles on the aforementioned topics, namely the design of OMA, NOMA, and RSMA techniques, have been presented for RF systems; yet only a few have addressed their application in VLC networks (mainly for OMA and NOMA) and summarized these studies in surveys \cite{Bawazir2018,Vaezi2019,Obeed2019}. Moreover, none of them has discussed the integration of RSMA into VLC systems. Motivated by the above, in this survey, we shed light on several spectrally efficient MA schemes for VLC systems. In more detail, we present a comprehensive study of NOMA and SDMA schemes, with particular attention to MIMO-VLC systems.
In addition, we address the potential integration of the RSMA scheme in MIMO-VLC systems. Finally, open issues and some interesting related research directions are discussed. \textit{Notation:} Bold upper-case letters denote matrices and bold lower-case letters denote vectors. $(\cdot )^{T}$ denotes the transpose operation, $\mathbb{E}(\cdot)$ is the statistical expectation operation, $|\cdot|$ is the absolute value operation, \textbf{\rm{I}} is the identity matrix, \textbf{0} is the zero matrix, tr$(\cdot)$ is the trace of a matrix, and $\mathcal{N}(0,\sigma^2)$ is a real-valued Gaussian distribution with zero mean and variance $\sigma^{2}$. Let $\textbf{z}=[z_1,\ldots,z_Z]$ be a vector of length $Z$; then $L_1(\textbf{z})=\sum_{i=1}^Z |z_i|$ is its $L_1$ norm. \section{VLC Components and Channel Model}\label{sec:Channel} In VLC, unlike in RF systems, data is conveyed on the intensity of the light emitted from the LEDs; therefore, frequency and phase modulation cannot be applied. Moreover, due to the characteristics of intensity modulation, transmitted signals must be positive and real-valued. Also, to ensure that the LED functions within its dynamic range, the transmitted peak power should not exceed a particular constant value. In this section, the different components of VLC systems are presented, in addition to VLC channel modeling. Fig. \ref{Fig:VLCComp} illustrates the basic VLC transceiver components. At the transmitter, an illuminating device is utilized for data modulation through IM/DD. A variety of light sources are available for optical communication, but the commonly used ones are LEDs and laser diodes. In particular, LEDs are the most popular illuminating devices, due to their low fabrication cost. They are solid-state semiconductor devices that produce spontaneous optical radiation when subjected to a voltage bias across the P-N junction \cite{Sze2007}.
The direct current (DC) bias excites the electrons, which release energy in the form of photons. In most buildings, white LEDs are preferred, since objects seen under white LEDs have colors similar to those seen under natural light. Two common designs are considered for white LEDs. In the first design, a blue LED with a yellow phosphor layer is utilized, while in the second design, red, green, and blue (RGB) LEDs are combined. The first method is more popular, due to its simplicity and low implementation cost. However, it suffers from limited modulation bandwidth due to the intrinsic properties of the phosphor coating. On the other hand, RGB LEDs are more suitable for color shift keying modulation, enabling higher achievable data rates \cite{Monteiro2014}. \begin{figure}[t] \centerline{\includegraphics[width=3.75in]{Blocks_gen.eps}} \caption{VLC transceiver components.} \label{Fig:VLCComp} \end{figure} VLC receivers comprise photo detectors (PDs), implemented either as non-imaging receivers or as imaging (camera) sensors. These are used to convert the incident light power into an electrical current proportional to the light intensity. A typical VLC receiver consists of an optical filter, an optical concentrator, PDs, and a pre-amplifier. The optical filter eliminates interference from ambient light sources, while the optical concentrator enlarges the effective reception area of the PD without increasing its physical size. The optical concentrator is characterized by three parameters, i.e., field of view (FoV), refractive index, and radius. In order to increase the achievable diversity gain of an optical communication link, multiple receiving units can be deployed with different orientations, optical filters, and concentrators. However, such a deployment comes at the expense of additional receiver size and complexity.
To address this issue, an imaging sensor with a single wide-FoV concentrator can be used to create multiple images of the received signals. Imaging sensors consist of an array of PDs integrated on the same circuit. It is worth noting that the large number of PDs required to capture high-resolution images renders them energy-inefficient. It is noted that the PD area in a VLC system is much larger than the corresponding wavelength. Consequently, multipath fading does not occur in indoor VLC environments \cite{Dai2015,Marshoud2016}. Nevertheless, indoor optical links suffer from dispersion, modeled as a linear baseband impulse response. Also, indoor optical wireless channels can be assumed quasi-static, due to the relatively low mobility of users and connected objects in indoor environments. Typically, the channel of a VLC link can be modeled as follows. With the non-line-of-sight components neglected relative to the stronger line-of-sight (LoS) ones, the DC channel gain from the $i^{\rm th}$ LED to the $k^{\rm th}$ PD can be expressed as {\cite{Komine2004}} \begin{equation} \label{eq:channel} \small h_{k,i}= \left\{\begin{matrix*}[l] \frac{A_{k}}{d^2_{k,i}} R_{o}(\varphi_{k,i}) T_{s}(\phi_{k,i}) g(\phi_{k,i}) \cos(\phi_{k,i}), & 0 \leqslant \phi_{k,i} \leqslant \phi_{c}\\ 0, & \text{otherwise,} \end{matrix*}\right. \end{equation} where $A_{k}$ denotes the PD area, $d_{k,i}$ is the distance between the $i^{\rm th}$ LED and the $k^{\rm th}$ PD, $\varphi_{k,i}$ is the transmission angle from the $i^{\rm th}$ LED to the $k^{\rm th}$ PD, $\phi_{k,i}$ denotes the incident angle with respect to the receiver, and $\phi_{c}$ is the FoV of the PD. These angles are illustrated in Fig. \ref{Fig:channel1}.
Moreover, $T_{s}(\phi_{k,i})$ is the gain of the optical filter, and $g(\phi_{k,i})$ is the gain of the optical concentrator, expressed as \begin{equation} \label{eq:g} g(\phi_{k,i})= \left\{\begin{matrix*}[l] \frac{n^2}{\sin^2({\phi_{c}})}, & 0 \leqslant \phi_{k,i} \leqslant \phi_{c}\\ 0, & \phi_{k,i}>\phi_{c}. \end{matrix*}\right. \end{equation} Here, $n$ is the refractive index and $R_{o}(\varphi_{k,i})$ is the Lambertian radiant intensity given by \begin{equation} \label{eq:R} R_{o}(\varphi_{k,i})= \frac{m+1}{2\pi} \left(\cos(\varphi_{k,i})\right)^m \end{equation} with $m$ denoting the order of the Lambertian emission, namely \begin{equation} \label{eq:m} m = -\frac{\ln{(2)}}{\ln\left({\cos(\varphi_{1/2})}\right)} \end{equation} \begin{figure}[t] \centering \centerline{\includegraphics[width=3.5in]{vlc_channel.eps}} \caption{VLC channel model (link between LED $i$ and PD $k$).} \label{Fig:channel1} \end{figure} where $\varphi_{1/2}$ is the LED semi-angle at half power. For a typical VLC link, the received noise at the $k^{\rm th}$ PD can be modeled as a Gaussian random variable with zero mean and variance \begin{equation} \label{eq:noise} \sigma_k^{2} = \sigma^2_{k,\rm{sh}} + \sigma^2_{k,\rm{th}} \end{equation} where $\sigma^2_{k,\rm{sh}}$ and $\sigma^2_{k,\rm{th}}$ are the variances of the shot and thermal noises at the $k^{\rm th}$ PD, respectively. The shot noise is caused by the random nature of the physical photo-electronic conversion process, and its variance can be written as \begin{equation} \label{eq:noise2} \sigma^{2}_{k,\rm{sh}} = 2qB\left(\zeta_k h_{k,i}x_{i}+I_{\rm{bg}}I_{2}\right) \end{equation} where $q$ represents the electronic charge and $\zeta_k$ denotes the detector responsivity. Also, $x_{i}$ is the signal transmitted by the $i^{\rm th}$ LED, $B$ is the corresponding bandwidth, $I_{\rm{bg}}$ is the background current, and $I_{2}$ denotes the noise bandwidth factor.
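As a numerical illustration, the LoS DC channel gain of Eqs. \eqref{eq:channel}--\eqref{eq:m} and the shot-noise variance of Eq. \eqref{eq:noise2} can be evaluated directly. The sketch below is not from the survey; all parameter values (PD area, link distance, angles, responsivity, background current) are illustrative assumptions chosen only to exercise the formulas.

```python
import math

def lambertian_order(semi_angle_deg):
    # Order of the Lambertian emission: since ln(cos(phi_1/2)) < 0,
    # m = ln(2)/(-ln(cos(phi_1/2))) is positive; 60 degrees gives m = 1.
    return math.log(2) / -math.log(math.cos(math.radians(semi_angle_deg)))

def dc_channel_gain(A, d, phi_tx_deg, psi_rx_deg, fov_deg, m, Ts=1.0, n=1.5):
    """LoS DC gain h_{k,i}: zero whenever the incident angle exceeds the FoV."""
    phi = math.radians(phi_tx_deg)   # transmission angle at the LED
    psi = math.radians(psi_rx_deg)   # incident angle at the PD
    fov = math.radians(fov_deg)
    if psi > fov:
        return 0.0
    R_o = (m + 1) / (2 * math.pi) * math.cos(phi) ** m  # Lambertian intensity
    g = n ** 2 / math.sin(fov) ** 2                     # concentrator gain
    return (A / d ** 2) * R_o * Ts * g * math.cos(psi)

def shot_noise_var(h, x, B=10e6, zeta=0.54, I_bg=5.1e-3, I2=0.562):
    # Shot-noise variance 2qB(zeta*h*x + I_bg*I2), q = electronic charge.
    q = 1.602e-19
    return 2 * q * B * (zeta * h * x + I_bg * I2)

m = lambertian_order(60.0)  # semi-angle at half power of 60 degrees
h = dc_channel_gain(A=1e-4, d=2.0, phi_tx_deg=15.0, psi_rx_deg=15.0,
                    fov_deg=70.0, m=m)
sigma2_sh = shot_noise_var(h, x=1.0)
```

The thermal-noise term can be added in the same way once the receiver circuit parameters are fixed, completing the total variance $\sigma_k^2$.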
On the other hand, the thermal noise results from the transimpedance receiver circuitry, and its variance at the $k^{\rm th}$ PD is given by \begin{equation} \label{eq:noise3} \sigma^{2}_{k,\rm{th}} = \frac{8\pi K T_{k}}{G}\eta A_{k} I_{2} B^{2} + \frac {16 \pi^{2} K T_{k} \gamma}{g_{m}} \eta^{2} A^2_{k} I_{3} B^{3} \end{equation} where $K$ is Boltzmann's constant, $T_{k}$ is the absolute temperature, $G$ is the open-loop voltage gain, $A_k$ is the PD area, $\eta$ is the PD's fixed capacitance per unit area, $\gamma$ is the field-effect transistor (FET) channel noise factor, $g_{m}$ is the FET transconductance, and $I_{3}=0.0868$ \cite{Komine2004}. Modern infrastructures are commonly equipped with LED fixtures or arrays. A single fixture is composed of $Q$ LEDs and may be viewed as a single VLC source,\footnote{In the remainder of this paper, we interchangeably designate a fixture of LEDs by LED.} with the DC channel gain given by \begin{equation} \label{eq:fixture} \small h_{k,j}=\left\{ \begin{array}{ll} A_k \sum \limits_{i=1}^Q d_{k,j,i}^{-2} R_o(\varphi_{k,j,i}) T_s(\phi_{k,j,i}) g(\phi_{k,j,i}) \cos(\phi_{k,j,i}), \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; 0 \leq \phi_{k,j,i} \leq \phi_c \\ 0, \; \mbox{otherwise}\end{array} \right. \end{equation} where $d_{k,j,i}$ and $\varphi_{k,j,i}$ denote the respective distance and transmission angle between the $i^{\rm th}$ LED in the $j^{\rm th}$ fixture and the $k^{\rm th}$ PD, and $\phi_{k,j,i}$ is the incident angle with respect to the receiver. Since the separation between LEDs in the same fixture is negligible compared to the distance between the fixture and the $k^{\rm th}$ PD, the distances and angles indexed by $i$ can be assumed approximately the same for all LEDs.
Hence, the channel gain from the $j^{\rm th}$ fixture to the $k^{\rm th}$ PD can be approximated as \begin{equation} \label{eq:fixture_app} \small h_{k,j}\approx \left\{ \begin{array}{ll} Q \; h_{k,i}, & 0 \leq \phi_{k,j,i} \leq \phi_c, \; \forall i \\ 0, & \mbox{otherwise.}\end{array} \right. \end{equation} In the following sections, we provide an in-depth study of the common MA schemes in the context of VLC. \section{\textcolor{black}{NOMA} for VLC} \label{sec:MA} \subsection{Background} Inspired by the promising multiplexing gains achieved by conventional MA techniques developed for RF systems, optical MA schemes have recently received considerable attention. To this end, conventional OMA schemes such as OFDMA and TDMA, in which users are allocated orthogonal frequency/time resources, have been extensively studied in the context of VLC. In the same context, several optical OFDM-based MA techniques have been proposed, such as DC-biased optical OFDM, asymmetrically clipped optical OFDM, asymmetrically clipped DC-biased optical OFDM, fast-OFDM, and polar-OFDM. However, OFDM suffers from a high peak-to-average power ratio, which is difficult to overcome in VLC systems due to the non-linearity of LEDs \cite{Armstrong2009,Mesleh2011_2,Giaco2012,Dissan2013,Elgala2015}. OMA schemes efficiently mitigate interference among users' signals by allocating orthogonal resources. However, the number of served users is limited and cannot exceed the number of available orthogonal resources. This limitation also applies to VLC systems. Motivated by the above, researchers have recently focused on the design of novel NOMA techniques as a promising candidate to enhance spectral efficiency in 5G and beyond networks \cite{Ding2014}. The key principle is to allow different users to share the same frequency resources simultaneously, at the expense of multi-user interference (MUI).
To perform multi-user detection (MUD), different users are assigned distinct power levels based on their channel gains, which is referred to as P-NOMA, or different spreading sequences, which is known as C-NOMA \cite{Ding2014,Ali2017,Wei2018}. \textcolor{black}{In RF systems, P-NOMA was considered as a candidate for downlink communication in various standardization activities, such as the 3rd Generation Partnership Project (3GPP) LTE Release 13 \cite{Dobre}. Furthermore, P-NOMA has been envisioned as a key solution in 5G mobile systems \cite{Dai2015}. On the other hand, both P-NOMA and C-NOMA have been considered for uplink communication, in order to serve a larger number of users.} In downlink P-NOMA systems, superposition coding (SC) at the base station and SIC at the receiver are utilized, respectively, to transmit the users' signals and to detect the intended user's signal by eliminating the users' signals with higher power levels. \textcolor{black}{In uplink P-NOMA systems, where the transmit power is limited by the end users, the transmit power of each individual user should be carefully adjusted such that the user with the better channel gain contributes more power to the received signal. At the base station, the user with the best channel gain is decoded first. Then, subsequent SIC is performed in order to decode the messages of the weaker users, which is the opposite of the downlink P-NOMA order \cite{Dai2015,Dobre}}. \textcolor{black}{In C-NOMA, users are multiplexed in the code domain, in which each user is assigned a different code. Unlike conventional code-division multiple access (CDMA), where dense spreading sequences are used, C-NOMA utilizes non-orthogonal sequences with low cross-correlation or sparse spreading sequences to efficiently reduce the inter-user interference \cite{Shoreh,Chen_CDMA,Dai2018,Lou2018} and, hence, enhance the overall system performance.
Specifically, optimal performance can be achieved in C-NOMA-based VLC systems by exploiting optical code sequences \cite{Rashidi2013}. At the receiver, multi-user detection can then be realized by adopting message passing algorithms (MPA). It is noted that different versions of C-NOMA have been developed, such as low-density spreading (LDS)-CDMA \cite{Razavi2011}, LDS-OFDM \cite{Imari2011}, and sparse code multiple access (SCMA) \cite{cai}. LDS-CDMA utilizes low-density spreading sequences in order to reduce the interference on each chip compared to traditional CDMA. On the other hand, LDS-OFDM can be thought of as a combination of OFDM and LDS-CDMA, where the chips resulting from LDS-CDMA are transmitted over a set of sub-carriers. Finally, SCMA is a form of LDS-CDMA where the information bits can be directly mapped to a distinct sparse codeword. Yet, although C-NOMA has the potential to enhance spectral efficiency, it requires additional bandwidth and a challenging codebook design, and it is not as easily applicable to current systems as P-NOMA, which can be simply implemented in existing networks. For these reasons, most of the research on NOMA systems has extensively considered the performance of P-NOMA \cite{Tao,cai2018,Dai2015}. In VLC systems, the research on C-NOMA is limited only to \cite{Lou2018}; therefore, it requires further investigation in the context of VLC.} \textcolor{black}{It is also worth noting that uplink VLC is impractical due to the power limitations of portable devices and the unpleasant radiance produced by end users. Thus, it is expected that current VLC technologies will rely on RF or infrared for uplink communications \cite{Ahmadi2018,Ding2019,alresheedi2017}. Consequently, most of the research efforts on NOMA in VLC systems have focused on the downlink scenario.} A basic system model for two-user downlink P-NOMA in VLC systems is illustrated in Fig. \ref{Fig:PDNOMA}.
\textcolor{black}{For indoor VLC systems, P-NOMA is preferred for several reasons \cite{Marshoud_WCL}. First, P-NOMA depends on the channel state information, which is readily available in VLC systems. Second, P-NOMA achieves its best performance in the high-SNR regime, which is the common regime in VLC channels \cite{Ghassem2019}. Third, P-NOMA performs best when the users' channels are distinct. In VLC systems, the underlying channel-symmetry issue has been addressed in \cite{Marshoud2017}, where the authors proposed reducing the channels' symmetry by adaptively tuning the semi-angle of the LEDs and the receiver's FoV, as well as in \cite{Wang2018}, where advanced receiver structures were considered. Finally, P-NOMA can be easily integrated with various technologies, such as MIMO and cooperative networks \cite{cai}. All these reasons have motivated a substantial research focus on P-NOMA-based VLC systems.} \subsection{Superposition Coding and Successive Interference Cancellation} SC was first introduced in 1972 by Cover \cite{cover} as a method to transmit different signals to several receivers through a single source. To make SC practical, the transmitter encodes the data of two users as a single two-layer signal. Then, one receiver recovers the messages of both layers, while the other recovers only the message of one layer and ignores the message of the second layer. SC is realized by allocating higher power coefficients to users with the weakest channel conditions, while users with the strongest channel gains are assigned the lowest power levels \cite{Dobre}. For instance, in Fig. \ref{Fig:PDNOMA}, the second user $U_2$ is allocated a fraction $P_2$ of the total power, and the first user $U_1$ is allocated $P_1$, such that $P_1<P_2$. Since the weaker user $U_2$ is allocated higher power, it can directly decode its signal while treating the signals of the other users as noise.
On the other hand, using SIC, $U_1$ has to remove the interference by decoding $U_2$'s signal before detecting its own signal. \begin{figure}[t] \centerline{\includegraphics[width=3.5in]{NOMA_VLC.eps}} \caption{Scenario of two-user P-NOMA in VLC.} \label{Fig:PDNOMA} \end{figure} \subsection{NOMA in VLC} Several studies have considered the performance of different NOMA configurations in VLC systems. In \cite{Dai2018}, the symbol error rate (SER) of a C-NOMA-based VLC system was investigated, where users demonstrated identical error rate performance at different locations. On the other hand, recently published papers have shown that in VLC systems, P-NOMA is an efficient MA scheme for the aforementioned reasons \cite{Lin2019,Marshoud2017,liu2019}. Furthermore, reported results showed that NOMA\footnote{In the remainder of the paper, P-NOMA is designated by NOMA.} outperforms OMA techniques, such as OFDMA and TDMA, in terms of system capacity and number of simultaneously served users, particularly in single-input single-output broadcast channels and for certain channel-strength disparities among users \cite{Kizi2015,Yin2016,Yang2017,Shen2017}. The work in \cite{NOMA_random_receiver} considered the performance of a NOMA VLC system where users have random vertical orientations; in particular, the authors proposed user scheduling techniques and feedback mechanisms to boost the spectral efficiency. Moreover, a hybrid NOMA-OFDM system was investigated and shown to have superior performance in terms of the achievable rate over OMA-OFDM systems \cite{Chu2017,Fu2018}. Likewise, for uplink NOMA-based VLC communications, joint detection was proposed in \cite{Guan} to decode the messages of multiple users.
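To make the two-user SC/SIC decoding order of Fig. \ref{Fig:PDNOMA} concrete, the sketch below computes Shannon-style achievable rates for a two-user downlink P-NOMA link. It is an illustration only: the Gaussian-capacity expressions and all numeric values (gains, powers) are assumptions, and exact VLC capacity bounds under the IM/DD amplitude constraints differ from these expressions.

```python
import math

def two_user_noma_rates(g1, g2, P, a1, a2, N0=1.0):
    """Two-user downlink P-NOMA: U1 is the strong user (gain g1 > g2),
    U2 the weak user, with power fractions a1 < a2 and a1 + a2 = 1."""
    # U2 decodes its own signal directly, treating U1's signal as noise.
    r2 = math.log2(1 + a2 * P * g2 / (a1 * P * g2 + N0))
    # U1 first decodes and cancels U2's signal (SIC), then decodes its own
    # signal free of interference.
    r1 = math.log2(1 + a1 * P * g1 / N0)
    return r1, r2

r1, r2 = two_user_noma_rates(g1=4.0, g2=1.0, P=10.0, a1=0.2, a2=0.8)
```

For the SIC step at $U_1$ to succeed, $U_1$ must itself be able to decode $U_2$'s message, which holds here because $g_1 > g_2$ makes $U_2$'s stream easier to decode at $U_1$ than at $U_2$.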
Although NOMA is efficient in scenarios where the number of users is higher than the number of available orthogonal resources, its complexity grows rapidly with the number of users, since the $k^{\rm th}$ user needs to decode the messages of the preceding $k-1$ users before detecting its own signal. To address this issue, a simple approach is to group users into small clusters, such that users of the same cluster communicate using NOMA, while the different clusters are scheduled using an OMA technique. In particular, MIMO can be leveraged to provide additional transmission gains, which can be realized through precoded SC and hybrid SDMA-NOMA or OMA. In precoded SC, all users are sorted based on their effective precoded channel gains in a single cluster \cite{Wyner1974,Nguyen2017,Dobre2018}. By contrast, in hybrid SDMA/NOMA/OMA, users are grouped into clusters separated by SDMA\footnote{SDMA is discussed in detail in Section \ref{sdma}.} \cite{Zeng2017_3}, where the users of a single cluster are served using NOMA. It was demonstrated in \cite{Zeng2017_3} and \cite{Zeng2017_2} that MIMO-NOMA outperforms MIMO-OMA in terms of sum rate and user fairness. However, these advantages of MIMO-NOMA systems come at the expense of a complex transmitter design, where joint optimization of the signal precoding and decoding orders is required for the different users. As explained earlier, a MIMO design can be realized by assuming multiple transmitting LEDs and multiple PDs at the receiver. Such a system cannot employ the power allocation methods designed for NOMA VLC systems with a single transmitting LED, such as gain ratio power allocation (GRPA) \cite{Marshoud2016}. Accordingly, several power allocation strategies have been proposed in the literature for MIMO-NOMA RF systems, e.g., hybrid precoding and post-detection \cite{Ding2016} and signal alignment \cite{Ding2016_2,dobri2020}. However, their counterparts for MIMO-NOMA VLC are almost non-existent.
To our knowledge, only Chen \textit{et al.} investigated NOMA-based MIMO VLC systems and proposed a power allocation strategy, called normalized gain difference power allocation (NGDPA) \cite{Chen2018}. The reported results for NGDPA illustrated a sum rate improvement of 29.1\% compared to GRPA. Multi-cell NOMA in the context of VLC has also not been well investigated in the literature. In \cite{Zhang2017}, Zhang \textit{et al.} proposed a user grouping scheme based on users' locations to minimize the interference caused by multi-cell deployment. Further, the authors in \cite{Rajput2019} proposed a joint NOMA transmission scheme to serve users in overlapping regions of different cells. In \cite{Shi2019}, Shi \textit{et al.} investigated the use of offset quadrature amplitude modulation (OQAM)/OFDM-NOMA modulation in a multi-cell VLC system. \textcolor{black}{Although NOMA schemes outperform OMA-based VLC systems in terms of spectral efficiency, the multiplexing gain of NOMA is highly affected by channel symmetry. In the context of VLC, channel symmetry is a major challenge, as the communication link is usually dominated by the LoS component \cite{Ahmadi2018,Marshoud_WCL}. Indeed, for small-cell designs, where the number of users is relatively large, it is inefficient to multiplex all users using NOMA, as this may lead to increased complexity and unsuccessful SIC operation. Therefore, hybrid NOMA/OMA schemes are promising solutions to realize a trade-off between multiplexing gain, computational complexity, and error propagation. In hybrid NOMA/OMA systems, users are split into multiple groups, where users within the same group employ NOMA, while the different groups are multiplexed using OMA. User pairing and grouping represent a key challenge in hybrid NOMA/OMA systems, requiring sophisticated algorithms to realize the full potential of NOMA.
However, in VLC systems, as user mobility is low, the variation in the channel conditions is relatively small; hence, user pairing and grouping require less complicated algorithms compared to RF systems. The effect of user pairing and grouping has been extensively studied in RF systems \cite{Benjebbovu2013,Sedaghat2018,Abbasi2016,Zhu2019,He2016}, but its application in VLC systems requires further investigation. For instance, in \cite{Yin2016, Almohimmah2018}, the authors adopted a channel-gain-based pairing strategy that aims to maximize the system's throughput. This strategy relies on selecting the two users with the most distinct channel gains to perform NOMA. However, this approach causes high interference to users with correlated channels. In \cite{Yapici2019}, the authors proposed individual and group-based NOMA user pairing in a VLC system and showed that near-optimal sum-rate performance can be achieved. Moreover, user grouping based on users' locations was proposed in \cite{Zhang2017} in order to reduce the interference in VLC multi-cell networks, where users in each group are served by only one access point using NOMA. On the other hand, the authors in \cite{Abumarshoud2019} proposed a hybrid OMA/NOMA scheme for attocellular VLC based on a smart transmitter that dynamically selects the adequate MA technique according to the environment conditions. Finally, in \cite{Janjua2020}, the authors proposed an efficient user pairing scheme for both even and odd numbers of users. First, users are sorted in ascending order of channel strength; then, they are split into two or three groups, depending on whether the number of users is even or odd. Pairing is finally performed by choosing one user from each group, starting from the users with the lowest channel gains. Reducing the channel correlation between end users is another method of enhancing the performance of NOMA systems.
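One plausible reading of the channel-gain-based grouping-and-pairing idea discussed above, restricted to an even number of users, can be sketched as follows. This is a simplified interpretation for illustration only, not the exact algorithm of \cite{Janjua2020}; the function name and the two-group split are assumptions.

```python
def pair_users(channel_gains):
    """Sort users in ascending order of channel gain, split them into a
    weak group and a strong group, and pair one user from each group,
    starting from the lowest channel gains (even user count assumed)."""
    order = sorted(range(len(channel_gains)), key=lambda k: channel_gains[k])
    half = len(order) // 2
    weak, strong = order[:half], order[half:]
    # Each NOMA pair combines a weak user with a strong user, so the two
    # users in a pair have distinct channel gains.
    return list(zip(weak, strong))

# Users 0..3 with illustrative channel gains: users 1 and 3 are the weakest.
pairs = pair_users([0.9, 0.1, 0.5, 0.3])
```

Each returned pair then communicates via NOMA, while the different pairs are multiplexed with an OMA scheme, as in the hybrid NOMA/OMA systems described above.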
In \cite{Marshoud2017}, the authors proposed an adaptive adjustment of the semi-angle of the LEDs and the FoV of the PD in order to create dissimilar channels. Additionally, the use of different advanced receiver structures to reduce channel correlation is a potential solution that may be further investigated \cite{Wang2018}. Nevertheless, the enhancement of NOMA performance through reducing channel symmetry needs to be further explored in the context of VLC. } Finally, it is noted that NOMA-based hybrid VLC/RF systems have been proposed as viable solutions that compensate for the limitations of VLC systems, particularly in uplink communication scenarios \cite{Papanikolaou2019,Papanikolaou_GLOBECOM,Zhou2018}. In the same context, hybrid wavelength division multiplexing (WDM)-NOMA has been proposed in \cite{Aljohani}, where multi-color LEDs are used to allow simultaneous transmissions at different wavelengths. Relevant work on NOMA in VLC is summarized in Table \ref{TableI}. \begin{table*} \caption{VLC Related Work on NOMA.} \label{TableI} \centering \footnotesize \begin{tabular}{|p{30pt}|p{120pt}|p{120pt}|p{180pt}|} \hline {\textbf{Reference}} & \makecell{\textbf{System Model}} & \makecell{\textbf{Objective}} & \makecell{\textbf{Findings}} \\ \hline \hline \makecell{\cite{Dai2018}} & \makecell{Single-cell downlink C-NOMA} & \makecell{Analysis and evaluation of the SER} & \makecell{Using adequate power allocation, users at\\ different locations achieve almost an identical SER } \\ \hline \makecell{\cite{Marshoud2017}} & \makecell{Single-cell downlink NOMA} & \makecell{Derivation of closed-form expression\\ for the bit error rate (BER)} & \makecell{Closed-form expressions match simulation results} \\ \hline \makecell{\cite{Yin2016}} & \makecell{Single-cell downlink NOMA} & \makecell{ Derivation of closed-form expressions\\ for the coverage probability\\ and ergodic sum rate } & \makecell{ NOMA outperforms the conventional OMA scheme} \\ \hline
\makecell{\cite{NOMA_random_receiver}} & \makecell{Single-cell downlink NOMA} & \makecell{Derivation of closed-form expressions\\ for the sum rate and outage\\ probability } & \makecell{Analytical results agree with simulation results \\ and near-optimal sum rate is achieved using \\ a limited feedback scheme} \\ \hline \makecell{\cite{Chu2017},\cite{Fu2018}} & \makecell{Single-cell downlink NOMA-OFDM} & \makecell{Maximization of the sum rate} & \makecell{NOMA-OFDM is superior to the OMA-OFDM system\\ in terms of achievable data rate} \\ \hline \makecell{\cite{Guan}} & \makecell{Single-cell uplink NOMA} & \makecell{Evaluation of BER based on phase \\ pre-distorted joint detection } & \makecell{Improved BER performance compared to \\NOMA based on SIC for different power values } \\ \hline \makecell{\cite{Chen2018}} & \makecell{MIMO-NOMA} & \makecell{Maximization of the sum rate} & \makecell{NGDPA improves the sum rate \\ performance compared to GRPA } \\ \hline \makecell{\cite{Zhang2017}} & \makecell{ Downlink MU-multi-cell NOMA} & \makecell{Maximization of the sum rate\\ and max-min rate criterion} & \makecell{User grouping and power allocation are optimized, \\hence achieving a higher sum rate than OMA } \\ \hline \makecell{\cite{Rajput2019}} & \makecell{Downlink MU-multi-cell NOMA} & \makecell{Maximization of the sum rate } & \makecell{Joint transmission (JT) NOMA achieves higher sum\\ rates compared to the frequency reuse factor-2 NOMA} \\ \hline \makecell{\cite{Shi2019}} & \makecell{ Downlink MU-multi-cell\\ OQAM/OFDM-NOMA} & \makecell{ Evaluation of spectral efficiency, \\ BER, and error vector magnitude} & \makecell{Proposed scheme outperforms OFDM-NOMA\\ and is more robust to inter-cell interference } \\ \hline \makecell{\cite{Abumarshoud2019}} & \makecell{Multi-cell hybrid OMA/NOMA} & \makecell{Evaluation of sum rate, outage\\ probability, and fairness performances} & \makecell{Dynamically selecting the adequate MA technique\\ achieves better performance than a
static configuration} \\ \hline \makecell{\cite{Papanikolaou2019,Papanikolaou_GLOBECOM}} & \makecell{Downlink hybrid VLC/RF} & \makecell{Maximization of the sum rate} & \makecell{Optimal joint user grouping and power allocation \\based on game theory was proposed; this outperforms\\ the standard opportunistic scheme} \\ \hline \makecell{\cite{Zhou2018}} & \makecell{Cooperative NOMA VLC/RF with \\simultaneous wireless information\\ and power transfer (SWIPT)} & \makecell{Derivation of closed-form expression \\for the outage probability} & \makecell{A trade-off on rate splitting allows outage performance\\ balancing among users} \\ \hline \makecell{\cite{Aljohani}} & \makecell{Hybrid WDM-NOMA} & \makecell{Maximization of the sum rate} & {WDM-NOMA outperforms NOMA in terms of sum rate} \\ \hline \end{tabular} \end{table*} \section {\textcolor{black}{SDMA} for VLC} \label{sdma} \subsection{Background} In recent building designs, it is common to have multiple illuminating LEDs in indoor spaces. In VLC systems, channel access can be realized either through multiple access channel (MAC) or broadcast channel (BC). Most research effort has focused on the analysis of downlink BC of MU-MIMO, with an emphasis on the data rate performance. Such systems experience interference when orthogonal frequency/time resources are limited. MUI is a common issue in MU-MIMO systems, which can be eliminated at the receiver using an efficient MUD technique \cite{Spencer2004,Spencer2004_2}. However, the implementation of MUD in VLC systems suffers from high complexity and energy inefficiency. Therefore, SDMA, which is based on data precoding at the transmitter, constitutes a promising alternative solution. \subsection{SDMA in VLC} An early implementation of SDMA is based on block diagonalization (BD), a generalized form of channel inversion precoding \cite{Chen2013}. 
Although BD is a simple linear precoding technique, its application is limited to the scenario where the number of transmitting LEDs is larger than the total number of served users, i.e., the underloaded regime. The authors in \cite{Hong2013} used BD precoding in downlink MU-MIMO VLC and showed that BD is constrained by the correlation of the involved wireless links. Hence, a scheme that reduces this correlation was proposed, based on the adjustment of PDs' FoVs. Yu \textit{et al.} developed in \cite{Yu2013} linear zero-forcing (ZF) and ZF dirty paper coding (ZF-DPC) schemes in order to eliminate MUI and maximize the throughput or max-min fairness. However, in \cite{Ma2013}, the authors relaxed the ZF condition by applying the minimum mean squared error (MMSE) as a performance metric for the precoder design, in both perfect and imperfect CSI scenarios. In \cite{Li2015}, an optimal mean squared error (MSE) precoder was designed in order to minimize the BER, under per-LED power constraints. The transceiver design was later simplified by adopting a ZF precoder. The corresponding results showed that the simplified scheme outperforms the conventional ZF precoder in terms of BER, while MSE achieves the best performance. Similar designs were proposed in \cite{Pham2017}, where an optimal ZF precoder was obtained using an iterative concave-convex procedure, aiming at maximizing the achievable per-user data rate. Then, the authors simplified the precoder design using the high signal-to-noise ratio (SNR) approximation. In \cite{Shen2016_2}, Shen \textit{et al.} proposed a different beamforming technique aiming at maximizing the sum rate of a virtual MIMO VLC system. Beamforming was designed using the sequential parametric convex approximation method, and it has been shown through simulations that it outperforms conventional ZF-based beamforming, particularly for highly correlated VLC channels and low optical transmit power.
Likewise, Marshoud \textit{et al.} developed in \cite{Marshoud_2018} an optical adaptive precoding scheme for downlink MIMO VLC systems, under perfect and imperfect CSI. BER results showed that their scheme is more robust to imperfect CSI and channel correlation than conventional channel inversion precoding. The authors of \cite{Wang2015} proposed precoding for an OFDM-based MU-MIMO VLC system, where precoding is applied at each sub-carrier, using ZF and MMSE techniques. This led to the enhancement of the sum rate performance at high SNR and for the case of uncorrelated channels. In \cite{Zeng_2017}, the sum rate maximization problem was reformulated as a weighted MMSE (WMMSE) problem to jointly design the BD precoding and receive filter coefficients. A similar approach was also considered in \cite{Ying2015}. Finally, Adasme \textit{et al.} proposed in \cite{Adasme} a hybrid approach, called spatial TDMA (STDMA), where full connectivity is achieved by allowing simultaneous data rate transmission of multiple nodes within an optimized schedule. The contributions in \cite{Ma2015,Ma2018} focused on precoding designs for coordinated multi-point (CoMP) MU-MIMO VLC systems. Through numerical analysis, the authors showed improvements in terms of signal-to-interference-plus-noise ratio (SINR) and weighted sum MSE (WSMSE), respectively. Additionally, Yin \textit{et al.} considered in \cite{Yin2015,Yin_GLOBECOM} different SDMA grouping algorithms to obtain a trade-off between Jain’s fairness index and area spectral efficiency for a CoMP-VLC system through the utilization of linear ZF precoding. The authors in \cite{Yangchen2018} proposed a joint precoder and equalizer design based on interference alignment for MU multi-cell MIMO VLC systems under imperfect CSI. In \cite{Pham2019}, different levels of coordination/cooperation were considered using a ZF precoder.
It is worth noting that SDMA can also be realized using an angle diversity transmitter (ADT), which consists of multiple directional narrow-beam LED elements. An ADT creates independent narrow-band beams (by reducing the FoVs of LEDs) towards spatially deployed users, while achieving the same coverage as a single wide-beam transmitter \cite{Carrut2000,Kim2014,Chen2015}. ADTs can also replace conventional single-element transmitters in multi-cell scenarios such that more power is directed towards each user, hence improving communication reliability \cite{Basnayaka}. In order to avoid interference among users, spatial separation needs to be implemented by adequately allocating transmit power to the beams. Subsequently, each receiver attempts to detect its signal by treating any interference as noise. In spite of the ADT's interference reduction potential, it requires a complex optical front-end to supply independent signals to multiple LED elements.\\ Compared to NOMA, SDMA simplifies the transmitter and receiver designs. However, it becomes inefficient as soon as the number of users exceeds the number of transmit LEDs, i.e., an overloaded scenario. It should be noted that the number of LEDs has to be greater than or equal to the number of users in order to guarantee interference reduction. Moreover, since SDMA depends highly on the CSI at the transmitter (CSIT) in order to mitigate interference, its performance degrades with imperfect CSI \cite{Guanghan,Sifaou,Marshoud_2018}. Finally, due to the unique characteristics of IM/DD, which require signals to be real and unipolar, it is very difficult to pair orthogonal users together, as is done in RF systems. The accomplished work on SDMA in VLC is summarized in Table \ref{TableIII}.
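As an illustration of the precoding idea underlying SDMA, the sketch below builds a pseudo-inverse ZF precoder for a hypothetical two-user, two-LED channel. The channel gains are invented, and the sketch deliberately ignores the DC bias and per-LED power constraints of a real VLC design:

```python
import numpy as np

# Hypothetical 2x2 VLC channel matrix: rows = users, columns = LEDs
H = np.array([[0.9, 0.3],
              [0.2, 0.8]])

# Pseudo-inverse ZF precoder: with P = H^+, the effective channel
# H @ P becomes the identity, so each user sees only its own stream
# and MUI is forced to zero.
P = np.linalg.pinv(H)

print(np.round(H @ P, 6))
```

This also makes the underloaded-regime requirement concrete: the pseudo-inverse can only zero out MUI when the number of LEDs is at least the number of users.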
\begin{table*} \caption{VLC Related Work on SDMA.} \label{TableIII} \centering \footnotesize \begin{tabular}{|p{30pt}|p{120pt}|p{120pt}|p{180pt}|} \hline {\textbf{Reference}} & \makecell{\textbf{System Model}} & \makecell{\textbf{Objective}} & \makecell{\textbf{Findings}} \\ \hline \hline \makecell{\cite{Chen2013,Hong2013}} & \makecell{MU-MISO with BD precoding} & \makecell{Evaluation of the BER } & \makecell{With enough transmit power, a data rate of 100 Mbps \\is achieved for BER=$10^{-6}$} \\ \hline \makecell{\cite{Yu2013}} & \makecell{MU-MISO with ZF\\ or ZF-DPC precoding} & \makecell{ Maximization of the throughput \\and max-min fairness} & \makecell{ZF-DPC outperforms linear ZF, in particular when\\ users are close to each other} \\ \hline \makecell{\cite{Ma2013}} & \makecell{MU-MISO with MMSE precoding} & \makecell{Evaluation of the optimal linear\\ MMSE precoder under perfect \\ and imperfect CSIT} & \makecell{Linear MMSE precoding is able to separate \\the broadcast signals at the VLC receivers} \\ \hline \makecell{\cite{Li2015}} & \makecell{MU-MISO with MMSE/ZF precoding} & \makecell{Minimization of the MSE\\ and evaluation of the BER } & \makecell{MMSE precoding achieves best results, \\while proposed simplified ZF approaches MMSE \\ performance for a small number of (or dispersed) users } \\ \hline \makecell{\cite{Pham2017}} & \makecell{MU-MISO with ZF precoding } & \makecell{Maximization of the sum rate \\ and max-min fairness} & \makecell{The generalized-inverse ZF design achieves better\\ performance than the pseudo-inverse ZF design, \\in particular for high SNRs} \\ \hline \makecell{\cite{Shen2016_2}} & \makecell{MU-MISO with ZF precoding} & \makecell{Maximization of the sum rate } & \makecell{The proposed approach does not restrict the co-channel\\ interference to zero, and thus, achieves a higher sum\\ rate than conventional ZF techniques } \\ \hline \makecell{\cite{Marshoud_2018}} & \makecell{MU-MISO with adaptive precoding } &
\makecell{Derivation of closed-form expression\\ and evaluation of BER under perfect\\ and imperfect CSIT} & \makecell{Adaptive precoding provides significant performance \\enhancement compared to conventional channel \\inversion precoding} \\ \hline \makecell{\cite{Wang2015}} & \makecell{MU-MIMO OFDM with ZF/MMSE \\ precoding } & \makecell{Evaluation of the spectral efficiency } & \makecell{Sub-carrier with higher index achieves a higher\\ spectral efficiency, particularly for highly correlated \\users, and MMSE outperforms ZF for low \\transmit power and close users} \\ \hline \makecell{\cite{Zeng_2017}} & \makecell{MU-MISO with BD precoding} & \makecell{Maximization of the sum rate\\ with imperfect CSIT} & \makecell{Robust precoding is designed using BD and WMMSE \\ to suppress MUI and channel estimation errors} \\ \hline \makecell{\cite{Ying2015}} & \makecell{MU-MIMO with joint \\ MMSE precoding and equalizing} & \makecell{Minimization of the MSE \\and evaluation of the BER\\ in presence of CSIT errors} & \makecell{Proposed joint optimization method demonstrates BER \\improvements when experiencing imperfect CSIT} \\ \hline \makecell{\cite{Adasme}} & \makecell{MU-STDMA} & \makecell{Minimization of the total scheduling\\ time and power consumption } & \makecell{STDMA achieves full connectivity, and the \\proposed greedy algorithm significantly reduces \\the processing time} \\ \hline \makecell{\cite{Ma2015, Ma2018}} & \makecell{Multi-cell MU-MIMO CoMP \\with MMSE precoding} & \makecell{Minimization of the WSMSE} & \makecell{Proposed approach realizes low-complexity interference\\ mitigation compared to CoMP JT} \\ \hline \makecell{\cite{Yin2015,Yin_GLOBECOM}} & \makecell{CoMP SDMA with ZF precoding} & \makecell{Evaluation of the Jain’s fairness index\\ and of area spectral efficiency} & \makecell{The proposed grouping algorithm achieves better\\ area spectral efficiency-fairness trade-off compared\\ to existing benchmarks} \\ \hline 
\makecell{\cite{Yangchen2018}} & \makecell{Multi-cell MU-MIMO joint\\ MMSE precoding and equalizing } & \makecell{Minimization of the MSE \\ and sum rate evaluation\\ with imperfect CSIT} & \makecell{The joint design of the precoder and equalizer\\ efficiently reduces inter-user interference and\\ inter-cell interference, and achieves better performance\\ compared to existing MMSE and max-rate designs} \\ \hline \makecell{\cite{Pham2019}} & \makecell{Multi-cell MU-MIMO CoMP\\ with ZF precoding } & \makecell{Maximization of the sum rate} & \makecell{Partial cooperative precoding and coordinated precoding\\ outperform per-cell coordinated precoding when the \\number of users is not large compared to the\\ number of the LEDs or for high transmit power} \\ \hline \makecell{\cite{Kim2014,Chen2015}} & \makecell{SDMA using ADTs} & \makecell{Evaluation of the throughput} & \makecell{The use of ADTs improves the performance of\\ multi-user systems, and optical SDMA outperforms \\optical TDMA in terms of throughput} \\ \hline \makecell{\cite{Basnayaka}} & \makecell{Attocell SDMA downlink using ADT} & \makecell{Derivation of closed-form \\expression for the spectral efficiency} & \makecell{Inter-cell interference is mitigated and optical SDMA\\ outperforms optical TDMA} \\ \hline \makecell{\cite{Sifaou}} & \makecell{MU-MIMO using ADTs} & \makecell{Maximization of the minimum SINR\\ and evaluation of the rate per-user\\with imperfect CSIT} & \makecell{The proposed precoding and receiver design is robust \\to channel estimation errors and achieves significant \\gains compared to non-robust receiver design} \\ \hline \end{tabular} \end{table*} \section {\textcolor{black}{RSMA} for VLC} Although NOMA realizes simultaneous transmission of a large number of users in overloaded scenarios and SDMA achieves spatial separation between users in underloaded scenarios, their performance is highly dependent on users' deployments, channel conditions, and availability of CSIT.
Therefore, a generalized configuration, which can optimize the utilization of resources for both overloaded and underloaded scenarios and provide more robustness to CSIT estimation errors, is of paramount importance. This was the main motivation behind the proposal of RSMA as a generalized scheme, where NOMA and SDMA can be considered as special cases. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Blocks.eps} \caption{RSMA-based two-user MISO VLC system.} \label{Fig:2MISO} \end{figure*} \subsection {Background} The basic concept of rate-splitting was first introduced in \cite{Te_Han} for a two-user single-input single-output BC scenario. In \cite{Mao2018,Clerckx2016}, RSMA was proposed as a powerful and generalized MA technique for RF systems. It was demonstrated that RSMA could potentially offer tangible improvements as a MA technique, by allowing wireless networks to efficiently serve multiple users with different capabilities in overloaded and underloaded scenarios. The key principle of RSMA relies on the implementation of linear precoding at the transmitter and SIC at the receiver; it has been shown that this MA scheme is capable of bridging the gap between the NOMA and SDMA techniques. In RSMA, users' messages are split into common and private parts at the transmitter. Then, a combiner is used to multiplex the common parts of all users and encode them into a single common stream. Meanwhile, the private parts are encoded separately into multiple private streams. Subsequently, a linear precoder is used to mitigate MUI. Finally, all precoded streams are superimposed on the same signal and sent over a VLC BC channel. At each user, the common stream is decoded and the user's intended data is extracted. Then, interference introduced by the common stream is eliminated using SIC, as in NOMA.
Subsequently, the private part of each user's message can be decoded, while treating the private parts of other users' messages as noise, as in SDMA. This mechanism is illustrated in Fig. 1 of \cite{Mao2018}.\\ RSMA depends mainly on the splitting design of messages and power allocation strategies between common and private parts of users' messages. Extensive research efforts have been devoted to the investigation of these issues in order to improve the efficiency of RSMA in the context of RF. In \cite{Mao2018}, the authors provided an analytical framework to study the performance of RSMA in MU-MISO BC channels. The reported results showed that RSMA outperforms NOMA and SDMA in terms of sum rate for different users' setups. Dai \textit{et al.} investigated in \cite{Dai2016} RSMA with massive MIMO and imperfect CSIT. They proposed a hierarchical-rate-splitting (HRS) framework where two different types of common messages are defined, which can be decoded by either all users or by a subset of them. Then, the associated sum rate performance was investigated in order to adjust the precoders of common messages. Numerical results illustrated the superiority of HRS compared to conventional techniques such as TDMA and BC with user scheduling. This work was extended in \cite{Dai2017} to a MU millimeter wave (mmWave) case, where CSIT is either statistical or quantized. Similar to \cite{Dai2016}, Joudeh \textit{et al.} proposed in \cite{Joudeh2017} a hybrid RSMA message precoding scheme in order to achieve max-min fairness amongst multiple co-channel multicast groups. The superiority of their approach was demonstrated through degrees-of-freedom analysis and simulation results. The authors in \cite{Papaz2017} evaluated the robustness of RSMA in the presence of hardware impairments, such as phase distortion and thermal noise, and under perfect/imperfect CSIT.
In addition, Abdelhamid \textit{et al.} investigated in \cite{Abdelhamid} the use of channel inversion precoding for a MU-MIMO RSMA system, where phase-shift-keying was the adopted modulation scheme. Results showed that RS combined with channel inversion achieves a significant sum rate improvement compared to RS with ZF or other MA schemes. Moreover, the authors in \cite{Yijie} incorporated RS with DPC to achieve the largest rate region for MISO BC with partial CSIT for different network loads and users' deployments. In \cite{Hao2015}, Hao \textit{et al.} proposed a practical scheme for private symbol encoding in RSMA using conventional ZF beamforming. Then, they studied the sum rate performance for a two-user BC channel with limited CSI feedback. In \cite{Gui-Zhou}, the authors considered the trade-off between spectral efficiency and energy efficiency for RSMA in multi-antenna BC channels. It was shown that RSMA achieves a significant improvement in terms of spectral and energy efficiency. The use of RSMA in the downlink of a MISO SWIPT BC channel was investigated in \cite{Mao_swipt}. The sum rate performance of rate-splitting was evaluated and compared with other MA schemes. A further study of RSMA in downlink CoMP JT networks was considered in \cite{Mao_JT_2019}, where results showed the superiority of RS in JT over SDMA- or NOMA-based JT. Also, in \cite{Zhang2019}, Zhang \textit{et al.} considered a cooperative rate splitting strategy based on the three-node relay channel, and demonstrated the enhanced performance of this scheme compared to cooperative-based NOMA. Similar results were reported in \cite{Yijie2019}, where max-min fairness was used as a metric for a $K$-user MISO BC with user-relaying cooperative communication. The authors in \cite{Papazafeiropoulos} adopted an RS strategy to overcome the saturation that occurs in multi-pair MIMO relay systems with imperfect CSIT. The use of RSMA in cloud radio access networks was considered in \cite{yu2019}.
Yu \textit{et al.} proposed an enhanced RSMA technique that outperforms the original RSMA through careful grouping of common signals, chosen using hierarchical clustering with an inter-UE dissimilarity metric defined based on channel directions. Additionally, the superiority of RSMA over other MA schemes was investigated for satellite systems in \cite{Longfei}, where users achieved max-min fairness for multi-beam satellite communications under CSIT uncertainty with minimum inter-beam interference. Finally, RSMA was considered in \cite{Rahmati} for cellular-connected drones, where the authors investigated the energy efficiency of RSMA and NOMA schemes in a mmWave downlink transmission scenario. Despite the extensive research efforts on RSMA for different systems in the RF domain, its applicability in VLC systems has not yet been explored. Therefore, in this article, we provide preliminary results on the performance of RSMA in VLC systems, and compare its capacity gain with respect to existing VLC MA techniques. Furthermore, we give insights into the challenges and future research directions for RSMA-based VLC systems. \subsection{RSMA-Based VLC} The concept of RSMA was proposed for a multi-antenna BC channel in \cite{Mao2018} to bridge the gap between two extreme MA schemes, namely NOMA and SDMA. Mao \textit{et al.} showed that RSMA works best in the multiple-input case. In the VLC context, this can be realized using several transmitting LEDs to create a BC channel towards several users. Hence, in this survey, we analyze the performance of RSMA in a downlink MU-MISO BC VLC system. \subsubsection {System Model} \label{sec:models} For the sake of simplicity, but without loss of generality, we assume two transmitting LEDs that send messages to two single-PD users, as depicted in Fig. \ref{Fig:2MISO}. Messages $U_1$ and $U_{2}$ are intended for users 1 and 2, respectively. $U_1$ is divided into two parts: private part $U^1_{1}$ and common part $U^{12}_{1}$.
Similarly, $U_2$ is divided into $U^2_{2}$ and $U^{12}_{2}$. Then, the two private messages, $U^{1}_{1}$ and $U^{2}_{2}$, are encoded into private streams $s_{1}$ and $s_{2}$, respectively. Next, from a common codebook, $U^{12}_{1}$ and $U^{12}_{2}$ are combined and encoded into one common stream $s_{12}$. Without loss of generality, we assume that $s_{i}$ ($i \in \{1, 2, 12\}$) is randomly selected from a pulse amplitude modulation constellation with zero mean and normalized range $[-1, 1]$. Let $\textbf{s}=\left[s_1,s_2,s_{12}\right]^T$ be the transmitted symbols vector, with $\mathbb{E}(\textbf{s}\textbf{s}^{T})=\textbf{I}$. It is further assumed that the non-linear response of the LED is compensated through digital pre-distortion \cite{Stepniak2013}. To reduce MUI, a linear precoding matrix $\textbf{P}=\left[\textbf{p}_{1},\textbf{p}_{2},\textbf{p}_{12}\right]$ is considered, where $\textbf{p}_{i}=[p_{i,1} \, p_{i,2}]^T \in \mathbb{R}^{2 \times 1}$ is the precoding vector for the $i^{\rm th}$ stream. A DC bias $\textbf{d}_{DC} \in \mathbb{R}^{2 \times 1}$ is added in order to ensure positive signals, as required by the LEDs.
Hence, the transmitted signal, $\textbf{x} \in \mathbb{R}_{+}^{2 \times 1}$, can be written as \begin{equation} \mathbf{x} = \left[x_1, x_2\right]^T= \mathbf{P} \mathbf{s}+\mathbf{d}_{DC} = \sum_{i \in \{1,2,12\}}\mathbf{p}_{i}s_{i}+\mathbf{d}_{DC} \end{equation} and the received signal at the $k^{\rm th}$ PD, after optical-to-electrical conversion, is expressed as \begin{equation} y_{k}=\varsigma \zeta \mathbf{h}_{k}^{T} \mathbf{x}+{n}_{k}, \; \forall k \in \{1,2\} \end{equation} where $\varsigma$ is the conversion factor of any LED, $\zeta$ is the responsivity of any PD, $\mathbf{h}_{k}=[h_{k,1}, h_{k,2}]^T$ is the DC channel gain vector between the $k^{\rm th}$ PD and the transmitting LEDs, where each element is expressed as given in (\ref{eq:fixture_app}), and $n_k \sim \mathcal{N}(0,\sigma^2_k)$ is the additive white Gaussian noise, representing the thermal and shot noise, with zero-mean and variance $\sigma^2_k$. Due to the low mobility of indoor users, we assume that the channel gains are constant during the transmission, and that perfect CSI is available at the transmitter. In order to accurately design the precoding matrix $\textbf{P}$, the following constraints need to be satisfied to ensure that the LEDs work in their dynamic range: \begin{eqnarray} \centering \small L_1(\mathbf{p}_l)&=& \sum_{i\in\{1,2,12\}} |p_{l,i}| \nonumber \\ &\leq&\text{min}\left(d_{{DC}},P_{\rm{max}}-d_{{DC}}\right), \; \forall l \in \{1,2\}. \end{eqnarray} The MMSE equalizer for the common stream is utilized at the $k^{\rm th}$ user for signal detection \cite{Ma2013}, followed by SIC as follows: First, user $k$ decodes the common signal $s_{12}$ while treating the other signals as noise.
Hence, the received SINR at the $k^{\rm th}$ user, for the common signal, is expressed as \begin{equation} \label{eq:commom} \gamma^{12}_k = \frac{\left(\mathbf{h}^T_k \mathbf{p}_{12}\right)^2}{\left(\mathbf{h}^T_k \mathbf{p}_1\right)^2 + \left(\mathbf{h}^T_k \mathbf{p}_2\right)^2+ \hat{\sigma}^2_k}, \; \forall k \in \{1,2\} \end{equation} where $\hat{\sigma}^2_k=\sigma_k^2 / \left( \varsigma \zeta \right)^2$ is the normalized received noise power. For the sake of simplicity, we assume that $\varsigma \zeta=1$, and thus $\hat{\sigma}_k^2=\sigma_k^2$. Then, the effect of the common signal is removed using SIC. This allows for the detection of the private signal by first employing the MMSE equalizer; user $k$ then attempts to decode its private message $s_k$, while treating the signal of the other user as noise. Consequently, the received SINR at user $k$, for its private signal, can be written as \begin{equation} \label{eq:private} \gamma^{k}_k = \frac{\left(\mathbf{h}^T_k \mathbf{p}_{k}\right)^2}{\left(\mathbf{h}^T_k \mathbf{p}_{\bar{k}}\right)^2 + {\sigma}^2_k}, \; \forall (k,\bar{k}) \in \{(1,2),(2,1)\} \end{equation} and the achieved data rate at user $k$ is expressed by \cite{Mao2018} \begin{equation} R^{12}_k=\text{log}_{2}(1+\gamma^{12}_k), \end{equation} and \begin{equation} R_k^k=\text{log}_{2}(1+\gamma^k_k),\; \forall k\in \{1,2\} \end{equation} where $R_k^{12}$ and $R_k^k$ are the data rates for the common and private signals, respectively. In order to ensure successful decoding of the common stream $s_{12}$ at both users, the common rate shall not exceed $R_{12}=\text{min}(R^{12}_1,R^{12}_2)$. The targeted common rate for each user can be achieved if $R_{12}$ is adequately shared between the two users, i.e., $R_{12}=\sum_{k=1}^2 R_{k,\rm{com}}$, where $R_{k,\rm{com}}$ is the $k^{\rm th}$ user's portion of the common rate.
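The SINR and rate expressions above can be evaluated numerically. The sketch below uses invented channel gains and precoders, unit noise power, and $\varsigma\zeta=1$, purely for illustration:

```python
import numpy as np

def rsma_rates(H, P, sigma2):
    """Two-user RSMA rates.
    H: 2x2 channel matrix (rows = users), P: 2x3 precoder whose columns
    are [p_1, p_2, p_12], sigma2: per-user noise powers."""
    g = H @ P                                   # g[k, i] = h_k^T p_i
    R_com, R_priv = [], []
    for k in range(2):
        kbar = 1 - k
        # Common stream decoded first, both private streams act as noise
        gamma_com = g[k, 2]**2 / (g[k, 0]**2 + g[k, 1]**2 + sigma2[k])
        # After SIC, own private stream vs. the other private stream
        gamma_priv = g[k, k]**2 / (g[k, kbar]**2 + sigma2[k])
        R_com.append(np.log2(1 + gamma_com))
        R_priv.append(np.log2(1 + gamma_priv))
    R12 = min(R_com)        # common rate decodable by both users
    return R12, R_priv

# Hypothetical channels and precoders, unit noise power
H = np.array([[0.9, 0.3], [0.2, 0.8]])
P = np.array([[0.5, 0.1, 0.4],
              [0.1, 0.5, 0.4]])
R12, R_priv = rsma_rates(H, P, sigma2=[1.0, 1.0])
```

Any split of $R_{12}$ between the two users can then be added to their private rates to obtain the per-user totals.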
Consequently, the total achievable data rate of user $k$, denoted $R_{k,\rm{ov}}$, can be expressed by \cite{Mao2018} \begin{equation} \label{eq:c4} R_{k,\rm{ov}}=R_{k,\rm{com}}+R_k^k, \; \forall k \in \{1,2\}. \end{equation} \subsubsection{Problem Formulation} Although conventional precoders, such as ZF and ZF-DPC, are simple and can efficiently remove MUI, they suffer from performance degradation at low SNR values. Consequently, there is a need for optimal precoding in order to maximize an objective function, e.g., sum rate, weighted sum rate (WSR), proportional fairness, or max-min fairness \cite{Pham2017}, under QoS requirements and per-LED transmit power constraints that take into account the nature of the optical signals, which are real and positive valued. Inspired by the MMSE precoding method presented in \cite{Nguyen2014}, we maximize the WSR of the studied MU-MISO VLC system. For a given weights vector $\textbf{w} =[w_1,w_2 ]$, the WSR maximization problem (P1) can be expressed as follows: \begin{subequations} \begin{align} \small \max_{\mathbf{P},\mathbf{R}_{\rm{com}}} & \quad {R}(\textbf{w})={\sum_{k=1}^2}w_k \;R_{k,\rm{ov}} \tag{P1} \\ \label{c1} \text{s.t.}\quad & L_1(\mathbf{p}_l) \leq \varepsilon, \; \forall l \in \{1,2\} \tag{P1.a}\\ \label{c2} & \sum_{k=1}^2R_{k,\rm{com}} \leq R_{12} \tag{P1.b}\\ \label{c3} & \mathbf{R}_{\rm{com}}\geq \mathbf{0} \tag{P1.c} \end{align} \end{subequations} where $\textbf{R}_{\rm{com}}=[R_{1,\rm{com}},R_{2,\rm{com}}]$ is the common rate vector. (P1) is non-convex due to the presence of the variables $\textbf{p}_k$ ($k \in \{1,2\}$) in the denominators of the SINR expressions (\ref{eq:commom})-(\ref{eq:private}). Thus, its solution is not straightforward. Similar to \cite{Christensen2008}, we opt for a problem reformulation, where the objective becomes the minimization of the weighted MMSE, achieved by jointly optimizing the WMMSE precoding vectors and MSE equalizer weights.
To obtain a local optimum, we utilize the alternating optimization (AO) detailed in Algorithm \ref{Algo1} \cite{Christensen2008}, where $k$ is the iteration index, $\mathbf{w}$ is the MMSE weights vector, $\mathbf{\alpha}$ is the MSE receiver weights vector, $\mathbf{v}$ is the transformation of $\mathbf{R}_{\rm{com}}$, and $\delta$ is the tolerance threshold. In order to converge to a maximum WSR, the algorithm alternates between the WMMSE precoding design and the MSE equalizer weights design. For further details on the AO procedure, we refer the reader to Sections IV and V in \cite{Christensen2008}. \begin{algorithm}[t] \small{ \caption{Alternating Optimization Algorithm} \label{Algo1} \begin{algorithmic}[1] \State {Initialize $k \xleftarrow{}0$, $\mathbf{P}[k]$, $R[k]$} \Repeat \State $k \xleftarrow{} k+1$; $\mathbf{P}[k-1] \xleftarrow{} \mathbf{P}$ \State Update the WMMSE weights $\mathbf{w} \xleftarrow{} \mathbf{w}(\mathbf{P}[k-1])$ \State Update the receive filter gains $\mathbf{\alpha} \xleftarrow{} \mathbf{\alpha}(\mathbf{P}[k-1])$ \State Solve (P1) using WMMSE transformation for updated ($\mathbf{w}, \mathbf{\alpha}$), then update ($\mathbf{P}$, $\mathbf{v}$) \Until {$|{R}[k]-{R}[k-1]|\leq \delta$}. \end{algorithmic}} \end{algorithm} Finally, the reformulated problem can be solved using optimization software such as CVX in MATLAB \cite{CVX}. It is noted that the AO algorithm converges faster and achieves better performance than other types of precoding optimization algorithms. \subsubsection {NOMA and SDMA as Special Cases of RSMA} As mentioned earlier, RSMA is a generalized MA scheme, of which NOMA and SDMA are special cases. To implement SDMA from RSMA, the common stream is allocated zero power, and each user's message is encoded into a private stream only.
Hence, the transmitted signal in this case is \begin{equation} \mathbf{x}=\mathbf{P} \mathbf{s}+\mathbf{d}_{DC}=\sum_{i\in\{1,2\}}\mathbf{p}_i s_i+\mathbf{d}_{DC} \end{equation} and the received SINR at each user simplifies into (\ref{eq:private}). Similarly, NOMA can be obtained from RSMA by encoding the message of the user with the strongest channel into a private stream, while the message of the second user is encoded into the common stream. Assuming that user 1 has the strongest channel gain, the transmitted signal in this case can be written as \begin{equation} \mathbf{x}=\mathbf{P}\mathbf{s}+\mathbf{d}_{DC} = \sum_{i \in \{1,12\}}\mathbf{p}_i s_i+\mathbf{d}_{DC} \end{equation} and the associated SINRs of the first and second users are given by \begin{equation} \gamma_1^1 = \frac{(\mathbf{h}^T_1 \mathbf{p}_{1})^2}{{\sigma}^2_1} \end{equation} and \begin{equation} \gamma_2^{12} = \min\bigg( \frac{(\mathbf{h}^T_1 \mathbf{p}_{12})^2}{ (\mathbf{h}^T_1 \mathbf{p}_1)^2+ {\sigma}^2_1}\;,\;\frac{(\mathbf{h}^T_2 \mathbf{p}_{12})^2}{(\mathbf{h}^T_2 \mathbf{p}_1)^2+ {\sigma}^2_2}\bigg) \end{equation} respectively. It is worth mentioning that the flexibility of RSMA comes at the expense of a slightly higher encoding complexity at the transmitter. \begin{figure*}[t] \begin{minipage}{0.5\linewidth} \includegraphics[width=3in,center]{4LED.eps} \caption{Room configuration and users' scenarios (4 LEDs).} \label{Fig:4LEDs} \end{minipage} \hfill \begin{minipage}{0.5\linewidth} \includegraphics[width=3in,center]{2LED.eps} \caption{Room configuration and users' scenarios (2 LEDs).} \label{Fig:2LEDs} \end{minipage} \end{figure*} \section {Performance Evaluation} In this section, we present different scenarios for the application of RSMA in VLC systems, investigate their performance in terms of WSR, and then compare them to the performances of SDMA and NOMA. Moreover, we study the impact of different users' locations within an indoor space.
\begin{figure}[t] \centering \includegraphics[width=3in]{4leds_far_position.eps} \caption{WSR vs. SNR per LED array (Scenario 1, 4 LEDs).} \label{Fig:Sim1} \end{figure} \begin{table}[t] \scriptsize \centering \caption{Simulation Parameters.} \begin{tabular} [c]{|p{3cm}|c|c|} \hline \textbf{Parameter}&\textbf{Symbol}&\textbf{Value} \\ \hline \hline Number of LEDs per fixture & $Q$ & $3600\; (60\times60)$\\ \hline LED beam angle& $\varphi_{1/2}$ & $60^\circ$\\ \hline PD area& $A_k$ ($k=1,2$) & $1\;\mathrm{cm}^2$\\ \hline Refractive index of PD& $n$ & $1.5$\\ \hline Gain of optical filter& $T_s(\phi_{k,i})$ ($k=1,2$) &1\\ \hline FoV of PD & $\phi_c$ & $60^\circ$\\ \hline \end{tabular} \label{TableIV} \end{table} We consider an RSMA-based MU-MISO VLC system, where two single-PD users are served by two or four LED fixtures in a room of size $5\times5\times4\;\mathrm{m}^3$. The room configurations with the users' scenarios are detailed in Figs. \ref{Fig:4LEDs}--\ref{Fig:2LEDs}, as follows. In both figures, \textcolor{black}{two main} users' location scenarios are considered. In the first (blue circles), users are located in the middle space of the room with a separation of 3 m, whereas in the second (green circles), users are located at the top of the room, with a smaller separation of 0.4 m. Between the two figures, the number of LEDs is varied from 4 to 2, along with their locations. \textcolor{black}{In addition, a third scenario is considered for the 4 LEDs case, where the separation between users is 0.94 m (yellow circle for user 1 and green circle for user 2).} All coordinates are expressed in the 3D-space system. Furthermore, we assume the same optical device characteristics as in \cite{Ma2013}, while the two users are allocated equal priority, i.e., $w_1=w_2=\frac{1}{2}$ in the objective function of (P1). Since the noise power is assumed to be unity, the SNR designates the transmit power per LED. The remaining parameters are detailed in Table \ref{TableIV}.
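For completeness, the LoS channel gains behind such simulations can be sketched with the standard Lambertian model, using the Table \ref{TableIV} values as defaults. The formula below is the textbook LOS model rather than a reproduction of this article's earlier channel equations, so treat the function and its defaults as assumptions:

```python
import math

def lambertian_gain(phi, psi, d, phi_half_deg=60.0, area_m2=1e-4,
                    ts=1.0, n=1.5, fov_deg=60.0):
    """Standard Lambertian line-of-sight VLC channel gain.

    phi: irradiance angle at the LED (rad), psi: incidence angle at the
    PD (rad), d: LED-PD distance (m). Defaults mirror Table IV: 60-degree
    semi-angle, 1 cm^2 PD, unity optical filter gain, refractive index
    1.5, and a 60-degree field of view.
    """
    fov = math.radians(fov_deg)
    if psi > fov:
        return 0.0  # the PD receives nothing outside its field of view
    # Lambertian order from the LED semi-angle (m = 1 for 60 degrees)
    m = -math.log(2) / math.log(math.cos(math.radians(phi_half_deg)))
    g = n ** 2 / math.sin(fov) ** 2  # non-imaging concentrator gain
    return ((m + 1) * area_m2 / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * ts * g * math.cos(psi))
```

Stacking such per-link gains into $\mathbf{h}_1$ and $\mathbf{h}_2$ is what produces the highly correlated channels discussed in the scenarios below when the users sit close together.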
\begin{figure}[t] \centering \includegraphics[width=3in]{4leds_close_position.eps} \caption{WSR vs. SNR per LED array (Scenario 2, 4 LEDs).} \label{Fig:Sim2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3in]{low_correlation.eps} \caption{\textcolor{black}{WSR vs. SNR per LED array (Scenario 3, 4 LEDs).}} \label{Fig:Sim21} \end{figure} Fig. \ref{Fig:Sim1} shows the WSR performance of RSMA, NOMA, and SDMA in ``Scenario 1, 4 LEDs". It can be seen that RSMA outperforms both NOMA and SDMA, particularly at high SNR. In addition, SDMA performs better than NOMA, since the number of transmitting LEDs is larger than the number of users, allowing efficient management of MUI. However, SDMA performs worse than RSMA due to the difficulty of aligning the users' channels, caused by the nature of the VLC channel. In Fig. \ref{Fig:Sim2}, the same comparison is made for ``Scenario 2, 4 LEDs". With a smaller separation between users, channels are more correlated, which is reflected in the achievable performance. For instance, using RSMA, a WSR of 13 bits/s/Hz is achieved at SNR=40 dB, compared to a WSR of 15.5 bits/s/Hz in Fig. \ref{Fig:Sim1}. Nevertheless, the performance of RSMA still exceeds that of both NOMA and SDMA. For SNRs below 35 dB, NOMA outperforms SDMA. Indeed, NOMA is able to distinguish different users using precoding and SIC receivers. However, for SNRs above 35 dB, this procedure is less effective, and direct beamforming using SDMA becomes preferable. Consequently, NOMA is favored at lower SNRs for small user separations, whilst SDMA performs better at high SNRs. \textcolor{black}{ To examine the impact of the user separation, i.e., the correlation between user channels, the WSR performance is investigated in Fig. \ref{Fig:Sim21} for ``Scenario 3" (U$_1$ yellow + U$_2$ green in Fig. \ref{Fig:4LEDs}), where the user separation is 0.94 m. It can be seen that the WSR performance of NOMA is improved compared to ``Scenario 2."
This is due to the reduced channel correlation between the two users, since the separation distance increased from 0.4 m to 0.94 m. Furthermore, SDMA and RSMA demonstrate improved performances compared to ``Scenario 2," due to the lower interference between users.} \begin{figure}[t] \centering \includegraphics[width=3in]{2leds_far_position.eps} \caption{WSR vs. SNR per LED array (Scenario 1, 2 LEDs).} \label{Fig:Sim3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3in]{2leds_close_position.eps} \caption{WSR vs. SNR per LED array (Scenario 2, 2 LEDs).} \label{Fig:Sim4} \end{figure} In Figs. \ref{Fig:Sim3}-\ref{Fig:Sim4}, we consider the same scenarios, but with 2 LEDs. Similar to the previous results, the superiority of RSMA over the other techniques in terms of WSR is clearly illustrated. SDMA's performance is slightly degraded due to the smaller number of LEDs. Similar to Fig. \ref{Fig:Sim2}, in Fig. \ref{Fig:Sim4} the SDMA performance is degraded at SNRs below 36 dB compared to NOMA, but outperforms the latter as the SNR increases. Finally, Fig. \ref{Fig:Sim5} illustrates the impact of the users' locations on the WSR performance of the RSMA scheme. We consider the room setup with 2 LEDs and two users initially located in the middle of the room. From there, the first and second users travel toward the east and west walls, respectively, at the same constant speed. Thus, their physical separation increases until reaching its maximum of 5 m (i.e., the users have reached the opposite walls). It can be seen that the WSR varies with the separation, until a maximum value is achieved for a separation equal to 3.6 m. This corresponds to user locations [-1.8, 0, 0.8] and [1.8, 0, 0.8], where the correlation between channels is low, while each user is very close to one of the serving LEDs and captures maximal power. However, as this separation increases above 3.6 m, the WSR degrades due to the longer distances between users and LEDs.
It can be seen that these optimal users' locations are the same for different SNRs. Consequently, designing indoor spaces using RSMA-VLC requires a careful consideration of the LEDs' and users' locations. \begin{figure}[t] \centering \includegraphics[width=3.25in]{20_dB_25_dB_30.eps} \caption{Performance of RSMA for different users' locations/separations and SNRs.} \label{Fig:Sim5} \end{figure} \section{Open Issues and Research Directions} In this paper, we reviewed different multiple access techniques proposed to improve the spectral efficiency of VLC systems and minimize the encountered VLC-specific interference issues. Then, we presented a preliminary study on the utilization of RSMA within VLC systems. It has been shown that RSMA is a powerful MA scheme that can provide high data rates and reliable VLC communications. However, several associated issues need to be addressed and analyzed for the practical realization of RSMA-VLC. For instance, the impact of the non-linear distortion caused by the different circuit components, such as LEDs, PDs, and analog/digital and digital/analog converters, has to be investigated. Moreover, as RSMA is a novel MA technique, more efforts are required to study the design of its physical and MAC layers. Hence, different performance metrics, modulation and coding schemes, and security issues are open research problems in RSMA-VLC. Additionally, optimal precoding and power allocation for RSMA-VLC are still open for investigation, where new linear and non-linear techniques can be proposed and optimized. Moreover, the current literature has mainly focused on the Gaussian noise assumption, neglecting the effect of ambient light, which can cause significant performance degradation. Other current assumptions include: the receiver is always pointing upward, a LoS link is always available, and CSI is perfectly known.
However, this may not be the case in practical scenarios, where the receiver can be oriented differently, the VLC link may experience shadowing and/or blockage, and CSI knowledge may be imperfect. Consequently, the design and performance evaluation of RSMA-VLC systems that take these practical concerns into account have to be studied. Innovative solutions to circumvent the absence of a LoS may include enabling optical cooperative communications and device-to-device (D2D) communications. Indeed, optical cooperation among VLC transmitters guarantees reliable transmissions to users in a specific area \cite{Pham2019}, whereas in D2D communications, users with strong VLC links may help forward data to users with blocked VLC channels \cite{Raveendran2019}. In the design of such systems, taking into consideration the different users' QoS requirements may lead to improved performance. Finally, the analysis of massive MIMO RSMA-VLC systems is another interesting open research problem. \section {Conclusion} In this paper, we provided a conceptual background of several MA schemes for MIMO-VLC systems, along with their advantages and limitations. Specifically, our review covered NOMA and SDMA integration into VLC systems, showing how they minimize VLC interference and improve communication performance. We also reviewed RSMA for RF systems, presented as a multiple access scheme that generalizes NOMA and SDMA. Subsequently, we presented a preliminary study of the integration of RSMA into VLC systems, taking into consideration the per-LED power constraints. The SINR and WSR expressions were obtained for a two-user MISO VLC system, and the results showed the flexibility of RSMA in generalizing NOMA and SDMA, at a slightly increased design complexity. Through simulations, it has been shown that RSMA-VLC outperforms both techniques in terms of WSR. Also, RSMA is robust against channel correlation, and hence, it can be seen as a strong MA candidate for VLC in networks beyond 5G.
Finally, a number of open issues and research directions, linked to MIMO RSMA-VLC, have been presented and discussed. \bibliographystyle{IEEEtran}
\section{Problem Description} \label{Problem Description} Electrolysis is the process of decomposing a chemical product into various byproducts by applying an electrical current \cite{Wendt1999}. It takes place in \emph{electrolyzers}, which are systems composed of multiple \emph{electrochemical cells}. These cells act similarly to resistors: electrical current passing through them causes a voltage drop. The magnitude of this voltage drop depends on the operating conditions and the cell's degradation, increasing steadily when a fault is about to happen. Faults in electrochemical cells may become safety hazards. In order to diminish their occurrence, cells are usually replaced every four years. This heuristic comes from the statistical analysis of the average lifetime of past cells, which represents an aggregation of data from multiple cells. Thus, it does not take into consideration the specificity of each cell, which is needed to take action concerning their maintenance. In order to implement a more efficient strategy that adequately considers such specificity, the cell's degradation must be monitored. However, degradation can only be determined directly by performing offline, and sometimes destructive, tests \cite{Causserand2010}. Our objective is to use non-invasive methods that do not require the full stoppage of the electrolyzer. As such, the cell's degradation must be inferred indirectly from other measurable properties. In this article, our approach relies on predicting the voltage that a healthy cell would present ($\widehat{V_t}$) and comparing it to the cell's measured voltage in real time ($V_t$). If there is a divergence between the two, a fault is signaled. The divergence threshold and the type of fault diagnosed depend on the shape of the divergence, following a set of rules proposed by experts.
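The comparison between $V_t$ and $\widehat{V_t}$ can be sketched as a simple residual check. The threshold and persistence values below are placeholders; the actual diagnosis rules are expert-defined and depend on the shape of the divergence:

```python
def detect_fault(v_measured, v_predicted, threshold=0.05, min_persist=3):
    """Signal a fault when the measured voltage diverges from the
    healthy-cell prediction for `min_persist` consecutive samples.
    Returns the index at which the fault is signaled, or None."""
    run = 0
    for t, (v, v_hat) in enumerate(zip(v_measured, v_predicted)):
        # Count consecutive samples whose residual exceeds the threshold
        run = run + 1 if abs(v - v_hat) > threshold else 0
        if run >= min_persist:
            return t
    return None
```

Requiring several consecutive divergent samples avoids signaling a fault on isolated sensor glitches.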
In order to predict the voltage of a healthy cell, the following limitations are considered: \begin{enumerate} \item The cell's voltage drifts slowly over time due to its degradation. \item Even for the same operating conditions and degradation level, the voltage of a cell differs from the ones of similar cells. This difference is induced by disparities in manufacturing, installation procedures, and other factors that cannot be directly quantified. The latter is referred to in this article as the specificity of each cell. \item There is a delay between a change in the operating conditions and the response of the cell. In order to account for this delay, the operating conditions at the previous time-steps must be considered by the prediction model. \item Inside the electrolyzer, cells are electrically connected in series and they all receive the same chemical input. This means that the only measurable variable that is different for each cell is their voltage. This voltage is stored according to the position of each cell in the electrolyzer, and not according to the unique identification number of each cell. Moreover, as previously stated, cells' degradation data is not available. \item In order to predict faults, the predicted voltage must be independent of the measured voltage. This limitation entails that the measured voltage cannot be used as an input to the prediction model. \item Electrolyzers are shut down and restarted many times throughout their lifetime. During each shutdown, the data collection is stopped. Any cell may be substituted or exchanged at the discretion of the plant's operator, without being reflected in the data. The intervals when the electrolyzer is operating are called cycles; during a cycle, all the cells are under tension, and no changes are performed by the operator.
\end{enumerate} \section{Previous Work} \label{Previous Work} To be able to find the right model and techniques that apply to our problem, we first need to understand the system we are working with, which is a chemical electrolyzer. As previously stated, chemical electrolyzers are composed of multiple cells, where the electrolysis process takes place. There are different cell technologies, the most efficient being \emph{Ion-Exchange Membranes}, on which our article is focused. In these cells, the anode and the cathode are separated by a semi-permeable membrane that does not permit both electrolytes to mix, yet allows the ions needed for the electrolysis to travel across \cite{Paidar2016}. As time passes, holes start to appear in the membrane, and the electrodes start to lose their coating, thus increasing the voltage of the cell \cite{Jalali2009}. As the degradation advances, it reaches a point where an undesirable reaction between the solution in the anode and the solution in the cathode starts to occur. This reaction is characterized by a spike in the cell's voltage \cite{TheChlorineInstitute2018}. It is a dangerous situation requiring the electrolyzer to be stopped immediately. Therefore, detecting anomalies in the cell's voltage leads to predicting faults in the cell. As previously stated, we do so by predicting the expected voltage that a healthy cell would present and then comparing it with its actual measured voltage. The better the accuracy of the model used for predicting such voltage, the earlier it is possible to signal the fault. In order to calculate the expected cell's voltage ($\widehat{V_t}$), the underlying chemical reaction needs to be approximated. This function is composed of many parameters, which need to be estimated. Some of them are operation-specific, i.e., identical for all the cells that perform the electrolysis for the same operating conditions. However, as per limitation 2, there are also cell-specific parameters. 
Moreover, limitation 1 implies that there are also degradation-specific parameters that change over time as the cell degrades. In order to define a model that accounts for all these parameters, three types of data are needed: \emph{operation data}, \emph{cell-specific data}, and \emph{degradation-specific data}. Nonetheless, as per limitations 4 and 5, we have neither cell-specific data nor degradation-specific data that we can use. Thus, if only operation-specific parameters are used, the same voltage would be predicted for each cell. This is not an acceptable result, because the specificity of each cell would be lost. In order to overcome this problem, the current approach relies on fitting a different model for each cell and retraining it periodically in order to update the degradation-specific parameters. As per limitation 6, the retraining frequency must be at least once per cycle. An expert-defined parametric model is currently used for this task \cite{Tremblay2012}. Operation-specific parameters are defined by the experts, while cell and degradation-specific parameters are estimated at the beginning of each cycle by using a linear regression. This model has the advantage of being simple, thus needing fewer observations to train than equivalent non-parametric models \cite{Veillette2010}. This is crucial, as the observations used at each cycle for training the model cannot be used for predicting the cells' voltage. Therefore, during the training period, it is not possible to detect faults either. This model also has easy-to-explain results. Nonetheless, it requires an understanding of the chemical process to define it. The simplifying suppositions in the model entail an accuracy loss, as they do not account for all the details present in a production environment. An alternative approach is to use non-parametric Machine Learning (ML) techniques.
Support Vector Machines (SVM) were used to predict the voltage of a \emph{chlor-alkali} cell and explore its response to different operating conditions \cite{Kaveh2009}. They obtained better accuracy than parametric models, but the scope of their work was limited to a single cell in a controlled lab environment with pre-defined operating conditions. They carried out a similar study using artificial Neural Networks (NNs) \cite{Kaveh2008}. However, their network only had two hidden layers and a reduced number of neurons, falling short of recently developed networks. NNs are non-parametric models composed of multiple mathematical entities called \emph{neurons}. Their name comes from the fact that they are inspired by how biological brains work \cite{Goodfellow2016}. Neurons are grouped in layers. In the most basic neural architecture, called the Multi-Layer Perceptron (MLP), each neuron of a particular layer is connected to all the neurons of the previous layer. A scalar, called a \emph{weight}, quantifies each connection. The neuron's output is the sum of the values of each neuron from the previous layer multiplied by their respective weights. Up to this point, the output of the model would be a linear combination. In order to approximate non-linear functions, the output of each neuron is transformed by an activation function \cite{Nielsen2015}. Indeed, when using non-linear activation functions and enough neurons combined with data, NNs are considered to be universal approximators \cite{Pinkus1999}. For our application, this is essential: we are no longer forced to make assumptions about the underlying chemical function. There are many architectures derived from the general MLP, each of them specifically tailored to work with a different kind of input data and objective. Examples of these architectures are Recurrent Neural Networks (RNNs) and neural encoders, which are both used in this article. As per limitation 3, we are interested in RNNs for our application.
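The MLP mechanics just described (a weighted sum over the previous layer, followed by an activation) can be shown with a minimal forward pass; the layer sizes, weights, and activations below are arbitrary illustrative values:

```python
import math

def dense_layer(inputs, weights, biases, activation=math.tanh):
    """One fully connected layer: each neuron outputs the activation of
    the weighted sum of all previous-layer values plus a bias."""
    return [activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# Two inputs -> two hidden tanh neurons -> one linear output neuron
hidden = dense_layer([0.5, -1.0], [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])
output = dense_layer(hidden, [[1.0, -1.0]], [0.2], activation=lambda z: z)
```

Without the non-linear activation, stacking such layers would collapse into a single linear map; the activation is what allows the MLP to act as a universal approximator.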
An RNN is a particular neural architecture that deals with sequential data. It does so by keeping an internal state that is updated at every time-step of the sequence \cite{Karpathy2015}. This internal state serves the purpose of a memory, allowing past information to persist in the network. In order to predict the output of a certain time-step, it considers the input for that time-step and the internal state from the previous time-steps. However, RNNs struggle when dealing with long temporal sequences \cite{Bengio1994}. Long-Short Term Memory (LSTM) networks are a type of RNN that does not suffer from this problem \cite{Hochreiter1997}. LSTMs are capable of modifying their internal state, either by removing or by adding new information, through the use of learnable gates \cite{Olah2015}. This flexibility makes LSTMs more commonly used in practice than plain RNNs. A neural encoder is a type of neural architecture whose objective is to take an input vector and reduce its dimensionality to a desired one. It is usually paired with a decoder. The decoder receives the output of the encoder and transforms it to minimize an objective function. The whole network is trained by \emph{backpropagating} the loss of the decoder \cite{Rumelhart1985}. If the objective of the decoder is to reconstruct the original input vector, the network is called an autoencoder \mbox{\cite{Hinton2006}}. A review of different types of autoencoders is presented in the work of Tschannen et al. \cite{Tschannen2018}. Autoencoders are often used for anomaly detection \cite{Sakurada2014} and noise reduction \cite{Vincent2008}. They are also applicable to time sequences in combination with LSTMs \cite{Malhotra2016}. \subsection{Originality} \label{Originality} In this article, we develop a method to apply NNs to the voltage prediction of all the electrolyzer's cells.
The model works in a production environment, where operating conditions are not pre-defined, and each cell has a different level of degradation. The model also has better accuracy than the parametric model currently in use. Moreover, it does not need more observations per cycle to start making predictions. As NNs require more data than parametric models to approximate a function, we take a different approach. Instead of fitting a different model for each cell and cycle, we propose a neural network that is trained with the data currently available, thus avoiding retraining once deployed. This network is based on an \emph{encoder-decoder} architecture, where we substitute the \emph{decoder} with a \emph{predictor} -- a subnetwork that predicts the cell's voltage. Therefore, instead of training the network by minimizing the reconstruction error of the input sequence, we train it by minimizing the error between the voltage predicted by the network and the measured voltage. In order to overcome all six limitations previously mentioned, the originality of our approach in comparison with the already existing approaches is: \begin{enumerate} \item The encoder subnetwork addresses limitations 1, 2, and 6. It does so by finding two features that represent the specificity of each cell and that are updated at each cycle to account for the degradation. \item The predictor addresses limitation 3, as it takes into account the temporality of the observations. \item Together, the encoder and the predictor address limitations 4 and 5. The predictor does not use the voltage as an input, yet it is still able to predict a different voltage for each cell, despite using the same operating conditions as input. It does so by taking the output of the encoder as an additional input, which is unique for each cell. Hence, the voltage prediction is not biased by the cell's measured voltage.
\end{enumerate} \section{Data Preparation} \label{Data Preparation} \subsection{Features} \label{Features} Data is collected from two different sources. The first source is the plant's control system, which registers three features common to all the cells in the electrolyzer. These features are the electrical current that passes through the cells ($I$), and the temperature and concentration of the caustic at the outlet of the electrolyzer ($T$ and $X$, respectively). The second source is the output of sensors that measure the voltage of each cell individually ($V$). Cell measurements are carried out sequentially, which means that the controller reads a sensor and then proceeds to read the next one. This process is repeated for every cell in the electrolyzer and takes between one and two seconds to loop over all the cells. The plant's controller, on the other hand, does not follow a strict pattern of data collection. Each sensor connected to it has a different sampling rate, varying from one to around thirty seconds, which results in data observations that are misaligned and sampled at different intervals. In order to solve this problem, we downsample and align the measurements to the minute. This approach provides enough resolution to detect possible faults, as faults develop on the scale of hours. It also reduces the amount of unnecessary training data and facilitates the deployment of the model on slower computing processors, as the latency requirements are less strict. The resulting data after alignment is tabulated. Each row represents a different time of observation, and there is a different column for each electrolyzer feature and cell. An electrolyzer is usually formed of around 160 cells. An excerpt of this data is presented in Table~\ref{table: Excerpt of the dataset}.
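The downsampling-and-alignment step can be sketched as follows; the pure-Python bucketing below only illustrates averaging all readings that fall within the same minute, while the production pipeline is more involved:

```python
from collections import defaultdict

def align_to_minute(samples):
    """Average irregularly sampled (timestamp_seconds, value) pairs over
    one-minute buckets, returning {bucket_start_seconds: mean_value}."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // 60)].append(value)  # group readings by minute
    return {minute * 60: sum(vs) / len(vs)
            for minute, vs in sorted(buckets.items())}
```

Applying this to every sensor column yields rows aligned on a common one-minute grid, which is what the tabulated excerpt reflects.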
\begin{table}[t] \caption{Excerpt of the dataset where \emph{n} represents the last observation, and \emph{m} the last electrolyzer's cell.} \label{table: Excerpt of the dataset} \vskip 0.1in \begin{center} \begin{small} \begin{tabular}{lcccccc} \toprule Index & $I$ & $T$ & $X$ & $V_1$ & \dots & $V_m$ \\ \midrule 1 & 1.163 & 76.668 & 31.949 & 2.475 & \dots & 2.468 \\ 2 & 1.887 & 76.555 & 31.945 & 2.532 & \dots & 2.518 \\ 3 & 2.072 & 76.501 & 31.941 & 2.562 & \dots & 2.549 \\ 4 & 2.036 & 76.449 & 31.937 & 2.560 & \dots & 2.554 \\ 5 & 2.425 & 76.397 & 31.937 & 2.577 & \dots & 2.562 \\ \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ $n$-4 & 16.311 & 88.742 & 32.672 & 3.380 & \dots & 3.318 \\ $n$-3 & 16.310 & 88.733 & 32.673 & 3.380 & \dots & 3.318 \\ $n$-2 & 16.310 & 88.724 & 32.673 & 3.380 & \dots & 3.318 \\ $n$-1 & 16.309 & 88.715 & 32.674 & 3.381 & \dots & 3.317 \\ $n$ & 16.309 & 88.706 & 32.675 & 3.381 & \dots & 3.317 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.2in \end{table} \subsection{Cycles} \label{Cycles} Electrolyzers are stopped and started many times during their lifetime due to production constraints, changing demand, work shifts, or maintenance requirements. The interval of time between a consecutive start and stop is known as a \emph{cycle}, and its length ranges from a few hours to several weeks, depending on those conditions. Each cycle is divided into two phases of different lengths -- the \emph{startup} and the \emph{operation} phase. Both of the cycle's phases are shown in Figure~\ref{fig: Evolution of the electrical current}. The startup phase occurs when the electrical current increases from zero to 16 kA. The 16 kA threshold, defined by an expert, represents the moment when the cells reach their full production conditions. The rate at which the current increases differs for each startup, due to changes in the operating practices decided by the plant's operators.
The length of this phase ranges anywhere from 20 minutes to 12 hours, which complicates making comparisons between different startups. This difference in length is shown in Figure~\ref{fig: Voltage startup same cell}. As previously mentioned, each cell responds differently to the same operating conditions, resulting in a different voltage increase during the startup. The voltage increases of three different cells are shown in Figure~\ref{fig: Voltage startup same cycle}. Note that cycles A, B, and C, as well as cells X, Y, and Z, are used as examples throughout the article. We chose these cycles because their operating conditions are significantly different. The three cells were chosen because they have different levels of degradation. Please note that their color schemes remain the same for all the figures in the article. The operation phase includes the rest of the cycle. The electrical current varies between 7 kA and 16 kA. We are especially interested in predicting the voltage during this phase, as a cell's fault would cause a significant disturbance. \begin{figure}[t] \caption{Evolution of the electrical current of the electrolyzer during a complete cycle. The startup phase is plotted in blue and the operation phase, in orange.
Notice the difference in length between the two phases.} \label{fig: Evolution of the electrical current} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig1.pdf}} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \caption{Voltage during the startup phase of the same cell (Y) for three different cycles (A, B, C).} \label{fig: Voltage startup same cell} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig2.pdf}} \end{center} \vskip -0.2in \end{figure} \begin{figure}[t] \caption{Voltage during the startup phase of three different cells (X, Y, Z) for the same cycle (A).} \label{fig: Voltage startup same cycle} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig3.pdf}} \end{center} \vskip -0.2in \end{figure} \section{Methodology} \label{Methodology} \subsection{Data Processing} \label{Data Processing} As previously stated, when the electrolyzer is shut down, data collection is stopped, and so, it appears as missing data in the file. A new cycle is detected if the time difference between two consecutive observations is longer than 10 minutes and the following experts' conditions are met: \begin{enumerate} \item The electrical current reaches 16 kA during the cycle. \item The duration of the startup phase is less than 12 hours. \item The operation phase has at least the same duration as the startup phase. \end{enumerate} Data files are processed in order to ensure the satisfaction of these conditions, following the procedure outlined in Algorithm~\ref{alg: Detect Possible Cycles} and Algorithm~\ref{alg: Validate Cycles}.
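The gap-based splitting of Algorithm~\ref{alg: Detect Possible Cycles} can be sketched in Python as follows (timestamps in minutes; as a small deviation from the pseudocode, the trailing segment after the last gap is also kept as a candidate cycle):

```python
def detect_possible_cycles(time_array, gap_minutes=10):
    """Split observation timestamps into candidate cycles wherever the
    gap between consecutive samples exceeds `gap_minutes`. Validation
    against the three experts' conditions happens in a second pass."""
    cycles, start = [], 0
    for i in range(len(time_array) - 1):
        if time_array[i + 1] - time_array[i] > gap_minutes:
            cycles.append(time_array[start:i + 1])  # close the cycle at the gap
            start = i + 1
    cycles.append(time_array[start:])  # keep the trailing segment
    return cycles
```

Each candidate cycle is then checked against the 16 kA threshold and the startup/operation duration conditions before being accepted.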
\begin{algorithm}[b] \caption{Detect Possible Cycles} \label{alg: Detect Possible Cycles} \begin{algorithmic} \STATE {\bfseries Input:} $time\_array$ with $n$ observations \STATE {\bfseries Initialize:} $initial\_cycle\_index \gets 0$, $i \gets 0$ \STATE {\bfseries Initialize:} $cycle\_list \gets [\ ]$ \WHILE{$i <$ length($time\_array$) $- 1$} \STATE $diff_{i} \gets time\_array_{(i+1)} - time\_array_{(i)}$ \IF {$diff_{i} > 10$ minutes} \STATE $cycle \gets time\_array[initial\_cycle\_index:i]$ \STATE $cycle\_list$.append($cycle$) \STATE $initial\_cycle\_index \gets i+1$ \ENDIF \STATE $i \gets i + 1$ \ENDWHILE \end{algorithmic} \end{algorithm} \begin{algorithm}[b] \caption{Validate Cycles} \label{alg: Validate Cycles} \begin{algorithmic} \STATE {\bfseries Input:} $cycle\_list$ \FOR{each $cycle$ in $cycle\_list$} \IF {any observation in $cycle$ has $current > 16\ kA$} \STATE $idx \gets$ First observation where $current > 16\ kA$ \IF {duration($cycle[0:idx]$) $\leq$ 12 hours} \STATE $startup \gets cycle[0:idx]$ \STATE $operation \gets cycle[idx:-1]$ \IF {length($operation$) $\geq$ length($startup$)} \STATE $cycle$ is a valid cycle \ENDIF \ENDIF \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} Once the data is structured in cycles, with their startup and operation phases defined, we proceed to scale it. For each data column, we use the \emph{unity-based normalization} scaling method, also known as \emph{Min-Max Scaling}. It scales the data linearly, so all the values are in the range [0,1] \cite{Raghav2018}. We do not use more sophisticated scaling techniques since our data is not normally distributed and since outliers are already removed in the original database. The column is scaled as \begin{small}$X_{scaled}=(X-min)/(max-min)$\end{small}. The minimum (\emph{min}) and maximum (\emph{max}) values used for scaling each feature are common to all the production plants and are determined by experts.
They are the same for all the electrolyzers, so it is possible to use the same trained model for all of them. Missing observations appear as \emph{`NaN'} and are substituted by the value `-1' in order to keep them outside of the scaler's range. After following this procedure, a subset of the resulting data is shown in Table~\ref{table: subset of processed data from cycle A}. The final step is to export the data to TFRecord files, a binary format developed by Google and optimized for the tf.data.Dataset preprocessing API of TensorFlow 2.0 \cite{Google2019}. We export a different file for each cycle, containing all the cells' observations. \begin{table}[t] \caption{Subset of data from Cycle A after being processed. Data has been scaled and missing observations replaced by `-1'. Note that, as expected, the input features at each time-step are the same for cells \emph{one} and \emph{m}, where \emph{m} denotes the electrolyzer's last cell.} \label{table: subset of processed data from cycle A} \vskip 0.1in \begin{small} \begin{center} \begin{tabular}{lcccccc} \toprule Cell & Phase & Index & $I$ & $T$ & $X$ & $V$ \\ \midrule 1 & $Startup$ & 1 & 0.010 & 0.417 & 0.790 & 0.441 \\ & & 2 & 0.054 & 0.413 & 0.790 & 0.450 \\ & & 3 & 0.066 & 0.411 & 0.789 & 0.455 \\ & & 4 & 0.064 & -1 & 0.789 & 0.454 \\ & & \dots & \dots & \dots & \dots & \dots \\ & & $i^a$ & 0.920 & 0.790 & 0.851 & 0.587 \\ \cmidrule{2-7} & $Oper.$ & 1 & 0.928 & 0.806 & 0.852 & 0.587 \\ & & 2 & 0.928 & 0.811 & -1 & 0.587 \\ & & 3 & 0.928 & 0.815 & 0.852 & 0.587 \\ & & 4 & 0.928 & 0.819 & 0.852 & 0.586 \\ & & \dots & \dots & \dots & \dots & \dots \\ & & $j^b$ & 0.940 & 0.881 & 0.856 & 0.586 \\ \midrule \dots & \dots & \dots & \dots & \dots & \dots & \dots \\ \midrule $m$ & $Startup$ & 1 & 0.010 & 0.417 & 0.790 & 0.440 \\ & & 2 & 0.054 & 0.413 & 0.790 & 0.448 \\ & & 3 & 0.066 & 0.411 & 0.789 & 0.453 \\ & & 4 & 0.064 & -1 & 0.789 & 0.454 \\ & & \dots & \dots & \dots & \dots & \dots \\ & & $i^a$ &
0.920 & 0.790 & 0.851 & 0.573 \\ \cmidrule{2-7} & $Oper.$ & 1 & 0.928 & 0.806 & 0.852 & 0.575 \\ & & 2 & 0.928 & 0.811 & -1 & 0.575 \\ & & 3 & 0.928 & 0.815 & 0.852 & 0.574 \\ & & 4 & 0.928 & 0.819 & 0.852 & 0.574 \\ & & \dots & \dots & \dots & \dots & \dots \\ & & $j^b$ & 0.940 & 0.881 & 0.856 & 0.576 \\ \bottomrule \end{tabular} \end{center} \textsuperscript{$a$} $i$ : Last \emph{startup} observation of cycle A \\ \textsuperscript{$b$} $j$ : Last \emph{operation} observation of cycle A \end{small} \vskip -0.2in \end{table} \subsection{Model Architecture} \label{Model Architecture} As previously explained, we want to predict the voltage over the whole operation phase of the cycle, so the training must only be performed during the startup phase. However, neural networks are complex non-linear methods that require a large amount of data to converge to an optimal solution. Hence, only using the startup data to train a new model at each cycle does not provide satisfactory results. We have collected data from multiple cells across multiple cycles of different electrolyzers. Ideally, we would like to use all this data to train our network, but it is not straightforward to do so. The reason is that, as previously explained, each cell presents a different response to operating conditions, but no feature or labeled data is available to differentiate them. One possible approach is to train a naïve base model and then use transfer learning to retrain the last few layers of the model. However, we would need to do that for each cell at the beginning of each cycle, which is a computationally intensive task that would complicate the deployment \cite{Weiss2016}. Instead, we propose a different approach inspired by how an expert may look at many cells and differentiate them based solely on the evolution of their voltage during the startup sequence.
Based on this idea, we deduce that it is possible to use the voltage of each cell during the startup phase to infer meaningful features that characterize such cells. These features account for both the specificity of the cell and its degradation and are used as input for the predictor model. This model also takes the operating conditions in order to predict the cell's voltage during the operation phase. The cell's voltage is only used as input during the startup phase for inferring the features. Indeed, the predictor model does not use the cell's voltage during the operation phase, so the predictions are not biased. Given enough data belonging to multiple cells and cycles, we would cover the whole subspace of possible features. Thus, new cells presented to the model would behave like cells that the model had already seen before, and so it could also predict their voltage accurately. By using this approach, it is possible to train a single model that works for all the cells. Such a model predicts a different voltage for each cell based not only on the operating conditions but also on the specific cell's features inferred during the startup. This effectively avoids the need to retrain the model for each cycle and cell. At the same time, it allows us to train it beforehand with the whole dataset that we already have. In order to implement this model, we propose a neural network formed by two subnetworks: a self-supervised encoder and a decoder -- or voltage predictor. These two subnetworks are developed in detail in the two following subsections. The algorithm required for training such a network is developed in Section~\ref{section: Training Procedure}. \subsubsection{Encoder} \label{section: Encoder} The encoder subnetwork is the key part of this model. Its goal is to infer the features that characterize the behavior of a particular cell during a certain cycle, using only its startup phase.
It is a self-supervised method, as it does not need labeled degradation data. This subnetwork does not have a loss function of its own. Instead, it is trained with the loss backpropagated from the voltage predictor subnetwork. At its core, it performs a form of dimensionality reduction. However, the encoded vector -- the dimensionality-reduced vector -- is not the result of a statistical procedure, but the optimal representation that eases the learning of the predictor subnetwork. We use the three input features from the control plant -- temperature, caustic concentration, and electrical current -- as well as the voltage of the cell. These inputs are needed to standardize the cell's voltage to the specific operating conditions of each startup. All in all, the shape of the input vector is [720 time-steps, 4 features]. The masking layer forces the successive layers to ignore a time-step if all the features of that time-step are equal to the mask value `-1', in order to filter out the time-steps added during batch padding. In order to account for the temporality of the sequence, the next layer is a Long Short-Term Memory (LSTM). After it, two dense layers are chained, smoothing the transition to the final two-dimensional encoded result. Its output is a vector of coordinates [X, Y] for each cell and startup, with a shape of (1 time-step, 2 features). Moreover, these coordinates can be represented in a graph, providing an insight into the decision process taken by the network. The characteristics of the layers that compose this subnetwork -- in TensorFlow's terminology -- are depicted in Figure~\ref{fig: Encoder subnetwork}.
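The padding and masking conventions described above can be sketched in NumPy. The `-1' mask value and the 720-step maximum startup duration follow the text; the constant and function names are ours, and the mask semantics mirror those of a Keras-style masking layer (a step is skipped only when \emph{all} of its features equal the mask value).

```python
import numpy as np

MASK_VALUE = -1.0
MAX_STARTUP_STEPS = 720  # maximum startup duration, in minutes

def pad_startup(startup):
    """Pad a (steps, 4) startup sequence with -1 rows up to 720 time-steps."""
    n_steps, n_feat = startup.shape
    padded = np.full((MAX_STARTUP_STEPS, n_feat), MASK_VALUE)
    padded[:n_steps] = startup
    return padded

def time_step_mask(padded):
    """True for time-steps the LSTM should process.

    A step is ignored only when *all* of its features equal the mask
    value, so a single missing sensor reading (one -1 among otherwise
    valid features) is still processed.
    """
    return ~np.all(padded == MASK_VALUE, axis=-1)
```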
\begin{figure}[t] \caption{Encoder subnetwork} \label{fig: Encoder subnetwork} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig4.pdf}} \end{center} \vskip -0.2in \end{figure} \subsubsection{Predictor} \label{section: Predictor} The predictor subnetwork, as its name indicates, is responsible for predicting the cell's voltage. It takes two inputs: a window of time-steps from the operation phase and the encoded representation of the cell's startup phase. We have heuristically determined that a window of four observations is enough to represent the dynamics of the chemical phenomena behind the cell's response. The encoded cell's startup is repeated four times and concatenated with the window of operation features. Note that the voltage is not included in the window of operation features. Two LSTM layers are used to find temporal correlations between the observations of each window. Two dense layers follow, in order to output the predicted voltage. The output layer has a sigmoid activation function, as the output voltage was scaled previously to the range [0, 1]. The whole model is trained by minimizing the loss between the voltage predicted by this subnetwork and the measured voltage. The Adam optimizer \cite{Kingma2014} and the backpropagation algorithm are used. For this subnetwork to get a good accuracy in the voltage prediction, the encoder must learn a faithful representation of the characterization of the cell. Figure~\ref{fig: Predictor subnetwork} presents its layers' parameters. \begin{figure}[t] \caption{Predictor subnetwork} \label{fig: Predictor subnetwork} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig5.pdf}} \end{center} \vskip -0.2in \end{figure} \subsection{Training Procedure} \label{section: Training Procedure} We start with the data presented in Table~\ref{table: subset of processed data from cycle A}. 
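The predictor's input assembly described in Section~\ref{section: Predictor} -- repeating the two-dimensional startup encoding over the four-step window and concatenating it with the operating conditions -- can be sketched in NumPy as follows; the function name is ours.

```python
import numpy as np

WINDOW = 4  # heuristically chosen window length (see the predictor subsection)

def predictor_input(encoded_startup, operation_window):
    """Build one predictor observation.

    encoded_startup:  shape (2,)  -- the [X, Y] coordinates from the encoder.
    operation_window: shape (4, 3) -- operating conditions (I, T, X) only;
                      the operation-phase voltage is deliberately excluded.
    Returns an array of shape (4, 5).
    """
    tiled = np.tile(encoded_startup, (WINDOW, 1))             # (4, 2)
    return np.concatenate([operation_window, tiled], axis=-1)
```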
As previously mentioned in Section~\ref{section: Predictor}, in order to account for the temporality of the chemical reaction, we first need to group four consecutive time-steps into a single data entry. This data entry is known as \emph{window} or \emph{observation}. In order to train a model with this kind of data, we define a custom training loop. Algorithm~\ref{alg: Stochastic Training Algorithm} presents the training loop for stochastic training -- one observation per forward-backward pass. However, such a training loop is not efficient. In order to leverage the parallel computation made possible by GPUs, we use \emph{mini-batch} training instead. In this mode of training, many observations are grouped in a \emph{batch} and ingested by the GPU at the same time.\linebreak \makeatletter \newcommand\fs@spaceruled{\def\@fs@cfont{\bfseries}\let\@fs@capt\floatc@ruled \def\@fs@pre{\vspace{1\baselineskip}\hrule height.8pt depth0pt \kern2pt} \def\@fs@post{\kern2pt\hrule\relax} \def\@fs@mid{\kern2pt\hrule\kern2pt} \let\@fs@iftopcapt\iftrue} \makeatother \floatstyle{spaceruled} \restylefloat{algorithm} \begin{algorithm}[t] \caption{Stochastic Training Algorithm} \label{alg: Stochastic Training Algorithm} \begin{algorithmic} \STATE {\bfseries Input:} Directory with $n$ cycles in TFRecord format \FOR {each $file$ in folder} \STATE $cycle$ $\gets$ TFRecord.load($file$) \FOR {each $cell$ in $cycle$} \STATE $cell\_startup$, $cell\_operation$ $\gets$ $cell$.split() \FOR {each window in split$(cell\_operation)$} \STATE $operating\_conditions \gets window[!voltage]$ \STATE $cell\_voltage \gets window[voltage][-1]$\vspace{0.5em} \COMMENT{Forward pass} \STATE $encoded\_startup \gets$ encoder$(cell\_startup)$ \STATE $predicted\_voltage \gets$ predictor$($ \STATE \hspace{4em} $operating\_conditions,$ \STATE \hspace{4em} $encoded\_startup)$ \STATE $loss \gets$ mean\_squared\_error$($ \STATE \hspace{4em} $predicted\_voltage,$ \STATE \hspace{4em} $cell\_voltage)$ \vspace{0.5em}
\COMMENT{Backward pass} \STATE $update\_network\_weights(loss)$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} In order to make it possible to tweak different parameters effortlessly, we decided to generate the windows and batches in real-time during training. The procedure is similar to that followed when doing \emph{data augmentation}, which is used extensively in computer vision \cite{Taylor2017}. While the GPU is processing a batch of windows, the CPU is already processing the next batch. The GPU is fast, so the CPU quickly becomes the bottleneck in such a pipeline. In order to overcome this issue, we use the {tf.data.Dataset} API to parallelize the input pipeline. We trained the network with two GPUs in parallel and a combined batch size of 1024 observations. Thanks to this parallelized CPU input pipeline, the utilization of both GPUs was consistently over 90\%. Three operations are crucial to make the network converge efficiently: \emph{padding}, \emph{shuffling}, and \emph{window striding}. \subsubsection{Padding} \label{section: Padding} Not every cycle's startup has the same duration. However, all the batches that we feed to the GPU must have the same number of time-steps. We solve this problem by padding the sequences -- adding `-1' values at the end of each observation's corresponding startup. This way, all the startups have the same duration of 720 minutes, which is the maximum duration of a startup. This padding value is later ignored by the masking layer of the encoder subnetwork, so it does not affect the results. \linebreak Table~\ref{table: Padded Startup} shows the startup phase of cycle A after being padded. \begin{table}[t] \caption{Padded startup. 
$i$ represents the last startup observation before padding.} \label{table: Padded Startup} \begin{center} \begin{small} \begin{tabular}{lcccc} \toprule Index & $I$ & $T$ & $X$ & $V$ \\ \midrule 1 & 0.010 & 0.417 & 0.790 & 0.441 \\ 2 & 0.054 & 0.413 & 0.790 & 0.450 \\ 3 & 0.066 & 0.411 & 0.789 & 0.455 \\ 4 & 0.064 & -1.000 & 0.789 & 0.454 \\ \dots & \dots & \dots & \dots & \dots \\ i & 0.920 & 0.790 & 0.851 & 0.587 \\ i+1 & -1 & -1 & -1 & -1 \\ \dots & \dots & \dots & \dots & \dots \\ 719 & -1 & -1 & -1 & -1 \\ 720 & -1 & -1 & -1 & -1 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.2in \end{table} \subsubsection{Shuffling} \label{section: Shuffling} We observed that the time required by the network to converge to an optimal solution was reduced considerably by the introduction of shuffling in the training process. There is an explanation for this behavior. Without shuffling, most batches only have observations from a single cell and cycle of the same electrolyzer. This situation constitutes a problem because the gradient updates from consecutive batches are very different. However, after introducing shuffling, each batch now has observations from different cells, cycles, and electrolyzers. As a result, we obtain less noise in the backpropagated gradient and a smoother training loss, which helps the network weights to converge. However, due to the considerable number of observations in the training dataset, it is not possible to perform the shuffling operation entirely in-memory. In order to overcome this limitation, we combine two strategies: \begin{enumerate} \item \underline{Shuffle Buffer}: instead of shuffling the whole dataset, we only shuffle one subset of observations at a time. This subset is the shuffle buffer, and its number of observations is limited by the size of the computer's RAM. After the observations have been shuffled, we give them to the network for training.
Once the buffer is depleted, a new subset is read, and the same operation is performed again. \item \underline{Interleaving}: as previously stated, we have a different file for each cycle and electrolyzer. We randomly choose a file, read an observation from it, and add it to the shuffle buffer. This process is repeated in a loop. Thanks to interleaving, the shuffle buffer is filled with observations from multiple cycles and electrolyzers, thus increasing the randomness of the batches given to the network. \end{enumerate} \subsubsection{Window Striding} \label{section: Window Striding} As previously shown in Algorithm~\ref{alg: Stochastic Training Algorithm}, the encoder subnetwork updates its weights for each batch of operation observations. However, the encoder task of inferring the cell's features during the startup is more complex than that of the predictor. Hence, it is more efficient to train the network with fewer observations per cycle and a greater variety of startup sequences. To achieve this, we increase the stride of our windowing function. The \emph{stride} defines how many windows of the sequence are skipped between two consecutive training observations. For example, a stride of 64 means that, for each cell and cycle, we only take one window every 64. \subsection{Testing Procedure} \label{section: Testing Procedure} \subsubsection{Parametric Model} \label{section: Parametric Model} We use the parametric model as the baseline for comparing the results obtained by our neural network. This model predicts the cell's voltage ($\hat{V}$) following the equation presented in Table~\ref{eq: EDE model}. The advantage of this model over other parametric models is that it requires fewer observations to reach an equivalent accuracy. This is crucial since its training data is limited to the startup phase of each cycle.
\begin{table}[h] \caption{Parametric model} \label{eq: EDE model} \vskip 0.15in \begin{center} \begin{small} \begin{sc} $\hat{V}=u_0+[k+(90-T)\ast C_t+(32-X)\ast C_x]\ast I/A$ \vskip 0.15in \begin{tabular}{c} \midrule Parameters \\ \midrule \end{tabular} \end{sc} \begin{tabular}{l} $C_t$ : Temperature correction factor [$V/^{\circ}C * m^2/kA$] \\ $C_x$ : Caustic correction factor [$V/\% * m^2/kA$] \\ $u_0$ : Cell's equilibrium voltage [$V$] \\ $k$ : Load dependent resistance [$V * m^2/kA$] \\ $A$ : Membrane's surface area [$m^2$] \\ \end{tabular} \vskip 0.15in \begin{sc} \begin{tabular}{c} \midrule Inputs \\ \midrule \end{tabular} \end{sc} \begin{tabular}{l} $I$ : Electrical Current [$kA$] \\ $T$ : Temperature [$^{\circ}C$] \\ $X$ : Concentration [\%] \\ \end{tabular} \end{small} \end{center} \end{table} The cells' manufacturer provides parameter $A$, which for the cells in our dataset is 2.721 $m^2$. Parameters $C_t$ and $C_x$ are estimated for each plant by experts. For our test plant, $C_t$ = 0.0016 and $C_x$ = -0.0031. Parameters $u_0$ and $k$ depend on the degradation of each specific cell and must be estimated at the beginning of each cycle. To estimate them, a linear regression is fitted by minimizing the sum of squared residuals \begin{small}$\sum{(\hat{V}-V)}^2$\end{small} over all of the cycle's startup observations. \subsubsection{Method} \label{section: Method} All the tests were carried out on data collected over three years from one electrolyzer. During this period, the electrolyzer went through 40 different cycles. As the electrolyzer has 160 cells, there are 6400 combinations of cells and cycles in the dataset. The data used follows the structure previously presented in Table~\ref{table: subset of processed data from cycle A}. A new parametric model was fitted for each cell during the startup phase of each cycle, following the equation in Table~\ref{eq: EDE model}. Consequently, 6400 different models were trained.
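Since the model in Table~\ref{eq: EDE model} is linear in $u_0$ and $k$ once the expert constants are fixed, the per-cycle fit reduces to ordinary least squares: move the known correction terms to the left-hand side and solve for the two unknowns. A minimal sketch (function names are ours; the constants are the test-plant values quoted above):

```python
import numpy as np

# Expert-provided constants for the test plant (values from the text).
A, C_T, C_X = 2.721, 0.0016, -0.0031

def fit_parametric(I, T, X, V):
    """Estimate u0 and k from the startup observations by least squares.

    V_hat = u0 + [k + (90-T)*C_t + (32-X)*C_x] * I/A is linear in (u0, k),
    so subtract the known correction term from V and solve an ordinary
    least-squares problem with design matrix [1, I/A].
    """
    load = I / A
    y = V - ((90 - T) * C_T + (32 - X) * C_X) * load
    design = np.column_stack([np.ones_like(load), load])
    (u0, k), *_ = np.linalg.lstsq(design, y, rcond=None)
    return u0, k

def predict_parametric(u0, k, I, T, X):
    """Evaluate the parametric model for given operating conditions."""
    return u0 + (k + (90 - T) * C_T + (32 - X) * C_X) * I / A
```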
In contrast, with our approach, only one neural network model was trained. The network was trained with data from six different electrolyzers from the same plant, each one having different cells and startup sequences. In total, around 43200 combinations of cells and cycles were seen by the network during training. Naturally, the electrolyzer used for testing was excluded both from training and from the validation set used to decide when to stop training the network. To make predictions, the network receives the startup sequence, but no retraining is performed. Figure~\ref{fig: Prediction Flowchart} illustrates the differences between the process followed by the parametric model and the one followed by the neural network. \begin{figure}[t] \caption{Prediction Flowchart} \label{fig: Prediction Flowchart} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig6.pdf}} \end{center} \vskip -0.2in \end{figure} \section{Results and Discussion} \label{section: Results and Discussion} \subsection{Network Insight} \label{section: Network Insight} We start by discussing the encoder's results. In Figure~\ref{fig: Startup Encoding}, it is possible to see the evolution of the encoding of cells X, Y, and Z along cycles A, B, and C of the testing electrolyzer. Each point in Figure~\ref{fig: Startup Encoding} corresponds to the encoded startup sequence of a specific cell and cycle. As previously explained, each cell has its own specificity and degradation. Let us take the example of cell Z to illustrate the validity of the encoder's results. We know from an expert that cell Z is the most degraded cell of the whole electrolyzer for these three cycles. In the graph, we can clearly see this, as cell Z is plotted at the cycle's extreme. Since the startup sequences of the testing electrolyzer are different from those of the training electrolyzers, this shows that the encoder is effectively learning the degradation of the cells and not only memorizing the startup sequences.
\begin{figure}[t] \caption{Evolution of the encoding of cells X, Y, and Z along cycles A, B, and C. The axes of the graph do not have units, as they are the result of a dimensionality reduction. They take values in the interval [0,1] since the encoder's output is a sigmoid function.} \label{fig: Startup Encoding} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig7.pdf}} \end{center} \vskip -0.2in \end{figure} Furthermore, the relative position of a cell compared to the rest of the cells of that cycle stays stable over time. This circumstance can be used as a safeguard for the prediction. Indeed, if we notice that the predicted voltage of a cell presents a significant error, we can check in the graph whether the relative position of that cell has changed between the last cycle and the current one. If there is a significant variation, the voltage prediction error is most likely due to a problem with the encoder subnetwork and not with the cell. This is why we say that the network's results are interpretable. In Figure~\ref{fig: Predicted Voltage}, we present the output voltage of the three cells highlighted in Figure~\ref{fig: Startup Encoding} during a subset of the operation phase of cycle A. We show that our model is capable of predicting their respective voltages. We can also see that cell X, whose startup was plotted at the top of the cycle in Figure~\ref{fig: Startup Encoding}, presents the lowest voltage of the three. Cell Y was encoded in the middle in Figure~\ref{fig: Startup Encoding} and, indeed, its voltage lies between those of the two other cells. Similarly, cell Z was at the bottom of the cycle in Figure~\ref{fig: Startup Encoding}, and it presents the highest voltage in Figure~\ref{fig: Predicted Voltage}. This result is consistent in our tests with different cells. Thus, we can conclude that the magnitude of the predicted voltage is related to the position learned by the encoder.
\begin{figure}[t] \caption{Predicted voltage for cells X, Y, and Z during a period of one week belonging to cycle A. The solid lines represent the real voltage of each cell, while the dotted lines represent the predicted voltage by the network.} \label{fig: Predicted Voltage} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig8.pdf}} \end{center} \vskip -0.2in \end{figure} In order to better visualize the prediction error, a closer look into a two-day period of the operation phase of cycle A is provided in Figure~\ref{fig: Models vs Electrical Current}. We compare the behavior of the neural network against the parametric model and show that the voltage predicted by the network is more stable than the one predicted by the parametric model. Indeed, the parametric model produces an unstable behavior because the experts' understanding of the reaction's kinetics is limited, and because, as previously explained, the parametric model must be simple in order to be trained using only the startup data. The response of the cell varies between a \emph{high load} and a \emph{low load}. However, the parametric model is not capable of adapting its response to both of them, so it fits a response based on a \emph{medium load}. Hence, an increased error is noticeable in Figure~\ref{fig: Models vs Electrical Current} when the current is around 16 kA or around 7 kA. Indeed, the parametric model works better around 13 kA, which corresponds to a medium load. The neural network, however, has no problem predicting the response for the different load levels, as it is not constrained to follow a linear relationship between the inputs. The fact that our model considers the temporality of the time series also contributes to decreasing the absolute error, as the kinetics of the cell are taken into account by the predictor subnetwork. This is especially helpful during load changes, as the model produces smooth transitions. 
\begin{figure}[t] \caption{Extract of cycle A. The upper subgraph shows the absolute error between the predicted and the measured voltage, averaged over Cells X, Y, and Z. The lower subgraph presents the electrical current.} \label{fig: Models vs Electrical Current} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig9.pdf}} \end{center} \vskip -0.2in \end{figure} \subsection{Accuracy} \label{section: Accuracy} We are interested in comparing the accuracy between cycles (\emph{inter-cycle}), as well as comparing the accuracy inside the cycle (\emph{intra-cycle}). We seek accurate results in both cases. \subsubsection{Inter-Cycle} \label{section: Inter-Cycle} For each cell and cycle, we calculate the average absolute error over all the observations. We then calculate a set of statistics for that averaged error, which are presented in Table~\ref{table: Inter-Cycle Error}. Measuring the accuracy between cycles is vital, as a model must work correctly for as many cells and cycles as possible. In fact, a model whose predictions are consistent is preferred over one that makes excellent predictions for some cells but weak ones for the rest, even if the former has a higher average error. Based on the results presented in Table~\ref{table: Inter-Cycle Error}, we conclude that our neural network is more reliable than the parametric model. Not only is the average error lower, but also the standard deviation, which indicates that the error is less dispersed between different cells and cycles. The parametric model produces more outliers, as its two highest percentiles present a significant deviation compared to those of the network.
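The inter-cycle statistics in Table~\ref{table: Inter-Cycle Error} follow a simple recipe: average the absolute error within each (cell, cycle) combination, then describe the spread of those averages across the dataset. A sketch of this computation (the function name is ours):

```python
import numpy as np

def inter_cycle_stats(abs_errors_per_combo):
    """abs_errors_per_combo: one 1-D array of absolute errors (in mV) per
    (cell, cycle) combination.

    First average the error inside each combination, then describe the
    spread of those per-combination averages across the dataset.
    """
    avg = np.array([errors.mean() for errors in abs_errors_per_combo])
    stats = {"mean": avg.mean(), "std": avg.std()}
    for p in (25, 50, 75, 90, 95, 99):
        stats[f"P{p}"] = np.percentile(avg, p)
    return stats
```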
\begin{table}[b] \caption{Statistics of the average absolute error of both models across the different cells and cycles, presented for different percentiles.} \label{table: Inter-Cycle Error} \vskip 0.1in \begin{center} \begin{small} \begin{tabular}{lcc} \toprule Stats & Neural Network [$mV$] & Parametric Model [$mV$] \\ \midrule $\mu$ & 11.977 & 25.668 \\ $\sigma$ & 11.503 & 30.304 \\ $P_{25\%}$ & 5.470 & 8.398 \\ $P_{50\%}$ & 8.432 & 16.225 \\ $P_{75\%}$ & 13.488 & 30.760 \\ $P_{90\%}$ & 24.167 & 53.262 \\ $P_{95\%}$ & 35.461 & 82.745 \\ $P_{99\%}$ & 62.676 & 157.488 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip 0.25in \end{table} \subsubsection{Intra-Cycle} \label{section: Intra-Cycle} For each combination of cell and cycle, we calculate a set of statistics that represents the distribution of the error and average them across all the 6400 combinations. Table~\ref{table: Intra-Cycle Error} presents these results. The neural network obtains better results than the parametric model for all the statistics. As expected, the average error is the same as in Table~\ref{table: Inter-Cycle Error}. We can conclude that the error is less dispersed intra-cycle than inter-cycle. One possible explanation for this difference comes from the startup phase. For the neural network, when the encoder works correctly, the error stays constant inside the cycle. However, when the encoder fails to identify the degradation of the cell correctly, the accuracy of the whole cycle is penalized. The same holds true for the parametric model. When the startup is short, it does not have enough data to train, and the predictions are inaccurate for the whole cycle.
\begin{table}[t] \caption{Average statistics of the absolute error inside a cycle, presented for different percentiles.} \label{table: Intra-Cycle Error} \vskip 0.1in \begin{center} \begin{small} \begin{tabular}{lcc} \toprule Stats & Neural Network [$mV$] & Parametric Model [$mV$] \\ \midrule $\mu$ & 11.977 & 25.668 \\ $\sigma$ & 4.586 & 6.109 \\ $P_{25\%}$ & 8.72 & 21.637 \\ $P_{50\%}$ & 11.85 & 25.435 \\ $P_{75\%}$ & 14.975 & 29.816 \\ $P_{90\%}$ & 17.712 & 33.184 \\ $P_{95\%}$ & 19.338 & 34.943 \\ $P_{99\%}$ & 22.213 & 39.088 \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.2in \end{table} \subsection{Fault Detection} \label{section: Fault Detection} As mentioned in the introduction, the end goal is to detect faults in cells before they happen. The indicator is the divergence between the voltage predicted by a trained model and the cell's measured voltage. To demonstrate that our neural network is appropriate for this task, we used the sequence of a faulty cell that was previously identified by an expert. Figure~\ref{fig: Fault Detection} shows how the error between the predicted voltage and the measured voltage increases during the 48 hours that precede the fault. The error increases slowly until it reaches a plateau. It then stays stable for some hours until it starts to increase again with a more pronounced slope. Detecting the fault before the plateau is desirable, as it gives the plant's operators more time to react and plan the maintenance. Therefore, the more accurate the model is, the earlier the impending fault can be detected. We set the fault detection threshold to the average 99\textsuperscript{th} percentile intra-cycle error, as presented in the last row of Table~\ref{table: Intra-Cycle Error}. To minimize possible false positives, we add an extra tolerance of 10 mV. This results in a threshold of 32 mV for the neural network and 49 mV for the parametric model.
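The detection rule can be sketched as a simple thresholding of the divergence signal. The P99-based threshold and the 10 mV tolerance follow the text; the function names and the warning-time helper are ours.

```python
import numpy as np

def fault_threshold(p99_mv, tolerance_mv=10.0):
    """Detection threshold: the average 99th-percentile intra-cycle error
    plus a fixed tolerance that guards against false positives."""
    return p99_mv + tolerance_mv

def hours_of_warning(divergence_mv, fault_idx, threshold_mv, steps_per_hour=60):
    """Hours between the first threshold crossing and the fault itself.

    divergence_mv: absolute difference between predicted and measured
    voltage at each time-step, in mV.
    """
    crossed = np.where(divergence_mv > threshold_mv)[0]
    if len(crossed) == 0 or crossed[0] >= fault_idx:
        return 0.0
    return (fault_idx - crossed[0]) / steps_per_hour
```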
In this case, the network was able to detect the fault 31 hours before it happened, which is a gain of 12 hours compared to the parametric model. \begin{figure}[t] \caption{Divergence between the predicted voltage and the measured voltage during the 48 hours preceding a cell's fault.} \label{fig: Fault Detection} \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{./fig/fig10.pdf}} \end{center} \vskip -0.2in \end{figure} \section{Conclusion} \label{section: Conclusion} In this article, we present a new approach for detecting faults in electrochemical cells by using a neural network model composed of an encoder and a decoder. Results show that this approach presents many advantages over the expert-defined parametric models currently used for this task. Namely, the key points of our approach are: \begin{enumerate} \item \underline{Better accuracy}. Our model is capable of predicting the cell's voltage more accurately. We use a neural network that is not based on suppositions about the underlying chemical function. Instead, it is trained with a substantial amount of data coming from previous electrolyzers. It also takes into account the temporal relations that exist between the observations, thanks to its LSTM layers. \item \underline{Less human intervention required}. There is no need for an expert to spend time finding the specific parameters needed by each plant. This entails that the model may be deployed to more plants with little effort, simply by retraining it, which in turn helps to improve their operational safety. \item \underline{Interpretability}. The output of the encoder can be plotted and explained, which helps the user to verify the correct functioning of the neural network. As the safety of the plant and its operators relies on the model, interpretability is very important. \item \underline{Continuously improving method}. As more data is collected from different cells and cycles, it is possible to retrain the model to include them.
This will help reduce edge cases and the overall error of the neural network. \item \underline{Simple deployment}. A single model is valid for multiple cells and cycles without needing retraining once deployed. Therefore, the implementation of the model in the plant does not require extensive computing power. \end{enumerate} Thanks to this model, we can confidently say that we are one step closer to a fully automatic management of plant maintenance. \section{Future Work} \label{section: Future Work} There are, however, some caveats and suggestions that could be further investigated in future models: \begin{itemize} \item If the encoder does not find a good representation of a cell's startup, the voltage prediction for its operation phase will be incorrect. This happens rarely, and, with more data available, it should occur even less often. Nevertheless, a safety fallback must be designed for these cases. \item It would be interesting to replace the LSTM encoder with an attention-based encoder to see whether better accuracy can be attained \cite{Vaswani2017}. \item Neural network compression is an active field of research that could help reduce the computing power and memory required for making predictions \cite{Polino2018}. \end{itemize} \section{Acknowledgements} \label{section: Acknowledgements} We want to thank Mitacs and R2 Inc. for providing funding for this research.
\section{Introduction} The problem of group synchronization is critical for various tasks in data science, including structure from motion (SfM), simultaneous localization and mapping (SLAM), cryo-electron microscopy imaging, sensor network localization, multi-object matching and community detection. Rotation synchronization, also known as rotation averaging, is the most common group synchronization problem in 3D reconstruction. It asks to recover camera rotations from measured relative rotations between pairs of cameras. Permutation synchronization, which has applications in multi-object matching, asks to obtain globally consistent matches of objects from possibly erroneous measurements of matches between some pairs of objects. The simplest example of group synchronization is $\mathbb Z_2$ synchronization, which appears in community detection. The general problem of group synchronization can be mathematically formulated as follows. Assume a graph $G([n],E)$ with $n$ vertices indexed by $[n]=\{1,\ldots,n\}$, a group $\mathcal{G}$, and a set of group elements $\{g_i^*\}_{i=1}^n \subseteq \mathcal{G}$. The problem asks to recover $\{g_i^*\}_{i=1}^n$ from noisy and corrupted measurements $\{g_{ij}\}_{ij \in E}$ of the group ratios $\{g_i^*g_j^{*-1}\}_{ij \in E}$. We note that one can only recover, or approximate, the original group elements $\{g_i^*\}_{i\in [n]}$ up to a right group action. Indeed, for any $g_0\in \mathcal G$, $g_{ij}^*$ can also be written as $g_i^*g_0(g_j^*g_0)^{-1}$ and thus $\{g_i^*g_0\}_{i\in [n]}$ is also a solution. The above-mentioned synchronization problems (rotation, permutation, and $\mathbb Z_2$ synchronization) correspond to the groups $SO(3)$, $S_N$, and $\mathbb Z_2$, respectively. The most challenging issue for group synchronization is the practical scenario of highly corrupted and noisy measurements. Traditional least squares solvers often fail to produce accurate results in such a scenario. 
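To make the formulation concrete, here is a minimal NumPy sketch of the simplest case, $\mathbb Z_2$ synchronization, using the standard spectral relaxation of the least squares objective. The instance, parameter values, and corruption rate are illustrative and not taken from this paper:

```python
import numpy as np

def z2_synchronize(G):
    """Spectral relaxation of least squares Z_2 synchronization:
    take the sign of the leading eigenvector of the measurement matrix."""
    _, eigvecs = np.linalg.eigh(G)  # eigenvalues in ascending order
    return np.sign(eigvecs[:, -1])

# Toy instance: ground-truth elements g_i* in {-1, +1}; the full measurement
# matrix has G[i, j] = g_i* g_j*^{-1} (= g_i* g_j* in Z_2), with a fraction
# of the edges corrupted by a sign flip.
rng = np.random.default_rng(0)
n = 20
g_true = rng.choice([-1.0, 1.0], size=n)
G = np.outer(g_true, g_true)
flip = np.triu(rng.random((n, n)) < 0.1, k=1)   # corrupt ~10% of the edges
G[flip] *= -1
G = np.triu(G, k=1) + np.triu(G, k=1).T + np.eye(n)  # keep G symmetric

g_hat = z2_synchronize(G)
# the solution is defined only up to a global group action (here a global sign)
accuracy = max(np.mean(g_hat == g_true), np.mean(-g_hat == g_true))
print(f"fraction of correctly recovered elements: {accuracy:.2f}")
```

The global-sign alignment in the last step mirrors the right-group-action ambiguity noted above: $\{g_i^*g_0\}$ is as valid a solution as $\{g_i^*\}$.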
Moreover, some basic estimators that seem to be robust to corruption often cannot tolerate high levels of noise in practice. We aim to propose a general method for group synchronization that may tolerate high levels and different kinds of corruption and noise. While our basic ideas are formally general, in order to carefully refine and test them, we focus on the special problem of rotation synchronization, which is also known as rotation averaging \cite{RotationAveraging13}. We choose this problem as it is the most common, and possibly most difficult, synchronization problem in 3D computer vision. \subsection{Related Works} \label{sec:related} Most previous group synchronization solvers minimize an energy function. For the discrete groups $\mathbb Z_2$ and $S_N$, least squares energy minimization is commonly used. Relevant robustness results, under special corruption and noise models, are discussed in \citet{Z2Afonso2, Z2abbe, Z2Afonso, chen_partial, Huang13, PPM_vahan, deepti}. For Lie groups, such as $SO(D)$, that is, the group of $D \times D$ orthogonal matrices with determinant 1, where $D\geq 2$, least squares minimization was proposed to handle Gaussian noise \citep{rotationNP,StrongDuality18,Govindu04_Lie}. However, when the measurements are also adversarially corrupted, this framework does not work well and other corruption-robust energy functions need to be used \cite{ChatterjeeG13_rotation, L12, HartleyAT11_rotation, SO2ML, wang2013exact}. The most common corruption-robust energy function uses least absolute deviations. \citet{wang2013exact} prove that under a very special probabilistic setting with $\mathcal G = SO(D)$, the pure minimizer of this energy function can exactly recover the underlying group elements with high probability. However, their assumptions are strong and they use convex relaxation, which changes the original problem and is expensive to compute. 
\citet{SO2ML} apply a trimmed averaging procedure for robustly solving $SO(2)$ synchronization. They are able to recover the ground truth group elements under a special deterministic condition on the topology of the corrupted subgraph. However, the verification of this condition and its extension to $SO(D)$, where $D>2$, are nontrivial. \citet{HartleyAT11_rotation} use the Weiszfeld algorithm to minimize the least-absolute-deviations energy function with $\mathcal G = SO(3)$. Their method iteratively computes geodesic medians. However, they update only one rotation matrix per iteration, which results in slow empirical convergence and may increase the possibility of getting stuck at local minima. \citet{L12} propose a robust Lie-algebraic averaging method over $\mathcal G = SO(3)$. They apply an iteratively reweighted least squares (IRLS) procedure in the tangent space of $SO(3)$ to optimize different robust energy functions, including the one that uses least absolute deviations. They claim that the use of the $\ell_{1/2}$ norm for deviations results in the highest empirical accuracy. The empirical robustness of the latter two works is not theoretically guaranteed, even in simple settings. A recent deep learning method \cite{NeuroRA} solves a supervised version of rotation synchronization, but it does not apply to the above unsupervised formulation of the problem. \citet{truncatedLS} use least absolute deviations minimization for solving 1D translation synchronization, where $\mathcal G=\mathbb R$ with addition. They propose a special version of IRLS and provide a deterministic exact recovery guarantee that depends on properties of the graph and its Laplacian. They interpret their general result only in a very special noisy setting, not in an adversarial one. Robustness results were established for least absolute deviations minimization in camera location estimation, which is somewhat similar to group synchronization \cite{HandLV15,LUDrecovery}. 
These results assume a special probabilistic setting; however, they make relatively weak assumptions on the corruption model. Several energy minimization solutions have been proposed for $SE(3)$ synchronization \cite{SE3_MCMC, SE3_SDP_jesus, SE3_Rosen, SE3_sync, SE3_RPCA}. This problem asks to jointly estimate camera rotations and locations from relative measurements of both. None of these solutions successfully addresses highly corrupted scenarios. Other works on group synchronization, which do not minimize energy functions but aim to robustly recover corrupted solutions, screen corrupted edges using cycle consistency information. For a group $\mathcal{G}$ with group identity denoted by $e$, any $m \geq 3$, any cycle $L=\{i_1i_2, i_2i_3,\dots, i_mi_1\}$ of length $m$ and any corresponding product of ground-truth group ratios along $L$, $g^*_L=g_{i_1i_2}^*g_{i_2i_3}^*\cdots g_{i_mi_1}^*$, the cycle-consistency constraint is $g^*_L= e$. In practice, one is given the product of measurements, that is, $g_L=g_{i_1i_2}g_{i_2i_3}\cdots g_{i_mi_1}$, and in order to ``approximately satisfy the cycle-consistency constraint'' one tries to enforce $g_L$ to be sufficiently close to $e$. \citet{Zach2010} uses the cycle-consistency constraint to detect corrupted relative rotations in $SO(3)$. It seeks to maximize a log likelihood function, which is based on the cycle-consistency constraint, using either belief propagation or convex relaxation. However, no theoretical guarantees are provided for the accuracy of outlier detection. Moreover, the log likelihood function implies very strong assumptions on the joint densities of the given relative rotations. \citet{shen2016} classify the relative rotations as uncorrupted if they belong to any cycle that approximately satisfies the cycle-consistency constraint. However, this work only exploits local information and cannot handle the adversarial corruption case, where corrupted cycles can be approximately consistent. 
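As a small illustration of the cycle-consistency constraint just described, the following sketch checks a consistent and a corrupted 3-cycle in $SO(3)$. The Frobenius-norm deviation from the identity used here is an illustrative measure, not the normalized metric introduced later in the paper:

```python
import numpy as np

def rot_z(theta):
    """Rotation by angle theta about the z-axis (an element of SO(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def cycle_inconsistency(ratios):
    """Deviation of the product of group ratios along a cycle from the
    identity e, here measured with a simple Frobenius norm."""
    prod = np.eye(3)
    for g in ratios:
        prod = prod @ g
    return float(np.linalg.norm(prod - np.eye(3)))

# Ground-truth rotations and the induced (uncorrupted) ratios along a
# 3-cycle: g_12 g_23 g_31 = e holds exactly.
R1, R2, R3 = rot_z(0.3), rot_z(1.1), rot_z(-0.4)
good = [R1 @ R2.T, R2 @ R3.T, R3 @ R1.T]
print(cycle_inconsistency(good))   # ~0: the cycle is consistent

# Corrupting a single edge makes every cycle through it inconsistent.
bad = [rot_z(0.8) @ good[0], good[1], good[2]]
print(cycle_inconsistency(bad))    # clearly away from 0
```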
An iterative reweighting strategy, IR-AAB \cite{AAB}, was proposed to detect and remove corrupted pairwise directions for the different problem of camera location estimation. It utilizes another notion of cycle-consistency to infer the corruption level of each edge. \citet{cemp} extend the latter idea, and interpret it as a message passing procedure, to solve group synchronization with any compact group. They refer to their new procedure as cycle-edge message passing (CEMP). While we follow ideas of \citet{cemp,AAB}, we directly solve for the group elements, instead of estimating the corruption levels, using them for an initial cleaning of the edges, and then solving the cleaner problem with another method. To the best of our knowledge, the only unified frameworks for group synchronization are those of \citet{ICMLphase,cemp,AMP_compact}. However, \citet{ICMLphase} and \citet{AMP_compact} assume special probabilistic models that do not address adversarial corruption. Furthermore, \citet{ICMLphase} only applies to Lie groups and the different setting of multi-frequencies. \subsection{Contribution of This Work} Current group synchronization solvers often do not perform well with highly corrupted and noisy group ratios. In order to address this issue, we propose a rotation synchronization solver that can address high levels of noise and corruption in practice. Our main ideas seem to generalize to group synchronization with any compact group, but more careful developments and testing are needed for other groups. We emphasize the following specific contributions of this work: \begin{itemize} \item We propose the message passing least squares (MPLS) framework as an alternative paradigm to IRLS for group synchronization, and in particular, rotation synchronization. It uses the theoretically guaranteed CEMP algorithm for estimating the underlying corruption levels. These estimates are then used for learning better weights for the weighted least squares problem. 
\item We explain in Section \ref{sec:issue} why the common IRLS solver may not be accurate enough and in Section \ref{sec:mpls} why MPLS can overcome these obstacles. \item While MPLS can be formally applied to any compact group, we refine and test it for the group $\mathcal G = SO(3)$. We demonstrate state-of-the-art results for rotation synchronization with both synthetic data having nontrivial scenarios and real SfM data. \end{itemize} \section{Setting for Robust Group Synchronization}\label{sec:adversarial} Previous robustness theories for group synchronization typically assume a very special and often unrealistic corruption probabilistic model for very special groups \cite{deepti,wang2013exact}. In general, simplistic probabilistic models for corruption, such as generating potentially corrupted group ratios according to the Haar measure on $\mathcal G$ \cite{wang2013exact}, may not generate some nontrivial scenarios that often occur in practice. For example, in the application of rotation synchronization that arises in SfM, the corrupted camera relative rotations can be self-consistent due to ambiguous scene structures \citep{1dsfm14}. However, in common probabilistic models, such as the one in \citet{wang2013exact}, cycles with corrupted edges are self-consistent with probability zero. A more realistic model is the adversarial corruption model for the different problem of camera location estimation \cite{LUDrecovery, HandLV15}. However, it also assumes very special probabilistic models for the graph and camera locations, which are not realistic. A more general model of adversarial corruption with noise is due to \citet{cemp} and we review it here. We assume a graph $G([n],E)$ and a compact group $\mathcal G$ with a bi-invariant metric $d$, that is, for any $g_1$, $g_2$, $g_3\in \mathcal G$, $d(g_1,g_2)=$ $d(g_3g_1,g_3g_2)=$ $d(g_1g_3,g_2g_3)$. For $\mathcal G = SO(3)$, or any Lie group, $d$ is commonly chosen to be the geodesic distance. 
Since $\mathcal G$ is compact, we can scale $d$ and assume that $d(\cdot)\leq 1$. We partition $E$ into $E_g$ and $E_b$, which represent sets of good (uncorrupted) and bad (corrupted) edges, respectively. We will need a topological assumption on $E_b$, or equivalently, $E_g$. A necessary assumption is that $G([n],E_g)$ is connected, though further restrictions on $E_b$ may be needed for establishing theoretical guarantees \cite{cemp}. In the noiseless case, the adversarial corruption model generates group ratios in the following way. \begin{align}\label{eq:model} g_{ij}=\begin{cases} g^*_{ij}:=g_i^*g_j^{*-1}, & ij \in E_g;\\ \tilde g_{ij} \neq g^*_{ij}, & ij \in E_b. \end{cases} \end{align} That is, for edges $ij\in E_b$, the corrupted group ratio $\tilde g_{ij}\neq g_{ij}^*$ can be arbitrarily chosen from $\mathcal G$. The corruption is called adversarial since one can maliciously corrupt the group ratios for $ij\in E_b$ and also maliciously choose $E_b$ as long as the needed assumptions on $E_b$ are satisfied. One can even form cycle-consistent corrupted edges, so that they can be confused with the good edges. In the noisy case, we assume a noise model for $d(g_{ij},g_{ij}^*)$, where $ij\in E_g$. In theory, one may need to restrict this model \cite{cemp}, but in practice we test highly noisy scenarios. For $ij\in E$ we define the corruption level of $ij$ as \[s_{ij}^* = d(g_{ij},g_{ij}^*).\] We use ideas of \citet{cemp} to estimate $\{s_{ij}^*\}_{ij \in E}$, but then we propose new ideas to estimate $\{g_{i}^*\}_{i \in [n]}$. While exact estimation of both quantities is equivalent in the noiseless case \cite{cemp}, this property is not valid when adding noise. \section{Issues with the Common IRLS}\label{sec:issue} We first review the least squares minimization, least absolute and unsquared deviations minimization and IRLS for group synchronization. 
We then explain why IRLS may not form a good solution for the group synchronization problem, and in particular for Lie algebraic groups, such as the rotation group. The least squares minimization can be formulated as follows: \begin{align} \label{eq:l2} \min_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E}d^2(g_{ij},g_ig_j^{-1}), \end{align} where one often relaxes this formulation. This formulation is generally sensitive to outliers and thus more robust energy functions are commonly used when considering corrupted group ratios. More specifically, one may choose a special function $\rho(x) \neq x^2$ and solve the following least unsquared deviation formulation \begin{align}\label{eq:lp} \min_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E}\rho\left(d(g_{ij},g_ig_j^{-1})\right). \end{align} The special case of $\rho(x)=x$ \cite{HandLV15,HartleyAT11_rotation,ozyesil2015robust, wang2013exact} is referred to as least absolute deviations. Some other common choices are $\rho(x)=x^2/(x^2+\sigma^2)$ \cite{ChatterjeeG13_rotation} and $\rho(x)=\sqrt x$ \cite{L12}. The least unsquared formulation is typically solved using IRLS, where at iteration $t$ one solves the weighted least squares problem: \begin{align} \label{eq:irls} \{g_{i,t}\}_{i\in [n]}&= \operatorname*{arg\,min}_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1}). \end{align} In the first iteration the weights can be initialized in a certain way, but in the next iterations the weights are updated using the residuals of this solution. Specifically, for $ij \in E$ and iteration $t$, the residual is $r_{ij,t}=d(g_{ij},g_{i,t}g_{j,t}^{-1})$ and the weight $w_{ij,t}$ is \begin{align} w_{ij,t}&= F(r_{ij,t}), \label{eq:weight_irls} \end{align} where the function $F$ depends on the choice of $\rho$. For $\rho(x)=x^p$, where $0<p<2$, $F(x)=\min\{x^{p-2},A\}$, where $1/A$ is a regularization parameter and here we fix $A=10^8$. 
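The generic IRLS loop of \eqref{eq:irls}--\eqref{eq:weight_irls} can be sketched for the simplest continuous instance, 1D translation synchronization ($\mathcal G=\mathbb R$ with addition), where each weighted least squares step reduces to a linear system with a weighted graph Laplacian. This is an illustrative sketch under those simplifying assumptions, not the rotation solver studied in this paper:

```python
import numpy as np

def weighted_least_squares(n, edges, ratios, w):
    """One weighted least squares step for G = (R, +): minimize
    sum_ij w_ij (g_ij - (x_i - x_j))^2, pinning x_0 = 0 to remove the
    global (right group action) ambiguity."""
    L = np.zeros((n, n))   # weighted graph Laplacian
    b = np.zeros(n)
    for (i, j), g, wij in zip(edges, ratios, w):
        L[i, i] += wij; L[j, j] += wij
        L[i, j] -= wij; L[j, i] -= wij
        b[i] += wij * g
        b[j] -= wij * g
    L[0, :] = 0.0; L[0, 0] = 1.0; b[0] = 0.0   # gauge fixing: x_0 = 0
    return np.linalg.solve(L, b)

def irls(n, edges, ratios, p=1.0, A=1e8, iters=20):
    """Generic IRLS for rho(x) = x^p: reweight by F(r) = min(r^(p-2), A)."""
    w = np.ones(len(edges))
    for _ in range(iters):
        x = weighted_least_squares(n, edges, ratios, w)
        r = np.array([abs(g - (x[i] - x[j]))
                      for (i, j), g in zip(edges, ratios)])
        w = np.minimum(np.maximum(r, 1e-12) ** (p - 2.0), A)
    return x

# Toy instance on K_4 with ground truth x* = (0, 1, 3, 6) and one grossly
# corrupted ratio; least absolute deviations (p = 1) suppresses the outlier.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
x_true = np.array([0.0, 1.0, 3.0, 6.0])
ratios = [x_true[i] - x_true[j] for i, j in edges]
ratios[-1] += 13.0   # corrupt edge (2, 3)
x_hat = irls(4, edges, ratios)
print(x_hat)   # close to the ground truth (0, 1, 3, 6)
```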
The above IRLS procedure poses the following three issues. First, its convergence to the solution $\{g_i^*\}_{i\in [n]}$ is not guaranteed, especially under severe corruption. Indeed, IRLS succeeds when it accurately estimates the correct weights $w_{ij,t}$ for each edge. Ideally, when the solution $\{g_{i,t}\}_{i\in [n]}$ is close to the ground truth $\{g_i^*\}_{i\in [n]}$, the residual $r_{ij,t}$ should be close to the corruption level $s_{ij}^*$, so that the weight $w_{ij,t}$ is close to $F(s_{ij}^*)$. However, if edge $ij \in E_b$ is severely corrupted (or edge $ij \in E_g$ has high noise) and either $g_i^*$ or $g_j^*$ is wrongly estimated, then the residual $r_{ij,t}$ might have a very small value. Thus the weight $w_{ij,t}$ in \eqref{eq:weight_irls} can be extremely large and may result in an inaccurate solution in the next iteration and possibly a low-quality solution at the final iteration. The second issue is that for common groups each iteration of \eqref{eq:irls} requires either SDP relaxation or tangent space approximation (for Lie groups). However, if the weights of IRLS are wrongly estimated in the beginning, then they may affect the tightness of the SDP relaxation and the validity of the tangent space approximation. Therefore, such procedures tend to make the IRLS scheme sensitive to corruption and to the initialization of weights and group elements. Lastly, when dealing with noisy data where most of $s_{ij}^*$, $ij \in E$, are significantly greater than $0$, the current reweighting strategy usually gives non-negligible positive weights to outliers. This can be concluded from the expression of $F$ (e.g., for $\ell_p$ minimization) and the fact that in a good scenario $r_{ij,t} \approx s_{ij}^*$ and $s_{ij}^*$ can be away from 0. Therefore, outliers can be overweighted and this may lead to low-quality solutions. 
We remark that this issue is more noticeable in Lie groups, such as the rotation group, as all measurements are often noisy and corrupted; whereas in discrete groups some measurements may be rather accurate \cite{robust_multi_object2020}. \section{Message Passing Least Squares (MPLS)} \label{sec:mpls} In view of the drawbacks of the common IRLS scheme, we propose the MPLS (Message Passing Least Squares), or Minneapolis, algorithm. It carefully initializes and reevaluates the weights of a weighted least squares problem by our CEMP algorithm \cite{cemp} or a modified version of it. We first review the ideas of CEMP in Section \ref{sec:cemp}. We remark that its goal is to estimate the corruption levels $\{s_{ij}^*\}_{ij \in E}$ and not the group elements $\{g_{i}^*\}_{i \in [n]}$. Section \ref{sec:mpls2} formally describes MPLS for the general setting of group synchronization. Section \ref{sec:so3} carefully refines MPLS for rotation synchronization. Section \ref{sec:complexity} summarizes the complexity of the proposed algorithms. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{illustration.pdf} \caption{Demonstration of MPLS in comparison with IRLS. IRLS updates the graph weights directly from the residuals, which can be inaccurate. This is demonstrated in the upper loop of the figure, where the part that differs from MPLS is crossed out. In contrast, MPLS updates the graph weights by applying a CEMP-like procedure to the residuals, demonstrated in the ``message-passing unit''. Good edges, such as $jk_1$, are marked with green, and bad edges are marked with red. For $ij\in E$ and $k\in \{k_1,k_2,\dots k_{50}\}$, $q_{ij,k}^t$ is updated using the two residuals $r_{ik,t}$ and $r_{jk,t}$ according to the indicated operation. The length of the bar around the computed value of each $q_{ij,k}^t$ is proportional to its magnitude and the green or red colors designate good or bad corresponding cycles, respectively. 
The weighted sum $h_{ij,t}$ aims to approximate $s_{ij}^*$ and this approximation is good when the green $q_{ij,k}^t$ bars are much longer than the red bars. The weight $w_{ij,t}$ is formed as a convex combination of $r_{ij,t}$ and $h_{ij,t}$. The rest of the procedure is similar to IRLS. }\label{fig:demo} \end{figure*} \subsection{Cycle-Edge Message Passing (CEMP)} \label{sec:cemp} The CEMP procedure aims to estimate the corruption levels $\{s_{ij}^*\}_{ij \in E}$ from the cycle inconsistencies, which we define next. For simplicity and for ease of computation, we work here with 3-cycles, that is, triangles in the graph. For each edge $ij\in E$, we independently sample with replacement 50 nodes that form 3-cycles with $i$ and $j$. That is, if $k$ is such a node then $ik$ and $jk$ are in $E$. We denote this set of nodes by $C_{ij}$. We remark that the original version of CEMP in \citet{cemp} uses all 3-cycles and can be slower. We define the cycle inconsistency of the 3-cycle $ijk$ associated with edge $ij$ and $k \in C_{ij}$ as follows \begin{equation} \label{eq:def_dL} d_{ij,k} := d(g_{ij}g_{jk}g_{ki}, e), \quad k\in C_{ij}, ij\in E. \end{equation} The idea of CEMP is to iteratively estimate each corruption level $s_{ij}^*$ for $ij \in E$ from a weighted average of the cycle inconsistencies $\{d_{ij,k}\}_{k \in C_{ij}}$. To motivate this idea, we assume the noiseless adversarial corruption model and formulate the following proposition whose easy proof appears in \citet{cemp}. \begin{proposition}\label{prop:good cycle} If $s_{ik}^*=s_{jk}^*=0$, that is, $ik$, $jk \in E_g$, then $$s_{ij}^*=d_{ij,k}.$$ \end{proposition} If the condition of the above proposition holds, we call $ijk$ a good cycle with respect to $ij$, otherwise we call it a bad cycle. CEMP also aims to estimate the conditional probability $p_{ij,k}^t$ that the cycle $ijk$ is good, so $s_{ij}^*=d_{ij,k}.$ The conditioning is on the estimates of corruption levels computed in the previous iteration. 
CEMP uses this probability as a weight for the cycle inconsistency $d_{ij,k}$. The whole weighted sum thus aims to estimate the conditional expectation of $s_{ij}^*$ at iteration $t$. This estimate is denoted by $s_{ij,t}$. The iteration of CEMP thus contains two main steps: 1) Computation of the weight $p_{ij,k}^t$ of $d_{ij,k}$; 2) Computation of $s_{ij,t}$ as a weighted average of the cycle inconsistencies $\{d_{ij,k}\}_{k \in C_{ij}}$. In the former stage messages are passed from edges to cycles and in the latter stage messages are passed from cycles to edges. The simple procedure is summarized in Algorithm~\ref{alg:cemp}. We formulate it in generality for a compact group $\mathcal G$ with a bi-invariant metric $d$ and a graph $G([n],E)$, for which the cycle inconsistencies, $\{d_{ij,k}\}_{ij\in E, k\in C_{ij}}$, were computed in advance. Our default parameters are later specified in Section \ref{sec:implementation}. For generality, we write $|C_{ij}|$ instead of 50. \newpage \begin{algorithm}[h] \caption{CEMP \cite{cemp}} \label{alg:cemp} \begin{algorithmic} \REQUIRE $\{d_{ij,k}\}_{ij\in E, k\in C_{ij}}$, time step $T$, increasing $\{\beta_t\}_{t=0}^T$\\ \STATE \textbf{Steps:} \STATE $s_{ij,0} = \frac{1}{|C_{ij}|}\sum_{k\in C_{ij}} d_{ij,k} $ \hspace*{\fill} $ij\in E$ \FOR {$t=0:T$} \STATE \emph{Message passing from edges to cycles:} \STATE $p_{ij,k}^{t} = \exp\left(-\beta_t(s_{ik,t}+s_{jk,t})\right)$ \hspace*{\fill} $k\in C_{ij},\, ij\in E$ \STATE \emph{Message passing from cycles to edges:} \STATE $s_{ij,t+1} =\frac{1}{Z_{ij,t}}\sum\limits_{k\in C_{ij}}p_{ij,k}^t d_ {ij,k}$, \hspace*{\fill} $ij\in E$ \STATE where $Z_{ij,t}=\sum\limits_{k\in C_{ij}}p_{ij,k}^t$ is a normalization factor \ENDFOR \ENSURE $\{s_{ij,T}\}_{ij\in E}$ \end{algorithmic} \end{algorithm} Algorithm \ref{alg:cemp} can be explained in two different ways \cite{cemp}. 
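Before turning to those explanations, here is a direct NumPy transcription of Algorithm \ref{alg:cemp}; the data structures and the toy instance at the end (a $K_4$ graph with one corrupted edge, in the spirit of $\mathbb Z_2$ synchronization) are illustrative, not from the paper:

```python
import numpy as np

def cemp(edges, cycles, d_cycle, betas):
    """Sketch of Algorithm 1 (CEMP).

    edges   : list of edges ij (as sorted tuples)
    cycles  : dict mapping each edge ij to the nodes k in C_ij
    d_cycle : dict mapping (ij, k) to the cycle inconsistency d_{ij,k}
    betas   : increasing reweighting parameters beta_0, ..., beta_T
    """
    def key(a, b):
        return (a, b) if a < b else (b, a)

    # initialization: plain average of the cycle inconsistencies
    s = {ij: float(np.mean([d_cycle[(ij, k)] for k in cycles[ij]]))
         for ij in edges}
    for beta in betas:
        s_new = {}
        for (i, j) in edges:
            ks = cycles[(i, j)]
            # message passing from edges to cycles
            p = np.array([np.exp(-beta * (s[key(i, k)] + s[key(j, k)]))
                          for k in ks])
            # message passing from cycles to edges (weighted average)
            d = np.array([d_cycle[((i, j), k)] for k in ks])
            s_new[(i, j)] = float(np.dot(p, d) / np.sum(p))
        s = s_new
    return s

# Toy instance: edge (0, 1) is corrupted (s* = 1), all other edges are good
# (s* = 0); a 3-cycle is inconsistent iff it contains the edge (0, 1).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
cycles = {ij: [k for k in range(4) if k not in ij] for ij in edges}
d_cycle = {((i, j), k): 1.0 if (0, 1) in [tuple(sorted(e)) for e in
           [(i, j), (i, k), (j, k)]] else 0.0
           for (i, j) in edges for k in cycles[(i, j)]}
s_hat = cemp(edges, cycles, d_cycle, betas=[1, 2, 5, 10, 20])
print(s_hat)   # s_hat[(0, 1)] near 1, all other edges near 0
```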
First, it can be theoretically guaranteed to be robust to adversarial corruption and stable to low levels of noise (see Theorem 5.4 in \citet{cemp}). Second, our above heuristic explanation can be made more rigorous using some statistical assumptions. Such assumptions are common in other message passing algorithms in statistical mechanics formulations and we thus find it important to motivate them here. We remark that our statistical assumptions are weaker than those of previous works on message passing \cite{AMP_Donoho,BP}, and in particular, those for group synchronization \cite{AMP_compact,Zach2010}. We also remark that they are not needed for establishing the above-mentioned theory. Our first assumption is that $ijk$ is a good cycle if and only if $s_{ij}^*=d_{ij,k}$. Proposition \ref{prop:good cycle} implies the only if part, but the other part is generally not true. However, under special random corruption models (e.g., models in \citet{wang2013exact,ozyesil2015robust}), the assumed equivalence holds with probability $1$. We further assume that $\{s_{ij}^*\}_{ij \in E}$ and $\{s_{ij,t}\}_{ij \in E}$ are both i.i.d.~random variables and that for any $ij \in E$, $s_{ij}^{*}$ is independent of $s_{kl,t}$ for $kl \neq ij \in E$. We further assume that for any $ij\in E$ \begin{equation} \label{eq:pr_f} \Pr(s_{ij}^*=0|s_{ij,t}=x) = \exp(-\beta_t x). \end{equation} We also assume the existence of a good cycle $ijk$ for any $ij\in E$. In view of these assumptions, in particular the i.i.d.~sampling and \eqref{eq:pr_f}, we obtain that the expression for $p_{ij,k}^t$ used in Algorithm \ref{alg:cemp} coincides with the conditional probability that $ijk$ is a good cycle, that is, the conditional probability that $s_{ik}^*=s_{jk}^*=0$: \begin{align}\label{eq:pijk} p_{ij,k}^t &= \Pr(s_{ik}^*=s_{jk}^*=0|\{s_{ab,t}\}_{ab\in E})\nonumber\\ &= \Pr(s_{ik}^*=0|s_{ik,t})\Pr(s_{jk}^*=0|s_{jk,t})\\ \nonumber &=\exp(-\beta_t (s_{ik,t} +s_{jk,t})). 
\end{align} Using the definition of conditional expectation, the equivalence assumption, the above i.i.d.~sampling assumptions and \eqref{eq:pr_f}, we show that the expression for $s_{ij,t}$ used in Algorithm \ref{alg:cemp} coincides with the conditional expectation of $s_{ij}^*$: \begin{align}\label{eq:sijt} &\mathbb E\left(s_{ij}^*|\{s_{ab,t}\}_{ab\in E}\right)\nonumber\\ &=\frac{1}{Z_{ij,t}}\sum_{k\in C_{ij}}\Pr\left(s_{ij}^*=d_{ij,k}|\{s_{ab,t}\}_{ab\in E}\right)d_{ij,k}\nonumber \end{align} \begin{align} &=\frac{1}{Z_{ij,t}}\sum_{k\in C_{ij}}\Pr\left(s_{ik}^*=s_{jk}^*=0|\{s_{ab,t}\}_{ab\in E}\right)d_{ij,k} \nonumber\\ &=\frac{1}{Z_{ij,t}}\sum_{k\in C_{ij}}\exp\left(-\beta_t(s_{ik,t}+s_{jk,t})\right)d_{ij,k} \\ &= \frac{1}{Z_{ij,t}}\sum_{k\in C_{ij}} p_{ij,k}^t d_{ij,k}.\nonumber \end{align} Note that our earlier motivation of Algorithm \ref{alg:cemp} assumed both \eqref{eq:pijk} and \eqref{eq:sijt}. Demonstration of a procedure similar to CEMP, but with different notation, appears in the lower part of Figure \ref{fig:demo}. \subsection{General Formulation of MPLS} \label{sec:mpls2} MPLS uses the basic ideas of CEMP in order to robustly estimate the residuals $\{r_{ij,t}\}_{ij \in E}$ and weights $\{w_{ij,t}\}_{ij \in E}$ of the IRLS scheme as well as to carefully initialize this scheme. It also incorporates a novel truncation idea. We explain in this and the next section how these new ideas address the drawbacks of the common IRLS procedure, which was reviewed in Section \ref{sec:issue}. We sketch MPLS in Algorithm \ref{alg:MPLS}, demonstrate it in Figure \ref{fig:demo} and explain it below. For generality, we assume in this section a compact group $\mathcal G$, a bi-invariant metric $d$ on $\mathcal G$ and a graph $G([n],E)$ with given relative measurements and cycle inconsistencies computed in advance. Our default parameters are later specified in Section \ref{sec:implementation}. 
Instead of using the traditional IRLS reweighting function $F(x)$ explained in Section \ref{sec:issue}, we use its truncated version $F_{\tau}(x)=F(x)\mathbf{1}_{\{x\leq \tau\}}+10^{-8}\mathbf{1}_{\{x> \tau\}}$ with a parameter $\tau>0$. We decrease $\tau$ as the iteration number increases in order to avoid overweighing outliers. By doing this we aim to address the third drawback of IRLS mentioned in Section \ref{sec:issue}. We remark that the truncated function is $F(x)\mathbf{1}_{\{x\leq \tau\}}$ and the additional term $10^{-8}\mathbf{1}_{\{x> \tau\}}$ is needed to ensure that the graph with weights resulting from $F_{\tau}$ is connected. \begin{algorithm}[h] \caption{Message Passing Least Squares (MPLS)}\label{alg:MPLS} \begin{algorithmic} \REQUIRE $\{g_{ij}\}_{ij\in E}$, $\{d_{ij,k}\}_{k\in C_{ij}}$, nonincreasing $\{\tau_t\}_{t\geq 0}$, increasing $\{\beta_t\}_{t=0}^T$, decreasing $\{\alpha_t\}_{t\geq 1}$ \STATE \textbf{Steps:} \STATE Compute $\{s_{ij,T}\}_{ij\in E}$ by CEMP \STATE $w_{ij,0}=F_{\tau_0}(s_{ij,T})$ \hspace*{\fill} $ij\in E$ \STATE $t=0$ \WHILE {not convergent} \STATE $t=t+1$ \STATE $\{g_{i,t}\}_{i\in [n]}=\operatorname*{arg\,min}\limits_{g_i\in\mathcal G}\sum\limits_{ij\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1})$ \STATE $r_{ij,t}=d(g_{ij}, g_{i,t}g_{j,t}^{-1})$ \hspace*{\fill} $ij\in E$ \STATE $q_{ij,k}^{t} = \exp(-\beta_T(r_{ik,t}+r_{jk,t}))$ \hspace*{\fill} $k\in C_{ij},\, ij\in E$ \STATE $h_{ij,t} =\frac{\sum_{k\in C_{ij}}q_{ij,k}^t d_ {ij,k}}{\sum_{k\in C_{ij}}q_{ij,k}^t}$ \hspace*{\fill} $ij\in E$ \STATE $w_{ij,t}=F_{\tau_t}(\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t})$ \hspace*{\fill} $ij\in E$ \ENDWHILE \ENSURE $\left\{g_{i,t}\right\}_{i\in [n]}$ \end{algorithmic} \end{algorithm} The initial step of the algorithm estimates the corruption levels $\{s_{ij}^*\}_{ij \in E}$ by CEMP. The initial weights for the IRLS procedure follow \eqref{eq:weight_irls} with additional truncation. 
At each iteration, the group elements $\{g_{i,t}\}_{i\in [n]}$ are estimated from the weighted least squares procedure in \eqref{eq:irls}. However, the weights $w_{ij,t}$ are updated in a very different way. First, for each $ij \in E$ the corruption level $s_{ij}^*$ is re-estimated in two different ways and a convex combination of the two estimates is taken. The first estimate is the residual $r_{ij,t}$ computed with the newly updated estimates $\{g_{i,t}\}_{i\in [n]}$. This is the error of approximating the given measurement $g_{ij}$ by the newly estimated group ratio. The other estimate effectively applies CEMP to re-estimate the corruption levels. For edge $ij \in E$, the latter estimate of $s_{ij}^*$ is denoted by $h_{ij,t}$. For interpretation, we can replace \eqref{eq:pr_f} with $\Pr(s_{ij}^*=0|r_{ij,t}=x)=\exp(-\beta_T x)$ and use it to derive analogs of \eqref{eq:pijk} and \eqref{eq:sijt}. Unlike CEMP, we use a single parameter, $\beta_T$, as we assume that CEMP provides a sufficiently good initialization. Finally, a weight similar to that in \eqref{eq:weight_irls}, but truncated, is applied to the combined estimate $\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t}$. We remark that utilizing the estimate $h_{ij,t}$ for the corruption level addresses the first drawback of IRLS discussed in Section \ref{sec:issue}. Indeed, assume the case where $ij\in E_b$ and $r_{ij,t}$ is close to 0. Here, $w_{ij,t}$ computed by IRLS is relatively large; however, since $ij\in E_b$, $w_{ij,t}$ needs to be small. Unlike $r_{ij,t}$ in IRLS, we expect that $h_{ij,t}$ in MPLS should not be too small as long as for some $k\in C_{ij}$, $d_{ij,k}$ is sufficiently large. This happens as long as there exists some $k\in C_{ij}$ for which the cycle $ijk$ is good. Indeed, in this case $s_{ij}^*$ is sufficiently large and for good cycles $d_{ij,k}=s_{ij}^*$. We further remark that $h_{ij,t}$ is a good approximation of $s_{ij}^*$ under certain conditions. 
For example, if for all $k\in C_{ij}$, $r_{ik,t}\approx s_{ik}^*$ and $r_{jk,t}\approx s_{jk}^*$, then plugging the definition of $q_{ij,k}^t$ into the expression for $h_{ij,t}$, using the fact that $\beta_T$ is sufficiently large and at last applying Proposition \ref{prop:good cycle}, we obtain that \begin{align} h_{ij,t} &= \sum_{k\in C_{ij}}\frac{\exp(-\beta_T(r_{ik,t}+r_{jk,t}))}{\sum_{k\in C_{ij}}\exp(-\beta_T(r_{ik,t}+r_{jk,t}))}d_{ij,k} \nonumber\\ &\approx \sum_{k\in C_{ij}}\frac{\exp(-\beta_T(s_{ik}^*+s_{jk}^*)) }{\sum_{k\in C_{ij}}\exp(-\beta_T(s_{ik}^*+s_{jk}^*))} d_{ij,k} \\ &\approx \sum_{k\in C_{ij}}\frac{\mathbf{1}_{\{ijk \text{ is a good cycle}\}}}{\sum_{k\in C_{ij}}\mathbf{1}_{\{ijk \text{ is a good cycle}\}}} d_{ij,k}=s_{ij}^*.\nonumber \end{align} This intuitive argument for a restricted case conveys the idea that ``local good information'' can be used to estimate $s_{ij}^*$. The theory of CEMP \cite{cemp} shows that under weaker conditions such information can propagate through the whole graph within a few iterations, but we cannot extend it to MPLS. If the graph $G([n], E)$ is dense with sufficiently many good cycles, then we expect that this good information can propagate in a few iterations and that $h_{ij,t}$ will have a significant advantage over $r_{ij,t}$. However, in real scenarios of rotation synchronization in SfM, one may encounter sparse graphs, which may not have enough cycles and, in particular, not enough good cycles. In this case, utilizing $h_{ij,t}$ is mainly useful in the early iterations of the algorithm. On the other hand, when $\{g_{i,t}\}_{i \in [n]}$ are close to $\{g_i^*\}_{i \in [n]}$, $\{r_{ij,t}\}_{ij \in E}$ will be sufficiently close to $\{s_{ij}^*\}_{ij \in E}$. Since we aim at rotation synchronization, we decrease $\alpha_t$, the weight of $h_{ij,t}$, with $t$. In other applications, different choices of $\alpha_t$ can be used \cite{robust_multi_object2020}.
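In code, the message-passing estimate $h_{ij,t}$ is a softmax-weighted average of the cycle inconsistencies. A small hypothetical NumPy sketch (array names are illustrative, not from the authors' code):

```python
import numpy as np

def cycle_estimate(d_ijk, r_ik, r_jk, beta_T):
    """Message-passing re-estimate h_ij of the corruption level s*_ij.

    d_ijk      : cycle inconsistencies d_{ij,k}, one per k in C_ij
    r_ik, r_jk : current residuals of the other two edges of each cycle
    The weights q_{ij,k} = exp(-beta_T (r_ik + r_jk)) concentrate on
    cycles whose other two edges currently look uncorrupted.
    """
    log_q = -beta_T * (np.asarray(r_ik) + np.asarray(r_jk))
    log_q -= log_q.max()          # numerical stabilization of the softmax
    q = np.exp(log_q)
    return float(np.dot(q, d_ijk) / q.sum())
```

For large $\beta_T$ the average is dominated by the cycles that currently look good, matching the limiting argument above.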
The second drawback of IRLS, discussed in Section \ref{sec:issue}, is the possible difficulty of implementing the weighted least squares step of \eqref{eq:irls}. This issue is application-dependent, and since in this work we focus on rotation synchronization (equivalently, $SO(3)$ synchronization), we show in the next subsection how MPLS can deal with the above issue in this specific problem. Nevertheless, we claim that our framework can also be applied to other compact group synchronization problems and we demonstrate this claim in a follow-up work \cite{robust_multi_object2020}. \subsection{MPLS for $SO(3)$ synchronization} \label{sec:so3} Rotation synchronization, or $SO(3)$ synchronization, aims to recover the 3D rotations $\{\boldsymbol{R}_i^*\}_{i\in [n]} \in SO(3)$ from measurements $\{\boldsymbol{R}_{ij}\}_{ij\in E} \in SO(3)$ of the 3D relative rotations $\{\boldsymbol{R}_i^*\boldsymbol{R}_j^{*-1}\}_{ij\in E} \in SO(3)$. Throughout the rest of the paper, we use the following normalized geodesic distance for $\boldsymbol{R}_1,\boldsymbol{R}_2 \in SO(3)$: \begin{equation} \label{eq:geo_distance} d(\boldsymbol{R}_1,\boldsymbol{R}_2)=\|\log(\boldsymbol{R}_1\boldsymbol{R}_2^{-1})\|_F/(\sqrt2\pi), \end{equation} where $\log$ is the matrix logarithm and the normalization factor ensures that the diameter of $SO(3)$ is $1$. We provide some relevant preliminaries of the Riemannian geometry of $SO(3)$ in Section \ref{sec:prelim_riem} and then describe the implementation of MPLS for $SO(3)$, which we refer to as MPLS-$SO(3)$, in Section \ref{sec:mpls_so3_details}. \subsubsection{Preliminaries: $SO(3)$ and $\mathfrak{so}(3)$} \label{sec:prelim_riem} We note that $SO(3)$ is a Lie group, and its corresponding Lie algebra, $\mathfrak{so}(3)$, is the space of all skew-symmetric matrices, which is isomorphic to $\mathbb R^3$.
For each $\boldsymbol{R}\in SO(3)$, its corresponding element in $\mathfrak{so}(3)$ is $\boldsymbol\Omega=\log(\boldsymbol{R})$, where $\log$ denotes the matrix logarithm. Each $\boldsymbol\Omega\in \mathfrak{so}(3)$ can be represented as $[\boldsymbol{\omega}]_{\times}$ for some $\boldsymbol\omega =(\omega_1, \omega_2, \omega_3)^T \in \mathbb R^3$ in the following way: \begin{align*} [\boldsymbol{\omega}]_{\times} := \left(\begin{array}{ccc} 0 & - \omega_3 & \omega_2 \\ \omega_3 & 0& -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{array}\right). \end{align*} In other words, we can map any $\boldsymbol\omega\in \mathbb R^3$ to $\boldsymbol\Omega=[\boldsymbol{\omega}]_{\times} \in \mathfrak{so}(3)$ and $\boldsymbol{R}=\exp([\boldsymbol{\omega}]_{\times}) \in SO(3)$, where $\exp$ denotes the matrix exponential function. We remark that geometrically $\boldsymbol\omega$ is the tangent vector at $\boldsymbol I$ of the geodesic path from $\boldsymbol I$ to $\boldsymbol{R}$. \subsubsection{Details of MPLS-$SO(3)$} \label{sec:mpls_so3_details} We note that in order to adapt MPLS to the group $SO(3)$, we only need a specific algorithm to solve the following formulation of the weighted least squares problem at iteration $t$ \begin{align} \nonumber &\min\limits_{\boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol{R}_{ij},\boldsymbol{R}_{i,t}\boldsymbol{R}_{j,t}^{-1})\\ =& \min\limits_{\boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol{I}, \boldsymbol{R}_{i,t}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t}),\label{eq:wlsSO3} \end{align} where the last equality follows from the bi-invariance of $d$. The constraints on the orthogonality and determinant of $\boldsymbol{R}_i$ are non-convex. If one relaxes those constraints, with an appropriate choice of the metric $d$, then the solution of the least squares problem in the relaxed Euclidean space often lies away from the embedding of $SO(3)$ into that space.
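The preliminaries above translate directly into code. A minimal sketch of the hat map and the normalized geodesic distance \eqref{eq:geo_distance}, using SciPy's matrix exponential and logarithm rather than closed-form Rodrigues formulas (an illustrative choice, not the authors' implementation):

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(w):
    """Map a vector w in R^3 to the skew-symmetric matrix [w]_x in so(3)."""
    w1, w2, w3 = w
    return np.array([[0.0, -w3,  w2],
                     [ w3, 0.0, -w1],
                     [-w2,  w1, 0.0]])

def geodesic_distance(R1, R2):
    """Normalized geodesic distance ||log(R1 R2^T)||_F / (sqrt(2) pi),
    chosen so that the diameter of SO(3) equals 1."""
    L = logm(R1 @ R2.T)
    return np.linalg.norm(L.real) / (np.sqrt(2.0) * np.pi)
```

For example, a rotation by angle $\theta$ about a fixed axis is at normalized distance $\theta/\pi$ from the identity.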
For this reason, we follow the common choice of $d$ according to \eqref{eq:geo_distance} and implement the Lie-algebraic Averaging (LAA) procedure \cite{Govindu04_Lie, ChatterjeeG13_rotation, L12,consensusSO3}. We review LAA, explain why it may be problematic and why our overall implementation may overcome its problems. LAA aims to move from $\boldsymbol{R}_{i,t-1}$ to $\boldsymbol{R}_{i,t}$ along the manifold using the right group action $\boldsymbol{R}_{i,t}=\boldsymbol{R}_{i,t-1}\Delta \boldsymbol{R}_{i,t}$, where $\Delta \boldsymbol{R}_{i,t}\in SO(3)$. For this purpose, it defines $\Delta\boldsymbol{R}_{ij,t}=\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1}$ so that \begin{multline*} (\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t} = \\(\Delta \boldsymbol{R}_{i,t})^{-1}\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1} \Delta \boldsymbol{R}_{j,t}=\boldsymbol{R}_{i,t}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t} \end{multline*} and \eqref{eq:wlsSO3} can be transformed to the following problem, which is still hard to solve: \begin{equation}\label{eq:deltaR} \min\limits_{\Delta \boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol I,(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t}). \end{equation} LAA then maps $\{\Delta \boldsymbol{R}_{i,t}\}_{i\in [n]}$ and $\{\Delta \boldsymbol{R}_{ij,t}\}_{ij\in E}$ to the tangent space at $\boldsymbol{I}$ by $\Delta \boldsymbol\Omega_{i,t}=\log \Delta \boldsymbol{R}_{i,t}$ and $\Delta \boldsymbol\Omega_{ij,t}=\log \Delta \boldsymbol{R}_{ij,t}$. It then applies \eqref{eq:geo_distance}, together with the fact that the Riemannian logarithmic map, represented here by $\log$, preserves distances along geodesics through $\boldsymbol I$, and linearizes $d(\boldsymbol I,(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t})$ by a first-order ``naive approximation'' in the tangent space.
Therefore, LAA uses the following approximation \begin{multline} \label{eq:app_laa} d(\boldsymbol{I},(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t}\Delta \boldsymbol{R}_{j,t}) = \\ \|\log((\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t})\|_F/(\sqrt2\pi) \approx \\ \|-\log(\Delta \boldsymbol{R}_{i,t})+\log( \Delta \boldsymbol{R}_{ij,t})+\log( \Delta \boldsymbol{R}_{j,t})\|_F/(\sqrt2\pi) = \\ \|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|_F/(\sqrt2\pi). \end{multline} Consequently, LAA transforms \eqref{eq:deltaR} as follows: \begin{equation}\label{eq:delta_omega} \min\limits_{\Delta \boldsymbol\Omega_{i,t}\in\mathfrak{so}(3)}\sum\limits_{ij\in E} w_{ij,t}\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|^2_F. \end{equation} However, the approximation in \eqref{eq:app_laa} is only valid when $\Delta \boldsymbol{R}_{ij,t}$, $\Delta \boldsymbol{R}_{i,t}$, $\Delta \boldsymbol{R}_{j,t}$ $\approx \boldsymbol{I}$, which is unrealistic in general. One can check that the following conditions: $\boldsymbol{R}_{ij}\approx \boldsymbol{R}_i^*\boldsymbol{R}_j^{*-1}$ ($s_{ij}^*\approx 0$), $\boldsymbol{R}_{i,t} \approx \boldsymbol{R}_i^*$ and $\boldsymbol{R}_{j,t} \approx \boldsymbol{R}_j^*$ for $t \geq 0$ imply that $\Delta \boldsymbol{R}_{ij,t}$, $\Delta \boldsymbol{R}_{i,t}$, $\Delta \boldsymbol{R}_{j,t}$ $\approx \boldsymbol{I}$ and thus imply \eqref{eq:app_laa}. Therefore, to make LAA work we need to give large weights to edges $ij$ with small $s_{ij}^*$ and provide a good initialization $\{\boldsymbol{R}_{i,0}\}_{i\in [n]}$ that is reasonably close to $\{\boldsymbol{R}_{i}^*\}_{i\in [n]}$, so that $\{\boldsymbol{R}_{i,t}\}_{i\in [n]}$ remain close to the ground truth for all $t\geq 1$. Our heuristic argument is that good approximation by CEMP, followed by MPLS, addresses these requirements.
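Since the three coordinates of the tangent vectors decouple in \eqref{eq:delta_omega}, the linearized step reduces to a weighted graph-Laplacian system. A dense NumPy sketch (illustrative only; real implementations would use sparse solvers):

```python
import numpy as np

def laa_step(n, edges, weights, d_omega):
    """One Lie-algebraic averaging step (dense illustrative sketch).

    Solves  min over {dw_i in R^3} of
        sum_{(i,j)} w_ij || dw_i - dw_j - dw_ij ||^2.
    The three coordinates decouple, so the normal equations form a
    weighted graph-Laplacian system; the pseudoinverse removes the
    global-shift gauge freedom in its nullspace.
    """
    L = np.zeros((n, n))
    B = np.zeros((n, 3))
    for (i, j), w, d in zip(edges, weights, d_omega):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
        B[i] += w * d
        B[j] -= w * d
    return np.linalg.pinv(L) @ B   # rows are the tangent updates dw_i
```

Each row of the output is then mapped back to $SO(3)$ through the hat map and the matrix exponential.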
Indeed, to address the first requirement, we note that good initialization by CEMP can result in $s_{ij,T} \approx s_{ij}^*$ and by the nature of $F$, $w_{ij,0}$ is large when $s_{ij,T}$ is small. As for the second requirement, we assign the weights $s_{ij,T}$, obtained by CEMP, to each $ij\in E$ and find the minimum spanning tree (MST) for the weighted graph by Prim's algorithm. We initialize the rotations by fixing $\boldsymbol{R}_{1,0}=\boldsymbol{I}$, multiplying relative rotations along the computed MST and consequently obtaining $\boldsymbol{R}_{i,0}$ for any node $i$. We summarize our MPLS version of rotation averaging in Algorithm~\ref{alg:SO3}. \begin{algorithm}[h] \caption{MPLS-$SO(3)$}\label{alg:SO3} \begin{algorithmic} \REQUIRE $\{\boldsymbol{R}_{ij}\}_{ij\in E}$, $\{d_{ij,k}\}_{k\in C_{ij}}$, $\{\tau_t\}_{t\geq 0}$, $\{\beta_t\}_{t=0}^T$, $\{\alpha_t\}_{t\geq 1}$ \STATE \textbf{Steps:} \STATE Compute $\{s_{ij,T}\}_{ij\in E}$ by CEMP \STATE Form an $n \times n$ weight matrix $\boldsymbol{W}$, where $W_{ij}=W_{ji}= s_{ij,T}$ for $ij\in E$, and $W_{ij}=W_{ji}=0$ otherwise \STATE $G([n],E_{ST})=$ minimum spanning tree of $G([n],W)$ \STATE $\boldsymbol{R}_{1,0}=\boldsymbol{I}$ \STATE find $\{\boldsymbol{R}_{i,0}\}_{i>1}$ by $\boldsymbol{R}_i=\boldsymbol{R}_{ij}\boldsymbol{R}_j$ for $ij\in E_{ST}$ \STATE $t=0$ \STATE $w_{ij,0}=F_{\tau_0}(s_{ij,T})$ \WHILE {not convergent} \STATE $t=t+1$ \STATE $\Delta \boldsymbol\Omega_{ij,t}=\log(\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1})$ \hspace*{\fill} $ij\in E$ \STATE $\{\Delta \boldsymbol\Omega_{i,t}\}_{i\in [n]}=$ \STATE \quad \quad $\operatorname*{arg\,min}\limits_{\Delta \boldsymbol\Omega_{i,t}\in\mathfrak{so}(3)}\sum\limits_{ij\in E} w_{ij,t}\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|^2_F$ \STATE $\boldsymbol{R}_{i,t}=\boldsymbol{R}_{i,t-1}\exp(\Delta \boldsymbol\Omega_{i,t})$ \hspace*{\fill} $i\in [n]$ \STATE $r_{ij,t}=\|\Delta 
\boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|_F/(\sqrt2\pi)$ \hspace*{\fill} $ij\in E$ \STATE $q_{ij,k}^{t} =\exp(-\beta_T(r_{ik,t}+r_{jk,t}))$ \hspace*{\fill} $k\in C_{ij},\, ij\in E$ \STATE $h_{ij,t} =\frac{\sum_{k\in C_{ij}}q_{ij,k}^t d_ {ij,k}}{\sum_{k\in C_{ij}}q_{ij,k}^t}$ \hspace*{\fill} $ij\in E$ \STATE $w_{ij,t}=F_{\tau_t}(\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t})$ \hspace*{\fill} $ij\in E$ \ENDWHILE \ENSURE $\left\{\boldsymbol{R}_{i,t}\right\}_{i\in [n]}$ \end{algorithmic} \end{algorithm} \subsection{Computational Complexity} \label{sec:complexity} CEMP requires the computation of $d_{ij,k}$ for $ij \in E$ and $k\in C_{ij}$. Its computational complexity per iteration is thus of order $O(|E|)$ as we use $|C_{ij}|=50$ for all $ij \in E$. Since we advocate few iterations ($T=5$) of CEMP, or due to its fast convergence under special settings \cite{cemp}, we can assume that its total complexity is $O(|E|)$. The computational complexity of MPLS depends on the complexity of solving the weighted least squares problem, which depends on the group. For MPLS-$SO(3)$, the most expensive part is solving the weighted least squares problem in the tangent space, whose complexity is at most $O(n^3)$. This is thus also the complexity of MPLS-$SO(3)$ per iteration. Unlike CEMP, we have no convergence guarantees yet for MPLS. \section{Numerical Experiments} \label{sec:numerics} We test the proposed MPLS algorithm on rotation synchronization, while comparing with state-of-the-art methods. We also try simpler ideas than MPLS that are based on the basic strategy of CEMP. All computational tasks were implemented on a machine with 2.5GHz Intel i5 quad core processors and 8GB memory. \subsection{Implementation} \label{sec:implementation} We use the following default parameters for Algorithm \ref{alg:cemp}: $|C_{ij}|=50$ for $ij\in E$; $T=5$; $\beta_t=2^{t}$ and $t=0, \ldots, 5$. 
If an edge is not contained in any 3-cycle, we set its corruption level as 1. For MPLS-$SO(3)$, which we refer to in this section as MPLS, we use the above parameters of Algorithm \ref{alg:cemp} and the following ones for $t\geq 1$: $$\alpha_t=1/(t+1) \ \text{ and } \ \tau_{t}=\inf_x\left\{\hat P_t(x)> \max\{1-0.05t\,,0.8\}\right\}.$$ Here, $\hat P_t$ denotes the empirical distribution of $\{\alpha_t h_{ij,t}+$ $(1-\alpha_t)r_{ij,t}\}_{ij\in E}$. That is, for $t=0$, 1, 2, 3, we ignore $0\%$, $5\%$, $10\%$, $15\%$ of the edges that have the highest values of $\alpha_t h_{ij,t}+$ $(1-\alpha_t)r_{ij,t}$, and for $t \geq 4$ we ignore $20\%$ of such edges. $F(x)$ for MPLS is chosen as $x^{-3/2}$, corresponding to $\rho(x)=\sqrt x$. For simplicity and consistency, we use these choices of parameters in all of our experiments. We remark that our choice of $\beta_t$ in Algorithm \ref{alg:cemp} is supported by the theory of \citet{cemp}. We found that MPLS is not very sensitive to its parameters. One can choose other values of $\{\beta_t\}_{t\geq 0}$, for example any geometric sequence with ratio 2 or less, and stop after several iterations. Similarly, one may replace 0.8 and 0.05 in the definition of $\tau_t$ with values in $0.7$--$0.9$ and $0.01$--$0.1$, respectively, and obtain similar performance on average. We test two previous state-of-the-art IRLS methods: IRLS-GM \cite{ChatterjeeG13_rotation} with $\rho(x) = x^2/(x^2+25)$, $F(x)=25/(x^2+25)^2$ and IRLS-$\ell_{1/2}$ \cite{L12} with $\rho(x)=\sqrt x$, $F(x)=x^{-3/2}$. We use their implementation by \citet{L12}. We have also separately implemented the part of MPLS in Algorithm~\ref{alg:SO3} that initializes the rotations, and refer to it by CEMP+MST. Recall that it solves the rotations by direct propagation along the minimum weighted spanning tree of the graph with weights obtained by Algorithm \ref{alg:cemp} (CEMP).
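The CEMP+MST initialization can be sketched with SciPy's graph routines. This is a simplified illustration (hypothetical function and variable names, assuming a connected graph and dictionary-stored measurements), not the authors' code:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def mst_init(n, edges, s_est, R_meas):
    """CEMP+MST-style initialization (simplified sketch).

    edges  : list of pairs (i, j)
    s_est  : estimated corruption level per edge (CEMP output)
    R_meas : dict (i, j) -> 3x3 measured relative rotation R_ij ~ R_i R_j^T
    Rotations are propagated from node 0 along a minimum spanning tree
    of the corruption-weighted graph.
    """
    W = np.zeros((n, n))
    for (i, j), s in zip(edges, s_est):
        W[i, j] = W[j, i] = s + 1e-12   # keep zero-corruption edges present
    tree = minimum_spanning_tree(csr_matrix(W))
    order, parent = breadth_first_order(tree, i_start=0, directed=False)
    R = [np.eye(3) for _ in range(n)]
    for v in order[1:]:
        u, v = int(parent[v]), int(v)
        # R_uv ~ R_u R_v^T  implies  R_v = R_uv^T R_u
        R_uv = R_meas[(u, v)] if (u, v) in R_meas else R_meas[(v, u)].T
        R[v] = R_uv.T @ R[u]
    return R
```

With clean measurements on the tree edges, this recovers the ground truth up to the global gauge fixed by node 0.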
We also test the application of this initialization to the main algorithms in \citet{ChatterjeeG13_rotation} and \citet{L12} and refer to the resulting methods by CEMP+IRLS-GM and CEMP+IRLS-$\ell_{1/2}$, respectively. We remark that the original algorithms initialize by a careful least absolute deviations minimization. We use the convergence criterion $\sum_{i\in[n]}\|\Delta \boldsymbol\Omega_{i,t}\|_F/(\sqrt2n)< 0.001$ of \citet{L12} for all the above algorithms. Because the solution is determined up to a right group action, we align our estimated rotations $\{\hat\boldsymbol{R}_i\}$ with the ground truth ones $\{ \boldsymbol{R}_i^*\}$. That is, we find a rotation matrix $\boldsymbol{R}_\text{align}$ so that $\sum_{i\in [n]}\|\hat\boldsymbol{R}_i\boldsymbol{R}_\text{align}-\boldsymbol{R}_i^*\|_F^2$ is minimized. For synthetic data, we report the following mean estimation error in degrees: $180\cdot\sum_{i\in [n]} d(\hat \boldsymbol{R}_i\boldsymbol{R}_\text{align}\,, \boldsymbol{R}_i^*)/n$. For real data, we also report the median of $\{180\cdot d(\hat \boldsymbol{R}_i\boldsymbol{R}_\text{align}\,, \boldsymbol{R}_i^*)\}_{i\in [n]}$. \subsection{Synthetic Settings} We test the methods in the following two types of artificial scenarios. In both scenarios, the graph is generated by the Erd\H{o}s-R\'{e}nyi model $G(n,p)$ with $n=200$ and $p=0.5$. \subsubsection{Uniform Corruption} We consider the following random model for generating $\boldsymbol{R}_{ij}$ \begin{equation} \boldsymbol{R}_{ij}=\begin{cases} \text{Proj}(\boldsymbol{R}_{ij}^*+\sigma \boldsymbol{W}_{ij}),&\text{w.p. } 1-q;\\ \tilde \boldsymbol{R}_{ij}\sim \text{Haar}(SO(3)),& \text{w.p. 
} q, \end{cases} \end{equation} where Proj denotes the projection onto $SO(3)$; $\boldsymbol{W}_{ij}$ is a $3 \times 3$ Wigner matrix whose entries are i.i.d.~standard normal; $\sigma\geq 0$ is a fixed noise level; $q$ is the probability that an edge is corrupted and Haar$(SO(3))$ is the Haar probability measure on $SO(3)$. We clarify that for any $3 \times 3$ matrix $\boldsymbol{A}$, $\text{Proj}(\boldsymbol{A}) = \operatorname*{arg\,min}_{\boldsymbol{R}\in SO(3)} \|\boldsymbol{R}-\boldsymbol{A}\|_F$. We test the algorithms with four values of $\sigma:$ $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the uniform model and report it as a function of $q$ in Figure \ref{fig:s1}. We note that MPLS consistently outperforms the other methods for all tested values of $q$ and $\sigma$. In the noiseless case, MPLS exactly recovers the group ratios even when $70\%$ of the edges are corrupted. It also nearly recovers them with $80\%$ corrupted edges, where the estimation errors for IRLS-GM and IRLS-$\ell_{1/2}$ are higher than 30 degrees. MPLS is also shown to be stable under high levels of noise. Since all algorithms produce poor solutions when $q=0.9$, we only show results for $0\leq q\leq 0.8$. \begin{figure}[h] \centering \includegraphics[width=8cm]{uniform.pdf} \caption{Performance under uniform corruption. The mean error (in degrees) is plotted against the corruption probability $q$ for 4 values of $\sigma$. } \label{fig:s1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8cm]{adv.pdf} \caption{Performance under self-consistent corruption. 
The mean error is plotted against the corruption probability $q$ for 4 values of $\sigma$.} \label{fig:adv} \end{figure} \begin{table*}[!htbp] \centering \resizebox{2\columnwidth}{!}{ \renewcommand{\arraystretch}{1.3} \tabcolsep=0.1cm \begin{tabular}{|l||c|c||c|c|c|c||c|c|c|c||c|c|c|c||c|c|c|c|} \hline Algorithms & \multicolumn{2}{c||}{}& \multicolumn{4}{c||}{IRLS-GM} & \multicolumn{4}{c||}{IRLS-$\ell_{\frac12}$} & \multicolumn{4}{c||}{CEMP+MST}& \multicolumn{4}{c|}{MPLS} \\ \text{Dataset}& $n$ & $m$ & {\large$\tilde{e}$} & {\large $\hat{e}$} & runtime & iter $\#$ & {\large$\tilde{e}$} & {\large $\hat{e}$} & runtime & iter $\#$ & {\large$\tilde{e}$} & {\large $\hat{e}$} & runtime & iter $\#$ &{\large$\tilde{e}$} & {\large $\hat{e}$} & runtime & iter $\#$\\\hline Alamo& 564 & 71237 & 3.64 & 1.30 & 14.2 & 10+8 & 3.67 & 1.32 & 15.5 & 10+9 & 4.05 & 1.62 & \textbf{10.38} & 6 & \textbf{3.44} & \textbf{1.16} & 20.6 & 6+8 \\\hline Ellis Island& 223 & 17309 & 3.04 & 1.06 & 3.2 & 10+9 & 2.71 & 0.93 & 2.8 & 10+13 & 2.94 & 1.11 & \textbf{2.4} & 6 & \textbf{2.61} & \textbf{0.88} & 4.0 & 6+11 \\\hline Gendarmenmarkt& 655 & 32815& \textbf{39.24} & \textbf{7.07} & 6.5 & 10+14 & 39.41 & 7.12 & 7.3 & 10+19 & 45.33 & 8.62 & \textbf{4.7} & 6 & 44.94 & 9.87 & 17.8 & 6+25 \\\hline Madrid Metropolis & 315 & 14903 & 5.30 & 1.78 & 3.8 & 10+30 & 4.88 & 1.88 & 2.7 & 10+12 & 5.10 & 1.66 & \textbf{2.1} & 6 & \textbf{4.65} & \textbf{1.26} & 5.2 & 6+23 \\\hline Montreal N.D.& 442 & 44501 & 1.25 & 0.58 & 6.5 & 10+6 & 1.22 & 0.57 & 7.3 & 10+8 & 1.33 & 0.79 & \textbf{6.3} & 6 & \textbf{1.04} & \textbf{0.51} & 9.3 & 6+7 \\\hline Notre Dame & 547 & 88577 & 2.63 & 0.78 & 17.2 & 10+7 & 2.26 & 0.71 & 22.5 & 10+10 & 2.35 & 0.94 & \textbf{13.2} & 6 & \textbf{2.06} & \textbf{0.67} & 31.5 & 6+8 \\\hline NYC Library& 307 & 13814 & 2.71 & 1.37 & 2.5 & 10+14 & 2.66 & 1.30 & 2.6 & 10+15 & 3.00 & 1.41 & \textbf{1.9} & 6 & \textbf{2.63} & \textbf{1.24} & 4.5 & 6+14 \\\hline Piazza Del Popolo & 306 & 18915 & 4.10 
& 2.17 & 2.8 & 10+9 & 3.99 & 2.09 & 3.1 & 10+13 & \textbf{3.44} & \textbf{1.57} & \textbf{2.6} & 6 & 3.73 & 1.93 & 3.5 & 6+3 \\\hline Piccadilly& 2031 & 186458 & 5.12 & 2.02 & 153.5 & 10+16 & 5.19 & 2.34 & 170.2 & 10+19 & 4.66 & 1.98 & \textbf{45.8} & 6 & \textbf{3.93} & \textbf{1.81} & 191.9 & 6+21 \\\hline Roman Forum& 989 & 41836 & 2.66 & 1.58 & 8.6 & 10+9 & 2.69 & 1.57 & 11.4 & 10+17 & 2.80 & 1.45 & \textbf{6.1} & 6 & \textbf{2.62} & \textbf{1.37} & 8.8 & 6+8 \\\hline Tower of London& 440 & 15918 & 3.42 & 2.52 & 2.6 & 10+8 & 3.41 & 2.50 & 2.4 & 10+12 & \textbf{2.84} & \textbf{1.57} & \textbf{2.2} & 6 & 3.16 & 2.20 & 2.7 & 6+7 \\\hline Union Square& 680 & 17528 & 6.77 & 3.66 & 5.0 & 10+32 & 6.77 & 3.85 & 5.6 & 10+47 & 7.47 & 3.64 & \textbf{2.5} & 6 & \textbf{6.54} & \textbf{3.48} & 5.7 & 6+21 \\\hline Vienna Cathedral& 770 & 87876 & 8.13 & 1.92 & 28.3 & 10+13 & 8.07 & \textbf{1.76} & 45.4 & 10+23 & \textbf{6.91} & 2.63 & \textbf{13.1} & 6 & 7.21 & 2.83 & 42.6 & 6+19 \\\hline Yorkminster & 410 & 20298 & 2.60 & 1.59 & \textbf{2.4} & 10+7 & \textbf{2.45} & 1.53 & 3.3 & 10+9 & 2.49 & \textbf{1.37} & 2.8 & 6 & 2.47 & 1.45 & 3.9 & 6+7 \\\hline \end{tabular}} \caption{Performance on the Photo Tourism datasets: $n$ and $m$ are the number of nodes and edges, respectively; $\tilde e$ and $\hat e$ indicate mean and median errors in degrees, respectively; runtime is in seconds; the numbers of iterations are explained in the main text.}\label{tab:real} \end{table*} \subsubsection{Self-Consistent Corruption} In order to simulate self-consistent corruption, we independently draw from Haar($SO(3)$) two classes of rotations: $\{\boldsymbol{R}_i^*\}_{i\in [n]}$ and $\{\tilde \boldsymbol{R}_i\}_{i\in [n]}$. We denote their corresponding relative rotations by $\boldsymbol{R}_{ij}^*=\boldsymbol{R}_i^*\boldsymbol{R}_j^{*\intercal}$ and $\tilde\boldsymbol{R}_{ij}=\tilde\boldsymbol{R}_i\tilde\boldsymbol{R}_j^{\intercal}$ for $ij\in E$. 
The idea is to assign to edges in $E_g$ and $E_b$ relative rotations from two different classes, so that cycle-consistency occurs in both $G([n],E_g)$ and $G([n],E_b)$. We also add noise to these relative rotations and assign them to the two classes according to a Bernoulli model, so that one class is more significant. More specifically, for $ij \in E$ \begin{equation} \boldsymbol{R}_{ij}=\begin{cases} \text{Proj}(\boldsymbol{R}^*_{ij}+\sigma \boldsymbol{W}_{ij}),&\text{w.p. } 1-q;\\ \text{Proj}(\tilde\boldsymbol{R}_{ij}+\sigma \boldsymbol{W}_{ij}),& \text{w.p. } q, \end{cases} \end{equation} where $q$, $\sigma$, and $\boldsymbol{W}_{ij}$ are the same as in the above uniform corruption model. We remark that an information-theoretic threshold for exact recovery when $\sigma =0$ is $q=0.5$. That is, for $q\geq 0.5$ there is no hope of exactly recovering $\{\boldsymbol{R}_{i}^*\}_{i\in [n]}$. We test the algorithms with four values of $\sigma:$ $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the self-consistent model and report it as a function of $q$ in Figure \ref{fig:adv}. We focus on values of $q$ approaching the information-theoretic bound $0.5$ ($q=0.4$, $0.45$ and $0.48$). We note that MPLS consistently outperforms the other algorithms and that when $\sigma=0$ it can exactly recover the ground truth rotations even when $q=0.48$. \subsection{Real Data} We compare the performance of the different algorithms on the Photo Tourism datasets \citep{1dsfm14}. Each of the 14 datasets consists of hundreds of 2D images of a 3D scene taken by cameras with different orientations and locations. For each pair of images of the same scene, we use the pipeline proposed by \citet{ozyesil2015robust} to estimate the relative 3D rotations. The ground truth camera orientations are also provided. Table \ref{tab:real} compares the performance of IRLS-GM, IRLS-$\ell_{1/2}$, CEMP+MST and MPLS, while reporting mean and median errors, runtime and number of iterations.
The number of iterations is the sum of the number of iterations used to initialize the rotations and the number of iterations of the rest of the algorithm, where CEMP+MST only has iterations in the initialization step. MPLS achieves the lowest mean and median errors on $9$ out of $14$ datasets with runtime comparable to both IRLS methods, while IRLS-GM only outperforms MPLS on the Gendarmenmarkt dataset. This dataset is relatively sparse and lacks cycle information. It contains a large amount of self-consistent corruption and none of the methods solve it reasonably well. Among the 4 tested methods, the fastest is CEMP+MST, which achieves the shortest runtime on 13 out of 14 datasets. Moreover, CEMP+MST is 3 times faster than the other tested methods on the largest dataset (Piccadilly). We remark that CEMP+MST is able to achieve results comparable to common IRLS on most datasets, and has superior performance on 2 datasets, which contain some perfectly estimated edges. In summary, for most of the datasets, MPLS provides the highest accuracy and CEMP+MST obtains the fastest runtime. \section{Conclusion} \label{sec:conclusion} We proposed a framework for solving group synchronization under high corruption and noise. This general framework requires a successful solution of the weighted least squares problem, which depends on the group. For $SO(3)$, we explained how a well-known solution integrates well with our framework. We demonstrated state-of-the-art performance of our framework for $SO(3)$ synchronization. We have motivated our method as an alternative to IRLS and explained how it may overcome the limitations of IRLS when applied to group synchronization. There are many directions to expand our work. One can carefully adapt and implement our proposed framework for other groups that occur in practice. One may also develop theoretical guarantees for the convergence and exact recovery of MPLS. \section*{Acknowledgement} This work was supported by NSF award DMS-18-21266. 
We thank Tyler Maunu for his valuable comments on an earlier version of this manuscript.
\section*{Supplemental Material} \renewcommand{\theequation}{\thesection.\arabic{equation}} \renewcommand{\thefigure}{\thesection.\arabic{figure}} \numberwithin{equation}{section} \section{Collision integral involving $2\leftrightarrow 2$ processes} \label{app-collision} In this section we calculate the collision integrals corresponding to the processes of Compton scattering and electron-positron annihilation with a flip of chirality. Let us consider these processes separately. \paragraph{The Compton scattering.} The matrix element of the $t$-channel process $e_{L}(k)+\gamma(p)\to \gamma(k')+e_{R}(p')$ shown in Fig.~1(a) reads \begin{equation} \mathcal{M}_{C}=\frac{1}{i}(-ie)^{2} \bar{u}_{s'}(p')P_{R}\gamma^{\nu} i\mathcal{S}_{\rm ret}(q)\gamma^{\mu}P_{R}u_{s}(k)\varepsilon^{*}_{\mu}(k',\lambda')\varepsilon_{\nu}(p,\lambda), \end{equation} where $P_{R}=(1+\gamma_{5})/2$ is the right chiral projector and the propagator has the form \begin{equation} \label{prop-fermion} \mathcal{S}_{\rm ret}(q)=[\mathcal{S}_{0,{\rm ret}}^{-1}(q)-\Sigma_{\rm ret}(q)]^{-1}=\frac{\cancel{q}-\cancel{\varpi}+m_{e}\mathds{1}}{(q-\varpi)^{2}-m_{e}^{2}}. \end{equation} The 4-vector $\varpi^{\mu}={\rm tr}(\gamma^{\mu}\Sigma_{\rm ret})/4$ equals \cite{LeBellac} \begin{equation} \label{Sigma-mu} \varpi^{\mu}=\left(\varpi^{0},\,\frac{\mathbf{q}}{q} \left[-\frac{m_{\rm th}^{2}}{2q}+\frac{q^{0}}{q}\varpi^{0}\right]\right), \end{equation} where $m_{\rm th}=eT/2$ is the electron thermal mass and the function $\varpi^{0}$ is given by \begin{equation} \label{sigma-0-complex} \varpi^{0}(q^{0},\mathbf{q})=\frac{m_{\rm th}^{2}}{2}\int\frac{d\Omega_{\mathbf{v}}}{4\pi}\frac{1}{q^{0}-\mathbf{q}\cdot\mathbf{v}}=\frac{m_{\rm th}^{2}}{4q}\ln\frac{q^{0}+q}{q^{0}-q}. \end{equation} Note that we use the retarded propagator for the intermediate particle while computing the scattering matrix element for the collision integral. 
This prescription arises in finite temperature quantum field theory when one derives the collision term from the Schwinger-Keldysh formalism, see \cite{Blaizot:1999xk,Blaizot:2001nr} as well as \cite{Ghiglieri:2020dpq} for a recent discussion. Because of the chiral projectors, only the massive term in the numerator in Eq.~(\ref{prop-fermion}) contributes to the matrix element. In the leading order in $m_{e}$ we get \begin{equation} \mathcal{M}_{C}^{(1)}=-m_{e}e^{2}\frac{\bar{u}_{s'}(p')\gamma^{\nu}\gamma^{\mu}P_{R}u_{s}(k)}{(q-\varpi)^{2}}\varepsilon^{*}_{\mu}(k',\lambda')\varepsilon_{\nu}(p,\lambda). \end{equation} Taking the squared modulus of this matrix element and summing over all possible spins and polarizations (we sum over the spin projections because we included the chiral projectors directly in the matrix element), we get \begin{equation} \label{matrix-elem-squre-compton} |\mathcal{M}_{C}^{(1)}|^{2}=\frac{8m_{e}^{2}e^{4}(k\cdot p')}{|(q-\varpi)^{2}|^{2}}. \end{equation} \paragraph{The annihilation process.} Let us now consider the process of annihilation $e_{L}(k)+\overline{e_{R}}(p)\to \gamma(k')+\gamma(p')$ shown in Fig.~1(b). It is worth noting that the incoming positron is the antiparticle to the right electron, i.e., it is left, so that the chirality is not conserved in this reaction. Following the same steps as in the case of Compton scattering, we obtain the matrix element \begin{equation} \mathcal{M}_{A}^{(1)}=-m_{e}e^{2}\frac{\bar{v}_{s'}(p)\gamma^{\nu}\gamma^{\mu}P_{R}u_{s}(k)}{(q-\varpi)^{2}}\varepsilon^{*}_{\mu}(k',\lambda')\varepsilon^{*}_{\nu}(p',\lambda) \end{equation} and its squared modulus \begin{equation} \label{matrix-elem-squre-annihil} |\mathcal{M}_{A}^{(1)}|^{2}=\frac{8m_{e}^{2}e^{4}(k\cdot p)}{|(q-\varpi)^{2}|^{2}}, \qquad q=k-k'. \end{equation} This matrix element coincides with that of the Compton process (\ref{matrix-elem-squre-compton}) up to the terms $\mathcal{O}(q)$ in the numerator. 
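As a consistency check of Eq.~(\ref{matrix-elem-squre-compton}), the spin and polarization sums can be sketched as follows (we work in Feynman gauge, neglect $\varpi$ in the numerator, and use $P_{R}(\cancel{k}+m_{e})P_{L}=\cancel{k}P_{L}$, the vanishing of traces with an odd number of $\gamma$-matrices, and the identities $\gamma^{\mu}\cancel{k}\gamma_{\mu}=-2\cancel{k}$ and ${\rm Tr}[\cancel{p}'\cancel{k}P_{L}]=2(k\cdot p')$):

```latex
\begin{align*}
\sum_{\rm spins,\,pol.}|\mathcal{M}_{C}^{(1)}|^{2}
&=\frac{m_{e}^{2}e^{4}}{|(q-\varpi)^{2}|^{2}}\,
{\rm Tr}\big[(\cancel{p}'+m_{e})\,\gamma^{\nu}\gamma^{\mu}P_{R}(\cancel{k}+m_{e})P_{L}\,\gamma_{\mu}\gamma_{\nu}\big]\\
&=\frac{m_{e}^{2}e^{4}}{|(q-\varpi)^{2}|^{2}}\,
{\rm Tr}\big[\cancel{p}'\,\gamma^{\nu}\gamma^{\mu}\cancel{k}\,\gamma_{\mu}\gamma_{\nu}\,P_{L}\big]
=\frac{4m_{e}^{2}e^{4}}{|(q-\varpi)^{2}|^{2}}\,{\rm Tr}\big[\cancel{p}'\cancel{k}\,P_{L}\big]
=\frac{8m_{e}^{2}e^{4}(k\cdot p')}{|(q-\varpi)^{2}|^{2}},
\end{align*}
```

in agreement with (\ref{matrix-elem-squre-compton}); the annihilation result (\ref{matrix-elem-squre-annihil}) follows from the same trace with $p'\to p$.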
It is important to note that there is also the $u$-channel of annihilation, in which the outgoing photons are interchanged. However, the matrix element is exactly the same with $q=k-p'$ instead of $q=k-k'$. Changing the variables $k'\leftrightarrow p'$ in the collision integral, we can see that the result is simply twice the result of the $t$-channel. There is, however, a factor of $1/2$ in front of the collision integral which takes into account the indistinguishability of the outgoing photons. In our calculation, we therefore omit both the $u$-channel and the factor of $1/2$, as the two effects compensate each other. Taking into account the identity \begin{equation} [1-n_{F}(p)]n_{B}(p)=n_{F}(p)[1+n_{B}(p)]=\frac{1}{2\,{\rm sinh\,}p/T}, \end{equation} we conclude that the Compton scattering and the annihilation process make equal contributions to the collision integral. \paragraph{Calculation of the chirality flipping rate.} Substituting the expressions (\ref{matrix-elem-squre-compton}) and (\ref{matrix-elem-squre-annihil}) corresponding to the Compton scattering and annihilation processes into the expression for the chirality flipping rate~(6), we arrive at the following expression \begin{equation} \label{chirality-flip-compton-1} \Gamma_{\rm flip}^{2\leftrightarrow 2}=\frac{3m_{e}^{2}e^{4}T}{128\pi} \int_{0}^{\infty}q\,dq \int_{0}^{\pi}d\cos\theta_{kq}\left.\frac{ 1-\cos^{2}\theta_{kq}}{|(q^{0}-\varpi^{0})^{2}-(\mathbf{q}-\boldsymbol{\varpi})^{2}|^{2}}\right|_{q^{0}=q\cos\theta_{kq}}. \end{equation} Let us carefully consider its denominator \begin{eqnarray} \psi(q,\cos\theta_{kq})&=&\left.\frac{(q^{0}-\varpi^{0})^{2}-(\mathbf{q}-\boldsymbol{\varpi})^{2}}{q^{2}}\right|_{q^{0}=q\cos\theta_{kq}}\nonumber\\ &=&\Big[\cos\theta_{kq}-\frac{m_{\rm th}^{2}}{4q^{2}}\Big(\ln\frac{1+\cos\theta_{kq}}{1-\cos\theta_{kq}}-i\pi\Big)\Big]^{2}\nonumber\\ &-&\Big[1+\frac{m_{\rm th}^{2}}{2q^{2}}-\cos\theta_{kq}\frac{m_{\rm th}^{2}}{4q^{2}}\Big(\ln\frac{1+\cos\theta_{kq}}{1-\cos\theta_{kq}}-i\pi\Big)\Big]^{2} . 
\end{eqnarray} It is easy to see that it depends on $q$ only through the ratio $q^{2}/m_{\rm th}^{2}$ and satisfies \begin{equation} \psi(q,-\cos\theta_{kq})=\psi^{*}(q,\cos\theta_{kq}), \end{equation} so that $|\psi|^{2}$ is invariant under the reflection $\theta_{kq}\to \pi -\theta_{kq}$. Introducing the new integration variables \begin{equation} \xi=q^{2}/m_{\rm th}^{2},\qquad y=\cos\theta_{kq}, \end{equation} we get the expression for the chirality flipping rate in the form \begin{equation} \Gamma_{\rm flip}^{2\leftrightarrow 2}=\frac{m_{e}^{2}}{T}\alpha\times\frac{3}{8}\int_{0}^{\infty}d\xi\int_{0}^{1}dy\frac{1-y^{2}}{\xi^{2}|\psi(m_{\rm th}\sqrt{\xi},y)|^{2}}. \end{equation} As the final step, we show that \begin{eqnarray} &&\frac{1-y^{2}}{\xi^{2}|\psi(m_{\rm th}\sqrt{\xi},y)|^{2}}\nonumber\\ &&=\frac{1-y^{2}}{\xi^{2}\left|\Big[y-\frac{1}{4\xi}\Big(\ln\frac{1+y}{1-y}-i\pi\Big)\Big]^{2}-\Big[1+\frac{1}{2\xi}-\frac{y}{4\xi}\Big(\ln\frac{1+y}{1-y}-i\pi\Big)\Big]^{2}\right|^{2}}\nonumber\\ &&=\frac{\xi^{2}/(1-y^{2})}{\left[\left(\xi+\frac{1}{4}\ln\frac{1+y}{1-y}+\frac{1}{2(1-y)}\right)^{2}+\frac{\pi^{2}}{16}\right]\left[\left(\xi-\frac{1}{4}\ln\frac{1+y}{1-y}+\frac{1}{2(1+y)}\right)^{2}+\frac{\pi^{2}}{16}\right]}, \end{eqnarray} and we end up with Eq.~(9), where the constant $C$ is equal to \begin{eqnarray} C&=&\frac{3}{8}\int_{0}^{1}\frac{dy}{1-y^{2}}\int_{0}^{\infty} \frac{\xi^{2}\,d\xi}{\left[\left(\xi+\frac{1}{4}\ln\frac{1+y}{1-y}+\frac{1}{2(1-y)}\right)^{2}+\frac{\pi^{2}}{16}\right]\left[\left(\xi-\frac{1}{4}\ln\frac{1+y}{1-y}+\frac{1}{2(1+y)}\right)^{2}+\frac{\pi^{2}}{16}\right]}\nonumber\\ &\approx& 0.24. \end{eqnarray} \section{Chirality flipping rate from $1\leftrightarrow 2$ processes} \label{app-1to2} The contribution to the chirality flipping rate from the $1\leftrightarrow 2$ process shown in Fig.~2 can be estimated by Eq.~(10). Let us take into account the thermal corrections and show that they lead to a finite answer.
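The factorized form of the integrand and the value $C\approx 0.24$ quoted in the previous appendix can be cross-checked numerically. The sketch below uses plain midpoint quadrature with the substitution $\xi=t/(1-t)$ mapping $\xi\in(0,\infty)$ to $t\in(0,1)$; the grid sizes are illustrative choices, not part of the derivation:

```python
import math

def brackets(xi, y):
    # The two factors in the denominator of the integrand for C
    L = 0.25 * math.log((1.0 + y) / (1.0 - y))
    d1 = (xi + L + 0.5 / (1.0 - y))**2 + math.pi**2 / 16.0
    d2 = (xi - L + 0.5 / (1.0 + y))**2 + math.pi**2 / 16.0
    return d1, d2

def psi(xi, y):
    # psi = A^2 - B^2 with the (Lambda - i*pi) self-energy factor
    w = 0.25 * (math.log((1.0 + y) / (1.0 - y)) - 1j * math.pi)
    A = y - w / xi
    B = 1.0 + 0.5 / xi - y * w / xi
    return A * A - B * B

# Check the identity (1-y^2)/(xi^2 |psi|^2) = xi^2/((1-y^2) d1 d2)
for xi, y in ((0.7, 0.2), (1.3, 0.4), (3.0, 0.9)):
    d1, d2 = brackets(xi, y)
    lhs = (1.0 - y * y) / (xi**2 * abs(psi(xi, y))**2)
    rhs = xi**2 / ((1.0 - y * y) * d1 * d2)
    assert abs(lhs / rhs - 1.0) < 1e-10

# Midpoint quadrature for C = (3/8) int dy int dxi xi^2/((1-y^2) d1 d2)
ny, nt = 400, 400
C = 0.0
for i in range(ny):
    y = (i + 0.5) / ny
    for j in range(nt):
        t = (j + 0.5) / nt
        xi = t / (1.0 - t)
        jac = 1.0 / (1.0 - t)**2          # dxi = dt/(1-t)^2
        d1, d2 = brackets(xi, y)
        C += xi**2 / ((1.0 - y * y) * d1 * d2) * jac
C *= 0.375 / (ny * nt)
# C should come out close to the quoted value of 0.24
```

The substitution renders the $1/\xi^{2}$ tail of the integrand exactly constant in $t$, so the midpoint rule converges quickly.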
For further convenience, let us decompose the momenta into components along the momentum $\mathbf{k}$ of the incoming electron and transverse to it, \begin{equation} \mathbf{p}=p_{\parallel}\hat{\mathbf{k}}+\mathbf{p}_{\perp},\ \ \mathbf{q}=(k-p_{\parallel})\hat{\mathbf{k}}-\mathbf{p}_{\perp}, \end{equation} where $\hat{\mathbf{k}}=\mathbf{k}/k$ and in the second expression we used the momentum conservation law. Taking the longitudinal components of all momenta to be $\sim T$ (we will verify this \textit{a posteriori}), we can expand the dispersion relations as follows \begin{equation} \epsilon_{k}\approx k+\frac{m_{\rm th}^{2}}{2k},\quad \epsilon_{q}\approx k-p_{\parallel}+\frac{m_{\rm th}^{2}+p_{\perp}^{2}}{2(k-p_{\parallel})},\quad \epsilon_{p}\approx p_{\parallel}+\frac{m_{\gamma}^{2}+p_{\perp}^{2}}{2p_{\parallel}}. \end{equation} Here $m_{\rm th}=eT/2$ and $m_{\gamma}=eT/\sqrt{6}$ are the asymptotic thermal masses of the electron and photon, respectively \cite{LeBellac}. The HTL effective theory predicts a modification of the dispersion relations; taken at face value, this would immediately wipe out all the available phase space for $1\leftrightarrow 2$ processes and lead to a vanishing contribution to the chirality flipping rate. Fortunately, the higher-order (beyond-HTL) corrections also give rise to a finite decay width of the quasiparticles \cite{Thoma:1995ju,Blaizot:1996az,Blaizot:1996hd} and allow for a slight violation of energy conservation in the collision event. The electron decay width is $\gamma_{e}\approx e^{2}T/(4\pi) \log e^{-1}$, while the photon decay width is of higher order in $e$ and thus can be neglected. At the technical level, we can incorporate this finite decay width by replacing the delta function of energies in Eq.~(10) by the corresponding Lorentzian of width $2\gamma_{e}$.
The Lorentzian is sizable only when its argument is smaller than or of the order of its width, \begin{equation} |\epsilon_{k}-\epsilon_{q}-\epsilon_{p}|\approx \frac{m_{\gamma}^{2}}{2p_{\parallel}}+\frac{m_{\rm th}^{2} p_{\parallel}}{2k(k-p_{\parallel})}+\frac{p_{\perp}^{2} k}{2p_{\parallel}(k-p_{\parallel})}\lesssim 2\gamma_{e}\sim Te^{2}\log e^{-1}. \end{equation} This immediately gives the restrictions on the longitudinal and transverse components of the momenta \begin{equation} \label{constraints-momenta} k>p_{\parallel}\gtrsim m_{\rm th}^{2}/\gamma_{e}\sim T/\log e^{-1}, \qquad p_{\perp} \lesssim m_{\rm th}. \end{equation} Now, let us consider the matrix element (11). The scalar products are \begin{equation} k\cdot p\approx \frac{k(m_{\gamma}^{2}+p_{\perp}^{2})}{2p_{\parallel}}+\frac{p_{\parallel}m_{\rm th}^{2}}{2k},\quad k^{2}\approx m_{\rm th}^{2}. \end{equation} Taking into account the constraints (\ref{constraints-momenta}), we obtain \begin{equation} \left|\mathcal{M}_{k\to pq}\right|^{2}=\mathcal{O}(1)\times e^{2}m_{e}^{2}. \end{equation} Then the chirality flipping rate can be written as follows \begin{multline} \Gamma_{\rm flip}^{1\leftrightarrow 2}\propto \frac{e^{2}m_{e}^{2}}{T^{3}}\int_{0}^{\infty}\!\!k^{2}\,dk \int_{0}^{k}\!dp_{\parallel} \frac{n_{F}(k)[1+n_{B}(p_{\parallel})][1-n_{F}(k-p_{\parallel})]}{k p_{\parallel}(k-p_{\parallel})}\\ \times \int p_{\perp}\,dp_{\perp} \delta_{2\gamma_{e}}\left(\frac{m_{\gamma}^{2}}{2p_{\parallel}}+\frac{m_{\rm th}^{2} p_{\parallel}}{2k(k-p_{\parallel})}+\frac{p_{\perp}^{2} k}{2p_{\parallel}(k-p_{\parallel})}\right)\\ \propto \frac{e^{2}m_{e}^{2}}{T^{3}}\int_{0}^{\infty}\!\!dk \int_{0}^{k}\!dp_{\parallel}\ n_{F}(k)[1+n_{B}(p_{\parallel})][1-n_{F}(k-p_{\parallel})] \frac{2}{\pi}{\rm arctan}\frac{4\gamma_{e}}{\frac{m_{\gamma}^{2}}{p_{\parallel}}+\frac{m_{\rm th}^{2}p_{\parallel}}{k(k-p_{\parallel})}}.
\end{multline} Without the arctangent, the integral over $p_{\parallel}$ would be logarithmically divergent at small momenta because of the Bose-Einstein distribution function. However, at the scale $p_{\parallel,\rm min}\sim m_{\gamma}^{2}/\gamma_{e}\sim T/\log e^{-1}$ the arctangent cuts off this divergence. Finally, we get the estimate \begin{equation} \Gamma_{\rm flip}^{1\leftrightarrow 2}\sim \frac{m_{e}^{2}}{T}\times \alpha \log\log\alpha^{-1}, \end{equation} which is again of first order in the electromagnetic coupling constant, with a slight $\log\log$ enhancement. Thus, we confirm that the nearly collinear $1\leftrightarrow 2$ processes also contribute to the leading-order chirality flipping rate. This contribution is studied in our companion paper \cite{PaperII}. \end{document}
\section{Acknowledgments} The authors would like to express gratitude to Prof. Amos Breskin and Dr. Arindam Roy of the Weizmann Institute for fruitful discussions and help. The authors are thankful to the High-Energy group at the Weizmann Institute working on the ATLAS experiment for their help in making parts of the setup. The authors would like to thank the group of Prof. Mirko Planini\'{c} from the University of Zagreb for help with setting up the floating picoammeters and the reliable performance of their devices during the measurements. Special thanks go to the Physics Core Facility group of the Weizmann Institute for their constant support during the experiment. \section{Discussion and conclusions} \label{sec:conclusions} The experimental setup built at the Weizmann Institute of Science is used to measure the electron and ion transparencies of a bipolar wire grid operated in a magnetic field in passive mode. Studies are made in Ne-based and Ar-based gas mixtures using CH$_4$, CF$_4$, and CO$_2$ as quenchers. The results for Ar/CH$_4$ (90:10) are qualitatively consistent with the measurements published in Ref.~\cite{AMENDOLIA198547}. The performance of the bipolar grid is evaluated in terms of its transparencies to electron and ion currents traversing it from above and from below, respectively. Since the transparencies of the grid strongly depend on the electric fields coupled to it (Fig.~\ref{fig:fom_vs_teg_ArCH4_90_10}~vs~\ref{fig:fom_vs_teg_ArCH4_90_10_Et0p5}), most measurements are performed in a configuration with electric fields of 320~V/cm and 480~V/cm above and below the grid, respectively. This configuration is chosen to facilitate comparisons between different gas mixtures. As a result, several common features can be seen in all measurements (Figs.~\ref{fig:t_necf49010},~\ref{fig:fom_vs_teg},~\ref{fig:fom_vs_teg_NeCF4_50_50}--\ref{fig:fom_vs_teg_ArCO2_90_10}).
Without voltage bias on the grid wires, the grid transparency to ions is about 70\% and the grid transparency to electrons is above 90\% in all gas mixtures, except in Ar/CO$_2$ (Fig.~\ref{fig:fom_vs_teg_ArCO2_90_10}), where it is close to the ion transparency. Transparency values for electrons and ions measured at zero bias in the main field configuration do not depend on the strength of the magnetic field. Increasing the voltage bias on the wires to $\pm$40~V brings the ion transparency to zero in all gases, even in the strongest measured magnetic field of 1.2~T. At a 1~mm pitch between the grid wires, this bias corresponds to an electric field of approximately 800~V/cm, twice the average of the coupled fields. The empirical estimate that the field inside the grid required to zero out the ion current through it must be twice the field coupled to the grid also holds for the other field configurations measured in this study (Figs.~\ref{fig:fom_vs_teg_ArCH4_90_10_Et0p5}, \ref{fig:fom_vs_teg_ArCH4_90_10_Ed140}). The shape of the ion transparency curve shows a weak dependence on the magnetic field, although the nature of the elevation that develops in the 10--30~V region in stronger magnetic fields is not clear. Garfield-based simulations reproduce the grid transparency to ions well but show no shape dependence on the magnetic field. The grid transparency to electrons is sensitive to the magnetic field, and the voltage bias required to zero out the electron current increases in higher magnetic fields by nearly a factor of 2 compared to that for ions (Figs.~\ref{fig:fom_vs_teg_NeCH4_90_10},~\ref{fig:fom_vs_teg_ArCH4_90_10}). At the $\pm$40~V required to stop the ion flow, in a 1.2~T field the grid retains 45--60\% transparency to electrons. The simulations reproduce the behavior of the grid transparency to electrons in the absence of a magnetic field, but in some gases the simulations show significant deviations from the measured curves when the magnetic field is present (Fig.~\ref{fig:mc_to_data}).
To quantitatively evaluate the impact of the grid element in the structure of the \textrm{TPC}\xspace, a figure of merit is introduced as explained in Sect.~\ref{sec:results}. Its smaller values correspond to better performance of the \textrm{TPC}\xspace with the grid in suppressing the positive ion backflow. A grid without voltage bias on its wires makes almost no impact on the \textrm{TPC}\xspace performance in any magnetic field, except in the Ar/CO$_2$ gas mixture. Although this may be seen as a trivial statement, the measurements show that if a grid plane is built into a \textrm{TPC}\xspace for the purpose of decoupling the drift and amplification regions, it makes almost no impact on the \textrm{TPC}\xspace performance (Figs.~\ref{fig:fom_vs_teg},~\ref{fig:fom_vs_teg_NeCF4_50_50}--\ref{fig:fom_vs_teg_ArCO2_90_10}). With the voltage bias on the wires, the grid performance strongly depends on the magnetic field. The effect of the grid in gas mixtures with a small Lorentz angle using CO$_2$ as a quencher is insignificant, but it drastically improves in gases with larger Lorentz angles, such as the mixtures with CF$_4$ and CH$_4$. Between these two gases, CH$_4$ shows slightly better results (Fig.~\ref{fig:fom_vs_teg_NeCH4_90_10}~vs~\ref{fig:fom_vs_teg}, and Fig.~\ref{fig:fom_vs_teg_NeCH4_50_50}~vs~\ref{fig:fom_vs_teg_NeCF4_50_50}). The results somewhat improve in the mixtures with a lower concentration of the quenching gas (Fig.~\ref{fig:fom_vs_teg_NeCH4_90_10}~vs~\ref{fig:fom_vs_teg_NeCH4_50_50}, Fig.~\ref{fig:fom_vs_teg}~vs~\ref{fig:fom_vs_teg_NeCF4_50_50}, and Fig.~\ref{fig:fom_vs_teg_ArCH4_90_10}~vs~\ref{fig:fom_vs_teg_ArCH4_50_50}). For the same quencher, Ar-based mixtures show better results compared to Ne-based mixtures (Fig.~\ref{fig:fom_vs_teg_ArCH4_90_10}~vs~\ref{fig:fom_vs_teg_NeCH4_90_10}, and Fig.~\ref{fig:fom_vs_teg_ArCH4_50_50}~vs~\ref{fig:fom_vs_teg_NeCH4_50_50}).
Although the results measured in this study are qualitatively consistent with the expectations coming from the theory of electron and ion drift in gases~\cite{Hilke_2010,Sauli:1992bu}, the quantitative comparison shows significant deviations from the measured data, especially for electrons. Simulations based on the Garfield++ toolkit are not sufficiently accurate in describing the measurements in some gases (Fig.~\ref{fig:mc_to_data}). To get more insight into this problem, measurements were also done in other field configurations (Figs.~\ref{fig:fom_vs_teg_ArCH4_90_10_Et0p5},~\ref{fig:fom_vs_teg_ArCH4_90_10_Ed140}). Changing the fields results in different behavior of the curves, which in some cases is consistent with the expectations determined by the field changes. A surprising result is that although in Ar/CH$_4$ the Lorentz angle in a lower electric field is expected to almost double~\cite{BITTL1997249} compared to the main setting, the measurements show that it results in only a small reduction of the transparency to electrons (Figs.~\ref{fig:fom_vs_teg_ArCH4_90_10}~vs~\ref{fig:fom_vs_teg_ArCH4_90_10_Ed140}). Although some of the measured effects are not reproduced by the simulation, the results reported in the paper demonstrate that a passive bipolar grid operated in a magnetic field above 1~T can be used as an effective instrument to suppress the ion backflow in \textrm{TPCs}\xspace. \section{Results} \label{sec:results} The \textrm{BPG}\xspace transparencies in the Ne/CF$_{4}$ (90:10) gas mixture are shown in Fig.~\ref{fig:t_necf49010}. Results for other gas mixtures are given in the Appendix. \begin{figure}[htb] \centering \includegraphics*[width=0.49\textwidth]{h_g_T_NeCF4_90_10_GEM_1_B_list_summary_table} \caption{\textrm{BPG}\xspace transparency as a function of \ensuremath{\Delta V}\xspace in the Ne/CF$_{4}$ (90:10) gas mixture at different magnetic field settings. $\ensuremath{E_{d}}\xspace=320$~V/cm, $\ensuremath{E_{t}}\xspace=480$~V/cm.
} \label{fig:t_necf49010} \end{figure} Measurements are done with the magnetic field switched off, and at the values 0.4~T, 0.8~T and 1.2~T. \ensuremath{T^{g}_{e}}\xspace and \ensuremath{T^{g}_{i}}\xspace are denoted by closed and open markers, respectively. When the \textrm{BPG}\xspace is at $\ensuremath{\Delta V}\xspace=0$, for all values of the magnetic field $\ensuremath{T^{g}_{e}}\xspace\approx0.95$ and $\ensuremath{T^{g}_{i}}\xspace\approx0.67$, which is defined by the choice of $\ensuremath{E_{d}}\xspace/\ensuremath{E_{t}}\xspace$. With increasing \ensuremath{\Delta V}\xspace both transparencies decrease and reach zero. In the absence of the magnetic field, this occurs at $\ensuremath{\Delta V}\xspace\approx40$~V for both ions and electrons. This voltage remains the same for ions also in the presence of the magnetic field, although the shape of the \ensuremath{T^{g}_{i}}\xspace curve changes around 10--30~V. This effect is not fully understood. In the presence of a magnetic field the behavior of \ensuremath{T^{g}_{e}}\xspace changes: it reaches zero at higher and higher voltages with increasing magnetic field. At 1.2~T, \ensuremath{T^{g}_{e}}\xspace is still around 0.5 when \ensuremath{T^{g}_{i}}\xspace is at zero. Thus, the \textrm{IBF}\xspace can be fully shut off at the expense of losing approximately half of the primary ionization. In the highest measured field setting the shape of \ensuremath{T^{g}_{e}}\xspace exhibits a kink at around $\ensuremath{\Delta V}\xspace=45$~V. Analogous behavior was also seen in~\cite{AMENDOLIA198547}. To quantitatively assess the insertion of the \textrm{BPG}\xspace element into the \textrm{TPC}\xspace structure, one can introduce a figure of merit (\textit{FoM}\xspace) defined as the ratio of the \textrm{IBF}\xspace flowing into the \textrm{TPC}\xspace with and without the \textrm{BPG}\xspace.
The \textit{FoM}\xspace depends on the ratio of the transfer and drift fields $w=\ensuremath{E_{t}}\xspace/\ensuremath{E_{d}}\xspace=1.5$ discussed in Sect.~\ref{sec:et_measurement} and can be defined as: \begin{eqnarray} \textit{FoM}\xspace\left(w,\ensuremath{\Delta V}\xspace\right)=\frac{\ensuremath{T^{m}_{i}}\xspace(w,0)}{\ensuremath{T^{m}_{i}}\xspace(1,0)} \frac{\ensuremath{T^{g}_{i}}\xspace(w,\ensuremath{\Delta V}\xspace)}{\ensuremath{T^{g}_{e}}\xspace(w,\ensuremath{\Delta V}\xspace)}. \label{eqn:fom} \end{eqnarray} The \textit{FoM}\xspace is the product of two terms. The first term results from the discussion of Fig.~\ref{fig:edscan}, namely that a higher \ensuremath{E_{t}}\xspace extracts more ions from the amplification plane of the \textrm{TPC}\xspace. The ion current extracted from that plane is characterized in Eq.~\ref{eqn:fom} by the ion transparency of the \textrm{mesh}\xspace. Thus, the first term is the ratio of \ensuremath{T^{m}_{i}}\xspace at the working setting of $w=1.5$ to that at $w=1$. The latter corresponds to the setup without the \textrm{BPG}\xspace, in which \ensuremath{E_{d}}\xspace is coupled directly to the \textrm{mesh}\xspace. The second term is the ion transparency \ensuremath{T^{g}_{i}}\xspace divided by the electron transparency \ensuremath{T^{g}_{e}}\xspace. The denominator reflects the fact that the loss of primary ionization in the \textrm{BPG}\xspace must be compensated by raising the gain in the \textrm{TPC}\xspace readout plane, which in turn would generate more ions. The \textit{FoM}\xspace defined by Eq.~\ref{eqn:fom} is greater than \ensuremath{T^{g}_{i}}\xspace for any \ensuremath{\Delta V}\xspace. A \textrm{TPC}\xspace with the \textrm{BPG}\xspace has better performance when the \textit{FoM}\xspace takes smaller values.
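As an illustration of Eq.~\ref{eqn:fom}, the following sketch evaluates the \textit{FoM}\xspace for hypothetical transparency values; the mesh transparencies and the 1.1 mesh-ratio below are invented placeholders, not measured numbers (only the zero-bias grid transparencies $\ensuremath{T^{g}_{e}}\xspace\approx0.95$, $\ensuremath{T^{g}_{i}}\xspace\approx0.67$ come from the text):

```python
def fom(t_mi_w, t_mi_1, t_gi, t_ge):
    """Figure of merit: (T^m_i(w,0)/T^m_i(1,0)) * (T^g_i/T^g_e).

    t_mi_w, t_mi_1 -- mesh ion transparencies at w=1.5 and w=1 (at zero bias)
    t_gi, t_ge     -- grid ion/electron transparencies at the chosen DeltaV
    """
    return (t_mi_w / t_mi_1) * (t_gi / t_ge)

# Zero-bias grid values quoted in the text: T^g_e ~ 0.95, T^g_i ~ 0.67.
# The mesh-transparency ratio of 1.1 is purely a placeholder.
f0 = fom(t_mi_w=1.1 * 0.5, t_mi_1=0.5, t_gi=0.67, t_ge=0.95)

# With the mesh ratio >= 1 and T^g_e <= 1, FoM >= T^g_i, as stated in the text.
assert f0 >= 0.67
```

At $\ensuremath{\Delta V}\xspace=0$ the result sits close to unity, consistent with the statement that an unbiased grid barely changes the \textrm{TPC}\xspace performance.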
Figure~\ref{fig:fom_vs_teg}, showing the \textit{FoM}\xspace as a function of \ensuremath{T^{g}_{e}}\xspace, demonstrates how much the \textrm{IBF}\xspace in a \textrm{TPC}\xspace can be suppressed by introducing the \textrm{BPG}\xspace element into its structure. \begin{figure}[htb] \centering \includegraphics*[width=0.49\textwidth]{h_Ti_Te_vs_Te_NeCF4_90_10_GEM_1_B_list_summary_scaled} \caption{\textit{FoM}\xspace vs. \ensuremath{T^{g}_{e}}\xspace for different magnetic field settings.} \label{fig:fom_vs_teg} \end{figure} As follows from Eq.~\ref{eqn:fom}, point (1,1), indicated in the figure by the crossing of the dashed lines, corresponds to the case when the \textrm{BPG}\xspace is absent in a \textrm{TPC}\xspace. All graphs in the figure start in the vicinity of point (1,1) at $\ensuremath{\Delta V}\xspace=0$. This shows that the \textrm{BPG}\xspace at a constant voltage makes little change to the \textrm{TPC}\xspace performance in any magnetic field. Raising \ensuremath{\Delta V}\xspace on the \textrm{BPG}\xspace wires in a low magnetic field leads to a loss of primary ionization and an increase of the \textrm{IBF}\xspace, which is seen in the figure from the curves remaining above unity even at low \ensuremath{T^{g}_{e}}\xspace. The situation rapidly improves with the increase of the magnetic field, in which the \textrm{BPG}\xspace effectively suppresses the \textrm{IBF}\xspace while keeping most of the primary ionization. Suppression of the \textrm{IBF}\xspace by the \textrm{BPG}\xspace leads to a loss of primary electron ionization and thus deteriorates the \textrm{TPC}\xspace $dE/dx$ resolution~\cite{Abelevetal:2014cna}. In the case of the \textrm{BPG}\xspace, this effect can be estimated in its leading order as a loss of primary electron statistics. Assuming that the relative loss of the $dE/dx$ resolution scales as $1/\sqrt{\ensuremath{T^{g}_{e}}\xspace}$, one can plot it vs. the \textit{FoM}\xspace as shown in Fig.~\ref{fig:dedx_vs_fom}.
\begin{figure}[htb] \centering \includegraphics*[width=0.49\textwidth]{h_Ti_Te_vs_sqrtTe_NeCF4_90_10_GEM_2_B_list_summary_scaled_1} \caption{Dependence of the \textrm{TPC}\xspace $dE/dx$ resolution, proportional to $1/\sqrt{\ensuremath{T^{g}_{e}}\xspace}$, vs. the \textit{FoM}\xspace at high magnetic fields. Values are relative to the case of no \textrm{BPG}\xspace in the detector, corresponding to point (1,1).} \label{fig:dedx_vs_fom} \end{figure} The curve measured at $B=1.2$~T shows that an \textrm{IBF}\xspace suppression by a factor of 5 is achievable at the expense of a $40\%$ deterioration in the $dE/dx$ resolution, and the \textrm{IBF}\xspace can be fully suppressed at the cost of a 55\% resolution loss. If such a \textrm{BPG}\xspace is installed in a \textrm{TPC}\xspace with a $dE/dx$ resolution of 10\% and an \textrm{IBF}\xspace of 2\%, the resulting performance would be given by the product of these numbers: the $\textrm{TPC}\xspace\&\textrm{BPG}\xspace$ configuration would have a $dE/dx$ resolution of 14\% at $\textrm{IBF}\xspace=0.4$\% and no \textrm{IBF}\xspace at a $dE/dx$ resolution of 15.5\%. The trend of the curve measured at $B=1.2$~T suggests further improvement in a higher magnetic field. The Garfield++ toolkit~\cite{Garfield} is used to simulate the \textrm{BPG}\xspace performance. Gas properties are simulated for Ne/CF$_{4}$ (90:10) and Ar/CH$_{4}$ (90:10) to reproduce the experimental conditions. The detector electrodes are modeled using the {\it ComponentAnalyticField} class. Electrons are injected 2~mm above the \textrm{BPG}\xspace and ions originate from a volume of 60~$\mu$m in diameter around the \textrm{anode}\xspace wires. The simulations use the {\it DriftLineRKF} class to calculate the drift lines of the particles.
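The back-of-the-envelope combination quoted above can be reproduced in a few lines. This is a minimal sketch; the input numbers (10\% resolution, 2\% \textrm{IBF}\xspace, a factor-5 suppression at $1/\sqrt{\ensuremath{T^{g}_{e}}\xspace}=1.4$) are the illustrative values from the text, and the helper name is invented:

```python
import math

def combine_with_bpg(dedx_res, ibf, t_ge, fom):
    """Scale a TPC's dE/dx resolution and IBF by the BPG working point.

    dedx_res -- stand-alone dE/dx resolution (fraction)
    ibf      -- stand-alone ion backflow (fraction)
    t_ge     -- grid electron transparency at the working point
    fom      -- figure of merit (the IBF suppression factor)
    """
    # Resolution degrades as 1/sqrt(T^g_e) (loss of primary statistics),
    # while the IBF is multiplied by the FoM.
    return dedx_res / math.sqrt(t_ge), ibf * fom

# Working point from the text: 1/sqrt(T^g_e) = 1.4 (T^g_e ~ 0.51), FoM = 1/5.
res, ibf = combine_with_bpg(0.10, 0.02, t_ge=1.0 / 1.4**2, fom=0.2)
assert abs(res - 0.14) < 1e-9   # 14% dE/dx resolution
assert abs(ibf - 0.004) < 1e-9  # IBF of 0.4%
```

The same function with $1/\sqrt{\ensuremath{T^{g}_{e}}\xspace}=1.55$ and $\textit{FoM}\xspace=0$ gives the fully suppressed case quoted in the text.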
Figure~\ref{fig:mc_to_data} compares the simulation to the measured \begin{figure*}[htb] \centering \includegraphics*[width=0.49\textwidth]{mc_to_data_ArCH4_90_10} \includegraphics*[width=0.49\textwidth]{mc_to_data} \caption{Transparencies as a function of \ensuremath{\Delta V}\xspace compared to the Garfield++ simulation for the Ar/CH$_{4}$ (90:10) (left) and Ne/CF$_{4}$ (90:10) (right) gas mixtures.} \label{fig:mc_to_data} \end{figure*} results for the Ar/CH$_{4}$ gas mixture shown in the left panel and Ne/CF$_{4}$ shown in the right panel. Comparisons are done for settings without a magnetic field and for $B=1.2$~T. In the absence of the magnetic field, the simulation reproduces the electron and ion data within approximately 10\%, comparable to the accuracy of the measurements. In the presence of a magnetic field, the simulation curves for ions remain the same, whereas the data show different shapes. Nevertheless, the point where \ensuremath{T_{i}}\xspace reaches zero is well reproduced by the simulation. For the Ar/CH$_{4}$ gas mixture the simulation curve for \ensuremath{T_{e}}\xspace agrees with the data reasonably well, but for Ne/CF$_{4}$ it shows a significant deviation. \section{Uncertainties of the measurements} \label{sec:errors} The nominal accuracy of the devices used in the measurements plays little role in the final results. These include the precision of the power supplies, gas flow controllers, magnet, measuring devices, etc. The mechanical tolerance of the setup assembly is within hundreds of microns, so that the fields discussed in the paper are known with a typical accuracy of 5\%. The non-uniformity of the gas gain and wire spacing and the impact of the field distortion at the edges were studied by changing the illumination angle of the collimator radiation while keeping a similar counting rate. An approximately 2\% difference was found in the result, which is assigned to the uncertainties.
Detector stability was estimated to contribute up to 5\% uncertainty, which is the difference between two identical measurements performed one month apart, during which the detector was reassembled and the gas mixture was changed more than once. Possible residual space-charge effects in the measurements performed with currents $\ensuremath{i_{c}}\xspace<1$~nA are estimated as 10\% of the difference between the measurements done at $\ensuremath{i_{c}}\xspace=10$~nA and $\ensuremath{i_{c}}\xspace=1$~nA. They contribute up to 2\% at the highest \ensuremath{i_{c}}\xspace in the measurement of \ensuremath{T^{g}_{i}}\xspace. The absolute normalization of the \ensuremath{T^{g}_{e}}\xspace curve explained in Sect.~\ref{sec:charge_measurement} relies on the extrapolation of the curves shown in Fig.~\ref{fig:charges} to zero values. A 3\% uncertainty is added to the result based on the uncertainties in the extrapolation. As follows from the setup shown in Fig.~\ref{fig:setup}, applying \ensuremath{\Delta V}\xspace to the \textrm{BPG}\xspace should not affect the \textrm{mesh}\xspace transparencies as long as the average \textrm{BPG}\xspace potential remains the same. However, a small change, up to 7\% of the measured \ensuremath{T^{m}_{i}}\xspace value given by Eq.~\ref{eqn:tim}, is observed in the experiment. This value is directly assigned to \ensuremath{T^{g}_{i}}\xspace as an uncertainty. The contribution of different sources depends on \ensuremath{\Delta V}\xspace. At the highest transparency values, the uncertainties reach 7.5\%. At transparency values close to zero the systematic uncertainty remains at the level of 0.5\% of the full scale and is approximately symmetric around zero, reflecting the fact that the results are obtained by subtracting measured currents, as explained in Sect.~\ref{sec:mesurements}.
The uncertainties of \ensuremath{T^{g}_{e}}\xspace and \ensuremath{T^{g}_{i}}\xspace are not the same, but they are close in value and partially correlated with each other. To preserve the clarity of the plots, the dependence of the systematic uncertainty on \ensuremath{\Delta V}\xspace is shown as a band around zero. It has to be taken as the uncertainty estimator for the individual curves shown in the figures. \section{Introduction} \label{sec:intro} The time projection chamber (\textrm{TPC}\xspace) was introduced by David Nygren~\cite{Nygren:1974wta} in 1974 and has been successfully used in different particle physics experiments~\cite{Aihara:4332223,Kamae:1986jd,BRAND1989567,ATWOOD1991446,FUCHS1995394,Anderson:2003ur,ALME2010316}. \textrm{TPCs}\xspace have a number of features that make them an attractive technological choice for detectors in high-energy and nuclear physics experiments. Due to their excellent capability to reconstruct the 3D topology of charged particles produced in interactions, \textrm{TPCs}\xspace are widely used in experiments where the measurement of a multi-particle final state is required. Being operated in an external magnetic field, \textrm{TPCs}\xspace provide high-precision momentum measurement of the tracks, down to very low magnitudes. Sampling the energy deposition in the gas working volume gives \textrm{TPCs}\xspace particle identification capabilities. The use of gas as a working medium makes \textrm{TPCs}\xspace low-radiation-length detectors that are easily combined with detectors based on other technologies, as required in most modern experiments. Last but not least, \textrm{TPCs}\xspace are relatively inexpensive devices. The combination of these features keeps the \textrm{TPC}\xspace a widely used detector technology more than 45 years after its introduction.
Together with the advantages, the main drawback of \textrm{TPCs}\xspace is the low data-taking rate, which is a severe constraint on the use of \textrm{TPCs}\xspace in modern experiments requiring high data-taking rates. Among the several factors that affect the rates, the most difficult to overcome is the space charge that builds up in the \textrm{TPC}\xspace volume and distorts the drift of the primary ionization. Charges in the \textrm{TPC}\xspace volume are carried by slow-moving ions produced in the readout elements of the \textrm{TPC}\xspace. This is known as the positive ion backflow (\textrm{IBF}\xspace) problem. To address the \textrm{IBF}\xspace problem, the first \textrm{TPC}\xspace built in 1984~\cite{Aihara:4332223} used a plane of wires called the bipolar gating grid (\textrm{BPG}\xspace) separating the \textrm{TPC}\xspace readout elements from the drift volume. Applying positive and negative bias voltages to the odd and even wires of the grid stops the ion and electron flow through the \textrm{BPG}\xspace. \textrm{TPCs}\xspace developed in recent years~\cite{Ball:2012xh,Abelevetal:2014cna,COLAS2004226} adopt the concept of the amplification element also serving as the \textrm{IBF}\xspace stopper. Multiple-layer micropattern detectors used as amplification elements are capable of trapping ions between their layers~\cite{SAULI2006269,1312013,BONDAR2003325,BLATT2006155,Chechik:2003hx,5658080,Bohmer:2012wd,Aiola:2016rld,WANG2019410}. Nevertheless, most of the large \textrm{TPCs}\xspace built by the present time rely on the \textrm{BPG}\xspace to suppress the \textrm{IBF}\xspace~\cite{Hilke_2010}. A \textrm{BPG}\xspace can be operated in synchronous and passive modes~\cite{AMENDOLIA198547,AMENDOLIA1986403}. The former implies that the voltage bias on the wires is synchronized with an external trigger. The duration and the frequency of the pulses ensure that all ions are collected on the \textrm{BPG}\xspace.
It also results in stopping the electrons going through the \textrm{BPG}\xspace, producing a dead time in the system. In the presence of the magnetic field, the voltages required to stop electrons are higher than those required to stop the ions, which allows the \textrm{BPG}\xspace to retain some electron transparency when the ion current is fully shut off. This opens the possibility to operate the \textrm{BPG}\xspace in passive mode, with constant biases on the odd and even wires. Achieving high data-taking rates in a \textrm{TPC}\xspace operated in passive mode is much easier. All \textrm{TPCs}\xspace built by large particle physics experiments up to the present time have used the \textrm{BPG}\xspace in synchronous mode, although passive mode was also considered for the detectors in magnetic fields above 1~T~\cite{Anderson:2003ur,ALME2010316}. The principle of the \textrm{BPG}\xspace operation in passive mode is based on the effect that, in the presence of magnetic and electric fields, the direction of the electron drift has a component along the vector product of the two fields. The electron drift in this direction is described by the Lorentz angle, which is explained in many works~\cite{Hilke_2010,Kent:1984ha,Sauli:1992bu,Blum:1105920} and therefore is not elaborated on here. This paper provides a detailed study of the \textrm{BPG}\xspace transparency for electrons and ions in different gas mixtures in the presence of a magnetic field. The results show that a \textrm{BPG}\xspace operated in passive mode can be used as an effective element to suppress the \textrm{IBF}\xspace in \textrm{TPCs}\xspace operated in a strong magnetic field, for example in the sPHENIX \textrm{TPC}\xspace~\cite{Adare:2015kwa,sPHENIX:2015irh}. \section{Measurements} \label{sec:mesurements} \subsection{The setup} \label{sec:setup} \begin{figure*}[ht] \centering \includegraphics*[width=\textwidth]{setup} \caption{Schematic view of the experimental setup.
Colored (online) lines represent ion (left) and electron (right) drift trajectories, respectively.} \label{fig:setup} \end{figure*} The setup built at the Weizmann Institute consists of the \textrm{BPG}\xspace sandwiched between an ion-generating plane and an ion-receiving cathode, immersed in a gas volume. A schematic view of the setup is shown in Fig.~\ref{fig:setup}. All frames used in the setup have a working area of $27\times27$ mm$^{2}$. The primary ionization is produced by the $^{55}\mathrm{Fe}$\xspace source positioned above the gas volume. At the time of the measurements, the source intensity was approximately 3.5~mCi. The amount of gamma radiation illuminating the detector volume is controlled by a collimator. X-rays from the source enter the gas volume through a \textrm{cathode}\xspace electrode built of a thin \textrm{GEM}\xspace electrically connected on both sides and illuminate the entire detector. The volume between the \textrm{cathode}\xspace and the \textrm{BPG}\xspace has a vertical dimension of 12~mm. Photons converting in this volume produce primary ionization that drifts towards the \textrm{BPG}\xspace. The \textrm{BPG}\xspace is built of 50~$\mu$m wires spaced by 1~mm. The odd and even wires of the \textrm{BPG}\xspace are set to voltages of $V_{g}\pm\ensuremath{\Delta V}\xspace$ fed from opposite sides of the frame. Thus the adjacent wires have a voltage difference of $2\ensuremath{\Delta V}\xspace$. The \textrm{mesh}\xspace electrode is located 4~mm below the \textrm{BPG}\xspace. It is made of a stainless-steel mesh with $0.5\times0.5$~mm$^{2}$ cells and a wire diameter of 50~$\mu$m, providing $\sim$80\% optical transparency. Electrons passing the \textrm{BPG}\xspace, and those coming from conversions inside the 4~mm space, enter the \textrm{mesh}\xspace electrode.
A collimator (not shown in the figure) that immediately follows the \textrm{mesh}\xspace is made of thin dielectric material and limits the working area of the detector to a circle of 20~mm in diameter. The collimator eliminates the edge effects and increases the lateral uniformity of the ionization flux. The wire plane located 3~mm below the \textrm{mesh}\xspace is made of 50~$\mu$m Cu/Rh wires spaced by 2.5~mm. Voltages are applied to the wires on opposite sides of the frame. Field wires are grounded and \textrm{anode}\xspace wires are set to 1.7--2.1~kV to provide the desired gas gain depending on the gas mixture. To reduce parasitic currents flowing between \textrm{field}\xspace and \textrm{anode}\xspace wires, grooves are made in the FR4 material of the frames holding the wires. Wires and grooves are covered with epoxy. During the assembly, the \textrm{anode}\xspace and \textrm{field}\xspace wires on the wire plane are oriented orthogonally to the \textrm{BPG}\xspace wires. This plane is located 3~mm above the \textrm{pad}\xspace plane, which is grounded. Electrons from the conversion of the $^{55}\mathrm{Fe}$\xspace photons that occur in this volume reach the \textrm{anode}\xspace wires without passing through any other element in the setup. The setup, shown in Fig.~\ref{fig:setup}, is assembled in a $15\times15\times3$ cm$^{3}$ dielectric box and covered with a copper foil on the outside for electrical grounding. The vertical size of the box is constrained by the dimension of the magnet bore, in which a dipole field of up to 1.2~T is generated by a magnet produced by Danphysik GGG. The field is controlled by a Group3 DTM-151 tesla meter with an MPT-141 probe providing 0.012\% accuracy. The HV is supplied by CAEN N471 and Lambda Z$^{+}$ 320 power supplies through low-pass filters with $RC\approx2$~s. All conductive elements in the setup are read out by floating picoammeters connected to the computer via optical links.
Picoammeters are produced by PicoLogic J.D.O.O. in Zagreb~\cite{UTROBICIC201521}. In the working regime, the parasitic currents in the measured channels, averaged over 1~s, are in the 5--80~pA range at the highest \textrm{anode}\xspace voltages. Signals from the \textrm{pad}\xspace electrode can be switched to the charge measuring channel, consisting of an Ortec 142 IH charge sensitive preamplifier followed by an Ortec 672 shaping amplifier, read out by an Ortec multichannel analyzer (MCA), Ametek Easy-MCA 2000. The gas mixtures are prepared in a gas mixing station using calibrated mass flow controllers (Aalborg GFCS). Flow controllers are calibrated by the water displacement method after each change of mixed gases. The accuracy of the quenching fraction in the gas mixture is $\pm$3.5\% for (90:10) gas mixtures and $\pm$1.5\% for (50:50) gas mixtures. Gas flow is set to provide a detector volume exchange every 10 minutes, preventing possible outgassing from the structural elements of the setup into the working gas atmosphere. The gas used in the measurement comes in bottles with purity $>99.99$\% and is not recirculated during the measurements. \subsection{Definitions of transparency} \label{sec:method} The measurements are carried out at a sufficiently high gain on the \textrm{anode}\xspace wires such that the contribution of the primary ionization to the currents ($i$) can be neglected. Then \begin{equation} -\ensuremath{i_{a}}\xspace=\ensuremath{i_{c}}\xspace+\ensuremath{i_{g}}\xspace+\ensuremath{i_{m}}\xspace+\ensuremath{i_{f}}\xspace+\ensuremath{i_{p}}\xspace \label{eq:currents} \end{equation} is fulfilled to within the accuracy of the picoammeters.
Currents in the equation correspond to the \textrm{cathode}\xspace~(\ensuremath{i_{c}}\xspace), \textrm{BPG}\xspace~(\ensuremath{i_{g}}\xspace), \textrm{mesh}\xspace~(\ensuremath{i_{m}}\xspace), \textrm{field}\xspace~(\ensuremath{i_{f}}\xspace), \textrm{pad}\xspace~(\ensuremath{i_{p}}\xspace), and \textrm{anode}\xspace~(\ensuremath{i_{a}}\xspace) electrodes, respectively, as shown in Fig.~\ref{fig:setup}. The \textrm{BPG}\xspace is considered here as a standalone element in an arbitrary \textrm{TPC}\xspace detector. For electrons and ions traversing the \textrm{BPG}\xspace, its impact can be characterized by transparency parameters denoted as \ensuremath{T^{g}_{e}}\xspace and \ensuremath{T^{g}_{i}}\xspace, respectively. From the setup shown in Fig.~\ref{fig:setup} and Eq.~\eqref{eq:currents}, \begin{eqnarray} \ensuremath{T^{g}_{i}}\xspace=\frac{\ensuremath{i_{c}}\xspace}{-\ensuremath{i_{a}}\xspace-\ensuremath{i_{p}}\xspace-\ensuremath{i_{m}}\xspace-\ensuremath{i_{f}}\xspace}. \label{eqn:tig} \end{eqnarray} This is the ion current reaching the \textrm{cathode}\xspace above the \textrm{BPG}\xspace divided by the ion current that flows into the \textrm{BPG}\xspace, {\it i.e.} the ion current emerging from the \textrm{anode}\xspace wires less the currents in the \textrm{pad}\xspace, \textrm{field}\xspace, and \textrm{mesh}\xspace electrodes. Analogously, one can also define the ion transparency of the \textrm{mesh}\xspace electrode: \begin{eqnarray} \ensuremath{T^{m}_{i}}\xspace=\frac{\ensuremath{i_{c}}\xspace+\ensuremath{i_{g}}\xspace}{-\ensuremath{i_{a}}\xspace-\ensuremath{i_{p}}\xspace-\ensuremath{i_{f}}\xspace}. \label{eqn:tim} \end{eqnarray} \ensuremath{T^{g}_{e}}\xspace cannot be defined as a ratio of currents, because the electron components in all currents, except in \ensuremath{i_{a}}\xspace, are negligibly small. The value of \ensuremath{i_{a}}\xspace depends on the amount of initial ionization in all parts of the setup.
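The transparency definitions in Eqs.~\eqref{eqn:tig} and \eqref{eqn:tim} reduce to simple ratios of electrode currents. As an illustration only (not part of the measurement chain, and with made-up current values), they can be evaluated as:

```python
def ion_transparencies(i_a, i_c, i_g, i_m, i_f, i_p):
    """BPG and mesh ion transparencies from the electrode currents,
    following Eqs. (2) and (3); with the sign convention of Eq. (1),
    i_a is negative while the ion currents are positive."""
    t_g = i_c / (-i_a - i_p - i_m - i_f)          # Eq. (2)
    t_m = (i_c + i_g) / (-i_a - i_p - i_f)        # Eq. (3)
    return t_g, t_m

# Hypothetical currents (arbitrary units) satisfying Eq. (1):
t_g, t_m = ion_transparencies(i_a=-1.0, i_c=0.3, i_g=0.2,
                              i_m=0.1, i_f=0.1, i_p=0.3)
```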
Neglecting electron attachment, electrons from photon conversions in the gas arrive at the \textrm{anode}\xspace wires unless they are captured by the \textrm{BPG}\xspace or the \textrm{mesh}\xspace. By changing \ensuremath{\Delta V}\xspace on the \textrm{BPG}\xspace one can suppress or fully block electrons coming from the detector volume above the \textrm{BPG}\xspace. This consideration leads to Eq.~\eqref{eqn:shape}, which involves the ratio of the \textrm{anode}\xspace current, coming from everywhere in the setup, to the current coming from below the \textrm{BPG}\xspace. \ensuremath{T^{g}_{e}}\xspace can be deduced from the shape of \ensuremath{i_{a}}\xspace measured as a function of \ensuremath{\Delta V}\xspace: \begin{eqnarray} \frac{\ensuremath{i_{a}}\xspace(\ensuremath{\Delta V}\xspace)}{\ensuremath{i_{a}}\xspace(\ensuremath{E_{d}}\xspace=0)} = 1 + K\ensuremath{T^{m}_{e}}\xspace\ensuremath{T^{g}_{e}}\xspace(\ensuremath{\Delta V}\xspace). \label{eqn:shape} \end{eqnarray} The constant $K$ in the equation is the relative amount of primary ionization generated above and below the \textrm{BPG}\xspace and amplified on the \textrm{anode}\xspace wires. $\ensuremath{T^{m}_{e}}\xspace$ is the \textrm{mesh}\xspace transparency to electrons. To first order, these coefficients do not depend on \ensuremath{\Delta V}\xspace, since the electric fields around the \textrm{mesh}\xspace are not affected by the voltages between the \textrm{BPG}\xspace wires, see Fig.~\ref{fig:setup}. The \textrm{anode}\xspace current on the left-hand side of the equation is divided by the current measured when electrons from above are not flowing to the \textrm{BPG}\xspace, $\ensuremath{i_{a}}\xspace(\ensuremath{E_{d}}\xspace=0)$. It was verified experimentally that the same current is measured in the \textrm{anode}\xspace wires when the \textrm{BPG}\xspace is fully closed to primary electrons; in this case, the reduced \ensuremath{i_{a}}\xspace is equal to unity.
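Inverting Eq.~\eqref{eqn:shape} gives the unnormalized shape of \ensuremath{T^{g}_{e}}\xspace directly from the measured \textrm{anode}\xspace-current scan. A minimal sketch (illustrative only; it works with current magnitudes and normalizes the curve to its $\ensuremath{\Delta V}\xspace=0$ point, since the product $K\ensuremath{T^{m}_{e}}\xspace$ is not known at this stage):

```python
def te_shape(i_a_scan, i_a_closed):
    """Relative (unnormalized) BPG electron transparency vs. Delta V,
    from inverting Eq. (4): K * T_e^m * T_e^g = i_a / i_a(E_d = 0) - 1.
    The unknown overall factor K * T_e^m is removed by normalizing
    the curve to its first point (Delta V = 0).
    Currents are taken as magnitudes here."""
    shape = [i / i_a_closed - 1.0 for i in i_a_scan]
    return [s / shape[0] for s in shape]

# Hypothetical scan: anode current drops to the fully-closed value.
rel = te_shape([3.0, 2.0, 1.0], i_a_closed=1.0)
```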
Thus, the shape of the $\ensuremath{T^{g}_{e}}\xspace$ dependence can be extracted by measuring $\ensuremath{i_{a}}\xspace(\ensuremath{\Delta V}\xspace)$ and using Eq.~\eqref{eqn:shape}. Determining the absolute normalization of \ensuremath{T^{g}_{e}}\xspace from Eq.~\eqref{eqn:shape} requires precise knowledge of the coefficient $K$, which in turn depends on the geometry of all elements in the setup. Instead, for the final results, the absolute normalization of \ensuremath{T^{g}_{e}}\xspace was worked out as follows. Charge distributions from the $^{55}\mathrm{Fe}$\xspace ionization source are measured in the \textrm{pad}\xspace electrode for three different cases: ionization arriving at the \textrm{anode}\xspace directly; ionization reaching the \textrm{anode}\xspace through the \textrm{mesh}\xspace; and ionization reaching the \textrm{anode}\xspace through both the \textrm{mesh}\xspace and the \textrm{BPG}\xspace. These distributions are obtained by setting the drift field (\ensuremath{E_{d}}\xspace) between the \textrm{cathode}\xspace and the \textrm{BPG}\xspace and the transfer field (\ensuremath{E_{t}}\xspace) between the \textrm{BPG}\xspace and the \textrm{mesh}\xspace to their nominal values or zero. The three distributions are shown with lines in the left panel of Fig.~\ref{fig:charges}; the corresponding field settings are given in the figure legend. \begin{figure*}[htb!] \centering \includegraphics[width=0.49\textwidth]{h_NeCF4_90_10_B0_Vg10_7_1mm_notSub_fill} \includegraphics[width=0.49\textwidth]{h_Te_NeCF4_90_10_GEM_1_B0_list_scale} \caption{Left: Charge distributions measured in the \textrm{pad}\xspace electrode at different field configurations. Right: \ensuremath{T^{g}_{e}}\xspace as a function of \ensuremath{\Delta V}\xspace calculated from the currents and the charge measurements.
The magnitude of the current measurement at $\ensuremath{\Delta V}\xspace=0$ is set to the value of the charge measurement and therefore is not shown.} \label{fig:charges} \end{figure*} Primary ionization from the photons converting above the \textrm{BPG}\xspace drifts through the \textrm{BPG}\xspace and the \textrm{mesh}\xspace and therefore is attenuated by the factor (\ensuremath{T^{g}_{e}}\xspace\ensuremath{T^{m}_{e}}\xspace). Primary ionization from the photons converting between the \textrm{BPG}\xspace and the \textrm{mesh}\xspace is attenuated by the factor \ensuremath{T^{m}_{e}}\xspace only. Therefore, by subtracting the distributions, one can compare the ionization before and after the \textrm{BPG}\xspace. The ratio of the mean values of these two distributions, after extrapolating them to zero, provides the absolute normalization of \ensuremath{T^{g}_{e}}\xspace. The right panel of Fig.~\ref{fig:charges} shows the result of this measurement with filled markers. The results are compared to the current measurement based on Eq.~\eqref{eqn:shape}, shown with open markers. All points in the current measurement are multiplied by the same factor such that the first point, at $\ensuremath{\Delta V}\xspace=0$, takes the value of the charge measurement. After this scaling, both curves agree in the region $\ensuremath{\Delta V}\xspace<20$~V. Above 20~V the charge measurement becomes difficult because the peaks shown in the left panel of the figure disappear and the distributions shift towards zero. Results presented in this paper are based on the measurements deduced from the currents and normalized using the charges. \subsection{Source intensity} \label{sec:spece-charge} The amount of ionization let into the system by the collimator is chosen as a compromise.
Lowering the currents to the few-pA level requires extending the measurement cycle to hours and brings smaller effects into consideration, such as control over detector stability, better knowledge of the baseline values, additional control over low-frequency micro discharges, etc. Raising the currents results in the build-up of space charge in the setup, which alters the electric fields around the \textrm{BPG}\xspace. Although the space charge problem is typically associated with much larger detectors, the setup shown in Fig.~\ref{fig:setup}, with its working area of approximately 3~cm$^2$ and relatively short gaps, operates at much higher current densities than most larger detectors. The space charge effects are studied with the \ensuremath{T^{g}_{i}}\xspace curve measured at different currents, obtained by attenuating the $^{55}\mathrm{Fe}$\xspace source with the collimator and by lowering the gas gain. The results are shown in Fig.~\ref{fig:current_test}. Current ranges are characterized by the \textrm{cathode}\xspace current $\ensuremath{i_{c}}\xspace(\ensuremath{\Delta V}\xspace=0)$, typically the largest current used to produce the corresponding curve. \begin{figure}[htb] \centering \includegraphics*[width=0.49\textwidth]{current_test} \caption{\ensuremath{T^{g}_{i}}\xspace as a function of \ensuremath{\Delta V}\xspace measured using different current ranges indicated with \ensuremath{i_{c}}\xspace(\ensuremath{\Delta V}\xspace$=0$) in the legend.} \label{fig:current_test} \end{figure} The study shows that measurements made with currents below 500~pA are hard to reproduce and are considered unstable, whereas in the range above 1~nA the space-charge effects start to develop at $\ensuremath{\Delta V}\xspace=0$. Thus, for the results presented in this paper, the currents in the \textrm{cathode}\xspace are kept below 1~nA to avoid space-charge effects, while staying well above the unstable low-current range.
\subsection{Measurement procedure} \label{sec:proc} \subsubsection{General settings} All measurements follow the standard procedure. The gas flow is set to approximately 30~cm$^{3}$/min and the detector is flushed for over 1~h, corresponding to more than 10 exchanges of the detector and tubing volumes. The magnetic field is set to the desired value. Voltages are set on the \textrm{anode}\xspace, \textrm{cathode}\xspace, and both types of the \textrm{BPG}\xspace wires. Pad, \textrm{field}\xspace, and \textrm{mesh}\xspace electrodes remain grounded. The gas gain is set to the nominal value of 3500, measured by the position of the ionization peak in the \textrm{pad}\xspace. The collimator is adjusted so that the \ensuremath{i_{c}}\xspace current produced by the $^{55}\mathrm{Fe}$\xspace source is close to 1~nA. The detector is operated in this state for approximately 30~min, after which the settings are adjusted further if needed. The following three measurements are performed in each gas mixture at each magnetic field magnitude and combined to produce the final results reported in this paper. \subsubsection{Transfer field scan} \label{sec:et_measurement} The \textrm{BPG}\xspace transparency to ions and electrons strongly depends on the magnitudes of \ensuremath{E_{d}}\xspace and \ensuremath{E_{t}}\xspace, whose choice is closely related to the properties of the gas and many other considerations~\cite{ALME2010316,603731}, including the Lorentz angle. Since optimization of the gas mixture and \ensuremath{E_{d}}\xspace is beyond the scope of this paper, comparative studies of the \textrm{BPG}\xspace performance are made by measuring different gases in the same field configuration, called ``main'', in which \ensuremath{E_{d}}\xspace is kept at a constant value of 320~V/cm in all measurements.
To find the dependence on \ensuremath{E_{t}}\xspace, the voltages on the \textrm{cathode}\xspace and \textrm{BPG}\xspace are set to values that provide $\ensuremath{E_{d}}\xspace=320$~V/cm and $\ensuremath{E_{t}}\xspace=0.5\ensuremath{E_{d}}\xspace =160$~V/cm. The two voltages are then increased in steps such that \ensuremath{E_{t}}\xspace is incremented by $0.125\ensuremath{E_{d}}\xspace$ until it reaches $2.5\ensuremath{E_{d}}\xspace$. After each voltage change, the detector is left unchanged for a waiting time of 5~min, and the measurement, averaged over 1~min, is then taken. Results of the \ensuremath{E_{t}}\xspace scan are shown in Fig.~\ref{fig:edscan}. \begin{figure}[htb] \centering \includegraphics*[width=0.49\textwidth]{EtEd_NeCF4_9010} \caption{Transparencies as a function of $\ensuremath{E_{t}}\xspace/\ensuremath{E_{d}}\xspace$ for $\ensuremath{E_{d}}\xspace=320$~V/cm. \ensuremath{T_{e}}\xspace is the product of $\ensuremath{T^{g}_{e}}\xspace\ensuremath{T^{m}_{e}}\xspace$ normalized to unity at maximum.} \label{fig:edscan} \end{figure} The electron transparency curve \ensuremath{T_{e}}\xspace, shown with circles, is the product $(\ensuremath{T^{g}_{e}}\xspace\ensuremath{T^{m}_{e}}\xspace)$. Since the electric field below the \textrm{mesh}\xspace is defined by the voltage on the \textrm{anode}\xspace wires and is much stronger than \ensuremath{E_{t}}\xspace, \ensuremath{T^{m}_{e}}\xspace is high and therefore does not strongly depend on \ensuremath{E_{t}}\xspace, as long as \ensuremath{E_{t}}\xspace is low. \ensuremath{T_{e}}\xspace rises with increasing \ensuremath{E_{t}}\xspace and reaches a maximum around $\ensuremath{E_{t}}\xspace=1.5\ensuremath{E_{d}}\xspace$; a small decrease in \ensuremath{T_{e}}\xspace above $\ensuremath{E_{t}}\xspace/\ensuremath{E_{d}}\xspace>1.5$ seen in the figure may indicate a departure from this regime. \ensuremath{T^{g}_{i}}\xspace, shown with crosses, steadily decreases with increasing \ensuremath{E_{t}}\xspace.
This, however, is offset by an increase of \ensuremath{T^{m}_{i}}\xspace, shown in the plot with diamonds. Analogous effects would also be present in a real detector, in which \ensuremath{E_{t}}\xspace, coupled to the \textrm{TPC}\xspace amplification plane below the \textrm{BPG}\xspace, would extract ions into the drift volume of the detector. Those two effects nearly cancel each other above $\ensuremath{E_{t}}\xspace = \ensuremath{E_{d}}\xspace$, as shown in Fig.~\ref{fig:edscan} with square symbols, which are the product of (\ensuremath{T^{g}_{i}}\xspace\ensuremath{T^{m}_{i}}\xspace). As a result of this study, the working setting is chosen to be $\ensuremath{E_{t}}\xspace/\ensuremath{E_{d}}\xspace = 1.5$, {\it i.e.} $\ensuremath{E_{t}}\xspace = 480$~V/cm. Since the results measured in different gases are comparable, and following the decision to use a constant \ensuremath{E_{d}}\xspace, the same \ensuremath{E_{t}}\xspace value is used in all measurements to facilitate comparisons between different gas mixtures. \subsubsection{Charge measurement} \label{sec:charge_measurement} The \textrm{pad}\xspace electrode is connected to the charge measurement line. To keep the MCA dead time below 5\%, the collimator is adjusted to provide a counting rate in the detector below $10^5$~s$^{-1}$. Three measurements are taken with fields set to: \begin{enumerate} \item $\ensuremath{E_{t}}\xspace = \ensuremath{E_{d}}\xspace = 0$, \item $\ensuremath{E_{d}}\xspace = 0$, $\ensuremath{E_{t}}\xspace = 480$~V/cm, \item $\ensuremath{E_{d}}\xspace = 320$~V/cm, $\ensuremath{E_{t}}\xspace = 480$~V/cm. \end{enumerate} The data-taking time for each measurement is 5~min, which is sufficient to collect adequate statistics. The results of this measurement are shown in the left panel of Fig.~\ref{fig:charges} and are used for the absolute normalization of \ensuremath{T^{g}_{e}}\xspace.
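The transfer-field scan of Sect.~\ref{sec:et_measurement} steps \ensuremath{E_{t}}\xspace from $0.5\ensuremath{E_{d}}\xspace$ to $2.5\ensuremath{E_{d}}\xspace$ in increments of $0.125\ensuremath{E_{d}}\xspace$; the corresponding grid of field settings can be generated as follows (an illustrative sketch, not part of the data-acquisition code):

```python
def et_scan_settings(e_d=320.0, start=0.5, stop=2.5, step=0.125):
    """Transfer-field values (V/cm) for the E_t scan: E_t runs from
    start*E_d to stop*E_d in increments of step*E_d, with the drift
    field E_d fixed (320 V/cm in the 'main' configuration)."""
    n = int(round((stop - start) / step)) + 1
    return [round((start + k * step) * e_d, 3) for k in range(n)]

grid = et_scan_settings()  # 17 settings from 160 to 800 V/cm
```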
\subsubsection{\textrm{BPG}\xspace voltage scan} For this measurement the \textrm{pad}\xspace electrode is reconnected to the picoammeter, the \textrm{cathode}\xspace and \textrm{BPG}\xspace are kept at the settings used for the last measurement in Sect.~\ref{sec:charge_measurement}, and the collimator is returned to its previous setting. The measurements are taken for \ensuremath{\Delta V}\xspace rising from 0 to 80~V in 2~V increments. The picoammeter values are averaged over 60~s after a 10~s waiting time following each change in the voltage settings. Results of this measurement for $\ensuremath{T^{g}_{e}}\xspace$ are shown in the right panel of Fig.~\ref{fig:charges} and for $\ensuremath{T^{g}_{i}}\xspace$ in Fig.~\ref{fig:current_test}.
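The scan logic described above (2~V steps, a 10~s settling time, 60~s averaging) can be sketched as follows; \texttt{read\_current} and \texttt{set\_voltage} are hypothetical callbacks standing in for the real picoammeter and power-supply interfaces:

```python
import time

def bpg_voltage_scan(read_current, set_voltage, settle_s=10, avg_s=60):
    """Sketch of the BPG voltage scan: Delta V from 0 to 80 V in 2 V
    steps; after each change, wait settle_s seconds, then average the
    picoammeter reading over avg_s samples (~1 reading per second).
    Returns {Delta V: averaged current}."""
    results = {}
    for dv in range(0, 81, 2):
        set_voltage(dv)
        time.sleep(settle_s)
        samples = [read_current() for _ in range(avg_s)]
        results[dv] = sum(samples) / len(samples)
    return results
```

The dictionary of averaged currents would then feed directly into the shape extraction of Eq.~\eqref{eqn:shape}.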
\section{Introduction} The hidden economy is perceived by tax authorities around the world, such as the Australian Taxation Office (ATO), as containing businesses that intentionally hide their income to avoid paying the right amount of taxes, primarily by not recording or reporting all cash or electronic transactions; tax evasion refers to the illegal activity of an individual or entity deliberately evading a true tax liability \cite{ATO2016}. According to the ATO, the hidden economy is most common in the small business segment, where about 1.6 million small businesses operating across 233 industries are more likely to receive cash on a regular basis \cite{ATO2016}. Those caught evading taxes are usually subject to criminal charges and substantial penalties \cite{Skinner1985}. Social media platforms such as Facebook, Twitter and Instagram serve billions of users by providing convenient means of communication, content sharing and even payment between different users, allowing social e-commerce activities to occur. Due to their convenient and anarchic nature, it is easy for unregistered market participants to promote and conduct business activities without paying taxes. These social e-commerce activities have become a worldwide phenomenon on an industrial scale, and the total transaction volume is significant. For example, Thailand was reported to have the world's largest social e-commerce market in 2017, where 51 percent of online shoppers bought products via social media platforms, with a total value that more than doubled to 334.2 billion (\$10.92 billion) \cite{PYMNTS.com2019}. According to a McKinsey report \cite{McKinseyCompany2018}, Indonesia's e-commerce spending is at least \$8 billion a year, driven by 30 million shoppers. Among them, transactions over social media platforms constituted more than \$3 billion, accounting for around one third of the e-commerce market, which is projected to reach \$55--65 billion by 2022.
Due to the concealment through text and visual content, and the social nature of social media platforms, solicited transactions are difficult for tax authorities to detect, representing the online form of transaction-based tax evasion. Tax leakage from the resulting digitalized hidden economy is significant \cite{Gaspareniene2017} and should be studied. For example, it was reported \cite{McCauley2016} that there is serious transaction-based tax evasion from cross-border online goods sales via social media platforms between China and Australia, resulting in tax losses in both countries: ``up to AUD \$1 Billion in undeclared taxable income may be slipping through the net, leaving a potential tax bill in the hundreds of millions.'' The advancement of machine learning, especially Deep Neural Networks (DNN), provides powerful tools to analyze the textual and visual content on social media. Many exciting applications have been developed with the aim of identifying and combating illegal activities on social media, such as cyberbullying \cite{Agrawal2018}, counterfeit products \cite{Cheung2018}, substance use \cite{Hassanpour2019}, and drug dealing \cite{Li2019}. Compared to other applications, detecting transaction-based tax evasion activities is a unique and challenging problem. Unlike illegal substances/goods, goods sold through tax evasion transactions could be perfectly legal if the transaction happened through a registered channel. Therefore, they cannot be easily identified by simple product keywords/images. Moreover, tax evasion transactions are a mixture of different product sources and selling strategies, resulting in greater varieties of textual and visual content than in most applications. To tackle the above challenges, we built a DNN-based Regtech tool to automatically detect transaction-based tax evasion activities based on a dataset we collected from Instagram.
The contributions of the tool are two-fold: \begin{itemize} \item We collected a dataset of 58,660 Instagram posts related to \#lipstick. We then manually labeled 2,081 sampled posts with multiple properties. The dataset provides a solid baseline to understand sales and hidden economy activities on Instagram, and facilitates the development of detection models. \item We developed a Regtech tool to automatically detect suspicious posts of transaction-based tax evasion activities on social media platforms. The tool utilizes a multi-modal DNN model, which combines comments, hashtags and image modalities with state-of-the-art language and image networks. Our experiments show that the combined model outperforms any single-modality model, achieving an AUC of 0.808 and an F1 score of 0.762. The tool could greatly increase the effective detection rate and save enforcement costs for tax authorities. \end{itemize} \section{Related Work} It is difficult for governments to detect transactions in the hidden economy. The associated tax evasion has been a long-standing topic studied by tax administration and compliance scholars \cite{Rogoff2017}. Since transaction-based tax evasion activities moved onto social media platforms, traditional tax auditing methods can no longer be implemented effectively by tax authorities, and little research has been done. The main focus of tax authorities on social media platforms is to detect taxpayers who evade income tax. They aim to catch taxpayers directly; that is, they seek to detect whether taxpayers honestly declared the income sourced from their social media accounts \cite{Williamson2016}. Each taxpayer is one detection point. In Australia, the ATO has hired a team of data mining experts to look at platforms like Facebook and Instagram to see if the income reported by taxpayers matches up with their actual revenue \cite{Williamson2016}.
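For reference, the AUC and F1 metrics quoted in the contributions can be computed from binary labels and model scores without any external libraries. A minimal sketch (illustrative only, not the evaluation code used for the reported numbers):

```python
def auc_score(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability
    that a random positive is scored above a random negative,
    counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1_score(labels, preds):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

In practice the same quantities are available from standard libraries such as scikit-learn.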
Transaction-based tax evasion differs from income tax evasion: the taxes being evaded are transaction taxes, such as the GST, and each transaction is a stand-alone detection point, giving rise to many more detection points than the number of taxpayers. The Ugandan government has taken advantage of technology and is collecting a social media tax based on the taxpayer's daily use of the platform, but this fails to take transaction activities into account~\cite{Ratcliffe2019}. As social media becomes increasingly multi-modal and unregistered selling activities become more sophisticated, it is essential to assess both textual (e.g., hashtags and comments) and visual (e.g., image and video) content to decide if a post on social media has the intention to facilitate tax evasion transactions. A successful automatic detection system needs to handle both textual and visual information in a robust and efficient manner. Thankfully, with the recent development of DNN methods, we have seen exciting breakthroughs in many computer vision (CV) and natural language processing (NLP) tasks. In CV, from AlexNet \cite{Krizhevsky2012}, GoogLeNet \cite{Szegedy2015}, ResNet \cite{He2016} to EfficientNet \cite{Tan2019}, deep convolutional neural networks (DCNN) are becoming more powerful in terms of accuracy, scalability and efficiency. Similarly, NLP models evolved from word embeddings (Word2Vec \cite{Mikolov2013}) and contextual word embeddings (ELMo \cite{Peters2018}) to transformers (BERT \cite{Devlin2018}, XLNet \cite{Yang2019}), increasing in sophistication. Many existing works analyze textual and visual content on social media platforms using DNN models. In \cite{Agrawal2018}, the authors proposed to detect cyberbullying on Instagram using a variety of textual and visual features, including Word2Vec features from comments and DCNN features from images. In \cite{Cheung2018}, a DCNN model is used to discover counterfeit sellers on two social media platforms based on shared images.
In \cite{Hassanpour2019}, a DCNN for images and a long short-term memory (LSTM) network \cite{Hochreiter1997} for text are used to extract predictive features from Instagram posts to assess substance use risk. In \cite{Li2019}, LSTM models are used to extract features from comments and hashtags; the dual-modal features are then combined to detect illicit drug dealing activities. In \cite{Zhang2018}, the authors proposed a framework to predict post popularity using both image and text data from the posts by a user. The majority of existing works process contents of different modalities with separate NLP/CV models, then combine the features using concatenating, stacking or embedding layers to form an end-to-end DNN model. We developed our multi-modal DNN model following the same idea, using recent NLP/CV models: adapter-BERT \cite{Houlsby2019} for hashtags and comments, and EfficientNet \cite{Tan2019} for images. The aim is to have a modularized structure with the flexibility to change the processing model for each modality, and the extensibility to incorporate more modalities, while keeping the whole network trainable end-to-end. \section{The Instagram Dataset} \begin{figure*} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{images/figure-1-a} \caption{A post (\url{https://www.instagram.com/p/B2uFLpmnXLZ/}) not relevant to \#lipstick with no intention to sell. This post is not related to tax evasion activities.} \label{fig:posts:a} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{images/figure-1-b} \caption{A post (\url{https://www.instagram.com/p/B2Y9FItjKxZ/}) not relevant to \#lipstick, but a Daigou is trying to sell a soap product.
This post is related to tax evasion activities as the poster is using Instagram to generate unregistered transactions.} \label{fig:posts:b} \end{subfigure} \bigskip \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{images/figure-1-c} \caption{A post (\url{https://www.instagram.com/p/B2v3pAIgD6q/}) relevant to \#lipstick, where a brand is advertising its lipstick. This post is not related to tax evasion activities as the poster is promoting a registered channel (brand owner).} \label{fig:posts:c} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{images/figure-1-d} \caption{A post (\url{https://www.instagram.com/p/B2snt6tAVsZ/}) relevant to \#lipstick, where an unregistered producer is posting advertisement of lipsticks. This post is related to tax evasion activities as the poster is using Instagram to generate unregistered transactions.} \label{fig:posts:d} \end{subfigure} \caption{Example posts from the dataset} \label{fig:posts} \end{figure*} \subsection{Data Collection} To build our dataset, we crawled publicly available posts and their corresponding poster information from Instagram. In the proof-of-concept stage, the posts are collected using hashtag \#lipstick during 22$^{nd}$ to 26$^{th}$ September, 2019. For posts, we collected the username, post timestamp, number of likes, image, post text, and all its comments (we include the original post text as the first comment due to the way Instagram presents the posts). We also extracted the hashtags from the comments as they usually form a significant part of the textual information. Moreover, although not used at this stage, we also collected posters’ user information, including the username, number of followers, number of following users, number of posts, and the user bio. We collected a total of 58,660 posts (short-lived and duplicated posts included), then randomly sampled 3,000 posts for manual data labelling. 
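The hashtag-extraction step mentioned above can be sketched with a simple regular expression (an illustrative helper, not the actual collection pipeline; real Instagram hashtag rules are more permissive than `\w`):

```python
import re

# Matches '#' followed by word characters (letters, digits, underscore).
HASHTAG_RE = re.compile(r"#(\w+)", re.UNICODE)

def extract_hashtags(comments):
    """Pull lower-cased hashtags out of a post's comments, in order
    of appearance (the original post text is treated as the first
    comment, as in our dataset)."""
    tags = []
    for comment in comments:
        tags.extend(t.lower() for t in HASHTAG_RE.findall(comment))
    return tags
```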
As some of the Instagram posts were deleted after a short period of time, we checked the availability of the posts in November 2019 to ensure all the labelled posts had been available for more than 50 days. 711 posts had been deleted at the time of checking. Moreover, we also removed 148 duplicated posts due to the way they are displayed on Instagram. This produced a dataset of 2,081 unique posts sampled from 22$^{nd}$ to 26$^{th}$ September, 2019. \subsection{Data Labelling} For each collected post, we labeled 9 properties (as shown in Table~\ref{tab:labels}): availability of the post; its relevance to the search keyword \#lipstick; its selling intention; its source; its relation to the hidden economy; its image type; the language of the text; the existence of other contact details; and the type of other contact details left on the post. For transaction-based tax evasion, the key property is the post's relation to the hidden economy, which is defined as posts by unregistered sellers and producers who intend to generate hidden economy sales. During this process, Instagram is either their main source of customers, or a tool to direct sales to the account holder's other social media channels for the completion of the sale.
\begin{table*}
\caption{The 9 data labels and their corresponding post properties}
\label{tab:labels}
\begin{tabular}{p{0.15\textwidth} p{0.3\textwidth} l}
\toprule
Label & Description & Options\\
\midrule
Availability & Whether the post is still available (not deleted) at the time of labelling & \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item Y: the post is still available \item N: the post is not available \end{itemize} \end{minipage} \\
\midrule
Relevance & Whether the post is relevant to \#lipstick & \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item Y: the post is relevant to \#lipstick or lipstick-related items (Figures~\ref{fig:posts:c} and \ref{fig:posts:d}) \item N: the post is not relevant to \#lipstick and it shows something else (Figures~\ref{fig:posts:a} and \ref{fig:posts:b}) \end{itemize} \end{minipage} \\
\midrule
Selling intention & Judging from the text and image, does the poster have an intention to sell or will the post lead to a potential sale? & \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item Y: Yes, the poster has an intention to sell or the post will lead to a potential sale (Figures~\ref{fig:posts:b}, \ref{fig:posts:c} and \ref{fig:posts:d}) \item N: No, the poster does not have an intention to sell or the post will not lead to a potential sale (Figure~\ref{fig:posts:a}) \end{itemize} \end{minipage} \\
\midrule
Source of the post & Judging from the text and image of each post, what is the nature of the poster?
& \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item I: Individuals who share posts on Instagram and the account for that individual is of a personal nature and is used purely for private sharing purposes (Figure~\ref{fig:posts:a}) \item S: Cosmetics retail shops that have created their own Instagram accounts for advertising and promotion purposes \item B: Brands such as Dior or Mac operating their own official accounts on Instagram for advertising purposes (Figure~\ref{fig:posts:c}) \item M: Makeup artists/makeup tutors/models who recommend products to their fans on Instagram, or show off how the lipstick looks in a full makeup to promote products or services \item D: Daigous (purchasing agents) who purchase lipsticks on behalf of their customers (Figure~\ref{fig:posts:b}) \item P: Unregistered producers who produce lipsticks without a license and sell their homemade products (Figure~\ref{fig:posts:d}) \end{itemize} \end{minipage} \\
\midrule
Hidden economy transactions & Judging from the text and image, is the post related to hidden economy transactions and thereby resulting in tax evasion? & \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item Y: Yes, the post is related to hidden economy transactions and will result in tax evasion (Figures~\ref{fig:posts:b} and \ref{fig:posts:d}) \item N: No, the post is not related to hidden economy transactions and will not result in tax evasion (Figures~\ref{fig:posts:a} and \ref{fig:posts:c}) \end{itemize} \end{minipage} \\
\midrule
Image content type & What is the content of the image on the post?
& \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item B: A body part, such as the lip or arm \item P: A product, such as matte lipstick, glossy lipstick or balm tint (Figures~\ref{fig:posts:b} and \ref{fig:posts:c}) \item P+B: Both the product and a body part (Figure~\ref{fig:posts:d}) \item A: An advertisement or marketing poster \item S: A screenshot of a conversation between two users \end{itemize} \end{minipage} \\
\midrule
Language & What language does the poster write in? & \begin{minipage}[t]{0.5\textwidth} E.g., English, Indonesian, Malaysian, Thai, Spanish, Chinese, French, Arabic and Vietnamese \end{minipage} \\
\midrule
Has other contact details & Were other contact details left on the post (most posters who aim to make sales will leave contact details of other social media platforms/messaging apps)? & \begin{minipage}[t]{0.5\textwidth} \begin{itemize}[leftmargin=*] \item Y: Yes, there are other contact details left on the post (Figure~\ref{fig:posts:b}) \item N: No, there are no other contact details left on the post (Figures~\ref{fig:posts:a}, \ref{fig:posts:c} and \ref{fig:posts:d}) \end{itemize} \end{minipage} \\
\midrule
Specifics of other contact details & What are the other contact details left on the post (if any)? & \begin{minipage}[t]{0.5\textwidth} E.g., WhatsApp, WeChat, Facebook, Line, Phone, etc. \end{minipage} \\
\bottomrule
\end{tabular}
\end{table*}

\section{Deep Neural Network Models}
To detect transaction-based tax evasion activities of Instagram users, we analyzed their posts to extract features from the posted images, hashtags, and comments. As images and texts have different data structures and modalities, two DNN architectures, i.e., adapter-BERT \cite{Houlsby2019} and EfficientNet \cite{Tan2019}, were used for text and image feature extraction, respectively.
The features were extracted automatically from the post data using the established DNN models and were mapped into a joint feature space of both image and text features. A multi-modal DNN model was then developed, which took the joint features as input, to detect transaction-based tax evasion activities. The workflow of the proposed method is illustrated in Figure~\ref{fig:flowchart}.

\begin{figure*} \centering \includegraphics[width=\textwidth]{images/figure-2} \caption{The schematic flow-chart of the proposed method} \label{fig:flowchart} \end{figure*}

\subsection{Image and Text Feature Extraction}
The adapter-BERT architecture \cite{Houlsby2019} was used to extract features from the textual content, i.e., hashtags and comments. Since the textual content contains multiple languages, a multilingual tokenizer was used to convert words into token vectors; it supports all the languages in our dataset, as presented in Table~\ref{tab:labels}. For image feature extraction, the EfficientNetB7 model \cite{Tan2019} was used, as it represents the state of the art in object detection while being 8.4 times smaller and 6.1 times faster on inference than the best existing models in the ImageNet Challenge \cite{Deng2009}. The image and text features were then concatenated, resulting in a high-dimensional feature vector for each post sample. The text features extracted by adapter-BERT are 768-dimensional for both the hashtag input and the comment input, and the image features extracted by EfficientNetB7 are 2,560-dimensional, so the joint feature space is $768 + 768 + 2560 = 4096$-dimensional.

\subsection{Multi-Modal DNN Model}
We implemented a multi-modal DNN model by combining three basic models: two adapter-BERT models for hashtags and comments, and an EfficientNetB7 model for images. The logit layers of the three models were merged using a concatenate layer.
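The dimensionality bookkeeping of the joint feature space above can be sketched as follows; the random vectors below are placeholders for the real adapter-BERT and EfficientNetB7 extractor outputs, not the authors' actual pipeline.

```python
import numpy as np

# Placeholder per-post feature vectors: two 768-dim text features
# (hashtags and comments) and one 2,560-dim image feature.
rng = np.random.default_rng(0)
hashtag_feat = rng.standard_normal(768)   # stand-in for adapter-BERT (hashtags)
comment_feat = rng.standard_normal(768)   # stand-in for adapter-BERT (comments)
image_feat = rng.standard_normal(2560)    # stand-in for EfficientNetB7 (image)

# Concatenation into the joint feature space: 768 + 768 + 2560 = 4096.
joint_feat = np.concatenate([hashtag_feat, comment_feat, image_feat])
assert joint_feat.shape == (4096,)
```

In the full model, this concatenated vector is what the dropout and output dense layers operate on.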
A dropout layer was then added on top of the concatenate layer, followed by an output dense layer, where a positive prediction corresponds to a tax evasion activity, and vice versa. Both adapter-BERT models were implemented using the bert-for-tf2 Python package (v0.14) \cite{kpe2020} and pre-trained on the entire Wikipedia dump for the top 100 languages in Wikipedia \cite{Devlin2020}. The same meta-parameters were used for building the adapter-BERT models: max sequence length = 64, adapter size = 64. The EfficientNetB7 model was implemented using the efficientnet package (v1.10) \cite{Yakubovskiy2020} and initialized with ImageNet pre-trained weights. To address the imbalanced distribution of the positive and negative samples, we set the class weights to (negative: 0.4, positive: 1.6), i.e., each class was weighted by the ratio of the opposite class's sample size to the total sample size, scaled by the number of classes. The Adam optimizer \cite{Kingma2014} with a learning rate of 0.0001 was used to train the model for 100 epochs.

\section{Results}
A total of 400 posts were randomly selected for testing the model, and the remaining 1,681 posts were used for training. An internal validation set (20\% of the training samples) was split from the training set and used to evaluate the model's performance during training. To test the contribution of the individual text and image inputs and the effectiveness of multi-modal inputs, we compared our proposed model with the three basic DNN models using only hashtag, comment, and image inputs, respectively. The same initialization method, meta-parameters, optimization method, and learning rate as for the multi-modal model were used for the three basic models. We used Precision, Recall, F1 Score, and the Area Under the Curve (AUC) to evaluate the models' performance.
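The evaluation metrics used below can be computed directly from raw confusion counts; the tp/fp/fn values in this sketch are hypothetical and serve only to illustrate the definitions.

```python
# Precision, recall, and F1 from confusion counts:
#   precision = TP / (TP + FP), recall = TP / (TP + FN),
#   F1 = harmonic mean of precision and recall.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical example counts (not from the paper's confusion matrix):
p, r, f1 = precision_recall_f1(tp=80, fp=30, fn=20)
# precision ≈ 0.727, recall = 0.800, F1 ≈ 0.762
```

A high-recall model (like the hashtag model below) misses few true positives but admits many false positives; a high-precision model (like the image model) is the reverse.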
\begin{table}
\caption{Performance of the DNN models based on different input features}
\label{tab:results}
\begin{tabular}{lrrr}
\toprule
Input & Precision & Recall & F1 Score\\
\midrule
hashtags & 0.444 & \textbf{0.890} & 0.593\\
comments & 0.656 & 0.855 & 0.742\\
images & \textbf{0.756} & 0.645 & 0.696\\
multi-modal & 0.722 & 0.807 & \textbf{0.762}\\
\bottomrule
\end{tabular}
\end{table}

The results of the proposed multi-modal DNN model and the compared individual basic models are presented in Table~\ref{tab:results}, and the Receiver Operating Characteristic (ROC) curves of these models are illustrated in Figure~\ref{fig:roc}. The model using hashtag features has the most imbalanced performance, with the highest recall (0.890) but the lowest precision (0.444) and F1 score (0.593). The model using comment features is less imbalanced and has a substantially better F1 score (0.742). The model using image features achieved the highest precision (0.756), meaning its positive predictions were the most reliable, but its overall performance (F1 score = 0.696) was compromised by the lowest recall (0.645). The differences in precision and recall between the image model and the text models indicate that the visual and textual contents may provide complementary information for hidden economy activity detection. This finding is further evidenced by the improved performance of the multi-modal model (F1 score = 0.762, AUC = 0.808), which outperformed all single-modality models.

\begin{figure} \centering \includegraphics[width=\linewidth]{images/figure-3} \caption{ROC curves of the DNN models based on different input features} \label{fig:roc} \end{figure}

\section{Discussion}
For tax authorities to effectively deploy this technology to aid in detecting transaction-based tax evasion, cost is an important factor. The very few examples of actual implementation of data-matching by tax authorities have been somewhat cost-ineffective.
Take the work program of the ATO as an example: in order to mitigate the major tax integrity risk of the hidden economy, its budget was AUD 39.5 million in 2015--16, employing around 400 people, of whom only 6 were data mining experts \cite{ATO2016}. Their work includes manually viewing social media posts. In terms of cost, the proof-of-concept stage of our model was time and labor efficient: three labellers spent 60 hours to complete the manual labelling. Our model markedly improves the efficiency of detecting and confirming posts that relate to transaction-based tax evasion. Without the detection model, tax officers would need to randomly select posts and could expect to detect about 22 tax evasion activities per 100 posts (464 out of 2,081). With our method, the model first identifies suspicious posts, and tax officers then manually confirm whether these posts relate to tax evasion. We can expect to identify about 72 real tax evasion activities out of 100 recommended suspicious posts. Therefore, with the same amount of effort, the efficiency can be improved by more than 3 times. In our current labelled dataset, about 10 percent of the samples have video clips instead of images. Since our current method cannot take video clips as input, we replaced the video clips with random noise images for visual feature extraction. Features extracted from random noise images do not provide useful information for our detection task and may thereby compromise the performance of the model. As shown in Figure~\ref{fig:roc}, a marked drop in precision was observed in the image model's performance. For future work, we will implement a video modality module to enhance the model's applicability and improve its performance. Another interesting finding is that textual features tend to have higher recall, whereas image features have higher precision, and the combined features can outperform any individual type of features.
It would be worthwhile to further investigate the complementary nature of different features and to understand the relationship between them, e.g., which words and images have higher weights in detecting tax evasion activities or in other applications.

\section{Conclusions and Roadmap}
In conclusion, we developed a Regtech tool that automatically detects transaction-based tax evasion activities on social media platforms. In the proof-of-concept stage, we collected a dataset of Instagram posts about \#lipstick and manually annotated sampled posts with multiple labels related to sales and tax evasion activities. The dataset provides a solid baseline for understanding sales and hidden economy activities on Instagram. We then developed a multi-modal DNN model to automatically detect transaction-based tax evasion activities from the posts. We adopted a modularized structure for the DNN model so that the processing sub-module for each modality can be changed and new modalities can be incorporated easily. Evaluation results confirm the efficiency and effectiveness of the Regtech tool. This tool could help tax authorities identify audit targets efficiently and effectively, and combat social e-commerce tax evasion at scale. As a roadmap to a full-fledged tool, we aim to extend the detection capability to more products (other than lipstick), and eventually obtain a detection model that remains robust even for products unseen by the model. This continuous process will also expand our novel dataset. We will make this dataset available to the research community once we develop a prototype of the full-fledged tool. Moreover, we plan to incorporate additional modalities to further improve the detection rate (e.g., the tax evasion nature of 58 out of the 2,081 posts can only be decided by a combination of the post content and the poster's information).
This proof-of-concept model has attracted attention from both the State Administration of Taxation in the People's Republic of China (the PRC) and the Australian Federal Police (AFP). Our team has formed a collaborative relationship with the Tax Science and Research Institute, a department under the State Administration of Taxation in the PRC. Collaborating with their frontier research lab, we aim to test this project as part of the designing stage of China's Golden Tax Project, with the plan to incorporate other products that are popular with Daigous into our model design. For example, due to COVID-19 there is a worldwide shortage of medical protective equipment such as masks, and the PRC government is keen to regulate the unregistered sales of those products. We are waiting for them to provide a list of these popular products. As for the collaboration with the AFP, the detection of hidden economy transactions on social media platforms covers several legal aspects, and the AFP is also interested in the regulation of other legal issues such as infringement of IP, fake products, and smuggling.

\begin{acks}
We acknowledge our data and labelling team: Alex Huang, Pei-Wen Sophie Zhong, and Jun Zhao. Special thanks to Fujitsu Australia Limited for providing the computational resources for this study.
\end{acks}

\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{Sec1}
\IEEEPARstart{R}{apid} growth of wireless networks and the emergence of a variety of applications, including the Internet of Things (IoT), massive machine-type communication (mMTC), and critical controls, necessitate sophisticated solutions to secure data transmission. Traditionally, the main objective of security schemes, using either cryptographic or information-theoretic approaches, is to secure data in the presence of adversary eavesdroppers. However, a stronger level of security can be obtained in wireless networks if the existence of communication is hidden from the adversaries. To this end, there has recently been increasing attention to covert communication, also referred to as communication with \textit{low probability of detection} (LPD), in various scenarios with the goal of hiding the existence of communication \cite{bash2013limits, bash2015hiding, che2013reliable, wang2016fundamental, bloch2016covert, tahmasbi2018first, arumugam2019embedding, arumugam2018covert}. Generally speaking, covert communication refers to the problem of reliable communication between a transmitter Alice and a receiver Bob while maintaining a low probability of detecting the existence of communication from the perspective of a warden Willie \cite{bash2015hiding}. {In contrast to traditional cryptographic schemes and similar to physical-layer security schemes, covert communication exploits the physical layer of a communication network to provide security. The most important difference between the settings of physical-layer security and covert communication is the functionality of the illegitimate parties, i.e., the eavesdropper Eve and the warden Willie.
In fact, while covert communication attempts to hide the existence of the communication from the warden, physical-layer security schemes aim at minimizing the information obtained by the eavesdroppers through exploiting the dynamic characteristics of the wireless medium \cite{bloch2011physical_book}. Therefore, as opposed to covert communication, physical-layer security does not provide protection against the detection of a transmission. Hence, covert communication can provide a stronger level of security while also achieving privacy of the transmitter by guaranteeing a negligible detection probability of the transmission at a warden.} {The information-theoretic limits on the rate of covert communication have been presented in \cite{bash2013limits} for additive white Gaussian noise (AWGN) channels. More specifically, assuming the communication blocklength to be $n$, it has been proved in \cite{bash2013limits} that $\mathcal{O}(\sqrt{n})$ bits of information can be transmitted to Bob, reliably and covertly, in $n$ uses of the channel, as $n\to\infty$.} The same square root law has been developed for binary symmetric channels (BSCs) in \cite{che2013reliable} and discrete memoryless channels (DMCs) in \cite{wang2016fundamental}. Moreover, the principle of channel resolvability has been used in \cite{bloch2016covert} to develop a coding scheme that can reduce the number of required shared key bits. Also, the first- and second-order asymptotics of covert communication over binary-input DMCs have been studied in \cite{tahmasbi2018first}. The covert communication setup has also been extended to broadcast channels \cite{arumugam2019embedding} and to multiple-access channels \cite{arumugam2018covert} from an information-theoretic perspective. 
{The achievable covert rate (i.e., the ratio of the number of information bits to the number of channel uses) in the aforementioned framework is zero as $n$ grows large since $\lim_{n\to\infty}\mathcal{O}(\sqrt{n})/n=0$.} However, it has been demonstrated that positive covert rates can be achieved by introducing additional uncertainty, from Willie's perspective, into the system. For instance, it is shown in \cite{lee2015achieving,goeckel2016covert} that Willie's uncertainty about his noise power helps achieve positive covert rates. Moreover, by considering slotted AWGN channels, it is proved in \cite{bash2016covert} that positive covert rates are achievable if the warden does not know when the transmission is taking place. The possibility of achieving positive-rate covert communication has further been demonstrated for several other scenarios, such as amplify-and-forward (AF) relaying networks with a greedy relay attempting to transmit its own information to the destination on top of forwarding the source's information \cite{hu2018covert}, dual-hop relaying systems with channel uncertainty \cite{wang2019covert}, a downlink scenario under channel uncertainty and with a legitimate user as the cover \cite{shahzad2017covert}, and a single-hop setup with a full-duplex receiver acting as a jammer \cite{shahzad2018achieving}. Additionally, covert communication in the presence of a multi-antenna adversary, under delay constraints, and for the case of quasi-static wireless fading channels is considered in \cite{shahzad2019covert}. In \cite{hu2019covert}, channel inversion power control is adopted to achieve covert communication with the aid of a full-duplex receiver. Covert communication in the context of unmanned aerial vehicle (UAV) networks is considered in \cite{zhou2019joint}.
{ Physical-layer security has been investigated in \cite{wang2018physical,liu2020robust} for visible light communication (VLC) and can be extended to covert communication.} Very recently, the problem of joint covert communication and secure transmission in untrusted relaying networks in the presence of multiple wardens has been considered in \cite{forouzesh2020covert}. Moreover, the benefits of beamforming in improving the performance of covert communication in the presence of a jammer have been studied in \cite{forouzesh2020covert2}. Prior studies on covert communication in wireless networks mostly consider omni-directional transmission over conventional radio frequency (RF) wireless links. However, a superior performance can potentially be attained when performing covert communication over the millimeter-wave (mmWave) bands. In particular, operating over the mmWave bands not only increases the covertness thanks to directional beams, but also increases the transmission data rates, given the much larger available bandwidth, and enables ultra-low form-factor transceivers, due to the shorter wavelengths compared to the conventional RF counterpart. This makes the mmWave band a suitable option for covert communication to increase the security level of wireless applications involving critical data. Also, with the advancement in mmWave communications and the rapid development of mmWave cellular networks in the fifth generation of wireless networks (5G) and beyond, mmWave systems will serve as major components for a wide range of emerging wireless networking applications and use cases. This necessitates secure transmission schemes for mmWave systems and further highlights the importance of covert mmWave communication. The channel model and system architecture of mmWave communication systems differ significantly from those of RF communication.
In particular, communication over the mmWave bands can exploit directive beams, thanks to the deployment of massive antenna arrays, to compensate for the significant path loss over this range of frequencies\footnote{{Directive beams can also be exploited in RF systems through beamforming technology. However, given the much smaller wavelengths at the mmWave bands compared to the RF bands, it is much easier to realize large antenna arrays and (narrow) directive beams, especially at mobile users, over mmWave systems.}}. Meanwhile, the significant susceptibility of directive mmWave links to blockage results in a nearly bimodal channel, depending on whether a line-of-sight (LOS) link exists between the transmitter and receiver \cite{andrews2017modeling}. Furthermore, the properties of mmWave and RF channels, including path loss and the statistical distribution of fading, are often modeled very differently. Therefore, the existing results on covert communication cannot immediately be extended to covert communication over the mmWave bands. In this paper, we study covert communication over mmWave channels from a communication theory perspective. More specifically, we analyze the performance of the system in the limit as the blocklength $n$ grows large. In order to achieve positive-rate covert communication, the transmitter Alice is equipped with two antenna arrays, each pointed in a different direction and carrying independent data streams. The first antenna array forms a directive beam for covert data transmission to Bob. The second array is used to generate another beam toward Willie as a jamming signal, whose transmit power is varied independently across the transmission blocks in order to achieve the desired covertness. This paper is the first attempt at an analytical study of covert communication over mmWave systems.
It is worth mentioning that a conceptual framework for covert mmWave communication was envisioned in \cite{cotton2009millimeter} without providing analytical studies. To the best of the authors' knowledge, no analytical characterization of covert mmWave systems has been carried out in prior works. Very recently, after the appearance of the initial version of this work \cite{jamali2019covert}, Zhang \textit{et al.} \cite{zhang2020joint} studied joint beam training and data transmission for covert mmWave communication. More specifically, the authors of \cite{zhang2020joint} aimed at jointly optimizing the beam training duration (to establish a directional link between Alice and Bob), the training power, and the data transmission power to maximize the effective covert rate while satisfying the covertness constraint on Willie. { The main contributions of the paper are summarized as follows.
\begin{itemize}
\item We characterize Willie's optimal detection performance in terms of the overall (minimum) detection error rate, and derive a closed-form expression for the expected value of the detection error rate from Alice's perspective.
\item To characterize the performance of the desired link, we obtain a closed-form expression for the outage probability of the Alice-Bob link, and then formulate the optimal covert rate that is achievable in our proposed setup.
\item We further obtain tractable forms for the ergodic capacity of the Alice-Bob link involving only one-dimensional integrals that can be computed in closed form for most ranges of the channel parameters.
\item We highlight how the results of the paper can be extended to more practical scenarios, particularly to the cases where perfect information about Willie's location is not available to Alice. We also provide several important directions for future research on covert mmWave communication.
\item We present extensive numerical analysis to study the system performance in various aspects.
\end{itemize} } The rest of the paper is organized as follows. In Section \ref{Sec2}, we briefly summarize the mmWave channel model and describe the proposed covert mmWave communication setup. In Section \ref{Sec3}, we analyze Willie's overall error rate with an optimal radiometer detector, and then obtain its expected value from Alice's perspective. Section \ref{Sec4} is devoted to studying the performance of the Alice-Bob link in terms of the outage probability, effective covert rate, and ergodic capacity. Discussions of various realistic scenarios, including imperfect knowledge about Willie's location, as well as some future research directions, are provided in Section \ref{Sec5}. Finally, extensive numerical results are presented in Section \ref{Sec6}, and the paper is concluded in Section \ref{Sec7}.

\section{Channel and System Models}\label{Sec2}
{ In this section, we first briefly characterize mmWave channels and describe their distinct properties to enable explaining the system model presented afterwards.
\subsection{MmWave Channel Model}\label{Sec2A}}
Recent experimental studies have demonstrated that mmWave links are highly sensitive to blocking effects \cite{rappaport2013millimeter,andrews2017modeling}. In order to model this characteristic, a proper channel model should differentiate between LOS and non-LOS (NLOS) links. Therefore, two different sets of parameters are considered for the LOS and NLOS mmWave links, and a deterministic function $P_{\rm LOS}(d_{ij}) \in [0,1]$, which is a non-increasing function of the link length $d_{ij}$ (in meters) between nodes $i$ and $j$, is defined to characterize the probability of an arbitrary link of length $d_{ij}$ being LOS. In this paper, we consider a generic function $P_{\rm LOS}(d_{ij})$ throughout our analysis and use the model $P_{\rm LOS}(d_{ij})={\rm e}^{-d_{ij}/200}$, suggested in \cite{andrews2017modeling}, for our numerical analysis.
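The blockage model used in the numerical analysis, $P_{\rm LOS}(d)={\rm e}^{-d/200}$, can be sketched as a one-line function; this is purely an illustration of the model's shape, not part of the paper's analysis.

```python
import math

# LOS probability of a link of length d (in meters): P_LOS(d) = exp(-d/200).
def p_los(d):
    return math.exp(-d / 200.0)

# The model is non-increasing in the link length, as required:
assert p_los(0.0) == 1.0
assert p_los(50.0) > p_los(100.0) > p_los(200.0)
```

For example, a 200-meter link is LOS with probability $e^{-1}\approx 0.37$ under this model.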
Next, we briefly describe how the LOS and NLOS channels can be characterized. Similar to \cite{andrews2017modeling}, we express the channel coefficient for an arbitrary mmWave link between the transmitter $i$ and receiver $j$ as $h_{ij}=\tilde{h}_{ij}\sqrt{{G}_{ij}L_{ij}}$, where $\tilde{h}_{ij}$, ${G}_{ij}$, and $L_{ij}$ are the channel fading coefficient, the total directivity gain (including both the transmitter and the receiver beamforming gains), and the path loss of the $i$-$j$ mmWave link, respectively. This model is widely used in the literature for analytical tractability purposes. The reader is referred to \cite{andrews2017modeling} and the references therein for more details on mmWave channel modeling and also the validity of this model. \begin{table}[t] \centering \caption{Probability mass function (PMF) of the directivity gain of a node $q$ with beamsteering error \cite{di2015stochastic}.} \label{T1} \begin{tabular}{M{0.6cm}||M{2.1cm}M{3cm}} $k$ & $1$ & $2$\\ \hline\hline {\vspace{0.1cm}} $g^{(q)}_{k}$ {\vspace{0.05cm}} & $M^{(q)}_{\mathcal{X}}$ & $m^{(q)}_{\mathcal{X}}$ \\ \hline {\vspace{0.1cm}} $b^{(q)}_{k}$ {\vspace{0.2cm}} &$F_{|\mathcal{E}^{(q)}_{\mathcal{X}}|}\left({\theta^{(q)}_{\mathcal{X}}}/{2}\right)$& $1-F_{|\mathcal{E}^{(q)}_{\mathcal{X}}|}\left({\theta^{(q)}_{\mathcal{X}}}/{2}\right)$ \\ \hline \end{tabular} \end{table} {To characterize the path loss $L_{ij}$ of the $i$-$j$ link with the length $d_{ij}$, we consider different path loss exponents ($\alpha_{\rm L},\alpha_{\rm N}$) and intercepts ($C_{\rm L},C_{\rm N}$) for the LOS and NLOS links, respectively. Let $L^{(\rm L)}_{ij}$ and $L^{(\rm N)}_{ij}$ denote the path losses of the LOS and NLOS links, respectively. Then the path loss $L_{ij}$ is either equal to $L^{(\rm L)}_{ij}=C_{\rm L}d_{ij}^{-\alpha_{\rm L}}$ with probability $P_{\rm LOS}(d_{ij})$ or equal to $L^{(\rm N)}_{ij}=C_{\rm N}d_{ij}^{-\alpha_{\rm N}}$ with probability $1-P_{\rm LOS}(d_{ij})$. 
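The bimodal path loss model above can be sampled as sketched below: a link is LOS with probability $P_{\rm LOS}(d_{ij})$ and takes the corresponding power-law loss. The numeric exponents and intercepts here ($\alpha_{\rm L}=2$, $\alpha_{\rm N}=4$, $C_{\rm L}=C_{\rm N}=1$) are illustrative placeholders, not values from the paper.

```python
import math
import random

# Illustrative placeholder parameters (NOT from the paper):
ALPHA_L, ALPHA_N = 2.0, 4.0   # LOS / NLOS path loss exponents
C_L, C_N = 1.0, 1.0           # LOS / NLOS intercepts

def sample_path_loss(d, rng=random):
    """Draw the path loss of a link of length d under the bimodal model."""
    if rng.random() < math.exp(-d / 200.0):   # LOS with probability P_LOS(d)
        return C_L * d ** (-ALPHA_L)          # L_ij^(L) = C_L * d^(-alpha_L)
    return C_N * d ** (-ALPHA_N)              # L_ij^(N) = C_N * d^(-alpha_N)

# At a fixed distance, each draw equals one of the two deterministic values.
loss = sample_path_loss(100.0)
assert loss in (C_L * 100.0 ** -ALPHA_L, C_N * 100.0 ** -ALPHA_N)
```

With these placeholder exponents, an NLOS draw at 100 m is four orders of magnitude weaker than a LOS draw, consistent with the remark below on weak mmWave diffraction.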
Note that the path loss in the NLOS links can be much higher than that of the LOS path due to the weak diffractions in the mmWave bands \cite{rappaport2013millimeter}. To ascertain the total directivity gain ${G}_{ij}$, we use the common sectored-pattern antenna model \cite{bai2015coverage,di2015stochastic} which approximates the actual array beam pattern by a step function, i.e., with a constant main lobe gain $M^{(q)}_{\mathcal{X}}$ over the beamwidth $\theta^{(q)}_{\mathcal{X}}$ and a constant side lobe gain $m^{(q)}_{\mathcal{X}}$ otherwise, where ${\mathcal{X}}\in\{{\rm TX,RX}\}$ and ${q}\in\{{i,j}\}$. Then, for a given link, if the spatial arrangement of the beams of the transmitter and receiver are known, the total directivity gain can be obtained from the product of the gains of the transmitter and receiver. If the main lobe of a node $q$ (either transmitter or receiver) is pointed to another node, we assume that an additive beamsteering error exists, denoted by a symmetric random variable (RV) $\mathcal{E}^{(q)}_{\mathcal{X}}$, in the vicinity of the transmitter-receiver direction. Same as in \cite{di2015stochastic}, it is assumed that node $q$ has a gain equal to $M^{(q)}_{\mathcal{X}}$ if $|\mathcal{E}^{(q)}_{\mathcal{X}}|<\theta^{(q)}_{\mathcal{X}}/2$, which occurs with probability $F_{|\mathcal{E}^{(q)}_{\mathcal{X}}|}(\theta^{(q)}_{\mathcal{X}}/2)$ with $F_X(x)$ being the cumulative distribution function (CDF) of the RV $X$. Otherwise, it has a gain equal to $m^{(q)}_{\mathcal{X}}$. Then the probability mass function (PMF) of the directivity gain of a node $q$ with beamsteering error can be expressed as a RV taking the values $g_k^{(q)}$ with probabilities $b_k^{(q)}$, $k\in\{1,2\}$, as summarized in Table \ref{T1}. 
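The sectored-antenna directivity model of Table~\ref{T1} can be sampled as sketched below: a node's gain equals the main-lobe gain with probability $b^{(q)}_1=F_{|\mathcal{E}^{(q)}_{\mathcal{X}}|}(\theta^{(q)}_{\mathcal{X}}/2)$ and the side-lobe gain otherwise, and the link's total gain is the product of the two node gains. The gain values and alignment probability used here are illustrative placeholders, not values from the paper.

```python
import random

def sample_gain(M, m, b1, rng=random):
    """Node directivity gain: main-lobe gain M with probability b1
    (beamsteering error within half the beamwidth), side-lobe gain m
    otherwise, per the two-point PMF in Table 1."""
    return M if rng.random() < b1 else m

# Illustrative placeholder values (NOT from the paper):
g_tx = sample_gain(M=18.0, m=0.1, b1=0.9)   # transmitter gain
g_rx = sample_gain(M=18.0, m=0.1, b1=0.9)   # receiver gain

# Total directivity gain G_ij is the product of the two node gains,
# so it takes one of three values: M*M, M*m, or m*m.
G = g_tx * g_rx
assert G in (18.0 * 18.0, 18.0 * 0.1, 0.1 * 0.1)
```

The same two-point PMF is what makes Willie's received power a discrete mixture over beam alignments in the detection analysis.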
Finally, it is common in the literature to model the fading amplitude of mmWave links as independent Nakagami-distributed RVs with shape parameter $\nu\geq 1/2$ and scale parameter $\Omega=\E[|{\tilde{h}}_{ij}|^2]=1$, and consider different Nakagami parameters for the LOS and NLOS links as $\nu_{\rm L}$ and $\nu_{\rm N}$, respectively \cite{andrews2017modeling,bai2015coverage}. In the case of Nakagami-$m$ fading with parameters $\nu_{\mathcal{B}}$, $\mathcal{B}\in\{{\rm L},{\rm N}\}$, and $\Omega=1$, $|{\tilde{h}}_{ij}|^2$ has a normalized gamma distribution with shape and scale parameters of $\nu_{\mathcal{B}}$ and $1/\nu_{\mathcal{B}}$, respectively. Therefore, the probability density function (PDF) of $|{\tilde{h}}_{ij}|^2$ is given by \cite{simon2005digital} \begin{align}\label{pdf_gamma} f_{|{\tilde{h}}_{ij}|^2}(y)=\frac{ {\nu_{\mathcal{B}}}^{\nu_{\mathcal{B}}} y^{\nu_{\mathcal{B}}-1}}{\Gamma(\nu_{\mathcal{B}})}\exp\left(-\nu_{\mathcal{B}}y\right). \end{align} As will be clarified later, in order to derive tractable closed-form expressions, we will often assume in this paper that the shape parameter $\nu_{\mathcal{B}}$ is an integer. Note that, from an information-theoretic perspective, mmWave communications, and in general wideband communications under power constraints, can be viewed as low-capacity scenarios \cite{fereydounian2018channel,jamali2020massive,jamali2018low}, suggesting a natural framework for covert mmWave communication.} \subsection{System Model}\label{Sec2B} We consider the well-known setup for covert communication consisting of three parties: a transmitter Alice intends to communicate covertly with a receiver Bob over the mmWave bands while a warden Willie attempts to detect the existence of this communication. Alice employs a dual-beam mmWave transmitter consisting of two antenna arrays.
The first antenna array is used for the transmission to Bob while the second array is exploited as a jammer to enable positive-rate covert mmWave communication. { Note that, although the number of antenna elements in mmWave arrays is typically large to compensate for the significant propagation loss through beamforming (directionality gain), the wavelengths are much smaller than those of RF communication (e.g., $5$ \si{mm} at $60$ \si{GHz} versus $60$ \si{mm} at $5$ \si{GHz}). Therefore, it is feasible to realize large mmWave antenna arrays in a small package thanks to the recent advancements in antenna circuit design \cite{andrews2017modeling,hong2014study}. Additionally, it is practical to consider two separate antenna arrays in a given mmWave transmitter. In particular, a first-of-the-kind mmWave antenna system prototype has been presented in \cite{hong2014study} that integrated two separate mmWave antenna arrays, each of size $1\times 16$, inside a Samsung cell phone (one at the top and the other at the bottom of the cell phone). Moreover, the authors in \cite{rangan2014millimeter} proposed incorporating several mmWave antenna arrays throughout a mobile device to provide path diversity from blockage by human obstructions.} Given the above transmission model, when Bob is not in the main lobe of the Alice-Willie link, he receives the jamming signal gained with the side lobe of the second array in addition to receiving the desired signal from Alice with the main lobe of the first array. Similarly, when Willie is not in the main lobe of the Alice-Bob link, he receives the desired signal gained with the side lobe of the first array in addition to receiving the jamming signal from Alice with the main lobe of the second array.
On the other hand, when Bob is in the main lobe of the Alice-Willie link (or equivalently, Willie is in the main lobe of the Alice-Bob link), both of the received signals by Bob and Willie are gained with main lobes\footnote{{In such extreme cases, both Bob and Willie receive the desired signal with the same gain from Alice. Given the small wavelengths at the mmWave bands, one can exploit relatively large antenna arrays to realize three-dimensional (3D) beamforming \cite{razavizadeh2014three,forouzesh2020covert2} at Alice's first array to further focus the beam (in three dimensions) toward Bob and reduce the chance of Willie receiving the desired signal with the (large) main lobe gain. Further investigation in this direction is left for future work.}}. Throughout our analysis in Sections \ref{Sec3} and \ref{Sec4}, we assume that Alice, Bob, and Willie are at fixed locations (and hence have given directivity gains); we defer the discussion of various realistic scenarios, such as imperfect knowledge of Willie's location, to Section \ref{Sec5}. Let the channel coefficients between Alice's first and second arrays and the node $j\in\{b,w\}$ (representing Bob and Willie) be denoted by $h_{aj,f}$ and $h_{aj,s}$, respectively. Then it can be observed that the path loss gains are the same, i.e., $L_{aj,f}=L_{aj,s}\triangleq L_{aj}$, while the fading gains $|\tilde{h}_{aj,f}|^2$ and $|\tilde{h}_{aj,s}|^2$ are independent normalized gamma RVs\footnote{{Note that the fading coefficients can be considered uncorrelated if the antenna arrays are spaced more than half a wavelength \cite{molisch2012wireless}. Given that the wavelengths are very small at the mmWave bands, e.g., $5$ \si{mm} at $60$ \si{GHz}, it is easy to realize tens of wavelengths of spacing between the arrays and ensure independence between the fading coefficients.}}. We assume quasi-static fading channels, meaning that the fading coefficients remain constant over a block of $n$ channel uses.
We further assume that Alice transmits the desired signal with a publicly-known power $P_a$ while the jamming transmit power $P_J$ of the second array is not known and is changed independently across transmission blocks. In this paper, we assume that $P_J$ is drawn from a uniform distribution over the interval $[0,P_J^{\rm max}]$ while the results can be extended to other distributions using a similar approach. Let $G_{aj,f}$ and $G_{aj,s}$ denote the total directivity gains of the links between Alice's first and second arrays and the node $j\in\{b,w\}$, respectively. Then, the received signals by Bob and Willie at each channel use $i$, for $i=1,2,...,n$, are given by \begin{align} \mathbf{y}_{b}(i)=&\sqrt{P_aG_{ab,f}L_{ab}}~\tilde{h}_{ab,f}\mathbf{x}_a(i)\label{yb}\nonumber\\ &+\sqrt{P_JG_{ab,s}L_{ab}}~\tilde{h}_{ab,s}\mathbf{x}_J(i)+\mathbf{n}_b(i),\\ \mathbf{y}_{w}(i)=&\sqrt{P_aG_{aw,f}L_{aw}}~\tilde{h}_{aw,f}\mathbf{x}_a(i)\label{yw}\nonumber\\ &+\sqrt{P_JG_{aw,s}L_{aw}}~\tilde{h}_{aw,s}\mathbf{x}_J(i)+\mathbf{n}_w(i), \end{align} respectively, where $\mathbf{x}_a$ and $\mathbf{x}_J$ are the desired signal and the jamming signal, respectively, each having a zero-mean Gaussian distribution satisfying $\E[|\mathbf{x}_a(i)|^2]=\E[|\mathbf{x}_J(i)|^2]=1$. Moreover, $\mathbf{n}_b$ and $\mathbf{n}_w$ are zero-mean Gaussian noise components at Bob and Willie's receivers with variances $\sigma^2_b$ and $\sigma^2_w$, respectively. Finally, note that the results derived in this paper can be applied to a similar system model, though with Rayleigh fading channels, by substituting $\nu_{\mathcal{B}}=1$. This is because the normalized gamma distribution simplifies to the exponential distribution with mean one in the special case of $\nu_{\mathcal{B}}=1$. \section{Willie's Detection Error Rate}\label{Sec3} As discussed earlier, Willie's goal is to detect whether Alice is transmitting to Bob or not. 
It is assumed that Willie has perfect knowledge of the channel between himself and Alice, and applies binary hypothesis testing while being unaware of the value of $P_J$. {The null hypothesis $\mathbb{H}_0$ states that Alice did not transmit to Bob, and the alternative hypothesis $\mathbb{H}_1$ specifies that a transmission from Alice to Bob occurred. Willie's decision of hypothesis $\mathbb{H}_1$ when $\mathbb{H}_0$ is true is referred to as a \textit{false alarm} and its probability is denoted by $P_{\rm FA}$. Moreover, Willie's decision in favor of $\mathbb{H}_0$ when $\mathbb{H}_1$ is true is referred to as a \textit{missed detection} with the probability denoted by $P_{\rm MD}$. Then Willie's overall detection error rate is defined as $P_{e,w}\triangleq P_{\rm FA}+P_{\rm MD}$. } We say that positive-rate covert communication is possible if for any $\epsilon>0$ there exists a positive-rate communication between Alice and Bob satisfying $P_{e,w}\geq1-\epsilon$ as the number of channel uses $n\to\infty$. { In this section, we first derive the minimum value of $P_{e,w}$, denoted by $P_{e,w}^*$, under the assumption of complete knowledge of the channels and an optimal radiometer detector at Willie. We also assume that Willie observes an infinitely large number of channel uses. It is worth mentioning that such assumptions correspond to the worst-case scenario for the covertness requirement as they result in the minimum error rate for Willie.
We then derive the closed-form expression of the expected value of $P_{e,w}^*$ from Alice's perspective in Section \ref{Sec3B}.} \subsection{$P_{e,w}$ with the Optimal Detector at Willie}\label{Sec3A} As proved in \cite[Lemma 2]{sobers2017covert} for AWGN channels and also pointed out in \cite[Lemma 1]{shahzad2017covert}, the optimal decision rule that minimizes Willie's detection error is given by \begin{align}\label{radiometer} T_w\triangleq\frac{1}{n}\sum_{i=1}^{n}|\mathbf{y}_{w}(i)|^2 \underset{\mathbb{H}_0}{\overset{\mathbb{H}_1}{\gtrless}}\tau, \end{align} where $\tau$ is Willie's detection threshold for which we obtain the corresponding optimal value/range later in this subsection. Using \eqref{yw} and the definition of $T_w$ in \eqref{radiometer}, we can write $T_w$ under hypothesis $\mathbb{H}_0$, denoted by $T_w^{\mathbb{H}_0}$, as \begin{align}\label{TH0_1} T_w^{\mathbb{H}_0}=\left(P_JG_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2+\sigma^2_w\right)\frac{\chi^2_{2n}}{n}, \end{align} where $\chi^2_{2n}$ denotes a chi-squared RV with $2n$ degrees of freedom. According to the strong law of large numbers, $\frac{\chi^2_{2n}}{n}$ converges to $1$, \textit{almost surely}, as $n\to\infty$. Therefore, using Lebesgue's dominated convergence theorem \cite{browder2012mathematical}, we can replace $\frac{\chi^2_{2n}}{n}$ by $1$ to rewrite $T_w^{\mathbb{H}_0}$ as \begin{align}\label{TH0} T_w^{\mathbb{H}_0}=P_JG_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2+\sigma^2_w. \end{align} Similarly, $T_w$ under hypothesis $\mathbb{H}_1$ can be obtained as \begin{align}\label{TH1} T_w^{\mathbb{H}_1}&\!=\!P_aG_{aw,f}L_{aw}|\tilde{h}_{aw,f}|^2\!+\!P_JG_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2\!+\!\sigma^2_w.
\end{align} { One can observe that if Willie has a complete knowledge about the jamming power $P_J$, he can choose any threshold in the interval $T_w^{\mathbb{H}_0}\leq \tau \leq T_w^{\mathbb{H}_1}$ to achieve $P_{\rm FA}=P_{\rm MD}=0$, and hence $P_{e,w}=0$ (recall that we assumed Willie has full knowledge about the realization of the Alice-Willie channel to constitute a worst-case scenario for the covertness requirement). Alternatively, if $P_J$ is known to Willie with a probability $q>0$, then we cannot satisfy the covertness requirement for the $\epsilon$ values smaller than $q$. In other words, some sort of randomness is required in the system model to enable covert communication. In the following theorem, we characterize the optimal threshold of Willie's detector and its corresponding minimum detection error rate under the assumption that $P_J$ is completely unknown to Willie and changes randomly per transmission block according to a uniform distribution over the interval $[0,P_J^{\rm max}]$.} \begin{theorem}\label{Thm1} The optimal threshold $\tau^*$ for Willie's detector is in the interval \begin{align}\label{tau*} {\tau^*\in[\min(\lambda_1, \lambda_2), \max(\lambda_1, \lambda_2)],} \end{align} and the corresponding minimum detection error rate is \begin{align}\label{pe} P_{e,w}^*=\left\{\begin{matrix} \hspace{-2.9cm}0, &\lambda_1<\lambda_2, \\ 1-\frac{P_aG_{aw,f}|\tilde{h}_{aw,f}|^2}{P_J^{\rm max}G_{aw,s}|\tilde{h}_{aw,s}|^2}, &\lambda_1\geq\lambda_2, \end{matrix}\right. \end{align} where $\lambda_1\triangleq P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2+\sigma^2_w$ and $\lambda_2\triangleq P_aG_{aw,f}L_{aw}|\tilde{h}_{aw,f}|^2+\sigma^2_w$. 
\end{theorem} \begin{proof} Using \eqref{TH0}, the false alarm probability is given by \begin{align}\label{pfa} P_{\rm FA}&=\Pr\left(T_w^{\mathbb{H}_0}>\tau\right)=\Pr\left(P_J>\frac{\tau-\sigma^2_w}{G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}\right)\nonumber\\ &=\left\{\begin{matrix} \hspace{-3.5cm}1, &\tau<\sigma^2_w, \\ 1-\frac{\tau-\sigma^2_w}{P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}, &\sigma^2_w\leq\tau\leq\lambda _1,\\ \hspace{-3.5cm}0,&\tau\geq\lambda_1. \end{matrix}\right. \end{align} Also, by \eqref{TH1} the missed detection probability is given by \begin{align}\label{pmd} P_{\rm MD}&=\Pr\left(T_w^{\mathbb{H}_1}<\tau\right)=\Pr\left(P_J<\frac{\tau-\lambda_2}{G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}\right)\nonumber\\ &=\left\{\begin{matrix} \hspace{-2.8cm}0, &\tau<\lambda_2, \\ \frac{\tau-\lambda_2}{P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}, &\lambda_2\leq\tau\leq\lambda _3,\\ \hspace{-2.8cm}1,&\tau\geq\lambda_3, \end{matrix}\right. \end{align} where $\lambda_3\triangleq \lambda_2+P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2$. Next, we consider the following two cases. \textit{Case I:} When $\lambda_1<\lambda_2$, Willie's receiver can choose any thresholds in the interval $[\lambda_1,\lambda_2]$ to get both $P_{\rm FA}=0$ and $P_{\rm MD}=0$, resulting in zero detection error $P_{e,w}\triangleq P_{\rm FA}+P_{\rm MD}$. 
\textit{Case II:} When $\lambda_1\geq\lambda_2$, we can write the overall detection error rate $P_{e,w}\triangleq P_{\rm FA}+P_{\rm MD}$, using \eqref{pfa} and \eqref{pmd}, as \begin{align}\label{Pew} P_{e,w}=\left\{\begin{matrix} \hspace{-3.45cm}1, &\tau\leq\sigma^2_w, \\ 1-\frac{\tau-\sigma^2_w}{P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}, &\sigma^2_w\leq\tau\leq\lambda _2,\\ \hspace{-0.52cm}1-\frac{P_aG_{aw,f}|\tilde{h}_{aw,f}|^2}{P_J^{\rm max}G_{aw,s}|\tilde{h}_{aw,s}|^2}, &\lambda_2\leq\tau\leq\lambda _1,\\ \hspace{-0.6cm}\frac{\tau-\lambda_2}{P_J^{\rm max}G_{aw,s}L_{aw}|\tilde{h}_{aw,s}|^2}, &\lambda_1\leq\tau\leq\lambda _3,\\ \hspace{-3.45cm}1,&\tau\geq\lambda_3. \end{matrix}\right. \end{align} Therefore, based on \eqref{Pew}, the receiver never chooses $\tau\leq\sigma^2_w$ or $\tau\geq\lambda_3$ since they result in the worst performance $P_{e,w}=1$. Moreover, \eqref{Pew} monotonically decreases, with respect to $\tau$, in the interval $\sigma^2_w\leq\tau\leq\lambda _2$ until it reaches the constant value corresponding to $P_{e,w}$ in the interval $\lambda_2\leq\tau\leq\lambda _1$, and then it monotonically increases in the interval $\lambda_1\leq\tau\leq\lambda _3$ until it reaches $1$. Therefore, the constant value of the detection error rate in the interval $\lambda_2\leq\tau\leq\lambda _1$ is the minimum value of $P_{e,w}$ for $\lambda_1\geq\lambda_2$ that can be attained using any threshold in the interval $[\lambda_2,\lambda_1]$. \end{proof} \noindent\textbf{Remark 1.} Eq. \eqref{pe} shows that for small values of $P_J^{\rm max}$ with $P_J^{\rm max}G_{aw,s}|\tilde{h}_{aw,s}|^2\leq P_aG_{aw,f}|\tilde{h}_{aw,f}|^2$ Willie can attain a zero error rate negating the possibility of achieving a positive-rate covert communication as $n\to\infty$. 
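For a single fixed channel realization, the plateau of thresholds and the minimum error rate in \eqref{pe} can be verified by simulating the large-$n$ test statistic directly; all numeric values below are illustrative assumptions.

```python
import random

# Toy channel realization (all values are illustrative assumptions).
Pa, PJmax, sigma2_w = 1.0, 5.0, 0.1
G_f, G_s, L_aw = 0.05, 1.0, 0.2      # side lobe carries P_a, main lobe carries P_J
hf2, hs2 = 0.8, 1.2                  # fading power gains |h_aw,f|^2, |h_aw,s|^2

lam2 = Pa * G_f * L_aw * hf2 + sigma2_w
lam1 = PJmax * G_s * L_aw * hs2 + sigma2_w
tau = 0.5 * (lam1 + lam2)            # any threshold on the plateau [lam2, lam1]

rng = random.Random(2)
trials = 200_000
fa = md = 0
for _ in range(trials):
    PJ = rng.uniform(0.0, PJmax)                 # jamming power unknown to Willie
    T0 = PJ * G_s * L_aw * hs2 + sigma2_w        # test statistic under H0 (n -> inf)
    T1 = Pa * G_f * L_aw * hf2 + T0              # test statistic under H1
    fa += T0 > tau
    md += T1 < tau
pe_emp = (fa + md) / trials
pe_closed = 1.0 - (Pa * G_f * hf2) / (PJmax * G_s * hs2)   # second branch of (9)
```

The empirical $P_{\rm FA}+P_{\rm MD}$ matches the second branch of \eqref{pe} since $\lambda_1\geq\lambda_2$ for these values.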
Although increasing $P_J^{\rm max}$ beyond $P_aG_{aw,f}|\tilde{h}_{aw,f}|^2/(G_{aw,s}|\tilde{h}_{aw,s}|^2)$ can increase $P^*_{e,w}$ and enable a positive-rate covert communication ($P^*_{e,w}\to1$ as $P_J^{\rm max}\to\infty$), it also degrades the performance of the desired Alice-Bob link as we will see in Section \ref{Sec4}. The superiority of covert mmWave communication over omni-directional RF communication then becomes apparent from the beneficial impact of beamforming. In fact, in the received signal by Willie, $P_J$ is gained by $G_{aw,s}$, which is much larger than the gain $G_{aw,f}$ of $P_a$; this simultaneously increases the jamming signal and decreases the desired signal received by Willie, i.e., significantly degrades the performance of Willie's detector. It will be shown in Section \ref{Sec4} that the opposite situation occurs for the Alice-Bob link, where the desired signal is gained with $G_{ab,f}$, which is much larger than the gain $G_{ab,s}$ of the jamming signal. \begin{figure*}[b] \hrulefill \normalsize \setcounter{equation}{12} \begin{align}\label{Epew} \!\!\!\!\!\!\!\! \E[P^*_{e,w}]\!=\!\!\!\!\!\!\!\sum_{\mathcal{B}\in\{{\rm L,N}\}}\!\!\!\!\!\!P_{aw}(\mathcal{B})\!\sum_{k=1}^{2}b_k^{(a,s)}\!\bigg[1\!+\!S(\nu_{\mathcal{B}},g_k^{(a,s)})\bigg]\!\!\times\!\!\bigg[1\!-\!S(\nu_{\mathcal{B}},g_k^{(a,s)})\!+\!\frac{P_am_{a,f}\nu_{\mathcal{B}}^{\nu_{\mathcal{B}}}}{P_J^{\rm max}g_k^{(a,s)}\eta_{\mathcal{B}}\Gamma(\nu_{\mathcal{B}})}\!\sum_{l=1}^{\nu_{\mathcal{B}}}\!\!\binom{\nu_{\mathcal{B}}}{l}\frac{(-1)^{l}}{l} I(\nu_{\mathcal{B}},l,g_k^{(a,s)})\bigg]\!.\!\! \end{align} \end{figure*} \subsection{$\E[P^*_{e,w}]$ From Alice's Perspective}\label{Sec3B} {Given that Willie is a passive node, we make the realistic assumption that Alice and Bob are unaware of the instantaneous realization of the channel between Alice and Willie.
Therefore, they should rely on the expected value of $P_{e,w}^*$.} Note also that the minimum error rate $P_{e,w}^*$ in \eqref{pe} is independent of the beamforming gain of Willie's receiver as it cancels out in the ratio of $G_{aw,f}/G_{aw,s}$ and also in the comparison between $\lambda_1$ and $\lambda_2$. Furthermore, Alice perfectly knows the gain $m_{a,f}$ of the side lobe of her first array to Willie. However, she has uncertainty about the gain $g^{(a,s)}$ of the main lobe of the second array toward Willie due to the misalignment error; it is either $g_1^{(a,s)}\triangleq M_{a,s}$ with probability $b_1^{(a,s)}\triangleq F_{|\mathcal{E}_{a,s}|}\left({\theta_{a,s}}/{2}\right)$ or $g_2^{(a,s)}\triangleq m_{a,s}$ with probability $b_2^{(a,s)}\triangleq 1-F_{|\mathcal{E}_{a,s}|}\left({\theta_{a,s}}/{2}\right)$. Moreover, Alice and Bob do not know whether the Alice-Willie link is LOS or NLOS; hence, they should take into account two possibilities given the LOS probability $P_{\rm LOS}(d_{aw})$. In the following theorem, we characterize the expected value of $P^*_{e,w}$ from Alice's perspective in a closed form. \begin{theorem}\label{col2} The expected value of $P^*_{e,w}$ from Alice's perspective is characterized as \eqref{Epew}, shown at the bottom of this page, where $P_{aw}({\rm L})\triangleq P_{\rm LOS}(d_{aw})$, $P_{aw}({\rm N})\triangleq1-P_{\rm LOS}(d_{aw})$, $\Gamma(\cdot)$ is the gamma function \cite[Eq. (8.310.1)]{gradshteyn2014table}, and $g_k^{(a,s)}$ and $b_k^{(a,s)}$ are defined above for $k\in\{1,2\}$.
Moreover, the function $S(\nu_{\mathcal{B}},g_k^{(a,s)})$ is defined as \begin{align} \setcounter{equation}{13} \!S(\nu_{\mathcal{B}},g_k^{(a,s)})\!\triangleq\!\sum_{l=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l}\!(-1)^l\left(\!1\!+\!l\frac{\eta_{\mathcal{B}}P_J^{\rm max}g_k^{(a,s)}}{P_am_{a,f}\nu_{\mathcal{B}}}\!\right)^{\!\!-\nu_{\mathcal{B}}}\!\!\!\!\!, \end{align} and $I(\nu_{\mathcal{B}},l,g_k^{(a,s)})$, for $\nu_{\mathcal{B}}=1$ and $\nu_{\mathcal{B}}\geq2$, is defined as \begin{align} I(1,l,g_k^{(a,s)})&\triangleq\ln\left(1+l\frac{P_J^{\rm max}g_k^{(a,s)}}{P_am_{a,f}}\right),\label{I1}\\ I(\nu_{\mathcal{B}}\geq2,l,g_k^{(a,s)})&\triangleq\frac{(\nu_{\mathcal{B}}-2)!}{\nu_{\mathcal{B}}^{\nu_{\mathcal{B}}-1}}\bigg[1\nonumber\\ &\hspace{-0.5cm}-\left(1+l\frac{\eta_{\mathcal{B}}P_J^{\rm max}g_k^{(a,s)}}{P_am_{a,f}\nu_{\mathcal{B}}}\right)^{\!\!-\nu_{\mathcal{B}}+1}\bigg]. \end{align} \end{theorem} \begin{proof} Let $P^{\rm C}_{e,w}$, $\lambda_1^{\rm C}$, and $\lambda_2^{\rm C}$ denote the values of $P^*_{e,w}$, $\lambda_1$, and $\lambda_2$, respectively, conditioned on the blockage instance $\mathcal{B}\in\{{\rm L,N}\}$ and the gain $g^{(a,s)}$ of Alice's second array to Willie. Then using \eqref{pe} we have \begin{align}\label{Epew1} &\hspace{-0.2cm}\E[P^{\rm C}_{e,w}]\nonumber\\ &\hspace{0.1cm}=\!\E_{\lambda_1^{\rm C}<\lambda_2^{\rm C}}[P^{\rm C}_{e,w}]\!\Pr(\lambda^{\rm C}_1\!\!<\!\!\lambda^{\rm C}_2)\!+\!\E_{\lambda^{\rm C}_1\geq\lambda^{\rm C}_2}[P^{\rm C}_{e,w}]\!\Pr(\lambda^{\rm C}_1\!\!\geq\!\!\lambda^{\rm C}_2)\!\nonumber\\ &\hspace{0.1cm}=\Pr(\lambda^{\rm C}_1\!\geq\!\lambda^{\rm C}_2)\!\left(\!1\!-\!\frac{P_am_{a,f}}{P_J^{\rm max}g^{(a,s)}}\E_{\lambda^{\rm C}_1\geq\lambda^{\rm C}_2}\!\left[\!\frac{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}\!\right]\!\right)\!.\! 
\end{align} The closed form of $\Pr(\lambda^{\rm C}_1\geq\lambda^{\rm C}_2)$ is derived as \begin{align}\label{eqcolproof2} \Pr(\lambda^{\rm C}_1\!\geq\!\lambda^{\rm C}_2)&=\Pr\left(|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2\leq\frac{P_J^{\rm max}g^{(a,s)}}{P_am_{a,f}}|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2\right)\nonumber\\ &\hspace{-1.7cm}\stackrel{(a)}{=}\sum_{l=0}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l}\!(-1)^l\E_{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}\!\!\left[\exp\!\left(\!-\eta_{\mathcal{B}}l\frac{P_J^{\rm max}g^{(a,s)}}{P_am_{a,f}}|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2\!\right)\!\right] \nonumber\\ &\hspace{-1.7cm}\stackrel{(b)}{=}\sum_{l=0}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l}\!(-1)^l\left(1+l\frac{\eta_{\mathcal{B}}P_J^{\rm max}g^{(a,s)}}{P_am_{a,f}\nu_{\mathcal{B}}}\right)^{\!\!-\nu_{\mathcal{B}}}, \end{align} where step $(a)$ follows from Alzer's lemma \cite{alzer1997some}, \cite[Lemma 6]{bai2015coverage} for a normalized gamma RV $X\sim{\rm Gamma}(\nu_{\mathcal{B}},1/\nu_{\mathcal{B}})$, which states that $\Pr\left(X<x\right)$ is tightly approximated by $\left[1-\exp(-\eta_{\mathcal{B}} x)\right]^{\nu_{\mathcal{B}}}$ where $\eta_{{\mathcal{B}}}=\nu_{\mathcal{B}}(\nu_{\mathcal{B}}!)^{-1/\nu_{\mathcal{B}}}$, and then applying the binomial theorem assuming $\nu_{\mathcal{B}}$ is an integer \cite{bai2015coverage}, i.e., \begin{align}\label{FXx} F_X(x)=\sum_{l=0}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}{\rm e}^{-l\eta_{\mathcal{B}}x}. \end{align} Moreover, step $(b)$ is derived using the moment generating function (MGF) of a normalized gamma RV $X$, i.e., $\E[{\rm e}^{tX}]=(1-t/\nu_{\mathcal{B}})^{-\nu_{\mathcal{B}}}$ for any $t<\nu_{\mathcal{B}}$. 
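The tightness of the approximation used in step $(a)$ is easy to probe numerically; the sketch below compares the exact Erlang-form CDF of a normalized gamma RV with the Alzer-type expression for an integer shape parameter (the choice $\nu_{\mathcal{B}}=3$ and the evaluation grid are illustrative assumptions).

```python
import math

def gamma_cdf_exact(x, nu):
    """Exact CDF of a normalized gamma RV X ~ Gamma(nu, 1/nu) for
    integer nu (Erlang form)."""
    s = sum((nu * x) ** k / math.factorial(k) for k in range(nu))
    return 1.0 - math.exp(-nu * x) * s

def gamma_cdf_alzer(x, nu):
    """Alzer-type approximation [1 - exp(-eta*x)]^nu with
    eta = nu * (nu!)^(-1/nu)."""
    eta = nu * math.factorial(nu) ** (-1.0 / nu)
    return (1.0 - math.exp(-eta * x)) ** nu

nu = 3
xs = [0.1 * i for i in range(1, 51)]
max_err = max(abs(gamma_cdf_exact(x, nu) - gamma_cdf_alzer(x, nu)) for x in xs)
```

For $\nu_{\mathcal{B}}=1$ the two expressions coincide exactly, since both reduce to $1-{\rm e}^{-x}$.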
Moreover, for the expectation term in \eqref{Epew1} we have \begin{align}\label{eqcolproof3} &\hspace{-0.2cm}\E_{\lambda^{\rm C}_1\geq\lambda^{\rm C}_2}\!\left[\frac{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}\right]\nonumber\\ &\hspace{0.5cm}=\E\!\left[\frac{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}{\Bigg|}|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2\leq\frac{P_J^{\rm max}g^{(a,s)}}{P_am_{a,f}}|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2\right]\nonumber\\ &\hspace{0.5cm}=\int_{0}^{\infty}\frac{f_{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}(y)}{y}\Bigg[\underbrace{\int_{0}^{C_1y}xf_{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}(x)dx}_{V_1}\Bigg]dy, \end{align} where $C_1\triangleq\frac{P_J^{\rm max}g^{(a,s)}}{P_am_{a,f}}$, and $f_{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}(x)$ and $f_{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}(y)$ are the PDFs of the fading coefficients $|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2$ and $|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2$, respectively. Applying integration by parts to $V_1$ and then using Alzer's lemma together with the binomial theorem as in \eqref{FXx} yields \begin{align}\label{eqcolproof4} V_1=&C_1y\sum_{l_1=0}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l_1}\!(-1)^{l_1}{\rm e}^{-l_1\eta_{\mathcal{B}}C_1y}\nonumber\\ &-C_1y-\sum_{l_2=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l_2}\frac{(-1)^{l_2}}{\eta_{\mathcal{B}}l_2}\left[1-{\rm e}^{-l_2\eta_{\mathcal{B}}C_1y}\right].
\end{align} By plugging \eqref{eqcolproof4} into \eqref{eqcolproof3}, using the MGF of the normalized gamma RV $|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2$, and then noting that $f_{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}(y)=\nu_{\mathcal{B}}^{\nu_{\mathcal{B}}}y^{\nu_{\mathcal{B}}-1}{\rm e}^{-\nu_{\mathcal{B}}y}/\Gamma(\nu_{\mathcal{B}})$ we have \begin{align}\label{eqcolproof5} &\!\!\!\E_{\lambda^{\rm C}_1\geq\lambda^{\rm C}_2}\!\left[\!\frac{|\tilde{h}^{(\mathcal{B})}_{aw,f}|^2}{|\tilde{h}^{(\mathcal{B})}_{aw,s}|^2}\!\right]\!=\!C_1\!\sum_{l_1=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l_1}\!(-1)^{l_1}\!\left(\!1\!+\!l_1\frac{\eta_{\mathcal{B}}C_1}{\nu_{\mathcal{B}}}\right)^{\!\!-\nu_{\mathcal{B}}}\nonumber\\ &-\sum_{l_2=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l_2}\frac{(-1)^{l_2}\nu_{\mathcal{B}}^{\nu_{\mathcal{B}}}}{\eta_{\mathcal{B}}l_2\Gamma(\nu_{\mathcal{B}})}\bigg[\int_{0}^{\infty}{y^{\nu_{\mathcal{B}}-2}}{\rm e}^{-\nu_{\mathcal{B}}y}dy\nonumber\\ &\hspace{3cm}-\int_{0}^{\infty}{y^{\nu_{\mathcal{B}}-2}}{\rm e}^{-(l_2\eta_{\mathcal{B}}C_1+\nu_{\mathcal{B}})y}dy\bigg]. \end{align} Now given that the parameter $\nu_{\mathcal{B}}$ of Nakagami-$m$ fading is always greater than or equal to $0.5$ and is assumed to be an integer here, we have $\nu_{\mathcal{B}}\in\mathbb{N}$ where $\mathbb{N}$ stands for the set of natural numbers. For $\nu_{\mathcal{B}}\geq2$, by \cite[Eq. (3.351.3)]{gradshteyn2014table} we have $\int_{0}^{\infty}y^{\nu_{\mathcal{B}}-2}{\rm e}^{-\alpha y}dy=(\nu_{\mathcal{B}}-2)!/\alpha^{\nu_{\mathcal{B}}-1}$ for any $\alpha \in \R^+$. On the other hand, for $\nu_{\mathcal{B}}=1$ using \cite[Eq. (2.325.1)]{gradshteyn2014table} we have $\int_{0}^{\infty}y^{-1}{\rm e}^{-\alpha y}dy={\rm Ei}(-\alpha y)|_0^{\infty}$, {where ${\rm Ei}(\cdot)$ is the exponential integral function defined as \cite[Eq. (8.211.1)]{gradshteyn2014table} for negative arguments}. 
Therefore, following a similar approach to the proof of \cite[Corollary 2]{jamali2019uplink} we can calculate the difference of the two integrals in \eqref{eqcolproof5} as $\lim_{y\to0}[{\rm Ei}(-(l_2\eta_{\mathcal{B}}C_1+\nu_{\mathcal{B}})y)-{\rm Ei}(-\nu_{\mathcal{B}}y)]=\ln([l_2\eta_{\mathcal{B}}C_1+\nu_{\mathcal{B}}]/\nu_{\mathcal{B}})$ which is equal to $\ln(1+l_2C_1)$ for $\nu_{\mathcal{B}}=1$ (note that $\eta_{\mathcal{B}}=1$ for $\nu_{\mathcal{B}}=1$, and ${\rm Ei}(-\infty)=0$). This completes the proof of the theorem given the definition of $I(\nu_{\mathcal{B}},l,g^{(a,s)})$ in \Tref{col2}. \end{proof} \noindent\textbf{Remark 2.} In \Tref{col2}, it is assumed that Willie is not in the main lobe of Alice's first antenna array and hence, receives the desired signal by a side lobe gain $m_{a,f}$. However, if Willie is within the main lobe of the first array, we should include another averaging over the gain $g^{(a,f)}$ of the first array given the beamsteering error, i.e., that gain is either $g_1^{(a,f)}\triangleq M_{a,f}$ with probability $b_1^{(a,f)}\triangleq F_{|\mathcal{E}_{a,f}|}\left({\theta_{a,f}}/{2}\right)$ or $g_2^{(a,f)}\triangleq m_{a,f}$ with probability $b_2^{(a,f)}\triangleq 1-F_{|\mathcal{E}_{a,f}|}\left({\theta_{a,f}}/{2}\right)$. \section{Performance of the Alice-Bob Link}\label{Sec4} In this section, we characterize performance metrics of the Alice-Bob link including its outage probability, maximum effective covert rate (i.e., the rate for which Alice can reliably communicate with Bob while maintaining $\E[P^*_{e,w}]\geq 1-\epsilon$ for any given $\epsilon>0)$, and ergodic capacity. \subsection{Outage Probability}\label{Sec4A} We assume that Alice targets a rate $R_b$ requiring the Alice-Bob link to meet a threshold signal-to-interference-plus-noise ratio (SINR) $\gamma_{\rm th}\triangleq2^{R_b}-1$. 
Then the outage probability $P_{\rm out}^{\rm AB}\triangleq\Pr({\gamma_{ab}}<\gamma_{\rm th})$ in achieving $R_b$ is characterized, in a closed form, in \Tref{thm3}, where the SINR $\gamma_{ab}$ of the Alice-Bob link is given as follows by using \eqref{yb}: \begin{align}\label{gab} \gamma_{ab}=\frac{{P_aG_{ab,f}L_{ab}}|\tilde{h}_{ab,f}|^2}{{P_JG_{ab,s}L_{ab}}|\tilde{h}_{ab,s}|^2+\sigma^2_b}. \end{align} Note that in addition to $|\tilde{h}_{ab,f}|^2$, $|\tilde{h}_{ab,s}|^2$, and $P_J$, the blockage instance $\mathcal{B}\in\{{\rm L},{\rm N}\}$ and the antenna gains can also change randomly across transmission blocks. In particular, while we assume that the jamming signal arrives with the deterministic side lobe gain $m_{a,s}$, there are still uncertainties in the gains of Alice's first array and Bob's receiver (they are pointing their main lobes) due to the beamsteering error. Therefore, the gain $g^{(a,f)}$ of the main lobe of Alice's first array pointed to Bob is either $g_1^{(a,f)}\triangleq M_{a,f}$ with probability $b_1^{(a,f)}\triangleq F_{|\mathcal{E}_{a,f}|}\left({\theta_{a,f}}/{2}\right)$ or $g_2^{(a,f)}\triangleq m_{a,f}$ with probability $b_2^{(a,f)}\triangleq 1-F_{|\mathcal{E}_{a,f}|}\left({\theta_{a,f}}/{2}\right)$. Similarly, the gain $g^{(b)}$ of Bob's receiver is either $g_1^{(b)}\triangleq M_{b}$ with probability $b_1^{(b)}\triangleq F_{|\mathcal{E}_{b}|}\left({\theta_{b}}/{2}\right)$ or $g_2^{(b)}\triangleq m_{b}$ with probability $b_2^{(b)}\triangleq 1-F_{|\mathcal{E}_{b}|}\left({\theta_{b}}/{2}\right)$. Furthermore, in \Tref{thm3} we assume that Willie is not in the main lobe of Alice's first array. However, if Willie is in the Alice-Bob direction, we should include another averaging of the gain of Alice's second array carrying the jammer signal, i.e., instead of a deterministic $m_{a,s}$ we should consider two possibilities $g_k^{(a,s)}$ with probabilities $b_k^{(a,s)}$, $k\in\{1,2\}$, defined in Section III-B. 
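Before stating the closed form, note that every random quantity entering $\gamma_{ab}$ in \eqref{gab} can be sampled directly, which yields a Monte Carlo outage estimate against which \Tref{thm3} can be checked. In the sketch below, the exponential blockage model and all numeric values (gains, alignment probabilities, Nakagami parameters) are illustrative assumptions.

```python
import math
import random

def sample_sinr(rng, Pa=1.0, PJmax=5.0, sigma2_b=1e-9, d_ab=30.0,
                M_af=20.0, m_af=0.5, M_b=10.0, m_b=0.5, m_as=0.5,
                p_align_a=0.9, p_align_b=0.9, nu_L=3, nu_N=2,
                alpha_L=2.0, alpha_N=4.0, beta=0.0071):
    """One draw of the Alice-Bob SINR gamma_ab: blockage state, beamsteering
    alignment, Nakagami fading, and the uniform jamming power are all sampled.
    The blockage model and every numeric value are assumptions."""
    los = rng.random() < math.exp(-beta * d_ab)
    nu = nu_L if los else nu_N
    L_ab = d_ab ** (-(alpha_L if los else alpha_N))
    g_af = M_af if rng.random() < p_align_a else m_af   # Alice's first array
    g_b = M_b if rng.random() < p_align_b else m_b      # Bob's receive gain
    hf2 = rng.gammavariate(nu, 1.0 / nu)                # |h_ab,f|^2
    hs2 = rng.gammavariate(nu, 1.0 / nu)                # |h_ab,s|^2
    PJ = rng.uniform(0.0, PJmax)
    return (Pa * g_af * g_b * L_ab * hf2) / (PJ * m_as * g_b * L_ab * hs2 + sigma2_b)

rng = random.Random(3)
sinrs = [sample_sinr(rng) for _ in range(50_000)]

def outage(gamma_th):
    """Empirical Pr(gamma_ab < gamma_th)."""
    return sum(1 for g in sinrs if g < gamma_th) / len(sinrs)
```

As expected, the empirical outage probability is nondecreasing in the SINR threshold $\gamma_{\rm th}$.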
\begin{theorem}\label{thm3} The outage probability of the Alice-Bob link in achieving the target rate $R_b\triangleq\log_2(1+\gamma_{\rm th})$ is given by \begin{align}\label{pout} &P_{\rm out}^{\rm AB}\!=\!\!\!\!\sum_{\mathcal{B}\in\{{\rm L,N}\}}\!\!\!\!P_{ab}(\mathcal{B})\sum_{k_1=1}^{2}b_{k_1}^{(a,f)}\sum_{k_2=1}^{2}b_{k_2}^{(b)}\bigg[1\!+\!\sum_{l=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}\nonumber\\ &\hspace{1cm}\times\exp\!\bigg(\!-\frac{l\eta_{\mathcal{B}}\gamma_{\rm th}\sigma^2_b}{P_ag_{k_1}^{(a,f)}g^{(b)}_{k_2}L_{ab}^{(\mathcal{B})}}\bigg)V(\nu_{\mathcal{B}},l,g^{(a,f)}_{k_1})\bigg], \end{align} where $P_{ab}({\rm L})\triangleq P_{\rm LOS}(d_{ab})$ and $P_{ab}({\rm N})\triangleq 1-P_{\rm LOS}(d_{ab})$. Also, $V(\nu_{\mathcal{B}},l,g^{(a,f)}_{k_1})$, for $\nu_{\mathcal{B}}=1$ and $\nu_{\mathcal{B}}\geq2$, is defined as \begin{align} &\hspace{-2.8cm}\!V(1,l,g^{(a,f)}_{k_1})\!\triangleq\!\frac{P_ag^{(a,f)}_{k_1}}{P_J^{\rm max}l\gamma_{\rm th}m_{a,s}}\ln\!\bigg(\!1\!+\!\frac{P_J^{\rm max}l\gamma_{\rm th}m_{a,s}}{P_ag^{(a,f)}_{k_1}}\!\bigg),\!\\ V(\nu_{\mathcal{B}}\geq2,l,g^{(a,f)}_{k_1})&\triangleq\frac{\nu_{\mathcal{B}}P_ag^{(a,f)}_{k_1}}{P_J^{\rm max}l\eta_{\mathcal{B}}\gamma_{\rm th}m_{a,s}(\nu_{\mathcal{B}}-1)}\nonumber\\ &\hspace{-0.7cm}\times\bigg[1-\bigg(1+\frac{P_J^{\rm max}l\eta_{\mathcal{B}}\gamma_{\rm th}m_{a,s}}{\nu_{\mathcal{B}}P_ag^{(a,f)}_{k_1}}\bigg)^{\!\!1-\nu_{\mathcal{B}}}\bigg]. 
\end{align} \end{theorem} \begin{proof} Given the SINR of the Alice-Bob link in \eqref{gab}, the outage probability conditioned on the blockage instance $\mathcal{B}$ as well as the antenna gains $g^{(a,f)}$ and $g^{(b)}$ is characterized as follows: \begin{align}\label{proofthm3-1} P_{\rm out,C}^{\rm AB}\triangleq&\Pr(\gamma_{ab}<\gamma_{\rm th}|\mathcal{B},g^{(a,f)},g^{(b)})\nonumber\\ \stackrel{(a)}{=}&\Pr\left(|\tilde{h}_{ab,f}^{(\mathcal{B})}|^2<C_2P_J|\tilde{h}_{ab,s}^{(\mathcal{B})}|^2+C_3\right)\nonumber\\ &\hspace{-1.4cm}\stackrel{(b)}{=}\sum_{l=0}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}{\rm e}^{-l\eta_{\mathcal{B}}C_3}\E_{P_J,|\tilde{h}^{(\mathcal{B})}_{ab,s}|^2}\!\!\left[{\rm e}^{-l\eta_{\mathcal{B}}C_2P_J|\tilde{h}^{(\mathcal{B})}_{ab,s}|^2}\!\right]\nonumber\\ &\hspace{-1.4cm}\stackrel{(c)}{=}\sum_{l=0}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}{\rm e}^{-l\eta_{\mathcal{B}}C_3}\E_{P_J}\!\!\left[\!\left(1+\frac{l\eta_{\mathcal{B}}C_2P_J}{\nu_{\mathcal{B}}}\right)^{\!\!-\nu_{\mathcal{B}}}\right] \nonumber\\ &\hspace{-1.4cm}\stackrel{(d)}{=}\!1\!+\!\sum_{l=1}^{\nu_{\mathcal{B}}}\!\binom{\nu_{\mathcal{B}}}{l}\!\frac{(-1)^{l}{\rm e}^{-l\eta_{\mathcal{B}}C_3}}{P_J^{\rm max}}\!\!\int_{0}^{P_J^{\rm max}}\!\!\!\!\!\!\!\Big(\!1\!+\!\frac{l\eta_{\mathcal{B}}C_2x}{\nu_{\mathcal{B}}}\Big)^{\!\!-\nu_{\mathcal{B}}}\!\!dx,\!\! \end{align} where in step $(a)$ we have defined $C_2\triangleq \gamma_{\rm th}m_{a,s}/(P_ag^{(a,f)})$ and $C_3\triangleq \gamma_{\rm th}\sigma^2_b/(P_ag^{(a,f)}g^{(b)}L_{ab}^{(\mathcal{B})})$. Moreover, step $(b)$ follows by Alzer's lemma together with the binomial theorem as \eqref{FXx}, and step $(c)$ is derived using the MGF of the normalized gamma RV $|\tilde{h}^{(\mathcal{B})}_{ab,s}|^2$. Finally, taking the integral in step $(d)$ and recalling the definition of the function $V(\nu_{\mathcal{B}},l,g^{(a,f)}_{k_1})$ from the statement of the theorem complete the proof. 
\end{proof} \subsection{Maximum Effective Covert Rate}\label{Sec4B} Given any target data rate $R_b$, Alice and Bob can have the effective communication rate $\overline{R}_{a,b}\triangleq R_b(1-P_{\rm out}^{\rm AB})$, where their outage probability $P_{\rm out}^{\rm AB}$ in achieving the target rate $R_b$ is obtained using \Tref{thm3}. The goal here is to determine the optimal value of $P_J^{\rm max}$ that maximizes $\overline{R}_{a,b}$ while also satisfying the covertness requirement, i.e., $\E[P^*_{e,w}]\geq 1-\epsilon$ for any given $\epsilon>0$. We first note that $\E[P^*_{e,w}]$ and $P_{\rm out}^{\rm AB}$ both monotonically increase with $P_J^{\rm max}$ (see also Fig. \ref{Fig1} and Fig. \ref{Fig2} for a visualization). Since $\overline{R}_{a,b}$ is therefore monotonically decreasing with respect to $P_J^{\rm max}$, the maximum effective covert rate $\overline{R}^*_{a,b}$ is attained at the smallest value of $P_J^{\rm max}$, denoted by $P_{J,\rm opt}^{\rm max}$, that still satisfies the covertness requirement $\E[P^*_{e,w}]\geq 1-\epsilon$ for the given $\epsilon>0$. Moreover, since $\E[P^*_{e,w}]$ monotonically increases with $P_J^{\rm max}$, $P_{J,\rm opt}^{\rm max}$ is the solution of the equation $\E[P^*_{e,w}]=1-\epsilon$ for $P_J^{\rm max}$. This observation is summarized in the following proposition. Note, however, that the optimal rate per \Pref{prop4} needs to be evaluated numerically.
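Since $\E[P^*_{e,w}]$ is monotone in $P_J^{\rm max}$, the required numerical evaluation reduces to a simple bisection. The sketch below uses a hypothetical monotone stand-in for $\E[P^*_{e,w}]$ (the true expression follows from the detection-error analysis earlier in the paper); the search routine itself applies unchanged once that stand-in is replaced.

```python
import math

def expected_dep(p_j_max):
    """Hypothetical stand-in for E[P*_{e,w}] as a function of P_J^max.

    The true expression comes from the detection-error analysis earlier
    in the paper; here we only model the key property used in Section
    IV-B: it increases monotonically in P_J^max toward 1.
    """
    return 1.0 - 0.5 * math.exp(-2.0 * p_j_max)

def find_pj_opt(eps, lo=0.0, hi=100.0, tol=1e-10):
    """Bisection for P_{J,opt}^max, the solution of E[P*_{e,w}] = 1 - eps.

    Relies only on monotonicity, so the same routine works verbatim with
    the paper's closed-form expression in place of the stand-in.
    """
    target = 1.0 - eps
    if expected_dep(hi) < target:
        raise ValueError("covertness level not reachable on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_dep(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_j_opt = find_pj_opt(eps=0.05)
# The covertness constraint is met with equality at the solution.
assert abs(expected_dep(p_j_opt) - 0.95) < 1e-8
```

Given $P_{J,\rm opt}^{\rm max}$, one then evaluates $P_{\rm out}^{\rm AB}$ from \eqref{pout} at that value to obtain the effective covert rate.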
\begin{proposition}\label{prop4} Given fixed system and channel parameters, fixed covertness requirement $\epsilon$, and target data rate $R_b$, the maximum effective covert rate achievable in the considered setup is equal to $R_b(1-P_{\rm out}^{*\rm AB})$, denoted by $\overline{R}^*_{a,b}$, {where $P_{\rm out}^{*\rm AB}$ is equal to $P_{\rm out}^{\rm AB}$, specified in \eqref{pout}, evaluated in $P_{J,\rm opt}^{\rm max}$ that is the solution of the equation $\E[P^*_{e,w}]=1-\epsilon$ for $P_J^{\rm max}$.} \end{proposition} \subsection{Ergodic Capacity}\label{Sec4C} In addition to characterizing the maximum effective covert rate $\overline{R}^*_{a,b}$ given a target rate $R_b$, provided in Section \ref{Sec4B}, it is desirable to determine the achievable average data rate of the Alice-Bob link, referred to as its \textit{ergodic capacity}, given fixed values for the parameters involved in the model\footnote{We assume that the set of parameters is chosen such that the covert communication requirement $\E[P^*_{e,w}]\geq 1-\epsilon$ is satisfied for any $\epsilon>0$. Note that $\E[P^*_{e,w}]$ depends only on the values of the design parameters as well as the statistics of the RVs involved and not on their instantaneous realizations.}. The ergodic capacity $\E[R_{a,b}]$ of the Alice-Bob link is obtained while assuming that the threshold/target data rate $R_b$ is adjusted by the channel conditions, i.e., $\gamma_{\rm th}=\gamma_{ab}$, implying that Bob's decoder can always decode the received signal without outage. In fact, given the instantaneous SINR $\gamma_{ab}$, specified in \eqref{gab}, Alice can reliably transmit to Bob with the data rate equal to $\log_2(1+\gamma_{ab})$. Therefore, on average, the data rate $\E[R_{a,b}]\triangleq \E[\log_2(1+\gamma_{ab})]$ is achievable for the Alice-Bob link, where the expectation is over the RVs involved in \eqref{gab}. 
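Before turning to the closed-form characterization, the definition $\E[R_{a,b}]\triangleq \E[\log_2(1+\gamma_{ab})]$ can be estimated directly by Monte Carlo simulation. The sketch below treats one blockage state with fixed antenna gains; the constants `c2p` and `c3p` play the roles of $C'_2$ and $C'_3$ from the proof of Theorem \ref{Thm5}, and all numerical values are illustrative, not the paper's.

```python
import math
import random

def mc_ergodic_capacity(nu, c2p, c3p, pj_max, n=100_000, seed=0):
    """Monte Carlo estimate of E[log2(1 + gamma_ab)] for one blockage
    state and fixed antenna gains.

    X = |h_f|^2 and Y = |h_s|^2 are normalized Gamma(nu) RVs with unit
    mean, P_J ~ Uniform[0, pj_max], and
    gamma_ab = X / (c2p * P_J * Y + c3p).
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gammavariate(nu, 1.0 / nu)  # E[X] = 1
        y = rng.gammavariate(nu, 1.0 / nu)  # E[Y] = 1
        p_j = rng.uniform(0.0, pj_max)
        acc += math.log2(1.0 + x / (c2p * p_j * y + c3p))
    return acc / n

low_jam = mc_ergodic_capacity(nu=3, c2p=0.1, c3p=0.01, pj_max=0.1)
high_jam = mc_ergodic_capacity(nu=3, c2p=0.1, c3p=0.01, pj_max=10.0)
# A larger jamming budget degrades the Alice-Bob ergodic capacity.
assert high_jam < low_jam
```

Such an estimate also serves as a numerical cross-check for the analytical expression derived next.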
In the following theorem, we characterize $\E[R_{a,b}]$ in a tractable form that involves only one-dimensional integrals over one of the fading coefficients. \begin{theorem}\label{Thm5} The ergodic capacity $\E[R_{a,b}]$ of the Alice-Bob link is given by \begin{align}\label{eq4C1} \!\E[R_{a,b}]=&\frac{P_a}{m_{a,s}P_J^{\rm max}\ln 2}\sum_{\mathcal{B}\in\{{\rm L,N}\}}\!\!\!P_{ab}(\mathcal{B})\sum_{k_1=1}^{2}\!b_{k_1}^{(a,f)}g_{k_1}^{(a,f)}\nonumber\\ &\times\sum_{k_2=1}^{2}\!b_{k_2}^{(b)}\sum_{l=1}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}\frac{(-1)^{l}}{l\eta_{\mathcal{B}}}\left[\mathcal{J}_1-\mathcal{J}_2-\mathcal{J}_3\right], \end{align} where $\mathcal{J}_1$, $\mathcal{J}_2$, and $\mathcal{J}_3$ are defined in the form of one-dimensional integrals as follows: \begin{align} \mathcal{J}_1&\triangleq\int_{0}^{\infty}\frac{1}{y}\Bigg[{\rm eEi}\Bigg(\frac{l\eta_{\mathcal{B}}}{P_ag_{k_1}^{(a,f)}}\Bigg[m_{a,s}P_J^{\rm max}y\nonumber\\ &\hspace{3.5cm}+\frac{\sigma^2_b}{g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}}\Bigg]\Bigg)\Bigg]f_Y(y)dy,\label{J1}\\ \mathcal{J}_2&\triangleq\left[{\rm eEi}\!\left(\frac{l\eta_{\mathcal{B}}\sigma^2_b}{P_ag_{k_1}^{(a,f)}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}}\right)\right]\int_{0}^{\infty}\frac{1}{y}f_Y(y)dy,\label{J2}\\ \mathcal{J}_3&\triangleq\int_{0}^{\infty}\frac{1}{y}\!\left[\ln\!\left(\!1\!+\!\frac{m_{a,s}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}P_J^{\rm max}}{\sigma^2_b}y\right)\right]\!f_Y(y)dy\label{J3}, \end{align} where ${\rm eEi}(x)\triangleq {\rm e}^x{\rm Ei}(-x)$, and $f_Y(y)$ is the PDF of a normalized gamma RV as in \eqref{pdf_gamma}. 
\end{theorem} \begin{proof} Based on the system model considered in this paper and our earlier discussions, we have \begin{align}\label{eqthm5p1} \!\E[R_{a,b}]\!=\!\!\!\!\!\!\sum_{\mathcal{B}\in\{{\rm L,N}\}}\!\!\!\!\!\!P_{ab}(\mathcal{B})\!\!\sum_{k_1=1}^{2}\!b_{k_1}^{(a,f)}\!\!\sum_{k_2=1}^{2}\!b_{k_2}^{(b)}\E[R_{a,b}|\mathcal{B},g^{(a,f)}\!,g^{(b)}], \end{align} where $\E[R_{a,b}|\mathcal{B},g^{(a,f)}\!,g^{(b)}]$ is the ergodic capacity conditioned on the blockage instance $\mathcal{B}$, and the antenna gains $g^{(a,f)}$ and $g^{(b)}$. Given the definition of the ergodic capacity $\E[R_{a,b}]\triangleq \E[\log_2(1+\gamma_{ab})]$ and the expression of the SINR $\gamma_{ab}$ in \eqref{gab}, we have \begin{align}\label{eqthm5p2} \!\E[R_{a,b}|\mathcal{B},g^{(a,f)}\!,g^{(b)}]\!=\!\E_{Y,P_J}\!\!\Bigg[{\int_{0}^{\infty}\log_2(1+C'_1x)f_X(x)dx}\Bigg], \end{align} where $X\triangleq |\tilde{h}_{ab,f}^{(\mathcal{B})}|$ and $Y\triangleq |\tilde{h}_{ab,s}^{(\mathcal{B})}|$ represent the RVs associated with the involved fading coefficients with PDFs $f_X(x)$ and $f_Y(y)$, respectively. Moreover, $C'_1\triangleq 1/(C'_2P_JY+C'_3)$ with $C'_2\triangleq m_{a,s}/(P_ag^{(a,f)})$ and $C'_3\triangleq \sigma^2_b/(P_ag^{(a,f)}g^{(b)}L_{ab}^{(\mathcal{B})})$. Observe that for a given RV $Z$ with the PDF $f_Z(z)$ and CDF $F_Z(z)$ we have the following part-by-part integration equality \begin{align}\label{eqthm5p3} &\int_{a}^{b}\log_2(1+cz)f_Z(z)dz=\frac{1}{\ln 2}\Bigg[c\int_{a}^{b}\frac{1-F_Z(z)}{1+cz}dz\nonumber\\ &+(1-F_Z(a))\ln(1+ca)-(1-F_Z(b))\ln(1+cb)\Bigg], \end{align} with $c$ being a constant. 
Then the integral involved in \eqref{eqthm5p2} is computed as follows: \begin{align}\label{eqthm5p4} &\int_{0}^{\infty}\log_2(1+C'_1x)f_X(x)dx\stackrel{(a)}{=}\frac{C'_1}{\ln 2}\int_{0}^{\infty}\frac{1-F_X(x)}{1+C'_1x}dx \nonumber\\ &\hspace{1cm}\stackrel{(b)}{=}\frac{1}{\ln 2}\sum_{l=1}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}{\rm e}^{l\eta_{\mathcal{B}}/C'_1}{\rm Ei}({-l\eta_{\mathcal{B}}/C'_1}), \end{align} where step $(a)$ is by using \eqref{eqthm5p3}, and noting that $(1-F_X(0))\ln(1+C'_1\times0)=0 $ and \begin{align} \lim_{x\to\infty}(1-F_X(x))\ln(1+C'_1x)=0, \end{align} since $1-F_X(x)=-\sum_{l=1}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}{\rm e}^{-l\eta_{\mathcal{B}}x}$ decays exponentially while $\ln(1+C'_1x)$ grows logarithmically with $x$. Moreover, step $(b)$ is derived by first using Alzer's lemma together with the binomial theorem as \eqref{FXx}, and then applying \cite[Eq. (3.352.4)]{gradshteyn2014table}. Now by substituting \eqref{eqthm5p4} into \eqref{eqthm5p2}, we have for the conditional ergodic capacity as \begin{align}\label{eqthm5p5} &\E[R_{a,b}|\mathcal{B},g^{(a,f)}\!,g^{(b)}]=\frac{1}{P_J^{\rm max}\ln 2}\sum_{l=1}^{\nu_{\mathcal{B}}}\binom{\nu_{\mathcal{B}}}{l}(-1)^{l}\E_{Y}\!\!\Bigg[\nonumber\\ &\underbrace{{\int_{0}^{P_J^{\rm max}}\!\!\!\!\!\!\exp\!\left(l\eta_{\mathcal{B}}(C'_2Yt\!+\!C'_3)\right){\rm Ei}\!\left(-l\eta_{\mathcal{B}}(C'_2Yt\!+\!C'_3)\right)dt}}_{I_1}\Bigg]\!.\! 
\end{align} The integral term $I_1$ can be computed in a closed form as \begin{align}\label{eqthm5p6} I_1\stackrel{(a)}{=}&\frac{1}{C'_2Y}\int_{C'_3}^{C'_2YP_J^{\rm max}+C'_3}\exp\!\left(l\eta_{\mathcal{B}}z\right){\rm Ei}\!\left(-l\eta_{\mathcal{B}}z\right)dz\nonumber\\ \stackrel{(b)}{=}&\frac{1}{l\eta_{\mathcal{B}}C'_2Y}\bigg[{\rm eEi}\!\left(l\eta_{\mathcal{B}}(C'_2YP_J^{\rm max}+C'_3)\right)\nonumber\\ &-{\rm eEi}\!\left(l\eta_{\mathcal{B}}C'_3\right)-\ln\!\left(1+C'_2YP_J^{\rm max}/C'_3\right)\bigg], \end{align} where step $(a)$ is by defining $z\triangleq C'_2Yt\!+\!C'_3$, and step $(b)$ is obtained using \Lref{lemma_app1} in Appendix \ref{App_A} and defining ${\rm eEi}(x)\triangleq {\rm e}^x{\rm Ei}(-x)$. Finally, taking the expectation of $I_1$ in \eqref{eqthm5p6} over $Y$ and then plugging \eqref{eqthm5p5} back into \eqref{eqthm5p1} complete the proof. \end{proof} To the best of our knowledge, the one-dimensional integrals in \eqref{J1}-\eqref{J3} cannot be computed in closed forms for all values of $\nu_{\mathcal{B}}$. In particular, obtaining closed-form expressions for the special case $\nu_{\mathcal{B}}=1$, which corresponds to Rayleigh fading channels, is not straightforward mainly due to the fractional term $1/y$ in the integrands. On the other hand, as delineated in the following proposition, deriving the closed forms of \eqref{J1}-\eqref{J3} for the special case $\nu_{\mathcal{B}}=2$ is straightforward. 
\begin{proposition}\label{Prop6} For $\nu_{\mathcal{B}}=2$, the closed-form expressions for $\mathcal{J}_1$, $\mathcal{J}_2$, and $\mathcal{J}_3$, defined in \Tref{Thm5}, are as follows: \begin{align} \mathcal{J}_1&=\frac{4P_ag_{k_1}^{(a,f)}}{l\eta_{\mathcal{B}}m_{a,s}P_J^{\rm max}\!-\!2P_ag_{k_1}^{(a,f)}}\Bigg[{\rm eEi}\left(\!\frac{2\sigma^2_b}{g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}m_{a,s}P_J^{\rm max}}\!\right)\nonumber\\ &\hspace{2cm}-{\rm eEi}\left(\frac{l\eta_{\mathcal{B}}\sigma^2_b}{P_ag_{k_1}^{(a,f)}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}}\right)\Bigg],\\ \mathcal{J}_2&=2~\!{\rm eEi}\!\left(\frac{l\eta_{\mathcal{B}}\sigma^2_b}{P_ag_{k_1}^{(a,f)}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}}\right),\label{J2prop}\\ \mathcal{J}_3&=-2~\!{\rm eEi}\!\left(\frac{2\sigma^2_b}{m_{a,s}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}P_J^{\rm max}}\right). \end{align} \end{proposition} \begin{proof} The proof follows by substituting $f_Y(y)=4y{\rm e}^{-2y}$. Then the closed forms for $\mathcal{J}_1$ and $\mathcal{J}_3$ are derived by applying \cite[Corollary 1]{jamali2019uplink} and \cite[Eq. (4.337.2)]{gradshteyn2014table}, respectively. \end{proof} It is worth mentioning at the end that rather complicated closed forms can also be obtained for the case of $\nu_{\mathcal{B}}>2$ by employing, e.g., \cite[Eq. (06.35.21.0016.01)]{wolfram}, \cite[Eq. (3.351.3)]{gradshteyn2014table}, and \cite[Eq. (4.358.1)]{gradshteyn2014table} to solve the integrals involved in \eqref{J1}, \eqref{J2}, and \eqref{J3}, respectively. \section{Practical Scenarios, Discussions, and Future Directions}\label{Sec5} In this section, we first describe the localization issue in covert mmWave communications and propose a potential design approach that can be incorporated in the context of the system model in this paper. We then establish how the performance metrics of the proposed scheme can be characterized using the earlier results in this paper. Finally, we highlight several interesting future research directions. 
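Before proceeding, the $\nu_{\mathcal{B}}=2$ closed forms in \Pref{Prop6} above can be sanity-checked numerically. The sketch below verifies the $\mathcal{J}_3$ identity (and the constant weight in $\mathcal{J}_2$) using only a series expansion of ${\rm Ei}(\cdot)$ and Simpson quadrature; the constant `c` is an illustrative stand-in for the physical-parameter ratio $m_{a,s}g_{k_2}^{(b)}L_{ab}^{(\mathcal{B})}P_J^{\rm max}/\sigma^2_b$.

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei_neg(t):
    """Ei(-t) for t > 0 via the convergent series
    Ei(-t) = gamma + ln(t) + sum_{n>=1} (-t)^n / (n * n!)."""
    s, term = 0.0, 1.0
    for n in range(1, 80):
        term *= -t / n           # term = (-t)^n / n!
        s += term / n
    return EULER_GAMMA + math.log(t) + s

def eEi(t):
    """eEi(t) = e^t Ei(-t), as defined in Theorem 5."""
    return math.exp(t) * Ei_neg(t)

def simpson(f, a, b, n=4000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

# Illustrative constant standing in for the ratio
# m_{a,s} g^{(b)} L_ab^{(B)} P_J^max / sigma_b^2 appearing in J_3.
c = 1.7

# For nu_B = 2, f_Y(y) = 4 y e^{-2y}, so the 1/y factor in J_3 cancels:
# J_3 = int_0^inf 4 e^{-2y} ln(1 + c y) dy, claimed equal to -2 eEi(2/c).
j3_numeric = simpson(lambda y: 4.0 * math.exp(-2.0 * y) * math.log1p(c * y),
                     0.0, 40.0)
assert abs(j3_numeric - (-2.0 * eEi(2.0 / c))) < 1e-6

# The weight multiplying eEi(.) in J_2 is int_0^inf (1/y) f_Y(y) dy = 2.
j2_weight = simpson(lambda y: 4.0 * math.exp(-2.0 * y), 0.0, 40.0)
assert abs(j2_weight - 2.0) < 1e-6
```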
\subsection{Localization Issue}\label{Sec5A} One of the important aspects in the design of a typical mmWave communication network is the localization of the nodes. This is mainly due to the highly directive beams used in mmWave communication systems. In the context of the system model in this paper, while it is important in the design of the system to know both Bob's and Willie's locations, obtaining the information about Willie's location is bound to be much more challenging. In fact, the legitimate parties Alice and Bob can apply sophisticated beam training approaches to establish a directional link. However, since Willie is a passive node, it is more difficult for Alice to obtain precise information about Willie's location. Regarding Willie's location, both Alice's distance to Willie and the spatial direction between them are important to set up the covert mmWave communication system. However, uncertainty about the distance between Alice and Willie is less challenging if the spatial direction between them is known or if the direction issue is properly addressed in the system design. In fact, all of our earlier derivations are in terms of the link length $d_{aw}$ between Alice and Willie. Therefore, the performance metrics change with respect to the distance between Alice and Willie, and one needs to adopt a new set of values for the involved parameters to ensure the covertness requirement while, at the same time, maximizing the effective rate between Alice and Bob. Note that the uncertainty about Alice's distance to Willie also exists in conventional RF-based covert communication systems incorporating omni-directional antennas and is not particular to the case of covert mmWave communication. On the other hand, the uncertainty about the spatial direction between Alice and Willie is more challenging as it directly impacts the design architecture for Alice's transmitter.
Throughout the paper we assumed that Willie's direction is known to Alice such that the main lobe of Alice's second antenna array, carrying the jamming signal, is pointed toward Willie. However, in the case of uncertainty about Willie's direction, this may not be possible; as a result, the jamming signal may arrive at Willie with the much lower gain of a side lobe instead of the main lobe of the second array. This deteriorates the system performance by improving Willie's detection performance, which in turn degrades the Alice-Bob link performance by, e.g., requiring Alice to employ lower signal powers $P_a$ or larger jammer powers $P_J^{\rm max}$ to satisfy the covertness requirement. One immediate solution to address the aforementioned issue on the uncertainty about Willie's direction is to employ an antenna array with a wide (main lobe) beamwidth to transmit the jamming signal. However, it is very difficult to cover the whole space (except the Alice-Bob direction) using a single wide main lobe \cite{zhang20165g}. Therefore, Alice may prefer to employ several wide-beam antenna arrays to transmit the jamming signal. In this case, the main lobe of each array covers a certain spatial direction such that the union of the main lobes covers the whole space except the Alice-Bob direction. As a result, from the design perspective, we no longer need to know Willie's direction as the whole space is covered by multiple antenna arrays leaving a null space (or negligible side lobes) toward Bob's direction. However, from the analysis point of view, it is not easy to derive tractable forms for the system performance metrics as discussed next. In fact, assuming the sectored-pattern antenna model, as in Section \ref{Sec2A}, each antenna array has a main lobe and a side lobe. Therefore, the jammer signal arrives at Willie via the main lobe of the antenna array covering the Alice-Willie direction, in addition to several side lobes, one from each of the other antenna arrays.
Similarly, the jammer signal arrives at Bob via the side lobes of all arrays other than the first antenna array. Given the spatial distance between the antenna arrays, the channel between each side lobe and the receiver, either Willie or Bob, has to be assigned an independent fading coefficient. Therefore, the signals received by Willie and Bob involve several independent Nakagami fading coefficients, making it difficult to derive tractable forms for the performance metrics. In the next subsection, we elucidate how the results in the paper can be applied to approximate the performance metrics with respect to this multi-array system model that resolves the issue of uncertainty about the Alice-Willie direction. \noindent\textbf{Remark 3.} The system model considered in this paper and the subsequent analytical results are directly applicable to the case where an external jammer node with a single or multiple antenna arrays is used to transmit the jamming signal instead of (an) extra antenna array(s) in Alice. \subsection{Approximate Performance of the Multi-Array Transmitter}\label{Sec5B} Consider the same system model as described in Section \ref{Sec2B}, except that Alice is equipped with $N_J$ wide-beam arrays, instead of one, each carrying the same jamming signal and together covering the whole space except the Alice-Bob direction. The main lobe gain of the antenna arrays carrying the jamming signals is assumed to be the same, denoted by $M_{a,s}$. Now, we make the following two assumptions to approximate the system performance using tractable forms. \textit{1- Zero side lobe gains from jamming arrays to Willie:} Note that, regardless of Willie's location, the main portion of the jamming signal reaches Willie with the main-lobe gain $M_{a,s}$ from the one of the $N_J$ arrays, denoted the $j_1$-th array, that covers Willie's spatial direction.
Then the received jamming signal by Willie at the $i$-th channel use is expressed as \begin{align}\label{ywj} \mathbf{y}_{w,J}(i)=&\sqrt{P_JL_{aw}g^{(w)}}~\mathbf{x}_J(i)\Bigg[\tilde{h}_{aw,j_1}\sqrt{M_{a,s}}\nonumber\\ &\hspace{2.8cm}+\sum_{\substack{ j'=1\\ j'\neq j_1}}^{N_J}\tilde{h}_{aw,j'}\sqrt{m_{a,j'}}\Bigg], \end{align} where $g^{(w)}$ is Willie's beamforming gain, $m_{a,j'}$ is the side lobe gain of the $j'$-th array, and $\tilde{h}_{aw,j'}$ is the fading coefficient from Alice's $j'$-th jamming array to Willie. Assuming that the main lobe gain is much larger than the side lobe gains, we can expect the summation term inside the bracket to have negligible contribution compared to the term $\tilde{h}_{aw,j_1}\sqrt{M_{a,s}}$ for \textit{typical} realizations of channel fading coefficients. Note that $N_J$ is relatively small given the wide beamwidths used. Additionally, the fading coefficients $\tilde{h}_{aw,j'}$'s have different phases; hence, the summation term does not blow up with $N_J$. Therefore, we can approximate $\mathbf{y}_{w,J}(i)$ as \begin{align}\label{ywj_approx} \mathbf{y}_{w,J}(i)\approx\sqrt{P_JL_{aw}M_{a,s}g^{(w)}}~\tilde{h}_{aw,j_1}\mathbf{x}_J(i). \end{align} \textit{2- A single side lobe from jamming arrays to Bob:} As in \eqref{ywj}, the received jamming signal by Bob at the $i$-th channel use is expressed as \begin{align}\label{ybj} \mathbf{y}_{b,J}(i)&=\sqrt{P_JL_{ab}g^{(b)}}~\mathbf{x}_J(i)\sum_{j''=1}^{N_J}\tilde{h}_{ab,j''}\sqrt{m_{a,j''}}\nonumber\\ &\stackrel{(a)}{\approx}\sqrt{P_JL_{ab}m_{a,j_2}g^{(b)}}~\tilde{h}_{ab,j_2}\mathbf{x}_J(i), \end{align} where $\tilde{h}_{ab,j''}$ is the fading coefficient from Alice's $j''$-th jamming array to Bob. Moreover, step $(a)$ in \eqref{ybj} is obtained by considering the largest side lobe gain, denoted as the $j_2$-th array, as the dominant term of the summation.
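Assumption 1 above can be spot-checked by simulation: drawing complex coefficients with Nakagami-$m$ magnitudes and independent uniform phases, the aggregate side-lobe term in \eqref{ywj} carries only a small fraction of the main-lobe power. The gains and $N_J$ in the sketch below are illustrative choices, not values fixed by the paper.

```python
import cmath
import math
import random

def nakagami_coeff(rng, nu=3.0):
    """Complex fading coefficient: Nakagami-m magnitude with unit mean
    power and an independent uniform phase."""
    mag = math.sqrt(rng.gammavariate(nu, 1.0 / nu))
    return cmath.rect(mag, rng.uniform(0.0, 2.0 * math.pi))

def relative_sidelobe_power(n_j=6, main_dB=15.0, side_dB=-5.0,
                            trials=20_000, seed=1):
    """Average of |sum_{j' != j_1} h_{j'} sqrt(m)|^2 / |h_{j_1} sqrt(M)|^2,
    i.e., the power of the aggregate side-lobe term relative to the
    main-lobe term. Gains and n_j are illustrative, not the paper's."""
    rng = random.Random(seed)
    big_m = 10.0 ** (main_dB / 10.0)
    small_m = 10.0 ** (side_dB / 10.0)
    ratio = 0.0
    for _ in range(trials):
        main = nakagami_coeff(rng) * math.sqrt(big_m)
        side = sum(nakagami_coeff(rng) * math.sqrt(small_m)
                   for _ in range(n_j - 1))
        ratio += abs(side) ** 2 / abs(main) ** 2
    return ratio / trials

# With a 20 dB main-to-side gain gap, the aggregate side-lobe power is a
# small fraction of the main-lobe power, supporting the approximation.
assert relative_sidelobe_power() < 0.2
```

The random phases are what keep the side-lobe sum from growing with $N_J$, mirroring the argument in the text.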
Now, given the above two assumptions, it is easy to observe that the performance of the new multi-array system model can be approximated according to our earlier results on the dual-array model. The only difference is that we no longer need to take the average over the gain of the jamming array to Willie to compute $\E[P^*_{e,w}]$ since that gain is deterministically equal to $M_{a,s}$ given the multi-array architecture. All the other derivations remain the same. \subsection{Future Directions}\label{Sec5C} { Given the advantages of covert communication over mmWave bands compared to RF systems, and the fact that little work has been done in this area, significant effort is needed to fill the gap on various aspects of covert mmWave communication. In the following, we discuss some possibilities for future research in this direction. \subsubsection{Uncertainty about Willie's Location}\label{Sec5C1} In Section \ref{Sec5A}, we explained the importance of obtaining Willie's location information. However, it is desirable to explore how Alice's uncertainty about such information impacts the system performance, e.g., the effective covert rate (see, e.g., \cite{forouzesh2020covert2}). Also, exploring potential approaches that enable obtaining partial information about Willie's location and then characterizing their performance is a viable research direction. \subsubsection{Precise Performance Characterization of the Multi-Array System Model}\label{Sec5C2} In Section \ref{Sec5B}, we highlighted how the performance of the multi-array system model proposed in Section \ref{Sec5A} can, \textit{approximately}, be characterized using the analytical results in this paper. One might be able to provide a more rigorous analysis by eliminating the two assumptions made in Section \ref{Sec5B}. To this end, some analytical tools, such as \cite{karagiannidis2001distribution}, to characterize the weighted sum of the involved RVs in \eqref{ywj} and \eqref{ybj} are potentially useful.
\subsubsection{Distribution of the Jamming Signal Power $P_J$}\label{Sec5C3} In this paper, we considered a uniform distribution for $P_J$ in the interval $[0,P_J^{\rm max}]$. Characterizing the performance metrics for the considered covert mmWave communication system model under other statistical distributions for $P_J$ is a straightforward yet important follow-up research direction that can help determine the optimal distribution(s) for the jamming signal power. {\subsubsection{Covert Communication under Partial Knowledge about $P_J$} In practice, we may be interested in satisfying the covertness requirement for a certain $\epsilon$ and not for all $\epsilon>0$. In that case, one might be able to slightly relax the covertness requirement in the hope of improving the quality of the Alice-Bob transmission. Characterizing performance-covertness trade-offs and developing efficient schemes to allow Bob to achieve some knowledge about the jamming signal (while minimizing Willie's knowledge) are important directions for future research.} \subsubsection{Covert MmWave Communication under other Potential System Models} In this paper, we have incorporated jamming signals with random realizations per transmission block to enable positive-rate covert mmWave communication in the limit of large blocklengths. One can explore covert mmWave communication under other potential system models, such as uncertainty about the channel gains, noise power, transmission blocks, etc., by utilizing results already established in the literature (see, e.g., \cite{ lee2015achieving,goeckel2016covert,bash2016covert,hu2018covert,wang2019covert,shahzad2017covert,shahzad2018achieving,shahzad2019covert,hu2019covert}). \subsubsection{Multiple Alice/Bob/Willie} Throughout this paper, we considered the conventional setting of covert communication, which consists of a single legitimate transmitter Alice, a single legitimate receiver Bob, and a single warden Willie.
Extending the results of the paper to more realistic scenarios, consisting of multiple legitimate transmitters and receivers and multiple wardens (see, e.g., \cite{forouzesh2020covert,chen2021uav,arghavani2021game}), is an important direction for future research. } \section{Numerical Results}\label{Sec6} In this section, we provide numerical results for various performance metrics delineated in \Tref{col2}, \Tref{thm3}, \Pref{prop4}, and \Tref{Thm5}. The parameters listed in Table \ref{T2} are considered in our numerical analysis unless explicitly mentioned. It is assumed that the beamsteering error follows a Gaussian distribution with mean zero and variance $\Delta^2$; hence, $F_{|\mathcal{E}|}(x)={\rm erf}(x/(\Delta\sqrt{2}))$ where ${\rm erf}({\cdot})$ denotes the error function \cite{di2015stochastic}. Moreover, the blockage model $P_{\rm LOS}(d_{ij})={\rm e}^{-d_{ij}/200}$ \cite{andrews2017modeling} is used throughout the numerical analysis. \begin{table}[t] \centering \caption{Parameters used for the numerical analysis.} \label{T2} \begin{tabular}{M{1.95in}||M{1.1in}} Coefficients & Values\\ \hline \hline\vspace{0.05cm} Link lengths $(d_{aw},d_{ab})$ & $(25,25)$ \si{m}\\\hline\vspace{0.05cm} Path loss exponents $(\alpha_{\rm L},\alpha_{\rm N})$ & $(2,4)$\\\hline\vspace{0.1cm} Path loss intercepts $(C_{\rm L},C_{\rm N})$ & $(10^{-7},10^{-7})$ \\\hline \vspace{0.05cm} Main lobe gains $(M_{a,f},M_{a,s},M_{b})$&$(15,15,15)$ \si{dB}\\\hline\vspace{0.05cm} Side lobe gains $(m_{a,f},m_{a,s},m_{b})$&$(-5,-5,-5)$ \si{dB}\\\hline\vspace{0.05cm} Transmit power of Alice's first array, $P_a$ & $20$ \si{dBm}\\\hline\vspace{0.05cm} Noise power $(\sigma^2_w,\sigma^2_b)$ & $(-74,-74)$ \si{dBm}\\\hline\vspace{0.05cm} Nakagami fading parameters $(\nu_{\rm L},\nu_{\rm N})$ & $(3,2)$\\\hline\vspace{0.05cm} Array beamwidths $(\theta_{a,f},\theta_{a,s},\theta_{b})$ & $(30^{\rm o},30^{\rm o},30^{\rm o})$\\\hline\vspace{0.05cm} Beamsteering error parameter, $\Delta$ & $5^{\rm o}$ 
\end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=3.6in]{Figure1.eps} \caption{The expected value $\E[P^*_{e,w}]$ of Willie's detection error rate for a benchmark scenario with $M_{a,s}=15$ \si{dB}, $m_{a,f}=-5$ \si{dB}, $P_a=20$ \si{dBm}, $\theta_{a,s}=30^{\rm o}$, and $\Delta=5^{\rm o}$. The effect of different parameters is explored by considering the values $M_{a,s}=20$ \si{dB}, $m_{a,f}=0$ \si{dB}, $P_a=5$ \si{dBm}, $\theta_{a,s}=15^{\rm o}$, and $\Delta=15^{\rm o}$ while keeping the rest of the parameters exactly the same as the benchmark scenario.}\label{Fig1} \vspace{-0.15in} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.6in]{Figure2.eps} \caption{The outage probability of the Alice-Bob link for various values of the transmit power $P_a$, threshold rate $R_b$, and noise variance $\sigma^2_b$. The elements of the triples in the legend are $P_a$ in \si{dBm}, $R_b$, and $\sigma^2_b$ in \si{dBm}, respectively.}\label{Fig2} \vspace{-0.1in} \end{figure} Fig. \ref{Fig1} shows the expected value $\E[P^*_{e,w}]$ of Willie's detection error rate for a benchmark scenario, corresponding to the parameters listed in Table \ref{T2}, as a function of $P_J^{\rm max}$. Moreover, the impact of some relevant parameters, i.e., $M_{a,s}$, $m_{a,f}$, $P_a$, $\theta_{a,s}$, and $\Delta$, is evaluated by changing each of these parameters while keeping the rest of the parameters exactly the same as in the benchmark scenario. As expected, $\E[P^*_{e,w}]$ monotonically increases with $P_J^{\rm max}$ {since a larger jamming signal will degrade Willie's performance to a greater extent.} Also, reducing $P_a$ degrades Willie's performance since the power level of the desired signal is reduced, making the transmission more difficult for Willie to detect. Moreover, increasing $M_{a,s}$ deteriorates Willie's performance by exposing his receiver to a more intense jamming signal.
On the other hand, decreasing $\theta_{a,s}$ or increasing $\Delta$ decreases $\E[P^*_{e,w}]$ since they reduce the probability of Willie receiving the jamming signal with the main lobe of Alice's second array. Finally, increasing $m_{a,f}$ also improves Willie's performance by revealing a higher level of the desired signal, gained by $m_{a,f}$, to Willie. The outage probability of the Alice-Bob link is illustrated in Fig. \ref{Fig2} for various values of the transmit power $P_a$, threshold rate $R_b$, and noise variance $\sigma^2_b$. The rest of the parameters are the same as those in Table \ref{T2}. As expected, $P_{\rm out}^{\rm AB}$ monotonically increases with $P_J^{\rm max}$. {Moreover, the outage probability increases with the threshold rate $R_b$ since it is harder to guarantee a larger target rate (without outage). Additionally, the reliability of Alice-to-Bob transmission degrades as the noise variance $\sigma^2_b$ increases, while increasing $P_a$ improves the performance by exposing a higher level of the desired signal to Bob.} \begin{table}[t] \centering \caption{Covert rates for $\epsilon=0.05$ and various threshold rates. } \label{T3} \begin{tabular}{M{0.7cm}||M{0.9cm}M{0.8cm}M{0.8cm}M{0.8cm}M{0.8cm}M{0.8cm}} $R_b$ & $0.1$ & $0.5$& $1$& $2.5$ & $5$ & $10$ \\ \hline\hline \vspace{0.08cm} $P_{\rm out}^{*\rm AB}$ & $0.00314$ & $0.04253$ & $0.0935$ & $0.121$ & $0.1308$ & $0.9913$ \\ \hline\vspace{0.08cm} $\overline{R}^*_{a,b}$ & $0.0997$ & $0.4787$ & $0.9065$ & $2.1975$ & $4.3459$ & $0.0866$ \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[trim=0.8cm 0cm -0.8cm 0cm ,width=3.6in]{Figure3.eps} \caption{The effective covert rate $\overline{R}^*_{a,b}$ and the corresponding optimal outage probability $P_{\rm out}^{*\rm AB}$ as a function of target rate $R_b$. In addition to the benchmark scenario, two other scenarios of Fig.
\ref{Fig1}, namely those obtained by changing $P_a$ from $20$ \si{dBm} to $5$ \si{dBm}, and $\theta_{a,s}$ from $30^{\rm o}$ to $15^{\rm o}$, are also considered.}\label{Fig3} \end{figure} Effective covert rates corresponding to the benchmark scenario in Fig. \ref{Fig1} are summarized in Table \ref{T3} for $\epsilon=0.05$ and various threshold rates. To obtain these results, we first numerically solved the equation $\E[P^*_{e,w}]=1-\epsilon$ for $P_J^{\rm max}$, given the parameters corresponding to the benchmark scenario. This resulted in the optimal value of $P_{J,\rm opt}^{\rm max}=15.52$ \si{dBm}. Then we computed the corresponding optimal outage probabilities $P_{\rm out}^{*\rm AB}$, for various target rates, according to \Tref{thm3}. The effective covert rate $\overline{R}^*_{a,b}$ and the corresponding optimal outage probability $P_{\rm out}^{*\rm AB}$ for the considered benchmark scenario are also plotted, as a function of target rate $R_b$, in Fig. \ref{Fig3}. Furthermore, Fig. \ref{Fig3} includes the results of $\overline{R}^*_{a,b}$ and $P_{\rm out}^{*\rm AB}$ for two other scenarios of Fig. \ref{Fig1}, namely those obtained by changing $P_a$ from $20$ \si{dBm} to $5$ \si{dBm}, and $\theta_{a,s}$ from $30^{\rm o}$ to $15^{\rm o}$. It is observed that, for a given link, the effective covert rate first increases and then decreases as the threshold rate increases. {This is because, after some point, the outage probability $P_{\rm out}^{*\rm AB}$ quickly transitions from $0$ to $1$.} The maximum effective covert rate achievable for the benchmark scenario is $5.0743$, obtained at the target rate $R_b=6.42$ with the corresponding optimal outage probability $P_{\rm out}^{*\rm AB}=0.2096$.
Moreover, maximum effective covert rates of $\overline{R}^*_{a,b}=2.0585$ and $3.2223$ are achievable at the target rates of $R_b=2.88$ and $4.46$ with the corresponding optimal outage probabilities of $P_{\rm out}^{*\rm AB}=0.2853$ and $0.2775$ for the scenarios of $P_a=5$ \si{dBm} and $\theta_{a,s}=15^{\rm o}$, respectively. The optimal values of $P_J^{\rm max}$ for these two scenarios are $P_{J,\rm opt}^{\rm max}=0.52$ \si{dBm} and $25.91$ \si{dBm}, respectively. Although reducing $\theta_{a,s}$ to $15^{\rm o}$ does not directly impact the performance of the Alice-Bob link (e.g., the outage probability or ergodic capacity), it requires much stronger jamming signals with $P_{J,\rm opt}^{\rm max}=25.91$ \si{dBm} to satisfy the covertness requirement, which significantly degrades the performance compared to the benchmark scenario. On the other hand, the performance drop-off of the case $P_a=5$ \si{dBm} is a direct consequence of the much lower transmit power used compared to the benchmark scenario, though a much weaker jamming signal of $P_{J,\rm opt}^{\rm max}=0.52$ \si{dBm} is enough to satisfy the covertness constraint. \begin{figure}[t] \centering \includegraphics[width=3.6in]{Figure4.eps} \caption{The ergodic capacity $\E[R_{a,b}]$ of the Alice-Bob link for the benchmark scenario, corresponding to the parameters in Table \ref{T2}, and several other setups obtained by changing $M_{a,f}$, $m_{a,s}$, $P_a$, $\theta_{a,f}$, and $\Delta$.}\label{Fig4} \end{figure} The ergodic capacity $\E[R_{a,b}]$ of the Alice-Bob link is shown in Fig. \ref{Fig4} for the benchmark scenario, corresponding to the parameters in Table \ref{T2}. Moreover, the impact of several parameters is examined by changing each one while keeping the rest of the parameters as in Table \ref{T2}. As expected, $\E[R_{a,b}]$ monotonically decreases with $P_J^{\rm max}$.
{ Moreover, enlarging $M_{a,f}$, $P_a$, and $\theta_{a,f}$ positively impacts the ergodic capacity by delivering a stronger desired signal to Bob. Additionally, increasing $m_{a,s}$ reduces $\E[R_{a,b}]$ by imposing a stronger jammer on Bob. Finally, increasing $\Delta$ negatively impacts the ergodic capacity by reducing the chance of receiving the desired signal at Bob with a main-lobe gain.} It is worth mentioning that mmWave links benefit from much larger bandwidths compared to RF links; hence, the results in Figs. \ref{Fig3} and \ref{Fig4} imply much higher data rates, in bits per second, compared to their RF communication counterparts. {Finally, it is important to study the performance loss as a result of the existence of the warden Willie. Note that the performance metrics in the absence of Willie can be obtained from the results of the paper by studying the performance of our system model in the limit of $P_J^{\rm max}\to 0$ (i.e., $P_J^{\rm max}\to -\infty$ \si{dBm}). Observe that the outage probability and ergodic capacity results plotted in Figs. \ref{Fig2} and \ref{Fig4}, respectively, as a function of $P_J^{\rm max}$, start at some constant values and then change once the value of $P_J^{\rm max}$ is large enough to impact the performance of the Alice-Bob link. Therefore, the constant values of these plots at very small values of $P_J^{\rm max}$ correspond to the outage probability and ergodic capacity of the system in the absence of Willie. Hence, Figs. \ref{Fig2} and \ref{Fig4} clearly illustrate the performance loss due to the existence of Willie as a function of $P_J^{\rm max}$. In fact, the outage probability increases and the ergodic capacity decreases as we increase $P_J^{\rm max}$ to support a stronger level of covertness due to the existence of the warden.} \section{Conclusions}\label{Sec7} In this paper, we investigated covert communication over mmWave links.
We employed a dual-beam transmitter to simultaneously transmit the desired signal to the destination and propagate a jamming signal to degrade the warden's performance. We characterized Willie's detection error rate and derived a closed-form expression for its expected value from Alice's perspective. We then derived the closed-form expression for the outage probability of the Alice-Bob link, which enabled us to formulate the optimal achievable covert rates. We further obtained tractable forms for the ergodic capacity of the Alice-Bob link involving only one-dimensional integrals that can be computed in closed form for most ranges of the channel parameters. Moreover, we elucidated how the results can be extended to more practical scenarios, taking into account the uncertainty about Willie's location. We also highlighted several interesting directions for future research on covert mmWave communication. Through comprehensive numerical studies, we analyzed the behavior of the derived performance metrics with respect to a variety of channel and system parameters. Our results demonstrated the advantages of covert mmWave communication compared to the RF counterpart, calling for further research on this novel area. \appendices \section{A Useful Lemma for the Integration over ${\rm Ei}(\cdot)$}\label{App_A} In \cite[Lemma 1]{jamali2019uplink}, a useful lemma is proved for the integral $\int_{c_1}^{c_2}{\rm e}^{bx}{\rm Ei}(ax)dx$ with $c_1,c_2>0$, $a<0$, and $b\in\mathbb{R}$ such that (s.t.) $a+b<0$. In this appendix, we prove that the same result, with a slight change, can be applied to the case of $b=-a$, i.e., $a+b=0$ (see, e.g., \cite[Eq. (06.35.21.0014.01)]{wolfram}). \begin{lemma}\label{lemma_app1} For any $c_1,c_2>0$ and $a<0$, we have \begin{align}\label{app1_1} \int_{c_1}^{c_2}{\rm e}^{-ax}{\rm Ei}(ax)dx=&\frac{1}{-a}\Big[{\rm e}^{-ac_2}{\rm Ei}(ac_2)\nonumber\\ &-{\rm e}^{-ac_1}{\rm Ei}(ac_1)-\ln\left({c_2}/{c_1}\right)\Big].
\end{align} \end{lemma} \begin{proof} Note based on \cite[Lemma 1]{jamali2019uplink} that for $c_1,c_2>0$, $a<0$, and $b\in\mathbb{R}$ s.t. $a+b<0$, we have \begin{align}\label{app1_2} \int_{c_1}^{c_2}{\rm e}^{bx}{\rm Ei}(ax)dx=\frac{1}{b}\left[{\rm e}^{bt}{\rm Ei}(at)-{\rm Ei}([a+b]t)\right]\!{\Big |}_{c_1}^{c_2}, \end{align} where $f(t)|_{c_1}^{c_2}\triangleq f(c_2)-f(c_1)$ for the function $f(t)$. In the case of $b=-a$ per \Lref{lemma_app1}, the argument of the second exponential integral function ${\rm Ei}([a+b]t)$ in \eqref{app1_2} is zero. Based on \cite[Eq. (1)]{harris1957tables}, ${\rm Ei}(x)=\gamma+\ln|x|+o(1)$ as $x\to 0$, where $\gamma=0.57721$ is Euler's constant. Therefore, we can write ${\rm Ei}([a+b]t){\big |}_{c_1}^{c_2}$ for the case of $a=-b$ as \begin{align}\label{app1_3} \!\!{\rm Ei}([a+b]t){\big |}_{c_1}^{c_2}\!=\!\lim_{x\to 0} {\rm Ei}(xt){\big |}_{c_1}^{c_2}\!=\!\lim_{x\to 0} \ln\!\left(\!\frac{|xc_2|}{|xc_1|}\!\right)\!\!=\!\ln\!\left(\!\frac{c_2}{c_1}\!\right)\!.\! \end{align} This, together with arguments similar to those in the proof of \cite[Lemma 1]{jamali2019uplink}, completes the proof of \Lref{lemma_app1}. \end{proof}
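As a sanity check, \Lref{lemma_app1} can also be verified numerically. The sketch below (plain standard-library Python; function names and quadrature settings are our illustrative choices, not from the paper) evaluates $E_1$ by composite Simpson quadrature, obtains ${\rm Ei}$ for negative arguments through ${\rm Ei}(y)=-E_1(-y)$, and compares both sides of the lemma for $a=-1$, $c_1=1$, $c_2=2$.

```python
import math

def simpson(f, lo, hi, n):
    # Composite Simpson's rule with n (even) subintervals.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def E1(t):
    # E1(t) = \int_t^\infty e^{-u}/u du for t > 0, via the substitution
    # u = t + s; the integrand decays like e^{-s}, so [0, 60] suffices.
    return simpson(lambda s: math.exp(-t - s) / (t + s), 0.0, 60.0, 4000)

def Ei(y):
    # Ei(y) for y < 0 through the identity Ei(y) = -E1(-y).
    return -E1(-y)

a, c1, c2 = -1.0, 1.0, 2.0

# Left-hand side of the lemma: direct quadrature of the integrand.
lhs = simpson(lambda x: math.exp(-a * x) * Ei(a * x), c1, c2, 200)

# Right-hand side of the lemma: the stated closed form.
rhs = (1.0 / -a) * (math.exp(-a * c2) * Ei(a * c2)
                    - math.exp(-a * c1) * Ei(a * c1)
                    - math.log(c2 / c1))

print(abs(lhs - rhs) < 1e-4)  # → True
```

Both sides agree to quadrature accuracy (approximately $-0.458$ for this parameter choice).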
\section{Introduction} \label{sec:introduction} In the age of Big Data, modern-day computational data science heavily relies on generating highly complex data-driven models. However, the consistent increase in data volume is challenging the processing power of data analytics and machine learning (ML) frameworks. The Python programming language has evolved into the de-facto standard for the data science community. Therein, the default choice for many applications is the SciPy stack~\cite{viranen2020scipy}, which is built upon the computational library NumPy~\cite{walt2011numpy}. More recently, deep-learning libraries such as TensorFlow~\cite{abadi2015tensorflow} and PyTorch~\cite{paszke2019pytorch} have begun to bridge the gap between small-scale workstation-based computing and high-performance multi-node computing by offering GPU-accelerated kernels. Although these frameworks include options for manually distributing computations, they are generally confined to the processing capabilities of a single computation node. As the size of datasets increases, single-node computations become at best impractical and at worst impossible. In response to these challenges, we propose HeAT\footnote{\url{https://github.com/helmholtz-analytics/heat}} -- the Helmholtz Analytics Toolkit: an open-source library with a NumPy-like API for distributed and GPU-accelerated computing on general-purpose clusters and high performance computing (HPC) systems. HeAT implements parallel algorithms for both low-level array computations as well as higher-level data analytics and machine learning methods. It does so by interweaving process-local PyTorch tensors with communication via the Message Passing Interface (MPI)~\cite{mpi2015mpi}. The familiar API facilitates the transition of existing Python code to distributed applications, thus opening the doorway to HPC computing for domain-specialized scientists.
Due to its design, HeAT consistently performs better in terms of execution time and scalability when compared to similar frameworks. The remainder of this paper is organized as follows. \cref{sec:related-work} will present related work in the field of distributed and GPU-accelerated array computations in Python. \cref{sec:framework-design-and-implementation} will explain HeAT's programming model, as well as its array and communication design concepts. In \cref{sec:performance-result}, an empirical performance evaluation is presented. \cref{sec:discussion} discusses the advantages and limitations of HeAT's programming model with respect to other frameworks. Finally, \cref{sec:conclusion} concludes the presented work. \section{Related Work} \label{sec:related-work} NumPy is arguably the most widely used Python-based data science library. It offers powerful data structures and algorithms for vectorized matrix and tensor operations, allowing for the implementation of efficient numerical and scientific programs. The high popularity of NumPy, likely due to the similarity of its functions to the mathematical formulation, has led to its API representing a widely recognized and accepted standard for data science programming. Scikit-learn \cite{pedregosa2011sklearn} is a NumPy-based machine learning framework that offers a wide range of ready-to-use high-level algorithms for clustering, classification and regression, as well as selected pre-processing steps like feature selection and dimensionality reduction. This makes it a very attractive solution for application scientists. However, neither NumPy nor scikit-learn support out-of-the-box GPU usage. Several packages have addressed GPU acceleration for array computations. The just-in-time compiler Numba \cite{lam2015numba} can compile Python functions to CUDA code. 
While Numba relies on custom annotation of the Python code with decorators for acceleration, the CUDA-accelerated array computation library CuPy \cite{nishino2017cupy} allows for GPU computation in analogy to the NumPy interface. Unfortunately, it does not provide the full range of NumPy functionality. RAPIDS's~\cite{rapids} CUDA-accelerated libraries CuDF and CuML aim towards higher-level machine learning functionality beyond low-level array computation. CuDF offers GPU dataframe computations similar to the Pandas library~\cite{reback2020pandas}. CuML offers a subset of high-level machine learning algorithms. However, the currently available function space of these libraries is limited. With the increasing interest in deep learning methods, novel frameworks have emerged that place special focus on tensor linear algebra and neural networks. Many libraries like PyTorch \cite{paszke2019pytorch}, TensorFlow~\cite{abadi2015tensorflow}, MXNet~\cite{chen2015mxnet} or JAX~\cite{bradburg2018jax} focus mainly on deep learning applications, and as such they do not actively target NumPy-API compatibility. On the other hand, they enable simple transitions between CPU and GPU computation. Unfortunately, they primarily provide high-level algorithms for neural networks and often lack many traditional algorithms, for example clustering or ensemble methods. All of the aforementioned libraries work around Python's parallel computation limitations by implementing computationally intensive kernels in low-level programming languages, like C++ or CUDA, and invoking the respective calls at run-time. This allows Python users to exploit powerful features such as vectorization, threading, or the utilization of accelerator hardware. However, these frameworks are designed for single computation node usage, with specific configurations enabling multi-core computation. This limits their potential application to small and medium-sized problems.
For larger datasets, excessive computation times and increasing memory consumption pose arduous challenges. Some of these ML libraries provide a basic infrastructure for distributed computation. PyTorch, for example, includes two such packages. Firstly, the distributed remote procedure call (RPC) framework handles communication and thereby enables distributed automatic differentiation (AD). Secondly, the \texttt{distributed} package implements low-level bindings to MPI, Gloo, and NCCL. However, the set of supported MPI functionality is incomplete, as it targets only the communication functions specifically required for data-parallel neural networks. Furthermore, the communication aspect of algorithms must be implemented by the user. This requires, at minimum, a basic understanding of distributed computing. Similar restrictions apply for the distributed extensions of TensorFlow, JAX, and MXNet. Frameworks like DeepSpeed~\cite{rajbh2019zero} and Horovod~\cite{sergeev2018horovod} offer multi-node computations on both CPU and GPU. These approaches are limited to deep learning applications, as they mainly focus on data-parallelism for neural networks and do not target general distributed array-based computations. Phylanx~\cite{tohid2018phylanx} implements Python bindings for C and C++ array computation algorithms, which closely mimic NumPy's API, but it does not support higher-level machine learning functionality or GPU usage. Intel's DAAL~\cite{chen2017benchmarking} provides only high-level algorithms employing MapReduce~\cite{dean2008mapreduce}, with the focus on accelerated multi-CPU usage. However, it does not offer the means for low-level array computations or GPU usage. Furthermore, functionality requiring communication beyond the scope of MapReduce must be implemented by the user. Legate~\cite{bauer2019legate} focuses on distributed multi-node multi-GPU computations for low- to high-level array operations by employing a global address space programming interface.
While its design enables shared-distributed memory computation, a wide range of functionality is currently not implemented. When it comes to easy distributed array computations with a NumPy-like API and support of high-level algorithms similar to scikit-learn, Dask~\cite{rocklin2015dask} has become the most popular framework amongst application scientists. It employs dynamic task scheduling for parallel execution of NumPy operations on CPU-based multi-node systems. The Dask execution model envisages one scheduler process and multiple worker instances. The scheduler sends workload via serialized RPC calls to the workers. Networking between the processes builds on the available system network architecture. GPU usage can be enabled by coupling Dask to RAPIDS's cuML and cuDF. Due to its popularity, Dask can be considered the current benchmark in Python-first distributed data analysis and ML computations. There are non-Python based distributed data analysis tools such as the Java-based Spark~\cite{spark}. A recent comparison~\cite{sparkvdask} between Spark and Dask has shown that they perform similarly. \section{Design and Implementation} \label{sec:framework-design-and-implementation} HeAT is an open-source library that implements data structures, functions, and methods for array-based numerical data analysis and machine learning. Due to its NumPy-like API, users are generally familiar with the programming approach. The implementation of (distributed) higher-level algorithms adheres to the scikit-learn API. This interface design makes the conversion of existing data analytics applications to distributed HeAT applications straightforward. An example can be seen in \cref{lst:stddev}. Furthermore, small-scale program prototypes can be developed, which can be transitioned transparently to HPC systems without major code or algorithmic changes.
Distributed HeAT applications are typically faster, and their memory limitations are those of the entire system, rather than those of a single node. As a result, HeAT facilitates the algorithmic development and efficient deployment of large-scale data analytics and machine learning applications.
\begin{lstlisting}[
caption={Implementation of a function calculating the standard deviation of an array, demonstrating the API compatibility between NumPy and HeAT.},
label={lst:stddev}
]
import numpy as np
import heat as ht

def np_stddev(a, axis=0):
    return np.sqrt(((a - a.mean(axis)) ** 2).mean(axis))

def ht_stddev(a, axis=0):
    return ht.sqrt(((a - a.mean(axis)) ** 2).mean(axis))
\end{lstlisting}
The central component of HeAT is the \texttt{DNDarray} data structure, an N-dimensional array transparently composed of computational objects on one or more processes. The process-local objects are PyTorch tensors, allowing HeAT functions to use both CPUs and GPUs. A detailed description of the \texttt{DNDarray} design will be given in \cref{subsec:dndarrays}. For distributed memory computing, communication between processes is crucial. MPI controls communication among parallel processes on distributed memory systems via a set of message sending and receiving functions. Communication can take place between two processes, i.e. point-to-point communication, or within groups of processes in the MPI communicator, i.e. global communication. HeAT's MPI-based custom communications backend is described in \cref{subsec:distributed-computation}. \subsection{Programming Model} \label{subsec:programming-model} HeAT realizes a single-program-multiple-data (SPMD) programming model~\cite{darema2001spmd} using PyTorch and MPI. Additionally, the framework's processing model is inspired by the bulk-synchronous parallel (BSP)~\cite{valiant1990bsp} model.
In practice, computations proceed in a series of hierarchical supersteps, each consisting of a number of process-local computations and subsequent inter-process communications. In contrast to the classical BSP model, communicated data is available immediately, rather than after the next global synchronization. In HeAT, global synchronizations only occur for collective MPI calls as well as at the program start and termination. A schematic overview is depicted in \cref{fig:bsp}. \begin{figure}[t] \centering \resizebox{.95\linewidth}{!}{\input{images/bsp.tikz}} \caption{The BSP-inspired parallel processing model utilized by HeAT. Computation steps are marked as light blue blocks, possible communication as blue arrows, and implicit or explicit synchronization points as vertical black bars.} \label{fig:bsp} \end{figure} The process-local computations are implemented using PyTorch as the computation engine. Each computation is processed eagerly, i.e. when issued to the interpreter. The scheduling onto the hardware is controlled by the respective runtime environment of PyTorch. For the CPU backend, these are the synchronous schedulers of OpenMP~\cite{dagum1998openmp} and Intel TBB~\cite{pheatt2008tbb}. For the GPU backend, it is the asynchronous scheduler of the NVIDIA CUDA~\cite{nickolls2008cuda} runtime system. HeAT provides the MPI ``glue'', utilizing the \texttt{mpi4py}~\cite{dalcin08mpi4py} module, for the communication in each superstep. Users can freely access these implementation details, although it is neither necessary nor recommended to modify the communication routines.
\subsection{DNDarrays} \label{subsec:dndarrays} \begin{figure*} \begin{subfigure}[b]{0.22\linewidth} \input{images/array-split-None.tikz} \caption{\texttt{split=None}} \end{subfigure} \begin{subfigure}[b]{0.22\linewidth} \input{images/array-split-0.tikz} \caption{\texttt{split=0}} \end{subfigure} \hspace{.05cm} \begin{subfigure}[b]{0.22\linewidth} \captionsetup{skip=-10pt} \input{images/array-split-1.tikz} \caption{\texttt{split=1}} \end{subfigure} \hspace{.5cm} \begin{subfigure}[b]{0.22\linewidth} \captionsetup{skip=-12pt} \input{images/array-split-2.tikz} \caption{\texttt{split=2}} \end{subfigure} \caption{Distribution of a 3-D \texttt{DNDarray} across three processes: (a), \texttt{DNDarray} is not distributed, i.e., \texttt{split=None}, each process has access to the full data; (b), (c), and (d): \texttt{DNDarray} is distributed along axis 0, 1 or 2 (\texttt{split=0}, \texttt{split=1}, or \texttt{split=2}, respectively). An example for case (b) is available in \cref{lst:shape}. In each case, the data chunk labeled p$_n$ resides on process $n$, with $n = 0$, $1$, or $2$.} \label{fig:split_illustration} \end{figure*} At the core of HeAT is the Distributed N-Dimensional Array, \texttt{DNDarray} (cf. \cref{lst:shape}). The \texttt{DNDarray} object is a virtual overlay of the disjoint PyTorch tensors, which store the numerical data on each MPI process. A \texttt{DNDarray}'s data may be redundantly allocated on each node, or one-dimensionally decomposed into evenly-sized chunks with a maximum size difference of one element along the decomposition axis. This data distribution strategy aims to balance the workload between all processes. During computation, API calls may redistribute data items. However, completed operations automatically restore the uniform data distribution. 
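As a sketch, such a balanced decomposition along the split axis can be computed as follows (a plain-Python illustration of the strategy described above; the helper name and the convention of placing the larger chunks on the lower ranks are our illustrative choices):

```python
def chunk_sizes(n, nprocs):
    # Split n elements over nprocs processes as evenly as possible:
    # chunk sizes differ by at most one, larger chunks on lower ranks.
    base, rem = divmod(n, nprocs)
    return [base + 1 if rank < rem else base for rank in range(nprocs)]

# 5 rows over 3 processes, e.g. for a (5, 4, 3) array with split=0.
print(chunk_sizes(5, 3))  # → [2, 2, 1]
```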
To steer the one-dimensional data decomposition and other parallel processing behavior, HeAT users can utilize a number of additional attributes and parameters: \begin{itemize} \item \texttt{split}: the singular axis, or dimension, along which a \texttt{DNDarray} is to be decomposed (see \cref{fig:split_illustration} and \cref{lst:shape}). If split is \texttt{None}, a redundant copy is created on each process \item \texttt{device}: the computation device, i.e. CPU or GPU, on which the \texttt{DNDarray} is allocated \item \texttt{comm}: the MPI communicator, i.e. the set of participating processes, for distributed computation (\cref{subsec:distributed-computation}) \item \texttt{shape}: the dimensionality of the global data \item \texttt{lshape}: the dimensionality of the process-local data \end{itemize} As stated, process-level operations on \texttt{DNDarray}s are performed via PyTorch functions, thus employing their C++ core library \texttt{libtorch} to achieve high efficiency. Interoperability with external libraries such as NumPy and PyTorch is straightforward. Data contained in a NumPy \texttt{ndarray} or a PyTorch \texttt{Tensor} can be imported into a \texttt{DNDarray} via the \texttt{heat.array()} function with the optional \texttt{split} attribute. In the opposite direction, data exchange with NumPy is enabled by the \texttt{DNDarray.numpy()} method.
\begin{lstlisting}[
caption={A \texttt{DNDarray} distributed across three processes as illustrated in \cref{fig:split_illustration}(b).},
label={lst:shape}
]
import heat as ht

a = ht.zeros((5, 4, 3), split=0)
a.shape
[0/3] >>> (5, 4, 3)
[1/3] >>> (5, 4, 3)
[2/3] >>> (5, 4, 3)
a.lshape
[0/3] >>> (2, 4, 3)
[1/3] >>> (2, 4, 3)
[2/3] >>> (1, 4, 3)
\end{lstlisting}
A \texttt{DNDarray} can reside in a node's main memory for the CPU backend or, if available, in the VRAM of GPUs.
Individual \texttt{DNDarray}s can be assigned to hardware devices via the \texttt{device} attribute, or the default device can be defined as shown in \cref{lst:gpu}.
\begin{lstlisting}[
caption={Programmatic ways of allocating \texttt{DNDarray} on different devices.},
label={lst:gpu}
]
import heat as ht

# a single allocation
a = ht.zeros((1,), device="gpu")
a
>>> DNDarray([0.], device="gpu")

# setting a default device
ht.use_device("gpu")
b = ht.ones((1,))
b
>>> DNDarray([1.], device="gpu")
\end{lstlisting}
\subsection{Distributed Computation} \label{subsec:distributed-computation} Many algorithms using a distributed \texttt{DNDarray} will require communication. HeAT has a custom MPI-based communication layer composed of wrappers of point-to-point and global MPI functions. PyTorch's \texttt{distributed} package does not support many of the required MPI functions, such as \texttt{alltoall} or custom reduction operations. Furthermore, tensors are sent via the network as contiguous buffers in their original storage layout, so that the information on their N-dimensional structure is lost in communication. Therefore, HeAT's communication layer is based on the Python library \texttt{mpi4py}~\cite{dalcin08mpi4py}, which offers an interface to the most common MPI functions and enables the communication of contiguous Python buffer objects. The \texttt{DNDarray} memory representation is encoded in the one-dimensional buffer via strides (steps between elements) along the respective dimension. A main challenge in communicating an arbitrarily split \texttt{DNDarray} is the preservation of this data structure. The HeAT communication module internally handles buffer preparation as the interface between the \texttt{DNDarray} and the \texttt{mpi4py} functionality.
\begin{figure} \centering \resizebox{0.98\linewidth}{!}{\input{images/array-resplit.tikz}} \caption{Internal handling of a \texttt{resplit(None)} operation on a two-dimensional \texttt{DNDarray} with \texttt{split=1} in HeAT, i.e., data replication on all nodes. It depicts the on-the-fly creation of MPI datatypes for the strided access into the output and input buffers.}\label{fig:array-resplit} \end{figure} For point-to-point communications (e.g. \texttt{send, recv}), buffer preparation is trivial as the data can be sent contiguously from one process and unpacked by the receiving process. Considerably more effort is required for collective operations. For gathering operations (e.g. \texttt{gather, allgather}), the node-local \texttt{Tensor} to be sent by each process must have the correct memory layout, which is dependent on the split axis of the \texttt{DNDarray}. For scattering operations (e.g. \texttt{scatter, alltoall}), the data chunks must be packed correctly along the split axis before distribution. HeAT addresses the packing issues by creating custom MPI data types, which wrap the local \texttt{Tensor} buffer. First, the \texttt{DNDarray}'s dimensions are permuted such that the dimension along which data collection or distribution will take place is the first dimension. Then, custom data types are created, via the MPI function \texttt{Create\_vector}, to iteratively pack the dimensions from the last to the first. The individual data types at each dimension are defined via the \texttt{DNDarray}'s strides. The creation of such a buffer is schematically shown in \cref{fig:array-resplit}. Here, a split \texttt{DNDarray} is assembled to \texttt{split=None} via HeAT's \texttt{allgather} function. With this internal buffer handling, HeAT offers a unified interface that provides communication without exposing the internal data representation.
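The derived MPI datatypes describe this strided access without copying; the effect of the packing step itself can be emulated in NumPy (a sketch; the helper name is an illustrative choice, not part of HeAT's API), by permuting the split axis to the front and materializing a contiguous copy:

```python
import numpy as np

def pack_for_split(local, split):
    # Emulate the packing step: move the split axis to the front so that
    # the chunks exchanged by a collective are contiguous in memory.
    return np.ascontiguousarray(np.moveaxis(local, split, 0))

local = np.arange(24).reshape(2, 3, 4)   # stand-in for a process-local tensor
packed = pack_for_split(local, split=2)
print(packed.shape)  # → (4, 2, 3)
```

Unlike the NumPy emulation, the MPI datatype approach avoids materializing the rearranged copy.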
Based on the MPI layer, a \texttt{resplit} function is provided to change the split axis of a \texttt{DNDarray} if required. Re-splitting a \texttt{DNDarray} adheres to load balancing, i.e., the data is uniformly distributed across processes as previously stated. However, caution must be taken when using \texttt{resplit} as it is based on global MPI communication functions, thus requiring both significant communication and local memory. In cases where CUDA-aware MPI is available, communications can be performed directly between GPUs. Otherwise, data must be copied from the GPU to the CPU, sent to the target CPU, then copied to the target GPU. This increases the communication overhead, and therefore the runtime, of many functions. \section{Performance Results} \label{sec:performance-result} The performance of HeAT was evaluated by benchmarking algorithms commonly used in data science and comparing the results to implementations in other frameworks. NumPy and PyTorch were chosen for the single-node baseline evaluation: NumPy because it is the most frequently used library for Python-based array computation, and PyTorch because it is the underlying node-local eager execution engine of HeAT. For multi-node experiments, Dask was selected. Algorithms were chosen based on their algorithmic complexity and availability as ready-to-use implementations in the investigated frameworks. Two types of low-level algorithms were benchmarked: the computation of statistical moments (\cref{subsubsec:performance-mean}) and the computation of pairwise Euclidean distances, i.e. the function \texttt{cdist} (\cref{subsubsec:performance-cdist}). These operations are available in Dask, PyTorch, and NumPy as designated functions under the same name and mathematical definition.
As HeAT aims to provide both low-level and high-level functionality, two commonly used ML algorithms were also selected: \textit{k}-means clustering (\cref{subsubsec:performance-kmeans}) and least absolute shrinkage and selection operator (LASSO) regression (\cref{subsubsec:performance-lasso}). These two algorithms are provided by Dask and scikit-learn as end-user implementations. For comparison with PyTorch, algorithms were specifically implemented with as much provided PyTorch functionality as possible. All benchmark scripts are available on HeAT's GitHub repository. \subsection{Execution Environment} \label{subsec:execution-environment} \begin{table}[h] \caption{Software packages used for performance benchmarks.} \begin{center} \begin{tabular}{|l l|l l|} \hline \multicolumn{2}{|l|}{\textbf{General}}&\multicolumn{2}{l|}{\textbf{Python}} \\ \textbf{\textit{Package}} & \textbf{\textit{Version}}& \textbf{\textit{Package}}& \textbf{\textit{Version}} \\ \hline CUDA & 10.2& dask & 2.12.0 \\ GCC & 8.3.0 &dask-ml & 1.2.0 \\ HDF5 & 1.10.5 & dask-mpi & 2.0.0 \\ Intel Cluster Studio XE & 2019.03 &heat & 0.4.0 \\ ParaStationMPI & 5.2.2-1 & mpi4py & 3.0.3 \\ Python & 3.6.8 & numpy$^{\mathrm{a}}$ & 1.15.2 \\ & & sklearn & 0.22.2 \\ & &torch &1.5.0 \\ \hline \multicolumn{4}{l}{$^{\mathrm{a}}$using \texttt{Intel MKL 2019.1}.} \end{tabular} \label{tab:software-env} \end{center} \end{table} The software environment for benchmarking is summarized in \cref{tab:software-env}. The experiments were run on a machine learning HPC system comprised of 15 compute nodes with commodity components at the Jülich Supercomputing Centre (JSC). Each node is equipped with two 12-core Intel Xeon Gold 6126 CPUs, \SI{206}{\giga\byte} of DDR3 main memory, and four NVIDIA Tesla V100 SXM2 GPUs with \SI{32}{\giga\byte} VRAM per card. The GPUs communicate node-internally via an NVLink interconnect. 
The system is optimized for GPUDirect communication across node boundaries, supported by two Mellanox \SI{100}{\giga\bit} EDR InfiniBand links. Though HeAT supports CUDA-aware MPI, it was not used here, so as to keep the experiments comparable to Dask, which cannot make use of it. MPI-capable commodity clusters should show similar differences between HeAT and Dask unless the cluster is specifically tuned for non-standard use cases. \subsection{Datasets} \label{subsubsec:datases} Three datasets were chosen to demonstrate the effectiveness of HeAT for different data characteristics and to mimic common use-cases: \begin{enumerate} \item The Cityscapes dataset~\cite{cordts2016cityscapes} contains 5\,000 high-resolution images with fine-grained annotations. Each image is 2\,048$\times$1\,024 pixels with three 8-bit RGB color channels per pixel, which have been flattened and combined into a short-fat matrix with 5\,000$\times$6\,291\,456 entries, i.e. \SI{117.19}{\giga\byte}. \item The SUSY dataset~\cite{baldi2014searching} contains 5\,000\,000 samples from Monte Carlo simulations of high-energy particle collisions. Each sample has 18 features, consisting of kinematic properties measured by particle detectors and high-level functional derivations of those measurements~\cite{uciMachineLearningRepository}. The total data size is \SI{343.33}{\mega\byte}. \item The EURopean Air pollution Dispersion-Inverse Model data (EURAD-IM)~\cite{elbern20004dvar} contains parameters from an Eulerian meso-scale chemistry transport model as part of the Copernicus Atmosphere Monitoring Service (CAMS)\footnote{https://atmosphere.copernicus.eu/}. For our experiments, $10^7$ data points and 100 parameters, i.e. \SI{7.45}{\giga\byte}, of the model have been chosen and stored in a tall-skinny matrix. \end{enumerate} The Cityscapes and SUSY datasets are publicly available; the EURAD-IM dataset is available upon request.
All datasets were converted from their original sources into solitary data matrices and stored as floating point values in HDF5 files~\cite{hdf52020hdf5}. While both Dask and HeAT utilize parallel I/O via \texttt{h5py}~\cite{collette2014h5py}, they handle data decomposition differently. HeAT automatically decomposes data upon \texttt{DNDarray} creation when given a split axis by the user (cf. \cref{subsec:dndarrays}). Dask offers both an automatic data decomposition scheme, which is the recommended setup as per the Dask documentation \cite{daskdocumentation}, and manual specification of the size of the memory-distributed data chunks. In the following, Dask's performance with automatic chunking is indicated by Dask-auto, whereas Dask-tuned indicates manual chunking mirroring HeAT's data decomposition scheme. All measurements with HeAT are performed with \texttt{split=0}, unless otherwise stated. \subsection{Experiments} \label{subsec:applications} All of the following benchmarks comprise weak scaling and strong scaling experiments. Measurements are the average of 9 runs, preceded by a warm-up run. The error bars indicate the empirical standard deviation. In several cases, the errors are too small to be visible compared to the data point itself. A fractional number of nodes for weak scaling GPU runs refers to the usage of the equivalent fraction of a node's resources, i.e. one (0.25) or two (0.5) of the node's four GPUs. For Dask, the actual program code is provided in a separate script that connects the scheduler to the workers via a \texttt{dask.distributed.Client} instance. The discovery of the scheduler is done manually by passing an IP address, or via information on a shared filesystem. Networking between the processes builds on network sockets and utilizes InfiniBand using TCP over IB. Each worker maintains its execution state by writing into journaling directories.
Weak scaling refers to increasing the amount of computational resources while maintaining a constant workload per MPI process. Ideal weak scaling behaviour is a constant runtime as the number of processing units increases, indicating solid scalability to larger datasets. Results for weak scaling experiments are presented as the average maximum runtime across the processing units. Strong scaling refers to increasing the amount of computational resources while the total workload on the system remains constant. Ideally, the runtime in strong scaling measurements is inversely proportional to the number of processing units. We present our strong scaling results in units of speedup over a single-node NumPy-based implementation. \subsubsection{Statistical Moments} \label{subsubsec:performance-mean} \begin{figure*}[tb] \centering \begin{subfigure}[c]{\textwidth} \input{images/moments.tikz} \end{subfigure} \caption{Mean and standard deviation measurements of the Cityscapes dataset. (A) weak scaling of \texttt{mean(data, axis=None)}, (B) weak scaling of \texttt{std(data,axis=None)}, (C) strong scaling of \texttt{mean(data, axis=None)}, (D) weak scaling of \texttt{mean(data, axis=0)}. Cf. \cref{subsubsec:performance-mean}. 0.25 and 0.5 nodes refer to usage of only one and two out of the node's four GPUs.} \label{fig:scaling-moments} \end{figure*} Calculations of the mean and standard deviation are arguably among the most frequently used operations in all of computing. In a distributed context, it is inefficient to compute statistical moments with multiple passes over the dataset. Therefore, HeAT calculates statistical moments using the numerically stable single-pass algorithms presented in \cite{moments}. These experiments utilize the Cityscapes dataset. 
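The pairwise-combination step on which such numerically stable single-pass algorithms rely can be sketched in NumPy as follows (an illustrative version of the update described in \cite{moments}, not HeAT's actual implementation; \texttt{combine} is a hypothetical helper):

```python
import numpy as np

def combine(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
    # Merge two partial results (count, mean, sum of squared
    # deviations) without a second pass over the data.
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    m2 = m2_a + m2_b + delta**2 * n_a * n_b / n
    return n, mean, m2

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
parts = np.split(x, 4)  # stand-in for per-process chunks
n, mean, m2 = 0, 0.0, 0.0
for p in parts:
    n, mean, m2 = combine(n, mean, m2, len(p), p.mean(), ((p - p.mean())**2).sum())

assert np.isclose(mean, x.mean())
assert np.isclose(np.sqrt(m2 / n), x.std())
```

In a distributed setting, each process computes its local triple and a reduction tree of \texttt{combine} calls produces the global mean and standard deviation with a single pass and logarithmic communication depth.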
The experiments in this section cover some of the most common applications of statistical moments: the mean of the entire dataset, the mean along the largest dimension, and the standard deviation of the entire dataset. In its native form, Dask is designed for distributed computation on CPUs. Multi-node GPU usage can be enabled by coupling Dask with CuPy. In order to provide a comparison with HeAT's multi-GPU performance, benchmarks for Dask were originally intended to be performed both with and without CuPy. However, during the benchmarks we found that the native binding of CuPy to Dask only enables the usage of a single GPU per node, while HeAT enables multi-GPU usage per node. While it is possible for Dask to utilize multiple GPUs per node, the user must manually make substantial modifications to each algorithm that requires communication. Therefore, multi-GPU benchmarks on Dask with CuPy were only performed for the mean calculation over the entire dataset. Exemplary code for the algorithmic definition can be seen in \cref{lst:DaskCupy} (Appendix). \cref{fig:scaling-moments} shows strong and weak scaling of the mean operation, the weak scaling of the standard deviation, and the weak scaling of the mean along the largest axis. For weak scaling runs, each node held 300 rows of the matrix. For strong scaling runs, 1\:200 rows were used globally. Dask-tuned failed with memory errors above 4 nodes for the standard deviation calculations. Single-GPU measurements were much faster than multi-GPU measurements as no communication is required. Furthermore, PyTorch is faster than HeAT on a single GPU as the HeAT functions are wrappers of PyTorch functions. Favorable strong scaling is clearly shown in the HeAT CPU measurements (\cref{fig:scaling-moments} (C)). On a single node, HeAT outperforms NumPy and is nearly equal to Dask-auto. 
However, beyond one node, HeAT clearly improves upon these results, while none of the Dask measurements show significant improvement with the addition of more computing resources. CuPy shows positive scaling behavior; however, it is the slowest of the three parallel frameworks. As the number of nodes increases, the time required for communication also increases. Once this time eclipses the time required for computation, the performance degrades. This effect can be seen in the HeAT GPU measurements beyond four nodes. The effect is common in distributed computing, as one must balance the number of MPI processes against the size of the dataset to avoid excessive communication calls. HeAT calculations on CPU for these experiments show nearly ideal weak scaling. However, the measurements of the mean along the largest axis show Dask outperforming HeAT until the number of nodes is increased beyond 8. This is due to the large difference between the complexity of the calculation and the amount of communication required by HeAT. As the complexity of the calculation increases, the efficiency of HeAT remains constant and at 15 nodes shows slight improvement, whereas the efficiency of Dask degrades as the node count increases. All of these measurements were also conducted with the \texttt{split=1} data distribution scheme for HeAT; the difference between the runtimes for \texttt{split=0} and \texttt{split=1} was on average \SI{-0.0226 \pm 0.0809}{\second}. \subsubsection{Pairwise Euclidean Distances} \label{subsubsec:performance-cdist} Pairwise distance calculations are a vital part of data analysis and are used in many ML algorithms, such as clustering and neighborhood methods~\cite{debus2020high}. However, due to the quadratic growth in computational complexity, these computations scale notoriously poorly. Moreover, their excessive memory consumption poses a major challenge, which often limits the number of samples that can be processed. 
As a consequence, many applications employ a form of dimensionality reduction, yielding only approximate solutions. HeAT implements a custom distance computation function via ring communication, which works regardless of the employed distance metric. For the scaling experiments, the $L_2$ norm (Euclidean distance, commonly referred to as \texttt{cdist}) was utilized. Benchmarks were conducted on the SUSY dataset. For weak scaling, the number of samples was increased with the square root of the number of nodes, because the computational load grows quadratically. The first 12\:910, 18\:258, 25\:820, 36\:515, 51\:640, 73\:030, and 100\:000 samples were used for $N=$ 0.25, 0.5, 1, 2, 4, 8 and 15 nodes, respectively. For strong scaling runs, the first 40\:000 samples were used. Results are displayed in \cref{fig:scaling-cdist}, where HeAT's implementation of \texttt{cdist} shows significantly lower computation times compared to the other frameworks. It also provides a small speedup on CPU and a large speedup on GPU over the NumPy implementation. On one GPU (0.25 nodes), HeAT outperforms PyTorch because it employs quadratic expansion via matrix multiplication for the calculation of the squared differences rather than relying on PyTorch's \texttt{cdist} function: \begin{equation} \|X - Y\|_2^2 = \|X\|_2^2 + \|Y\|_2^2 - 2\;X\cdot Y \end{equation} On CPU, PyTorch's intrinsic \texttt{cdist} function is faster; however, the experiments for \textit{k}-means, cf. \cref{subsubsec:performance-kmeans}, will show that this speedup depends on the shape of the data matrix (tall-skinny vs. short-fat). The overall scaling behaviour of the function is not optimal, as the communication overhead grows proportionally to the number of processes. Nevertheless, the HeAT implementation is able to solve the problem of memory usage at large sample sizes, whereas Dask's computation at 14 nodes with 100\:000 samples aborted due to memory overflow. 
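The quadratic expansion above can be sketched in a few lines of NumPy (an illustrative re-implementation, not HeAT's code; \texttt{cdist\_expansion} is a hypothetical name):

```python
import numpy as np

def cdist_expansion(x, y):
    # ||x_i - y_j||^2 = ||x_i||^2 + ||y_j||^2 - 2 x_i . y_j,
    # evaluated with one matrix multiplication instead of an
    # explicit pairwise loop; clipping guards against tiny
    # negative values caused by floating-point cancellation.
    sq = (x**2).sum(1)[:, None] + (y**2).sum(1)[None, :] - 2.0 * x @ y.T
    return np.sqrt(np.clip(sq, 0.0, None))

rng = np.random.default_rng(1)
a, b = rng.normal(size=(50, 18)), rng.normal(size=(40, 18))
ref = np.sqrt(((a[:, None, :] - b[None, :, :])**2).sum(-1))
assert np.allclose(cdist_expansion(a, b), ref)
```

The expansion trades a memory-heavy broadcasted difference for a single GEMM, which is precisely the operation GPUs execute most efficiently.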
\begin{figure}[tb] \centering \hspace{-0.45cm} \begin{subfigure}[c]{0.44\textwidth} \input{images/cdist-weak.tikz} \end{subfigure} \begin{subfigure}[c]{0.44\textwidth} \input{images/cdist-strong.tikz} \end{subfigure} \caption{Pairwise Euclidean distances: weak (\emph{upper}) and strong scaling measurements (\emph{lower}), cf. \cref{subsubsec:performance-cdist}. 0.25 and 0.5 nodes refer to usage of only one and two out of the node's four GPUs.} \label{fig:scaling-cdist} \end{figure} \subsubsection{\textit{k}-means} \label{subsubsec:performance-kmeans} \textit{k}-means \cite{macqueen1967kmeans} is a vector quantization method originating from the field of signal processing. It is commonly used as an unsupervised clustering algorithm that assigns each observation $x$ within a dataset to one of $k$ disjoint partitions, each forming a cluster $C_i$. Formally, the method minimizes the within-cluster variance: \begin{equation} \argmin_{C}\sum_{i=1}^{k}\sum_{x\in C_i}\|x - c_i\|_2^2 \end{equation} for the cluster centroids $c_i$. The \textit{k}-means clustering problem is generally NP-hard, but it can be efficiently approximated with an iterative optimization method that converges to a local minimum, such as Lloyd's algorithm~\cite{lloyd1982least}. HeAT's \textit{k}-means implementation is dominated by element-wise vector operations in the distance matrix computation between the data points and the centroids, and by reduction operations for finding the best matching centroids. Benchmarks in this experiment were conducted on the Cityscapes dataset. The dataset sizes used were analogous to those in the moments experiments. For each benchmark, we performed 30 iterations of Lloyd's algorithm with eight assumed centroids. Weak scaling measurements in \cref{fig:scaling-kmeans} (upper panel) show that HeAT outperforms Dask by at least an order of magnitude. Furthermore, HeAT demonstrates solid scalability on both CPU and GPU. 
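As a minimal serial sketch of the two steps that dominate the runtime, assuming nothing about HeAT's internals, Lloyd's algorithm can be written as:

```python
import numpy as np

def lloyd(x, k, iters=30, seed=0):
    # One round of Lloyd's algorithm per iteration: assign each
    # point to its nearest centroid (the distance matrix is the
    # dominant cost), then recompute centroids as cluster means.
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        d = ((x[:, None, :] - centroids[None, :, :])**2).sum(-1)
        labels = d.argmin(1)
        for i in range(k):
            if (labels == i).any():  # keep empty clusters in place
                centroids[i] = x[labels == i].mean(0)
    return centroids, labels

rng = np.random.default_rng(2)
x = np.vstack([rng.normal(-5, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
c, labels = lloyd(x, 2)
assert c.shape == (2, 2) and labels.shape == (200,)
```

In the distributed version, the assignment step is embarrassingly parallel over the local rows, and only the per-cluster sums and counts need to be reduced across processes to update the centroids.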
For Dask, we were unable to complete the measurement procedure for all node configurations. While it was sporadically possible to complete the benchmark with a four-node configuration, it would terminate with an out-of-memory exception before the completion of the measurement sequence. For 8 and 15 nodes, we were unable to obtain any measurements due to excessive memory consumption. \begin{figure}[tb] \centering \begin{subfigure}[c]{0.44\textwidth} \input{images/kmeans-weak.tikz} \end{subfigure} \begin{subfigure}[c]{0.44\textwidth} \input{images/kmeans-strong.tikz} \end{subfigure} \caption{\textit{k}-means clustering: weak (\emph{upper}) and strong scaling measurements (\emph{lower}), cf. \cref{subsubsec:performance-kmeans}. 0.25 and 0.5 nodes refer to usage of only one and two out of the node's four GPUs.} \label{fig:scaling-kmeans} \end{figure} For HeAT, a single GPU shows better overall performance than multiple GPUs. The difference in runtime between PyTorch and HeAT on a single node can be explained by the differences in the distance matrix computation (cf. \cref{subsubsec:performance-cdist}). Strong scaling measurements are shown in \cref{fig:scaling-kmeans} (lower panel). For these measurements, 600 rows of the dataset were used for all runs. Here, we reach similar conclusions as in the weak scaling measurements: HeAT outperforms Dask by a significant margin and shows more favorable scaling behaviour, and Dask again experienced out-of-memory issues. While HeAT's CPU computations scale approximately linearly, the GPU backend exhibits strongly linear scaling. \subsubsection{LASSO} \label{subsubsec:performance-lasso} LASSO is a regression method that simultaneously applies regularization and parameter selection. Its basic form extends the ordinary least squares (OLS) regression method by an L1-norm penalty on the parameters, scaled by the regularization parameter. 
The corresponding objective function reads \begin{equation} \label{equ:lossfkt} E(w) = \|y - X w \|_2^2 + \lambda \|w_{-} \|_1 \end{equation} where $y$ denotes the $n$ samples of the output variables, $X \in \mathbb{R}^{n \times m}$ denotes the \emph{system matrix}, in which $m-1$ columns represent the different features, one column represents the constant bias term, and each of the $n$ rows represents one data sample; $w \in \mathbb{R}^{m}$ denotes the regression coefficients, $w_{-} \in \mathbb{R}^{m-1}$ the regression coefficients of the features, and $\lambda$ the regularization parameter. In contrast to the L2-norm regularization approach (i.e., ridge regression), LASSO not only favors smaller model parameters but, depending on the regularization parameter, can force selected model parameters to be exactly zero. It is a popular method to determine the importance of input variables with respect to one or more dependent output variables. In this experiment, a LASSO algorithm is used to determine the most important parameters of the EURAD-IM model with respect to the errors of the model's ozone forecasts at measurement sites\footnote{Obtained from the centralised AirBase database maintained by the European Environment Agency (EEA). The AirBase database collects near-real-time data from the European countries bound under Decision 97/101/EC, which engages them in the exchange of information on ambient air quality.}. In order to minimize the objective function, a coordinate descent algorithm with a proximal-gradient soft threshold applied to each coordinate was implemented in HeAT, Dask, NumPy, and PyTorch. For the weak scaling measurements, the LASSO algorithm is run for 20 iterations on a data sample size of 714\:280 samples per node. The HeAT CPU measurements show good weak scaling behaviour (\cref{fig:scaling-lasso}, upper panel), with a lower runtime than both the Dask and HeAT GPU versions. 
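The coordinate descent update with a soft threshold can be sketched as follows. This is a minimal NumPy illustration, not the benchmarked implementation; it uses the conventional $\frac{1}{2}\|y - Xw\|_2^2 + \lambda\|w_{-}\|_1$ scaling, which merely rescales $\lambda$ relative to Eq. (\ref{equ:lossfkt}), and all names are illustrative:

```python
import numpy as np

def soft_threshold(rho, lam):
    # Proximal operator of the L1 norm.
    return np.sign(rho) * np.maximum(np.abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, iters=20):
    # Cyclic coordinate descent: for each coordinate j, minimize
    # the objective with all other coefficients fixed; the L1 term
    # yields the soft-threshold update. Column 0 (bias) is unpenalized.
    n, m = X.shape
    w = np.zeros(m)
    col_sq = (X**2).sum(0)
    for _ in range(iters):
        for j in range(m):
            r = y - X @ w + X[:, j] * w[j]  # partial residual
            rho = X[:, j] @ r
            w[j] = rho / col_sq[j] if j == 0 else soft_threshold(rho, lam) / col_sq[j]
    return w

rng = np.random.default_rng(3)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 5))])
w_true = np.array([1.0, 2.0, 0.0, 0.0, -3.0, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=200)
w = lasso_cd(X, y, lam=1.0)
assert np.abs(w - w_true).max() < 0.1  # zeros recovered, nonzeros slightly shrunk
```

The per-coordinate updates are cheap; in a row-distributed setting, the inner products $X_{:,j}^\top r$ and column norms are the quantities that must be reduced across processes, which explains the communication-heavy profile reported above.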
Dask shows poor weak scaling due to its incomplete coverage of NumPy operations. For example, item assignments to Dask arrays are not supported by the library itself but are heavily utilized in the implemented LASSO algorithm. Consequently, Dask cannot make efficient use of its lazy evaluation concept~\cite{daskdocumentation} for this algorithm. The HeAT GPU version also does not scale well, albeit with a significantly lower runtime than Dask; this is due to the high number of communication operations required. \begin{figure}[tb] \centering \begin{subfigure}[c]{0.44\textwidth} \input{images/lasso-weak.tikz} \end{subfigure} \begin{subfigure}[c]{0.44\textwidth} \input{images/lasso-strong.tikz} \end{subfigure} \caption{LASSO regression: weak (\emph{upper}) and strong scaling measurements (\emph{lower}), cf. \cref{subsubsec:performance-lasso}. 0.25 and 0.5 nodes refer to usage of only one and two out of the node's four GPUs.} \label{fig:scaling-lasso} \end{figure} Strong scaling measurements (\cref{fig:scaling-lasso}, lower panel) were conducted on the entire sample set. The trends observed in the weak scaling measurements are also visible here. Dask shows almost no scaling, whereas the HeAT CPU measurements indicate good scaling behaviour. For the HeAT GPU implementation, the speedup decreases with increasing computing resources. Overall, it is apparent that HeAT outperforms Dask by more than two orders of magnitude. \section{Discussion} \label{sec:discussion} We have presented HeAT, a Python-based framework for distributed data analytics and machine learning on multiple CPUs and GPUs. It offers transparent parallel data handling and operations that exploit the available hardware, be it personal workstations or supercomputers. The NumPy-like API enables users to easily translate existing NumPy code into distributed applications. 
As the user base of HPC resources is predominantly composed of domain experts with limited knowledge of parallel programming concepts, the number of exposed configuration parameters for parallel constructs is minimized. These constructs are nevertheless powerful and versatile enough for the implementation of a large variety of distributed algorithms. PyTorch has been selected as the eager, node-local compute engine of HeAT. As a direct result, HeAT benefits from PyTorch's highly optimized functions. However, PyTorch offers neither parallel data decomposition nor distributed algorithms. HeAT provides a custom communication layer for N-dimensional array objects and specifically designed hierarchical algorithms to leverage the full potential of PyTorch in distributed environments. This strategy enables the efficient utilization of the underlying hardware by exploiting locality in the memory hierarchy. Memory distribution for scalable algorithms requires efficient communication between compute nodes at runtime via high-speed network links. HeAT is designed to leverage the potential of such systems. Significant efforts have been made to efficiently utilize the available hardware, including multi-GPU architectures, while avoiding central bottlenecks, such as workload schedulers, and excessive I/O, e.g. serial file access or journaling. Weak and strong scaling experiments on a number of applications (\cref{sec:performance-result}) demonstrate that HeAT consistently outperforms Dask by up to two orders of magnitude. Moreover, larger datasets caused Dask to raise memory errors in several experiments. HeAT enables users to attain a substantial portion of the maximum potential performance via a high-level interface, relatively independently of the data characteristics. Coupling Dask with CuPy, cuDF, or cuML can in principle be used for distributed GPU computation. 
Within the performance evaluations in \cref{subsubsec:performance-mean}, a multi-GPU benchmark for Dask with CuPy was conducted for the mean calculation, which did not show any improvement. During these experiments it was found that the native coupling only supports a multi-node single-GPU setup. We hypothesize that Dask does not take care of copying the data from GPU to CPU before communicating and that, therefore, the communication stack cannot properly access the VRAM. In order to enable multi-node multi-GPU support, data must be moved manually between CPU and GPU for every operation that utilizes communication, as demonstrated in \cref{lst:DaskCupy} (Appendix). For high-level algorithms, this is a very cumbersome task requiring a substantial understanding of distributed programming. As discussed previously, most frameworks for parallel deep learning applications primarily focus on data parallelism; to the authors' knowledge, generalized model parallelism is not available in them. HeAT's programming model facilitates straightforward data parallelism as well as model parallelism and pipelining. The custom communication layer allows for the implementation of distributed automatic differentiation, which is a vital component of a distributed model architecture. First steps in this direction are already underway with \texttt{mpi4torch}~\cite{mpi4torch}, a prototype for distributed AD. This subsequently opens up the opportunity to develop other high-level differentiable algorithms. In light of the ever-increasing need for machine learning models that yield reliable predictions, considerable efforts have been put towards the development of probabilistic approaches. HeAT's programming model and internal design give access to all levels of algorithmic development and thereby offer an intuitive way to implement such approaches. 
\section{Conclusion} \label{sec:conclusion} With HeAT, we address the needs of an ever-growing community of scientists, both in academia and industry, who seek to accelerate the process of extracting information from Big Data. To this end, we have set upon the task of combining data analytics and machine learning algorithms with state-of-the-art high-performance computing concepts into an easy-to-use Python library. We have demonstrated that even in its current early stage, HeAT offers great potential. The convergence of speed and usability sets it up to redefine high-performance data analytics by putting high levels of parallelism within reach of scientists in academia and industry alike. HeAT offers a way to easily develop application-specific algorithms while leveraging the available computational resources, setting it apart from other approaches and libraries. \newpage \section*{Acknowledgment} \label{sec:acknowledgment} The authors would like to thank the system administrators at the Jülich Supercomputing Centre and in particular Dr. Alexandre Strube for their continuous support in maintaining the benchmarking HPC system. Furthermore, we want to thank the Helmholtz Analytics Framework collaboration for thorough feedback and valuable suggestions. \bibliographystyle{IEEEtranDOI}
\section{Introduction} \label{sec1} The Hepatitis C Virus (HCV) is a blood-borne virus identified as a main cause of liver disease [\citenum{JJJ,Prati,Wasley}]. Globally, about three percent of the world population (170 million people) are infected with HCV, and 71 million people have a chronic hepatitis C infection [\citenum{JJJ,Zhang,ZhangZhou,Chen}]. Several studies showed that, without treatment, the chronic stage of HCV leads to cirrhosis and liver cancer, and approximately 339 thousand people die every year from these diseases [\citenum{JJJ,Bisceglie}]. Despite these statistics, which make HCV infection one of the major health threats, the disease has received little attention, especially in the regions with higher infection rates [\citenum{Zhang}].\\ Although fatigue and jaundice have been mentioned as symptoms of HCV, the disease often shows no considerable symptoms, even in advanced stages. This is the reason why the HCV outbreak is called ``the silent epidemic'' [\citenum{Zhang,YuanYang}]. Several transmission routes have been reported for HCV, including sharing of injection equipment, unsafe sexual contact, inadequate sterilization of syringes and needles (especially among health-care personnel), and transfusion of contaminated blood [\citenum{JJJ,Klevens}]. Even though these are the main drivers of the HCV epidemic, other routes may also be critical in some societies under special conditions. For instance, in developed countries, where blood transfusion is strictly controlled, the importance of injecting drug use in the transmission of the disease has increased compared to the transfusion of contaminated blood and blood products [\citenum{Prati,Klevens}].\\ Natural cure at the chronic stage of HCV is not common, but for about 10--15\% of patients the HCV RNA becomes undetectable in their serum [\citenum{ZhangZhou,Chen}]. 
For the remaining patients (80--85\%), whose immune response cannot clear the HCV, drug therapy regimens must be employed. Hepatitis C drugs have recently seen considerable development: safe, highly effective, and well-tolerated combinations of direct-acting oral antivirals are now available for this disease [\citenum{Zhang,Banerjee}]. Although vaccination is the vital way of controlling many viral diseases, there is unfortunately no vaccine for HCV yet [\citenum{ZhangZhou}]. Therefore, prevention plays an important role in stopping the spread of the epidemic.\\ In the present work, a nonlinear adaptive method is developed for treatment and control of the HCV epidemic. For this purpose, the recently published nonlinear HCV epidemiological model of [\citenum{Zhang}] is employed, and, in contrast to the previous optimal strategies [\citenum{Zhang}], different parametric uncertainties are taken into account. The main goal of the proposed control scheme is to decrease the populations of the unaware susceptible and chronically infected compartments in the presence of parametric uncertainties. Accordingly, two control inputs (the effort to inform susceptible individuals and the treatment rate) are employed to track descending desired populations of the aforementioned compartments. The asymptotic stability and tracking convergence of the closed-loop system under uncertainties are proven using the Lyapunov stability theorem and Barbalat's lemma. The innovations of this research are as follows: \begin{itemize} \item For the first time, a nonlinear adaptive method is developed to control the HCV epidemic, by defining a novel Lyapunov function candidate that provides the tracking convergence proof. 
\item Due to the lack of accurate information about the HCV model parameters in each society, parametric uncertainty is taken into account in this research for the first time, and the defined control objectives are achieved in the presence of these inaccuracies. \item In all previous studies on the control of the HCV outbreak, the populations of some undesired compartments at the end of the investigation period were considered as the design criterion for the control inputs [\citenum{Oare,Okosun,Zhang,khodaei,khodaei2}]. In the present work, for the first time, the populations of the unaware susceptible and chronically infected classes during the entire treatment period are considered as the criterion, and the control inputs are designed to track desired values instead of focusing on the final populations at the end of the process. \end{itemize} The rest of this chapter is organized as follows. In Sec. 2, related research work is reviewed. The dynamic model and the proposed control scheme are described in Sec. 3 and Sec. 4, respectively. Simulation results are presented and discussed in Sec. 5, and concluding remarks are given in Sec. 6. \section{Related research work} \label{Related} Previous related studies are presented in this section, divided into three parts: mathematical modeling of HCV, optimal control of HCV, and adaptive control strategies for different biological systems.\\ Several analytical studies have been conducted on the dynamic modeling of the HCV epidemic. Martcheva and Chavez [\citenum{Martcheva}] presented a simple mathematical model with three compartmental variables: susceptible, acutely infected, and chronically infected individuals. They considered different epidemiological observations in the model. Yuan and Yang [\citenum{YuanYang}] added an exposed class to this model [\citenum{Martcheva}]. 
They assumed that susceptible individuals move to the exposed compartment upon contact with the infected compartments. Zhang and Zhou [\citenum{ZhangZhou}] added a new term to the model of Yuan and Yang [\citenum{YuanYang}], denoting the death rate due to HCV. Hu and Sun [\citenum{Hu}] proposed another epidemiological model for HCV with four classes, in which a recovered compartment was taken into account for the first time. Naturally, recovered people move to this class from the acutely and chronically infected compartments and become immune to the disease. Ainea et al. [\citenum{Ainea}] extended this model [\citenum{Hu}] by adding an exposed class. Both models [\citenum{Hu}] and [\citenum{Ainea}] considered the HCV disease-induced death rate for both the acutely and chronically infected classes. Shen et al. [\citenum{Shen}] proposed a dynamical model with six classes: susceptible, exposed, acutely infected, chronically infected, treated, and recovered populations. They introduced the influence of treatment for the first time and placed treated people in a distinct class. Shi and Cui [\citenum{Shi}] improved the model in [\citenum{Shen}] and divided the treated class into two classes, distinguishing the treatment of chronic infection from that of reinfection. \\ Some research has been conducted on the optimal control of the HCV outbreak. Okosun [\citenum{Oare}] employed a SITV (susceptible, acutely infected, treated and chronically infected) model for HCV, an extended form of the dynamics presented in [\citenum{YuanYang}]. This model [\citenum{Oare}] included the treatment compartment and considered the movement of susceptible, treated, and acutely and chronically infected people among their compartments. Several time-dependent optimal control strategies were proposed in order to control the HCV disease. 
A cost function was calculated for these strategies in order to evaluate the effectiveness of the control methods and to select the most efficient one. Okosun and Makinde [\citenum{Okosun}] employed a SEITV (susceptible, exposed, acutely infected, treated and chronically infected) dynamical model for the HCV outbreak, considering the screening rate and drug efficacy as control inputs for the acutely and chronically infected populations, and used Pontryagin's maximum principle to solve the optimal control problem. Another epidemiological model was investigated in [\citenum{Zhang}] for the HCV outbreak, in which the susceptible class was divided into aware and unaware classes. Moreover, two control inputs, the screening and treatment rates, were considered for the HCV epidemic model and determined by an optimal control law. In [\citenum{Zhang}], the dynamics was formulated with a susceptibility reduction due to publicity and the treatment process, in order to identify the feasible effect of public awareness and treatment on HCV. An optimal neuro-fuzzy strategy was also introduced in [\citenum{khodaei}] to control the HCV epidemic; the authors employed the mathematical model proposed in [\citenum{Okosun}] as the deterministic model and utilized a genetic algorithm to obtain the optimal control inputs. \\ As described, all previous studies on the control of the HCV epidemic relied on optimal strategies. On the other hand, some research has been performed on the adaptive control of different diseases, as presented here. Moradi et al. [\citenum{Moradi}] suggested a Lyapunov-based adaptive method to control three different hypothetical models of cancer chemotherapy inside the human body and compared the results among these models. In the next step of this research [\citenum{Sharifi2}], a composite adaptive strategy was developed for online identification of cancer parameters during the chemotherapy process. Boiroux et al. 
[\citenum{Boiroux}] employed a model predictive controller for a type 1 diabetes model and used an adaptive controller to balance the blood glucose. They determined the model parameters based on clinical information from past patients. Aghajanzadeh et al. [\citenum{Aghajanzadeh}] suggested an adaptive control strategy for hepatitis B virus infection inside the human body using antiviral drugs. They considered uncertainties in the model parameters and employed an adaptive controller to control the dynamics despite these uncertainties. Sharifi and Moradi [\citenum{Sharifi}] designed a robust scheme with adaptive gains to control the influenza epidemic, considering the uncertainties of its dynamic model. Padmanabhan et al. [\citenum{Padmanabhan}] proposed an optimal adaptive method to control the sedative drug dose in anesthesia administration. They employed an integral reinforcement learning method in order to overcome the uncertainty of the parameter values.\\ \section{Dynamic model of hepatitis C virus epidemic} \label{sec2} Mathematical modeling is a useful way of analyzing the epidemiology of a disease. These models have two important capabilities: 1) providing a mechanistic understanding of the disease, and 2) exploring potential outcomes of the epidemic under different conditions [\citenum{Probert}]. To assess the proposed method for controlling HCV prevalence in a population, a nonlinear compartmental model is used with five different classes: unaware susceptible (\textit{$S_u$}), aware susceptible (\textit{$S_a$}), acutely infected (\textit{I}), chronically infected (\textit{C}) and treated (\textit{T}) humans [\citenum{Zhang}]. The susceptible compartment is divided into two classes of aware and unaware people. Note that, unlike the unaware population, aware people have information about the HCV transmission routes and prevention methods. 
Since no vaccine is available for HCV, informing people about prevention methods is a very important way to reduce the risk of infection for susceptible people [\citenum{JJJ}]. Therefore, the unaware susceptible individuals (\textit{$S_u$}) become infected upon contact with the infected populations (\textit{I, C} and \textit{T}) at a higher rate than the aware susceptible individuals (\textit{$S_a$}) [\citenum{Zhang}]. Thus, in the dynamic model, the transmission rate for unaware susceptible humans (\textit{$S_u$}) is considered larger than that for aware susceptible humans (\textit{$S_a$}) [\citenum{Zhang}]. The nonlinear mathematical model of the HCV epidemic is as follows: \begin{align} &\dot{S_u}=b-\lambda_{S_u}\frac{S_u}{N}-(\mu+u_1(t))S_u+(1-q)\gamma I \nonumber \\ &\dot{S_a}=u_1(t) S_u-\lambda_{S_a}\frac{S_a}{N}-\mu S_a+(1-p)\xi T \nonumber \\ &\dot{I}=\lambda_{S_u}\frac{S_u}{N}+\lambda_{S_a}\frac{S_a}{N}-(\mu + \gamma)I \label{DynEq} \\ &\dot{C}=q \gamma I-(\mu + u_2(t) + \theta)C + p \xi T \nonumber \\ &\dot{T}=u_2(t) C- (\mu + \xi)T \nonumber \end{align} where $\lambda_{S_u}=\beta (I+K_1 C + K_2 T)$ and $\lambda_{S_a}=\alpha \lambda_{S_u}$. Here, $u_1$ and $u_2$ are the control inputs, defined as the rate of effort to inform unaware susceptible individuals and the treatment rate for the chronically infected class, respectively. \textit{N} denotes the total population and is given by: \begin{align} N=S_u+S_a+I+C+T \end{align} The population of unaware susceptible individuals (\textit{$S_u$}) increases at the rate \textit{b}. Unaware and aware susceptible individuals become infected through contact with acutely infected, chronically infected, and treated individuals at the rates $\lambda_{S_u}$ and $\lambda_{S_a}$, respectively. The infectiousness of acutely infected people is higher than that of chronically infected individuals, and treated people have the lowest rate; thus, it is assumed that $K_1>K_2$ [\citenum{Zhang,ZhangZhou}]. 
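As a quick numerical illustration, the five-compartment model above can be integrated with a simple forward-Euler scheme; the parameter values, control inputs, and initial populations below are illustrative placeholders, not calibrated values from the cited literature:

```python
import numpy as np

def hcv_step(state, params, u1, u2, dt):
    # One forward-Euler step of the five-compartment HCV model.
    Su, Sa, I, C, T = state
    b, mu, beta, K1, K2, alpha, gamma, q, xi, p, theta = params
    N = Su + Sa + I + C + T
    lam_su = beta * (I + K1 * C + K2 * T)
    lam_sa = alpha * lam_su
    dSu = b - lam_su * Su / N - (mu + u1) * Su + (1 - q) * gamma * I
    dSa = u1 * Su - lam_sa * Sa / N - mu * Sa + (1 - p) * xi * T
    dI = lam_su * Su / N + lam_sa * Sa / N - (mu + gamma) * I
    dC = q * gamma * I - (mu + u2 + theta) * C + p * xi * T
    dT = u2 * C - (mu + xi) * T
    return state + dt * np.array([dSu, dSa, dI, dC, dT])

# Illustrative placeholders: (b, mu, beta, K1, K2, alpha, gamma, q, xi, p, theta)
params = (20.0, 0.02, 0.4, 0.8, 0.1, 0.2, 0.5, 0.8, 0.3, 0.1, 0.01)
state = np.array([900.0, 50.0, 30.0, 15.0, 5.0])  # (Su, Sa, I, C, T)
for _ in range(1000):
    state = hcv_step(state, params, u1=0.05, u2=0.2, dt=0.05)
assert (state >= 0).all() and state.sum() > 0  # populations remain feasible
```

Fixing $u_1$ and $u_2$ to constants, as here, corresponds to an uncontrolled baseline; in the proposed scheme these two inputs are instead adapted online to track the descending desired trajectories for $S_u$ and $C$.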
The total population (\textit{N}) decreases at two different rates $\mu$ and $\theta$, where $\mu$ denotes the rate of natural death, which affects all compartments, while $\theta$ is the rate of HCV-induced death, which decreases the population of the chronically infected compartment (\textit{C}). \\ During the acute stage (\textit{I}), the HCV can behave differently for each patient depending on his/her immune system response. For 15 to 25\% of cases in this stage, the RNA of HCV becomes undetectable in the blood serum and the ALT level returns to the normal range. This observation is represented by the term $(1-q)\gamma I$ in the proposed HCV dynamics [\citenum{Zhang,Chen}]. In approximately 75-85\% of patients, the immune system cannot remove the hepatitis C virus in the acute stage and the disease advances to the chronic stage. Note that if the HCV RNA remains in the patient's blood for at least six months after the onset of acute infection, the chronic level of the disease appears, which is represented by the term $q\gamma I$ in Eq. (\ref{DynEq}) [\citenum{ZhangZhou,Chen}]. Finally, failure of the treatment process is represented by the proportion \textit{p}. The treated population decreases at the rate $\xi T$; in the case of treatment failure, a part of this population joins the chronic class at the rate $p \xi T$, and the rest $(1-p) \xi T$ joins the aware susceptible class if the treatment is successful. The schematic diagram of the proposed nonlinear dynamics of the HCV epidemic is depicted in Fig.
\ref{SchModel} and descriptions of the parameters are presented in Table \ref{ParDef} [\citenum{Zhang}].\\ \begin{figure*} \begin{center} \includegraphics[width=0.75\textwidth]{SchModel.JPG} \caption{Schematic diagram of transitions among the different classes of the HCV epidemic} \label{SchModel} \end{center} \end{figure*} \begin{table} \caption{Parameters of the mathematical model of the HCV (\ref{DynEq}) [\citenum{Zhang}]} \label{ParDef} \begin{tabular}{ll} \hline Parameter & Description \\ \hline b & Rate of birth \\ $\mu$ & Rate of death \\ $\beta$ & Transmission coefficient \\ $K_1$ & Chronic stage infectiousness relative to acute stage \\ $K_2$ & Treated individuals' infectiousness relative to acute ones \\ $\alpha$ & Rate of being infected for aware people relative to unaware ones \\ $\gamma$ & Leaving rate of acutely infected class \\ $q$ & Progressing proportion from acute stage to chronic one \\ $\xi$ & Transferring rate from treated class to other ones \\ $p$ & Moving back proportion from treated class to chronic one \\ $\theta$ & HCV induced death rate \\ \hline \end{tabular} \end{table} \section{Nonlinear adaptive controller formulation for epidemiology of HCV} \label{sec3} In the present section, a new nonlinear adaptive controller is formulated for the uncertain hepatitis C virus epidemic. The main purpose of the control method is to minimize the populations of the unaware susceptible (\textit{$S_u$}) and chronically infected (\textit{C}) classes. Two control inputs $u_1(t)$ and $u_2(t)$ are considered in order to reach this objective. $u_1(t)$ denotes the effort rate to inform the susceptible individuals about the HCV through media publicity, educational campaigns, public service advertising and so on, and $u_2(t)$ reflects the rate of treatment of chronically infected individuals [\citenum{Zhang}].
\\ Using the above-mentioned control inputs, the populations of the unaware susceptible (\textit{$S_u$}) and chronically infected (\textit{C}) classes decrease by tracking desired trajectories. Moreover, as these compartments shrink, the number of aware susceptible (\textit{$S_a$}) individuals increases, while the treated (\textit{T}) population eventually decreases. The Lyapunov stability theorem is employed to prove the stability of the closed-loop system. In addition, adaptation laws are defined to update the estimated parameters of the system, guaranteeing the stability and robustness of the system against uncertainties of the dynamic model. A conceptual diagram of the proposed nonlinear feedback controller with the adaptive scheme is illustrated in Fig. \ref{SchControl}. \\ \begin{figure} \begin{center} \includegraphics[scale=0.45]{SchControl.JPG} \caption{Conceptual diagram of the nonlinear adaptive method developed to control the HCV epidemic in the presence of uncertainties on the parameters of the model} \label{SchControl} \end{center} \end{figure} \subsection{Nonlinear adaptive control laws} \label{subsec3.1} The control inputs ($u_1(t)$, $u_2(t)$) can be obtained from the dynamics of the unaware susceptible and chronically infected compartments in Eq. (\ref{DynEq}) as: \begin{align} u_1&=-\frac{\dot{S_u}}{S_u}+\frac{b}{S_u}-\frac{\beta}{N} (I+K_1C+K_2T)-\mu+(1-q)\gamma \frac{I}{S_u} \label{u1} \\ u_2&=-\frac{\dot{C}}{C}+q \gamma \frac{I}{C}-(\mu + \theta) + p\xi \frac{T}{C} \label{u2} \end{align} \\ \textbf{Property.} The right-hand sides of Eqs. (\ref{u1}) and (\ref{u2}) can be linearly parameterized in terms of their parameters. $\phi_1$ and $\phi_2$ are introduced as arbitrary variables in place of $\dot{S_u}$ and $\dot{C}$.
Now the equations can be rewritten as: \begin{align} -\frac{\dot{S_u}}{S_u}+\frac{b}{S_u}-\frac{\beta}{N} (I+K_1C+K_2T)-\mu+(1-q)\gamma \frac{I}{S_u} \label{u1par}&= -\frac{\phi_1}{S_u}+Y_1 \theta_1 \\ -\frac{\dot{C}}{C}+q \gamma \frac{I}{C}-(\mu + \theta) + p\xi \frac{T}{C}= -\frac{\phi_2}{C}+Y_2 \theta_2 \label{u2par} \end{align} where $Y_1$ and $Y_2$ are regressor matrices containing known functions of the HCV epidemic variables, and $\theta_1$ and $\theta_2$ are parameter vectors containing the unknown parameters of the dynamics. Accordingly, these matrices and vectors are defined in Eqs. (\ref{RegPar1}) and (\ref{RegPar2}) as \begin{align} Y_1= &\begin{bmatrix} \dfrac{1}{S_u} & -\dfrac{I}{N} & -\dfrac{C}{N} & -\dfrac{T}{N} & \dfrac{I}{S_u} & -1 \end{bmatrix}; \quad \theta_1= \begin{bmatrix} b & \beta & \beta K_1 & \beta K_2 & (1-q)\gamma & \mu \end{bmatrix}^T \label{RegPar1}\\ Y_2= &\begin{bmatrix} \dfrac{I}{C} & \dfrac{T}{C} & -1 \end{bmatrix}; \quad \theta_2= \begin{bmatrix} q \gamma & p \xi & (\mu + \theta) \end{bmatrix}^T \label{RegPar2} \end{align} This regressor representation condenses the equations and is used to define the adaptation and control laws. In order to design the nonlinear control laws, two new variables $\phi_1$ and $\phi_2$ are defined as follows: \begin{align} \phi_1&=\dot{S}_{u_d}-\lambda_1 \tilde{S}_u \\ \phi_2&=\dot{C}_d-\lambda_2 \tilde{C} \end{align} where $\lambda_1$ and $\lambda_2$ are the controller gains, considered to be positive constants.
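The linear-parameterization property is easy to check numerically: the product $Y_1\theta_1$ must reproduce the analytic expression of Eq. (\ref{u1par}) apart from the $-\phi_1/S_u$ term. A small sketch (function name and test values are ours, with the parameter values of Table \ref{ParVal}):

```python
import numpy as np

def regressor_Y1(Su, Sa, I, C, T):
    """Regressor row vector Y1 of Eq. (RegPar1); N is the total population."""
    N = Su + Sa + I + C + T
    return np.array([1.0 / Su, -I / N, -C / N, -T / N, I / Su, -1.0])

# Parameter vector theta1 = [b, beta, beta*K1, beta*K2, (1-q)*gamma, mu]^T
b, beta, K1, K2, q, gamma, mu = 0.012, 0.15, 0.5, 0.2, 0.2, 4.0, 0.006
theta1 = np.array([b, beta, beta * K1, beta * K2, (1 - q) * gamma, mu])
```

The same check applies, mutatis mutandis, to $Y_2\theta_2$ in Eq. (\ref{u2par}).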
Now, the nonlinear adaptive control laws are defined as \begin{align} u_1&=-\frac{\dot{S_{u_d}}-\lambda_1 \tilde{S_u}}{S_u}+Y_1 \hat{\theta}_1 \label{ConLaw1} \\ u_2&=-\frac{\dot{C_d}-\lambda_2 \tilde{C}}{C}+Y_2 \hat{\theta}_2 \label{ConLaw2} \end{align} where $\hat{\theta}_1$ and $\hat{\theta}_2$ are the vectors of estimated parameters.\\ In the next section, taking advantage of the Lyapunov stability theorem, it will be proven that the control laws (\ref{ConLaw1}) and (\ref{ConLaw2}), together with suitable adaptation laws, guarantee the tracking convergence, stability and robustness of the treatment of the HCV outbreak. \subsection{Stability proof and adaptation laws} The closed-loop dynamics of the system is first obtained by substituting the control laws (\ref{ConLaw1}) and (\ref{ConLaw2}) into the dynamics of the HCV epidemic (\ref{DynEq}): \begin{align} \frac{\dot{\tilde{S}}_u+\lambda_1 \tilde{S}_u}{S_u}&=-Y_1 \tilde{\theta}_1 \label{ClsLp1}\\ \frac{\dot{\tilde{C}}+\lambda_2 \tilde{C}}{C}&=-Y_2 \tilde{\theta}_2 \label{ClsLp2} \end{align} where $\tilde{\theta}_i$ (for \textit{i}=1, 2) is defined as $\hat{\theta}_i-\theta_i$.\\ The adaptation laws are designed to update the parameter estimates so as to preserve the system's robustness against uncertainties: \begin{align} \dot{\hat{\theta}}_1^T&=S_u \tilde{S}_u Y_1 \Gamma_1 \label{AdpLaw1}\\ \dot{\hat{\theta}}_2^T&=C \tilde{C} Y_2 \Gamma_2 \label{AdpLaw2} \end{align} where $\Gamma_1$ and $\Gamma_2$ are the adaptation gain matrices, considered to be positive definite.\\ Now, employing the Lyapunov stability theorem [\citenum{Slotine}] and based on the derived closed-loop dynamics (\ref{ClsLp1})-(\ref{ClsLp2}) and the designed adaptation laws (\ref{AdpLaw1})-(\ref{AdpLaw2}), the tracking convergence, stability and robustness for the unaware susceptible and chronically infected classes will be proven.
With this aim, a positive definite Lyapunov candidate function is selected as \begin{align} V=\frac{1}{2}[\tilde{S}_u^2+\tilde{C}^2+\tilde{\theta}_1^T \Gamma_1^{-1}\tilde{\theta}_1 + \tilde{\theta}_2^T \Gamma_2^{-1}\tilde{\theta}_2] \label{Lyapunov} \end{align} The Lyapunov function's time derivative is: \begin{align} \dot{V}=\tilde{S}_u\dot{\tilde{S}}_u +\tilde{C}\dot{\tilde{C}} + \dot{\hat{\theta}}_1^T \Gamma_1^{-1}\tilde{\theta}_1 + \dot{\hat{\theta}}_2^T \Gamma_2^{-1}\tilde{\theta}_2 \label{Vdot} \end{align} It should be mentioned that $\dot{\tilde{\theta}}=\dot{\hat{\theta}}$ because $\theta$ is constant ($\dot{\theta}$ is zero). By substituting the closed-loop dynamics (\ref{ClsLp1})-(\ref{ClsLp2}) and the adaptation laws (\ref{AdpLaw1}) and (\ref{AdpLaw2}) into (\ref{Vdot}), the time derivative of V simplifies to: \begin{align} \dot{V}=-\lambda_1 \tilde{S}_u^2 -\lambda_2 \tilde{C}^2 \label{Vdot2} \end{align} As mentioned previously, $\lambda_1$ and $\lambda_2$ are positive; thus, the Lyapunov function's time derivative is negative semi-definite. Hence, based on Barbalat's lemma (described in Appendix A) and the Lyapunov stability theorem [\citenum{Slotine}], it is proven that $\tilde{S}_u$ and $\tilde{C}$ converge to zero. In other words, employing the suggested adaptive feedback control strategy ensures the tracking convergence and stability ($\tilde{S}_u \rightarrow 0$ and $\tilde{C} \rightarrow 0$ as $t\rightarrow \infty$) in the presence of uncertainties. Thus, the numbers of unaware susceptible (\textit{$S_u$}) and chronically infected (\textit{C}) individuals converge to the desired values ($S_u\rightarrow S_{u_{d}}$ and $C\rightarrow C_{d}$). \\ \section{Results and discussion} To evaluate the effectiveness of the proposed method, simulations are presented in this section. Note that computer simulations have proven to be useful for evaluating the spread behavior of infectious diseases [\citenum{Orbann}].
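As a sanity check for any implementation used in such simulations, the cancellation that produces Eq. (\ref{Vdot2}) can be verified numerically at a single time instant: plugging the closed-loop error dynamics (\ref{ClsLp1})-(\ref{ClsLp2}) and the adaptation laws (\ref{AdpLaw1})-(\ref{AdpLaw2}) into Eq. (\ref{Vdot}), the regressor terms cancel exactly. The values below are arbitrary stand-ins of ours, not model quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary stand-in values for the closed-loop quantities at one time instant
Y1, Y2 = rng.normal(size=6), rng.normal(size=3)            # regressors
th1_err, th2_err = rng.normal(size=6), rng.normal(size=3)  # tilde(theta)_i
A1 = rng.normal(size=(6, 6)); Gamma1 = A1 @ A1.T + 6 * np.eye(6)  # positive definite
A2 = rng.normal(size=(3, 3)); Gamma2 = A2 @ A2.T + 3 * np.eye(3)
Su, C, Su_err, C_err, lam1, lam2 = 900.0, 80.0, 12.0, -4.0, 2.0, 1.5

# Closed-loop error dynamics (ClsLp1)-(ClsLp2)
dSu_err = -lam1 * Su_err - Su * (Y1 @ th1_err)
dC_err = -lam2 * C_err - C * (Y2 @ th2_err)
# Adaptation laws (AdpLaw1)-(AdpLaw2), written in column form (Gamma_i symmetric)
dth1_hat = Su * Su_err * Gamma1 @ Y1
dth2_hat = C * C_err * Gamma2 @ Y2

# Time derivative of V, Eq. (Vdot): the Y_i terms cancel, leaving Eq. (Vdot2)
Vdot = (Su_err * dSu_err + C_err * dC_err
        + dth1_hat @ np.linalg.solve(Gamma1, th1_err)
        + dth2_hat @ np.linalg.solve(Gamma2, th2_err))
```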
In the present study, the simulation process is performed in the Matlab/Simulink environment. The parameter values of the HCV epidemic model (\ref{DynEq}) are listed in Table \ref{ParVal}. \begin{table} \begin{center} \caption{Values of the HCV parameters in its mathematical model (\ref{DynEq}) [\citenum{Zhang}]} \label{ParVal} \begin{tabular}{lc} \hline Parameter & Value \\ \hline b & 0.012 \\ $\mu$ & 0.006 \\ $\beta$ & 0.15 \\ $K_1$ & 0.5 \\ $K_2$ & 0.2 \\ $\alpha$ & 0.1 \\ $\gamma$ & 4 \\ $q$ & 0.2 \\ $\xi$ & 0.8 \\ $p$ & 0.5 \\ $\theta$ & 0.001 \\ \hline \end{tabular} \end{center} \end{table} A small society with a total population of 1310 people at the beginning of the investigation is considered. The following desired scenarios describe the reduction of unaware susceptible individuals (\textit{$S_{u_d}$}) and the treatment of chronically infected people (\textit{$C_d$}): \begin{align} S_{u_d}&=(S_{u_0}-S_{u_f})\exp(-a_1 t)+ S_{u_f} \label{Desired1} \\ C_d&=(C_0-C_f)\exp(-a_2 t)+ C_f \label{Desired2} \end{align} where \textit{$a_1$} and \textit{$a_2$} are the desired population reduction rates, \textit{$S_{u_0}$} and \textit{$S_{u_f}$} are the initial and final (steady-state) populations of the unaware susceptible class, respectively, and \textit{$C_0$} and \textit{$C_f$} are the initial and final (steady-state) populations of the chronically infected compartment, respectively.\\ The reduction and treatment scenarios (\ref{Desired1}) and (\ref{Desired2}) are employed in these simulations as the desired decreasing behavior of the HCV epidemic control; however, other continuously decreasing profiles can be used as desired scenarios without loss of generality. The values of the parameters in the desired HCV population reduction scenarios (\ref{Desired1}) and (\ref{Desired2}) are listed in Table \ref{DesVal}. These scenarios for the unaware susceptible and chronically infected compartments are shown in Fig.
\ref{Desired}.\\ \begin{table} \begin{center} \caption{Values of parameters in the desired HCV population reduction scenarios (\ref{Desired1}) and (\ref{Desired2})} \label{DesVal} \begin{tabular}{lc} \hline Parameter & Value \\ \hline $S_{u_0}$ & 1000 \\ $C_0$ & 100 \\ $S_{u_f}$ & 0 \\ $C_f$ & 0 \\ $a_1$ & 0.4 \\ $a_2$ & 0.2 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[scale=0.75]{Desired.eps} \caption{Desired scenarios for the reduction of the unaware susceptible and chronically infected compartments in the HCV epidemic} \label{Desired} \end{center} \end{figure} In the absence of control inputs, the HCV infection spreads in the society according to Eq. (\ref{DynEq}). Accordingly, the treated population decreases and reaches zero exponentially due to the lack of a treatment process. In that case (no control input), unaware and aware susceptible individuals contract the hepatitis C virus on contact with the infected people in the \textit{I} and \textit{C} compartments and join the acutely infected class (\textit{I}). Since there is no treatment for acutely infected individuals (as seen in Eq. (\ref{DynEq})), the HCV disease progresses and reaches the chronic stage. Thus, the population of the chronically infected compartment (\textit{C}) increases and the populations of all other compartments decrease.
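The reference scenarios (\ref{Desired1})-(\ref{Desired2}) above are simple exponentials and can be sketched directly (the function name is ours; the default arguments are the values of Table \ref{DesVal}):

```python
import numpy as np

def desired_trajectories(t, Su0=1000.0, Suf=0.0, C0=100.0, Cf=0.0, a1=0.4, a2=0.2):
    """Exponential reference scenarios of Eqs. (Desired1)-(Desired2):
    both curves decrease monotonically from (Su0, C0) toward (Suf, Cf)."""
    Su_d = (Su0 - Suf) * np.exp(-a1 * t) + Suf
    C_d = (C0 - Cf) * np.exp(-a2 * t) + Cf
    return Su_d, C_d
```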
Figure \ref{No control} depicts this no-control behavior of the HCV outbreak.\\ \begin{figure} \begin{center} {\includegraphics[scale=0.75]{No_control1.eps}}\label{No control1}\\ (a)\\ { \includegraphics[scale=0.75]{No_control2.eps}}\label{No control2}\\ (b) \caption{Populations of (a) unaware and aware susceptible, and (b) acutely infected, chronically infected and treated classes in the absence of control inputs} \label{No control} \end{center} \end{figure} However, when the proposed strategy is applied, based on the designed nonlinear control laws (\ref{ConLaw1}) and (\ref{ConLaw2}) together with the obtained adaptation laws (\ref{AdpLaw1}) and (\ref{AdpLaw2}), the population changes of the different compartments in the presence of 20\% parametric uncertainty are as depicted in Fig. \ref{With control}.\\ \begin{figure} \begin{center} {\includegraphics[scale=0.75]{With_control1.eps}}\label{With control1}\\ (a)\\ {\includegraphics[scale=0.75]{With_control2.eps}}\label{With control2} \\ (b) \caption{Populations of (a) unaware and aware susceptible, and (b) acutely infected, chronically infected and treated classes in the presence of control inputs ($u_1$ and $u_2$) based on the proposed laws (\ref{ConLaw1}) and (\ref{ConLaw2})} \label{With control} \end{center} \end{figure} As seen, due to the employment of the first control input (\textit{$u_1$}), the number of unaware susceptible individuals (\textit{$S_u$}) decreases as they join the aware susceptible class (\textit{$S_a$}). Since aware susceptible people are less likely to become infected than unaware ones, thanks to the control input $u_1$, the spread of the HCV infection decreases compared with the no-control-input case (shown in Fig. \ref{No control}). Moreover, using the treatment rate as the second control input (\textit{$u_2$}), the population of the chronically infected compartment (\textit{C}) decreases (Fig. \ref{With control}) according to the described scenarios ($C_d$ in Fig. \ref{Desired}).
Thus, the populations of the unaware susceptible and chronically infected classes decrease and the population of the aware susceptible class increases in Fig. \ref{With control}, in accordance with the HCV dynamics (\ref{DynEq}). Although 20\% parametric uncertainty is taken into account in the nonlinear model, the simulation results show that the proposed control strategy satisfies its objective, namely convergence to the desired population reduction and treatment scenarios ($S_u \rightarrow S_{u_d}$ and $C \rightarrow C_d$). Figure \ref{Convergence} depicts the desired and real populations of the unaware susceptible and chronically infected classes, which indicates the appropriate convergence performance of the nonlinear controller. The corresponding tracking errors are presented in Fig. \ref{Error}. \\ \begin{figure} \begin{center} \includegraphics[scale=0.75]{Convergence.eps} \caption{Convergence of unaware susceptible and chronically infected populations ($S_u$ and \textit{C}) to their desired values ($S_{u_d}$ and $C_d$)} \label{Convergence} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.75]{Error.eps} \caption{Tracking errors between the desired and real values of the unaware susceptible and chronically infected compartments} \label{Error} \end{center} \end{figure} As described, two control inputs are adjusted according to the proposed nonlinear adaptive strategy in order to prevent the HCV outbreak. The first control input $u_1(t)$ denotes the effort rate to inform the susceptible individuals about the HCV, and the second one, $u_2(t)$, is the treatment rate for chronically infected individuals. These control inputs are normalized in Eq. (\ref{DynEq}) so as to lie in the range $[0,1]$. The values obtained for these inputs using the proposed control strategy are shown in Fig. \ref{Control inputs}, and satisfy the physiological constraints ($u_1, u_2 \in [0,1]$).
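For readers who wish to reproduce the qualitative behavior, the whole closed loop of control laws (\ref{ConLaw1})-(\ref{ConLaw2}) and adaptation laws (\ref{AdpLaw1})-(\ref{AdpLaw2}) can be sketched with an explicit Euler integration in plain Python. This is our own illustrative re-implementation, not the paper's Simulink model: the gains, the step size, and the strictly positive final populations $S_{u_f}$ and $C_f$ (which keep the divisions by $S_u$ and $C$ well conditioned) are hypothetical choices, and $\Gamma_i$ are taken as small scalar multiples of the identity.

```python
import numpy as np

# True model parameters (Table ParVal); the controller only sees perturbed estimates.
b, mu, beta, K1, K2 = 0.012, 0.006, 0.15, 0.5, 0.2
alpha, gamma, q, xi, p_fail, theta = 0.1, 4.0, 0.2, 0.8, 0.5, 0.001

# Hypothetical gains and scenario values (our choices, not from the paper)
lam1, lam2 = 5.0, 5.0                      # controller gains
g1, g2 = 1e-8, 1e-8                        # Gamma_i = g_i * Identity
Su0d, Suf, C0d, Cf, a1, a2 = 990.0, 50.0, 95.0, 5.0, 0.4, 0.2

th1 = np.array([b, beta, beta * K1, beta * K2, (1 - q) * gamma, mu])
th2 = np.array([q * gamma, p_fail * xi, mu + theta])
th1_hat, th2_hat = 1.2 * th1, 1.2 * th2    # 20% initial parametric uncertainty

Su, Sa, I, C, T = 1000.0, 200.0, 5.0, 100.0, 5.0   # total population 1310
dt, t_end = 1e-3, 5.0
for k in range(int(round(t_end / dt))):
    t = k * dt
    N = Su + Sa + I + C + T
    # Desired trajectories (Desired1)-(Desired2) and tracking errors
    Su_d = (Su0d - Suf) * np.exp(-a1 * t) + Suf
    C_d = (C0d - Cf) * np.exp(-a2 * t) + Cf
    dSu_d = -a1 * (Su0d - Suf) * np.exp(-a1 * t)
    dC_d = -a2 * (C0d - Cf) * np.exp(-a2 * t)
    eSu, eC = Su - Su_d, C - C_d
    # Regressors (RegPar1)-(RegPar2) and control laws (ConLaw1)-(ConLaw2)
    Y1 = np.array([1 / Su, -I / N, -C / N, -T / N, I / Su, -1.0])
    Y2 = np.array([I / C, T / C, -1.0])
    u1 = -(dSu_d - lam1 * eSu) / Su + Y1 @ th1_hat
    u2 = -(dC_d - lam2 * eC) / C + Y2 @ th2_hat
    # Adaptation laws (AdpLaw1)-(AdpLaw2), Euler-discretized
    th1_hat = th1_hat + dt * g1 * Su * eSu * Y1
    th2_hat = th2_hat + dt * g2 * C * eC * Y2
    # Plant (DynEq), integrated with explicit Euler
    lam_su = beta * (I + K1 * C + K2 * T)
    lam_sa = alpha * lam_su
    dSu = b - lam_su * Su / N - (mu + u1) * Su + (1 - q) * gamma * I
    dSa = u1 * Su - lam_sa * Sa / N - mu * Sa + (1 - p_fail) * xi * T
    dI = lam_su * Su / N + lam_sa * Sa / N - (mu + gamma) * I
    dC = q * gamma * I - (mu + u2 + theta) * C + p_fail * xi * T
    dT = u2 * C - (mu + xi) * T
    Su, Sa, I, C, T = (Su + dt * dSu, Sa + dt * dSa, I + dt * dI,
                       C + dt * dC, T + dt * dT)

# Final tracking errors
Su_d = (Su0d - Suf) * np.exp(-a1 * t_end) + Suf
C_d = (C0d - Cf) * np.exp(-a2 * t_end) + Cf
err_Su, err_C = Su - Su_d, C - C_d
```

With exact parameters the error dynamics would reduce to $\dot{\tilde{S}}_u=-\lambda_1\tilde{S}_u$ and $\dot{\tilde{C}}=-\lambda_2\tilde{C}$; the 20\% perturbation leaves a small residual error that the Lyapunov argument keeps bounded.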
The boundedness of the control inputs implies that the considered desired scenarios (\ref{Desired1}) and (\ref{Desired2}) for the reduction of the unaware susceptible and chronically infected compartments comply with the control input limitations. Figure \ref{Par} illustrates the tuning of the estimated parameters ($\hat{\theta}_1$ and $\hat{\theta}_2$) based on the designed adaptation laws (\ref{AdpLaw1}) and (\ref{AdpLaw2}) in the presence of 20\% uncertainty.\\ \begin{figure} \begin{center} \includegraphics[scale=0.75]{Control_inputs.eps} \caption{Control inputs ($u_1$ and $u_2$) during the treatment period of the HCV epidemic} \label{Control inputs} \end{center} \end{figure} \begin{figure} \begin{center} {\includegraphics[scale=0.75]{Par1.eps}}\label{Par1}\\ (a)\\ {\includegraphics[scale=0.75]{Par2.eps}}\label{Par2} \\ (b) \caption{Estimation of parameters in (a) $\theta_1$ and (b) $\theta_2$ during the treatment period based on the adaptation laws (\ref{AdpLaw1}) and (\ref{AdpLaw2}), respectively} \label{Par} \end{center} \end{figure} \subsection{System response to different uncertainty levels} In this section, the effects of different uncertainty levels on the HCV epidemic dynamics are investigated. For this purpose, 50\%, 70\% and 90\% uncertainties are considered on the initial guesses of the parameters in $\theta_1$ and $\theta_2$ (defined in Eqs. (\ref{RegPar1}) and (\ref{RegPar2})). The performance of the adaptation laws (\ref{AdpLaw1}) and (\ref{AdpLaw2}) in tuning the estimated model parameters is shown in Fig. \ref{Uncer}. As discussed and proven in Sec. \ref{sec3}, these adaptation laws guarantee that the estimation errors of the HCV dynamic parameters remain bounded under different uncertainty levels. \\ \begin{figure} \begin{center} {\includegraphics[scale=0.75]{Uncertainty1.eps}}\label{Uncer1}\\ (a)\\ {\includegraphics[scale=0.75]{Uncertainty2.eps}}\label{Uncer2}\\ (b) \caption{Adaptation of (a) $\theta_1(1)$ and (b) $\theta_2(1)$ using Eqs.
(\ref{AdpLaw1}) and (\ref{AdpLaw2}), respectively, for different uncertainty levels} \label{Uncer} \end{center} \end{figure} Figure \ref{Uncerror} shows the population errors of the unaware susceptible and chronically infected classes in tracking their desired values ($\tilde{S}_u=S_u-S_{u_d}$ and $\tilde{C}=C-C_d$).\\ \begin{figure} \begin{center} {\includegraphics[scale=0.75]{UncErr1.eps}}\label{Uncerror1}\\ (a)\\ {\includegraphics[scale=0.75]{UncErr2.eps}}\label{Uncerror2}\\ (b) \caption{The difference between (a) the unaware susceptible population and its desired value ($\tilde{S}_u=S_u - S_{u_d}$) and (b) the chronically infected population and its desired value ($\tilde{C}=C-C_d$) for different parametric uncertainty levels} \label{Uncerror} \end{center} \end{figure} As observed in Fig. \ref{Uncerror}, increasing the parametric uncertainties increases the magnitude of the errors ($\tilde{S}_u$ and $\tilde{C}$) and their initial variations. However, after a period of time (about 0.2 year), the error magnitudes reach zero, which means that the tracking convergence is achieved for different values of uncertainty. In other words, the populations of the unaware susceptible and chronically infected compartments converge to their desired values ($S_u\rightarrow S_{u_{d}}$ and $C\rightarrow C_{d}$) in the presence of different levels of uncertainty.\\ \section{Conclusion} In the present study, a new nonlinear adaptive strategy was developed to control the hepatitis C virus epidemic based on a mathematical model with uncertainties. For the first time, an adaptive feedback controller was employed to decrease the populations of the unaware susceptible and chronically infected compartments according to desired scenarios. Two control inputs were employed for this goal. The first one, $u_1(t)$, is the effort rate to inform the susceptible individuals about the HCV, and the second one, $u_2(t)$, is the rate of treatment for chronically infected people.
The Lyapunov stability theorem and Barbalat's lemma were used to prove the tracking convergence to the desired treatment scenarios. The proposed control laws and adaptation laws ensured the stability of the closed-loop HCV epidemic system in the presence of parametric uncertainties. The results of numerical simulations showed that, by adjusting the control inputs and the estimated parameters according to this strategy, the numbers of unaware susceptible and chronically infected individuals decrease. As a result, the population of the aware susceptible class increases, and the populations of the acutely infected and treated classes approach zero at the end of the process. Moreover, the obtained results imply that tracking convergence is achieved for a wide range of uncertainties. Designing optimal trajectories and handling unstructured uncertainties can be considered as the next steps of this research. \\
\section{Introduction} Given a differential system with input of the form $\overset{.}{y}(t)=f(y(t),{\bf u}(t))$ and an initial condition $y(0)=y_0$, the calculation of a control ${\bf u}(\cdot)$ that minimizes a given cost function ({\em optimal control}) rarely has an analytical solution, and numerical methods must be used to obtain approximate solutions. Among these numerical methods, there are three main classes: \begin{enumerate} \item methods that reduce the problem to a {\em convex optimization} problem, which are very much in demand since the adaptation of interior point methods by Nesterov and Nemirovskii \cite{NesterovN94}, and which are notably used in receding horizon methods, also called Model Predictive Control (MPC) \cite{Mayne14}; \item methods based on the resolution of a Hamilton-Jacobi-Bellman (HJB) equation (see, e.g., \cite{Falcone1999,Saluzzi18}), using the {\em Dynamic Programming Principle} (DPP)~\cite{Bellman57}; \item methods based on the {\em Pontryagin Maximum Principle} (PMP) \cite{kirk1970optimal}. \end{enumerate} On the other hand, since the 1960s and the invention of Interval Arithmetic~\cite{Moore66}, researchers have sought safe enclosures for the approximate solutions of ODEs computed by numerical methods. To this end, extensions of numerical methods, called set-based (or symbolic) methods, have been developed, which, instead of manipulating points, manipulate sets (typically real intervals or products of real intervals) in order to enclose the exact values. These methods of control synthesis are called ``correct-by-design'' or ``guaranteed''. In addition to ensuring that a set (typically an interval) containing the exact solution is obtained at the end, set-based methods allow taking into account bounded disturbances; they are then said to be ``robust''. Since the beginning of interval arithmetic, these set-based methods have experienced a great development.
The manipulated sets, originally products of real intervals \cite{Moore66}, have taken specialized convex forms such as polytopes \cite{HanK06}, parallelotopes \cite{Lohner87}, zonotopes \cite{girard2005reachability}, spheres \cite{SNR17} or ellipsoids~\cite{Neumaier1993}. In this context, numerical integration methods classically take set-based forms using extensions of Taylor's methods (see, e.g., \cite{AlthoffSB07,BerzH98,BerzM98,CAS12,Lohner87,Nedialkov99,NedialkovKS04}). Numerical methods of classes 1 and 2 for {\em optimal control} have themselves been subject to set-based extensions to take account of uncertainties (unlike methods of class 3, which are very sensitive to initial conditions, and {\em a priori} unsuitable for set-based extensions). Extensions of numerical methods of class 1 are thus given in \cite{MayneSR05,SchurmannA17b,SchurmannA17,SchurmannKA18}, while extensions of numerical methods of class 2 are given in \cite{LeCoentF19,CoentF19,LygerosTS99,MitchellBT01,MitchellT03,ReissigR19}. These extensions have the respective advantages and disadvantages of their numerical counterparts. Set-based methods of class 1 are efficient (polynomial complexity in the dimension of the problem, i.e., the state vector dimension), but {\em a priori} compute only {\em local} optima. Set-based methods of class 2 compute global optima, but suffer from the ``curse of dimensionality'' (exponential complexity in the dimension $M$ of the state space), and are limited to low-dimensional problems. Recently, in the numerical framework, Heymann et al. \cite{Heymann18} have shown that, for certain problems, numerical methods of class~2 can give better solutions than numerical methods of class~1. They have built a numerical solver, called ``Bocop'', that implements both classes of methods \cite{Bocop}, and have given a set of examples that allows one to evaluate and compare them \cite{BocopExamples}.
We show in this paper that a set-based method of class~2 combining the DPP and a guaranteed Euler integration method \cite{LeCoentF19} also allows us to compute approximate optimal solutions with good precision. Besides, these solutions enjoy the property of {\em robustness} against uncertainties on initial conditions and bounded disturbances. We demonstrate the practical interest of our method on an example taken from the Bocop solver. We also give a variant of our set-based method, inspired by the principle of Model Predictive Control \cite{Mayne14}, that allows us to compute approximate optimal solutions more quickly, at the cost of losing the robustness property. \paragraph{Plan of the paper:} In \cref{sec:robust}, we explain the principle of our method of optimal control synthesis, and give the associated correctness results (convergence and robustness); we compare the results of our method with those obtained by the Bocop numerical solver on an example of Magnetic Resonance Imaging. In \cref{sec:receding}, we give an efficient variant of our method inspired by the Model Predictive Control approach, but observe the loss of the robustness property. We conclude in \cref{sec:conclusion}. \section{Robust optimal control}\label{sec:robust} \subsection{Explicit Euler time integration}\label{ss:Euler} We consider here a time discretization of time-step $\tau$, and we suppose that the control law ${\bf u}(\cdot)$ is a {\em piecewise-constant} function, which takes its values on a {\em finite} set $U$, called the ``set of modes''. Given $u\in U$, let us consider the differential system controlled by $u$: $$\frac{dy(t)}{dt}=f_u(y(t)),$$ where $f_u(y(t))$ stands for $f({\bf u}(t),y(t))$ with ${\bf u}(t)=u$ for $t\in[0,\tau]$. We use $Y_{t,y_0}^u$ to denote the exact continuous solution $y$ of the system at time~$t\in[0,\tau]$ under constant control $u$, with initial condition~$y_0$. This solution is approximated using the {\em explicit Euler} integration method.
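The piecewise Euler scheme used below can be sketched in a few lines of Python: mode $u_n$ is applied on $[(n-1)\tau, n\tau)$, and each segment is integrated with explicit Euler steps. This is our own illustrative sketch (function names are ours), not the guaranteed, set-based version developed in the paper:

```python
import numpy as np

def euler_pattern(f, y0, pattern, tau, substeps=1):
    """Explicit Euler integration under a piecewise-constant mode pattern:
    mode u_n is applied on the time interval [(n-1)*tau, n*tau).
    Returns the states at the segment boundaries 0, tau, 2*tau, ..."""
    y = np.asarray(y0, dtype=float)
    traj = [y.copy()]
    h = tau / substeps
    for u in pattern:
        for _ in range(substeps):
            y = y + h * f(y, u)   # one Euler step under mode u
        traj.append(y.copy())
    return np.array(traj)
```

Increasing `substeps` refines the Euler approximation within each mode segment without changing the pattern length $k$.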
We use $\tilde{Y}^u_{t,y_0}\equiv y_0+tf_u(y_0)$ to denote Euler's approximate value of $Y^u_{t,y_0}$ for $t\in [0,\tau]$. Given a sequence of modes (or ``pattern'') $\pi := u_1\cdots u_k\in U^k$, we denote by $Y_{t,y_0}^{\pi}$ the solution of the system under mode $u_1$ on $t\in [0,\tau)$ with initial condition~$y_0$, extended continuously with the solution of the system under mode $u_{2}$ on $t\in[\tau,2\tau)$, and so on iteratively until mode $u_k$ on $t\in[(k-1)\tau,k\tau)$. The control function ${\bf u}(\cdot)$ is thus piecewise constant with ${\bf u}(t)=u_{n}$ for $t\in [(n-1)\tau,n\tau)$, $1\leq n\leq k$. Likewise, we use $\tilde{Y}_{t,y_0}^{\pi}$ to denote Euler's approximate value of $Y_{t,y_0}^\pi$ for $t\in [0,k\tau)$, defined iteratively by $\tilde{Y}_{(n-1)\tau+t,y_0}^{u_1\cdots u_n}=\tilde{Y}_{(n-1)\tau,y_0}^{u_1\cdots u_{n-1}}+tf_{u_n}(\tilde{Y}_{(n-1)\tau,y_0}^{u_1\cdots u_{n-1}})$ for $t\in [0,\tau)$ and $2\leq n\leq k$. The approximate solution $\tilde{Y}_{t,y_0}^{\pi}$ is here a continuous piecewise linear function on $[0,k\tau)$ starting at $y_0$. \subsection{Finite horizon control problems}\label{ss:approx} The optimization task is to find a control pattern $\pi\in U^k$ which guarantees that all states in a given set $S=[0,1]^M\subset \mathbb{R}^M$\footnote{We take here $S=[0,1]^M$ for the sake of notation simplicity, but $S$ can be any convex subset of $\mathbb{R}^M$.} are steered at time $t_{end}=k\tau$ as closely as possible to an end state $y_{end}\in S$. Let us explain the principle of the method based on the DPP and the Euler integration method used in~\cite{LeCoentF19,CoentF19}. We consider the {\em cost function} $J_{k}:S\times U^k\rightarrow \mathbb{R}_{\geq 0}$ defined by: $$J_{k}(y,\pi)=\|Y_{k\tau,y}^{\pi}-y_{end}\|,$$ where $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^M$\footnote{We consider here the special case where the cost function is only made of a ``terminal'' subcost. The method extends to more general cost functions.
Details will be given in the extended version of this paper.}. We consider the {\em value function} ${\bf v}_k:S\rightarrow \mathbb{R}_{\geq 0}$ defined by: $${\bf v}_k(y) := \min_{\pi\in U^k}\{J_{k}(y,\pi)\}\equiv \min_{\pi\in U^k}\{\|Y_{k\tau,y}^{\pi}-y_{end}\|\}.$$ Given $k\in\mathbb{N}$ and $\tau\in\mathbb{R}_{>0}$, we consider the following {\em finite time horizon optimal control problem}: Find for each $y\in S$ \begin{itemize} \item the {\em value} ${\bf v}_k(y)$, i.e. $$\min_{\pi\in U^k}\{\|Y_{k\tau,y}^{\pi}-y_{end}\|\},$$ \item and an {\em optimal pattern}: $$\pi_k(y) := arg\min_{\pi\in U^k}\{\|Y_{k\tau,y}^{\pi}-y_{end}\|\}.$$ \end{itemize} In order to solve such optimal control problems, a classical ``direct'' method consists in {\em spatially discretizing} the state space $S=[0,1]^M$ (i.e., the space of values of $y$). We consider here a uniform partition of~$S$ into a finite number $N$ of cells of equal size: in our case, this means that the interval $[0,1]$ is divided into $K$ subintervals of equal size, and $N=K^M$. A cell thus corresponds to an $M$-tuple of subintervals, and the center of a cell corresponds to the $M$-tuple of the subinterval midpoints. The associated grid ${\cal X}\subset S$ is the set of centers of the cells of~$S$. The center~$z\in{\cal X}$ of a cell $C$ is considered as the $\varepsilon$-{\em representative} of all the points of $C$. We suppose that the cell size is such that $\|y - z\|\leq\varepsilon$ for all $y\in C$ (i.e. $K\geq \sqrt{M}/(2\varepsilon)$). We suppose that $S$ is ``controlled Euler-invariant'' in the sense that, for all $y\in S$, there exists $u\in U$ such that $\tilde{Y}_{\tau,y}^u\in S$. We say that such a $u$ is {\em admissible for $y$}, and we denote by $Adm(y)$ the (non-empty) set of modes admissible for $y\in S$. In this context, the method proceeds as follows (cf.
\cite{LeCoentF19}): we consider the points of ${\cal X}$ as the vertices of a finite oriented graph; there is a connection from $z\in {\cal X}$ to $z'\in {\cal X}$ if $z'$ is the $\varepsilon$-representative of the Euler-based image $(z +\tau f_u(z))$ of~$z$, for some $u\in U$. We then compute, using dynamic programming, the ``path of length $k$ with minimal cost'' starting at $z$: such a path is a sequence of $k+1$ connected points $z\ z_k\ z_{k-1}\ \cdots\ z_1$ of ${\cal X}$ which minimizes the distance $\|z_1-y_{end}\|$. This procedure allows us to compute a pattern $\pi^\varepsilon_k(z)$ of length $k$, which approximates the optimal pattern $\pi_k(y)$. \begin{definition} The function $next^{u}: {\cal X}\rightarrow {\cal X}$ is defined by: \begin{itemize} \item if $u\in Adm(z)$, then $next^{u}(z)= z'$, where $z'\in{\cal X}\subset S$ is the $\varepsilon$-representative of $\tilde{Y}_{\tau, z}^{u}$; \item otherwise (i.e., $\tilde{Y}_{\tau, z}^{u}\not\in S$): $next^{u}(z)=\bot$. \end{itemize} \end{definition} \begin{definition} For every point $z\in {\cal X}$, the {\em spatially discrete value function} ${\bf v}^{\varepsilon}_k:{\cal X}\rightarrow \mathbb{R}_{\geq 0}$ is defined by: \begin{itemize} \item for $k=0$, ${\bf v}_k^{\varepsilon}(z)=\|z-y_{end}\|$, \item for $k\geq 1$, ${\bf v}_{k}^{\varepsilon}(z)=\min_{u\in Adm(z)}\{{\bf v}_{k-1}^{\varepsilon}(next^u(z))\}$. \end{itemize} \end{definition} \begin{definition} The {\em approximate optimal pattern of length $k$} associated to $z\in{\cal X}$, denoted by $\pi_k^{\varepsilon}(z)\in U^k$, is defined by: \begin{itemize} \item if $k=0$, $\pi_k^{\varepsilon}(z)=\mbox{nil}$, \item if $k\geq 1$, $\pi_k^{\varepsilon}(z) = {\bf u}_k(z) \cdot \pi'$ where $${\bf u}_{k}(z)=arg \min_{u\in Adm(z)\subseteq U}\{{\bf v}_{k-1}^{\varepsilon}(next^u(z))\}$$ and $\pi' =\pi_{k-1}^{\varepsilon}(z')$ \ \ with \ $z'=next^{{\bf u}_k(z)}(z)$.
\end{itemize} \end{definition} It is easy to construct a procedure $PROC_k^{\varepsilon}$ which takes a point $z\in {\cal X}$ as input, and returns an approximate optimal pattern~$\pi_k^{\varepsilon}\in U^k$. \begin{remark} The complexity of $PROC_k^\varepsilon$ is $O(m\times k\times N)$ where $m$ is the number of modes ($|U|=m$), $k$ the time-horizon length ($t_{end}=k\tau$) and $N$ the number of cells of ${\cal X}$ ($N=K^M$). \end{remark} \subsection{Correctness of the method}\label{ss:error} Given a point $y\in S$ of $\varepsilon$-representative $z\in {\cal X}$, and a pattern $\pi^\varepsilon_k$ returned by~$PROC_k^\varepsilon(z)$, we are now going to show that the distance $\|\tilde{Y}_{k\tau,z}^{\pi^\varepsilon_k}-y_{end}\|$ converges to~${\bf v}_k(y)$ as $\varepsilon\rightarrow 0$. We first consider the ODE $\frac{dy}{dt}=f_u(y)$, and give an upper bound on the error between the exact solution of the ODE and its Euler approximation (see~\cite{CoentF19,SNR17}). \begin{definition}\label{def:delta} Let $\mu$ be a given positive constant.
Let us define, for all $u\in U$ and $t\in [0,\tau]$, $\delta^u_{t,\mu}$ as follows: $$\mbox{if } \lambda_u <0:\ \ \delta^u_{t,\mu}=\left(\mu^2 e^{\lambda_u t}+ \frac{C_u^2}{\lambda_u^2}\left(t^2+\frac{2 t}{\lambda_u}+\frac{2}{\lambda_u^2}\left(1- e^{\lambda_u t} \right)\right)\right)^{\frac{1}{2}}$$ $$\mbox{if } \lambda_u = 0:\ \ \delta^u_{t,\mu}= \left( \mu^2 e^{t} + C_u^2 (- t^2 - 2t + 2 (e^t - 1)) \right)^{\frac{1}{2}}$$ $$\mbox{if } \lambda_u > 0:\ \ \delta^u_{t,\mu}=\left(\mu^2 e^{3\lambda_u t}+ \frac{C_u^2}{3\lambda_u^2}\left(-t^2-\frac{2t}{3\lambda_u}+\frac{2}{9\lambda_u^2} \left(e^{3\lambda_u t}-1\right)\right)\right)^{\frac{1}{2}}$$ where $C_u$ and $\lambda_u$ are real constants specific to the function $f_u$, defined as follows: $$C_u=\sup_{y\in S} L_u\|f_u(y)\|,$$ where $L_u$ denotes the Lipschitz constant for $f_u$, and $\lambda_u$ is the OSL constant associated to $f_u$, i.e., the minimal constant such that, for all $y_1,y_2\in S$: $$\langle f_u(y_1)-f_u(y_2), y_1-y_2\rangle \leq \lambda_u\|y_1-y_2\|^2,$$ where $\langle\cdot,\cdot\rangle$ denotes the scalar product of two vectors of $\mathbb{R}^M$. \end{definition} \begin{proposition}\label{prop:basic}\cite{SNR17} Consider the solution $Y_{t,y_0}^u$ of $\frac{dy}{dt}=f_u(y)$ with initial condition~$y_0$ of $\varepsilon$-representative $z_0$ (hence such that $\|y_0-z_0\|\leq\varepsilon$), and the approximate solution $\tilde{Y}_{t,z_0}^u$ given by the explicit Euler scheme. For all $t\in[0,\tau]$, we have: $$\|Y_{t,y_0}^u-\tilde{Y}_{t,z_0}^u\|\leq \delta^u_{t,\varepsilon}.$$ \end{proposition} \cref{prop:basic} underlies the principle of our set-based method, where sets of points are represented as balls centered around the Euler approximate values of the solutions.
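For concreteness, the three-case bound $\delta^u_{t,\mu}$ of \cref{def:delta} can be evaluated numerically. The following Python sketch (function and parameter names are ours; the constants $C_u$ and $\lambda_u$ must be supplied by the user) transcribes the formulas verbatim:

```python
import math

def delta(t, mu, C, lam):
    """Error bound delta^u_{t,mu} between the exact solution and its
    Euler approximation, following the three cases of the definition.
    C and lam stand for the constants C_u and the OSL constant lambda_u
    associated with the mode's vector field f_u."""
    if lam < 0:
        val = (mu**2 * math.exp(lam * t)
               + (C**2 / lam**2) * (t**2 + 2 * t / lam
                                    + (2 / lam**2) * (1 - math.exp(lam * t))))
    elif lam == 0:
        val = (mu**2 * math.exp(t)
               + C**2 * (-t**2 - 2 * t + 2 * (math.exp(t) - 1)))
    else:
        val = (mu**2 * math.exp(3 * lam * t)
               + (C**2 / (3 * lam**2)) * (-t**2 - 2 * t / (3 * lam)
                                          + (2 / (9 * lam**2))
                                          * (math.exp(3 * lam * t) - 1)))
    return math.sqrt(val)
```

In all three cases $\delta^u_{0,\mu}=\mu$, so such a routine can be used to check numerically, before fixing the time step $\tau$, that the error contracts as required by \cref{lemma:1}.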
This is illustrated in~\cref{fig:illustration}: for any initial condition $x^0$ belonging to the ball $B(\tilde{x}^0,\delta(0))$, the exact solution $x^1\equiv Y_{\tau,x^0}^u$ belongs to the ball $B(\tilde{x}^1,\delta(\tau))$, where $\tilde{x}^1 \equiv\tilde{Y}_{\tau,\tilde{x}^0}^u$ denotes the Euler approximation of the exact solution at $t=\tau$, and $\delta(\tau)\equiv\delta^u_{\tau,\delta(0)}$. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{figures/RP20_illustration_prop1.png} \caption{Illustration of \cref{prop:basic}} \label{fig:illustration} \end{figure} \begin{lemma}\label{lemma:1}\cite{CoentF19} Consider the system $\frac{dy}{dt}=f_u(y)$ where the OSL constant $\lambda_{u}$ associated to $f_u$ is negative, and an initial error $e_0:=\|y_0-z_0\|>0$. Let $G_u:=\frac{\sqrt{3}e_0|\lambda_u|}{C_u}$. Consider the (smallest) positive root $$\alpha_u := 1+|\lambda_{u}|G_{u}/4-\sqrt{1+(\lambda_{u}G_{u}/4)^2}$$ of the equation $-\frac{1}{2}|\lambda_{u}| G_{u} +(2+\frac{1}{2}|\lambda_{u}|G_{u})\alpha-\alpha^2 = 0.$ Suppose that $\frac{|\lambda_u|G_u}{4}<1$. Then we have $0<\alpha_u< 1$ and, for all $t\in[0,\tau]$ with $\tau\leq G_u(1-\alpha_u)$: $$\delta^u_{t,e_0} \leq e_0.$$ \end{lemma} \begin{remark}\label{rk:subsampling} If $\tau > G_u(1-\alpha_u)$, we can make use of {\em subsampling}, i.e., decompose~$\tau$ into a sequence of elementary time steps $\Delta t$ with $\Delta t \leq G_u(1-\alpha_u)$, in order to still be able to apply \cref{lemma:1}. Let us point out that \cref{lemma:1} (and the use of subsampling) allows us to ensure set-based reachability with the use of procedure $PROC_k^\varepsilon$. Indeed, in this setting, the explicit Euler scheme leads to decreasing errors, and thus point-based computations performed with the center of a cell can be applied to the entire cell.
\end{remark} We suppose henceforth that, for all $u\in U$, the system $\frac{dy}{dt}=f_u(y)$ satisfies: $$(H):\ \ \ \lambda_u<0,\ \frac{|\lambda_u|G_u}{4}<1\ \mbox{ and }\ \tau \leq G_u(1-\alpha_u).$$ We have: \begin{theorem}\label{th:2} (Convergence) \cite{CoentF19}. Let $y\in S$ be a point of $\varepsilon$-representative $z\in {\cal X}$. Let $\pi_k^\varepsilon$ be the pattern returned by $PROC_k^\varepsilon(z)$, and $\pi^* := \mbox{argmin}_{\pi\in U^k} \|Y^\pi_{k\tau,y}-y_{end}\|$. Let ${\bf v}_k(y) := \|Y_{k\tau,y}^{\pi^*}-y_{end}\|$ be the exact optimal value of $y$. The approximate optimal value of~$y$, $\|\tilde{Y}_{k\tau,y}^{\pi_k^\varepsilon}-y_{end}\|$, converges to ${\bf v}_k(y)$ as $\varepsilon\rightarrow 0$. \end{theorem} \cref{th:2} formally justifies the correctness of our method of optimal control synthesis by saying that the approximate optimal values computed by our method converge to the exact optimal values as the mesh size tends to $0$. Furthermore, we have: \begin{theorem}\label{th:1} (Robustness)\cite{CoentF19}. Let $y\in S$ be a point of $\varepsilon$-representative $z\in {\cal X}$, and $\pi_k^\varepsilon$ the pattern returned by $PROC_k^\varepsilon(z)$. We have: $$\|Y_{t,y}^\pi-\tilde{Y}_{t,z}^\pi\|\leq \varepsilon,\ \ \ \mbox{ for all }\ \pi\in U^k \mbox{ and } t\in[0,k\tau].$$ It follows that, for two points $y_1,y_2\in S$ having the same $\varepsilon$-representative $z\in{\cal X}$, we have: $$|\|\tilde{Y}_{k\tau,y_1}^{\pi_k^\varepsilon}-y_{end}\|-\|\tilde{Y}_{k\tau,y_2}^{\pi_k^\varepsilon}-y_{end}\||\leq \varepsilon.$$ \end{theorem} The last inequality of \cref{th:1} says that the approximate optimal values of $y_1$ and $y_2$ are equal up to $\varepsilon$. This reflects the {\em robustness} of our method of optimal control synthesis against {\em uncertainties on initial conditions}.
As for uncertainties on initial conditions, one has similar robustness results accounting for dynamical {\em bounded disturbances}, as explained in \cref{appendixun}. \subsection{Implementation} The robust and variant methods have been implemented in Python; each method corresponds to a program of around 500 lines. The source code is available at \href{https://lipn.univ-paris13.fr/~jerray/synchro/}{\nolinkurl{lipn.univ-paris13.fr/~jerray/synchro/}}. In the experiments below, the program runs on a 2.80 GHz Intel Core i7-4810MQ CPU with 8\,GiB of memory. \subsection{Example: Magnetic Resonance Imaging (MRI)}\label{ss:MRI} Consider a system $q$ consisting of two different particles with spins $q_1, q_2$ (see \cite{BCCM13,BCCM14}). The magnetization vectors $q_1 = (y_1, z_1) \in \mathbb{R}^2$ and $q_2 = (y_2, z_2) \in \mathbb{R}^2$ satisfy the differential system: \begin{equation*} q_1: \begin{cases} \overset{.}{y_1} = 2 \pi T_{m} t_{m}(-\Gamma_1 y_1 - u_2 z_1)\\ \overset{.}{z_1} = 2 \pi T_{m} t_{m}(\gamma_1 (1 - z_1) + u_2 y_1) \end{cases} \label{q1} \end{equation*} \begin{equation*} q_2: \begin{cases} \overset{.}{y_2} = 2 \pi T_{m} t_{m} (-\Gamma_2 y_2 - u_2 z_2) \\ \overset{.}{z_2} = 2 \pi T_{m} t_{m} (\gamma_2 (1 - z_2) + u_2 y_2) \end{cases} \label{q2} \end{equation*} with: $\Gamma_1 = \frac{1}{T_{12}\Omega_{max}}$, $\gamma_1 = \frac{1}{T_{11}\Omega_{max}}$, $\Gamma_2 = \frac{1}{T_{22}\Omega_{max}}$, $\gamma_2 = \frac{1}{T_{21}\Omega_{max}}$, and $u_2 \in [-1, 1]$ the magnetic field (control). Let $\Omega_{max}=202.95$, $T_{11}=2$, $T_{12}=0.3$, $T_{21}=2.5$, $T_{22}=2.5$, $T_{m}=26.17$ and $t_{m}=2$. The goal is to make $q_1$ reach the origin $(0, 0)$ at a given time $t=t_{end}$ while maximizing the ``contrast'' $\|q_2(t_{end})-q_1(t_{end})\|=\|q_2(t_{end})\|$. In order to account for the (soft) constraint $q_1(t_{end})=(0,0)$, we integrate in the cost function $J_k$ a ``penalty term'' of the form $\Vert q_1(t_{end})\Vert^2$.
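For illustration, the spin dynamics above translate directly into code. The following Python sketch (names are ours; the numerical constants are those given above) implements the mode-indexed vector field $f_{u_2}$ and one explicit Euler step:

```python
import math

# Constants from the MRI example
OMEGA_MAX = 202.95
T11, T12, T21, T22 = 2.0, 0.3, 2.5, 2.5
TM, tm = 26.17, 2.0
GAMMA1 = 1.0 / (T12 * OMEGA_MAX)   # \Gamma_1
gamma1 = 1.0 / (T11 * OMEGA_MAX)   # \gamma_1
GAMMA2 = 1.0 / (T22 * OMEGA_MAX)   # \Gamma_2
gamma2 = 1.0 / (T21 * OMEGA_MAX)   # \gamma_2
SCALE = 2.0 * math.pi * TM * tm    # common factor 2*pi*T_m*t_m

def f(q, u2):
    """Vector field of the coupled spin system q = (y1, z1, y2, z2)
    under the control mode u2 in [-1, 1]."""
    y1, z1, y2, z2 = q
    return (SCALE * (-GAMMA1 * y1 - u2 * z1),
            SCALE * (gamma1 * (1.0 - z1) + u2 * y1),
            SCALE * (-GAMMA2 * y2 - u2 * z2),
            SCALE * (gamma2 * (1.0 - z2) + u2 * y2))

def euler_step(q, u2, tau):
    """One explicit Euler step of length tau under mode u2."""
    return tuple(qi + tau * fi for qi, fi in zip(q, f(q, u2)))
```

Note that $(0,1,0,1)$ is an equilibrium for $u_2=0$: both magnetization vectors rest at the north pole until a non-zero field is applied.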
Our goal is thus to minimize the terminal cost $\alpha \Vert q_1(t_{end})\Vert^2 - \beta \Vert q_2(t_{end})-q_1(t_{end}) \Vert^2$. The domain $S$ of the states $(q_1,q_2)\equiv ((y_1,z_1),(y_2,z_2))$ is equal to $[-1,1]^2\times [-1,1]^2\equiv [-1,1]^4$. The grid~${\cal X}$ corresponds to a discretization of $S=[-1,1]^4$, where each component interval $[-1,1]$ is uniformly discretized into a set of $K$ points. The codomain $[-1,1]$ of the original continuous control function $u_2(\cdot)$ is itself discretized with our method: $u_2(\cdot)$ becomes a piecewise-constant function that takes its values in a finite set $U$ made of 30 values uniformly taken between $-1$ and $1$, and can change its value every $\tau$ seconds. In the following experiments, we use the parameter values $\alpha = 0.99$, $\beta = 0.01$, $\tau = 1/250$, $k=215$, $t_{end}=k\tau=0.86$, and $q_1(0)=(0,1)$. We will consider the cases $K=10$ (coarse grid) and $K=20$ (finer grid); one can check that assumption $(H)$ is satisfied in both cases. In order to test the robustness of the method, we consider the cases $q_2(0)=(0,1)$ and $q_2(0)=(0.1,1)$. For $K=10$ and $q_2(0)=(0, 1)$, we have $q_2(t_{end}) = (0.6567, -0.2558)$, and the optimal value of the contrast is $\|q_2(t_{end})\|= 0.7048$. The CPU computation takes 389 seconds. See~\cref{fig:init00-grid10}. For $q_2(0)=(0.1, 1)$, the synthesized control and the results are identical, which demonstrates the robustness of our method. For $K= 20$ and $q_2(0)=(0, 1)$, we have $q_2(t_{end}) = (0.6439, -0.2913)$, and the contrast is $\|q_2(t_{end})\|= 0.7067$. The CPU computation takes 3657 seconds. See~\cref{fig:init00-grid20}. For $q_2(0)=(0.1, 1)$, the synthesized control and the results are again identical, thus confirming the robustness of our method. \begin{figure}[h!]
\centering \includegraphics[scale=0.35]{figures/constrast-MRI-q1-grid-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=10_times_10.png} \includegraphics[scale=0.35]{figures/constrast-MRI-q2-grid-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=10_times_10.png} \includegraphics[scale=0.35]{figures/constrast-MRI-control-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=10_times_10.png} \caption{Robust method applied to MRI for $K = 10$ and initial condition $q_2(0)=(0.1, 1)$, with $q_1=(y_1, z_1)$ (top left), $q_2=(y_2, z_2)$ (top right) and control $u_2$ (bottom). When applied to $q_2(0)=(0,1)$, the method gives the same results.} \label{fig:init00-grid10} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.35]{figures/constrast-MRI-q1-grid-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=20_times_20.png} \includegraphics[scale=0.35]{figures/constrast-MRI-q2-grid-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=20_times_20.png} \includegraphics[scale=0.35]{figures/constrast-MRI-control-init_q1=0,1-init_q2=0,1,1-alpha=0,99-beta=0,01-dt=1-on-250-tf=0,86-grid=20_times_20.png} \caption{Robust method applied to MRI for $K = 20$ and initial condition $q_2(0)=(0.1, 1)$, with $q_1=(y_1, z_1)$ (top left), $q_2=(y_2, z_2)$ (top right) and control $u_2$ (bottom). When applied to $q_2(0)=(0,1)$, the method gives the same results.} \label{fig:init00-grid20} \end{figure} For comparison, we now perform the same experiments with the version of the numerical solver Bocop using convex optimization \cite{Bocop}. For $q_2(0)=(0,1)$, we have with Bocop: $q_2(t_{end}) = (0.0499, -0.7938)$; the contrast is $\|q_2(t_{end})\|= 0.6746$. The CPU computation time is 230 seconds. See \cref{fig:init00-Bocop} (\cref{appendixdeux}). For $q_2(0)=(0.1,1)$, we have, with Bocop: $q_2(t_{end}) = (0.0877, -0.6631)$; the contrast is $\|q_2(t_{end})\|= 0.6689$. 
The CPU computation time is 43 seconds. See \cref{fig:init01-Bocop} (\cref{appendixdeux}). We can see in this example that Bocop is {\em not} robust against slight changes of initial conditions, the generated optimal trajectories being very different from each other. The optimal values of the contrast computed by Bocop and by our program are comparable. However, the CPU times of Bocop are smaller than those of our program (especially for $K=20$). \section{A Variant of the Method with Receding Horizon}\label{sec:receding} The control computed by our method is robust, but its synthesis is time-costly because it requires a fine partition of the state space in order to decrease the error caused by the space discretization. We now consider a variant of our method, inspired by Model Predictive Control (MPC), which uses a {\em receding horizon} \cite{Mayne14}. In the original method, for a $k$-horizon problem ($t_{end}=k\tau$), one applies to a point $y\in S$ the optimal pattern $\pi(z)$ of length $k$ computed for the $\varepsilon$-representative $z$ of $y$ (returned by $PROC_k^\varepsilon(z)$). In the variant inspired by MPC, we apply at point $y$ only the first mode $u_1$ of $\pi(z)$, thus obtaining the point $y_1=y+\tau f_{u_1}(y)$. Then, unlike in the original method, we do not apply the second mode $u_2$ of $\pi(z)$, but the first mode $u'_1$ of the optimal pattern $\pi(z_1)$ (returned by $PROC_k^\varepsilon(z_1)$), where $z_1$ denotes the $\varepsilon$-representative of $y_1$. This gives $y_2=y_1+\tau f_{u'_1}(y_1)$ (and not $y_1+\tau f_{u_2}(y_1)$ as before). And so on: iteratively, one applies each time the first mode of the optimal pattern $\pi(z_n)$ returned by $PROC_k^\varepsilon(z_n)$, where $z_n$ denotes the $\varepsilon$-representative of the solution~$y_n$ computed at $t=n\tau$ ($1\leq n\leq k-1$). This variant is no longer robust: trajectories from two close starting points do not usually stay close to each other anymore.
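The receding-horizon loop just described can be sketched as follows. Here `proc` stands for $PROC_k^\varepsilon$, `representative` for the map sending a state to the center of its cell, and `f` for the vector field of the current mode; all three are assumed given, and the names are ours:

```python
def mpc_trajectory(y0, k, tau, f, proc, representative):
    """Receding-horizon variant: at each of the k steps, recompute the
    optimal pattern from the representative of the current state and
    apply only its first mode.  proc(z, horizon) returns a pattern
    (list of modes), representative(y) returns the cell center z, and
    f(y, u) is the vector field under mode u."""
    y = y0
    trajectory = [y0]
    modes = []
    for _ in range(k):
        z = representative(y)
        u = proc(z, k)[0]        # keep only the first mode of the pattern
        # one explicit Euler step of length tau under mode u
        y = tuple(yi + tau * fi for yi, fi in zip(y, f(y, u)))
        trajectory.append(y)
        modes.append(u)
    return trajectory, modes
```

Each iteration thus pays the cost of one call to $PROC_k^\varepsilon$, but the feedback through the current cell is what lets a coarser grid achieve a precision comparable to the original open-loop method.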
On the other hand, the computed values converge much faster to the exact optimal values as $\varepsilon$ tends to~0. This allows us to compute values of similar precision with the variant method using a much coarser grid (bigger $\varepsilon$). The variant method is therefore more efficient than the original method. We demonstrate this gain of efficiency and loss of robustness on the MRI example of \cref{ss:MRI}. We first synthesize the optimal control for $K=10$ and $q_2(0)=(0, 1)$, in which case we have $q_2(t_{end}) = (0.0499, -0.7938)$, and the contrast is $\|q_2(t_{end})\|= 0.7954$ (see \cref{fig:init00bis}). For $K=10$ and $q_2(0)=(0.1, 1)$, we have $q_2(t_{end}) = (0.1015, -0.7141)$, and the contrast $\|q_2(t_{end})\|$ is $0.7210$ (see \cref{fig:init01bis}). For both cases, the CPU computation takes 34 seconds. We can see in this example that, unlike the original method, the variant method is {\em not} robust, a small difference between the initial conditions ($q_2(0)=(0,1)$ {\em vs.} $q_2(0)=(0.1,1)$) leading to very different trajectories. For $K=20$ and $q_2(0)=(0,1)$, we have $q_2(t_{end}) = (-0.06225, -0.5874)$, and the contrast $\|q_2(t_{end})\|$ is $0.5906$ (see \cref{fig:init00-grid20bis} in \cref{appendixtrois}). The CPU computation now takes 443 seconds. For $K=20$ and $q_2(0)=(0.1,1)$, we have $q_2(t_{end}) = (-0.1088, -0.7192)$, and the contrast $\|q_2(t_{end})\|$ is $0.7274$ (see \cref{fig:init01-grid20bis} in \cref{appendixtrois}). The CPU computation now takes 501 seconds. On the MRI example, the CPU times of the variant method are thus much smaller than those of the original method, and comparable to those of Bocop. Besides, the optimal values of the contrast computed by the variant method are slightly better than those computed by Bocop. The variant method is thus more efficient than the original method, but does not retain its robustness property.
There is therefore a trade-off to be found between robustness (guaranteed with the original method) and efficiency (obtained with the MPC variant). \begin{figure}[h!] \centering \includegraphics[scale=0.35]{figures/constrast-MRI-q1-grid-init_q1=0,1-init_q2=0,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=333-tf=0,666.png} \includegraphics[scale=0.35]{figures/constrast-MRI-q2-grid-init_q1=0,1-init_q2=0,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=333-tf=0,666.png} \includegraphics[scale=0.35]{figures/constrast-MRI-control-init_q1=0,1-init_q2=0,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=333-tf=0,666.png} \caption{Variant with receding horizon applied to MRI for initial condition $q_2(0)=(0, 1)$, with $q_1=(y_1, z_1)$ (top left), $q_2=(y_2, z_2)$ (top right) and control $u_2$ (bottom).} \label{fig:init00bis} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.35]{figures/constrast-MRI-q1-grid-init_q1=0,1-init_q2=0,1,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=323-tf=0,646.png} \includegraphics[scale=0.35]{figures/constrast-MRI-q2-grid-init_q1=0,1-init_q2=0,1,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=323-tf=0,646.png} \includegraphics[scale=0.35]{figures/constrast-MRI-control-init_q1=0,1-init_q2=0,1,1-alpha-=0,99-beta=0,01-2dt-dt=1-on-250,0-N=323-tf=0,646.png} \caption{Variant with receding horizon applied to MRI for initial condition $q_2(0)=(0.1, 1)$, with $q_1=(y_1, z_1)$ (top left), $q_2=(y_2, z_2)$ (top right) and control $u_2$ (bottom).} \label{fig:init01bis} \end{figure} The results of \cref{sec:robust,sec:receding} for $q_2(0)=(0.1,1)$ are recapitulated in \cref{tab:recap}. \begin{table}[htp] \begin{centering} \begin{tabular}{|l|c|c|c|c|c|} \hline & robust method ($K=10$) & robust method ($K=20$) & variant ($K=10$) & variant ($K=20$) & Bocop\\ \hline Robust?
& yes & yes & no & no & no \\ \hline Contrast & 0.7048 & 0.7669 & 0.7210 & 0.7273 & 0.6746 \\ \hline CPU time (s) & 389 & 3657 & 34 & 501 & 230 \\ \hline \end{tabular} \caption{Summary table of results} \label{tab:recap} \end{centering} \end{table} \section{Conclusion}\label{sec:conclusion} As pointed out in \cite{Heymann18,Bocop}, numerical methods of optimal control based on the DPP can compete, on low-dimensional examples, with methods based on convex optimization. Along these lines, we show in this paper that a set-based method of optimal control, combining the DPP and a guaranteed Euler integration method, allows us to synthesize a correct-by-design optimal control that is {\em robust} against uncertainties on initial conditions and bounded disturbances. We have demonstrated the practical interest of our method on an example taken from the numerical solver Bocop. We have observed similar results in experiments with other case studies from Bocop, which will be given in the extended version of this paper. We have also considered a variant of our method with a receding horizon, which makes the control synthesis more efficient at the cost of losing the robustness property. There is therefore a trade-off to be found between robustness (guaranteed with the original method) and efficiency (obtained with the variant using a receding horizon). \newcommand{LNCS}{LNCS} \ifdefined \renewcommand*{\bibfont}{\small} \printbibliography[title={References}] \else \bibliographystyle{splncs04} %
\section{Introduction}\label{INTRO} Since its introduction, Morse Theory has been an active field of research, and connections with many different areas of Mathematics have been found. That interaction has led to several adaptations of Morse Theory to different contexts, for example: PL versions by Banchoff \cite{Banchoff,Banchoff2} and by Bestvina and Brady \cite{Bestvina}, and a purely combinatorial approach by Forman \cite{Forman,Forman2}. Nowadays, not only pure but also applied mathematics benefits from that interaction \cite{Ghristbook}, due to the importance of discrete settings. Roughly speaking, Morse Theory addresses the study of the topology (homology, originally) of a space by breaking it into ``elementary'' pieces. That is achieved by the so-called Fundamental or Structural Theorems of Morse Theory, which assert that the object of study (for example a smooth manifold or a simplicial complex) has the homotopy type of a CW-complex with a given cell structure determined by the criticality of a Morse function defined on it \cite{Forman2,Milnor}. Originally, Morse Theory began with the definition of Morse function itself, that is, a smooth function with non-degenerate critical points; the only critical objects allowed were thus points. That restriction was overcome by Morse-Bott theory \cite{Bott2}, which broadened the class of critical objects by including non-degenerate critical submanifolds. The requirement that critical objects be non-degenerate was removed with the introduction of Lusternik-Schnirelmann theory \cite{LUSTERNIK_viejo}. In the context of Morse Theory, the Morse inequalities guarantee that the number of critical points of a Morse function $f\colon X\to \mathbb{R}$ is an upper bound for the homological complexity of the space $X$.
The role of the Morse inequalities in the setting of Lusternik-Schnirelmann theory is played by the so-called Lusternik-Schnirelmann theorem, which asserts that the weighted sum of the number of critical objects is an upper bound for the category of the space \cite{LUSTERNIK_viejo}. Recent works have shown that it is possible to approach important problems regarding posets by using topological methods. See for example Barmak and Minian's work on the realizability of groups as the automorphism groups of certain posets \cite{BARMAK_posets}, or Stong's work on the way the homotopy type of the poset of non-trivial $p$-subgroups of a group, ordered by inclusion, determines algebraic properties of the group \cite{Stong_groups}. Moreover, it is expected that recent discrete analogues of some classic concepts from differential topology shed light on their original counterparts \cite{Forman_6}. Therefore, it makes sense to study the topology of finite spaces by means of some version of Morse theory adapted to this context. An invariant of a space $X$ may be defined as the smallest number of open sets that cover $X$ and satisfy certain properties, such as being elementary in a certain sense, for instance acyclic or contractible (see \cite{Fox,Larranhaga} for more examples). More generally, and analogously, a categorical invariant of an object $X$ (such as a simplicial complex) can be defined as the smallest number of subobjects needed to cover $X$ that verify certain properties (see for example \cite{QuiMos2,SamQuiErJa,SamQuiErJa2,Tanaka3}). In vague terms, an invariant provides a certain measure of the complexity of an object. For example, the Lusternik-Schnirelmann category measures, in a particular manner, how far a space is from being contractible. This work addresses two aims. First, to develop Morse-Bott theory in the context of finite spaces, generalizing both Morse theory for posets, introduced by Minian \cite{Minian}, and discrete Morse-Bott theory \cite{Forman3}.
In particular, we prove an integration result for matchings, the Fundamental Theorems of Morse-Bott theory in this setting, and several generalizations of the Morse inequalities for arbitrary matchings. Second, to introduce a homological category and prove the corresponding Lusternik-Schnirelmann theorem. We describe the main motivations for the latter goal: first, the absence of a discrete Lusternik-Schnirelmann theorem for arbitrary matchings, not even in the simplicial setting, and second, the gap in the literature even for a Lusternik-Schnirelmann theorem for Morse (acyclic) matchings. Nevertheless, several attempts have been made. On the one hand, a subset of the authors, in joint work with Scoville, proved a Lusternik-Schnirelmann theorem for a notion of simplicial category and acyclic matchings in the context of simplicial complexes \cite{SamQuiNiJa}. However, in order to do so, they developed another notion of criticality, which leads to a different and non-equivalent definition of discrete Morse function. On the other hand, first Scoville and Aaronson \cite{Scoville}, then Tanaka \cite{Tanaka_LS_thm}, and afterwards Knudson and Johnson \cite{Knudson} approached the task by defining another categorical invariant while keeping the usual definition of discrete Morse function. The organization of the paper is as follows. In Section \ref{sec:preliminaries} we recall some definitions and standard results about posets, finite topological spaces and regular complexes. Section \ref{sec:matchings_morse_bott_functions} is devoted to the study of Morse-Bott theory in the context of posets. In Section \ref{sec:fundamental_theorems_and_consequences} we prove the Fundamental Theorems of Morse-Bott Theory in this setting and exploit some of their consequences. Finally, in Section \ref{sec:homological_lusternik-schnirelmann_theorem} we introduce a new notion of homological category and prove the corresponding Lusternik-Schnirelmann theorem for general matchings.
\section{Finite Spaces, Posets and Simplicial Complexes} \label{sec:preliminaries} This section is devoted to introducing the objects we will work with. Most of the material is well established in the literature; for further details or proofs the reader is referred to \cite{Barmak_book,BarMin12,Bloch,Farmer,SamQuiDaSVil,Minian,Wachs}. \subsection{Finite spaces and posets} It is well known that finite posets and finite $T_0$-spaces are in bijective correspondence. If $(X, \leq)$ is a poset, a topology $\mathcal{T}$ on $X$ is given by taking the sets $$U_x:=\{y\in X: y\leq x\}$$ as a basis. On the other hand, if $X$ is a finite $T_0$-space, define for each $x\in X$ the {\it minimal open set} $U_x$ as the intersection of all open sets containing $x$. Then $X$ may be given an order by defining $y\leq x$ if and only if $U_y\subset U_x$. It is easy to see that these correspondences are mutual inverses of each other. Moreover, a map between posets $f\colon X\to Y$ is order preserving if and only if it is continuous when considered as a map between the associated finite spaces. All posets will be assumed to be finite, and by finite space we will mean finite $T_0$-space. We will use the notions of finite ($T_0$-)space and poset interchangeably. We need to introduce some terminology. \begin{definition} A {\it chain} in a poset $X$ is a subset $C\subseteq X$ such that if $x,y\in C$, then either $x\leq y$ or $y\leq x$. \end{definition} \begin{definition} The {\it height} of a poset $X$ is the maximum length of the chains in $X$, where the chain $x_0<x_1<\ldots <x_n$ has length $n$. The height $h(x)$ of an element $x\in X$ is the height of $U_x$ with the induced order. \end{definition} \begin{definition} A poset $X$ is said to be {\it homogeneous} of degree $n$ if all maximal chains in $X$ have length $n$. A poset is {\it graded} if $U_x$ is homogeneous for every $x\in X$. In that case, the {\em degree} of $x$, denoted by $\deg(x)$, is its height.
\end{definition} We will denote both the height and degree of an element by superscripts, for example $x^{(p)}$. Let $X$ be a finite poset, $x,y\in X$. If $x< y$ and there is no $z\in X$ such that $x< z< y$, we write $x\prec y$. For $x\in X$ we also define $\widehat{U}_x:=\{w\in X\colon w<x\}$ as well as $F_x:=\{y\in X : y\geq x\}$ and $\widehat{F}_x:=\{y\in X : y>x\}$. \subsection{The McCord functors} We now recall McCord functors between posets and simplicial complexes \cite{McCord}. Given a poset $X$, we define its {\it order complex} $\mathcal{K}(X)$ as the simplicial complex whose $k$-simplices are the non-empty $k$-chains of $X$. Furthermore, given an order preserving map $f\colon X\to Y$ between posets, we define the simplicial map $\mathcal{K}(f)\colon \mathcal{K}(X) \to \mathcal{K}(Y)$ given by $\mathcal{K}(f)(x)=f(x)$. Conversely, if $K$ is a simplicial complex, we define the face poset of $K$, $\Delta(K)$, as the poset of simplices of $K$ ordered by inclusion. Given a simplicial map $\phi\colon K \to L $ we define the order preserving map $\Delta(\phi)\colon \Delta(K) \to \Delta(L)$ given by $\Delta(\phi)(\sigma)=\phi(\sigma)$ for each simplex $\sigma$ of $K$. The face poset functor can be defined analogously for regular CW-complexes. That is, given a regular CW-complex $K$, $\Delta(K)$ is the poset of cells of $K$ ordered by inclusion. Given a cellular map $\phi\colon K \to L $ we define the order preserving map $\Delta(\phi)\colon \Delta(K) \to \Delta(L)$ given by $\Delta(\phi)(\sigma)=\phi(\sigma)$ for each cell $\sigma$ of $K$. Note that for the simplicial complex $K$, $\mathcal{K}\Delta(K)$ is $\mathrm{sd}(K)$, the first barycentric subdivision of $K$. By analogy, the first subdivision of a finite poset $X$ is defined as $\Delta\mathcal{K}(X)$. \begin{theorem}\label{thm:mccords_thms} The following statements hold: \begin{enumerate} \item Let $X$ be a finite $T_0$-space. 
Then there is a map $\mu_X\colon |\mathcal{K}(X)|\to X$ which is a weak homotopy equivalence. \item Let $K$ be a simplicial complex. Then there is a map $\mu_K\colon |K|\to \Delta(K)$ which is a weak homotopy equivalence. \end{enumerate} \end{theorem} The maps $\mu_X\colon |\mathcal{K}(X)|\to X$ and $\mu_K\colon |K|\to \Delta(K)$ will be referred to as the McCord maps. For details and a proof of the result above see \cite{Barmak_book}. \subsection{Cellular poset homology} We shall consider a special kind of posets called cellular. They were first introduced by Farmer \cite{Farmer} and then recovered by Minian \cite{Minian}. Farmer's definition is more general, while Minian's is better suited to our purposes; that is why we present the latter. \begin{definition}[\cite{Minian}] The poset $X$ is {\it cellular} if it is graded and, for every $x\in X$, $\widehat{U}_{x}$ has the homology of a $(p-1)$-sphere, where $p$ is the degree of $x$. \end{definition} Let $X$ be a cellular poset. We denote by $H_*(X)$ the singular homology of $X$. Unless stated otherwise, homology will be considered with integer coefficients. However, the constructions work as well for homology modules with coefficients in any principal ideal domain. We recall the construction, due to Farmer \cite{Farmer} and Minian \cite{Minian}, of a ``cellular homology theory'' for cellular posets. \begin{definition} Given a finite graded poset $X$, we define $X^{(p)}$ as the subposet of elements of degree less than or equal to $p$, i.e. $$X^{(p)}=\{x\in X\colon \deg(x)\leq p\}.$$ \end{definition} Given the cellular poset $X$, there is a natural filtration by degree $$X^{(0)}\subset X^{(1)}\subset \cdots \subset X^{(n)}=X$$ which allows us to define a {\it cellular chain complex} $(C_*,d)$ as follows: $$C_p(X)=H_p(X^{(p)},X^{(p-1)})=\bigoplus_{\deg(x)=p}H_{p-1}(\widehat{U}_x),$$ which is a free abelian group with one generator for each element of $X$ of degree $p$.
The differential $d\colon C_p(X)\to C_{p-1}(X)$ is defined as the composition $$\begin{tikzcd} H_p(X^{(p)},X^{(p-1)}) \arrow[r, "\partial"] & H_{p-1}(X^{(p-1)}) \arrow[r, "j"] & H_{p-1}(X^{(p-1)},X^{(p-2)}) \end{tikzcd}$$ where $j$ is the canonical map induced by the inclusion and $\partial$ is the connecting homomorphism coming from the long exact sequence associated to the pair $(X^{(p)},X^{(p-1)})$. It can be shown \cite{Minian} that the differential $$d\colon C_p(X)\to C_{p-1}(X)$$ can be written as $d(x)=\sum_{w\prec x}\epsilon(x,w)w$, where the incidence number $\epsilon(x,w)$ is the degree of the map $$\widetilde{\partial}\colon \mathbb{Z} =H_{p-1}(\widehat{U}_x)\to H_{p-2}(\widehat{U}_w)=\mathbb{Z},$$ which coincides with the connecting morphism of the Mayer-Vietoris sequence associated to the covering $\widehat{U}_x=(\widehat{U}_x-\{w\})\cup U_w$ \cite{Minian}. \begin{theorem}[{\cite[Theorem 3.7]{Minian}}] \label{thm:cellular_homology_isomorphis_homology_cellular_posets} If $X$ is a cellular poset, then $$H_*(C_*(X))\cong H_*(X).$$ \end{theorem} \subsection{Homologically admissible posets} We recall the notion of homologically admissible posets introduced by Minian \cite{Minian}. We denote by $\mathcal{H}(X)$ the Hasse diagram associated to the poset $X$. \begin{definition}[\cite{Minian}] Let $X$ be a poset. An edge $(w,x)\in \mathcal{H}(X)$ is homologically admissible if $\widehat{U}_x-\{w\}$ is acyclic. A poset is {\it homologically admissible} if all its edges are homologically admissible. \end{definition} The importance of homologically admissible posets lies, in part, in the following result. \begin{lema}[{\cite[Remark 3.9]{Minian}}] \label{lema:homolog_admissible_incidence_numbers} If $(w,x)$ is a homologically admissible edge of a cellular poset $X$, then the incidence number $\epsilon(x,w)$ is $1$ or $-1$. \end{lema} \begin{remark} The face posets of regular CW-complexes are homologically admissible \cite[Remark 2.6]{Minian}.
However, not every homologically admissible poset is the face poset of a regular CW-complex \cite[Example 2.7]{Minian}. \end{remark} \begin{lemma}[\cite{Minian}] \label{lemma:homologiaclly_admissible_is_cellular} Let $X$ be a poset. If $X$ is homologically admissible, then it is cellular. \end{lemma} \begin{remark} In Lemma \ref{lemma:homologiaclly_admissible_is_cellular} it is assumed that the empty set is not acyclic. \end{remark} \subsection{Euler Characteristic} \begin{definition} Let $X$ be a finite graded poset of degree $n$. Denote by $X^{(=p)}$ the set of elements of degree $p$. The {\em graded Euler-Poincar\'{e} characteristic} of $X$ is defined as the number $$\chi_{g}(X)=\sum_{p=0}^n(-1)^{p}\#X^{(=p)}.$$ \end{definition} It is clear that if $X=\Delta(K)$ for a finite simplicial complex $K$, then $\chi_{g}(X)=\chi(\mathcal{K}(X))$. Moreover, as a consequence of Minian's result (Theorem \ref{thm:cellular_homology_isomorphis_homology_cellular_posets}), the standard homological argument (see for example \cite[p. 146-147]{Hatcher}) proves that for a finite cellular poset $X$, $\chi_{g}(X)=\chi(\mathcal{K}(X))$. However, this does not hold in general for finite posets, as the following example illustrates: \begin{example} Consider the poset $X$ represented in Figure \ref{fig:non_coincidence_of_euler_characs}. Due to the homotopy invariance of $\chi$, $\chi(\mathcal{K}(X))=1$ because $X$ is contractible by removing beat points. However, $\chi_{g}(X)=2$. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{eu_charac.eps} \caption{A poset where $\chi_{g}(X)\neq\chi(\mathcal{K}(X))$.} \label{fig:non_coincidence_of_euler_characs} \end{figure} \end{example} \section{Dynamics and Morse-Bott functions for posets}\label{sec:matchings_morse_bott_functions} In this section we generalize the notion of discrete Morse-Bott function, introduced by Forman \cite{Forman3}, to the context of posets.
This also generalizes that of Morse functions on posets defined by Minian \cite{Minian}. \subsection{Morse functions} We recall the definition of Morse function for posets introduced by Minian \cite{Minian}. \begin{definition} Let $X$ be a finite poset. A {\it Morse function} is a function $f\colon X \to \mathbb{R}$ such that, for every $x\in X$, we have $$ \#\{y\in X\colon x\prec y \text{ and } f(x)\geq f(y)\}\leq 1 $$ and $$ \#\{z\in X \colon z\prec x \text{ and } f(z)\geq f(x)\}\leq 1. $$ If $f$ is a Morse function, an element $x\in X$ is said to be {\it critical} if $$ \#\{y\in X \colon x\prec y \text{ and } f(x)\geq f(y)\}=0 $$ and $$ \#\{z\in X \colon z\prec x \text{ and } f(z)\geq f(x)\}=0. $$ \end{definition} The set of critical points is denoted by $\mathrm{crit}(f)$. The images of the critical points are called {\it critical values} and the real numbers which are not critical values are called {\it regular values}. The points which are not critical are said to be {\it regular points}. \subsection{Matchings} Forman \cite{Forman4} introduced combinatorial vector fields. It is easy to see that this notion can be replaced by the concept of matching, introduced into discrete Morse Theory by Chari \cite{Chari}. \begin{definition} A {\it matching} in a poset $X$ is a subset $\mathcal{M} \subset X\times X$ such that \begin{itemize} \item $(x, y) \in \mathcal{M}$ implies $x\prec y$; \item each $x \in X$ belongs to at most one element in $\mathcal{M}$. \end{itemize} \end{definition} Given a poset $X$, let us denote by $\mathcal{H}(X)$ its associated Hasse diagram. If $\mathcal{M}$ is a matching in $X$, write $\mathcal{H}_{\mathcal{M}}(X)$ for the directed graph obtained from $\mathcal{H}(X)$ by reversing the orientations of the edges which are not in $\mathcal{M}$. Any node of $\mathcal{H}(X)$ not incident with any edge of $\mathcal{M}$ is called {\it critical}. The set of all critical nodes of $\mathcal{M}$ is denoted by $C_{\mathcal{M}}$.
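The reversal convention above is easy to experiment with on small examples. The following sketch (our own illustration, not part of the paper) builds $\mathcal{H}_{\mathcal{M}}(X)$ for the face poset of an interval with vertices $a,b$ and edge $e$, with the matching $\mathcal{M}=\{(a,e)\}$, checks that the resulting directed graph is acyclic, and lists the critical nodes:

```python
from itertools import chain

# Face poset of an interval: vertices a, b and edge e, with a < e and b < e.
hasse = {("a", "e"), ("b", "e")}      # covering relations x ≺ y
matching = {("a", "e")}               # a matching: each element used at most once

# Build H_M(X): matched Hasse edges point upward (x → y),
# unmatched edges are reversed (y → x), as in the definition above.
digraph = {}
for (x, y) in hasse:
    src, dst = (x, y) if (x, y) in matching else (y, x)
    digraph.setdefault(src, []).append(dst)

def is_acyclic(g):
    """Depth-first search for a directed cycle (white/grey/black colouring)."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in set(g) | set(chain.from_iterable(g.values()))}
    def dfs(v):
        color[v] = GREY
        for w in g.get(v, []):
            if color[w] == GREY or (color[w] == WHITE and not dfs(w)):
                return False          # back edge found: a cycle exists
        color[v] = BLACK
        return True
    return all(color[v] != WHITE or dfs(v) for v in list(color))

# Critical nodes: elements not incident with any edge of the matching.
matched = set(chain.from_iterable(matching))
critical = {v for edge in hasse for v in edge} - matched

print(is_acyclic(digraph))   # True: M is a Morse matching
print(sorted(critical))      # ['b']
```

Here $\mathcal{H}_{\mathcal{M}}(X)$ is the directed path $a\to e\to b$, which is acyclic, so $\mathcal{M}$ is a Morse matching with the single critical node $b$; reversing the matched edge instead would make both vertices critical.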
\begin{definition} Let $\mathcal{M}$ be a matching on a poset $X$ and let $x^{(p)}$ and $\tilde{x}^{(p)}$ be two elements of $X$. An $\mathcal{M}$-path, $\gamma$, of index $p$ from $x^{(p)}$ to $\tilde{x}^{(p)}$ is a sequence $$\gamma \colon x=x_0^{(p)}\prec y_0^{(p+1)} \succ x_1^{(p)}\prec y_1^{(p+1)} \succ \cdots \prec y_{r-1}^{(p+1)} \succ x_r^{(p)}=\tilde{x}^{(p)}$$ such that for each $i=0,1,\ldots, r-1$ with $r\geq 1$: \begin{enumerate} \item $(x_i,y_i) \in \mathcal{M}$, \item $x_i\neq x_{i+1}$. \end{enumerate} \end{definition} An {\it $\mathcal{M}$-cycle} $\gamma$ in $\mathcal{H}_{\mathcal{M}}(X)$ is a closed $\mathcal{M}$-path in $\mathcal{H}_{\mathcal{M}}(X)$ seen as a directed graph. The matching $\mathcal{M}$ is said to be a {\it Morse matching} if $\mathcal{H}_{\mathcal{M}}(X)$ is acyclic. \subsection{Critical subposets}\label{subsec:critical_subposets} In this subsection we develop the notion of critical subposet ({\it chain recurrent set}) by means of matchings, generalizing the analogous notion introduced by Forman \cite{Forman3} in the context of discrete Morse Theory. \begin{definition} Let $\mathcal{M}$ be a matching on $X$. We say that $x^{(p)}\in X$ is an element of the {\it chain recurrent set} $\mathcal{R}$ if one of the following conditions holds: \begin{itemize} \item $x$ is a critical point of $\mathcal{M}$. \item There is an $\mathcal{M}$-cycle $\gamma$ in $\mathcal{H}_{\mathcal{M}}(X)$ such that $x\in \gamma$. \end{itemize} \end{definition} The chain recurrent set decomposes into disjoint subsets $\Lambda_i$ by means of the equivalence relation defined as follows: \begin{enumerate} \item If $x$ is a critical point, then it is only related to itself. \item Given $x,y\in \mathcal{R}$, $x\neq y$, $x \sim y$ if there is a cycle $\gamma$ such that $x,y\in \gamma$. \end{enumerate} Let $\Lambda_1,\ldots,\Lambda_k$ be the equivalence classes of $\mathcal{R}$. The $\Lambda_i$'s are called {\it basic sets}.
Each $\Lambda_i$ consists of either a single critical point of $\mathcal{M}$ or a union of cycles. \begin{example} Consider the finite model of $\mathbb{R}P^2$ depicted in Figure \ref{fig:RP2} (see \cite[Example 7.1.1]{Barmak_book}). There is a critical point which is also a basic set, depicted with a cross. Moreover, the dashed and dotted arrows represent another two basic sets, each consisting of one cycle. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{RP2_matching.eps} \caption{A finite model of $\mathbb{R}P^2$.} \label{fig:RP2} \end{figure} \end{example} \subsection{Integration of matchings}\label{subsection:morse-bott_and_matchings} In the differentiable category, Morse theory generalizes naturally to Morse-Bott theory. The purpose of this subsection is to generalize Minian's integration result for matchings \cite[Lemma 3.12]{Minian} to the context of Morse-Bott functions and arbitrary matchings. \begin{definition} Given a matching on a finite poset $X$, a function $f\colon X\to \mathbb{R}$ is said to be a {\it Morse-Bott} or {\it Lyapunov} function if it is constant on each basic set and is a Morse function away from the chain recurrent set. \end{definition} We say that the {\em critical values} of a Morse-Bott function are the images of the basic sets. The ideas of Forman's proof of \cite[Theorem 2.4]{Forman3} generalize to the context of graded posets, giving: \begin{theorem}[Integration of matchings]\label{thm:integration_matchings} Let $X$ be a finite graded poset and let $\mathcal{M}$ be a matching in $X$. Then there exists a Morse-Bott function $f\colon X\to \mathbb{R}$ such that: \begin{enumerate} \item If $x^{(p)}\notin \mathcal{R}$ and $y^{(p+1)}\succ x$, then $$\begin{cases} f(x)<f(y), & \text{ if } (x,y) \notin \mathcal{M}, \\ f(x)\geq f(y), & \text{ if } (x,y) \in \mathcal{M}.
\end{cases}$$ \item If $x^{(p)} \in \mathcal{R}$ and $y^{(p+1)}\succ x$, then $$\begin{cases} f(x)=f(y), & \text{ if } x\sim y, \\ f(x)< f(y), & \text{ if } x \nsim y. \end{cases}$$ \end{enumerate} \end{theorem} We introduce the following definition: given a finite poset $X$ and a Morse-Bott function $f\colon X\to \mathbb{R}$, for each $a\in \mathbb{R}$ we write $$X_a=\bigcup\limits_{f(x)\leq a}{U_x}.$$ \subsection{Morse-Smale matchings} In this subsection we generalize the notion of Morse-Smale vector field from the context of simplicial complexes \cite{Forman3} to the setting of finite spaces. Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a matching on $X$. An $\mathcal{M}$-cycle $\gamma$ is {\em prime} if there do not exist a natural number $n>1$ and an $\mathcal{M}$-cycle $\widetilde{\gamma}$ such that $\gamma$ is the concatenation of $\widetilde{\gamma}$ with itself $n$ times (see \cite[Definition 5.3]{Forman3} for details). An equivalence relation on the set of $\mathcal{M}$-cycles is defined as follows. Two $\mathcal{M}$-cycles $\gamma$ and $\widetilde{\gamma}$ are equivalent if $\widetilde{\gamma}$ is the result of varying the starting point of $\gamma$ (see \cite[p. 631]{Forman3} for an example). An equivalence class of $\mathcal{M}$-cycles is called a {\em closed $\mathcal{M}$-orbit}. The equivalence class of $\gamma$ is denoted by $[\gamma]$. The concepts of {\em prime closed $\mathcal{M}$-orbit} and {\em index of a closed $\mathcal{M}$-orbit} are defined as expected (see \cite{Forman3} for details). A special kind of matching, which in a certain sense controls the complexity of the chain recurrent set, will play an important role. \begin{definition} Let $X$ be a homologically admissible poset. A matching $\mathcal{M}$ on $X$ is a {\em Morse-Smale matching} if the chain recurrent set $\mathcal{R}$ consists only of critical points and pairwise disjoint prime closed $\mathcal{M}$-orbits.
\end{definition} \section{Fundamental Theorems and consequences}\label{sec:fundamental_theorems_and_consequences} The purpose of this section is to prove the Fundamental Theorems of Morse Theory for Morse-Bott functions on posets and to obtain some consequences. \subsection{Fundamental Theorems} In what follows, we extend the equivalence relation defined in Subsection \ref{subsec:critical_subposets} from $\mathcal{R}$ to all of $X$ by declaring each point which is not in $\mathcal{R}$ to be an equivalence class on its own. \begin{definition} Given a finite poset $X$, $x\in X$ and a matching $\mathcal{M}$ on $X$, we define: $$\partial [x]=\{w\in X \colon w\prec \tilde{x} \text{ for some } \tilde{x}\sim x \text{ but } w \nsim \tilde{x}\}.$$ \end{definition} \begin{example} Consider the poset depicted in Figure \ref{fig:RP2}. In Figure \ref{fig:RP2_d[x]} we show $\partial [x]$ for any $x$ in the dashed cycle of Figure \ref{fig:RP2}. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{RP2_matching_dx.eps} \caption{Example of $\partial [x]$.} \label{fig:RP2_d[x]} \end{figure} \end{example} We introduce some auxiliary notation. For each edge $(x,y)\in \mathcal{M}$, we say that $x$ is the {\em source} of the edge and $y$ is the {\em target}. For convenience, we define the {\em source} and {\em target maps} (defined only for elements in the matching $\mathcal{M}$) as follows: given $(x,y)\in \mathcal{M}$, $s(y)=x$ and $t(x)=y$. The lemma below follows from the definition of matching: \begin{lema}\label{lema:technical_Lyapunov} Let $\gamma$ be a cycle of index $p$ and let $u^{(p-1)}\in X$, $\tilde{v}^{(p)}\in X$, $w^{(p+1)}\in X$ and $r^{(p+2)}\in X$ be such that $u,\tilde{v},w,r\notin \gamma$. Then the following holds: $$t(u)\notin \gamma, \; t(\tilde{v})\notin \gamma, \; s(w)\notin \gamma \; \text{ and } \; s(r)\notin \gamma.$$ \end{lema} Our next result is a homological collapsing theorem for Morse-Bott functions.
As a consequence of Lemma \ref{lema:technical_Lyapunov}, the elements of a cycle cannot be connected by arrows to elements which are not in the cycle. Therefore, the result below follows from \cite[Theorem 4.2.2]{SamQuiDaSVil}. \begin{theorem}\label{thm:homological_collapse_thm_Lyapunov} Let $X$ be a finite homologically admissible poset and let $f\colon X \to \mathbb{R}$ be a Morse-Bott function. If $[a,b]$ contains no critical values, then $i\colon X_a \hookrightarrow X_b$ induces an isomorphism in homology. \end{theorem} In this generalized context, we also have a result which explains what happens when a critical value is reached. \begin{theorem}\label{thm:homot_poset_crit_points_Lyapunov} Let $X$ be a finite homologically admissible poset and let $f\colon X \to \mathbb{R}$ be a Morse-Bott function. If $f(x)\in [a,b]$ is a critical value and there are no other critical values of $f$ in $[a,b]$, then $X_b=X_a \cup_{\partial [x]}[x]$. \end{theorem} \begin{proof} There are two cases to consider. First, if $[x]$ is a critical point, then the result reduces to \cite[Theorem 4.2.8]{SamQuiDaSVil}. So, assume $[x]$ is a cycle of index $p$. Let $\tilde{f}\colon X/\sim \to \mathbb{R}$ denote the function induced by $f$ on the set of equivalence classes. We may assume that $\tilde{f}$ is injective, that $\tilde{f}([x])>a$ and that the only critical subposet in $f^{-1}([a,b])$ is $[x]$. Since $[x]$ is a cycle and $f(x)$ is a critical value, given $y^{(p+1)}\succ \tilde{x}$ with $y\notin [x]$ and $\tilde{x}\in [x]$, we have $f(y)>f(\tilde{x})$. Hence, $f(y)>b$ and Lemma \ref{lema:technical_Lyapunov} guarantees that $f(z)>b$ for every $z>\tilde{x}$, $z\notin [x]$. Therefore, $[x]\cap X_a=\emptyset$. Given any $w^{(p-1)} \prec \tilde{x}^{(p)}$ or $w^{(p)} \prec \tilde{x}^{(p+1)}$ with $\tilde{x}\in [x]$ and $w\notin [x]$, due to the criticality of $[x]$, it holds that $f(w)<f(\tilde{x})$. Therefore $f(w)<a$ and $w\in X_a$.
Hence $\partial [x] \subset X_a$. That is, $X_b=X_a \cup_{\partial [x]}[x]$. \end{proof} \subsection{Morse-Bott inequalities}\label{subsubsection:morse_bott_pitcher_inequalities} In this subsection we generalize the Morse-Bott inequalities from the context of CW-complexes \cite[Theorem 3.1]{Forman3} to the setting of posets. This result can be seen as a combinatorial analogue of a theorem due to Conley \cite[Theorem 1.2]{Franks} \cite{Conley}. Again, we assume that our coefficients lie in a principal ideal domain $R$. From now on the poset $X$ is assumed to be homologically admissible. Given a subposet $Y\subset X$, we denote by $\bar{Y}$ the subposet $\cup_{x\in Y}U_x$ and by $\dot{Y}$ the subposet $\bar{Y}-Y$. \begin{definition} For each $k\geq 0$, we define $$m_k=\sum_{\text{basic sets } \Lambda_i} \mathrm{rank \,} H_k(\bar{\Lambda}_i,\dot{\Lambda}_i).$$ \end{definition} Observe that in the particular case of a Morse matching, the basic sets are just critical points and $m_k$ is the number of critical points of index $k$. \begin{lema} If the index of the basic set $\Lambda_i$ is $p$, then $H_k(\bar{\Lambda}_i,\dot{\Lambda}_i)=0$ unless $k=p,p+1$. Moreover, if $\Lambda_i$ is just a critical point $x^{(p)}$, then $H_k(\bar{\Lambda}_i,\dot{\Lambda}_i)=0$ for $k\neq p$ and $H_p(\bar{\Lambda}_i,\dot{\Lambda}_i)\cong R$, the principal ideal domain of coefficients. \end{lema} \begin{proof} For convenience, during the proof we will denote $\Lambda_i=\Lambda$. Since all the posets involved are cellular, we can use cellular homology.
Consider the Homology Long Exact Sequence for the pair $(\bar{\Lambda},\dot{\Lambda})$: $$ \begin{tikzcd}[cramped] \cdots \arrow[r, ""] & H_{p}(\bar{\Lambda}) \arrow[r, ""] & H_{p}(\bar{\Lambda},\dot{\Lambda}) \arrow[r, ""] & H_{p-1}(\dot{\Lambda}) \ar[out=-15, in=150]{dll}\\ & H_{p-1}(\bar{\Lambda}) \arrow[r, "j"] & H_{p-1}(\bar{\Lambda},\dot{\Lambda}) \arrow[r, "\partial"] & H_{p-2}(\dot{\Lambda}) \ar[out=-15, in=150]{dll}{\cong}\\ & H_{p-2}(\bar{\Lambda}) \arrow[r, ""] & H_{p-2}(\bar{\Lambda},\dot{\Lambda}) \arrow[r, ""] & H_{p-3}(\dot{\Lambda}) \arrow[r, ""] & \cdots \\ \end{tikzcd} $$ First of all, the homomorphism $H_{k}(\dot{\Lambda})\to H_{k}(\bar{\Lambda})$ is an isomorphism for $k\leq p-2$, so $H_{k}(\bar{\Lambda},\dot{\Lambda})=0$ for $k\leq p-2$. Second, we have that $$H_{p-1}(\bar{\Lambda},\dot{\Lambda})=\ker \partial=\mathrm{Im}\, j\cong \frac{H_{p-1}(\bar{\Lambda})}{\ker j}\cong \frac{H_{p-1}(\bar{\Lambda})}{\mathrm{Im}\, i},$$ where $i\colon H_{p-1}(\dot{\Lambda})\to H_{p-1}(\bar{\Lambda})$ is induced by the inclusion. Third, the homomorphism $H_{p-1}(\dot{\Lambda}) \to H_{p-1}(\bar{\Lambda})$ induced by the inclusion is surjective by the construction of cellular homology. Therefore $H_{p-1}(\bar{\Lambda},\dot{\Lambda})=0$. Fourth, if $\Lambda$ is just a critical point $x^{(p)}$, then $H_k(\bar{\Lambda},\dot{\Lambda})=H_k(U_x,\widehat{U}_x)$, and by the cellularity of $X$ and the Homology Long Exact Sequence for the pair $(U_x,\widehat{U}_x)$ the result follows. \end{proof} We denote by $b_k$ the Betti number of dimension $k$ with coefficients in the principal ideal domain $R$. Taking into account the ideas involved in the proof of \cite[Theorem 3.1]{Forman3} together with our Theorems \ref{thm:homological_collapse_thm_Lyapunov} and \ref{thm:homot_poset_crit_points_Lyapunov} yields the Strong Morse-Bott inequalities: \begin{theorem}[Strong Morse-Bott inequalities]\label{thm_strong_morse__bott_inequalities} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a matching on $X$.
Then, for every $k\geq 0$: $$m_k-m_{k-1}+\cdots +(-1)^k m_0\geq b_k-b_{k-1}+\cdots +(-1)^k b_0.$$ \end{theorem} From the standard argument (see \cite[p. 30]{Milnor}), we obtain the Weak Morse-Bott inequalities: \begin{corollary}[Weak Morse-Bott inequalities] \label{coro:weak_morse_ineq} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a matching on $X$. Then: \begin{enumerate} \item For every $k \geq 0$, $m_k\geq b_k$. \item $\chi(X)=\sum_{k=0}^{\deg(X)}(-1)^k b_k=\sum_{k=0}^{\deg(X)}(-1)^k m_k.$ \end{enumerate} \end{corollary} \subsection{Morse-Smale matchings}\label{subsec:Morse_Smale} In this section we generalize \cite[Section 7]{Forman2} to the context of homologically admissible posets, while improving some of the results even in the case of simplicial or regular CW-complexes. Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a Morse-Smale matching on $X$. We denote by $c_k$ the number of critical points of index $k$ and by $A_k$ the number of prime closed $\mathcal{M}$-orbits of index $k$. Denote by $\mu_k$ the minimum number of generators of the torsion subgroup $T_k$ of $H_k(X)$. Combining the proof of \cite[Theorem 7.1]{Forman2} with our Pitcher strengthening of the Morse inequalities \cite[Corollary 5.2.3]{SamQuiDaSVil}, we obtain the following improvement of \cite[Theorem 7.1]{Forman2}, taking torsion into account. \begin{theorem}\label{thm:morse_bott_inequalities_number_orbits} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a Morse-Smale matching on $X$. Let the coefficients $R$ be a principal ideal domain. Then, for every $k\geq 0$: $$A_k+\sum_{i=0}^k (-1)^{i}c_{k-i} \geq \mu_k + \sum_{i=0}^k (-1)^{i}b_{k-i}.$$ \end{theorem} \begin{definition} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a Morse-Smale matching on $X$. Endow each element of $X$ with an orientation.
Let $\gamma$ be an $\mathcal{M}$-path $$\gamma \colon x_0^{(p)}\prec y_0^{(p+1)} \succ x_1^{(p)}\prec y_1^{(p+1)} \succ \cdots \prec y_{r-1}^{(p+1)} \succ x_r^{(p)}.$$ We define the multiplicity of $\gamma$ by $$\prod_{i=0}^{r-1}\left(-\langle d_{(p+1)}y_i,x_i\rangle_p \langle d_{(p+1)}y_i,x_{i+1}\rangle_p\right)$$ where $d$ is the cellular boundary operator and $\langle \bullet,\bullet \rangle_p$ is the inner product on $C_p(X)$ for which the degree $p$ elements of $X$ are mutually orthogonal. \end{definition} \begin{remark} Observe that the multiplicity of a path is always $1$ or $-1$ due to Lemma \ref{lema:homolog_admissible_incidence_numbers}. \end{remark} \begin{remark} The generalization of \cite[Lemma 4.6]{Forman3} to our context is straightforward. \end{remark} Both \cite[Theorem 7.3]{Forman3} and \cite[Corollary 7.4]{Forman3} generalize to our setting with the same proofs: \begin{theorem}\label{thm:morse_bott_inequalities_multiplicity_one_orbits} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a Morse-Smale matching on $X$. Let the coefficients be the field $\mathbb{R}$. Denote by $A'_p$ the number of closed $\mathcal{M}$-orbits of index $p$ and multiplicity 1. Then, for every $k\geq 0$: $$A'_k+\sum_{i=0}^k (-1)^{i}c_{k-i} \geq \sum_{i=0}^k (-1)^{i}b_{k-i}(\mathbb{R}).$$ \end{theorem} \begin{remark} While \cite[Corollary 7.4]{Forman3} refined \cite[Theorem 7.2]{Forman3}, Theorem \ref{thm:morse_bott_inequalities_multiplicity_one_orbits} does not refine our improved Theorem \ref{thm:morse_bott_inequalities_number_orbits}. They are complementary results. \end{remark} \section{Homological Lusternik-Schnirelmann Theorem}\label{sec:homological_lusternik-schnirelmann_theorem} The purpose of this section is to prove a Lusternik-Schnirelmann Theorem for general matchings and a suitable definition of homological category.
\subsection{Definition of the homological chain category and first properties} Let $(C_*,\partial)$ denote a free chain complex of abelian groups such that each term $C_p$ is finitely generated and only finitely many of the $C_p$ are nonzero. We define the {\em rank} of $C_*$ as $\mathrm{rank \,}(C_*)=\sum_p\mathrm{rank \,}(C_p)$. \begin{definition}\label{def:homological_chain_category_for_chain_complexes} Let $(C_*,\partial)$ be a free chain complex of abelian groups. We define its {\em homological chain category} \begin{equation*} \mathrm{hccat}(C_*)=\inf\Bigg\{ \begin{aligned} \mathrm{rank \,}(B_*)\colon B_* \text{ bounded subcomplex of } C_* \text{ and the } &\\ \text{ inclusion } i\colon B_* \hookrightarrow C_* \text{ is a quasi-isomorphism.} \end{aligned} \Bigg\} \end{equation*} \end{definition} Let $X$ be a topological space. For all the definitions that follow we consider coefficients in $\mathbb{Z}$. We denote by $S_*(X)$ its singular chain complex. \begin{definition} Let $X$ be a topological space. We define its {\em homological chain category} as $\mathrm{hccat}(X)=\mathrm{hccat}(S_*(X))$. \end{definition} We introduce a homological lower bound for $\mathrm{hccat}(X)$ analogous to the Pitcher strengthening of the Morse inequalities. \begin{prop}\label{prop:homological_lower_bound_HCcat} Let $X$ be a topological space with finitely generated homology. Then $$\sum_k b_k + 2\sum_k \mu_k \leq \mathrm{hccat}(X).$$ \end{prop} \begin{proof} Let us denote by $(B_*,\partial)$ a bounded free chain complex whose homology is isomorphic to $H_*(X)$. By standard algebra (see, for example, \cite[Theorem 4.11]{Prasolov}), we have $b_k+\mu_k+\mu_{k-1}\leq \mathrm{rank \,}(B_k)$. The result now follows by summing over $k$. \end{proof} \begin{cor} Let $X$ be a homologically admissible poset or a CW-complex with finitely generated homology.
Then $$\chi(X) \leq \mathrm{hccat}(X).$$ \end{cor} In fact, the bound given by Proposition \ref{prop:homological_lower_bound_HCcat} is the best possible, as a consequence of the following result due to Pitcher \cite[Lemma 13.2]{Pitcher}. \begin{prop} Let $(C_*,\partial)$ be a free chain complex with homology groups $H_k$, $k=0,1,\ldots$ Denote by $\mu_k$ the minimum number of generators of the torsion subgroup $T_k$ of $H_k$ and denote by $b_k$ the rank of $H_k$. Then there exists a free chain complex $(L,\partial^{L})$ such that: \begin{enumerate} \item For every $k\geq 0$, the group $L_k$ has rank $b_k+\mu_k+\mu_{k-1}$. \item There exists a monomorphism $i\colon L\hookrightarrow C$ which is a chain map. \item The monomorphism $i\colon L\hookrightarrow C$ is a quasi-isomorphism. \end{enumerate} \end{prop} \begin{corollary}\label{coro:exact_value_hccat} Let $X$ be a topological space with finitely generated homology. Then $$\mathrm{hccat}(X)=\sum_k b_k + 2\sum_k \mu_k.$$ \end{corollary} Moreover, observe that a topological space $X$ is acyclic if and only if $\mathrm{hccat}(X)=1$. As a consequence of \cite[Example 1.33]{CLOT} we have the following result relating the homological chain category to the Lusternik-Schnirelmann category: \begin{prop}\label{prop:relation_cat_HCcat} Let $K$ be a simply connected CW-complex with finitely generated homology groups such that there exists $n$ satisfying $H_n(K)\neq 0$ and $H_p(K)=0$ for $p>n$. Then $$\mathrm{cat}(K)\leq \mathrm{hccat}(K).$$ \end{prop} The result does not necessarily hold if we remove the simple connectivity hypothesis, as the following example shows: \begin{example} Consider the Poincar\'{e} homology 3-sphere $M$. Observe that $\mathrm{hccat}(M)=\mathrm{hccat}(\mathbb{S}^3)=2$. However, $\mathrm{cat}(M)\geq 3$ \cite{Fox}.
\end{example} \subsection{Homological Lusternik-Schnirelmann Theorem} In this subsection we state and prove a Lusternik-Schnirelmann Theorem for the homological chain category and general matchings on posets. \begin{theorem}\label{thm:LS_theorem_premium_version} Let $X$ be a homologically admissible poset and let $\mathcal{M}$ be a Morse-Smale matching on $X$. Then $$\mathrm{hccat}(X)\leq \sum_{{\mathrm{basic \: sets} \: } \Lambda_i} \mathrm{hccat}(\Lambda_i).$$ In particular, given a Morse matching $\mathcal{M}$ on $X$, $\mathrm{hccat}(X)$ is a lower bound for the number of critical elements of $\mathcal{M}$. \end{theorem} \begin{proof} We will define a Morse matching $\mathcal{M}^{*}$ by perturbing $\mathcal{M}$. The idea is to replace each prime closed orbit by two critical points. This will be achieved by removing exactly one of the edges of the matching in each closed orbit. By repeating the technique used in the proof of \cite[Theorem 7.1]{Forman3}, we obtain a Morse matching $\mathcal{M}^{*}$ satisfying $m_p^{*}=c_p+A_p+A_{p-1}$, where $m_p^{*}$ denotes the number of critical points of index $p$ of the matching $\mathcal{M}^{*}$ (see Subsection \ref{subsec:Morse_Smale} for the definition of $A_p$). Recall that $C_*(X)$ denotes the cellular chain complex of $X$. We define a map $V\colon C_p(X)\to C_{p+1}(X)$ as follows: $$V(x)= \begin{cases} -\epsilon(y,x)y, & \text{if there exists } y\in X \text{ with } (x,y)\in \mathcal{M}^{*},\\ 0, & \text{otherwise.} \end{cases}$$ Following the ideas of Minian \cite{Minian}, define the discrete flow operator $\phi \colon C_p(X)\to C_{p}(X)$ as $\phi=\Id+dV+Vd$. The $\phi$-invariant chains $$C_{p}^{\phi}(X)=\{c\in C_p(X)\colon \phi(c)=c\}$$ form a well-defined subcomplex of $(C_{*}(X),d)$ \cite{Minian}.
Moreover, the inclusion of $(C_{*}^{\phi}(X),d)$ into $(C_{*}(X),d)$ induces isomorphisms in homology, and $C_{p}^{\phi}(X)$ is isomorphic to the free abelian group spanned by the critical $p$-elements of $X$ \cite{Minian}. As a consequence: \begin{equation}\label{eq:HCcat_sum_preliminar} \mathrm{hccat}(C_*(X))\leq \sum_p m_p^{*}=\sum_p (c_p+A_p+A_{p-1}). \end{equation} There are two kinds of basic sets for $\mathcal{M}$: critical points and disjoint closed $\mathcal{M}$-orbits. Observe that if $\Lambda_i$ is a critical point, then $\mathrm{hccat}(\Lambda_i)=1$, while if $\Lambda_i$ is a closed orbit, then $\mathrm{hccat}(\Lambda_i)= 2$. So, from Equation (\ref{eq:HCcat_sum_preliminar}), it follows that: $$\mathrm{hccat}(C_*(X))\leq \sum_{ \text{basic sets } \Lambda_i} \mathrm{hccat}(\Lambda_i).$$ Finally, observe that $\mathrm{hccat}(X)=\mathrm{hccat}(C_*(X))$ due to the isomorphism between cellular homology and singular homology for cellular posets (Theorem \ref{thm:cellular_homology_isomorphis_homology_cellular_posets}). \end{proof} \begin{remark} In the proof of Theorem \ref{thm:LS_theorem_premium_version}, Equation (\ref{eq:HCcat_sum_preliminar}) could also be derived by combining our Pitcher strengthening of the Morse inequalities \cite[Corollary 5.2.3]{SamQuiDaSVil}, applied to the matching $\mathcal{M}^{*}$, with Corollary \ref{coro:exact_value_hccat}. \end{remark} As a consequence of \cite[Theorem 3.3.6]{SamQuiDaSVil}, we obtain the following corollary: \begin{cor}\label{coro:LS_theorem_free_version} Let $X$ be a homologically admissible poset and let $f\colon X\to \mathbb{R}$ be a Morse function. Then $\mathrm{hccat}(X)$ is a lower bound for the number of critical points of $f$. \end{cor} \begin{remark} Let $K$ be a simplicial complex or, more generally, a regular CW-complex. Recall that its face poset $\Delta(K)$ is a homologically admissible poset.
Moreover, the chain complex $(C_\bullet(\Delta(K)),d)$, where $d$ is the cellular boundary operator, coincides with the chain complex $(C_\bullet(K),\partial)$, where $\partial$ is the cellular (or simplicial, in case $K$ is a simplicial complex) boundary operator. Therefore $\mathrm{hccat}(\Delta(K))=\mathrm{hccat}(K)$. Hence, we have in particular a simplicial homological Lusternik-Schnirelmann Theorem. \end{remark} \bibliographystyle{abbrvnat}
\section{Introduction} \label{sec:intro} Disk-like structures are ubiquitous in the universe, from the galactic disks in spiral galaxies to the accretion disks in active galactic nuclei (AGN) and X-ray binaries, to protoplanetary disks, debris disks, and planetary rings in the formation and evolution process of planetary systems \citep{sellwood_1989,latter2018planetary}. In our solar system, all outer planets have planetary rings and the disk-shaped Kuiper belt has been observed. In addition, there is a theoretical Oort cloud, considered as a comet reservoir in the outer solar system, including a disk-shaped inner Oort cloud and a spherical outer Oort cloud \citep{oort1950structure,hills1981comet,levison2001origin}, which has not yet been confirmed by observation. Recently, \citet{sefilian2019shepherding} proposed that a debris disk of icy material exists outside the orbit of Neptune, with a combined mass around 10 times that of Earth, whose self-gravity could be responsible for the strange orbital architecture of Trans-Neptunian Objects (TNOs) in the outer solar system \citep{trujillo2014a,batygin2016evidence}. The gravitational effect of an astronomical disk plays an important role in shaping the dynamical architecture of various systems. \citet{nagasawa2003eccentricity} showed that the self-gravity of a dissipative protoplanetary disk could have a significant impact on the planetary eccentricities in extrasolar multiple-planet systems. In binary systems, many exoplanets have higher eccentricities than the planets in the solar system, some of which could be induced via the Lidov-Kozai effect (also called resonance or mechanism) \citep{lidov1962evolution,kozai1962secular,holman1997chaotic,innanen1997kozai,wu2003planet,takeda2005high}.
However, when the inclination angle between the protoplanetary disk plane, in which planetesimals are embedded, and the binary plane is too large, the growth of kilometer-sized planetesimals could be inhibited by the Lidov-Kozai effect \citep{marzari2009planet}. This is a challenge for current planetary formation theories. In order to understand planet formation in stellar binaries, many authors have investigated the influence of the protoplanetary disk gravity on planetesimal dynamics. Some found that the fast apsidal precession of planetesimal orbits induced by the gravitational effect of an axisymmetric protoplanetary disk can effectively suppress the Lidov-Kozai effect or the excitation of planetesimal eccentricity, which is conducive to planetesimal growth and hence to the formation of planetary embryos \citep{batygin2011formation,rafikov2013building,rafikov2013planet}. However, it was shown that the gravity of an eccentric disk will instead excite planetesimal eccentricities to high values, leading to high impact velocities, and therefore prevent growth \citep{marzari2012eccentricity,marzari2013influence,silsbee2014planet,rafikov2014planet,lines2016modelling}. \citet{zhao2012planetesimal} studied the Lidov-Kozai effect on planetesimal dynamics with perturbations from both the companion star and the circumprimary disk in an inclined binary system. They showed that the Lidov-Kozai effect is similarly suppressed if the gravitational effect of the disk is included, but that it can work at arbitrarily low inclinations in the Kozai-on region, where planetesimal eccentricities can be excited to extremely high values ($\sim1$). Hence a planetesimal with very high orbital eccentricity may become a ``hot planetesimal" as its orbit shrinks due to the gas drag damping of the gaseous disk. 
\citet{terquem2010eccentricity} found that the secular perturbation of an annular disk will lead to the Lidov-Kozai effect for a planet if the planetary orbit is well inside the disk inner cavity. Furthermore, if the planetary orbit crosses the disk but most of the disk mass is beyond the orbit, they found that the oscillations of both the planetary eccentricity and inclination are absent when the initial inclination of the orbit is below a critical value, which is significantly smaller than 39.2$\degr$. The authors pointed out that the critical value could be 30$\degr$ in some cases (case A in that paper). Analogous discussions for a three-dimensional disk and a warped disk can be found in \citet{teyssandier2012orbital} and \citet{terquem2013effects}, respectively. In galactic dynamics, a series of papers \citep{vokrouhlicky1998stellar,vsubr2004star,vsubr2005highly,karas2007enhanced} investigated the secular evolution of stellar orbits in a galactic center surrounded by a massive accretion disk. They indicated that the stellar orbits near the black hole will undergo the Lidov-Kozai resonance under the perturbation of the accretion disk. This is helpful for the formation of highly eccentric stellar orbits in the vicinity of the black hole and for increasing the star-capture rate of the black hole. Haas and \v{S}ubr also studied the Lidov-Kozai resonance in an eccentric stellar disk around a supermassive black hole \citep{haas2016rich,vsubr2016properties}. In this paper, we aim to understand the orbital dynamics under the secular perturbation of a disk through both analytical and numerical methods. We analytically demonstrate for the first time that the gravitational perturbation from a uniform disk will induce the Lidov-Kozai effect (or resonance). In Section \ref{sec:model}, we describe the dynamical model of a massless test particle under the disk perturbation, focusing on the multipole expansion of the disturbing function and its averaging. 
In Section \ref{sec:qualitative}, we provide an analytic study of the secular problem. Numerical studies follow in Section \ref{sec:numerical}. The results are summarized and discussed in Section \ref{sec:discussion}. \section{THE DYNAMICAL MODEL} \label{sec:model} We consider a massless test particle orbiting a central body of mass $M_{\star}$ which is surrounded by a disk of mass $m_d$. We assume $m_d\ll M_{\star}$; hence the motion of the particle is dominated by the central body, and the particle's orbit is a Keplerian ellipse slightly perturbed by the gravitational potential of the disk. The Hamiltonian for this system is written as follows: \begin{equation} F=\frac{\mu}{2a}-V \label{con:eq1} \end{equation} where $\mu=\mathcal{G}M_{\star}$ ($\mathcal{G}$ is the gravitational constant), $a$ is the semimajor axis of the particle's orbit, and $V$ is the gravitational potential of the disk. Note that the Hamiltonian has the opposite sign relative to the standard form. For simplicity, the disk is taken to be a two-dimensional uniform disk. The gravitational potential exerted by the uniform disk on the particle is given by \citep{alberti2007dynamics} \begin{equation} V=-2\mathcal{G}\sigma\int_{0}^{R}\int_{0}^{\pi}\frac{\rho d\rho d\theta}{\sqrt{r^2+\rho^{2}-2\rho r\cos \theta \cos \varphi}} \label{con:eq2} \end{equation} where $R$ and $\sigma$ are the radius and the constant surface density of the disk, respectively, $r$ is the distance between the particle and the disk's center (or the central body), and $\varphi$ is the angle between the position vector $\boldsymbol{r}$ of the particle and the disk plane. Since we are interested in the secular behaviour of the particle's orbit under the perturbation from the disk, we average the perturbing potential $V$ (or the Hamiltonian) over the mean anomaly $M$ of the particle's orbit, which eliminates the short-period terms in the perturbing potential. 
This process is known as the secular approximation. Unfortunately, it is almost impossible to average Equation (\ref{con:eq2}) directly and obtain an analytical expression, even for a constant surface density $\sigma$. However, when $r/R<1$ (or $r/R>1$), Equation (\ref{con:eq2}) can be expanded in $r/R$ (or $R/r$) by means of Legendre polynomials $P_n$. This results in \noindent$r/R<1:$ \begin{equation} V=-\frac{2\mathcal{G}m_d}{R}\left\{1-\frac{r}{R}\sin\varphi+\frac{1}{4}\left(\frac{r}{R}\right)^{2}(3\sin^2\varphi-1)+O\bigg(\left(\frac{r}{R}\right)^3\bigg)\right\} \label{con:eq3} \end{equation} where $m_d=\sigma\pi R^2$, and \noindent$r/R>1:$ \begin{equation} V=-\frac{\mathcal{G}m_d}{R}\left\{\frac{R}{r}-\frac{1}{4}\left(\frac{R}{r}\right)^{3}\left(1-\frac{3}{2}\cos^2\varphi\right)+O\bigg(\left(\frac{R}{r}\right)^5\bigg)\right\} \label{con:eq4} \end{equation} The detailed derivation of Equations (\ref{con:eq3}),(\ref{con:eq4}) is given in Appendix \ref{sec:A1}. We take the disk plane to be the equatorial plane of the central body, and the inclination of the particle's orbit is measured with respect to this plane. Thus, we have \begin{equation} \sin\varphi=\sin i\cdot|\sin(f+\omega)| \end{equation} where $i,\omega,f$ are the inclination, argument of perihelion, and true anomaly of the particle's orbit, respectively (moreover, we use the common variables $e,\Omega$ to denote the eccentricity and longitude of ascending node of the orbit in this paper). In fact, the key step in averaging Equation (\ref{con:eq3}) is to obtain the average value of $r |\sin(f+\omega)|$; this average and the details of its computation are presented in Appendix \ref{sec:A2}. 
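As a quick consistency check (a sketch, not part of the paper's code), the interior expansion can be compared against a direct quadrature of the double integral for $V$, using the dimensionless parameters adopted later in the paper ($\mathcal{G}=1$, $m_d=0.01$, $R=100$):

```python
import math

# Minimal sketch (not from the paper): evaluate the disk potential V by a
# midpoint-rule quadrature of its double-integral form, and compare with the
# interior Legendre expansion truncated at second order in r/R.
MD, RD = 0.01, 100.0                     # disk mass m_d and radius R (G = 1)
SIGMA = MD / (math.pi * RD ** 2)         # uniform surface density sigma

def v_quadrature(r, phi, n=400):
    """Midpoint-rule evaluation of the (rho, theta) integral for V."""
    drho, dth = RD / n, math.pi / n
    cc = math.cos(phi)
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * drho
        for j in range(n):
            th = (j + 0.5) * dth
            total += rho / math.sqrt(r * r + rho * rho
                                     - 2.0 * rho * r * math.cos(th) * cc)
    return -2.0 * SIGMA * total * drho * dth

def v_series(r, phi):
    """Interior expansion of V, truncated at O((r/R)^3)."""
    x, s = r / RD, math.sin(phi)
    return -2.0 * MD / RD * (1.0 - x * s + 0.25 * x * x * (3.0 * s * s - 1.0))
```

For a point with $r/R=0.1$ the two evaluations agree to roughly the truncation error $O((r/R)^3)\sim10^{-3}$.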
Finally, averaging the potential in Equations (\ref{con:eq3}),(\ref{con:eq4}) over the mean anomaly $M$, we get \noindent$r/R<1:$ \begin{equation} \begin{aligned} \overline{V}=\frac{\mathcal{G}m_d}{R}&\left\{\left(\frac{a}{R}\right)\frac{4\sin i}{\pi}\left[\left(1+\frac{1}{2}e^2\right)-e^2\cos2\omega\right] \right.\\ &+\left.\left(\frac{a}{R}\right)^2\left[\frac{1}{2}\left(1+\frac{3}{2}e^2\right)\left(1-\frac{3}{2}\sin^2i\right)+\frac{15}{8}e^2\sin^2i\cos2\omega\right]\right\} \end{aligned} \label{con:eq8} \end{equation} We drop the constant term independent of the orbital elements in the above expansion. The above expansion is at the quadrupole level of approximation. The first (second) term in the expansion is called the dipole (quadrupole), which is proportional to $a/R$ ($(a/R)^2$). Note that Equation (\ref{con:eq8}) only applies to the inner orbits whose apocenter distances are smaller than the disk radius $R$ (geometrically, the inner orbit is located inside the sphere of radius $R$). Likewise \noindent$r/R>1:$ \begin{equation} \overline{V}=-\frac{\mathcal{G}m_d}{R}\left\{\frac{R}{a}+\frac{1}{8}\left(\frac{R}{a}\right)^3\left(1-\frac{3}{2}\sin^2i\right)(1-e^2)^{-3/2}\right\} \label{con:eq9} \end{equation} Equation (\ref{con:eq9}) only applies to the outer orbits whose pericenter distances are greater than the disk radius $R$. The outer orbit lies outside the sphere of radius $R$. A closed form of the potential of a uniform disk was derived by \citet{lass1983gravitational}, involving complete elliptic integrals of the three kinds \citep{byrd1971handbook}. The closed form is numerically equivalent to the integral form in Equation (\ref{con:eq2}), and Equations (\ref{con:eq3}),(\ref{con:eq4}) can also be obtained by expanding the closed form in the appropriate limits, but the derivations are very complicated and tedious (particularly for the case of $r/R>1$). 
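The interior average can itself be cross-checked numerically (a sketch, not the paper's Appendix derivation): sample the mean anomaly uniformly, evaluate the truncated interior expansion along the osculating ellipse, and compare the numerical average (with the dropped constant $-2\mathcal{G}m_d/R$ restored) against the closed secular expression above, again with $\mathcal{G}=1$, $m_d=0.01$, $R=100$:

```python
import math

MD, RD = 0.01, 100.0   # disk mass and radius in the paper's dimensionless units

def kepler_E(M, e, tol=1e-13):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def vbar_numeric(a, e, inc, w, n=20000):
    """Average the truncated interior expansion of V over the mean anomaly."""
    total = 0.0
    for j in range(n):
        M = 2.0 * math.pi * (j + 0.5) / n
        E = kepler_E(M, e)
        r = a * (1.0 - e * math.cos(E))
        f = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(0.5 * E),
                             math.sqrt(1.0 - e) * math.cos(0.5 * E))
        sphi = math.sin(inc) * abs(math.sin(f + w))
        x = r / RD
        total += -2.0 * MD / RD * (1.0 - x * sphi
                                   + 0.25 * x * x * (3.0 * sphi * sphi - 1.0))
    return total / n

def vbar_closed(a, e, inc, w):
    """Closed secular potential for an interior orbit, constant term restored."""
    x = a / RD
    si2 = math.sin(inc) ** 2
    dip = x * (4.0 * math.sin(inc) / math.pi) * ((1.0 + 0.5 * e * e)
                                                 - e * e * math.cos(2.0 * w))
    quad = x * x * (0.5 * (1.0 + 1.5 * e * e) * (1.0 - 1.5 * si2)
                    + (15.0 / 8.0) * e * e * si2 * math.cos(2.0 * w))
    return -2.0 * MD / RD + (MD / RD) * (dip + quad)
```

The two routes agree to well within the truncation level of the expansion for an interior orbit such as $a=10$, $e=0.1$.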
On the other hand, complete elliptic integrals, and hence the potential in closed form, can be computed precisely and quickly in comparison with the integral form, so we adopt the closed form in our full model, where the potential of the uniform disk is neither approximated nor averaged (see details in Section \ref{sec:numerical}). \section{QUALITATIVE ANALYSIS OF SECULAR DYNAMICAL BEHAVIOR} \label{sec:qualitative} In this section we start the qualitative study of the dynamics of a particle's orbit under the secular perturbation from the uniform disk. We consider two secular problems: the first concerns the inner orbit and the second the outer orbit. \subsection{Dynamics for the inner orbit} \label{subsec:case1} It is convenient to understand the dynamical behavior of the orbit using the canonical Delaunay variables \citep{brouwer1961methods}: \begin{equation} \left\{\begin{aligned} &L=\sqrt{\mu a} &l=M\\ &G=\sqrt{\mu a(1-e^2)} &g=\omega\\ &H=\sqrt{\mu a(1-e^2)}\cos i &h=\Omega \end{aligned} \right. \label{con:eq10} \end{equation} In the inner orbit problem, the averaged Hamiltonian for the system considered here is \begin{equation} \overline{F}=\frac{\mu ^2}{2L^2}-\overline{V} \label{con:eq11} \end{equation} with \begin{equation} \begin{aligned} &\overline{V}=\overline{V}_{di}+\epsilon \overline{V}_{quad}\\ &\overline{V}_{di}=k\left(\frac{a}{R}\right)\left[\left(1+\frac{1}{2}e^2\right)-e^2\cos2\omega\right]\sin i\\ &\overline{V}_{quad}=k\left(\frac{a}{R}\right)^2\frac{\pi}{8}\left[\left(1+\frac{3}{2}e^2\right)\left(1-\frac{3}{2}\sin^2i\right)+\frac{15}{4}e^2\sin^2i\cos2\omega\right] \end{aligned} \label{con:eq12} \end{equation} where $k=4\mathcal{G}m_d /(\pi R)$. $\overline{V}_{di}$ and $\overline{V}_{quad}$ are the dipole term and the quadrupole term of the potential $\overline{V}$ in Equation (\ref{con:eq8}), respectively. 
$\epsilon$ is a dimensionless control parameter which takes the value $0$ ($1$) in the dipole (quadrupole) approximation of the potential/Hamiltonian. The averaged Hamiltonian depends on neither the mean anomaly $M$ nor the longitude of ascending node $\Omega$, thus \begin{equation} \dot{L}=\frac{\partial \overline{F}}{\partial l}=0,\:\:\:\dot{H}=\frac{\partial \overline{F}}{\partial h}=0 \label{con:eq13} \end{equation} $L$ and $H$ are constants of motion, which implies that the semimajor axis $a$ and the $z$ component of the orbital angular momentum are conserved in the secular problem. Since $H/L$ also remains constant, it follows that \begin{equation} \sqrt{1-e^2}\cos i=J_z \label{con:eq14} \end{equation} where $J_z$ is a constant (the Kozai integral). This indicates that the oscillations of $e$ and $i$ are coupled and in antiphase. Since $L$, $H$ and the Hamiltonian itself are all constant, the system reduces to one degree of freedom, associated with the pair ($G,g$). Thus the system is analytically integrable in principle. The equations of motion for the canonical variables $G,g$ are given by \begin{equation} \dot{G}=\frac{\partial \overline{F}}{\partial g}=-k\left(\frac{a}{R}\right)\left(2-\epsilon\frac{15\pi}{16}\frac{a}{R}\sin i\right)e^2\sin i\cdot\sin2\omega \label{con:eq15} \end{equation} \begin{equation} \begin{aligned} \dot{g}=-\frac{\partial \overline{F}}{\partial G}=-\frac{k}{G}&\left\{\left(\frac{a}{R}\right)\left[(1-e^2)(1-2\cos2\omega)\sin i-\frac{\cos^2i}{\sin i}\left(1+\frac{1}{2}e^2-e^2\cos2\omega\right)\right]\right.\\ &-\left.\epsilon\left(\frac{a}{R}\right)^2\frac{3\pi}{16}\bigg[1-e^2-5\cos^2i+5\left(e^2-\sin^2i\right)\cos2\omega\bigg]\right\} \label{con:eq013} \end{aligned} \end{equation} When $a$ is much smaller than $R$ (i.e., $a/R\ll1$), the quadrupole term is negligible and hence the dipole approximation describes the dynamical behaviour of the system well. 
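The canonical system above can be integrated without transcribing the lengthy right-hand sides by hand: a sketch (not the paper's code) is to differentiate the averaged Hamiltonian numerically in the pair $(G,g)$, with $L$ and $H$ held fixed. The parameters below follow the paper's dimensionless choice ($\mathcal{G}=1$, $M_{\star}=1$, $m_d=0.01$, $R=100$, $a=10$), restricted to the dipole level:

```python
import math

MU, MD, RD, A = 1.0, 0.01, 100.0, 10.0
K = 4.0 * MD / (math.pi * RD)            # k = 4*G*m_d/(pi*R), with G = 1
L = math.sqrt(MU * A)                    # Delaunay L (constant)

def vbar(G, g, H):
    """Dipole part of the averaged disk potential in Delaunay variables."""
    e2 = max(0.0, 1.0 - (G / L) ** 2)
    cosi = H / G
    sini = math.sqrt(max(0.0, 1.0 - cosi * cosi))
    return K * (A / RD) * ((1.0 + 0.5 * e2) - e2 * math.cos(2.0 * g)) * sini

def rhs(G, g, H, d=1e-6):
    # Fbar = mu^2/(2L^2) - Vbar, so dG/dt = -dVbar/dg and dg/dt = +dVbar/dG
    dVdg = (vbar(G, g + d, H) - vbar(G, g - d, H)) / (2.0 * d)
    dVdG = (vbar(G + d, g, H) - vbar(G - d, g, H)) / (2.0 * d)
    return -dVdg, dVdG

def integrate(e0, i0, g0, t_end, h):
    """RK4 integration; returns the max eccentricity and the energy drift."""
    G = L * math.sqrt(1.0 - e0 * e0)
    H = G * math.cos(i0)                 # J_z is conserved by construction
    g = g0
    v0 = vbar(G, g, H)
    emax, drift = e0, 0.0
    for _ in range(int(t_end / h)):
        k1 = rhs(G, g, H)
        k2 = rhs(G + 0.5 * h * k1[0], g + 0.5 * h * k1[1], H)
        k3 = rhs(G + 0.5 * h * k2[0], g + 0.5 * h * k2[1], H)
        k4 = rhs(G + h * k3[0], g + h * k3[1], H)
        G += h * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0
        g += h * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0
        emax = max(emax, math.sqrt(max(0.0, 1.0 - (G / L) ** 2)))
        drift = max(drift, abs(vbar(G, g, H) - v0) / abs(v0))
    return emax, drift
```

For $e_0=0.01$ and $i_0=60^\circ$ this sketch reproduces the strong dipole-level eccentricity excitation while conserving the secular Hamiltonian to high accuracy.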
Next, we first consider the above equations of motion in the dipole approximation. \subsubsection{Dipole approximation $(\epsilon=0)$} \label{subsubsec:case11} The equilibrium points of the system satisfy the following equations \begin{equation} \left\{\begin{aligned} &\dot{G}=\frac{\partial \overline{F}}{\partial g}=0\\ &\dot{g}=-\frac{\partial \overline{F}}{\partial G}=0 \end{aligned} \label{con:eq17} \right. \end{equation} In the dipole approximation ($\epsilon=0$), solving $\dot{G}=0$ gives $\omega=0 ,\pi/2,\pi,3\pi/2$. Substitution of these values of $\omega$ into Equation (\ref{con:eq013}) yields \begin{equation} \dot{g}>0,\:\:\:\text{at}\:\:\omega=0,\pi\:\:(\cos2\omega=1) \end{equation} Thus Equations (\ref{con:eq17}) have no solution at $\omega=0,\:\pi$. At $\omega=\pi/2,\:3\pi/2\:\: (\cos2\omega=-1)$, Equation (\ref{con:eq013}) becomes \begin{equation} \dot{g}=-\frac{k}{G\sin i}\left(\frac{a}{R}\right)\left\{3\frac{G^2}{L^2}-\frac{3}{2}\frac{H^2}{L^2}-\frac{5}{2}\frac{H^2}{G^2}\right\} \end{equation} The above equation is expressed in terms of the Delaunay variables; as mentioned above, $L$ and $H$ are constant. Solving the equation $\dot{g}=0$, one obtains \begin{equation} G^2=\frac{3H^2+\sqrt{9H^4+120L^2H^2}}{12} \end{equation} As $G^2\leq L^2$, it follows that \begin{equation} \left|\frac{H}{L}\right|\leq\frac{\sqrt{3}}{2} \end{equation} namely \begin{equation} \left|\sqrt{1-e^2}\cos i\right|\leq\frac{\sqrt{3}}{2} \:\:\: \text{or} \:\:\: |J_z|\leq\frac{\sqrt{3}}{2} \label{con:eq25} \end{equation} Therefore, when the above inequality is satisfied, $\dot{g}=0$ as well as Equations (\ref{con:eq17}) have solutions, and the system has two equilibrium points, at $\omega=\pi/2$ and $3\pi/2$. This implies that the Lidov-Kozai resonance (or effect) will occur for the inner orbits with $|J_z|\leq\sqrt{3}/2$. When $|J_z|>\sqrt{3}/2$, Equations (\ref{con:eq17}) have no solution and hence there is no equilibrium point for the system. 
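The fixed-point condition just derived is easy to probe numerically; a minimal sketch (not from the paper) evaluates the closed-form $G^2$ at the dipole-level equilibrium:

```python
import math

# Sketch (not from the paper): G^2 at the dipole fixed point (omega = pi/2,
# 3pi/2), obtained from dg/dt = 0; a physical equilibrium requires G^2 <= L^2,
# which is equivalent to |H/L| <= sqrt(3)/2.
def g2_fixed_point(L, H):
    """G^2 at the dipole-level fixed point; physical only if G^2 <= L^2."""
    return (3.0 * H ** 2 + math.sqrt(9.0 * H ** 4 + 120.0 * L ** 2 * H ** 2)) / 12.0
```

At the boundary $|H/L|=\sqrt{3}/2$ the expression returns $G^2=L^2$ exactly (the equilibrium appears at $e=0$); for smaller $|H/L|$ it gives an eccentric equilibrium, and for larger values it returns $G^2>L^2$, i.e.\ no physical equilibrium exists.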
The inner orbits with $|J_z|>\sqrt{3}/2$ do not undergo any secular resonances. Apparently, the critical value $J_c$ for the occurrence of the Lidov-Kozai resonance is $\sqrt{3}/2$ (for the prograde orbits), i.e. $J_c=\sqrt{3}/2$. Figure \ref{fig:fig2} shows the trajectories in the ($e,\omega$) phase space for the inner orbits with different values of $J_z$. In our numerical calculations, we take the following set of dimensionless parameters: $\mathcal{G}=1,M_{\star}=1, m_d=0.01,R=100$ (hereinafter the same). In Figure \ref{fig:fig2}, all orbits have the semimajor axis $a=10$. Figures \ref{fig:fig2}(a)-(c) correspond to $J_z=0.2,0.5,0.8$, respectively. In Figures \ref{fig:fig2}(a)-(c), the Lidov-Kozai resonance occurs, and there is a stable equilibrium point at $\omega=\pi/2$ surrounded by a libration island (closed trajectories). Variations of $e$ are usually very large when the Lidov-Kozai resonance is active, and a small eccentricity can even be excited to near 1 for $J_z=0.2$ (see Figure \ref{fig:fig2}(a)). Moreover, variations of $i$ are also expected to be very large in the Lidov-Kozai resonance, because $\sqrt{1-e^2}\cos i$ remains constant and $e,i$ oscillate in antiphase. In Figure \ref{fig:fig2}(d), $J_z=0.9\:(>\sqrt{3}/2)$, the Lidov-Kozai resonance does not occur, there is no equilibrium point, and variations of $e$ are small. \begin{figure*}[h] \centering \gridline{\fig{ew1.pdf}{0.27\textwidth}{(a)} \hspace{-5cm} \fig{ew2.pdf}{0.27\textwidth}{(b)} } \gridline{\fig{ew3.pdf}{0.27\textwidth}{(c)} \hspace{-5cm} \fig{ew4.pdf}{0.27\textwidth}{(d)} } \caption{$(e,\omega)$ phase space trajectories in the dipole approximation. In the calculations, we set: $\mathcal{G}=1,M_{\star}=1, m_d=0.01,R=100$. And we take $a=10$ for all orbits. Where: (a)$J_z=0.2$; (b) $J_z=0.5$; (c) $J_z=0.8$; (d) $J_z=0.9$. The phase space trajectories for $\omega \in(\pi,2\pi)$ are identical to those for $(0,\pi)$, and they are not presented here. 
The blue and magenta lines represent the circulating trajectories in $(0,2\pi)$ and the librating trajectories around $\omega=\pi/2$, respectively.} \label{fig:fig2} \end{figure*} The condition for Inequality (\ref{con:eq25}) to hold for any $e$ is: \begin{equation} |\cos i|\leq\frac{\sqrt{3}}{2}\:\:\:\text{or}\:\:\:30\degr\leq i\leq 150\degr \label{con:eq26} \end{equation} In other words, as long as the inclination of the inner orbit is larger than 30$\degr$, the orbit will undergo the Lidov-Kozai effect, in which both $e$ and $i$ oscillate dramatically. When the inclination is below 30$\degr$, the Lidov-Kozai effect does not operate and the oscillations of both $e$ and $i$ are very small. Figure \ref{fig:fig3} shows the coupled oscillations of $e$ and $i$ for orbits with the initial inclination angles $i_0=28^\circ,32^\circ,60^\circ$. For $i_0=28^\circ$ (left panel), $e$ oscillates between 0.01 and 0.03, and the oscillation of $i$ does not exceed $0.1^\circ$. The amplitudes of $e,i$ are small. However, when $i_0$ is slightly increased to $32^\circ$, $e$ is excited from 0.01 to 0.22 due to the Lidov-Kozai effect (middle panel). For $i_0=60^\circ$ (right panel), the Lidov-Kozai effect becomes very significant: $e$ dramatically oscillates between 0.01 and 0.83, whereas $i$ oscillates between $60^\circ$ and approximately $25^\circ$. The oscillations of both $e$ and $i$ are very large. \begin{figure}[ht] \centering \includegraphics[scale=0.2]{i28.pdf} \includegraphics[scale=0.2]{i32.pdf} \includegraphics[scale=0.2]{i60.pdf} \caption{Oscillations of $e$,$i$ over time for the orbits with different initial inclinations in the dipole approximation. The orbits in all panels start with the same initial orbital elements: $a_0=10$, $e_0=0.01$, but the left panel with the initial inclination $i_0=28^\circ$, the middle panel: $i_0=32^\circ$, and the right panel: $i_0=60^\circ$. The red solid line and blue dashed line represent the curves of $e$ and $i$, respectively. 
The vertical lines illustrate the theoretical values of $T_{evol}$ and $e_{max}$ given by Equations (\ref{con:eq0029}),(\ref{con:eq030}).} \label{fig:fig3} \end{figure} The Lidov-Kozai effect in the disk problem is similar in nature to that in the restricted three-body problem. The difference is that the critical inclination in the disk problem is 30$\degr$ (in the dipole approximation), whereas in the restricted three-body problem it is 39.2$\degr$ \citep{kozai1962secular,innanen1997kozai,naoz2013secular}. Following the analysis of \cite{innanen1997kozai} for the Lidov-Kozai effect in the restricted three-body problem, one can obtain the maximum value reached by the eccentricity and the evolution time $T_{evol}$ needed to reach that maximum starting from a small initial eccentricity $e_0$ in the disk problem. We briefly present the derivation here. According to Equations (\ref{con:eq14}),(\ref{con:eq15}), we have \begin{equation} \begin{aligned} &\frac{de}{dt}=2\alpha e\sqrt{1-e^2}\sin i\cdot \sin2\omega \\ &\frac{di}{dt}=-2\alpha \frac{e^2}{\sqrt{1-e^2}}\cos i\cdot\sin2\omega \end{aligned} \label{con:eq025} \end{equation} where $\alpha=(k\sqrt{a/\mu})/R$. For an orbit with very small initial eccentricity $e_0$ and large initial inclination $i_0$, the inclination $i$ remains almost constant before $e$ is excited to a large value, because $di/dt$ carries the factor $e^2$. And when the Lidov-Kozai effect operates, $\omega$ quickly moves to a value which makes $\dot{\omega}=0$; thus, according to Equation (\ref{con:eq013}), we have $2\sin^2i_0(1-\cos2\omega)=1$. 
Keeping only the first order in $e$, $de/dt$ becomes \begin{equation} \frac{de}{dt}=2\alpha\:e\sin i_0\cdot \sin2\omega=\alpha e\frac{\sqrt{4\sin^2i_0-1}}{\sin i_0} \label{con:eq028} \end{equation} Solving Equation (\ref{con:eq028}), one gets the time $T_{evol}$ needed to reach $e_{max}$ starting from $e_0$: \begin{equation} T_{evol}=\tau\ln{\left(\frac{e_{max}}{e_0}\right)}\frac{\sin i_0}{\sqrt{4\sin^2i_0-1}} \label{con:eq0029} \end{equation} with the time scale $\tau$ \begin{equation} \tau=\frac{R^2}{8a^2}\frac{M_{\star}}{m_d}T \end{equation} where $T$ is the orbital period. Note that we must have $4\sin^2i_0>1$, namely $i_0>30^\circ$, for $e$ to grow, which is consistent with the previous analysis (Equation (\ref{con:eq26})). If the initial inclination is smaller than $30^\circ$, the actual growth of the eccentricity is very small. For a very small initial eccentricity $e_0$ and a large initial inclination $i_0$ ($>30^\circ$), since $\sqrt{1-e^2}\cos i$ remains constant, the eccentricity grows from $e_0$ to $e_{max}$ as the inclination drops from $i_0$ to $i_{min}$, that is \begin{equation} \sqrt{1-e_{max}^2}\cos i_{min}=\sqrt{1-e_{0}^2}\cos i_{0} \end{equation} According to Equation (\ref{con:eq028}), the minimum value $i_{min}$ the inclination can drop to is $30^\circ$. Thus, ignoring the small quantity $e_0^2$, one obtains \begin{equation} e_{max}=\sqrt{1-\frac{4}{3}\cos^2i_0} \label{con:eq030} \end{equation} Two examples illustrating the values of $T_{evol}$ and $e_{max}$ predicted by Equations (\ref{con:eq0029}),(\ref{con:eq030}) are shown in Figure \ref{fig:fig3} (middle and right panels). Generally, when $a$ is small, Equation (\ref{con:eq030}) provides rather good values for $e_{max}$, but the expected time $T_{evol}$ given by Equation (\ref{con:eq0029}) is slightly less than the actual time required to reach the maximum eccentricity. 
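The two closed-form estimates above are straightforward to evaluate; a minimal sketch (not the paper's code):

```python
import math

# Sketch (not from the paper): dipole-level estimates of the maximum
# eccentricity and of the growth time from a small initial eccentricity e0.
def e_max(i0):
    """Maximum eccentricity reached from e0 ~ 0 (requires i0 > 30 deg)."""
    val = 1.0 - (4.0 / 3.0) * math.cos(i0) ** 2
    return math.sqrt(val) if val > 0.0 else 0.0

def t_evol(e0, i0, a, m_star, m_d, R, mu=1.0):
    """Growth time from e0 to e_max; T is the orbital period, tau the scale."""
    T = 2.0 * math.pi * math.sqrt(a ** 3 / mu)
    tau = (R ** 2 / (8.0 * a ** 2)) * (m_star / m_d) * T
    s = math.sin(i0)
    return tau * math.log(e_max(i0) / e0) * s / math.sqrt(4.0 * s * s - 1.0)
```

For the right panel of Figure 3 ($i_0=60^\circ$) this gives $e_{max}=\sqrt{2/3}\approx0.816$, consistent with the observed oscillation of $e$ up to about 0.83, while $e_{max}$ vanishes at the critical inclination $i_0=30^\circ$.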
When $a$ is large, the quadrupole term becomes significant, and hence Equations (\ref{con:eq0029}),(\ref{con:eq030}), derived in the dipole approximation, may seriously misestimate the actual values of the maximum eccentricity and the evolution time. In addition, we have run some cases with different values of $m_d$, and the results show that the value of $e_{max}$ does not depend on $m_d$ and that $T_{evol} \propto 1/m_d$, as expected from Equations (\ref{con:eq030}),(\ref{con:eq0029}). \subsubsection{Quadrupole approximation $(\epsilon=1)$} \label{subsubsec:case12} In the quadrupole approximation, solving $\dot{G}=0$, we still have $\omega=0 ,\pi/2,\pi,3\pi/2$. At $\omega=0,\pi$, Equation (\ref{con:eq013}) becomes \begin{equation} \dot{g}=\frac{k}{G}\left(\frac{a}{R}\right)\left\{\frac{2-e^2-e^2\sin^2i}{2\sin i}-k_{0}(1-e^2)\right\} \end{equation} where $k_{0}=3\pi a/(4R)$. Solving $\dot{g}=0$, we get \begin{equation} \sin i=\frac{-k_{0}(1-e^2)+\sqrt{k_{0}^2(1-e^2)^2+e^2(2-e^2)}}{e^2} \label{con:eq032} \end{equation} Since $0\leq\sin i\leq1$, for the above equation to admit a solution, the following inequality must be satisfied: \begin{equation} k_{0}\ge1\ \ \text{or}\ \ \frac{a}{R}\ge\frac{4}{3\pi} \label{con:eq033} \end{equation} Thus, when $k_0>1$ (or $a/R\gtrsim 0.42$), $\dot{g}=0$ has solutions at $\omega=0,\pi$, and the system has equilibrium points at $\omega=0,\pi$ (for certain values of $J_z$ constrained by Equation (\ref{con:eq032})). However, when $k_0<1$ (or $a/R\lesssim 0.42$), $\dot{g}=0$ has no solution at $\omega=0,\pi$ and hence there are no equilibrium points at $\omega=0,\pi$. 
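The threshold behaviour of the quadrupole-level fixed points can be illustrated with a short sketch (not the paper's code):

```python
import math

# Sketch (not from the paper): sin(i) at the omega = 0, pi fixed points in the
# quadrupole approximation, as a function of k0 = 3*pi*a/(4*R) and e.  A
# physical solution (sin i <= 1) exists only for k0 >= 1, i.e. a/R >= 4/(3*pi).
def sin_i_fixed_point(k0, e):
    p = 1.0 - e * e
    return (-k0 * p + math.sqrt(k0 * k0 * p * p + e * e * (2.0 - e * e))) / (e * e)
```

At the threshold $k_0=1$ the expression reduces to $\sin i=1$ for every $e$, since $(1-e^2)^2+e^2(2-e^2)=1$; above the threshold $\sin i<1$, and below it the formula returns $\sin i>1$, i.e.\ no equilibrium.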
At $\omega=\pi/2,3\pi/2$, Equation (\ref{con:eq013}) becomes \begin{equation} \begin{aligned} \dot{g}=-\frac{k}{G}\left(\frac{a}{R}\right)\left\{3(1-e^2)\sin i-\frac{\cos^2i}{\sin i}\left(1+\frac{3}{2}e^2\right)+\frac{k_0}{2}\big[3(1-e^2)-5\cos^2i\big]\right\}\\ \end{aligned} \label{con:eq036} \end{equation} \begin{figure}[t] \centering \includegraphics[scale=0.28]{quadrupole.pdf} \caption{The critical value $J_c$ and the critical inclination angle $i_c$. The red line is computed in the quadrupole approximation, the purple line in the dipole approximation, and the blue points in the full model (see Section \ref{sec:numerical}).} \label{fig:fig05} \end{figure} \begin{figure}[t] \gridline{\fig{ew451.pdf}{0.42\textwidth}{(a)} \hspace{-3cm} \fig{451b.pdf}{0.3\textwidth}{(b)} } \caption{(a): ($e,\omega$) phase space trajectories for $a=45$, $J_z=0.2$ in the quadrupole approximation. There are two unstable equilibrium points at $\omega=0,\pi$ and two stable equilibrium points at $\omega=\pi/2,3\pi/2$. (b): Periodic oscillations of $e$,$i$ over time for the orbit with $a=45$, $e_0=0.05$, $i_0=78.45^\circ$ (i.e. $J_z=0.2$). } \label{fig:fig04} \end{figure} For a given value of $k_0$ (or $a/R$), we can solve Equation (\ref{con:eq036}) numerically and obtain the values of $J_z$ for which $\dot{g}=0$ has solutions at $\omega=\pi/2,3\pi/2$. Consequently, corresponding to these values of $J_z$, the system has equilibrium points only at $\omega=\pi/2,3\pi/2$ when $a/R<0.42$, and the orbits undergo the classical Lidov-Kozai resonance as shown in Figure \ref{fig:fig2}. Figure \ref{fig:fig05} shows the critical value $J_c$ and the corresponding critical angle $i_c$ for the occurrence of the classical Lidov-Kozai resonance as a function of $a/R$ ranging from $0$ to $0.4$. If $J_z$ is smaller than the critical value $J_c$, the Lidov-Kozai resonance occurs. If $J_z$ is larger than $J_c$, there are no secular resonances. 
Comparison of the red line and the points in Figure \ref{fig:fig05} shows that the quadrupole approximation agrees very well with the full model for $a/R$ below 0.4 (in the full model, the potential of the uniform disk is neither approximated nor averaged). In the dipole approximation, $J_c$ and $i_c$ do not depend on the value of $a/R$. But in the quadrupole approximation, $J_c$ slightly increases from $\sqrt3/2\:(\approx0.866)$ to approximately 0.896 as $a/R$ increases from 0 to 0.4; meanwhile, $i_c$ drops from $30^\circ$ to $26.4^\circ$ (closer to $27^\circ$ in the full model). When $a/R>0.42$, $\dot{g}=0$ still has solutions at $\omega=\pi/2,3\pi/2$ for some values of $J_z$. However, as mentioned above, $\dot{g}=0$ may also have solutions at $\omega=0,\pi$. Consequently, the equilibrium points of the system may appear at $\omega=0,\pi/2,\pi,3\pi/2$. In this case, the phase space structure as well as the dynamical behaviors of the orbits are different from those of the classical Lidov-Kozai case of $a/R<0.42$. Figure \ref{fig:fig04}(a) shows a new ($e,\omega$) phase space structure with equilibrium points at $\omega=0,\pi/2,\pi,3\pi/2$. One observes that small eccentricities cannot be pumped to large values even at very high inclinations, and the corresponding inclination variations are also very small (see Figure \ref{fig:fig04}(b)). Hence, in this case small-eccentricity orbits can be maintained in a highly inclined configuration. This is a significant difference from the classical Lidov-Kozai case, in which small eccentricities with high inclinations are excited to large values and the inclined orbits are unstable. In fact, when $a/R>0.42$, there are many other types of the Lidov-Kozai resonance. One of them has been shown in Figure \ref{fig:fig04}, and more types can be seen in Section \ref{sec:numerical}. 
\subsection{Dynamics for the outer orbit} \label{subsec:case3} In the outer orbit problem, the averaged Hamiltonian is \begin{equation} \overline{F}=\frac{\mu^2}{2L^2}+\frac{\mathcal{G}m_d}{R}\left\{\frac{R}{a}+\frac{1}{8}\left(\frac{R}{a}\right)^3\left(1-\frac{3}{2}\sin^2i\right)(1-e^2)^{-3/2}\right\} \label{con:eq0039} \end{equation} Similarly, $L$, $H$ and $J_z$ remain constant, and \begin{equation} \begin{aligned} &\dot{G}=\frac{\partial \overline{F}}{\partial g}\equiv0\\ &\dot{g}=-\frac{\partial \overline{F}}{\partial G}=\frac{k_1}{L^3G^4}\left(\frac{H^2}{G^2}-\frac{1}{5}\right) \end{aligned} \label{con:eq27} \end{equation} where $k_1=15\mathcal{G}m_d R^2/16$. Apparently, $G$ is constant, and hence $e$ as well as $i$ are also constant. This implies that variations of $e$ and $i$ are small in the full model. $\dot{g}$ remains a non-zero constant (except for $H^2/G^2=1/5$, i.e. $i=63.4^\circ$), which means that $\omega$ precesses linearly from 0 to $2\pi$. Thus, the trajectories of the outer orbits are circulating in the phase space, and the outer orbits do not undergo secular resonances. Figure \ref{fig:fig4} shows the phase space trajectories for $a=300$, $J_z=0.6$ in the full model; only the trajectories of the outer orbits are presented. For these outer orbits, the variations of eccentricity and inclination are small. \begin{figure}[h] \centering \includegraphics[scale=0.25]{fig4.pdf} \hspace{1.cm} \includegraphics[scale=0.25]{fig41.pdf} \caption{$(e,\omega)$ and $(i,\omega)$ phase space trajectories for $a=300$, $J_z=0.6$ in the full model. In the phase spaces only the trajectories of the outer orbits are presented. 
In the right panel, the top/bottom trajectory corresponds to the orbit with $e_0=0.1$/$e_0=0.6$.} \label{fig:fig4} \end{figure} We have run many cases of the outer orbit in the full model, and the results show that the variations of $e$ and $i$ are small for these outer orbits (even if $a/R$ is only slightly greater than 1). Hence the outer orbits are strongly stable. \section{NUMERICAL STUDY} \label{sec:numerical} In this section we perform our numerical study based on the full model. We first introduce the full model and show the validity of the secular approximation, within its limits, by comparison with the full model. Then we focus on the dynamics of orbits with $a/R\sim1$, which are difficult to investigate by analytical methods. \subsection{Full model} \label{subsec:case41} Consider a massless test particle moving in the gravitational field of a central body and a uniform disk; the equation of motion for the particle is given by \begin{equation} \ddot{\boldsymbol{r}}=-\frac{\mu}{r^3}\boldsymbol{r}-\nabla V \label{con:eq34} \end{equation} where $\boldsymbol{r}$ is the position vector of the particle. In the full model, the potential $V$ is neither approximated nor averaged. In order to compute the acceleration conveniently, in the above equation of motion we adopt the closed form of the potential of the uniform disk derived by \citet{lass1983gravitational}, instead of the integral form in Equation (\ref{con:eq2}). Accordingly, $\nabla V$ can also be written in closed form in terms of complete elliptic integrals, which can be computed precisely and easily. The detailed expressions for $\nabla V$ and its computational approaches can be found in \citet{krogh1982gravitational} and \citet{fukushima2010precise}. It should be noted that the acceleration of the particle becomes infinite at the boundary of the uniform disk; hence we terminate the calculations once the particle passes through the boundary. 
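As an illustration (a sketch only; the paper itself uses the closed elliptic-integral form), the disk acceleration $-\nabla V$ can also be obtained by central differencing a quadrature evaluation of the potential; far from the disk it must approach the point-mass value $-\mathcal{G}m_d\boldsymbol{r}/r^3$:

```python
import math

# Sketch (the paper instead uses the closed form of Lass & Blitzer): compute
# -grad(V) for the uniform disk by central differences of a midpoint-rule
# quadrature of V, and compare with the point-mass limit far from the disk.
MD, RD = 0.01, 100.0
SIGMA = MD / (math.pi * RD ** 2)

def v_disk(x, z, n=200):
    """Potential at (x, 0, z); the disk lies in the z = 0 plane, centred at 0."""
    drho, dth = RD / n, math.pi / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * drho
        for j in range(n):
            th = (j + 0.5) * dth
            dx, dy = x - rho * math.cos(th), rho * math.sin(th)
            total += rho / math.sqrt(dx * dx + dy * dy + z * z)
    return -2.0 * SIGMA * total * drho * dth

def accel_disk(x, z, h=1.0):
    """-grad(V) in the (x, z) plane by central differences."""
    ax = -(v_disk(x + h, z) - v_disk(x - h, z)) / (2.0 * h)
    az = -(v_disk(x, z + h) - v_disk(x, z - h)) / (2.0 * h)
    return ax, az
```

At a few disk radii from the centre the residual relative to the point-mass acceleration is at the level of the quadrupole correction, of order $(R/r)^2$.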
Equation (\ref{con:eq34}) is integrated using the Runge-Kutta-Fehlberg 7(8) integrator. In most cases, the integrator conserves the $z$-component of the angular momentum and the energy of the orbit to within a relative error of $10^{-4}$. A comparison between the full model and the dipole approximation on the $(e,\omega)$ phase portraits for $a/R=0.1$ is illustrated in Figure \ref{fig:fig5}. In these phase portraits, the equilibrium points and the trajectories given by the dipole approximation are almost identical to those of the full model. This indicates that the dipole approximation adequately describes the dynamical behaviour of the orbit when $a/R$ is small. \begin{figure*}[h] \gridline{\fig{ewa1.pdf}{0.25\textwidth}{(a)} \hspace{-5cm} \fig{ewa2.pdf}{0.25\textwidth}{(b)} } \gridline{\fig{ewa3.pdf}{0.25\textwidth}{(c)} \hspace{-5cm} \fig{ewa4.pdf}{0.25\textwidth}{(d)} } \caption{Comparison between the full model and the dipole approximation on $(e,\omega)$ phase portraits for $a/R=0.1$. Where: (a) $J_z$=0.2; (b) $J_z$=0.5; (c) $J_z$=0.8; (d) $J_z$=0.9. In all panels, the blue solid lines describe the trajectories computed in the full model, and the red dashed lines describe the trajectories in the dipole approximation, starting from the same initial orbital elements as the trajectories in the full model. } \label{fig:fig5} \end{figure*} For different values of $a/R$, a comparison between the full model and the dipole/quadrupole approximations regarding the behaviour of the eccentricity and inclination is shown in Figure \ref{fig:fig6}. One observes that the quadrupole approximation agrees very well with the full model even for $a/R=0.5$, which suggests that the quadrupole approximation is still valid for large values of $a/R$. The dipole approximation and the full model are in good overall agreement up to $a/R=0.4$, except for the oscillation period in the case $a/R=0.4$. In the case $a/R=0.5$, the dipole approximation is significantly inconsistent with the full model. 
The eccentricity and inclination variations in the dipole approximation are very large, and the initially small eccentricity is still excited to a large value by the dipole-level Lidov-Kozai effect, which does not depend on the value of $a/R$. However, as illustrated in Section \ref{subsubsec:case12}, when $a/R>0.42$ the excitation of the small eccentricity can be ``suppressed'' by the quadrupole effect. As a result, in the full model/quadrupole approximation the eccentricity and inclination variations are small for $a/R=0.5$. \begin{figure}[h] \centering \includegraphics[scale=0.23]{model_a.pdf} \includegraphics[scale=0.23]{model_b.pdf} \includegraphics[scale=0.23]{model_c.pdf} \caption{The behaviours of the eccentricity and inclination for different values of $a/R$. We consider: $\mathcal{G}=1, M_{\star}=1, m_d=0.01,R=100$. The curves in all panels start with the same initial eccentricity and inclination: $e_0=0.05, i_0=60\degr$, with semimajor axis $a=10$ in the left panel, $a=40$ in the middle panel, and $a=50$ in the right panel. The red solid lines represent the curves computed in the full model, the blue dashed lines the dipole approximation, and the black dashed lines the quadrupole approximation.} \label{fig:fig6} \end{figure} \subsection{Dynamics for the orbit of $a/R\sim1$} \label{subsec:case42} \begin{figure*}[htbp] \centering \gridline{\fig{ew52.pdf}{0.27\textwidth}{(a)} \fig{ew84.pdf}{0.27\textwidth}{(b)} \fig{ew98.pdf}{0.27\textwidth}{(c)}} \gridline{\fig{ew1255.pdf}{0.27\textwidth}{(d)} \fig{ew1265.pdf}{0.27\textwidth}{(e)} \fig{ew1274.pdf}{0.27\textwidth}{(f)}} \gridline{\fig{ew128.pdf}{0.27\textwidth}{(g)}} \caption{$(e,\omega)$ phase space portraits for the case of $a/R\sim1$ in the full model. Here: (a) $a=50, J_z=0.2$; (b) $a=80, J_z=0.4$; (c) $a=90, J_z=0.8$; (d) $a=120, J_z=0.55$; (e) $a=120, J_z=0.65$; (f) $a=120, J_z=0.74$; (g) $a=120, J_z=0.8$ (the disk radius $R$ is 100). 
The black lines correspond to librating trajectories, and the blue lines to circulating trajectories. The equilibrium points appear at $\omega=0,\pi/2,\pi$, and at other positions, for example approximately at $\omega=\pi/6, 5\pi/6$ (see panel (b)).} \label{fig:fig7} \end{figure*} We have analytically surveyed the dynamics of the inner and outer orbits under the secular perturbation of a uniform disk. However, in the previous analysis we only considered the limiting cases of $a/R$ taking small (or large) values. For the case of $a/R\sim1$, octupole and higher-order terms become non-negligible and would need to be taken into account. On the other hand, once the orbits cross the sphere surface $r=R$, our Equations (\ref{con:eq8}) and (\ref{con:eq9}) break down. Thus, it is rather difficult to study the dynamical behaviour of orbits with $a/R\sim1$ through analytical methods, so we resort to numerical methods. We have carried out extensive numerical calculations for the case $a/R\sim1$ using the full model. The results show that the orbits can undergo many more complicated Lidov-Kozai resonances that differ from the classical type (depicted in Figure \ref{fig:fig2}). Figure \ref{fig:fig7} illustrates seven resonant phase space structures, each of which corresponds to a particular Lidov-Kozai resonance type. The resonances shown in Figure \ref{fig:fig7} (a),(b),...,(g) will be called Type a,b,...,g, respectively. One observes that these resonances all have equilibrium points at $\omega=0,\pi$. In Type d and Type e there are two equilibrium points at $\omega=\pi/2$, and in Type g the equilibrium point at $\omega=\pi/2$ disappears. In particular, in Types b and d the equilibrium points appear at other values of $\omega$ (besides $\omega=0,\pi/2,\pi$), and these values are not fixed. We have demonstrated that the quadrupole effect can lead to the equilibrium points at $\omega=0,\pi$ in Type a. 
Consequently, it is plausible to suppose that the appearance of the equilibrium points in the other types is also attributable to higher-order effects. In fact, which type of resonance the orbit undergoes under the uniform disk perturbation is determined by the parameters $a/R$ and $J_z$. We obtain the distributions of all resonance types in the parameter space $a/R\times J_z=[0,3]\times[0,1]$ through global numerical calculations (see Figure \ref{fig:fig8}). In our runs, the parameter space is covered in steps of 0.01 in both $a/R$ and $J_z$. In Figure \ref{fig:fig8}, the red region gives the distribution of the classical type (as shown in Figure \ref{fig:fig2}(a)), the yellow region gives the distribution of Type a, and green: Type b; purple: Type c; white: Type d; gray: Type g; brown: Types e and f. The blue region is the non-resonance region, where the Lidov-Kozai resonance does not occur (see Figure \ref{fig:fig2}(d)). As Type e and Type f have the same numbers of stable and unstable equilibrium points, we merge them into the same distribution for simplicity. The differences between the ($e,\omega$) phase space portraits caused by varying $a/R$ or $J_z$ within the same distribution region lie only in the size of the libration islands, the oscillation amplitudes of $e$ and $\omega$, and the positions of the equilibrium points. \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{a_j.pdf} \caption{Distribution of all resonance types in the parameter space $a/R\times J_z$, for the secular problem of the uniform disk perturbation. The red region represents the distribution of the classical Lidov-Kozai resonance as shown in Figure \ref{fig:fig2}(a). The blue region represents the distribution of the non-resonance type shown in Figure \ref{fig:fig2}(d), and for the other regions, yellow: Type a; green: Type b; purple: Type c; white: Type d; brown: Type e and Type f; gray: Type g. 
The points corresponding to the phase space portraits shown in Figure \ref{fig:fig7} are marked by black stars.} \label{fig:fig8} \end{figure} As mentioned for the quadrupole approximation, when $a/R>0.42$ there are Lidov-Kozai resonances in which the system has equilibrium points at $\omega=0,\pi$. The numerical results show that the critical value of $a/R$ in the full model is about 0.43, which corresponds to the value of $a/R$ at the boundary point between the red region and the yellow region at $J_z=0$ in Figure \ref{fig:fig8}. The analytical value of 0.42 is rather close to the numerical value of 0.43. In the case of $a/R\sim1$, the dominant resonance is Type b, corresponding to the green region in Figure \ref{fig:fig8}. As in Type a, the small eccentricities in Type b cannot grow to large values even at high inclinations, and hence the corresponding inclinations also cannot drop to very low values (as $\sqrt{1-e^2}\cos i$ remains constant). As a result, in Type b the small-eccentricity orbits can be maintained in a highly inclined state. Overall, in the case of $a/R\sim1$, the libration islands are somewhat smaller than in the case of $a/R\ll 1$, the orbital resonances are not very dramatic, and the variations of the eccentricity as well as the inclination are relatively moderate. In the region of $a/R>1$, the resonances shown in Figure \ref{fig:fig7} gradually fade away as $a/R$ increases, the blue non-resonance region becomes larger and larger (see Figure \ref{fig:fig8}), and more and more orbits no longer undergo secular resonances. Although the orbital resonances still exist for some values of $J_z$ when $a/R$ is large, such as $a/R=3$, the libration islands in these resonances are rather small and appear at high eccentricities (see Figure \ref{fig:fig10}). 
Most low-eccentricity ($e\lesssim 0.6$) orbits (actually the outer orbits) do not undergo secular resonance; their trajectories circulate from $0$ to $2\pi$ and their eccentricity variations are small. \begin{figure}[htbp] \centering \includegraphics[scale=0.27]{300_2.pdf} \caption{Phase space trajectories for $a=300$, $J_z=0.2$ in the full model. The trajectories in libration islands are depicted by magenta lines. The trajectories of the outer orbits are depicted by blue lines and are circulating.} \label{fig:fig10} \end{figure} \section{CONCLUSION AND DISCUSSION \label{sec:discussion}} In this paper, we have studied the secular behaviour of a particle's orbit under the gravitational perturbation from a uniform disk. By averaging the multipole expansion of the disturbing potential over the orbit, we developed the secular approximation for this problem, from which some properties of the system can be derived analytically. For the inner orbit problem, we first consider the dipole level of the secular approximation, i.e., the dipole approximation. We demonstrated that when the Kozai integral $J_z\leq \sqrt{3}/2$, the Lidov-Kozai resonance occurs for the inner orbits and the system has two equilibrium points at $\omega=\pi/2,3\pi/2$. The critical value $\sqrt{3}/2$ corresponds to the critical inclination $30^\circ$, above which the eccentricity and inclination variations of the orbits are usually large. The maximum eccentricity $e_{max}$ depends only on the initial inclination $i_0$, and $e_{max}$ increases as $i_0$ increases (for prograde orbits). For very large $i_0$, the eccentricity can be excited to near 1 by the Lidov-Kozai effect. The oscillation period or evolution time satisfies $T_{evol}\propto 1/m_d$. When $a/R\ll1$, the dipole approximation agrees well with the full model. 
When $a/R$ takes larger values, the dipole approximation is inadequate to describe the behaviour of the system, and hence we need to employ the quadrupole approximation. In the quadrupole approximation, the critical value $J_c$ for the occurrence of the Lidov-Kozai resonance slightly increases from $\sqrt{3}/2$ ($\approx 0.866$) to 0.896 as $a/R$ increases from 0 to 0.4 due to the quadrupole effect, and the corresponding critical inclination $i_c$ drops from $30^\circ$ to about $26.4^\circ$ (the value in the full model is closer to $27^\circ$). When $a/R>0.42$, besides $\omega=\pi/2,3\pi/2$, the equilibrium points of the system can also appear at $\omega=0,\pi$, which leads to orbital behaviours different from those in the classical Lidov-Kozai resonance of $a/R<0.42$. For the outer orbit problem, we find that the outer orbits do not undergo the Lidov-Kozai resonance under the secular perturbation of the uniform disk. The variations of the eccentricity and inclination for the outer orbits are small, and the outer orbits are strongly stable. We investigated the secular dynamics of the orbits with $a/R\sim1$ through numerical methods. We find that there are many more complicated Lidov-Kozai resonance types, in which the equilibrium points of the system appear at $\omega=0,\pi/2,\pi,3\pi/2$, and even at other values of $\omega$. The eccentricity (as well as inclination) oscillations in these types are relatively moderate on the whole. In particular, in some resonance types the highly inclined orbits are stable. We also find that the multipole expansion of the potential due to a Kuzmin disk \citep{kuzmin1956model} is similar in form to that of the uniform disk (see Appendix \ref{sec:A3}). The difference between them lies only in a numerical factor, which does not affect the qualitative analysis. 
This implies that an orbit under the secular perturbation of the Kuzmin disk exhibits dynamical behaviour similar to that under the uniform disk. As a result, for an orbit located in the central region of the Kuzmin disk, the Lidov-Kozai effect sets in if its inclination is larger than the critical value $30^{\circ}$ (in the dipole approximation). In addition, \cite{terquem2010eccentricity} mentioned that under the perturbation of a disk with decreasing surface density $\sigma(\rho)\propto \rho^{-1/2}$, inner radius $R_i=1$AU and outer radius $R_o=100$AU (case A in that paper), the critical inclination $i_c$ for eccentricity growth is also about $30^{\circ}$. For the problem of an annulus disk with a non-zero inner radius $R_i$, \cite{terquem2010eccentricity} found that the Lidov-Kozai effect with $i_c=39.2^{\circ}$ will occur for an orbit that is well inside the inner cavity of the annulus (i.e. $a\ll R_i$). We have checked the behaviour of the orbit under the perturbation of a uniform annulus with a small radius ratio $R_i/R_o$ (below 0.5). When $a\ll R_i$, the orbit is indeed subject to the Lidov-Kozai effect with $i_c=39.2^{\circ}$. For an orbit crossing the annulus, we find that its behaviour is similar to that in the uniform disk case. In fact, since the radius ratio $R_i/R_o$ is small, to a certain extent the cumulative effect of the perturbation of the annulus is equivalent to that of the disk with $R_i=0$. Thus, the annulus perturbation does not significantly change the dynamical behaviour of the orbit described above. \acknowledgments We thank the anonymous referee for helpful comments and suggestions improving the paper. This work is supported by the Chinese Academy of Sciences, the National Natural Science Foundation of China (NSFC) (Nos. 11673053, 11673049), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2019265).
\section{Introduction} \label{sec:intro} Massive wireless device connectivity under limited spectrum resources is considered a cornerstone of the wireless Internet-of-Things (IoT) evolution. As a transformative physical-layer technology, non-orthogonal multiple access (NOMA) \cite{Intro-NOMA-book-vaezi2019multiple,NOMA-liu2020non} leverages superposition coding (SC) and successive interference cancellation (SIC) techniques to support simultaneous multi-user transmission in the same time-frequency resource block. Compared with its conventional orthogonal multiple access (OMA) counterpart, NOMA can significantly increase the spectrum efficiency, reduce access latency, and achieve more balanced user fairness \cite{Intro-NOMA-ding2017}. Typically, NOMA functions in either the power domain, by multiplexing different power levels, or the code domain, by utilizing partially overlapping codes \cite{intro-NOMA-dai2018asur}. Cooperative NOMA, which integrates cooperative communication techniques into NOMA, can further improve the communication reliability of users under poor channel conditions, and therefore largely extend the radio coverage \cite{coNOMA-ding2015}. Consider a downlink transmission scenario with two classes of users: 1) near users, which have better channel conditions and are usually located close to the base station (BS); and 2) far users, which have worse channel conditions and are usually located close to the cell edge. The near users perform SIC or joint maximum-likelihood (JML) detection to detect their own information, thereby obtaining prior knowledge of the far users' messages. Then, the near users act as relays and forward this prior information to the far users, thereby improving the reception reliability and reducing the outage probability for the far users. Many novel information-theoretic NOMA contributions have been proposed. 
It was shown in \cite{intro-CRS-NOMA-kim2015cap,coopNOMA2017-8108407,coNOMA-AF-eb2019} that a significant improvement in terms of the outage probability can be achieved, compared to the non-cooperative counterpart. The impact of user pairing on the outage probability and throughput was investigated in \cite{copNOMA-zhou2018dynamic}, where both random and distance-based pairing strategies were analyzed. To address the issue that the near users are energy-constrained, the energy harvesting technique was introduced into cooperative NOMA in \cite{copNOMA-liu2016cooperative}, where three user selection schemes were proposed and their performances were analyzed. Different from the aforementioned information-theoretic approaches, in this paper we aim to improve the performance of cooperative NOMA from the bit error rate (BER) perspective, and provide specific guidance for a practical system design. Our further investigation indicates that the conventional cooperative NOMA suffers from three main limitations (detailed in Section \ref{sec:lim}). First, the conventional composite constellation design at the BS adopts a separate mapping rule. Based on a standard constellation such as quadrature amplitude modulation (QAM), bits are first mapped to user symbols, which in turn are mapped into a composite symbol using SC. This results in a reduced minimum Euclidean distance. Second, while forwarding the far user's signal, the near user does not dynamically design the corresponding constellation \cite{BER-coNOMA-kara2019,BER-coNOMA-kara2020,NOMA-ber-li2019spatial}, but simply reuses the far-user constellation adopted at the BS. Last, the far user treats the near user's interference signal as additive white Gaussian noise (AWGN), which is usually not the case. Moreover, it applies maximal-ratio combining (MRC) for signal detection, which ignores the potential error propagation from the near user \cite{BER-coNOMA-kara2019}. 
These limitations motivate us to develop a novel cooperative NOMA design referred to as deep cooperative NOMA. The essence lies in its holistic approach, taking the three limitations into account simultaneously to perform an end-to-end multi-objective joint optimization. However, this task is quite challenging, because it is intractable to transform the multiple objectives into explicit expressions, let alone optimize them simultaneously. To address this challenge, we leverage the interdisciplinary synergy from deep learning (DL) \cite{book-goodfellow2016deep,review-DL-Pqin2019deep,intro-zhang2019deep,onoffline-he2019model,6G-letaief2019roadmap,ch-est-dong2019deep,DL-MIMO-he2020model,ours-lu2020deep}. We develop a novel hybrid-cascaded deep neural network (DNN) architecture to represent the entire system, and construct multiple loss functions to quantify the BER performance. The DNN architecture consists of several structure-specific DNN modules, which exploit the universal function approximation capability of DNNs and integrate communication domain knowledge through combined analytical and data-driven modelling. The remaining task is to train the proposed DNN architecture by learning the parameters of all the DNN modules in an efficient manner. To handle multiple loss functions, we propose a novel multi-task oriented training method with two stages. In stage I, we minimize the loss functions for the near user, and determine the mapping and demapping between the BS and the near user. In stage II, fixing the DNN modules learned in stage I, we minimize the loss function for the entire network, and determine the mapping and demapping for the near and far users, respectively. Both stages involve self-supervised training, utilizing the input training data as the class labels and thereby eliminating the need for human labeling effort. 
Instead of adopting the conventional symbol-wise training methods \cite{ae-o2017introduction,ae-jsac-aoudia2019model,deepNOMA-ye2020}, we propose a novel bit-wise training method to obtain bit-wise soft probability outputs, facilitating the incorporation of channel coding and soft decoding to combat signal deterioration. Then we examine the specific probability distribution that each DNN module has learned, moving beyond the ``black box'' view of DNN learning \cite{DL-inte-9061001} and offering insights into the mechanism and the rationale behind the proposed DNN architecture and its corresponding training method. Besides, we propose a solution to handle the power allocation (PA) mismatch between the training and inference processes to enhance model adaptation. Our simulation results demonstrate that the proposed deep cooperative NOMA significantly outperforms both OMA and the conventional cooperative NOMA in terms of the BER performance. Moreover, the proposed scheme features low computational complexity in both uncoded and coded cases. The main contributions can be summarized as follows. \begin{itemize} \item We propose a novel deep cooperative NOMA scheme with bit-wise soft probability outputs, where the entire system is re-designed as a hybrid-cascaded DNN architecture, such that it can be optimized in a holistic manner. \item By constructing multiple loss functions to quantify the BER performance, we propose a novel multi-task oriented two-stage training method to solve the end-to-end training problem in a self-supervised manner. \item We carry out theoretical analysis based on information theory to reveal the learning mechanism of each DNN module. We also adapt the proposed scheme to handle the PA mismatch between training and inference, and incorporate it with channel coding. \item Our simulation results demonstrate the superiority of the proposed scheme over OMA and the conventional cooperative NOMA in various channel scenarios. 
\end{itemize} The rest of this paper is organized as follows. In Section \Rmnum{2}, we introduce the cooperative NOMA system model and the limitations of the conventional scheme. In Section \Rmnum{3}, our deep cooperative NOMA scheme and the multi-task learning problem are introduced, and the two-stage training method is presented, followed by the analysis of the bit-wise loss function. Section \Rmnum{4} provides the theoretical perspective of the design principles. Section \Rmnum{5} discusses the adaptation of the proposed scheme. Simulation results are shown in Section \Rmnum{6}. Finally, the conclusion is presented in Section \Rmnum{7}. \textit{Notation}: Bold lower case letters denote vectors. $(\cdot)^T$ and $(\cdot)^*$ denote the transpose and conjugate operations, respectively. $\diag(\bm{a})$ denotes a diagonal matrix whose diagonal entries, starting in the upper left corner, are $a_1, \dots, a_n$. $\mathbb{C}$ represents the set of complex numbers. $\mathbb{E}[\cdot]$ denotes the expected value. $\bm{x}(r)$ denotes the $r$-th element of $\bm{x}$. Random variables are denoted by capital font, e.g., $X$ with the realization $x$. Multivariate random variables are represented by capital bold font, e.g., ${\bf Y} = [Y_1, Y_2 ]^T$, ${\bf X}(r)$, with realizations $\bm{y} = [y_1, y_2 ]^T$, $\bm{x}(r)$, respectively. $p(x,y)$, $p(y|x)$, and $I(X; Y )$ represent the joint probability distribution, conditional probability distribution, and mutual information of the two random variables $X$ and $Y$, respectively. The cross-entropy of two discrete distributions $p(x)$ and $q(x)$ is denoted by $H( p(x), q(x)) = -\sum_{x} p(x) \log q(x) $. \section{Cooperative NOMA Communication System} \subsection{System Model} \label{sec:sys-model} We consider a downlink cooperative NOMA system with a BS and two users (near user UN and far user UF), as shown in Fig.~\ref{fig:sys-coopeNOMA}. The BS and the users are each assumed to be equipped with a single antenna. 
We assume that only the statistical channel state information (CSI), such as the average channel gains, is available at the BS, the instantaneous CSI of the BS-to-UN link is available at UN, and the instantaneous CSI of the BS/UN-to-UF links is available at UF. UN and UF are classified according to their statistical CSI. Typically, they have better and worse channel conditions, respectively. Correspondingly, UN acts as a decode-and-forward (DF) relay and assists the signal transmission to UF. The complete signal transmission consists of two phases, described as follows. In the direct transmission phase, the BS transmits the composite signal to both users. In the cooperative transmission phase, UN performs joint detection, and then forwards the re-modulated UF signal to UF. \begin{figure}[!t] \centering \includegraphics[width=0.9\textwidth]{CoopNOMA} \caption{System model of the cooperative NOMA. UN can adopt the JML or SIC detector. } \label{fig:sys-coopeNOMA} \end{figure} Let $\bm{s}_{N} \in \{ 0,1 \}^{k_{N}}$ and $ \bm{s}_{F} \in \{ 0,1 \}^{k_{F}}$ denote the transmitted bit blocks for UN and UF, with lengths $k_{N}$ and $k_{F}$, respectively. $\bm{s}_{N}$ and $\bm{s}_{F}$ are mapped to user symbols $x_{N}$ and $x_{F}$, taken from the $ M_{N}$- and $M_{F}$-ary unit-power constellations $\mathcal{M}_{N} \subset \mathbb{C}$ and $\mathcal{M}_{F} \subset \mathbb{C}$, respectively, where $2^{k_{N}} = M_{N}$ and $2^{k_{F}} = M_{F}$. The detailed transmission process is as follows. In the direct transmission phase, the BS uses SC to obtain a composite symbol \begin{align} x_{S} = \sqrt{{\alpha}_{S,N}} x_{N} + \sqrt{{\alpha}_{S,F}} x_{F}, \ x_{S} \in \mathcal{M}_{S} \subset \mathbb{C} \label{eq:xS-comp} \end{align} and transmits $x_{S}$ to the two users, where ${\alpha}_{S,N}$ and ${\alpha}_{S,F}$ are the PA coefficients with ${\alpha}_{S,N} < {\alpha}_{S,F}$ and ${\alpha}_{S,N} + {\alpha}_{S,F}=1$. 
$\mathcal{M}_{S}$ is called the composite constellation, and can be written as the sumset $\mathcal{M}_{S} = \sqrt{{\alpha}_{S,N} }\mathcal{M}_{N} + \sqrt{{\alpha}_{S,F}} \mathcal{M}_{F} \triangleq \{ \sqrt{{\alpha}_{S,N} } t_{N} + \sqrt{{\alpha}_{S,F} } t_{F} : t_{N} \in \mathcal{M}_{N}, t_{F} \in \mathcal{M}_{F} \}$. The received signal at the users can be expressed as \begin{align} y_{S,J} = \sqrt{P_{S}} h_{S,J} (\sqrt{{\alpha}_{S,N}} x_{N} + \sqrt{{\alpha}_{S,F}} x_{F} ) + n_{S,J}, \ J \in \{ N,F \}, \label{eq:ySFN} \end{align} where $P_{S}$ is the transmit power of the BS, $n_{S,J} \sim \mathcal{CN}(0,2 \sigma^2_{S,J} )$ denotes the i.i.d complex AWGN, and $h_{S,J}$ denotes the fading channel coefficient. We define the transmit signal-to-noise ratio as SNR$= \frac{P_S}{2\sigma_{S,F}^2} $. After receiving $y_{S,N}$, UN performs JML detection\footnote{Here we introduce JML as an example. Note that SIC can also be used.} given by \begin{align} \label{eq:N-JML} ( \hat{x}_{N}, \hat{x}_{F}^{N}) = \arg\min_{(x_{N}, x_{F}) \in \{ \mathcal{M}_{N} \times \mathcal{M}_{F} \} } & \ \Big\vert y_{S,N} -\sqrt{P_{S}} h_{S,N} (\sqrt{{\alpha}_{S,N}} x_{N} + \sqrt{{\alpha}_{S,F}} x_{F}) \Big\vert^2, \end{align} where $\hat{x}_{N}$ denotes the estimate of $x_{N}$ and $\hat{x}_{F}^{N}$ denotes the estimate of $x_{F}$ at UN. The corresponding estimated user bits $(\hat{\bm{s}}_{N}, \hat{\bm{s}}_{F}^{N}) \in (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}})$ can be demapped from $( \hat{x}_{N}, \hat{x}_{F}^{N})$. In the cooperative transmission phase, UN transmits the re-modulated signal $\hat{x}_{F}^{N}$ to UF with $\hat{x}_{F}^{N} \in \mathcal{M}_{F}^{N} = \mathcal{M}_{F}$. The received signal at UF can be written as \begin{align} y_{N,F} = \sqrt{P_{N}} h_{N,F} \hat{x}_{F}^{N} + n_{N,F}, \label{eq:yNF} \end{align} where $P_{N}$ is the transmit power of UN, $n_{N,F} \sim \mathcal{CN}(0,2 \sigma^2_{N,F} )$ denotes the AWGN, and $h_{N,F}$ denotes the channel fading coefficient. 
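As an illustrative sketch (not part of the paper's framework), the JML detection rule above reduces to an exhaustive search over the product grid $\mathcal{M}_{N} \times \mathcal{M}_{F}$. The snippet below assumes unit-power Gray-mapped QPSK for both users, unit transmit power, a unit real channel, and the PA pair $(0.4, 0.6)$; all of these values are assumptions for illustration.

```python
import numpy as np

# unit-power Gray-mapped QPSK for both users (M_N = M_F = 4)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
a_n, a_f = 0.4, 0.6          # PA coefficients, a_n < a_f, a_n + a_f = 1
P_s, h_sn = 1.0, 1.0         # unit power and unit channel (assumptions)

def jml_detect(y_sn):
    """Exhaustive joint ML search over the product grid M_N x M_F."""
    grid_n, grid_f = np.meshgrid(QPSK, QPSK, indexing="ij")
    metric = np.abs(y_sn - np.sqrt(P_s) * h_sn *
                    (np.sqrt(a_n) * grid_n + np.sqrt(a_f) * grid_f)) ** 2
    k, l = np.unravel_index(np.argmin(metric), metric.shape)
    return QPSK[k], QPSK[l]   # (x_N estimate, x_F estimate at UN)

# noiseless sanity check: every composite symbol is recovered exactly
ok = all(
    jml_detect(np.sqrt(a_n) * xn + np.sqrt(a_f) * xf) == (xn, xf)
    for xn in QPSK for xf in QPSK
)
```

The noiseless check relies on the sumset constellation being one-to-one for this PA pair; with noise added, the same search yields the ML estimate.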
The entire transmission for UF can be considered as a cooperative transmission with a DF relay, i.e., UN. As UF has the knowledge of $h_{S,F}$ and $h_{N,F}$, by treating the interference term $\sqrt{{\alpha}_{S,N}}x_{N}$ in $y_{S,F}$ as AWGN and leveraging the widely used MRC \cite{NOMA-coop-xu2016novel,coopNOMA2017-8108407,NOMA-ber-li2019spatial}, UF first combines $y_{S,F}$ and $y_{N,F}$ as \begin{equation} y_{F} = \beta_{S,F} y_{S,F} + \beta_{N,F} y_{N,F}, \end{equation} where $\beta_{S,F} = \frac{\sqrt{P_{S}{\alpha}_{S,F}} h_{S,F}^{*} }{P_{S}{\alpha}_{S,N} |h_{S,F} |^2 + 2\sigma_{S,F}^2 } $ and $\beta_{N,F} = \frac{\sqrt{P_{N}} h_{N,F}^{*} }{2\sigma_{N,F}^2}$ \cite{NOMA-ber-li2019spatial}. Then, UF detects its own symbol $x_{F}$ from $y_{F}$ as \begin{align} \label{eq:MRC-F} \hat{x}_{F} = \arg\min_{x_{F} \in \mathcal{M}_{F}} & \ \Big| y_{F} - \big( \beta_{S,F} \sqrt{P_{S}{\alpha}_{S,F}} h_{S,F} + \beta_{N,F} \sqrt{P_{N}} h_{N,F} \big) x_{F} \Big|^2. \end{align} The corresponding estimated bits $\hat{\bm{s}}_{F} \in \{ 0,1 \}^{k_{F}}$ can be demapped from $\hat{x}_{F}$. Hereafter, for convenience, we denote the bit to composite symbol mappings at the BS and UN as $f_{S}$ and $f_{N}$, respectively, and denote the demappings at UN and UF as $g_{N}$ and $g_{F}$, respectively. They are defined as \begin{align} f_{S}: & \ (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to \mathcal{M}_{S} \subset \mathbb{C}, \label{eq:fS} \\ f_{N}: & \ \hat{\bm{s}}_{F}^{N} \to \mathcal{M}_{F}^{N} \subset \mathbb{C}, \label{eq:fN} \end{align} and \begin{align} g_{N}: & \ y_{S,N} \to (\hat{\bm{s}}_{N}, \hat{\bm{s}}_{F}^{N}) \in (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}), \label{eq:DetN} \\ g_{F}: & \ (y_{S,F}, y_{N,F}) \to \hat{\bm{s}}_{F} \in \{ 0,1 \}^{k_{F}} . 
\label{eq:DetF} \end{align} The average symbol error rate (SER) and BER are respectively denoted as $\mathcal{P}_{N,e_s}$ and $\mathcal{P}_{N,e_b}$ for UN to detect the UN signal, as $\mathcal{P}_{F,e_s}^{N}$ and $\mathcal{P}_{F,e_b}^{N}$ for UN to detect the UF signal, and as $\mathcal{P}_{F,e_s}$ and $\mathcal{P}_{F,e_b}$ at UF. They are defined as $\mathcal{P}_{N,e_s} = \mathbb{E}_{ {x}_{N} } \big[ \Pr \{ {x}_{N} \neq \hat{x}_{N} \} \big]$, $\mathcal{P}_{N,e_b} = \mathbb{E}_{ \bm{s}_{N} } \big[ \Pr \{ \bm{s}_{N} \neq \hat{\bm{s}}_{N} \} \big]$, $\mathcal{P}_{F,e_s}^{N} = \mathbb{E}_{ {x}_{F} } \big[ \Pr \{ {x}_{F} \neq \hat{x}_{F}^{N} \} \big]$, $\mathcal{P}_{F,e_b}^{N} = \mathbb{E}_{ \bm{s}_{F} } \big[ \Pr \{ \bm{s}_{F} \neq \hat{\bm{s}}_{F}^{N} \} \big]$, $\mathcal{P}_{F,e_s} = \mathbb{E}_{ {x}_{F} } \big[ \Pr \{ {x}_{F} \neq \hat{x}_{F} \} \big]$, and $\mathcal{P}_{F,e_b} = \mathbb{E}_{ \bm{s}_{F} } \big[ \Pr \{ \bm{s}_{F} \neq \hat{\bm{s}}_{F} \} \big]$. Note that SER and BER are functions of the constellation mappings (i.e., $f_{S}$ and $f_{N}$) and demappings (i.e., $g_{N}$ and $g_{F}$). For a given design problem, the parameters $ \{ k_{N}, k_{F}, {\alpha}_{S,N}, {\alpha}_{S,F} \} $ are fixed and we let $P_S=P_N=1$. \subsection{Limitation} \label{sec:lim} The system design above has been widely adopted in the literature \cite{intro-CRS-NOMA-kim2015cap,NOMA-coop-xu2016novel,coopNOMA2017-8108407, NOMA-ber-li2019spatial}. In the following, we specify its three main limitations ({\bf L1})-({\bf L3}), which serve as the underlying motivation for a new system design in Section \ref{sec:proposed-CNOMA}. ({\bf L1}) \textbf{Bit Mapping at the BS:} From the signal detection perspective, the conventional mapping from bit to composite symbol (c.f. \eqref{eq:fS}) uses a separate mapping: first $ (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to ( \mathcal{M}_{N},\mathcal{M}_{F} ) $, and then $ ( \mathcal{M}_{N},\mathcal{M}_{F} ) \to \mathcal{M}_{S}$. 
Typically, we can adopt Gray mapping for $ \{ 0,1 \}^{k_{N}} \to \mathcal{M}_{N}$ and $ \{ 0,1 \}^{k_{F}} \to \mathcal{M}_{F} $, while $\mathcal{M}_{N}$ and $\mathcal{M}_{F}$ are chosen from the standard constellations, e.g., QAM. Then, for designing $f_{S} $ in \eqref{eq:fS}, only $ ( \mathcal{M}_{N},\mathcal{M}_{F} ) \to \mathcal{M}_{S}$ needs to be optimized as follows \begin{IEEEeqnarray}{rCl} & \min_{ \substack{( \mathcal{M}_{N}, \ \mathcal{M}_{F} ) \to \mathcal{M}_{S} \subset \mathbb{C} }} & \quad \Big\{ \mathcal{P}_{N,e_s} (f_{S}, g_{N}), \ \mathcal{P}_{F,e_s}^{N} (f_{S}, g_{N}) \Big\} \label{eq:L1} \\ & \subto & \quad \mbox{predefined condition}, \notag \end{IEEEeqnarray} where $ g_{N}$ here is the JML detector in \eqref{eq:N-JML}, $ \mathcal{P}_{N,e_s} (f_{S}, g_{N})$ and $\mathcal{P}_{F,e_s}^{N} (f_{S}, g_{N})$ characterize the SERs associated with \eqref{eq:N-JML}, and for example, the predefined condition can be the constellation rotation in \cite{NOMA-cons-ye2017constellation}. Clearly, this disjoint design is suboptimal, resulting in a degraded error performance. For example, in Fig.~\ref{fig:cons-xN}, $x_{N}$ and $x_{F}$ are QPSK symbols with Gray mapping. Accordingly, in Fig.~\ref{fig:cons-xS}, $x_{S}$ is the composite symbol for $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$. It can be clearly seen that at the symbol level, the composite constellation $\mathcal{M}_{S}$ for $x_{S}$ results in a very small minimum Euclidean distance. Furthermore, a close look at $\mathcal{M}_{S}$ reveals that, at the bit level, the mapping $ (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to \mathcal{M}_{S}$ is not optimized. 
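The shrinkage of the minimum Euclidean distance described above can be quantified with a short sketch (an illustration, not the paper's code): assuming unit-power Gray QPSK user constellations and the PA pair $(0.4, 0.6)$, the sumset composite constellation is built and its minimum distance compared against that of a single user constellation.

```python
import numpy as np
from itertools import combinations

# unit-power Gray-mapped QPSK user constellation (assumption)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
a_n, a_f = 0.4, 0.6  # PA coefficients of the example

# sumset composite constellation M_S (16 points)
comp = [np.sqrt(a_n) * xn + np.sqrt(a_f) * xf
        for xn in QPSK for xf in QPSK]

d_min_user = min(abs(p - q) for p, q in combinations(QPSK, 2))
d_min_comp = min(abs(p - q) for p, q in combinations(comp, 2))
# d_min drops from sqrt(2) to sqrt(2) * (sqrt(a_f) - sqrt(a_n)), about 0.20
```

The closest composite points are those whose user-symbol differences point in opposite directions, which is exactly the bit-level interaction that a separate mapping rule fails to optimize.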
\begin{figure} [!t] \centering \subfigure[For $x_{N} \in \mathcal{M}_{N}$, $x_{F} \in \mathcal{M}_{F}$, and $\hat{x}_{F}^{N} \in \mathcal{M}_{F}^{N}$ (all QPSK)]{ \label{fig:cons-xN} \includegraphics[width=0.46\textwidth]{cons-conven-M4-xN}} \subfigure[For $x_{S} \in \mathcal{M}_{S}$ (composite constellation)]{ \label{fig:cons-xS} \includegraphics[width=0.46\textwidth]{cons-conven-M4-xS}} \caption{Conventional constellations for $M_{N}=M_{F}=4$ and $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$.} \label{fig:cons-conven-M4-S1} \end{figure} ({\bf L2}) \textbf{Constellation at UN:} In the cooperative NOMA system, UN acts as a DF relay: it first detects ${x}_{F}$ (or equivalently, $\bm{s}_{F}$), and then forwards the re-modulated signal $\hat{x}_{F}^{N}$ to UF. Here, $\mathcal{M}_{F}^{N}$ is assumed in the literature to be exactly the same as the UF constellation $\mathcal{M}_{F}$ at the BS. Clearly, such a design for UF may not be optimal because (1) detection errors may occur at UN; and (2) UF receives signals not only from UN, but also from the BS ($y_{S,F}$ including non-AWGN interference). In this case, $\mathcal{M}_{F}^{N}$ should be further designed, rather than simply letting $\mathcal{M}_{F}^{N}=\mathcal{M}_{F}$ (known as repetition coding \cite{intro-AF-classic}). ({\bf L3}) \textbf{Detection at UF:} In practice, MRC is widely adopted as it only needs $h_{S,F}$ and $h_{N,F}$. Its design principle can be written as \begin{IEEEeqnarray}{rCl} & \min_{ g_{F} } & \quad \mathcal{P}_{F,e_s} ( f_{S}, f_{N}, g_{N}, g_{F} ) \label{eq:L3} \\ & \subto & \quad \mathcal{M}_{F}^{N} = \mathcal{M}_{F} , \notag \\ & & \quad \hat{x}_{F}^{N} = x_{F} , \notag \end{IEEEeqnarray} where $ \mathcal{P}_{F,e_s} ( f_{S}, f_{N}, g_{N}, g_{F} )$ characterizes the SER associated with \eqref{eq:MRC-F}, $f_{S}$ and $ g_{N}$ are given, and $\mathcal{M}_{F}^{N} = \mathcal{M}_{F}$ is for $f_{N}$.
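As background for ({\bf L3}): maximal-ratio combining weights each branch by its conjugate channel gain divided by the branch noise variance, so that the post-combining SNR equals the sum of the branch SNRs. The following numpy sketch is a generic two-branch combiner for a single symbol, with made-up gains standing in for $h_{S,F}$ and $h_{N,F}$; it deliberately ignores the non-AWGN interference and relay detection errors discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def mrc_combine(y, h, noise_var):
    """Maximal-ratio combining: weight branch i by h_i^*/sigma_i^2,
    then normalize so the symbol estimate is unbiased."""
    w = np.conj(h) / noise_var
    return (y @ w) / np.sum(np.abs(h) ** 2 / noise_var)

# Two branches with made-up gains (stand-ins for h_{S,F} and h_{N,F}).
h = np.array([0.8 + 0.3j, 1.1 - 0.5j])
sigma2 = np.array([0.5, 0.5])   # per-branch noise variances
x = 1.0                          # transmitted BPSK symbol

n_trials = 100_000
noise = (rng.standard_normal((n_trials, 2)) + 1j * rng.standard_normal((n_trials, 2))) * np.sqrt(sigma2 / 2)
y = h * x + noise                # stacked branch observations
z = mrc_combine(y, h, sigma2)    # combined estimate, one per trial

branch_snr = np.abs(h) ** 2 / sigma2
err_var = np.var(z - x)          # post-combining noise variance
```

The measured `err_var` matches $1/\sum_i \mathrm{SNR}_i$, i.e., the branch SNRs add under MRC, which is why it only needs the channel gains.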
However, it is sub-optimal due to the potential signal detection error at UN (i.e., $\hat{x}_{F}^{N}\neq x_{F}$) \cite{BER-coNOMA-kara2019} and the ideal assumption in \eqref{eq:MRC-F} that the interference term $ \sqrt{{\alpha}_{S,N}}x_{N}$ in $y_{S,F}$ is AWGN. \section{The Proposed Deep Cooperative NOMA Scheme} \label{sec:proposed-CNOMA} \subsection{Motivation} % \label{sec:moti} To overcome ({\bf L1}), a desirable approach is to solve the following problem \begin{IEEEeqnarray}{rCl} & \min_{ f_{S} } & \quad \Big\{ \mathcal{P}_{N,e_b} (f_{S}, g_{N}), \ \mathcal{P}_{F,e_b}^{N} (f_{S}, g_{N}) \Big\} \label{eq:Q1} \end{IEEEeqnarray} with given $ g_{N}$. That is, we use BER as the performance metric, and directly optimize the mapping $ f_S: (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to \mathcal{M}_{S} \subset \mathbb{C}$. To handle ({\bf L2}) and minimize the end-to-end BER $\mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} )$, the constellation $\mathcal{M}_{F}^{N}$ in $ f_{N}$ should be designed by solving the following problem \begin{IEEEeqnarray}{rCl} & \min_{ \substack{ f_{N} } } & \quad \mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} ) \label{eq:Q2} \end{IEEEeqnarray} with given $f_{S}$, $ g_{N}$, and $g_{F}$. To handle ({\bf L3}), the optimization problem can be re-designed as \begin{align} \min_{ g_{F} } & \quad \mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} ) \label{eq:Q3} \end{align} with given $f_{S}$, $ f_{N}$, and $g_{N}$, where the ideal assumptions in \eqref{eq:L3}, i.e., $\hat{x}_{F}^{N} = x_{F}$ and $\sqrt{{\alpha}_{S,N}}x_{N}$ is AWGN, are removed. However, addressing ({\bf L1})-({\bf L3}) separately is suboptimal due to the disjoint nature of the mapping and demapping design. 
This motivates us to take a holistic approach, taking into account ({\bf L1})-({\bf L3}) simultaneously to perform an end-to-end multi-objective optimization as \begin{align} \raisebox{-0.0\normalbaselineskip}[0pt][0pt]{% ({\bf P1})} \qquad \min_{ \substack{ f_{S}, \ f_{N}, \ g_{N}, \ g_{F} } } & \quad \Big\{ \mathcal{P}_{N,e_b} ( f_{S}, g_{N} ) , \ \mathcal{P}_{F,e_b}^{N} ( f_{S}, g_{N} ) , \ \mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} ) \Big\} . \notag \end{align} Clearly, ({\bf P1}) represents a joint $\big \{ f_{S}, f_{N}, g_{N}, g_{F} \big\}$ design for all objectives in \eqref{eq:Q1}-\eqref{eq:Q3}. \begin{center} \fbox{% \parbox{0.98\textwidth}{% \textbf{Challenge 1:} It is very challenging to find the solutions for ({\bf P1}), because it is difficult to transform the objective functions $\big\{ \mathcal{P}_{N,e_b} ( f_{S}, g_{N} ) , \mathcal{P}_{F,e_b}^{N} ( f_{S}, g_{N} ) , \mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} ) \big\}$ and optimization variables $\big \{ f_{S}, f_{N}, g_{N},g_{F} \big\}$ into explicit expressions. }% } \end{center} \begin{center} \fbox{% \parbox{0.98\textwidth}{ \textbf{Challenge 2:} Moreover, the three objectives correspond to different users' BER and may be mutually conflicting \cite{deepNOMA-ye2020}. So it is very difficult to minimize them simultaneously \cite{book-deb2014}. }% } \end{center} To overcome these challenges, we propose a novel deep multi-task oriented learning scheme from a combined model- and data-driven perspective. Specifically, by tapping into the strong nonlinear mapping and demapping capability of DNNs (universal function approximation), we first express $ \big\{ f_{S}, f_{N}, g_{N}, g_{F} \big\}$ by constructing a hybrid-cascaded DNN architecture, and then recast $\big\{ \mathcal{P}_{N,e_b} ( f_{S}, g_{N} ) , \mathcal{P}_{F,e_b}^{N} ( f_{S}, g_{N} ) , \mathcal{P}_{F,e_b} ( f_{S}, f_{N}, g_{N}, g_{F} ) \big\}$ into bit-level loss functions, so that they can be evaluated empirically.
Then, we develop a multi-task oriented two-stage training method to minimize the loss functions through optimizing the DNN parameters in a self-supervised manner. Thereby, the input training data also serve as the class labels. \subsection{Deep Cooperative NOMA} \label{sec:AE-CoopNOMA} The block diagram of the proposed deep cooperative NOMA is shown in Fig.~\ref{fig:AE-NOMA}, \begin{figure*}[!t] \centering \includegraphics[width=1\textwidth]{DeepCoopNOMA} \caption{Block diagram of the proposed deep cooperative NOMA including nine trainable DNN modules $\textcircled{\scriptsize 1}$-$\textcircled{\scriptsize 9}$, where $\textcircled{\scriptsize 1}$, $\textcircled{\scriptsize 2}$, and $\textcircled{\scriptsize 6}$ are mapping modules, while the remaining are demapping modules. The inputs $\{ \bm{s}_{N}, \bm{s}_{F} \}$ are bits, and the outputs $\{ \hat{\bm{s} }_{N}, \hat{\bm{s}}_{F}^{N}, \hat{\bm{s}}_{F}\}$ are bit-wise soft probabilities from the sigmoid function, e.g., $\hat{\bm{s}}_{F}=[0.96, 0.02]$. The corresponding loss functions are $L_1$ and $L_2$ for UN, and $L_3$ for UF. } \label{fig:AE-NOMA} \end{figure*} where the entire system (cf. Fig.~1) is re-designed as a novel hybrid-cascaded DNN architecture including nine trainable DNN modules, i.e., three mapping modules and six demapping modules. In essence, the whole DNN architecture learns the mapping between the BS inputs and the user outputs to combat the channel fading and noise. Each DNN module consists of multiple hidden layers describing its input-output mapping, including the learnable parameters, i.e., weights and biases. Here, we adopt the offline-training and online-deployment mode in DL. This means that all the DNN modules are deployed without retraining after initial training.
At the BS, we propose to use two parallel DNN mapping modules ($\textcircled{\scriptsize 1}$TxS-N and $\textcircled{\scriptsize 2}$TxS-F) with an SC operation to represent the direct mapping $f_{S}$ in \eqref{eq:fS}, which is hereafter referred to as $f_{S}^{\prime} : \{ f_{S,1}^{\prime}, f_{S,2}^{\prime} \}$, denoting the mapping parameterized by the associated DNN parameters. Note that $f_{S,1}^{\prime}$ and $f_{S,2}^{\prime}$ are for $\textcircled{\scriptsize 1}$TxS-N and $\textcircled{\scriptsize 2}$TxS-F, respectively. Their outputs $x_{N}$ and $x_{F}$ are normalized to ensure $\mathbb{E}\{ | x_{N}|^2\} = 1$ and $\mathbb{E}\{ | x_{F}|^2\} = 1$. The composite symbol (c.f. \eqref{eq:xS-comp}) now can be re-expressed by $x_{S} = f_{S}^{\prime}(\bm{s}_{N}, \bm{s}_{F})$. In the direct transmission phase, the received signal at the users can be expressed as \begin{align} y_{S,J} = h_{S,J} f_{S}^{\prime}(\bm{s}_{N}, \bm{s}_{F} ) + n_{S,J}, \ J \in \{ N,F \} . \label{eq:ySFN-DNN} \end{align} At UN, we use three DNN demapping modules ($\textcircled{\scriptsize 3}$RxPreSN, $\textcircled{\scriptsize 4}$RxN-N, and $\textcircled{\scriptsize 5}$RxN-F) to represent the demapping in \eqref{eq:DetN}, referred to as $g_{N}^{\prime} : \{ g_{N,3}^{\prime}, g_{N,4}^{\prime}, g_{N,5}^{\prime} \}$. Note that $g_{N,3}^{\prime}$, $g_{N,4}^{\prime}$, and $g_{N,5}^{\prime}$ are for $\textcircled{\scriptsize 3}$RxPreSN, $\textcircled{\scriptsize 4}$RxN-N, and $\textcircled{\scriptsize 5}$RxN-F, respectively. The received $y_{S,N}$ is equalized as $\frac{h_{S,N}^*y_{S,N}}{|h_{S,N}|^2}$, processed by $\textcircled{\scriptsize 3}$RxPreSN, and then demapped by two parallel DNNs ($\textcircled{\scriptsize 4}$RxN-N and $\textcircled{\scriptsize 5}$RxN-F) to obtain the estimates $\hat{\bm{s} }_{N}$ and $\hat{\bm{s}}_{F}^{N}$, respectively. 
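The two per-module operations just described, i.e., output power normalization ($\mathbb{E}\{ | x |^2\} = 1$) and channel-gain equalization ($h^{*}y/|h|^{2}$), can be sketched as follows. This is a hedged numpy illustration on a toy batch, not the trained modules themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

def normalize_power(x):
    """Normalization layer: scale a batch so that E{|x|^2} = 1."""
    return x / np.sqrt(np.mean(np.abs(x) ** 2))

def equalize(y, h):
    """Channel-gain equalization applied at the receiver: h^* y / |h|^2."""
    return np.conj(h) * y / np.abs(h) ** 2

# Toy batch of unnormalized mapper outputs (stand-ins for TxS-N outputs).
x_raw = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
x = normalize_power(x_raw)   # now E{|x|^2} = 1 over the batch

h = 0.7 - 0.2j               # made-up channel gain standing in for h_{S,N}
y = h * x                    # noiseless received batch, for clarity
x_eq = equalize(y, h)        # recovers x exactly in the absence of noise
```

With noise present, equalization leaves a scaled noise term $h^{*}n/|h|^{2}$, which is exactly what the subsequent RxPre module must cope with.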
This process can be expressed as \begin{align} (\hat{\bm{s} }_{N}, \hat{\bm{s}}_{F}^{N}) = g_{N}^{\prime} (y_{S,N}) \in \big( [0,1 ]^{k_{N}}, [ 0,1 ]^{k_{F}} \big) , \label{gN-DNN} \end{align} where $(\hat{\bm{s} }_{N}, \hat{\bm{s}}_{F}^{N}) $ are soft probabilities for each element in the vectors. Integrating \eqref{eq:ySFN-DNN}-\eqref{gN-DNN}, this demapping process at UN can be described as \begin{align} \underbrace{(\hat{\bm{s}}_{N}, \hat{\bm{s}}_{F}^{N} ) = g_{N}^{\prime} }_{\eqref{gN-DNN}} \circ \underbrace{ \mathcal{C}_{S,N} \circ f_{S}^{\prime} (\bm{s}_{N}, \bm{s}_{F})}_{\eqref{eq:ySFN-DNN} \text{ with } J=N}, \label{eq:det-1} \end{align} where $\circ$ is the composition operator and $\mathcal{C}_{S,N} \triangleq \mathcal{C}_{S,N}(y_{S,N}\vert x_{S},h_{S,N})$ denotes the channel function from the BS to UN. We refer to \eqref{eq:det-1} as the first demapping phase. After obtaining $\hat{\bm{s}}_{F}^{N}$, we use the DNN mapping module $\textcircled{\scriptsize 6}$TxN to represent the mapping in \eqref{eq:fN}, denoted as $\hat{x}_{F}^{N} = f_{N}^{\prime}(\hat{\bm{s}}_{F}^{N})$, where $f_{N}^{\prime} = f_{N,6}^{\prime}$. A normalization layer is used at the last layer of $\textcircled{\scriptsize 6}$TxN to ensure $\mathbb{E}\{ | \hat{x}_{F}^{N}|^2\} = 1$. In the cooperative transmission phase, UF receives \begin{align} y_{N,F} = h_{N,F} f_{N}^{\prime}(\hat{\bm{s}}_{F}^{N} ) + n_{N,F}. \label{eq:yNF-DNN} \end{align} Finally at UF, we use three DNN demapping modules ($\textcircled{\scriptsize 7}$RxPreSF, $\textcircled{\scriptsize 8}$RxPreNF, and $\textcircled{\scriptsize 9}$RxF) to represent the demapping in \eqref{eq:DetF} as $g_{F}^{\prime} :\{ g_{F,7}^{\prime}, g_{F,8}^{\prime}, g_{F,9}^{\prime} \} $. Note that $g_{F,7}^{\prime}$, $g_{F,8}^{\prime}$, and $g_{F,9}^{\prime}$ are for $\textcircled{\scriptsize 7}$RxPreSF, $\textcircled{\scriptsize 8}$RxPreNF, and $\textcircled{\scriptsize 9}$RxF, respectively. 
The received $y_{S,F}$ and $y_{N,F}$ are equalized as $\frac{h_{S,F}^*y_{S,F}}{|h_{S,F}|^2}$ and $\frac{h_{N,F}^*y_{N,F}}{|h_{N,F}|^2}$, processed by the parallel $\textcircled{\scriptsize 7}$RxPreSF and $\textcircled{\scriptsize 8}$RxPreNF, respectively, and then fed into $\textcircled{\scriptsize 9}$RxF to obtain $\hat{\bm{s} }_{F}$. This process can be described as \begin{align} \hat{\bm{s}}_{F} = g_{F}^{\prime} (y_{S,F}, y_{N,F}) \in [ 0,1 ]^{k_{F}} . \label{gF-DNN} \end{align} Note that the soft probability output $\hat{\bm{s}}_{F}$ can serve as the input of a soft channel decoder, which will be explained in Section \ref{sec:c-coding}. Integrating \eqref{eq:ySFN-DNN}-\eqref{gF-DNN}, the end-to-end demapping process at UF can be described as \begin{align} \underbrace{\hat{\bm{s}}_{F} = g_{F}^{\prime}}_{\eqref{gF-DNN}} \big( & \underbrace{\mathcal{C}_{S,F} \circ f_{S}^{\prime} (\bm{s}_{N}, \bm{s}_{F})}_{\eqref{eq:ySFN-DNN} \text{ with } J=F} , \underbrace{ \mathcal{C}_{N,F} \circ f_{N}^{\prime}}_{\eqref{eq:yNF-DNN}} \circ \underbrace{g_{N}^{\prime} \circ \mathcal{C}_{S,N} \circ f_{S}^{\prime} (\bm{s}_{N}, \bm{s}_{F})}_{\eqref{eq:det-1}} \big), \label{eq:det-2} \end{align} where $\mathcal{C}_{S,F} \triangleq \mathcal{C}_{S,F}(y_{S,F}\vert x_{S},h_{S,F})$ and $\mathcal{C}_{N,F} \triangleq \mathcal{C}_{N,F} (y_{N,F}\vert \hat{x}_{F}^{N},h_{N,F})$ denote the channel functions from the BS and UN to UF, respectively. We refer to \eqref{eq:det-2} as the second demapping phase. 
\begin{figure} [!t] \centering \subfigure[For $ \text{Tx} \in \{ \textcircled{\scriptsize 1}\text{TxS-N}, \textcircled{\scriptsize 2}\text{TxS-F}, \textcircled{\scriptsize 6}\text{TxN} \}$ and $ \text{Rx} \in \{ \textcircled{\scriptsize 4}\text{RxN-N}, \textcircled{\scriptsize 5}\text{RxN-F}, \textcircled{\scriptsize 9}\text{RxF} \}$]{ \label{fig:TxRx} \includegraphics[width=0.48\textwidth]{TxRx.pdf}} \subfigure[For $ \text{RxPre} \in \{ \textcircled{\scriptsize 3}\text{RxPreSN}, \textcircled{\scriptsize 7}\text{RxPreSF}, \textcircled{\scriptsize 8}\text{RxPreNF} \}$]{ \label{fig:RxPre} \includegraphics[width=0.48\textwidth]{RxPre.pdf}} \caption{Block diagram of the layer structure for the DNN modules.} \label{fig:TR} \end{figure} Having presented the overall picture of the proposed DNN architecture, next we scrutinize the layer structure of each individual module. Fig.~\ref{fig:TxRx} shows the modules $ \text{Tx} \in \{ \textcircled{\scriptsize 1}\text{TxS-N}, \textcircled{\scriptsize 2}\text{TxS-F}, \textcircled{\scriptsize 6}\text{TxN} \}$ and $ \text{Rx} \in \{ \textcircled{\scriptsize 4}\text{RxN-N}, \textcircled{\scriptsize 5}\text{RxN-F}, \textcircled{\scriptsize 9}\text{RxF} \}$, which share a common structure with multiple cascaded DNN layers. Fig.~\ref{fig:RxPre} shows the modules $ \text{RxPre} \in \{ \textcircled{\scriptsize 3}\text{RxPreSN}, \textcircled{\scriptsize 7}\text{RxPreSF}, \textcircled{\scriptsize 8}\text{RxPreNF} \}$, which share a common structure with an element-wise multiplication operation at the output layer. The main purpose of the multiplication operation is to extract the key feature for signal demapping. For example, $\textcircled{\scriptsize 3}$RxPreSN learns the feature $ |y_{S,N}-h_{S,N}x_{S}|^2 = \vert h_{S,N} x_{S} \vert^2 - 2 \Re\{ h_{S,N}^* x_{S}^* y_{S,N} \} + \vert y_{S,N} \vert^2 $ containing $\vert x_{S} \vert^2$, which is key to signal demapping (cf. \eqref{eq:N-JML}).
The input of $\textcircled{\scriptsize 3}$RxPreSN is $\frac{h_{S,N}^*y_{S,N}}{|h_{S,N}|^2}$. After the multiple cascaded layers learn an estimate of $x_{S}$, e.g., $ax_{S}+b$, the element-wise multiplication operation computes $ \Re \Big\{ \frac{h_{S,N}^*y_{S,N}}{|h_{S,N}|^2} \Big\} \Re \{ ax_{S}+ b \} = \Re \Big\{ x_{S}+ \frac{h_{S,N}^*n_{S,N}}{|h_{S,N}|^2} \Big\} \Re \{ ax_{S}+ b \} $ containing $\Re \{ x_{S} \} ^2$ and $\Im \Big\{ x_{S}+ \frac{h_{S,N}^*n_{S,N}}{|h_{S,N}|^2} \Big\} \Im \{ ax_{S}+ b \} $ containing $\Im \{ x_{S} \} ^2$. Given the above, the DNN based joint optimization problem for the two demapping phases \eqref{eq:det-1} and \eqref{eq:det-2} can now be reformulated as \begin{align} \raisebox{-0.0\normalbaselineskip}[0pt][0pt]{% ({\bf P2})} \quad \min_{ \substack{f_{S}^{\prime}, \ f_{N}^{\prime}, \ g_{N}^{\prime}, \ g_{F}^{\prime} } } & \quad \Big\{ L_{( \bm{s}_{N}, \hat{\bm{s}}_{N} )}(f_{S}^{\prime}, g_{N}^{\prime}) ,\ L_{( \bm{s}_{F}, \hat{\bm{s}}_{F}^{N} )}(f_{S}^{\prime}, g_{N}^{\prime}) , \ L_{( \bm{s}_{F}, \hat{\bm{s}}_{F} )}(f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} ) \Big\}, \notag \end{align} where $L_{( \bm{s}_{N}, \hat{\bm{s}}_{N} )}(f_{S}^{\prime}, g_{N}^{\prime}) \triangleq L_1$ denotes the loss between the input-output pair $(\bm{s}_{N}, \hat{\bm{s}}_{N})$ as a function of $\{ f_{S}^{\prime}, g_{N}^{\prime}\}$, and similar definition follows for $ L_{( \bm{s}_{F}, \hat{\bm{s}}_{F}^{N} )}(f_{S}^{\prime}, g_{N}^{\prime}) \triangleq L_2 $ and $ L_{( \bm{s}_{F}, \hat{\bm{s}}_{F} )}(f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} ) \triangleq L_3 $. These losses measure the demapping errors for their respective input, and they will be mathematically defined in Section \ref{sec:bit-train}. Note that $\{ L_1, L_2 \}$ are associated with \eqref{eq:det-1}, and $ L_3$ associated with \eqref{eq:det-2} is the end-to-end loss for the entire network. 
Clearly, ({\bf P1}) has been translated into ({\bf P2}) in a more tractable form, where highly nonlinear mappings and demappings are learned by training the DNN parameter set $\big \{ f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} \big\}$. This provides a solution to \textbf{Challenge 1}. However, we still need to address \textbf{Challenge 2}, as ({\bf P2}) involves three loss functions. Typically, this is a multi-task learning (MTL) problem \cite{MTL-ruder2017overview}, which is more complex than conventional single-task learning. Moreover, the outputs $\{ \hat{\bm{s} }_{N}, \hat{\bm{s}}_{F}^{N}, \hat{\bm{s}}_{F} \}$ are bit-wise probabilities for each input bit, rather than the widely used symbol-wise probabilities for each input symbol \cite{Sync-dorner2018deep,deepNOMA-ye2020}. Therefore, a bit-wise self-supervised training method needs to be developed and analyzed. We will address the MTL in Section \ref{sec:MTL}, and the bit-wise self-supervised training in Sections \ref{sec:self-train} and \ref{sec:bit-train}. \subsection{The Proposed Two-Stage Training Method} \label{sec:2stage} \subsubsection{Multi-Task Learning} \label{sec:MTL} In this MTL problem, minimizing $\{ L_1, L_2, L_3 \}$ simultaneously may lead to poor error performance. For example, we may arrive at a situation where $L_2$ and $L_3$ are sufficiently small but $L_1$ is still very large. To avoid this, we develop a novel two-stage training method by analyzing the relationship among $\{ L_1, L_2 , L_3\}$. It is clear that $ L_1$ and $L_2$ are related to $\{ f_{S}^{\prime}, g_{N}^{\prime} \}$, while $L_3$ is related to $\{ f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} \}$. As $ \{ f_{S}^{\prime}, g_{N}^{\prime} \} \subset \{ f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} \}$, this implies a causal structure between $\{ L_1, L_2 \}$ and $L_3$. A more rigorous analysis of this relationship is provided in Appendix \ref{app:P3}.
On this basis, ({\bf P2}) can be translated into the following problem \begin{IEEEeqnarray}{rCl} \raisebox{-1.25\normalbaselineskip}[0pt][0pt]{% ({\bf P3}) } \quad \text{Stage I: } \quad & \min_{ f_{S}^{\prime}, \ g_{N}^{\prime} } & \ \big\{ L_1, \ L_2 \big\} \notag \\ \text{Stage II: } \quad & \min_{ f_{N}^{\prime}, \ g_{F}^{\prime} } & \ L_3 \notag \\ & \subto \quad & \ f_{S}^{\prime}, \ g_{N}^{\prime}. \notag \end{IEEEeqnarray} For ({\bf P3}), as shown in Fig.~\ref{fig:AE-NOMA}, in stage I we minimize $L_1$ and $L_2$ through learning $\{ f_{S}^{\prime}, g_{N}^{\prime}\}$ by data training. In stage II, we minimize $L_3$ through learning $\{ f_{N}^{\prime}, g_{F}^{\prime} \}$ while fixing the $\{ f_{S}^{\prime}, g_{N}^{\prime}\}$ obtained in stage I. It is worth noting that stage I is still an MTL problem, but we can minimize $L_1$ and $L_2$ simultaneously since they share the same $\{ f_{S}^{\prime}, g_{N}^{\prime}\}$. \subsubsection{Self-Supervised Training} \label{sec:self-train} For convenience, we express the three loss functions $ L_1$, $L_2$, and $L_3$ in a unified form. On this basis, we elaborate on the self-supervised training method for fading channels. Without loss of generality, we let $k_{N}=k_{F}=k$; the parameters $( k, {\alpha}_{S,N}, {\alpha}_{S,F} )$ are fixed during training. From ({\bf P2}), $ L_1$, $L_2$, and $L_3$ can be written as \begin{align} L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} ) \triangleq & \mathbb{E}_{\bm{s}} \big[ \mathcal{L}( \bm{s}, \hat{\bm{s}} ) \big], \ ( \bm{s}, \hat{\bm{s}} ) \in \big\{ ( \bm{s}_{N}, \hat{\bm{s}}_{N} ), ( \bm{s}_{F}, \hat{\bm{s}}_{F} ), ( \bm{s}_{F}, \hat{\bm{s}}_{F}^{N} ) \big\}, \label{eq:opt-loss-multi} \end{align} where the input bits $\bm{s}$ also serve as the labels, $\hat{\bm{s}}$ denotes the output soft probabilities, and $\mathcal{L}( \bm{s}, \hat{\bm{s}} )$ denotes the adopted loss function, such as the mean squared error or the cross-entropy (CE) \cite[Ch. 5]{book-goodfellow2016deep}.
For $ {f^{\prime}}$ and ${g^{\prime}} $ specifically, we have \begin{align} ( f^{\prime}, g^{\prime}) = \left\{\begin{array}{lc} ( f_{S}^{\prime}, g_{N}^{\prime}), & \text{for } ( \bm{s}, \hat{\bm{s}} ) \in \big\{ ( \bm{s}_{N}, \hat{\bm{s}}_{N} ), ( \bm{s}_{F}, \hat{\bm{s}}_{F}^{N} ) \big\}, \\ \big( \{ f_{S}^{\prime}, f_{N}^{\prime} \}, \{ g_{N}^{\prime} , g_{F}^{\prime} \} \big), & \text{for } ( \bm{s}, \hat{\bm{s}} ) = ( \bm{s}_{F}, \hat{\bm{s}}_{F} ). \end{array} \right. \end{align} For a random batch of training examples $\{ ( \bm{s}^{b}, \hat{\bm{s}}^{b})\}_{b=1}^{B}$ of size $B$, the loss in \eqref{eq:opt-loss-multi} can be estimated through sampling as \begin{align} L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} ) = \frac{1}{B} \sum_{b=1}^{B} \mathcal{L}( \bm{s}^{b}, \hat{\bm{s}}^{b}) . \label{eq:L-sample} \end{align} We use the stochastic gradient descent (SGD) algorithm to update the DNN parameter set $\{ f^{\prime}, g^{\prime} \}$ through backpropagation \cite[Ch. 6.5]{book-goodfellow2016deep} as \begin{align} \label{eq:sgd-update} \{ f^{\prime}, g^{\prime} \}^{(t)} = \{ f^{\prime}, g^{\prime} \}^{(t-1)} - \tau \nabla L_{( \bm{s}, \hat{\bm{s}} )} \big( \{ f^{\prime}, g^{\prime} \}^{(t-1)} \big) , \end{align} starting with a random initial value $ \{ f^{\prime}, g^{\prime} \}^{(0)}$, where $\tau > 0$, $t$, and $\nabla$ denote the learning rate, iteration index, and gradient operator, respectively. For the specific offline training of ({\bf P3}), following the proposed two-stage training method, the DNN parameter set $\{ f_{S}^{\prime}, f_{N}^{\prime}, g_{N}^{\prime}, g_{F}^{\prime} \}$ is first learned under AWGN channels ($\bm{h} = [ h_{S,N}, h_{S,F}, h_{N,F} ]^T = [3,1,3]^T$) to combat the noise.
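The sampled-loss estimate \eqref{eq:L-sample} and the update rule \eqref{eq:sgd-update} can be illustrated on a deliberately tiny surrogate problem: a single scalar parameter with a squared-error loss, with the DNN parameter set and backpropagation abstracted away.

```python
import numpy as np

rng = np.random.default_rng(3)

# Surrogate problem: one scalar parameter theta with squared-error loss,
# standing in for the DNN parameter set {f', g'}.
targets = rng.standard_normal(10_000)

def batch_loss_and_grad(theta, batch):
    loss = np.mean((theta - batch) ** 2)   # (1/B) * sum_b L(s^b, shat^b)
    grad = np.mean(2.0 * (theta - batch))  # gradient of the sampled loss
    return loss, grad

theta = 5.0   # initial value, playing the role of {f', g'}^(0)
tau = 0.1     # learning rate
for t in range(200):
    batch = rng.choice(targets, size=64)   # random batch, B = 64
    _, g = batch_loss_and_grad(theta, batch)
    theta = theta - tau * g                # theta^(t) = theta^(t-1) - tau * grad
```

The iterate converges to a neighborhood of the minimizer (the mean of `targets`), with residual jitter set by the batch size and $\tau$, which is the behavior exploited when training the DNN modules.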
Then, by fixing $\{ f_{S}^{\prime}, f_{N}^{\prime} \} $, only $\{ g_{N}^{\prime}, g_{F}^{\prime} \} $ are fine-tuned under fading channels ($\bm{h} \sim \mathcal{CN} (0, {\bf \Lambda}) $ with ${\bf \Lambda} = \diag \big( [ \lambda_{S,N}, \lambda_{S,F},\lambda_{N,F} ]^T \big)$) to combat signal fluctuation. Another critical issue is that, in most of the literature \cite{ae-o2017introduction,deepNOMA-ye2020}, $L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} ) $ only represents the symbol-level CE loss with a softmax activation function \cite{book-goodfellow2016deep}, where $\bm{s}$ is represented by a one-hot vector of length $2^k$, i.e., only one element equals one and the others are zero \cite{ae-o2017introduction}. Fundamentally different from \cite{ae-o2017introduction,deepNOMA-ye2020}, $L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} )$ here characterizes the bit-level loss, thereby requiring further analysis. \subsubsection{Bit-Level Loss} \label{sec:bit-train} Because the inputs $\{ \bm{s}_{N}, \bm{s}_{F} \}$ are binary bits, $L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} ) $ minimization is a binary classification problem, where we use the binary cross-entropy (BCE) loss to quantify the demapping error. Accordingly, the sigmoid activation function, i.e., $\phi(z)=\frac{1}{1+e^{-z}}$, is used at the output layers of $\textcircled{\scriptsize 4}$RxN-N, $\textcircled{\scriptsize 5}$RxN-F, and $\textcircled{\scriptsize 9}$RxF to obtain bit-wise soft probabilities $\hat{\bm{s}}_{N}$, $\hat{\bm{s}}_{F}^{N}$, and $\hat{\bm{s}}_{F}$, respectively.
In this case, following \eqref{eq:opt-loss-multi}, the BCE loss function can be written as \begin{align} \mathcal{L}( \bm{s}, \hat{\bm{s}} ) = & \sum_{r=1}^{k} \mathcal{L}( \bm{s}(r), \hat{\bm{s}}(r) ) \notag \\ = & -\sum_{r=1}^{k} \Big( \bm{s}(r) \log \hat{\bm{s}}(r) + (1-\bm{s}(r) ) \log ( 1-\hat{\bm{s}}(r) ) \Big), \notag \\ & ( \bm{s}, \hat{\bm{s}} ) \in \big\{ ( \bm{s}_{N}, \hat{\bm{s}}_{N} ), ( \bm{s}_{F}, \hat{\bm{s}}_{F}^{N} ), ( \bm{s}_{F}, \hat{\bm{s}}_{F} ) \big\}. \label{eq:BCE} \end{align} Equivalently, $\mathcal{L}( \bm{s}, \hat{\bm{s}} )$ can be expressed as \begin{align} \mathcal{L}( \bm{s}, \hat{\bm{s}} ) = & H(p_{ {f^{\prime}}} ( \bm{s} ), \hat{p}_{ {g^{\prime}}} ( \bm{s} )) \notag \\ = & \sum_{r=1}^{k} \mathbb{E}_{\bm{s}(r)} \big[ H( \bm{s}(r), \hat{\bm{s}}(r) ) \big], \label{eq:H-bit} \end{align} where $H(\cdot, \cdot)$ represents the cross-entropy between the parameterized distributions $p_{ {f^{\prime}}} ( \bm{s} )$ and $\hat{p}_{ {g^{\prime}}} ( \bm{s} )$. $p_{ {f^{\prime}}} ( \bm{s} )$ denotes the true distribution of $\bm{s}$ for the transmitter with $f^{\prime}$, while $\hat{p}_{ {g^{\prime}}} ( \bm{s} )$ denotes the estimated distribution of $ \bm{s} $ for the receiver with $g^{\prime}$. We can see from \eqref{eq:H-bit} that the optimization is performed for each individual bit in $\bm{s}$.
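A minimal numerical check of the BCE loss above (a numpy sketch; the logits are hypothetical pre-sigmoid outputs chosen to reproduce the soft probabilities $[0.96, 0.02]$ quoted earlier for $\hat{\bm{s}}_{F}$):

```python
import numpy as np

def sigmoid(z):
    """phi(z) = 1 / (1 + exp(-z)), used at the demapper output layers."""
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(s, s_hat, eps=1e-12):
    """Bit-level BCE, summed over the k bits of one word, matching the
    BCE expression in the text."""
    s_hat = np.clip(s_hat, eps, 1.0 - eps)
    return -np.sum(s * np.log(s_hat) + (1 - s) * np.log(1 - s_hat))

s = np.array([1.0, 0.0])             # true bits s, with k = 2
logits = np.array([3.178, -3.892])   # hypothetical pre-sigmoid outputs
s_hat = sigmoid(logits)              # soft probabilities, approx. [0.96, 0.02]

loss_good = bce_loss(s, s_hat)       # small: probabilities point the right way
loss_bad = bce_loss(s, 1.0 - s_hat)  # large: probabilities flipped
```

The clipping by `eps` mirrors the usual numerical guard against $\log 0$; it does not change the loss for non-degenerate soft outputs.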
Then, during training, $L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} )$ can be computed through averaging over all possible channel outputs $\bm{y} =[ y_{S,N}, y_{S,F}, y_{N,F} ]^T$ according to \begin{align} L_{( \bm{s}, \hat{\bm{s}} )}( {f^{\prime}}, {g^{\prime}} ) = & \sum_{r=1}^{k} \mathbb{E}_{\bm{s}(r) ,\bm{y}} \big[ H( p_{ {f^{\prime}}} ( \bm{s}(r) \vert \bm{y} ), \hat{p}_{ {g^{\prime}}} ( \bm{s}(r) \vert \bm{y} ) )\big] \notag \\ = & H({\bf S}) - \sum_{r=1}^{k} I_{{f^{\prime}} }( {\bf S}(r) ; {\bf Y}) + \sum_{r=1}^{k} \mathbb{E}_{\bm{y}} \big[ D_{\text{KL}}( p_{ {f^{\prime}}} ( \bm{s}(r) |\bm{y} ) \| \hat{p}_{ {g^{\prime}}} ( \bm{s}(r) |\bm{y} ) )\big], \label{eq:BMI-deri-iid} \end{align} where $I( \cdot ; \cdot )$ is the mutual information (MI), and $D_{\text{KL}}( p \| \hat{p})$ is the Kullback-Leibler (KL) divergence between distributions $p$ and $\hat{p}$ \cite{MI-book-cover2012elements}. The first term on the right side of \eqref{eq:BMI-deri-iid} is the entropy of $\bm{s}$, which is a constant. The second term can be viewed as learning $f^{\prime}$ at the transmitter, i.e., $(\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to \mathcal{M}_{S}$ and $\hat{\bm{s}}_{F}^{N} \to \mathcal{M}_{F}^{N} $. The third term measures the difference between the true distribution $p_{ {f^{\prime}}} ( \bm{s}(r) |\bm{y} )$ at the transmitter and the learned distribution $\hat{p}_{ {g^{\prime}}} ( \bm{s}(r) |\bm{y} )$ at the receiver, which corresponds to $y_{S,N} \to (\hat{\bm{s}}_{N}, \hat{\bm{s}}_{F}^{N}) \in (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}})$ and $(y_{S,F}, y_{N,F}) \to \hat{\bm{s}}_{F} \in \{ 0,1 \}^{k_{F}}$. \section{A Theoretical Perspective of the Design Principles} \label{sec:theo-prob} In Section \ref{sec:proposed-CNOMA}, we illustrated the whole picture of the proposed DNN architecture for deep cooperative NOMA. 
In this section, we further analyze the specific probability distribution that each DNN module has learned, through studying the loss functions in \eqref{eq:BMI-deri-iid} for each training stage of ({\bf P3}). \subsection{Training Stage I} In essence, training stage I is MTL over a multiple access channel with inputs $\{ \bm{s}_{N}, \bm{s}_{F}\}$, transceiver $\{ f_{S}^{\prime}, g_{N}^{\prime} \}$, channel function $\mathcal{C}_{S,N}$, and outputs $\{ \hat{\bm{s}}_{N}, \hat{\bm{s}}_{F}^{N}\}$. From information theory \cite{MI-book-cover2012elements}, the corresponding loss functions $L_1$ and $L_2$ for the two tasks can be expressed as \begin{align} L_1 = & H({\bf S}_{N}) - \underbrace{\sum_{r=1}^{k} I_{f_{S}^{\prime} }( {\bf S}_{N}(r) ; Y_{S,N})}_{\text{Conflicting MI}} + \sum_{r=1}^{k} \mathbb{E}_{ y_{S,N} } \bigg[ D_{\text{KL}}( p_{ f_{S}^{\prime}} ( \bm{s}_{N}(r) | y_{S,N} ) \| \hat{p}_{ g_{N}^{\prime}} ( \bm{s}_{N}(r) |y_{S,N} ) )\bigg] \label{eq:LossNN-1} \\ = & H({\bf S}_{N}) - \underbrace{\sum_{r=1}^{k} I_{f_{S}^{\prime} }( {\bf S}_{N}(r) , {\bf S}_{F}(r) ; Y_{S,N} )}_{\text{Common MI}} + \sum_{r=1}^{k} I_{f_{S,2}^{\prime} }( {\bf S}_{F}(r) ; Y_{S,N} | {\bf S}_{N}(r) ) + \sum_{r=1}^{k} \mathbb{E}_{ y_{S,N} } \bigg[ D_{\text{KL}} \notag \\ & \Big( \int_{x_{S} } \underbrace{p_{ f_{S}^{\prime}} ( \bm{s}_{N}(r) | x_{S} )}_{\text{Individual distribution}} \underbrace{p( x_{S}| y_{S,N} )}_{\text{Common distribution}} \mathop{}\!\mathrm{d} x_{S} \Big\| \int_{ \hat{y}_{S,N}} \underbrace{\hat{p}_{ g_{N,4}^{\prime}} ( \bm{s}_{N}(r) | \hat{y}_{S,N} )}_{\text{ Individual module}} \underbrace{\hat{p}_{g_{N,3}^{\prime}} ( \hat{y}_{S,N} | y_{S,N} )}_{\text{ Common module}} \mathop{}\!\mathrm{d} \hat{y}_{S,N} \Big)\bigg] , \label{eq:LossNN} \end{align} where $\hat{y}_{S,N}$ denotes the output signal of $\textcircled{\scriptsize 3}$RxPreSN, and the derivations for \eqref{eq:LossNN-1} and \eqref{eq:LossNN} are given in Appendix \ref{app:A}. 
Similarly, we have \begin{align} L_2 = & H({\bf S}_{F}) - \underbrace{\sum_{r=1}^{k} I_{f_{S}^{\prime} } ( {\bf S}_{N}(r) , {\bf S}_{F}(r) ; Y_{S,N} )}_{\text{Common MI }} + \underbrace{\sum_{r=1}^{k} I_{f_{S,1}^{\prime} }( {\bf S}_{N}(r) ; Y_{S,N} | {\bf S}_{F}(r) )}_{\text{Conflicting MI}} + \sum_{r=1}^{k} \mathbb{E}_{ y_{S,N} } \bigg[ D_{\text{KL}} \notag \\ & \Big( \int_{x_{S} } \underbrace{p_{ f_{S}^{\prime}} ( \bm{s}_{F}(r) | x_{S} )}_{\text{Individual distribution}} \underbrace{p( x_{S}| y_{S,N} )}_{\text{Common distribution}} \mathop{}\!\mathrm{d} x_{S} \Big\| \int_{ \hat{y}_{S,N}} \underbrace{\hat{p}_{ g_{N,5}^{\prime}} ( \bm{s}_{F}(r) | \hat{y}_{S,N} )}_{\text{ Individual module}} \underbrace{\hat{p}_{g_{N,3}^{\prime}} ( \hat{y}_{S,N} | y_{S,N} )}_{\text{ Common module}} \mathop{}\!\mathrm{d} \hat{y}_{S,N} \Big)\bigg] . \label{eq:LossNF} \end{align} Now we analyze the components of $L_1$ and $L_2$ in \eqref{eq:LossNN-1}-\eqref{eq:LossNF}. Specifically, on one hand, \eqref{eq:LossNN} and \eqref{eq:LossNF} share a common MI term $ \sum_{r=1}^{k} I_{f_{S}^{\prime} }( {\bf S}_{N}(r) , {\bf S}_{F}(r) ; Y_{S,N} )$, which corresponds to the learning of $f_{S}^{\prime}$. On the other hand, \eqref{eq:LossNN-1} and \eqref{eq:LossNF} have conflicting MI terms. That is, minimizing \eqref{eq:LossNN-1} leads to maximizing the second term $ \sum_{r=1}^{k} I_{f_{S}^{\prime} }( {\bf S}_{N}(r) ; Y_{S,N}) $, while minimizing \eqref{eq:LossNF} results in minimizing the third term $ \sum_{r=1}^{k} I_{f_{S,1}^{\prime} }( {\bf S}_{N}(r) ; Y_{S,N} | {\bf S}_{F}(r) ) $ with $f_{S,1}^{\prime} \subset f_{S}^{\prime}$. Clearly, these two objectives are contradictory for learning $f_{S}^{\prime}$. Next, let us observe the KL divergence terms in \eqref{eq:LossNN}-\eqref{eq:LossNF} at the receiver side. 
The true distributions in \eqref{eq:LossNN} and \eqref{eq:LossNF} share a common distribution term $p(x_{S} | y_{S,N})$, and individual (but related) distribution terms $p_{ f_{S}^{\prime}} ( \bm{s}_{J}(r) | x_{S} )$, $J \in \{N,F\}$. By exploiting this relationship, we use a common demapping module $\textcircled{\scriptsize 3}$RxPreSN to learn the common distribution $\hat{p}_{g_{N,3}^{\prime}} ( \hat{y}_{S,N} | y_{S,N} ) $ for $p(x_{S} | y_{S,N})$, such that $\hat{y}_{S,N}$ learns to estimate $x_{S}$. Then, two individual demapping modules $\textcircled{\scriptsize 4}$RxN-N and $\textcircled{\scriptsize 5}$RxN-F are used to learn $\hat{p}_{ g_{N,4}^{\prime}} ( \bm{s}_{N}(r) | \hat{y}_{S,N} )$ and $\hat{p}_{ g_{N,5}^{\prime}} ( \bm{s}_{F}(r) | \hat{y}_{S,N} )$ for estimating $p_{ f_{S}^{\prime}} ( \bm{s}_{N}(r) | x_{S} )$ and $p_{ f_{S}^{\prime}} ( \bm{s}_{F}(r) | x_{S} )$, respectively. \subsection{Training Stage II} Training stage II is end-to-end training with fixed $\{ f_{S}^{\prime}, g_{N}^{\prime}\}$ learned from stage I. As such, $L_3$ can be expressed as (c.f. \eqref{eq:BMI-deri-iid}) \begin{align} L_3 = & H({\bf S}_{F}) - \sum_{r=1}^{k} I_{f_{N}^{\prime} }( {\bf S}_{F}(r) ; Y_{S,F}, Y_{N,F} ) + \sum_{r=1}^{k} \mathbb{E}_{ y_{S,F},y_{N,F} } \bigg[ D_{\text{KL}}( p_{ f_{N}^{\prime} } ( \bm{s}_{F}(r) | y_{S,F},y_{N,F} ) \| \notag \\ & \hat{p}_{ g_{F}^{\prime}} ( \bm{s}_{F}(r) | y_{S,F},y_{N,F} ) )\bigg] . \label{eq:LossFF} \end{align} Minimizing $L_3$ results in maximizing the second term $\sum_{r=1}^{k} I_{f_{N}^{\prime} }( {\bf S}_{F}(r) ; Y_{S,F}, Y_{N,F} )$, corresponding to optimizing $f_{N}^{\prime}$. 
By probability factorization, the true distribution in the third term in \eqref{eq:LossFF} can be expressed as \begin{align} p_{ f_{N}^{\prime} } ( \bm{s}_{F}(r) | y_{S,F},y_{N,F} ) = & \int_{x_{S}} \int_{\hat{\bm{s}}_{F}^{N}} p( \bm{s}_{F}(r) | x_{S}, \hat{\bm{s}}_{F}^{N}, y_{S,F},y_{N,F}) p ( x_{S} | y_{S,F} ) p_{ f_{N}^{\prime} } ( \hat{\bm{s}}_{F}^{N} | y_{N,F} ) \mathop{}\!\mathrm{d} \hat{\bm{s}}_{F}^{N} \mathop{}\!\mathrm{d} x_{S} \notag \\ = & \int_{x_{S}} \int_{\hat{\bm{s}}_{F}^{N}} \underbrace{p( \bm{s}_{F}(r) | x_{S}, \hat{\bm{s}}_{F}^{N})}_{\text{Learned by {\textcircled{\tiny 9}}}} \underbrace{p ( x_{S} | y_{S,F} )}_{\text{Learned by {\textcircled{\tiny 7}}}} \underbrace{p_{ f_{N}^{\prime} } ( \hat{\bm{s}}_{F}^{N} | y_{N,F} )}_{\text{Learned by {\textcircled{\tiny 8}}}} \mathop{}\!\mathrm{d} \hat{\bm{s}}_{F}^{N} \mathop{}\!\mathrm{d} x_{S} \label{eq:pTx-st2} , \end{align} where $p( \bm{s}_{F}(r) | x_{S}, \hat{\bm{s}}_{F}^{N}) p ( x_{S} | y_{S,F} ) $ is determined through the stage I training. To exploit such factorization, we introduce auxiliary variables $\hat{y}_{S,F}$ and $\hat{y}_{N,F}$ to estimate $ x_{S}$ and $ \hat{\bm{s}}_{F}^{N} $, respectively, and express the distribution $\hat{p}_{ {g^{\prime}}} ( \bm{s}_{F}(r) | y_{S,F},y_{N,F} )$ in \eqref{eq:LossFF} as \begin{align} \hat{p}_{ {g^{\prime}}} ( \bm{s}_{F}(r) | y_{S,F},y_{N,F} ) = & \int_{ \hat{y}_{S,F} } \int_{ \hat{y}_{N,F} } \hat{p}_{g_{F,9}^{\prime}} ( \bm{s}_{F}(r) | \hat{y}_{S,F},\hat{y}_{N,F} ) \hat{p}_{g_{F,7}^{\prime}} ( \hat{y}_{S,F}| y_{S,F} ) \hat{p}_{g_{F,8}^{\prime}} ( \hat{y}_{N,F}| y_{N,F} ) \notag \\ & \mathop{}\!\mathrm{d} \hat{y}_{N,F} \mathop{}\!\mathrm{d} \hat{y}_{S,F}, \label{eq:pRx-st2} \end{align} where $\hat{y}_{S,F}$ and $\hat{y}_{N,F}$ denote the outputs of demapping modules $\textcircled{\scriptsize 7}$RxPreSF and $\textcircled{\scriptsize 8}$RxPreNF, respectively. 
Correspondingly, $\hat{p}_{g_{F,7}^{\prime}} ( \hat{y}_{S,F}| y_{S,F} )$ and $ \hat{p}_{g_{F,8}^{\prime}} ( \hat{y}_{N,F}| y_{N,F} )$ describe the learned distributions for these two modules. It can be observed that $\hat{p}_{g_{F,7}^{\prime}} ( \hat{y}_{S,F}| y_{S,F} )$ and $ \hat{p}_{g_{F,8}^{\prime}} ( \hat{y}_{N,F}| y_{N,F} )$ can estimate the true distributions $p ( x_{S} | y_{S,F} ) $ and $p_{ f_{N}^{\prime} } ( \hat{\bm{s}}_{F}^{N} | y_{N,F} ) $, respectively. \begin{table}[!t] \centering \captionsetup{font={small}} \caption{Learned distributions by the DNN demapping modules and the corresponding true ones } \label{table:function-NN} \centering \scalebox{1}{ \begin{tabular}{lll} \toprule Demapping Module & Learned Distribution & True Distribution \\ \midrule $\textcircled{\scriptsize 3}$RxPreSN & $\hat{p}_{g_{N,3}^{\prime}} ( \hat{y}_{S,N} | y_{S,N} ) $ & $p( x_{S}| y_{S,N} )$ \\ $\textcircled{\scriptsize 4}$RxN-N & $\hat{p}_{ g_{N,4}^{\prime}} ( \bm{s}_{N} | \hat{y}_{S,N} )$ & $p_{ f_{S}^{\prime}} ( \bm{s}_{N} | x_{S} )$ \\ $\textcircled{\scriptsize 5}$RxN-F & $\hat{p}_{ g_{N,5}^{\prime}} ( \bm{s}_{F} | \hat{y}_{S,N} )$ & $p_{ f_{S}^{\prime}} ( \bm{s}_{F} | x_{S} )$ \\ $\textcircled{\scriptsize 7}$RxPreSF & $\hat{p}_{g_{F,7}^{\prime}} ( \hat{y}_{S,F}| y_{S,F} )$ & $p( x_{S}| y_{S,F} )$ \\ $\textcircled{\scriptsize 8}$RxPreNF & $\hat{p}_{g_{F,8}^{\prime}} ( \hat{y}_{N,F}| y_{N,F} )$ & $p_{ f_{N}^{\prime} } ( \hat{\bm{s}}_{F}^{N} | y_{N,F} )$ \\ $\textcircled{\scriptsize 9}$RxF & $\hat{p}_{g_{F,9}^{\prime}} ( \bm{s}_{F} | \hat{y}_{S,F},\hat{y}_{N,F} )$ & $p( \bm{s}_{F} | x_{S}, \hat{\bm{s}}_{F}^{N})$ \\ \bottomrule \end{tabular} } \end{table} Table \ref{table:function-NN} summarizes the distributions that the DNN demapping modules have learned. In Section \ref{sec:simu}, we will show that the learned distribution is consistent with the true one. \section{Model Adaptation} In this section, we adapt the proposed DNN scheme to suit more practical scenarios. 
We first address the PA mismatch between training and inference. Then, we investigate the incorporation of the widely adopted channel coding into our proposed scheme. In both scenarios, our adaptation enjoys the benefit of reusing the original trained DNN modules without carrying out a new training process. \subsection{Adaptation to Power Allocation} \label{sec:pa} In Section \ref{sec:proposed-CNOMA}, the PA coefficients $( {\alpha}_{S,N}, {\alpha}_{S,F} ) $ at the BS are fixed during the training process. However, their values might change during the inference process due to the nonlinear behaviors of the power amplifier in different power regions \cite{PA-popovic2017amping,PA-sun2019behavioral}, resulting in the mismatch between the two processes. Denote the new PA coefficient for inference as $\hat{\alpha}_{S,N}$ for UN, and $\hat{\alpha}_{S,F}$ for UF. As a solution, we propose to scale the received signals for $g_N^{\prime}$ and $g_F^{\prime}$. The goal is to ensure that their input signal-to-interference-plus-noise ratios (SINRs) are equal to those during the inference process, i.e., $\frac{ \hat{\alpha}_{S,N} |h_{S,N}|^2 }{ \hat{\alpha}_{S,F} |h_{S,N}|^2 + 2 \sigma_{S,N}^2 }$ for $\bm{s}_{N}$ demapping by $g_N^{\prime}$, $\frac{ \hat{\alpha}_{S,F} |h_{S,N}|^2 }{ \hat{\alpha}_{S,N} |h_{S,N}|^2 + 2 \sigma_{S,N}^2 }$ for $\bm{s}_{F}$ demapping by $g_N^{\prime}$, and $\frac{ \hat{\alpha}_{S,F} |h_{S,F}|^2 }{ \hat{\alpha}_{S,N} |h_{S,F}|^2 + 2 \sigma_{S,F}^2 }$ for $\bm{s}_{F}$ demapping by $g_{F,7}^{\prime} \subset g_F^{\prime}$. 
In this case, their new expressions are given by \begin{align} \hat{\bm{s} }_{N} = & g_{N}^{\prime} \bigg( \frac{1}{\omega_{N}} y_{S,N} \bigg) , \label{eq:pa-N} \\ \hat{\bm{s} }_{F}^{N} = & g_{N}^{\prime} \bigg( \frac{1}{\omega_{F}} y_{S,N} \bigg) , \label{eq:pa-NF} \\ \hat{\bm{s}}_{F} = & g_{F}^{\prime} \bigg( \frac{1}{\omega_{F}} y_{S,F}, \ y_{N,F} \bigg) , \label{eq:pa-F} \end{align} where the scaling factors are defined as \begin{align} \omega_{N} = \sqrt{\frac{\hat{\alpha}_{S,N}}{{\alpha}_{S,N}}}, \ \omega_{F} = \sqrt{\frac{\hat{\alpha}_{S,F}}{{\alpha}_{S,F}}} . \end{align} Note that in \eqref{eq:pa-N} and \eqref{eq:pa-NF}, given two different inputs, $g_{N}^{\prime} (\cdot)$ is used twice to obtain $\hat{\bm{s} }_{N}$ and $\hat{\bm{s} }_{F}^{N}$, respectively. We prove in Appendix \ref{app:pa} that the SINR is exactly $\frac{ \hat{\alpha}_{S,N} |h_{S,N}|^2 }{ \hat{\alpha}_{S,F} |h_{S,N}|^2 + 2 \sigma_{S,N}^2 }$ for $\frac{1}{\omega_{N}} y_{S,N}$ in \eqref{eq:pa-N}, $\frac{ \hat{\alpha}_{S,F} |h_{S,N}|^2 }{ \hat{\alpha}_{S,N} |h_{S,N}|^2 + 2 \sigma_{S,N}^2 }$ for $ \frac{1}{\omega_{F}} y_{S,N}$ in \eqref{eq:pa-NF}, and $\frac{ \hat{\alpha}_{S,F} |h_{S,F}|^2 }{ \hat{\alpha}_{S,N} |h_{S,F}|^2 + 2 \sigma_{S,F}^2 }$ for $\frac{1}{\omega_{F}} y_{S,F}$ in \eqref{eq:pa-F}. \subsection{Incorporation of Channel Coding} \label{sec:c-coding} Channel coding has been widely adopted to improve the communication reliability \cite{book-clark2013error}. However, the conventional DNN based symbol-wise demapping \cite{ae-o2017introduction,deepNOMA-ye2020} cannot be directly connected to a soft channel decoder \cite{AE-bit-alberge2018deep,AE-bit-cammerer2020trainable}, such as the soft low-density parity-check code (LDPC) decoder \cite{LDPC-NOMA-pan2018sic} and polar code decoder \cite{pc-zheng2020threshold}. By contrast, our proposed scheme in Section \ref{sec:proposed-CNOMA} outputs bit-wise soft information (c.f. 
\eqref{eq:det-1}, \eqref{eq:det-2}), enabling the straightforward cascade of a soft channel decoder. Specifically, denote the information bit blocks for UN and UF as $\bm{c}_{N}$ and $\bm{c}_{F}$, respectively. They are encoded as binary codewords $ \langle \bm{s}_{N} \rangle = \mathcal{E} (\bm{c}_{N})$ and $ \langle \bm{s}_{F} \rangle = \mathcal{E} ( \bm{c}_{F} )$ by channel encoder $\mathcal{E}( \cdot )$, and then split into multiple transmitted bit blocks (i.e., $\bm{s}_{N}$ and $\bm{s}_{F}$), which are sent into $f_{S}^{\prime}$. At the receiver, the log-likelihood ratios (LLRs) of bits in $\bm{s}$ are calculated as \begin{align} \text{LLR}(\bm{s}(r)) = \log \Big( \frac{1-\hat{\bm{s}}(r)}{\hat{\bm{s}}(r)} \Big) , \ r \in \{1,2,\cdots,k\}, \end{align} where we interpret $\hat{\bm{s}}(r)$ as the soft probability for bit $\bm{s}(r)$ with $\hat{\bm{s}}(r) = \Pr \{ \bm{s}(r)=1 | \hat{\bm{s}} \}$ \cite{intro-8849796}. The LLRs serve as the input of the soft channel decoder, denoted as $\mathcal{D} ( \cdot )$. At UN, we assume that it decodes its own information $\bm{c}_{N}$ as $\hat{\bm{c}}_{N} = \mathcal{D} ( \text{LLR} (\langle \hat{\bm{s}}_{N} \rangle ) ) $, but still performs $ \hat{x}_{F}^{N} = f_{N}^{\prime} (\hat{\bm{s}}_{F}^{N})$ as in the uncoded case without decoding $\bm{c}_{F}$ (called demapping-and-forward). These two operations are separable because we use two parallel DNNs, i.e., $\textcircled{\scriptsize 4}$RxN-N and $\textcircled{\scriptsize 5}$RxN-F, to obtain $\hat{\bm{s}}_{N}$ and $\hat{\bm{s}}_{F}^{N}$, respectively. Note that this parallel demapping can also reduce the error propagation compared to SIC. At UF, it decodes $\bm{c}_{F}$ as $\hat{\bm{c}}_{F} = \mathcal{D} ( \text{LLR} (\langle \hat{\bm{s}}_{F} \rangle ) ) $. By contrast, the conventional SIC and JML decoding schemes need to decode $\hat{\bm{s}}_{N}$ and $\hat{\bm{s}}_{F}^{N}$ jointly. 
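The bit-wise LLR computation above is straightforward to implement. A minimal numeric sketch (the clipping constant is our addition to guard against saturated DNN outputs; everything else follows the formula above):

```python
import numpy as np

def bit_llrs(p_one, eps=1e-12):
    """LLR(s(r)) = log((1 - p) / p) with p = Pr{s(r) = 1 | s_hat}.

    Positive LLRs favour bit 0 and negative LLRs favour bit 1; the
    eps-clipping (our addition) avoids log(0) when a DNN output
    saturates at exactly 0 or 1.
    """
    p = np.clip(np.asarray(p_one, dtype=float), eps, 1.0 - eps)
    return np.log((1.0 - p) / p)

# Soft bit probabilities from the demapper, one entry per bit position.
llrs = bit_llrs([0.9, 0.1, 0.5])
# llrs[0] < 0 (bit likely 1), llrs[1] > 0 (bit likely 0), llrs[2] == 0.
```

These LLRs can be fed directly to a soft LDPC or polar decoder; in the uncoded case they are simply thresholded at zero.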
\section{Simulation Results} \label{sec:simu} In this section, we perform simulation to verify the superiority of the proposed deep cooperative NOMA scheme, and compare it with OMA and the conventional cooperative NOMA scheme. In OMA, the BS transmits $x_{N}$ and $x_{F}$ to UN and UF, respectively, in two consecutive time slots, and there is no cooperation between UN and UF. Default parameters for simulation are: $k=2$ ($M_{N}=M_{F}=4$) and $\sigma_{S,F} = \sigma_{S,N} = \sigma_{N,F} = \sigma$, $\lambda_{S,F}=1$, $\lambda_{S,N}=\lambda_{N,F}$ for the three links. We consider six scenarios (S1-S6), and their parameters are summarized in Table \ref{table:scenario-setup}, where ``cooperative link" refers to the BS to UN to UF link. Note that for S1-S4, we have $\big( \hat{\alpha}_{S,N}, \hat{\alpha}_{S,F} \big) = ( {\alpha}_{S,N}, {\alpha}_{S,F} ) $. \begin{table}[!ht] \centering \captionsetup{font={small}} \caption{ Parameters for scenarios S1-S6} \label{table:scenario-setup} \centering \scalebox{1}{ \begin{tabular}{llll} \toprule Scenario & $\lambda_{S,N}$ & $( {\alpha}_{S,N}, {\alpha}_{S,F} )$ & Explanation \\ \midrule S1 & 10 & $( 0.4, 0.6 ) $ & Balanced PA \\ S2 & 10 & $( 0.25, 0.75 )$ & Optimized PA \\ S3 & 6 & $( 0.25, 0.75 )$ & Weaker cooperative link \\ S4 & 6 & $( 0.1, 0.9 )$ & Unbalanced PA \\ S5 & 10 & $( 0.25, 0.75 )$ & \tabincell{l}{ PA mismatch: $\big( \hat{\alpha}_{S,N}, \hat{\alpha}_{S,F} \big) = ( 0.3, 0.7 ) $ } \\ S6 & 10 & $( 0.25, 0.75 )$ & \tabincell{l}{ PA mismatch: $\big( \hat{\alpha}_{S,N}, \hat{\alpha}_{S,F} \big) = ( 0.2, 0.8 ) $} \\ \bottomrule \end{tabular} } \end{table} For the specific layer structure of each DNN module in Fig.~\ref{fig:AE-NOMA}, all three transmitters ($\textcircled{\scriptsize 1}$, $\textcircled{\scriptsize 2}$ and $\textcircled{\scriptsize 6}$) have the same layer structure, with an input layer (dimension of $k_{N}$ or $k_{F}$) followed by $4$ hidden layers with $16$, $8$, $4$, and $2$ neurons, respectively. 
Modules $\textcircled{\scriptsize 3}$, $\textcircled{\scriptsize 7}$ and $\textcircled{\scriptsize 8}$ also share a common layer structure, with three hidden layers of dimensions $64$, $32$ and $2$, respectively. Modules $\textcircled{\scriptsize 4}$, $\textcircled{\scriptsize 5}$, and $\textcircled{\scriptsize 9}$ have three hidden layers of dimensions $128$, $64$ and $32$, respectively, with output of dimension $k_N$ or $k_F$. We adopt tanh as the activation function for the hidden layers \cite{intro-8755300}. We use Keras with the TensorFlow backend to implement the proposed DNN architecture, which is first trained under AWGN channels at SNR$=5$~dB; then $\{ g_{N}^{\prime}, g_{F}^{\prime} \} $ are fine-tuned under Rayleigh fading channels (cf. Section \ref{sec:self-train}) at a list of SNR values in $[ 15, 5, 6, 7, 30 ]$~dB to achieve a favorable error performance in both the low and high SNR regions. The learning rate is $\tau = 0.001$ for AWGN channels and $0.01$ for Rayleigh fading channels. After training, we test the DNN scheme for various SNRs, including those beyond the trained SNRs. In the uncoded case, the demapping rule for bit $\bm{s}(r)$ is $\text{LLR}(\bm{s}(r)) = \log \Big( \frac{1-\hat{\bm{s}}(r)}{\hat{\bm{s}}(r)} \Big) \ \overset{\bm{s}(r)=0}{\underset{\bm{s}(r)=1}{\gtrless}} \ 0$.
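The module dimensions above can be sketched as plain tanh MLPs. The weights below are random placeholders (the paper trains them in Keras/TensorFlow), and the demapper input widths are our assumption; only the hidden widths and activations follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    """Random-weight MLP with the given layer widths (placeholder weights)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for w, b in layers:
        x = np.tanh(x @ w + b)  # tanh activation, as in the paper
    return x

k = 2                                    # k_N = k_F = 2 bits per block
tx = make_mlp([k, 16, 8, 4, 2])          # transmitter modules 1, 2, 6
rx_pre = make_mlp([2, 64, 32, 2])        # demapping modules 3, 7, 8
rx_out = make_mlp([2, 128, 64, 32, k])   # demapping modules 4, 5, 9

# Map one bit block to a 2-dim channel symbol (in-phase + quadrature).
sym = forward(tx, np.array([1.0, 0.0]))
```

Here the final 2-dim layer of each module is treated as its output layer, which is a simplifying assumption; a trained demapper would additionally squash its output to $[0,1]$ to yield soft bit probabilities.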
\subsection{Network Losses $L_1$, $L_2$, and $L_3$ during Testing} \begin{figure}[!t] \centering \subfigure[For S1]{\label{fig:loss-S1} \includegraphics[width=0.47\textwidth]{loss-S1-z-broad} } \subfigure[For S2]{\label{fig:loss-S2} \includegraphics[width=0.47\textwidth]{loss-S2-z-broad} } \subfigure[For S3]{\label{fig:loss-S3} \includegraphics[width=0.47\textwidth]{loss-S3-z-broad} } \subfigure[For S4]{\label{fig:loss-S4} \includegraphics[width=0.47\textwidth]{loss-S4-z-broad} } \caption{Network losses $L_1$, $L_2$, $L_3$, and the average loss $\sum_{t=1}^3 L_t/3$ for different channel scenarios.} \label{fig:loss} \end{figure} Upon obtaining the proposed DNN through training, in Fig.~\ref{fig:loss}, we check whether all the losses $L_1$, $L_2$, and $L_3$ can be significantly reduced by our proposed two-stage training method in Section \ref{sec:2stage}. For each SNR value, $8\times 10^5$ data bits are randomly generated for each user, divided into $B_t = 4\times 10^5$ data blocks with $k=2$ bits per block, and then sent into the DNN. We calculate $L_1$, $L_2$, and $L_3$ according to \eqref{eq:L-sample}, as well as the average loss $\sum_{t=1}^3 L_t/3$. We can see that for all scenarios in Fig.~\ref{fig:loss}, as SNR increases, $L_1$, $L_2$ and $L_3$ each asymptotically decreases to a small value, e.g., $0.13$ for $L_2$ in Fig.~\ref{fig:loss-S1}. The only exception is that $L_1$ in S4 (Fig.~\ref{fig:loss-S4}) asymptotically decreases to $0.25$, because of the relatively small PA coefficient ${\alpha}_{S,N} = 0.1$. Besides, $L_1$, $L_2$, and $L_3$ are all close to the average loss $\sum_{t=1}^3 L_t/3$ within $0.14$. These results indicate that the proposed two-stage training can significantly reduce $L_1$, $L_2$, and $L_3$, and provide a solution to the original MTL problem ({\bf P2}). 
\subsection{Learned Mappings by DNN Mapping Modules} \begin{figure} [!t] \centering \subfigure[For $\bm{s}_{N} \in \mathcal{M}_{N}$, $\bm{s}_{F} \in \mathcal{M}_{F}$, and $\hat{\bm{s}}_{F}^{N} \in \mathcal{M}_{F}^{N}$, respectively]{ \label{fig:cons-AE-x} \includegraphics[width=0.48\textwidth]{cons-AE-M4-x}} \subfigure[For $\bm{s}_{S} \in \mathcal{M}_{S}$ (composite constellation)]{ \label{fig:cons-AE-xS} \includegraphics[width=0.48\textwidth]{cons-AE-M4-cp-v2}} \caption{Learned constellations by $f_{S}^{\prime}$ and $f_{N}^{\prime}$ with bit mapping for $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$.} \label{fig:cons-AE-M4-s2} \end{figure} \begin{figure}[!t] \centering \subfigure[For $\bm{s}_{N} \in \mathcal{M}_{N}$]{ \label{fig:cons-AE-M4-xNb} \includegraphics[width=0.6\textwidth]{cons-AE-M4-xN}} \subfigure[For $\bm{s}_{S} \in \mathcal{M}_{S}$ (composite constellation)]{ \label{fig:cons-AE-M4-cp} \includegraphics[width=0.6\textwidth]{cons-AE-M4-xS}} \caption{Learned constellations by $f_{S}^{\prime}$ for the individual bit positions, where $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$, and the red and blue markers denote bit $0$ and $1$, respectively. } \label{fig:cons-AE-M4} \end{figure} As discussed in Section \ref{sec:AE-CoopNOMA}, the proposed DNN can learn mappings $\ (\{ 0,1 \}^{k_{N}}, \{ 0,1 \}^{k_{F}}) \to \mathcal{M}_{S}$ and $\hat{\bm{s}}_{F}^{N} \to \mathcal{M}_{F}^{N} $ automatically, resulting in a new constellation and bit mapping. Fig.~\ref{fig:cons-AE-M4-s2} presents the learned constellations by $f_{S}^{\prime}$ and $f_{N}^{\prime}$ with bit mapping for $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$. 
Fig.~\ref{fig:cons-AE-x} shows the individual constellations $\bm{s}_{N} \in \mathcal{M}_{N}$, $\bm{s}_{F} \in \mathcal{M}_{F}$, and $\hat{\bm{s}}_{F}^{N} \in \mathcal{M}_{F}^{N}$, and it can be seen that $\mathcal{M}_{N}$, $\mathcal{M}_{F}$, and $\mathcal{M}_{F}^{N}$ all have learned parallelogram-like shapes with different orientations and aspect ratios. Fig.~\ref{fig:cons-AE-xS} shows the composite constellation $\mathcal{M}_{S}$, where the minimum Euclidean distance is improved significantly compared with that in Fig.~\ref{fig:cons-xS}, i.e., from $0.2$ to $0.36$. In Section \ref{sec:proposed-CNOMA}, we use the bit-wise binary classification method to achieve the demappings $g_{N}^{\prime}$ and $g_{F}^{\prime}$. In Fig.~\ref{fig:cons-AE-M4}, we demonstrate that the two classes (bit $0$ and $1$) are separable by presenting the location of each individual bit. Specifically, the constellations $\bm{s}_{N} \in \mathcal{M}_{N}$ and $\bm{s}_{S} \in \mathcal{M}_{S}$ in Fig. 6 are presented here in a different form in Figs.~\ref{fig:cons-AE-M4-xNb} and \ref{fig:cons-AE-M4-cp}, respectively. It is clearly shown that these two classes (bit $0$ and $1$) are easily separable for all bit positions. This indicates that the demapping can be achieved. 
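The conventional side of the minimum-distance comparison can be reproduced in a few lines. Superimposing two unit-energy QPSK constellations (our assumption for the conventional scheme with $M_N = M_F = 4$) with $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$ gives a composite minimum Euclidean distance of $2(\sqrt{0.3}-\sqrt{0.2}) \approx 0.2$, matching the value quoted above:

```python
import numpy as np
from itertools import combinations

def min_distance(points):
    """Minimum pairwise Euclidean distance of a complex constellation."""
    return min(abs(a - b) for a, b in combinations(points, 2))

# Unit-energy QPSK for each user (assumed baseline for the conventional scheme).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
a_n, a_f = 0.4, 0.6

# Composite constellation: all 16 superpositions of the two scaled QPSK sets.
composite = (np.sqrt(a_n) * qpsk[:, None] + np.sqrt(a_f) * qpsk[None, :]).ravel()
d_min = min_distance(composite)  # -> about 0.201
```

The learned constellation enlarges this distance to about $0.36$ by jointly shaping both users' symbol sets instead of superimposing fixed ones.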
\subsection{Learned Distributions by DNN Demapping Modules} \label{sec:simu-g} \begin{figure} [!t] \centering \subfigure[$\textcircled{\scriptsize 3}$RxPreSN: $\hat{p}_{g_{N,3}^{\prime}} ( \hat{y}_{S,N} | y_{S,N} ) $ ]{ \label{fig:cons-RxPreSN} \includegraphics[width=0.3\textwidth]{cons_RxPreSN.pdf}} \subfigure[$\textcircled{\scriptsize 7}$RxPreSF: $\hat{p}_{g_{F,7}^{\prime}} ( \hat{y}_{S,F}| y_{S,F} )$ ]{ \label{fig:cons-RxPreSF} \includegraphics[width=0.3\textwidth]{cons_RxPreSF.pdf}} \subfigure[$\textcircled{\scriptsize 8}$RxPreNF: $ \hat{p}_{g_{F,8}^{\prime}} ( \hat{y}_{N,F}| y_{N,F} )$]{ \label{fig:cons-RxPreNF} \includegraphics[width=0.3\textwidth]{cons_RxPreNF.pdf}} \subfigure[$p( x_{S}| y_{S,N} ) \propto p(y_{S,N}| x_{S} ) $]{ \label{fig:cons-ySN} \includegraphics[width=0.3\textwidth]{cons_ySN.pdf}} \subfigure[$p( x_{S}| y_{S,F} ) \propto p(y_{S,F}| x_{S} ) $ ]{ \label{fig:cons-ySF} \includegraphics[width=0.3\textwidth]{cons_ySF.pdf}} \subfigure[$ p_{ f_{N}^{\prime} } ( \hat{\bm{s}}_{F}^{N} | y_{N,F} ) \propto p_{ f_{N}^{\prime} } ( y_{N,F} | \hat{\bm{s}}_{F}^{N} ) $]{ \label{fig:cons-yNF} \includegraphics[width=0.3\textwidth]{cons_yNF.pdf}} \caption{Signal clusters corresponding to the learned distributions for $\textcircled{\scriptsize 3}$RxPreSN, $\textcircled{\scriptsize 7}$RxPreSF, and $\textcircled{\scriptsize 8}$RxPreNF (top), and the respective true ones, where $ ( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.4, 0.6 )$, $\bm{h} = [1,1,1]^T$, and SNR$=25$ dB. The x-axis and y-axis denote the in-phase and quadrature parts, respectively.} \label{fig:cons-all} \end{figure} The learned distributions of $\textcircled{\scriptsize 3}$RxPreSN, $\textcircled{\scriptsize 7}$RxPreSF, and $\textcircled{\scriptsize 8}$RxPreNF for demapping and the corresponding true ones are shown in Table \ref{table:function-NN}. 
Here, to verify that $\textcircled{\scriptsize 3}$, $\textcircled{\scriptsize 7}$, and $\textcircled{\scriptsize 8}$ have successfully learned their respective true distributions, we visualize these distributions in Fig.~\ref{fig:cons-all} by sampling, where each colored cluster consists of $200$ signal points. The results for $\textcircled{\scriptsize 3}$, $\textcircled{\scriptsize 7}$, and $\textcircled{\scriptsize 8}$ are shown in Figs.~\ref{fig:cons-RxPreSN}, \ref{fig:cons-RxPreSF}, and \ref{fig:cons-RxPreNF}, respectively, while the corresponding true distributions are shown in Figs.~\ref{fig:cons-ySN}, \ref{fig:cons-ySF}, and \ref{fig:cons-yNF}, respectively. The two figures in the same column have similar cluster shapes, indicating that $\textcircled{\scriptsize 3}$, $\textcircled{\scriptsize 7}$, and $\textcircled{\scriptsize 8}$ have successfully learned the true distributions. Besides, it can be seen that various forms of signal transformations have been learned. For example, Fig.~\ref{fig:cons-RxPreSN} can be regarded as a non-uniformly scaled version of Fig.~\ref{fig:cons-ySN}, Fig.~\ref{fig:cons-RxPreSF} can be regarded as a rotated and scaled version of Fig.~\ref{fig:cons-ySF}, while Fig.~\ref{fig:cons-RxPreNF} can be regarded as a mirrored and scaled version of Fig.~\ref{fig:cons-yNF}. These transformations preserve the original signal structure while introducing more degrees of freedom to facilitate demapping. Similar observations are made in other scenarios. \subsection{Uncoded BER Performance Comparison for S1-S4} Fig. \ref{fig:BER-uncoded} compares the uncoded BER performance of the proposed deep cooperative NOMA, OMA, and the conventional NOMA for $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( \hat{\alpha}_{S,N}, \hat{\alpha}_{S,F} )$, i.e., the PA coefficients for training and inference are the same. We first consider the scenario S1 in Fig.~\ref{fig:BER-comp-M4-S1}.
It is clearly shown that the proposed scheme significantly outperforms the conventional one by $6.25$~dB for both UN and UF, while outperforming the OMA by $1.25$~dB at BER=$10^{-3}$. It can also be seen that the conventional scheme is worse than the OMA scheme in S1, due to the lack of an appropriate PA. \begin{figure}[!t] \centering \subfigure[For S1]{\label{fig:BER-comp-M4-S1} \includegraphics[width=0.47\textwidth]{BER-S1} } \subfigure[For S2]{\label{fig:BER-comp-M4-S2} \includegraphics[width=0.47\textwidth]{BER-S2} } \subfigure[For S3]{\label{fig:BER-comp-M4-S3} \includegraphics[width=0.47\textwidth]{BER-S3} } \subfigure[For S4]{\label{fig:BER-comp-M4-S4} \includegraphics[width=0.47\textwidth]{BER-S4} } \caption{BER performance comparison of the proposed deep cooperative NOMA scheme, OMA, and the conventional NOMA scheme for different channel scenarios.} \label{fig:BER-uncoded} \end{figure} We then compare the BER with optimized PA coefficients $( {\alpha}_{S,N}, {\alpha}_{S,F} ) $, as shown in Fig.~\ref{fig:BER-comp-M4-S2} for S2. We can see that for UF, the proposed scheme outperforms the conventional one when SNR$\geq 12.5$~dB, while outperforming the OMA across the whole SNR range. For example, the performance gap between the proposed scheme and the conventional one (resp. OMA) is around $2.5$~dB (resp. $5$~dB) at BER=$10^{-4}$ (resp. $10^{-3}$). For UN, the proposed scheme has a similar BER performance with the conventional one. Together with Fig.~\ref{fig:BER-comp-M4-S1}, we can see that the proposed scheme is robust to the PA. In Fig.~\ref{fig:BER-comp-M4-S3}, we compare the BER in S3 with channel conditions different from S1 and S2. Likewise, for UF, the proposed scheme outperforms the conventional one for SNR$> 12.5$~dB, e.g., by $3$~dB at BER=$10^{-4}$. It outperforms the OMA across the whole SNR range, e.g., by $3$~dB at BER=$10^{-4}$. 
Fig.~\ref{fig:BER-comp-M4-S4} compares the BER in S4 with an unbalanced PA, i.e., $( {\alpha}_{S,N}, {\alpha}_{S,F} ) = ( 0.1, 0.9 )$. Similar observations to Fig. \ref{fig:BER-comp-M4-S3} can be made, and the proposed scheme outperforms both OMA and the conventional one. Moreover, we can see from Figs.~\ref{fig:BER-comp-M4-S2}-\ref{fig:BER-comp-M4-S4} that the proposed scheme shows a larger decay rate for UF BER for large SNRs, revealing that the demapping errors at UN are successfully learned and compensated at UF, achieving higher diversity orders. \subsection{Adaptation to Power Allocation for S5 and S6} \begin{figure}[!t] \centering \subfigure[For S5]{\label{fig:BER-comp-M4-S5} \includegraphics[width=0.47\textwidth]{BER-PA-S5} } \subfigure[For S6]{\label{fig:BER-comp-M4-S6} \includegraphics[width=0.47\textwidth]{BER-PA-S6} } \caption{BER performance comparison of the proposed deep cooperative NOMA and the conventional NOMA schemes with PA mismatch between training and inference.} \end{figure} To demonstrate its adaptation to the mismatch between the training and inference PA discussed in Section \ref{sec:pa}, we validate the proposed scheme in S5 ($\hat{\alpha}_{S,F} < {\alpha}_{S,F} $) and S6 ($\hat{\alpha}_{S,F} > {\alpha}_{S,F} $) in Figs.~\ref{fig:BER-comp-M4-S5} and \ref{fig:BER-comp-M4-S6}, respectively. It can be seen that for UF, the proposed scheme outperforms the conventional one at SNR$>15$~dB. It can also be seen that the proposed scheme still achieves larger BER decay rates in both S5 and S6. These results clearly verify that, without carrying out a new training process, the proposed scheme can handle the PA mismatch. 
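As these results suggest, the adaptation of Section \ref{sec:pa} costs only two scalar factors computed from the ratio of the inference and training PA coefficients. A minimal sketch using the S5 values (the function name is ours):

```python
import math

def pa_scaling_factors(alpha_train, alpha_infer):
    """omega_J = sqrt(alpha_hat_J / alpha_J) for J in {N, F}.

    alpha_train: (alpha_SN, alpha_SF) fixed during training.
    alpha_infer: (alpha_SN_hat, alpha_SF_hat) seen at inference.
    """
    (a_n, a_f), (ah_n, ah_f) = alpha_train, alpha_infer
    return math.sqrt(ah_n / a_n), math.sqrt(ah_f / a_f)

# Scenario S5: trained with (0.25, 0.75), inference uses (0.3, 0.7).
omega_n, omega_f = pa_scaling_factors((0.25, 0.75), (0.30, 0.70))
# The trained demappers are then reused on scaled inputs, e.g.
# g_N(y_SN / omega_n) and g_F(y_SF / omega_f, y_NF), without retraining.
```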
\subsection{BER Performance Comparison with Channel Coding} \begin{figure}[!t] \centering \subfigure[For S2]{\label{fig:BER-comp-coded-S2-NaF} \includegraphics[width=0.47\textwidth]{BER-coded-S2-NaF} } \subfigure[For S4]{\label{fig:BER-comp-coded-S4-NaF} \includegraphics[width=0.47\textwidth]{BER-coded-S4-NaF} } \caption{BER performance comparison of the proposed deep cooperative NOMA and the conventional NOMA schemes with the LDPC code.} \label{fig:BER-LDPC} \end{figure} In Fig.~\ref{fig:BER-LDPC}, we evaluate the coded BER performance with the LDPC code in S2 and S4. The parity-check matrix comes from the DVB-S2 standard \cite{DVB-S2}, with rate $1/2$ and size $32400 \times 64800$. Therefore, $\bm{c}_{N}$ and $\bm{c}_{F}$ have a length of $32400$ bits, while the $\mathcal{E}( \cdot )$ encoded $ \langle \bm{s}_{N} \rangle $ and $ \langle \bm{s}_{F} \rangle$ have a length of $64800$ bits. The LDPC decoder $\mathcal{D} ( \cdot ) $ is based on the classic belief propagation algorithm with soft LLRs as input. The coded BER is defined as $\Pr \{ \bm{c}_{J} \neq \hat{\bm{c}}_{J} \}$, $J \in \{ N, F\}$. For the conventional scheme, UN adopts SIC due to its low computational complexity. Specifically, it first decodes $\bm{c}_{F}$ as $\hat{\bm{c}}_{F}^{N} = \mathcal{D} ( \cdot )$, cancels the interference after re-encoding and re-modulating $\hat{\bm{c}}_{F}^{N}$, and then decodes $\hat{\bm{c}}_{N}$. Then, UN forwards the re-modulated signal to UF. Note that the decoding is terminated on reaching the maximum number of decoding iterations ($50$ here) or when all parity checks are satisfied. In both scenarios, we observe a significant and increasing decoding performance gap between the proposed and conventional schemes.
For example, in Fig.~\ref{fig:BER-comp-coded-S4-NaF}, to achieve BER=$10^{-4}$ for UF, the SNRs for the proposed and the conventional\footnote{The performance of the conventional scheme can also be found in \cite{LDPC-NOMA-pan2018sic}.} schemes are $0.25$ and $20$~dB, respectively, a gap of more than $19$~dB. Similar observations can be made from Fig.~\ref{fig:BER-comp-coded-S2-NaF}. The performance superiority of the proposed scheme mainly originates from its utilization of soft information and from the parallel demapping at UN, both enabled by the error-performance-oriented optimization. Meanwhile, the performance of the conventional scheme is limited by interference and error propagation \cite{LDPC-NOMA-pan2018sic}. \subsection{Computational Complexity Comparisons} As discussed before, we adopt the offline-training and online-deploying mode for the proposed scheme. Therefore, we only need to consider the computational complexity in the online-deploying phase. Specifically, in the uncoded case, the complexity for signal detection is $\mathcal{O}(2^{k})$ for the conventional scheme. By contrast, the mapping-demapping complexity is $\mathcal{O}(k)$ for the proposed scheme, which is only linear in $k$. In the coded case, the conventional scheme includes two decoding processes to jointly decode $\hat{\bm{s}}_{N}$ and $\hat{\bm{s}}_{F}^{N}$ at UN, resulting in a high decoding complexity. The proposed scheme only involves a single decoding process to separately decode its own information $\hat{\bm{s}}_{N}$, so that a low-complexity demapping-and-forward scheme can be used for the UF signal. \section{Conclusion} In this paper, we proposed a novel deep cooperative NOMA scheme to optimize the BER performance. We developed a new hybrid-cascaded DNN architecture to represent the cooperative NOMA system, which can then be optimized in a holistic manner.
Multiple loss functions were constructed to quantify the BER performance, and a novel multi-task oriented two-stage training method was proposed to solve the end-to-end training problem in a self-supervised manner. A theoretical perspective was then established to reveal the learning mechanism of each DNN module. Simulation results demonstrate the merits of our scheme over OMA and the conventional NOMA scheme in various channel environments. As a main advantage, the proposed scheme can adapt to PA mismatch between training and inference, and can incorporate channel coding to combat signal deterioration. In future work, we will consider system designs for high-order constellations, transmission rate adaptation~\cite{concl-makki2020error}, and grant-free access~\cite{concl-8454392}, and will extend the scheme to include more cooperative users~\cite{concl-8989311,concl-8726376}.
\section{\label{sec:intro}INTRODUCTION} The discovery of rare events caused by new physics requires that backgrounds which could mimic the signal be reduced as much as possible. Irreducible backgrounds must be well studied with credible estimates of their uncertainties. Searches for new physics based on the dual-phase liquid xenon (LXe) time projection chamber (TPC) have exciting potential for the discovery of dark matter and for the discovery of new neutrino properties. In the last decade these experiments have grown in size while background levels have been suppressed. In current and future LXe TPCs, the majority of the background rate in the low-energy ($\lesssim50$~keV) regime results from $\beta$ decays of \Pbtof, \Pbtot, and \KreF, with \Pbtof being the most significant of these by far. The isotopes \Pbtof and \Pbtot enter the LXe bulk as daughters of \Rnttt and \Rnttz which emanate out of the detector construction materials and dust. \KreF, on the other hand, enters through its abundance in the atmosphere and is therefore present in the raw xenon feedstock. The residual quantity found in low-background LXe experiments is that which survives xenon purification techniques such as chromatography~\cite{Akerib:2016hcd} and distillation~\cite{Aprile:2016xhi}. Typically, the background from \Pbtof dominates over that from \Pbtot and \KreF owing to the \Rnttt half-life of 3.8~days. Each of these isotopes exhibits a $\beta$ decay in which the transition proceeds directly to the daughter nucleus's ground state with no associated $\gamma$-ray or conversion electron emission (henceforth referred to as the ``ground-state" decay). The result is a continuous energy distribution of single-site events made by the ejected electron which spans from zero up to the decay $Q$-value. Table~\ref{tab:data} provides a summary of pertinent nuclear data for these three isotopes and their ground-state transitions.
The low energy population of these decays forms the majority background for many new physics searches as illustrated in Table~\ref{tab:bkg}. The remaining $\beta$ decays which populate excited states of the daughter and result in $\gamma$-ray emission are less of a concern as they are more easily identified as multi-site events which do not mimic the sought-after signal. While multiple techniques are used to infer the final level of each isotope realized in an experiment, the modelling of this background depends on the ground state decay branching ratios assumed and, more crucially, the precise shape of the $\beta$ energy spectra. \begin{table}[t] \centering \caption{ Relevant nuclear data for the isotopes considered in this work. The last three columns provide information pertaining to ground state decays, including the branching ratio (BR) and the initial and final spin-parity assignments, $J_i^{\pi}$ and $J_f^{\pi}$. Uncertainties smaller than 5\% are not shown. Endpoint data is from~\cite{Wan17} and all other data is from~\cite{A214datasheet,A212datasheet,A85datasheet}. } \label{tab:data} \begin{ruledtabular} \begin{tabular}{llccc} \multirow{2}{*}{Isotope} & \multirow{2}{*}{Half-life} & \multicolumn{3}{c}{Ground state $\beta$ decay} \\ & & Endpoint (keV) & BR (\%) & $J_i^{\pi},J_f^{\pi}$ \\ \hline \Pbtof & 26.8~min & 1018 & 11.0(10)\Tstrut & $0^+,1^-$ \\ \Pbtot & 10.6~h & 569.1 & 11.9(16) & $0^+,1^-$ \\ \KreF & 10.7~yr & 687.0 & 99.6 & $9/2^+,5/2^-$ \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[ht] \centering \caption{ Projected and measured percentage of total electron recoil background in LXe TPC experiments attributed to the ground-state $\beta$ decays of the given isotopes in the specified energy windows. Other backgrounds arise from solar neutrino-electron scattering, $2\nu\beta\beta$ decay of $^{136}$Xe, and $\gamma$-rays from detector materials. 
The contribution from $^{136}$Xe is falling steeply in this region, becoming subdominant to that from solar neutrino scattering below $\approx12$~keV. } \label{tab:bkg} \begin{ruledtabular} \begin{tabular}{lccc} \multirow{2}{*}{Isotope} & LZ~\cite{LZ:2018} & XENONnT~\cite{xntSens} & XENON1T~\cite{Aprile:2020tmw} \\ & 1.5--15~keV & 1--13~keV & 1--30~keV \\ \hline \Pbtof & 53 & 42 & $83\pm2$ \Tstrut\\ \Pbtot & 8.8 & - & - \\ \KreF & 2.3 & 8.2 & $10\pm2$ \\ \end{tabular} \end{ruledtabular} \end{table} The shape of a $\beta$ particle energy spectrum depends on the nature of the weak interaction transition and on both the atomic and nuclear structure of the initial and final states involved. For first-forbidden unique decays, such as the decay of \KreF, there are only small corrections to the spectrum shape from nuclear structure. However, for first-forbidden non-unique transitions, such as the ground state decays of \Pbtof and \Pbtot, the spectral shape can depend heavily on the details of the nuclear structure. The presently used formalism for the forbidden non-unique $\beta$ transitions was first introduced in~\cite{Mustonen2006} and later extended in~\cite{Haaranen2016} and~\cite{Haaranen2017} to include the next-to-leading-order corrections to the $\beta$-decay shape function. In~\cite{Haaranen2016} it was noticed for the first time that some of the forbidden non-unique $\beta$ transitions can depend strongly on the effective value of the weak axial vector coupling constant $g_{\rm A}$. This dependence on $g_{\rm A}$ was studied in the nuclear shell-model framework in~\cite{Kostensalo2017}. A recent review of the $\beta$ spectral-shape calculations is given in~\cite{Ejiri2019}. The present shell-model calculations are an extension of the aforementioned $\beta$-decay formalism in that the magnitude of a key vector-type nuclear matrix element (NME) is fixed to reproduce the partial half-life of the ground state transitions of \Pbtot and \Pbtof. 
This method was used in recent $\beta$-decay calculations for light nuclei in~\cite{Kumar2020}. Recently, the XENON1T experiment reported~\cite{Aprile:2020tmw} an excess of electron recoil events above a background which is dominated by the ground-state $\beta$ decay of \Pbtof. In that result both the nuclear transition and atomic exchange effects were modelled assuming the decay is an allowed transition. In this work we report on the ground-state $\beta$ shapes of \Pbtof and \Pbtot obtained by calculating the necessary NMEs for a first-forbidden non-unique transition and by employing a formalism for atomic exchange corrections that has been extended to include forbidden unique transitions. The same exchange formalism is then also applied to the ground state $\beta$ decay of \KreF. \section{\label{sec:calcs}CALCULATIONS} The half-life of a forbidden non-unique $\beta^-$ decay can be expressed as \begin{equation} t_{1/2}=\tilde{\kappa}/\tilde{C}, \label{eq:hl} \end{equation} where~\cite{Hardy1990} \begin{equation} \tilde{\kappa} = \frac{2\pi^3\hbar^7\mathrm{ln \ 2}}{m_e^5c^4(G_{\rm F} \cos \theta_{\rm C})^2}= 6147 \ \mathrm{s}, \label{eq:kappa} \end{equation} $\theta_{\rm C}$ being the Cabibbo angle and $\tilde{C}$ is the dimensionless integrated shape function, given by \begin{equation} \tilde{C} = \int^{w_0}_1 C(w_e)pw_e(w_0-w_e)^2F_0(Z,w_e)K(w_e)dw_e, \label{eq:ctilde} \end{equation} where $w_e=W_e/m_ec^2$, $w_0 = W_0/m_ec^2$, and $p=p_ec/(m_ec^2)= \sqrt{w_e^2 -1}$ are unitless kinematic quantities, $F_0(Z,w_e)$ is the Fermi function, and $K(w_e)$ encompasses a plurality of corrective terms such as atomic effects. The shape factor $C(w_e)$ of Eq.~(\ref{eq:ctilde}) contains complicated combinations of both (universal) kinematic factors and nuclear form factors. The nuclear form factors can be related to the corresponding NMEs using the impulse approximation~\cite{Behrens1982}. The $\beta$ particle spectrum is given by the integral in Eq.~(\ref{eq:ctilde}). 
The probability of the electron being emitted with energy between $w_e$ and $w_e+dw_e$ is \begin{equation} P(w_e)dw_e \propto C(w_e)pw_e(w_0-w_e)^2F_0(Z,w_e)K(w_e)dw_e. \end{equation} \subsection{\label{ssec:nuclear_calcs}Nuclear shape factors} For the first-forbidden decays the relevant NMEs are those corresponding to the transition operators \begin{align} \mathcal{O} (0^-): g_{\rm A}({\bm\sigma \cdot {\textbf{p}}_e}), \quad g_{\rm A} &({\bm\sigma \cdot {\textbf{r}}}) \label{eq:rank-0}\\ \mathcal{O} (1^-): g_{\rm V}\textbf{p}_e, \quad g_{\rm A} ({\bm\sigma \times {\textbf{r}}}), &\quad g_{\rm V}\textbf{r} \label{eq:rank-1} \\ \mathcal{O} (2^-): g_{\rm A} [{\bm\sigma {\textbf{r}}}]_2&, \label{eq:rank-2} \end{align} where \textbf{r} is the radial coordinate and $\textbf{p}_e$ is the electron momentum. The decay of \KreF is first-forbidden unique, so only the operator $ g_{\rm A} [{\bm\sigma {\textbf{r}}}]_2$ contributes, simplifying the calculations greatly. For the ground state decay of \Pbtot and \Pbtof only the rank-1 operators contribute. The NMEs involved in the transitions can be evaluated using the relation \begin{align} \begin{split} ^{V/A}\mathcal{M}_{KLS}^{(N)}(pn)(k_e,m,n,\rho)& \\=\frac{\sqrt{4\pi}}{\widehat{J}_i} \sum_{pn} \, ^{V/A}m_{KLS}^{(N)}(pn)(&k_e,m,n,\rho)(\Psi_f|| [c_p^{\dagger} \tilde{c}_n]_K || \Psi_i), \label{eq:ME} \end{split} \end{align} where $^{V/A}m_{KLS}^{(N)}(pn)(k_e,m,n,\rho)$ is the single-particle matrix element, and $(\Psi_f|| [c_p^{\dagger}\tilde{c}_n]_K || \Psi_i)$ is the one-body transition density (OBTD), which contains all the relevant nuclear-structure information. The OBTDs need to be evaluated using some nuclear model, such as the nuclear shell model used in this work. The nuclear structure calculations were done using the shell-model code NuShellX@MSU~\cite{nushellx}. For \KreF the calculations were carried out in the full $0f_{5/2}$--$1p$--$0g_{9/2}$ valence space with the effective Hamiltonian JUN45~\cite{Honma2009}. 
For the Pb isotopes the calculations were done using the complete valence space spanned by proton orbitals $0h_{9/2}$, $1f$, $2p$, and $0i_{13/2}$ and neutron orbitals $0i_{11/2}$, $1g$, $2d$, $3s$, and $0j_{15/2}$ with the effective Hamiltonian khpe~\cite{Warburton91}. For \KreF the spectral shape does not depend on the nuclear structure in the leading-order terms. In this work we include also the next-to-leading-order terms in the Behrens and B{\"u}hring expansion~\cite{Behrens1982}, which increases the number of NMEs involved in each transition to 5 for \KreF and to 13 for the non-unique transitions. Uncertainties in the theoretical spectral shapes are related to uncertainties in the ratios of the NMEs. Based on previous calculations, quenching of the ratio of the axial-vector and vector coupling constants $g_{\rm A}/g_{\rm V}$ is needed in order to reproduce experimental spectral shapes for non-unique beta decays with the shell model~\cite{Haaranen2017}. However, the precise amount of quenching needed for the decays studied here is not known. Based on previous studies from the past four decades, the value $g_{\rm A}=1.0$ was chosen for \KreF, while for $^{212,214}$Pb we report results using the range of values $g_{\rm A}=0.85\pm0.15$ as the quenching of $g_{\rm A}$ seems to be more severe for larger masses (see e.g.~\cite{Suhonen2017}). Since the decay of \KreF is first-forbidden unique, the value of $g_{\rm A}$ affects only the next-to-leading order terms resulting in a correction on the order $\sim$0.1\%. On the other hand, in non-unique decays the value of $g_{\rm A}$ can be more impactful and different values can result in different spectral shapes~\cite{Haaranen2017}. In $^{212,214}$Pb this is not the case, and here we find the spectral shapes are somewhat insensitive to the value of $g_{\rm A}$. The experimental half-lives of the ground state $^{212,214}$Pb transitions are reproduced within the chosen range of $g_{\rm A}$. 
Specifically, this is achieved with the value 0.83 (0.91) for \Pbtof (\Pbtot) without quenching $g_{\rm V}$ from the conserved vector current hypothesis value of 1.0. The ratio $g_{\rm A}/g_{\rm V}$ also agrees with other shell model calculations in this mass region. Warburton's calculations resulted in the value $g_{\rm A}/g_{\rm V}=0.6/0.6\approx 1.0$~\cite{Warburton91}, and more recent calculations of Zhi \emph{et al.} yielded $g_{\rm A}/g_{\rm V}=0.48/0.65\approx 0.74$~\cite{Zhi2013}. It should be noted that the spectral shape is only sensitive to ratios of matrix elements, and so the absolute quenching factor of all matrix elements is irrelevant. Thus taking $g_{\rm A}=g_{\rm V}=0.6$ will result in the same spectral shape as $g_{\rm A}=g_{\rm V}=1.0$. For values of $g_{\rm A}$ that did not reproduce the experimental half-life, the small matrix element $^V\mathcal{M}_{101}$ was adjusted so that the experimental partial half-life related to the transition was reproduced. This approach was shown to work well in the case of the second-forbidden non-unique decay of $^{36}$Cl in the recent work of Kumar \emph{et al.}~\cite{Kumar2020}. Recently, the EXO-200 collaboration reported~\cite{EXO-200} a measurement of the $\beta$ shape of the first-forbidden non-unique ground state $\beta$ transition $^{137}\textrm{Xe}(7/2^-)\to\,^{137}\textrm{Cs}(7/2^+)$. Good agreement between the measured $\beta$ spectrum and that computed in our formalism was found. This transition is not entirely analogous to the decays considered here, however, as rank-0 matrix elements play a significant role. Furthermore, the change in the shape factor is small, about 2\%. In the mass region of interest here, the decay of $^{210}$Bi could provide a useful comparison as it connects $1^-$ and $0^+$ states. However, the spectral shape of $^{210}$Bi computed in \cite{kostensalo2018} was found to depend strongly on the adopted value of $g_{\rm A}$. 
We therefore do not consider a comparison with this transition a valid test of our calculations, as almost any spectral shape could be fit by altering $g_{\rm A}$. The transition in $^{210}$Bi furthermore differs from those in $^{212,214}$Pb because of the differences in nuclear structure discussed in the appendix of~\cite{Aprile:2020tmw}. \subsection{\label{ssec:nuclear_calcs}Atomic exchange effect} The exchange effect has already been demonstrated to be the most prominent atomic effect at low energy~\cite{Hay18}, possibly enhancing the decay probability by more than 10\% below 5~keV \cite{Harston92,Mou14}. It arises from the indistinguishability of the electrons and the imperfect orthogonality of the initial and final atomic states due to the change of the nuclear charge in the decay. The exchange process is an additional decay channel with the same final state as the direct decay and can be seen as the swap of the $\beta$ electron with an electron of the atomic cloud, which is then ejected to the continuum. Previous studies that included this effect were only focused on allowed transitions or assumed that the correction for an allowed transition can be applied as a first approximation to a forbidden transition~\cite{Harston92,Mou14,Kossert15,Kossert18,Aprile:2020tmw}. This is because a precise formalism of the exchange effect was set out only for allowed transitions~\cite{Pyper88}. A summary of the key ingredients that are needed to calculate the exchange correction factor can be found in~\cite{Aprile:2020tmw}. The $\beta$ spectra calculated as allowed in the present work are identical to the ``improved calculations'' in~\cite{Aprile:2020tmw}. Full numerical calculation of the atomic screening and exchange effects is included, as well as accurate radiative corrections from the precise study of superallowed transitions~\cite{Tow08}. 
Identical calculations have also been performed for the ground state \KreF decay but with the exchange effect correctly determined for this first-forbidden unique transition. Indeed, the formalism from~\cite{Pyper88} has recently been extended to the forbidden unique transitions and will be detailed elsewhere. We briefly summarize the main results here. The definition of the relativistic electron wave functions is consistent with the Behrens and B{\"u}hring formalism~\cite{Behrens1982}. In the relativistic case, the usual operator $L^2$ defined from the orbital angular momentum operator $\vec{L}$ does not commute with the Hamiltonian. Instead, the appropriate operator to consider is \begin{equation} \hat{K} = \beta(\vec{\sigma}\cdot\vec{L}+1) \, , \end{equation} with $\beta$ the ($4 \times 4$) Dirac matrix and $\vec{\sigma}$ standing for the three ($4 \times 4$) matrices defined from the ($2 \times 2$) Pauli matrices $\sigma_{x,y,z}$. Its eigenvalue is the quantum number $\kappa$, and it is convenient to introduce the quantity $k = |\kappa|$. Under spherical symmetry, only the small and large radial components are of interest. A continuum state is characterized by its quantum number $\kappa$, its total energy $w_e$, its momentum $p$, and its Coulomb amplitude $\alpha_{\kappa}$, and is denoted $\phi_{c,\kappa}$. Similarly, an atomic bound state is characterized by its quantum numbers ($n$,$\kappa$), its binding energy $E_{n \kappa}$, its total energy $w_{n \kappa} = 1 - |E_{n \kappa}|/m_e c^2$, its momentum $p_{n \kappa} = \sqrt{1 - w_{n \kappa}^2}$, and its Coulomb amplitude $\beta_{n \kappa}$, and is denoted $\phi_{b,n \kappa}$. Primed quantities refer to the daughter atom; unprimed quantities refer to the parent atom. By restricting to the dominant NMEs, $\beta$ electrons can only be created in states with $\kappa = \pm 1$ in allowed transitions. The exchange process can then occur only for the atomic electrons in $s_{1/2}$ ($\kappa = -1$) and $p_{1/2}$ ($\kappa = +1$) orbitals. 
For allowed transitions, the shape factor $C(w_e)$ defined in Eq.~(\ref{eq:ctilde}) is energy independent, so the exchange effect is corrected by applying \begin{equation} \label{eq:excorr_all} C(w_e) \text{ } \longrightarrow \text{ } C(w_e) \times (1+\eta_1) \, . \end{equation} In the case of first-forbidden unique transitions, $\beta$ electrons can be created in states with $\kappa = \pm 1\text{, } \pm 2$. The exchange process can thus occur also for the atomic electrons in $p_{3/2}$ ($\kappa = -2$) and $d_{3/2}$ ($\kappa = +2$) orbitals. In addition, the shape factor is well known to exhibit the following energy dependence: \begin{equation} \label{eq:1fu_shape} C(w_e) \propto q^2 + \lambda_2 p^2 \, , \end{equation} with $q = (w_0 - w_e)$ and \begin{equation} \lambda_2 = \dfrac{\alpha_{+2}^2 + \alpha_{-2}^2}{\alpha_{+1}^2 + \alpha_{-1}^2} \, . \end{equation} The first term in Eq.~(\ref{eq:1fu_shape}) comes from $\beta$ electrons with $\kappa = \pm 1$ and the second term from those with $\kappa = \pm 2$. One can demonstrate that the exchange effect is corrected by applying \begin{equation} \label{eq:excorr_1} q^2 \text{ } \longrightarrow \text{ } q^2 \times (1+\eta_1) \end{equation} \begin{equation} \label{eq:excorr_2} \lambda_2 p^2 \text{ } \longrightarrow \text{ } \lambda_2 p^2 \times (1+\eta_2) \, . \end{equation} The correction factor is defined by \begin{equation} \eta_k = \dfrac{T_{+k}(T_{+k} - 2\alpha_{+k}) + T_{-k}(T_{-k} - 2\alpha_{-k})}{\alpha_{+k}^2 + \alpha_{-k}^2} \, . \end{equation} The exchange probability between a $\beta$ electron and an atomic electron mainly depends on the overlap of their radial wave functions. As the process can occur with each electron in a $\kappa$ state, one has to sum over the different ($n$,$\kappa$) states. 
Assuming no atomic excitation and completely filled orbitals, one can establish \begin{equation} T_{\kappa} = \sum\limits_{n}{\text{ } \dfrac{\langle\phi_{c,\kappa}'|\phi_{b,n \kappa}\rangle}{\langle\phi_{b,n \kappa}'|\phi_{b,n \kappa}\rangle} \text{ } \beta_{n \kappa}' \text{ } \left( \dfrac{p_{n \kappa}'}{p} \right)^{k-1}} \, . \end{equation} In the case of allowed transitions, this result is similar to what is described in~\cite{Pyper88} except that the overlap of parent and daughter atomic wave functions is no longer approximated by unity. As in~\cite{Aprile:2020tmw}, the relativistic electron wave functions have been determined following the numerical procedure described in~\cite{Mou14}, forcing the convergence to the accurate orbital energies from~\cite{Kot97} for the bound states. \section{\label{sec:results}RESULTS} The final ground-state $\beta$ spectra of \Pbtof and \Pbtot obtained from our calculations are shown in Figs.~\ref{fig:pb214} and \ref{fig:pb212}, respectively. In each figure, the final spectrum shown in solid red includes both the effects of nuclear structure and the atomic exchange effect and is evaluated using the value of $g_{\rm A}=0.85$. The spectrum without the exchange correction (nuclear structure only) is shown as a dashed line for comparison. Surrounding the final spectrum is a band which shows the impact of varying the value of $g_{\rm A}$ from 0.7 to 1.0, and these values comprise the upper- and lower-band boundaries, respectively. The boundaries have been calculated using the same normalization as the solid line. The differences in the low energy part of the spectrum are therefore due to the change in the ratios of the relevant matrix elements rather than half-life. 
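To make the structure of the exchange correction of Sec.~\ref{ssec:nuclear_calcs} concrete, the following sketch contrasts the term-by-term correction for a first-forbidden unique transition (Eqs.~(\ref{eq:excorr_1})--(\ref{eq:excorr_2})) with the single-factor correction appropriate for allowed transitions (Eq.~(\ref{eq:excorr_all})). The function names and any numeric values of $\eta_1$, $\eta_2$, $\lambda_2$ used with them are illustrative placeholders only; in the actual calculation these are energy-dependent quantities obtained from the atomic wave functions.

```python
def corrected_shape_factor(w, w0, lam2, eta1, eta2):
    """First-forbidden unique shape factor C(w) ~ q^2 + lam2*p^2 with the
    term-by-term exchange correction: (1 + eta1) multiplies the q^2 term
    and (1 + eta2) the lam2*p^2 term.  Units: w = W_e/(m_e c^2)."""
    p2 = w * w - 1.0          # p^2 in units of (m_e c)^2
    q2 = (w0 - w) ** 2        # q^2 = (w0 - w)^2
    return q2 * (1.0 + eta1) + lam2 * p2 * (1.0 + eta2)

def allowed_style_correction(w, w0, lam2, eta1):
    """Approximation used in earlier work: a single factor (1 + eta1)
    applied to the whole shape factor, as for an allowed transition."""
    p2 = w * w - 1.0
    q2 = (w0 - w) ** 2
    return (q2 + lam2 * p2) * (1.0 + eta1)
```

At low energy ($w \to 1$) the $q^2$ term dominates and the two treatments coincide; they differ only where the $p^2$ term matters, by roughly $\eta_1 - \eta_2$, which is consistent with the very small \KreF{} difference reported in Sec.~\ref{sec:results}.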
\begin{figure*}[b] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure1left.pdf} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure1right.pdf} \end{subfigure} \caption{Comparison of $\beta$ spectra for the ground-state decay of \Pbtof shown over the full energy range (left) and at low energies (right). The result of this work is shown as a solid red line calculated using $g_{\rm A}=0.85$ and applying the atomic exchange correction. The upper and lower bounds of the shaded band show the spectrum obtained with $g_{\rm A}=0.7$ and $g_{\rm A}=1.0$, respectively. Spectra are normalized over the full energy range for each value of $g_{\rm A}$. \label{fig:pb214}} \end{figure*} \begin{figure*}[htb] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure2left.pdf} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure2right.pdf} \end{subfigure} \caption{Comparison of $\beta$ spectra for the ground-state decay of \Pbtot shown over the full energy range (left) and at low energies (right). The result of this work is shown as a solid line calculated using $g_{\rm A}=0.85$ and applying the atomic exchange correction. The upper and lower bounds of the shaded band show the spectrum obtained with $g_{\rm A}=0.7$ and $g_{\rm A}=1.0$, respectively. Spectra are normalized over the full energy range for each value of $g_{\rm A}$. \label{fig:pb212}} \end{figure*} \begin{figure*}[ht] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure3left.pdf} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figure3right.pdf} \end{subfigure} \caption{Comparison of $\beta$ spectra for the ground-state decay of \KreF shown over the full energy range (left) and at low energies (right). 
The dashed red line shows the spectrum with the exchange effect calculated using the extended formalism for first-forbidden unique transitions. The lower portion of each figure gives the difference between the spectra with the exchange effect calculated as an allowed and a first-forbidden unique transition. Spectra are normalized over the full energy range. \label{fig:kr85}} \end{figure*} The principal result of this work is to show the impact of the full nuclear calculation on the spectral shape. Thus we compare our \Pbtof and \Pbtot spectra to those used in the background model of Ref.~\cite{Aprile:2020tmw}, shown here in blue. As only rank-1 operators contribute in these transitions, $\beta$ electrons can only be created with $\kappa = \pm 1$, i.e.\ the atomic exchange correction as for an allowed transition in Eq.~(\ref{eq:excorr_all}) is a good approximation. However, the spectra were calculated as allowed in Ref.~\cite{Aprile:2020tmw}, without any nuclear shape adjustment, unlike the present work. For both Pb isotopes the present calculations result in less rate in the low-energy region of interest for several new physics searches compared to the allowed calculation. The spectra obtained here with $g_{\rm A}=0.85$ predict a 19.0\% and 15.5\% lower event rate in a 1--15~keV energy window from \Pbtof and \Pbtot, respectively. Over the 0.7--1.0 range of $g_{\rm A}$ the corresponding ratios are 15.0--23.2\% and 12.1--19.0\%. For further reference we also show in green the $\beta$ shape generated by the \textsc{GEANT4}~\cite{agostinelli:2002hh} toolkit commonly used to predict background rates and energy spectra in LXe TPC experiments. A detailed description of the $\beta$ spectrum model used in \textsc{GEANT4} can be found in the appendix of Ref.~\cite{Aprile:2020tmw}. 
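Window-rate comparisons like those quoted above are mechanical once two spectra are in hand. The sketch below, using toy spectra as stand-ins (the real inputs would be the tabulated $\beta$ shapes), shows the bookkeeping: normalize each spectrum to unit area over a wide analysis window and compare the integrated rates in the low-energy window. The numbers it produces are illustrative and do not reproduce the percentages reported in this work.

```python
def low_energy_deficit(energies, spec_new, spec_ref, norm=(1, 210), window=(1, 15)):
    """Percent reduction in the [window] keV rate of spec_new relative to
    spec_ref after normalizing each spectrum to unit area over [norm] keV.
    energies, spec_new, spec_ref are parallel lists (keV, arbitrary units)."""
    def normalized(spec):
        area = sum(w for e, w in zip(energies, spec) if norm[0] <= e <= norm[1])
        return [w / area for w in spec]
    a = normalized(spec_new)
    b = normalized(spec_ref)
    ra = sum(w for e, w in zip(energies, a) if window[0] <= e <= window[1])
    rb = sum(w for e, w in zip(energies, b) if window[0] <= e <= window[1])
    return 100.0 * (rb - ra) / rb

# Toy example: a flat reference shape vs. one suppressed below 15 keV
energies = list(range(1, 211))
spec_ref = [1.0] * len(energies)
spec_new = [0.5 if e <= 15 else 1.0 for e in energies]
deficit = low_energy_deficit(energies, spec_new, spec_ref)
```

Because each spectrum is renormalized over the wide window, a suppression at low energy is partially compensated elsewhere, which is why the quoted percentages depend on the choice of normalization window.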
For an analysis such as that in~\cite{Aprile:2020tmw} performed in a restricted energy window well below the $\beta$ endpoint, differences in the assumed $\beta$ spectrum introduce a background systematic in the lowest energy region used to search for possible new physics signals. To illustrate the size of this systematic we normalized the area under our spectra to that under the allowed shape in a 1--210~keV energy window like that used in~\cite{Aprile:2020tmw}. For the $\beta$ decay from \Pbtof our spectra predict 4.3\%, 5.5\%, and 6.7\% less rate in a 1--15~keV window corresponding to the values $g_{\rm A}=0.7,0.85,1.0$. For \Pbtot these ratios are 6.6\%, 8.4\%, and 10.3\%. Interestingly, these results suggest that the size of the excess observed by XENON1T could in fact be larger than what is reported. For the first time, the ground state \KreF $\beta$ spectrum has been calculated with the correct atomic exchange effect for this first-forbidden unique transition. Fig.~\ref{fig:kr85} shows our result (red dashed line) compared with three other calculations. The solid blue spectrum is the result given in Ref.~\cite{Aprile:2020tmw}, calculated as a first-forbidden unique transition with an atomic exchange correction as for an allowed transition. The solid green spectrum comes from the model used by \textsc{GEANT4}, and the dashed orange spectrum does not include any exchange correction. The difference between the spectra from this work and Ref.~\cite{Aprile:2020tmw} is given in the lower portion of the figure and is found to be in the range of $\pm0.05$\%. Such a negligible difference comes from a combination of effects. First, seven orbitals contribute to $T_{\pm 1}$ but only four orbitals to $T_{\pm 2}$. 
Secondly, the exchange correction in Ref.~\cite{Aprile:2020tmw} corresponds to applying the approximation \begin{equation} \left[ q^2 + \lambda_2 p^2 \right] \text{ } \longrightarrow \text{ } \left[ q^2 + \lambda_2 p^2 \right] \times (1+\eta_1) \, , \end{equation} which means that we are comparing $\eta_1$ with $\eta_2$, two quantities of similar magnitude. Lastly, the magnitude of the exchange correction factors is greatest at low energy, and, as can be seen from Eqs.~(\ref{eq:excorr_1}) and (\ref{eq:excorr_2}), the energy dependence of the shape factor enhances the influence of $(1+\eta_1)$ and conversely reduces the influence of $(1+\eta_2)$. This explains why our extended calculation of the exchange effect in \KreF decay gives a $\beta$ spectrum very close to the approximate spectrum of Ref.~\cite{Aprile:2020tmw}. One can expect similar behavior in every first-forbidden unique transition as long as the transition is not dominated by an accidental cancellation of the NMEs. \section{\label{sec:conclusion}CONCLUSION} We have presented improved energy spectra for the ground-state $\beta$ decays of \Pbtof, \Pbtot, and \KreF. Together, these three decays form the most significant sources of background at low energy in current and future LXe dark matter experiments, \Pbtof being the most salient of the three. The spectra derived here make use of a nuclear shell model formalism to calculate the relevant NMEs and include corrections for the atomic exchange effect. We find that the ground-state spectra depend on the weak axial vector coupling $g_{\rm A}$ and therefore produce spectra using a suitable range for its value. Our results predict a 19.0\% and 15.5\% downward shift in background rate from \Pbtof and \Pbtot in the energy region of interest for new physics searches relative to previous predictions. Our assessment of a suitable range for $g_{\rm A}$ suggests that these shifts have an uncertainty of roughly 4\%. 
The overall impact of nuclear structure effects on the $^{212,214}$Pb spectra is found to be more significant than that from the atomic exchange effect considered previously. An extension of the atomic exchange correction to include first-forbidden unique transitions was applied to the ground-state decay of \KreF. The final spectrum shows very minor differences relative to previous calculations which use an allowed approximation for the exchange effect. Given the large discrepancy in \Pbtof and \Pbtot between the present calculation and the allowed approximation used previously, a direct measurement of these transitions would be prudent for the reduction of systematic errors in future experimental endeavors. To our knowledge, no direct experimental data for these spectra exist in the lowest energy region of concern for modern experiments. Historical investigations of the $\beta$ spectra from \Pbtof have focused on lines from internal conversion electrons and not on the continuous spectrum below $\sim700$~keV from decays to the $^{214}$Bi ground state. A dedicated measurement of these shapes could, for example, consist of a central, low-threshold detector containing a $^{212,214}$Pb source surrounded by a highly efficient $\gamma$-ray veto detector. In this configuration, the ground-state decays are reconstructed from the sample of events with a signature in the central detector but with no coincident signal in the outer veto. \section{ACKNOWLEDGMENTS} We thank Harry Nelson for helpful discussions. This work was supported by the U.S. Department of Energy Office of Science under contract number DE-AC02-05CH11231 and by the Academy of Finland under Academy project no.\ 318043. J.~K. acknowledges financial support from the Jenny and Antti Wihuri Foundation. \FloatBarrier \bibliographystyle{apsrev4-2}
\section{Introduction} In this paper we present a method to preserve certain splitting families along finite support iterations. These splitting families are constructed via forcing, using specific uncountable $2$-edge-labeled graphs\footnote{A \emph{$2$-edge-labeled graph} is a simple graph whose edges are labeled by either $0$ or $1$.} as support. The main application of this method is a forcing model where many classical cardinal characteristics of the continuum are pairwise different, including the \emph{splitting number} $\mathfrak{s}$ and the \emph{reaping number} $\mathfrak{r}$. We assume that the reader is familiar with \emph{Cicho\'n's diagram} (Figure~\ref{fig:cichon}) containing the characteristics that we will call \emph{Cicho\'n-characteristics}. We also investigate some of the characteristics in the \emph{Blass diagram}~\cite[Pg.~481]{Blass}. Figure~\ref{fig:all20} illustrates both diagrams combined, along with all the ZFC-provable inequalities that we are aware of. See~\cite{Blass,BaJu} for the definitions and the proofs for the inequalities (with the exception of ${\ensuremath{\cof(\Meager)}}\leq\mathfrak{i}$, which was proved in~\cite{BHH}). In the following, we only give the definitions of the non-Cicho\'n-characteristics that we will investigate in this paper. \begin{definition}\label{def:cardchar} \begin{enumerate}[(1)] \item For $a,b\in[\omega]^{\aleph_0}$, we define $a\subseteq^* b$ iff $a\smallsetminus b$ is finite; \item and we say that \emph{$a$ splits $b$} if both $a\cap b$ and $b\smallsetminus a$ are infinite, that is, $a\nsupseteq^* b$ and $\omega\smallsetminus a\nsupseteq^* b$. \item $F\subseteq[\omega]^{\aleph_0}$ is a \emph{splitting family} if every $y\in[\omega]^{\aleph_0}$ is split by some $x\in F$. The \emph{splitting number} $\mathfrak{s}$ is the smallest size of a splitting family. \item $D\subseteq[\omega]^{\aleph_0}$ is an \emph{unreaping family} if no $x\in[\omega]^{\aleph_0}$ splits every member of $D$. 
The \emph{reaping number} $\mathfrak{r}$ is the smallest size of an unreaping family. \item $D\subseteq[\omega]^{\aleph_0}$ is \emph{groupwise dense} when: \begin{enumerate}[(i)] \item if $a\in[\omega]^{\aleph_0}$, $b\in D$ and $a\subseteq^* b$, then $a\in D$, \item if $\langle I_n:n<\omega\rangle$ is an interval partition of $\omega$ then $\bigcup_{n\in a}I_n\in D$ for some $a\in[\omega]^{\aleph_0}$. \end{enumerate} The \emph{groupwise density number $\mathfrak{g}$} is the smallest size of a collection of groupwise dense sets whose intersection is empty. \item The \emph{distributivity number $\mathfrak{h}$} is the smallest size of a collection of dense subsets of $\langle[\omega]^{\aleph_0},\subseteq^*\rangle$ whose intersection is empty. \item Say that $a\in[\omega]^{\aleph_0}$ is a \emph{pseudo-intersection of $F\subseteq[\omega]^{\aleph_0}$} if $a\subseteq^* b$ for all $b\in F$. \item The \emph{pseudo-intersection number} $\mathfrak{p}$ is the smallest size of a filter base of subsets of $[\omega]^{\aleph_0}$ without pseudo-intersection. \item The \emph{tower number} $\mathfrak{t}$ is the smallest length of a (transfinite) $\subseteq^*$-decreasing sequence in $[\omega]^{\aleph_0}$ without pseudo-intersection. \item Given a class $\mathcal{P}$ of forcing notions, $\mathfrak{m}(\mathcal{P})$ denotes the minimal cardinal $\kappa$ such that, for some $Q\in\mathcal{P}$, there is some collection $\mathcal{D}$ of size $\kappa$ of dense subsets of $Q$ without a filter in $Q$ intersecting every member of $\mathcal{D}$. \item Let $\mathbb{P}$ be a poset. A set $A\subseteq \mathbb{P}$ is \emph{$k$-linked (in $\mathbb{P}$)} if every $k$-element subset of $A$ has a lower bound in $\mathbb{P}$. $A$ is \emph{centered} if it is $k$-linked for all $k\in\omega$. \item A poset $\mathbb{P}$ is \emph{$k$-Knaster}, if for each uncountable $A\subseteq \mathbb{P}$ there is a $k$-linked uncountable $B\subseteq A$. 
And $\mathbb{P}$ \emph{has precaliber $\aleph_1$}, if such a $B$ can be chosen centered. For notational convenience, \emph{$1$-Knaster} means ccc, and \emph{$\omega$-Knaster} means precaliber $\aleph_1$. \item For $1\leq k\leq \omega$ denote $\mathfrak{m}_k:=\mathfrak{m}(k\text{-Knaster})$ and $\mathfrak{m}:=\mathfrak{m}_1$. We also set $\mathfrak{m}_0:=\aleph_1$. \end{enumerate} \end{definition} \newcommand{*+[F.]{\phantom{\lambda}}}{*+[F.]{\phantom{\lambda}}} \begin{figure} \resizebox{\textwidth}{!}{$ \xymatrix@=2ex{ & & &\mathfrak{c} \\ {\ensuremath{\cov(\Null)}}\ar[r] &{\ensuremath{\non(\Meager)}}\ar[r] &{\ensuremath{\cof(\Meager)}}\ar[r] &{\ensuremath{\cof(\Null)}}\ar[u] \\ & \mathfrak b\ar[r]\ar[u] &\mathfrak d\ar[u] \\ {\ensuremath{\add(\Null)}}\ar[r]\ar[uu] &{\ensuremath{\add(\Meager)}}\ar[r]\ar[u] &{\ensuremath{\cov(\Meager)}}\ar[r]\ar[u] &{\ensuremath{\non(\Null)}}\ar[uu] \\ \aleph_1\ar[u] }\quad \xymatrix@=2ex{ & & &\mathfrak{c} \\ {\ensuremath{\cov(\Null)}}\ar[r] &{\ensuremath{\non(\Meager)}}\ar[r] &*+[F.]{\phantom{\lambda}}\ar[r] &{\ensuremath{\cof(\Null)}}\ar[u] \\ & \mathfrak b\ar[r]\ar[u] &\mathfrak d\ar[u] \\ {\ensuremath{\add(\Null)}}\ar[r]\ar[uu] &*+[F.]{\phantom{\lambda}}\ar[r]\ar[u] &{\ensuremath{\cov(\Meager)}}\ar[r]\ar[u] &{\ensuremath{\non(\Null)}}\ar[uu] \\ \aleph_1\ar[u] } $} \caption{\label{fig:cichon}Cicho\'n's diagram (left). In the version on the right, the two ``dependent'' values ${\ensuremath{\add(\Meager)}}=\min\{\mathfrak b, {\ensuremath{\cov(\Meager)}}\}$ and ${\ensuremath{\cof(\Meager)}}=\max\{{\ensuremath{\non(\Meager)}},\mathfrak d\}$ are removed; the ``independent'' ones remain (nine entries excluding $\aleph_1$, or ten including it). 
An arrow $\mathfrak x\rightarrow \mathfrak y$ means that ZFC proves $\mathfrak x\le \mathfrak y$.} \end{figure} \usetikzlibrary{arrows} \begin{figure} \[ \tikz{ \node (a1) at (-2,1) {$\aleph_1$} ; \node (m) at (-1,1) {$\mathfrak m$} ; \node (mk) at (0,1) {$\mathfrak{m}_k$} ; \node (prec) at (1,1) {$\mathfrak{m}_\omega$} ; \node (p) at (2,1) {$\mathfrak p$} ; \node (e) at (2,4) {$\mathfrak e$}; \node (addn) at (1,3){{\ensuremath{\add(\Null)}}}; \node (covn) at (1,7){{\ensuremath{\cov(\Null)}}}; \node (nonn) at (9,3) {{\ensuremath{\non(\Null)}}} ; \node (cfn) at (9,7) {{\ensuremath{\cof(\Null)}}} ; \node (addm) at (3,3) {{\ensuremath{\add(\Meager)}}} ; \node (covm) at (7,3) {{\ensuremath{\cov(\Meager)}}} ; \node (nonm) at (3,7) {{\ensuremath{\non(\Meager)}}} ; \node (cfm) at (7,7) {{\ensuremath{\cof(\Meager)}}} ; \node (b) at (3,5) {$\mathfrak b$}; \node (d) at (7,5) {$\mathfrak d$}; \node (h) at (4.2,2) {$\mathfrak h$}; \node (s) at (5.5,2) {$\mathfrak s$}; \node (g) at (5.5,4) {$\mathfrak g$}; \node (a) at (5,8) {$\mathfrak a$}; \node (r) at (8,8) {$\mathfrak r$}; \node (u) at (8,9) {$\mathfrak u$}; \node (i) at (9,8) {$\mathfrak i$}; \node (c) at (10,9) {$\mathfrak c$}; \draw (a1) edge[->] (m); \draw (m) edge[->] (mk); \draw (mk) edge[->] (addn) (addn) edge[->] (covn) (covn) edge [->] (nonm) (nonm)edge [->] (cfm) (cfm)edge [->] (cfn); \draw (addn) edge [->] (addm) (addm) edge [->] (covm) (covm) edge [->] (nonn) (nonn) edge [->] (cfn); \draw (addm) edge [->] (b) (b) edge [->] (nonm); \draw (covm) edge [->] (d) (d) edge[->] (cfm); \draw (b) edge [->] (d); \draw (mk) edge[->] (prec) (prec) edge[->] (p) (p) edge[->] (e); \draw (p) edge [->] (h); \draw (h) edge [->] (b); \draw (b) edge [->] (r); \draw (r) edge [->] (u); \draw (covn) edge [->] (r); \draw (r) edge [->] (i); \draw (b) edge [->] (a); \draw (e) edge [->] (nonm); \draw (h) edge [->] (g); \draw (g) edge [->] (d); \draw (h) edge [->] (s); \draw (s) edge [->] (d); \draw (s) edge [->] (nonm); \draw (s) edge [->] 
(nonn); \draw (cfm) edge [->] (i); \draw (addn) edge [->] (e) (e) edge [->] (covm); \draw (covm) edge [->] (r); \draw (p) edge [->] (addm); \draw (cfn) edge[->] (c); \draw (i) edge[->] (c); \draw (u) edge[->] (c); \draw (a) edge[->] (c); } \] \caption{\label{fig:all20}Cicho\'n's diagram and the Blass diagram combined. An arrow $\mathfrak x\rightarrow \mathfrak y$ means that ZFC proves $\mathfrak x\le \mathfrak y$.} \end{figure} Below we list some additional properties of these cardinals. Unless noted otherwise, proofs can be found in~\cite{Blass}. \begin{fact}\label{fact:blass} \begin{enumerate} \item In~\cite{MSpt} it was proved that $\mathfrak{p}=\mathfrak{t}$.\footnote{Only the trivial inequality $\mathfrak{p}\leq\mathfrak{t}$ is used in this text.} \item The cardinals ${\ensuremath{\add(\Null)}}$, ${\ensuremath{\add(\Meager)}}$, $\mathfrak{b}$, $\mathfrak{t}$, $\mathfrak{h}$ and $\mathfrak{g}$ are regular. \item $\cf(\mathfrak{s})\geq\mathfrak{t}$ (see \cite{DowShelah}). \item $2^{{<}\mathfrak{t}}=\mathfrak{c}$. \item $\cf(\mathfrak{c})\geq\mathfrak{g}$. \item For $1\leq k\leq k'\leq\omega$, $\mathfrak{m}_k\leq\mathfrak{m}_{k'}$. \item\label{mart} For $1\leq k\leq\omega$, $\mathfrak{m}_k>\aleph_1$ implies $\mathfrak{m}_k=\mathfrak{m}_\omega$ (well-known, but see e.g.\ \cite[Lemma~4.2]{GKMS1}). 
\end{enumerate} \end{fact} This work contributes to the project of constructing a forcing model satisfying: \begin{equation}\tag{$\heartsuit$}\label{eq:maingoal} \text{All the cardinals in Figure~\ref{fig:all20} are pairwise different,} \end{equation} with the obvious (ZFC provable) exception of the dependent entries ${\ensuremath{\add(\Meager)}}=\min\{\mathfrak{b},{\ensuremath{\cov(\Meager)}}\}$ and ${\ensuremath{\cof(\Meager)}}=\max\{{\ensuremath{\non(\Meager)}},\mathfrak{d}\}$, and the Martin axiom numbers $\mathfrak{m}$, $\mathfrak{m}_k$ for some $2\le k<\omega$, and $\mathfrak{m}_\omega$, which together can take at most one uncountable value, see Fact~\ref{fact:blass}(\ref{mart}). In this direction \cite{GKS} constructed a forcing model, using four strongly compact cardinals, where all the ten (non-dependent) values of Cicho\'n's diagram are pairwise different (a situation we call \emph{Cicho\'n's Maximum}), as in Figure~\ref{fig:cichonorders}(A). This was improved later in~\cite{diegoetal}, which used only three strongly compact cardinals; finally in~\cite{GKMS2} it was shown that no large cardinals are needed for Cicho\'n's Maximum. A model of Cicho\'n's Maximum with the order as in Figure~\ref{fig:cichonorders}(B) was obtained in~\cite{KeShTa:1131}. Although this model initially required four strongly compact cardinals as well, the methods of~\cite{GKMS2} can be used to remove the large cardinal assumptions here too. As a next step towards~\eqref{eq:maingoal}, \cite{GKMS1} proved: \begin{theorem}[{\cite{GKMS1}}]\label{GKMSmain} Under $\mathrm{GCH}$, for any $k\in[1,\omega)$, there is a cofinality preserving poset $\mathbb{P}_{k}$ forcing that \begin{enumerate}[(a)] \item Cicho\'n's Maximum holds with the order of Figure~\ref{fig:cichonorders}\textnormal{(\textsc{a})}. \item $\aleph_1=\mathfrak{m}_{k-1}<\mathfrak{m}_k=\mathfrak{m}_\omega<\mathfrak{p}<\mathfrak{h}<{\ensuremath{\add(\Null)}}$ (recall $\mathfrak{m}_0:=\aleph_1$).
\end{enumerate} An analogous result holds for the alternative order of Figure~\ref{fig:cichonorders}\textnormal{(\textsc{b})}. \end{theorem} In this paper, we continue this line of work by including, in addition, $\mathfrak{s}$ and~$\mathfrak{r}$. \begin{mainthm} Under $\mathrm{GCH}$, for any $k\in[2,\omega)$ there is a cofinality preserving poset forcing that the cardinals in Cicho\'n's diagram, $\mathfrak{m}_k$, $\mathfrak{p}$, $\mathfrak{h}$, $\mathfrak{s}$ and $\mathfrak{r}$ are pairwise different. More specifically: \begin{enumerate}[(a)] \item Cicho\'n's Maximum holds, in either of the orders of Figure~\ref{fig:cichonorders}. \item $\aleph_1=\mathfrak{m}_{k-1}<\mathfrak{m}_k=\mathfrak{m}_\omega<\mathfrak{p}<\mathfrak{h}<{\ensuremath{\add(\Null)}}$. \item $\mathfrak{s}$ can assume any regular value between $\mathfrak{p}$ and $\mathfrak{b}$. \item $\mathfrak{r}$ can assume any regular value between $\mathfrak{d}$ and $\mathfrak{c}$. \end{enumerate} \end{mainthm} In both theorems above, item (b) can also be replaced by $\aleph_1<\mathfrak{m}_\omega<\mathfrak{p}<\mathfrak{h}<{\ensuremath{\add(\Null)}}$ while $\mathfrak{m}_k=\aleph_1$ for all $k<\omega$. These are the only possible constellations of the Knaster numbers, by Fact~\ref{fact:blass}(\ref{mart}), unless one counts $\mathfrak{m}$ as the $1$-Knaster number: in contrast to Theorem~\ref{GKMSmain} (where we do not control $\mathfrak{r},\mathfrak{s}$), we cannot force $\mathfrak{m}>\aleph_1$ with the methods we use here. We cannot just iterate over all small ccc forcings one by one to increase $\mathfrak{m}$, as our method requires all iterands of the forcing iteration to be ``homogeneous''. So instead of using a certain small forcing $\dot Q$ as iterand, we will use a finite support product over all variants as iterand.
So this product can be used in a ccc iteration only if $\dot Q$ (and therefore every variant) is Knaster;\footnote{Or at least stays ccc in ccc extensions.} accordingly we can increase the Knaster numbers but not $\mathfrak{m}$ itself. \begin{figure} \subfloat[\cite{GKS,GKMS2}] {\resizebox{0.48\textwidth}{!}{$ \xymatrix@=2ex{ & & &\mathfrak{c} \\ {\ensuremath{\cov(\Null)}}\ar[ddr] &{\ensuremath{\non(\Meager)}}\ar[ddr] &*+[F.]{\phantom{\lambda}}\ar[ddr] &{\ensuremath{\cof(\Null)}}\ar[u] \\ & \mathfrak b\ar[u] &\mathfrak d\ar@{=}[u] \\ {\ensuremath{\add(\Null)}}\ar[uu] &*+[F.]{\phantom{\lambda}}\ar@{=}[u] &{\ensuremath{\cov(\Meager)}}\ar[u] &{\ensuremath{\non(\Null)}}\ar[uu] \\ \aleph_1\ar[u] } $}} \hspace*{\fill} \subfloat[\cite{KeShTa:1131,GKMS2}] {\resizebox{0.48\textwidth}{!}{$ \xymatrix@=2ex{ & & &\mathfrak{c} \\ {\ensuremath{\cov(\Null)}}\ar[r] &{\ensuremath{\non(\Meager)}}\ar[ddr] &*+[F.]{\phantom{\lambda}}\ar[r] &{\ensuremath{\cof(\Null)}}\ar[u] \\ & \mathfrak b\ar[ul] &\mathfrak d\ar@{=}[u] \\ {\ensuremath{\add(\Null)}}\ar[r] &*+[F.]{\phantom{\lambda}}\ar@{=}[u] &{\ensuremath{\cov(\Meager)}}\ar[r] &{\ensuremath{\non(\Null)}}\ar[ul] \\ \aleph_1\ar[u] } $}} \caption{\label{fig:cichonorders}The two known consistent orders where all the (non-dependent) values in Cicho\'n's diagram are pairwise different. (A) corresponds to the model in~\cite{GKS}, and (B) to the model in~\cite{KeShTa:1131} (both proven consistent in~\cite{GKMS2} without large cardinals). Each arrow can be $<$ or $=$ as desired.} \end{figure} We remark that the full power of GCH is not required in the Main Theorem, but we do need \emph{some} assumption on cardinal arithmetic in the ground model. See details in Section~\ref{sec:15}. In order to include $\mathfrak{s}$ and $\mathfrak{r}$ in our main result, we need a new preservation theorem for splitting families.
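For orientation, recall that $s$ \emph{splits} $a$ when both $a\cap s$ and $a\smallsetminus s$ are infinite; $\mathfrak{s}$ and $\mathfrak{r}$ are the characteristics associated with this relation. The following minimal sketch (purely illustrative; the concrete sets are our choice) samples the splitting relation on a finite initial segment of $\omega$:

```python
# "s splits a" means that both a ∩ s and a \ s are infinite.
# A finite computation can only sample this relation below a cutoff;
# the sets `evens` and `squares` are illustrative choices, not from the text.

def sample_split(a, s, bound):
    """Count the elements of a below `bound` landing inside and outside s."""
    inside = sum(1 for k in a if k < bound and k in s)
    outside = sum(1 for k in a if k < bound and k not in s)
    return inside, outside

evens = {k for k in range(10_000) if k % 2 == 0}
squares = {k * k for k in range(100)}  # all squares below 10^4

inside, outside = sample_split(squares, evens, 10_000)
# (inside, outside) == (50, 50): below 10^4 there are 50 even and 50 odd
# squares, consistent with "the evens split the squares".
```

Of course no finite computation can witness splitting; this only unpacks the definition on an initial segment.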
Previously, the following was known in the context of FS (finite support) iterations: \begin{itemize} \item[\cite{baudor}] Hechler forcing (for adding a dominating real) preserves splitting families witnessing the property $\mathsf{LCU}_{\mathbf{R}_{\mathrm{sp}}}(\kappa)$ for any uncountable regular $\kappa$ (see Section~\ref{sec:COB}). \item[\cite{JSsuslin}] Assuming CH, any FS iteration of Suslin ccc posets forces that the ground model reals form a splitting family. \end{itemize} In this paper we will use a splitting family obtained by a FS product of Hechler-type posets (cf.~\cite{Hechlermad}) which we call $\mathbb{G}_\mathbf{B}$; the support of $\mathbb{G}_\mathbf{B}$ is a graph $\mathbf{B}$ of size $\aleph_1$ with certain homogeneity properties. We then show that this splitting family is preserved by certain FS iterations, which we will call ``symmetric Suslin-$\lambda$-small''. (Every FS iteration of Suslin ccc posets with parameters in the ground model is such an iteration, but our application will not use such ``full'' Suslin ccc forcings.) Similar preservation techniques have appeared in different contexts. 
For instance, concerning preservation of mad (maximal almost disjoint) families, Kunen~\cite{kunen80} constructed, under CH, a mad family that can be preserved by Cohen posets; afterwards, Steprans~\cite{steprans} showed that, after adding $\omega_1$-many Cohen reals, there is a mad family of size $\aleph_1$ that can be preserved in further Cohen extensions; Fischer and Brendle~\cite{VFJB11} constructed a Hechler-type poset $\mathbb{H}_A$ with support (any uncountable set) $A$ that adds a mad family indexed by $A$, which can be preserved not only in further Cohen extensions but after other concrete FS iterations, thus generalizing Steprans' result because $\mathbb{H}_{\omega_1}=\mathbb{C}_{\omega_1}$; \cite{FFMM,mejiavert} showed that any such mad family added by $\mathbb{H}_A$ can be preserved by some general type of FS iterations, but the most general result so far was shown in~\cite{diegoetal}: Any $\kappa$-$\mathrm{Fr}$-Knaster poset preserves $\kappa$-strong-$\mathbf{Md}$-families (with $\kappa$ uncountable regular; the mad family added by $\mathbb{H}_\kappa$ is of such type). There are deep technical differences between the mad family added by this $\mathbb{H}_A$, and the construction of a splitting family in this paper: No structure is needed on $A$, and because of this it is clear that Hechler's posets satisfy $\mathbb{H}_A\lessdot\mathbb{H}_B$ whenever $A\subseteq B$; but we cannot guarantee $\mathbb{G}_{\mathbf{B}_0}\lessdot\mathbb{G}_\mathbf{B}$ for our posets, whenever $\mathbf{B}_0$ is a subgraph of~$\mathbf{B}$. Also, $\mathbb{G}_\mathbf{B}$ itself does not add a splitting family, but it just adds a set of Cohen reals $\{\eta_a : a\in \mathbf{B}\}$ \emph{over the ground model} (recall that we do not have intermediate extensions by restricting the support $\mathbf{B}$). 
Hence, the FS product (or iteration, which is the same, as the poset $\mathbb{G}_\mathbf{B}$ is absolute) of size $\kappa$ of such posets adds a splitting family of size $\kappa$ (witnessing $\mathsf{LCU}_{\mathbf{R}}(\kappa)$) formed by the previously mentioned Cohen reals. It is clear that just adding $\kappa$ many Cohen reals produces a splitting family satisfying $\mathsf{LCU}_{\mathbf{R}}(\kappa)$, but we need to use FS support products of $\kappa$ many $\mathbb{G}_\mathbf{B}$ (with $\mathbf{B}$ of size $\aleph_1$, instead of just one $\mathbb{G}_{\mathbf{B}'}$ with $\mathbf{B}'$ of size $\kappa$), and we need the graph structure on $\mathbf{B}$, to be able to guarantee the preservation of the new splitting family. The forcing structure is very important here because an isomorphism of names argument is required for this preservation. The strategy to prove the main theorem is similar to Theorem~\ref{GKMSmain}. We first show how to construct a ccc poset that forces distinct values for the cardinals on the left side of Cicho\'n's diagram, including some of the other cardinal characteristics (like $\mathfrak{s}$ in this case). Afterwards, methods from~\cite{GKMS2,GKMS1} are applied to this initial forcing to get the poset for the main theorem. \subsection*{Annotated contents.} \ \smallskip \noindent\textbf{\S\ref{sec:graph}} We show how to construct, in ZFC, a \emph{suitable $2$-graph}. This is the type of graph we use as support for $\mathbb{G}_\mathbf{B}$.\smallskip \noindent\textbf{\S\ref{sec:COB}} The $\mathsf{LCU}$ and $\mathsf{COB}$ properties are reviewed from \cite{GKS,GKMS2,GKMS1}. These describe \emph{strong witnesses} to cardinal characteristics associated with a definable relation on the reals. Examples of such cardinal characteristics are the Cicho\'n-characteristics as well as $\mathfrak{s}$ and $\mathfrak{r}$. 
\smallskip \noindent\textbf{\S\ref{sec:splpres}} We introduce the forcing $\mathbb{G}_\mathbf{B}$, which has as support a suitable $2$-graph $\mathbf{B}$. We look at FS iterations of ccc posets, in general, whose initial part is a FS product of posets of the form $\mathbb{G}_\mathbf{B}$ where $\mathbf{B}$ is in the ground model. We define \emph{$\lambda$-small history iterations} (where on a dense set, conditions have ${<}\lambda$-sized \emph{history}), as well as \emph{symmetric} iterations, and show that symmetric $\lambda$-small history iterations allow us to control $\mathfrak{s}$ (and later also $\mathfrak{r}$). \smallskip \noindent\textbf{\S\ref{sec:main1}} We define \emph{Suslin $\lambda$-small} iterations, which are $\lambda$-small history iterations, and give consequences of these notions, as well as sufficient conditions to get symmetric ones. \smallskip \noindent\textbf{\S\ref{sec:left}} Closely following~\cite{GKS}, we construct a symmetric Suslin-$\lambda$-small iteration $\mathbb{P}^0$ that separates the cardinals on the left-hand side of the diagram, with ${\ensuremath{\cov(\Meager)}}=\mathfrak{c}$ and $\mathfrak{s}=\mathfrak{p}$.\smallskip \noindent\textbf{\S\ref{sec:15}} We show how the tools of~\cite{GKMS2,GKMS1} can be applied to $\mathbb{P}^0$, resulting in a forcing that gives the main theorem.\smallskip \noindent\textbf{\S\ref{sec:disc}} We discuss some open questions related to this work. \section{Suitable 2-graphs}\label{sec:graph} In this section we define and construct suitable $2$-graphs. \begin{definition}\label{DefSbG} Say that $\mathbf{B}:=\langle B,R_0,R_1\rangle$ is a \emph{$2$-edge-labeled graph}, abbreviated \emph{$2$-graph}, if \begin{enumerate}[(i)] \item $R_0$ and $R_1$ are irreflexive symmetric relations on $B$, \item $R_0\cap R_1=\emptyset$. \end{enumerate} In other words: Between two nodes $x$ and $y$ there is at most one edge, with color $0$ or $1$. See the example in Figure~\ref{fig:2graph}(\textsc{a}).
Concerning $2$-graphs, we define the following notions. \begin{enumerate}[(1)] \item If $A\subseteq B$, denote $\mathbf{B}|_A:=\langle A,R_0|_A,R_1|_A\rangle$ where $R_e|_A:=R_e\cap(A\times A)$. \item A partial function (or \emph{coloring}) $\eta$ from $B$ into $2$ \emph{respects $\mathbf{B}$} if $\{\eta(a),\eta(b)\}\neq\{e\}$ whenever $e\in2$, $a,b\in \dom\eta$ and $aR_e b$. \end{enumerate} See Figure~\ref{fig:2graph} for examples. The $2$-graph of Figure~\ref{fig:2graphcolor} does not have a coloring (with full domain) respecting it. A $2$-graph $\mathbf{B}$ is a \emph{suitable $2$-graph (S2G)} if it satisfies, in addition, \begin{enumerate}[(i)] \setcounter{enumi}{2} \item $|B|=\aleph_1$, \item for $e\in\{0,1\}$, $B$ contains some $R_e$-complete subset of size $\aleph_1$, \item if $a\in B$ and $e\in\{0,1\}$ then there is some $\eta: B\to 2$ respecting $\mathbf{B}$ such that $\eta(a)=e$. \item For any $a,b\in B$, there is some automorphism $f$ of $\mathbf{B}$ such that $f(a)=b$. \end{enumerate} \end{definition} \begin{figure} \subfloat[] {\resizebox{0.31\textwidth}{!}{ {\fontsize{13pt}{16pt}\selectfont\input{graph1wide.pdf_tex}} }} \hspace*{\fill} \subfloat[] {\resizebox{0.31\textwidth}{!}{ {\fontsize{13pt}{16pt}\selectfont\input{graph2.pdf_tex}} }} \hspace*{\fill} \subfloat[] {\resizebox{0.31\textwidth}{!}{ {\fontsize{13pt}{16pt}\selectfont\input{graph3.pdf_tex}} }} \caption{(\textsc{a}) A (finite) $2$-graph. 
The coloring in (\textsc{b}) does not respect the graph; the one in (\textsc{c}) does.} \label{fig:2graph} \end{figure} \begin{figure} {\fontsize{11pt}{14pt}\selectfont\input{graph4squashed.pdf_tex}} \caption{A finite $2$-graph which cannot be respected by any coloring.} \label{fig:2graphcolor} \end{figure} Properties (iv) and (vi) imply for all $b\in B$ and $e\in\{0,1\}$: \begin{equation}\label{eq:S2G} \text{$b$ is contained in an uncountable $R_e$-complete subgraph of $\mathbf{B}$.} \end{equation} \begin{remark} In our applications, we only need the following weakening of property (v): for any $t\in[B]^{<\aleph_0}$, $a\in t$ and $e\in 2$, there is some $\eta :t\to 2$ that respects $\mathbf{B}$ such that $\eta(a)=e$. The only place where (the weakening of) (v) is used is in the proof of Lemma~\ref{GBprop1}(b). \end{remark} The following definition and lemma will be used to construct a suitable $2$-graph: \begin{definition} Fix a $2$-graph $\mathbf{B}$. \begin{enumerate} \item A finite partial function $s: B\to 2$ with $|{\dom s}|\geq 2$ (which we may also call ``finite positive atomic type'') is \emph{realized} by $z\in B$ if $x R_{s(x)} z$ for any $x\in\dom s$. \item Let $D\subseteq B$. We say $D\preceq^- B$ if any such type $s: D\to 2$ which is realized in $B$ is also realized in $D$. \end{enumerate} \end{definition} Note that we require this only for ``types'' with at least two edges. So when checking $D\preceq^- B$ we can ignore all $b\in B$ which have at most one edge to elements of $D$. \begin{lemma}\label{extcolor} Let $\mathbf{B}=\langle B,R_0,R_1\rangle$ be a $2$-graph and $A\preceq^- B$. Then: \begin{enumerate}[(a)] \item If $\eta: A\to 2$ respects $\mathbf{B}$ and $c\in B\smallsetminus A$, then $\eta$ can be extended to some $\eta':A\cup\{c\}\to 2$ that respects $\mathbf{B}$.
\item If in addition $\mathbf{B}|_A$ satisfies (v) of Definition~\ref{DefSbG} then, whenever $c\in B\smallsetminus A$ and $e\in 2$, there is some $\eta':A\cup\{c\}\to 2$ that respects $\mathbf{B}$ such that $\eta'(c)=e$. \item Now assume that all elements of $B\smallsetminus A$ have edges only to elements of $A$, i.e., $(B\smallsetminus A)^2 \cap (R_0 \cup R_1) = \emptyset$. Then under the assumptions of (a), we can extend $\eta$ to some $\eta'': B\to 2$; and under (b) we can find some $\eta'': B\to 2$ with $\eta''(c)=e$. \end{enumerate} \end{lemma} \begin{proof} \textbf{(a)}: By contradiction: Assume that $\eta$ cannot be extended in such a way, which means that there are $x\neq y$ in $A$ such that $xR_0 c$, $yR_1 c$, $\eta(x)=0$ and $\eta(y)=1$. Since $A\preceq^-B$, there is some $z\in A$ such that $x R_0 z$ and $yR_1 z$, but this contradicts that $\eta$ respects $\mathbf{B}$. \textbf{(b)}: Assume $x_0 R_{1-e} c$ for some $x_0\in A$. Then, by Definition~\ref{DefSbG}(v), there is some $\eta:A\to 2$ respecting $\mathbf{B}$ such that $\eta(x_0)=1-e$. By (a) we can extend it to $\eta':A\cup\{c\}\to 2$, and $\eta'(c)$ has to be $e$. So now we assume that $c$ only has $e$-connections to $A$ (if any). It is enough to show that there is some $\eta$ coloring $A$ which assigns $1-e$ to all neighbours of $c$ in $A$: Then we can again extend it by setting $\eta'(c)=e$. Say that $p:w_p\to A$ is a \emph{$0$-$1$-path} if it satisfies \begin{enumerate}[(i)] \item $0<w_p\leq\omega$, \item $p(0) R_e c$, \item if $n+1<w_p$ and $n\equiv i\mod 2$ then $p(n) R_{|1-e-i|} p(n+1)$, that is, \[p(0)\ R_{1-e}\ p(1)\ R_e\ p(2)\ R_{1-e}\ p(3)\ \ldots.\] \end{enumerate} It is clear that, whenever $p:w_p\to A$ is a $0$-$1$-path, then there is a unique $\eta_p:\ran p\to 2$ respecting $\mathbf{B}$ such that $\eta_p(p(0))=1-e$. Some such coloring exists, as we assume Definition~\ref{DefSbG}(v) for $\mathbf{B}|_A$.
Uniqueness is clear: as $p(0)$ gets color $1-e$, $p(1)$ has to have color $e$, etc., i.e.\ $\eta_p(p(n))=e$ iff $n$ is odd. Note that \begin{equation}\label{eq:bakjwet} \eta_p(p(n))=j\text{ implies that the edge from }p(n)\text{ to }p(n-1)\text{ has color }1-j \end{equation} (where we set $p(-1):=c$). Let $A'\subseteq A$ be the union of (the ranges of) all $0$-$1$-paths. We first show that there is a unique $\eta_c: A'\to 2$ respecting $\mathbf{B}$ such that $\eta_c(x)=1-e$ whenever $x R_e c$. Uniqueness is clear: Each node in $A'$ lies on a $0$-$1$-path, which determines its color. Set $\eta_c$ to be the union of the $\eta_p$ for all $0$-$1$-paths $p$. So it is enough to show that $\eta_c$ is a function and that it respects $\mathbf{B}$. $\eta_c$ is a function: Assume that $p,q$ are $0$-$1$-paths, $m< w_p$ and $n<w_q$ and $p(m)=q(n)$. If $p(0)=q(0)$ then there is some $\eta_0:A\to 2$ respecting $\mathbf{B}$ with $\eta_0(p(0))=1-e$, and this $\eta_0$ must extend both $\eta_p$ and $\eta_q$, hence $\eta_p(p(m))=\eta_q(q(n))=\eta_0(p(m))$. So assume $p(0)\neq q(0)$, so in particular $p(0) R_e c$ and $q(0) R_e c$. As $A\preceq^- B$, there is some $z\in A$ such that $p(0) R_e z$ and $q(0) R_e z$. We can choose $\eta_1:A\to 2$ respecting $\mathbf{B}$ such that $\eta_1(z)=e$. This implies $\eta_1(p(0))=\eta_1(q(0))=1-e$, hence $\eta_1$ extends both $\eta_p$ and $\eta_q$, so $\eta_p(p(m))=\eta_q(q(n))=\eta_1(p(m))$. $\eta_c$ respects $\mathbf{B}$: Assume towards a contradiction that there are $0$-$1$-paths $p,q$, $m< w_p$, $n<w_q$ and $j<2$ such that $\eta_c(p(m))=\eta_c(q(n))=j$ and $p(m) R_j q(n)$. It cannot be that $p(0)=q(0)$ because there is some $\eta_0:A\to 2$ respecting $\mathbf{B}$ with $\eta_0(p(0))=1-e$, and such $\eta_0$ must extend both $\eta_p$ and $\eta_q$. Hence $p(0)\neq q(0)$. 
As before, $A\preceq^- B$ implies that there is some $z\in A$ such that $p(0) R_e z$ and $q(0) R_e z$, and there is an $\eta_1:A\to 2$ respecting $\mathbf{B}$ such that $\eta_1(z)=e$. Since $\eta_1$ extends both $\eta_p$ and $\eta_q$, $\eta_1(p(m))=\eta_1(q(n))=j$ and $p(m) R_j q(n)$, a contradiction. It remains to be shown that $\eta_c$ can be extended to all of $A$. For this, choose any $\eta_2:A\to 2$ respecting $\mathbf{B}$, and define $\eta:A\to 2$ by $\eta(x):=\eta_c(x)$ if $x\in A'$, or $\eta(x):=\eta_2(x)$ otherwise. We claim that $\eta$ respects $\mathbf{B}$. Assume otherwise, i.e., there are $x\in A'$, $y\in A\smallsetminus A'$ and $j<2$ such that $xR_j y$ and $\eta(x)=\eta(y)=j$. Since $x\in A'$, there is some $0$-$1$-path $p$ such that $x=p(n)$. By \eqref{eq:bakjwet} $\eta_c(x)=j$ implies that the edge between $p(n)$ and $p(n-1)$ is $1-j$, but this means that we can extend $p{\restriction} n$ to $y$ and get another $0$-$1$-path, a contradiction to $y\notin A'$. \textbf{(c)}: First note that, whenever $A\subseteq A'\subseteq B$, $A'\preceq^-B$: Assume $b\in B\smallsetminus A'$. As there are no connections between $b$ and $A'\smallsetminus A$, any finite type over $A'$ realized by $b$ is a type over $A$, which is realized in $A$ as $A\preceq^- B$. Therefore, we can construct $\eta''$ by Zorn's Lemma or by induction (starting with a suitable $\eta'$ as in (b), if required). \end{proof} \begin{theorem}\label{SbGexists} There exists a suitable $2$-graph. \end{theorem} \begin{proof} We are going to construct, using forcing notation,\footnote{Equivalently we could formulate it as an inductive construction, taking care of $\aleph_1$-many requirements in $\omega_1$-many steps.} two relations $R_0^*$ and $R_1^*$ on $\omega_1$ such that $\langle \omega_1,R^*_0,R^*_1\rangle$ becomes a S2G. 
Define the poset $\mathbb{P}$ whose conditions are tuples $p=\langle B^p,R^p_0,R^p_1,W^p_0,W^p_1,L^p,A^p\rangle$ satisfying the following: \begin{enumerate}[({C}1)] \item $\mathbf{B}^p:=\langle B^p,R_0^p,R_1^p\rangle$ is a $2$-graph with $B^p\subseteq\omega_1$ countable. \item For each $e\in2$, $W^p_e\subseteq B^p$ is infinite and $R^p_e$-complete, and $W^p_0\cap W^p_1=\emptyset$. \item $L^p=\{\eta^p_{a,e}:(a,e)\in B^p\times 2\}$ where $\eta^p_{a,e}: B^p\to 2$ respects $\mathbf{B}^p$ and $\eta^p_{a,e}(a)=e$, for any $(a,e)\in B^p\times 2$. \item $A^p=\{f^p_{a,b}:(a,b)\in C^p\}$ such that $C^p\subseteq B^p\times B^p$ and, for any $(a,b)\in C^p$: \begin{enumerate}[({F}1)] \item $f^p_{a,b}:D^p_{a,b}\to D^p_{a,b}$ is a bijection, \item $a,b\in D^p_{a,b}$ and $f^p_{a,b}(a)=b$, \item for any $x,y\in D^p_{a,b}$ and $e\in2$, $xR^p_e y$ iff $f^p_{a,b}(x)R^p_e f^p_{a,b}(y)$, \item $D^p_{a,b}\preceq^-B^p$. \end{enumerate} \end{enumerate} Order $\mathbb{P}$ by $q\leq p$ iff the following is satisfied: \begin{enumerate}[({O}1)] \item $\mathbf{B}^p$ is a 2-subgraph of $\mathbf{B}^q$, and $B^p\preceq^- B^q$, \item $C^p\subseteq C^q$ and $W^q_e\cap B^p=W^p_e$ for $e\in2$, \item for any $a\in B^p$ and $e\in 2$, $\eta^p_{a,e}\subseteq\eta^q_{a,e}$, and \item for any $(a,b)\in C^p$, $f^p_{a,b}\subseteq f^q_{a,b}$. \end{enumerate} Note that $\mathbb{P}\neq\emptyset$. Indeed, choose disjoint $W^\bullet_0,W^\bullet_1\subseteq\omega_1$ of size $\aleph_0$, and define $\mathbf{B}^\bullet:=\langle B^\bullet,R^\bullet_0,R^\bullet_1\rangle$ where $B^\bullet:=W^\bullet_0\cup W^\bullet_1$ and, for $e\in 2$, $x R^\bullet_e y$ iff $x\neq y$ and $x,y\in W^\bullet_e$. It is easy to construct an $L^\bullet$ such that $\langle B^\bullet,R^\bullet_0,R^\bullet_1,W^\bullet_0,W^\bullet_1,L^\bullet,\emptyset\rangle$ is a condition in $\mathbb{P}$. Recall that, for any $\sigma$-closed poset and arbitrary $\aleph_1$-many dense subsets, there is a filter intersecting these dense sets.
So, after showing that $\mathbb{P}$ is $\sigma$-closed, we can obtain a suitable $2$-graph from a filter intersecting suitable dense sets.\smallskip \noindent\emph{$\mathbb{P}$ is $\sigma$-closed:} Let $\langle p_n:n<\omega\rangle$ be a decreasing sequence of conditions in $\mathbb{P}$. Denote $B^{p_n}=B^n$, $R^{p_n}_0=R^n_0$, and so on. Set $B:=\bigcup_{n<\omega}B^n$, $R_e:=\bigcup_{n<\omega}R^n_e$ and $W_e:=\bigcup_{n<\omega}W^n_e$ for $e\in\{0,1\}$, $\mathbf{B}:=\langle B,R_0,R_1\rangle$, and $C:=\bigcup_{n<\omega}C^n$. For $(a,b)\in C$ set $f_{a,b}:=\bigcup_{n\geq m}f^n_{a,b}$ where $m=\min\{n<\omega: (a,b)\in C^n\}$. For $a\in B$ and $e\in 2$, set $\eta_{a,e}:=\bigcup_{n\geq m}\eta^n_{a,e}$ where $m=\min\{n<\omega: a\in B^n\}$. Put $L:=\{\eta_{a,e}:(a,e)\in B\times2\}$, $A:=\{f_{a,b}:(a,b)\in C\}$, and $q:=\langle B,R_0,R_1,W_0,W_1,L,A\rangle$. It is easy to see that $q\in\mathbb{P}$ and that it is stronger than each $p_n$.\smallskip The following sets are dense in $\mathbb{P}$:\smallskip \noindent\textbf{(I) }\emph{$D_{a^*}:=\{p\in\mathbb{P}: a^*\in B^p\}$ for any $a^*\in\omega_1$.} Let $p\in\mathbb{P}$ and assume $a^*\notin B^p$. We define $q\le p$ in $D_{a^*}$ as follows: \begin{enumerate}[(i)] \item $B^q:=B^p\cup\{a^*\}$; \item $R_e^q:=R_e^p$ and $W^q_e:=W^p_e$ for $e\in\{0,1\}$; \item $C^q:=C^p$, \item $f^q_{a,b}:=f^p_{a,b}$ for all $(a,b)\in C^p$. \end{enumerate} Obviously we can extend each old $\eta^p_{a,e}$ to $B^q$ (by assigning an arbitrary value to $a^*$); picking, for each $e\in\{0,1\}$, such an extension with value $e$ at $a^*$, we get the required $\eta^q_{a^*,e}$. It is clear that $B^p\preceq^- B^q$, as the new node has no edges. This implies that $D^q_{a,b}=D^p_{a,b}\preceq^- B^q$, as $\preceq^-$ is transitive (the same argument will apply to the following dense sets as well).
\smallskip \noindent\textbf{(II) }\emph{$E_{a^*,b^*}:=\{p\in\mathbb{P}:(a^*,b^*)\in C^p\}$ for any $a^*,b^*\in\omega_1$.} Without loss of generality assume that $p\in\mathbb{P}$ and $a^*,b^*\in B^p$, but $(a^*,b^*)\notin C^p$. We want to find some $q\leq p$ in $E_{a^*,b^*}$. When $a^*=b^*$, it is enough to set $f^q_{a^*,b^*}=\mathrm{id}_{B^p}$, $C^q:=C^p\cup\{(a^*,a^*)\}$, and leave the other components as in $p$. So assume that $a^*\neq b^*$. The set $B^q$ will be the union of $\mathbb Z$ many copies of $B^p$, where $b^*$ in the $m$-th copy is identified with $a^*$ in the $m+1$-th copy. In more detail: Denote $B_0:=B^p$. Find a sequence $\langle B'_m : m\in\mathbb{Z}\smallsetminus\{0\}\rangle$ of pairwise disjoint subsets of $\omega_1$ of size $\aleph_0$ and disjoint to $B_0$. Set $a_0:=a^*$, $a_1:=b^*$, and for $m\in\mathbb{Z}\smallsetminus\{0,1\}$ choose pairwise different $a_m\in\omega_1\smallsetminus \big(B_0\cup\bigcup_{m\in\mathbb{Z}\smallsetminus\{0\}}B'_m\big)$. For $m\neq0$ put $B_m:=B'_m\cup\{a_m,a_{m+1}\}$. Note that, for any $m,n\in\mathbb{Z}$, if $|m-n|>1$ then $B_m\cap B_n=\emptyset$, and $B_m\cap B_{m+1}=\{a_{m+1}\}$. Choose a bijection $g_m:B_m\smallsetminus\{a_{m+1}\}\to B_{m+1}\smallsetminus\{a_{m+2}\}$ such that $g_m(a_m)=a_{m+1}$, and let $f^q_{a^*,b^*}=f:=\bigcup_{m\in\mathbb{Z}}g_m$, which is a bijection from $B^q_{(a^*,b^*)}:=\bigcup_{m\in\mathbb{Z}}B_m$ onto itself. Define for $e\in 2$ and $x,y\in B^q_{(a^*,b^*)}$, $x R^q_e y$ iff: \begin{center} $x\neq y$, they belong to the same (unique) $B_m$ and $f^{(-m)}(x) R^p_e f^{(-m)}(y)$. \end{center} It is clear that $B^p=B_0\preceq^- B^q$, as any $x\in B^q\smallsetminus B^p$ has connections to at most one node in $B^p$ (to either $a^*$ or $b^*$). We set $W^q_e:=W^p_e$ and $C^q:=C^p\cup\{(a^*,b^*)\}$, and leave the partial automorphisms in $p$ unchanged, i.e., $f_{a,b}^q:=f^p_{a,b}$ for $(a,b)\in C^p$. 
It is now enough to show that we can extend every old $\eta^p_{a,e}$ to $\eta^q_{a,e}:B^q\to 2$, and find for each new $b\in B^q$ and $e\in 2$ a suitable $\eta^q_{b,e}$. Assume $\eta_0:=\eta^p_{a,e}\in L^p$. We extend $\eta_0$ in the following way: Let $e_1:=\eta_0(a_1)$. Set $\eta_1:=\eta^p_{a^*,e_1}\in L^p$. We now extend $\eta_0$ to $B_1$ by setting $\eta(f(x)):=\eta_1(x)$ for $x\in B_0$; and continue by induction (to the right and also to the left). In more detail: define $\eta_n:B_n\to 2$ and $\eta_{-n}:B_{-n}\to 2$ by recursion on $n\in\omega$ as $\eta_{n+1}:=\eta^p_{a^*,\eta_n(a_{n+1})}\circ f^{-(n+1)}$ and $\eta_{-(n+1)}:=\eta^p_{b^*,\eta_{-n}(a_{-n})}\circ f^{n+1}$ (we already have $\eta_0$ from the start). All these functions are compatible, so we can define $\eta^q_{a,e}:=\bigcup_{m\in\mathbb{Z}}\eta_m$, and it is clear that it respects $\mathbf{B}^q$. Similarly, we get new $\eta^q_{a,e}$ for $a\in B^q\smallsetminus B^p$. Concretely, $\eta^q_{a,e}:=\eta^q_{f^{-m}(a),e}\circ f^{-m}$ where $m$ is the one with minimum absolute value such that $a\in B_m$, and $\eta^q_{f^{-m}(a),e}$ is defined as in the previous paragraph. \smallskip \noindent\textbf{(III)} \emph{$E'_{a^*,b^*,c^*}:=\{p\in\mathbb{P}:(a^*,b^*)\in C^p,\ c^*\in D^p_{a^*,b^*}\}$ for any $a^*,b^*,c^*\in\omega_1$.} Without loss of generality, assume $p\in\mathbb{P}$, $(a^*,b^*)\in C^p$ and $c^*\in B^p\smallsetminus D^p_{a^*,b^*}$. Denote $D^p:=D^p_{a^*,b^*}$. Let $c_0:=c^*$ and, for $m\in\mathbb{Z}\smallsetminus\{0\}$, choose pairwise different $c_m\in\omega_1\smallsetminus B^p$. Set $D^q:=D^p\cup\{c_m:m\in\mathbb{Z}\}$ and $f:D^q\to D^q$ extending $f^p_{a^*,b^*}$ such that $f(c_m):=c_{m+1}$. Define $q$ as follows: \begin{enumerate}[(i)] \item $B^q:=B^p\cup\{c_m:m\in\mathbb{Z}\smallsetminus\{0\}\}$; \item A new node $c_n$ has an $R_e$-edge to $f^{n}(x)$ iff $c_0=c^*$ has an $R_e$-edge to $x$.
\item $W^q_e:=W^p_e$; \item $C^q:=C^p$; \item $f^q_{a^*,b^*}=f$ (with $D^q_{a^*,b^*}=D^q$), and the other partial automorphisms are unchanged (i.e., for $(a,b)\in C^q\smallsetminus\{(a^*,b^*)\}$, $f^q_{a,b}:=f^p_{a,b}$). \end{enumerate} $B^p\preceq^- B^q$: Let $s:B^p\to 2$ be a type realized by $c_n$ ($n\neq 0$). Then actually $\dom(s)\subseteq D^p$, as $c_n$ only has connections to $D^p$. As $D^p\preceq^- B^p$, and the type $s':=s\circ f^{n}$ is realized by $c^*=c_0$, we know that $s'$ is realized by some $z\in D^p$. Then $s$ is realized by $f^{n}(z)\in D^p\subseteq B^p$. $D^q_{a^*,b^*}=D^q\preceq^- B^q$: If $x\in B^q\smallsetminus D^q$, then $x\in B^p$ and has edges only to $B^p$. So any $s: D^q\to 2$ realized by $x$ has domain in $D^p$, and as $D^p\preceq^- B^p$, this $s$ is realized in $D^p\subseteq D^q$. To see that we can extend all old $\eta^p_{a,e}$ to $\eta^q_{a,e}:B^q\to 2$, and that we can find $\eta^q_{c_m,e}$ for $e<2$ and $m\neq 0$, it is enough to note that all the assumptions in Lemma~\ref{extcolor}(c) are met (where we use $A=B^p$ and $B=B^q$). \smallskip \noindent\textbf{(IV) }\emph{$E''_{\alpha,e}:=\{p\in\mathbb{P}:\exists b^*\in W^p_e(b^*\geq\alpha)\}$ for $\alpha<\omega_1$ and $e\in 2$.} Choose $b^*\in\omega_1\smallsetminus(B^p\cup\alpha)$ and define $q$ such that \begin{enumerate}[(i)] \item $B^q:=B^p\cup\{b^*\}$, and the new node $b^*$ is $R_e$-connected to exactly the nodes in $W^p_{e}$, and has no $R_{1-e}$-connections. \item $W^q_e:=W^p_e\cup\{b^*\}$ and $W^q_{1-e}:=W^p_{1-e}$, \item $C^q:=C^p$, \item $f^q_{a,b}:=f^p_{a,b}$ for all $(a,b)\in C^p$. \end{enumerate} $B^p\preceq^-B^q$: Let $s: B^p\to 2$ be realized by $b^*$. This implies that $\dom s\subseteq W^p_e$ and $s(x)=e$ for all $x\in\dom s$. Since $W^p_e$ is infinite, there is some $z\in W^p_e\subseteq B^p$ such that $x R^p_e z$ for all $x\in\dom s$; hence $s$ is realized in $B^p$.
Given any old $\eta^p_{a,e}$, we can extend it to a function $\eta^q_{a,e}$ with domain $B^q$ by Lemma~\ref{extcolor}(a); and for arbitrary $e\in 2$, we get $\eta^q_{b^*,e}$ by Lemma~\ref{extcolor}(b). (Again, we use $A=B^p$ and $B=B^q$.) \medskip Let $\mathcal{D}$ be the collection of all dense sets defined above. Since $\mathbb{P}$ is $\sigma$-closed and $|\mathcal{D}|=\aleph_1$, there is some filter $G\subseteq\mathbb{P}$ intersecting all the dense sets in $\mathcal{D}$. Set $R^*_e:=\bigcup_{p\in G}R^p_e$ and $U_e:=\bigcup_{p\in G}W^p_e$ for $e\in\{0,1\}$. Since $G\cap D_a\neq\emptyset$ and $G\cap E_{a,b}\neq\emptyset$ for any $a,b\in\omega_1$, we have $\bigcup_{p\in G}B^p=\omega_1$ and $\bigcup_{p\in G}C^p=\omega_1\times\omega_1$. Set $\mathbf{B}:=\langle\omega_1,R^*_0,R^*_1\rangle$, which is a $2$-graph. It is clear that $\mathbf{B}^p$ is a $2$-subgraph of $\mathbf{B}$ for any $p\in G$. On the other hand, $E''_{\alpha,e}\cap G\neq\emptyset$ for all $\alpha<\omega_1$ and $e\in 2$, which implies that $U_e$ is an $R^*_e$-complete subset of $\omega_1$ of size $\aleph_1$. Even more, $U_e\cap B^p=W^p_e$ for any $p\in G$, and $U_0\cap U_1=\emptyset$. For $a\in \omega_1$ and $e\in 2$ set $\eta_{a,e}:=\bigcup\{\eta^p_{a,e}: a\in B^p,\ p\in G\}$. It is routine to check that $\eta_{a,e}:\omega_1\to 2$ respects $\mathbf{B}$. This guarantees (v) of Definition~\ref{DefSbG}. For $a,b\in\omega_1$, set $f_{a,b}:=\bigcup\{f^p_{a,b}:(a,b)\in C^p,\ p\in G\}$. Since $G\cap E'_{a,b,c}\neq\emptyset$ for any $c\in\omega_1$, $\bigcup_{p\in G}D^p_{a,b}=\omega_1$ and $f_{a,b}$ is a $\mathbf{B}$-automorphism. This shows property (vi) of Definition~\ref{DefSbG}. Therefore, $\mathbf{B}$ is a S2G. \end{proof} \section{Cardinal Characteristics, COB and LCU}\label{sec:COB} Many classical characteristics can be defined within the framework of relational systems as in e.g.~\cite{MR1234291,Blass}.
Say that $\mathbf{R}:=\langle X,Y,R\rangle$ is a \emph{relational system} if $X$ and $Y$ are non-empty sets, and $R\subseteq X\times Y$ is a relation. The following cardinal characteristics are associated with $\mathbf{R}$.\smallskip $\mathfrak{d}(\mathbf{R}):=\min\{|D|: D\subseteq Y\text{\ and }\forall x\in X\,\exists y\in D\,(x R y)\}$;\smallskip $\mathfrak{b}(\mathbf{R}):=\min\{|F|: F\subseteq X\text{\ and }\neg\exists y\in Y\,\forall x\in F\,(x R y)\}$.\medskip In this work, we are particularly interested in relational systems $\mathbf{R}$ such that \begin{enumerate}[({RS}1)] \item $X$ and $Y$ are subsets of Polish spaces $Z_0$ and $Z_1$, respectively, and absolute for transitive models of ZFC (e.g.\ they are analytic); \item $R\subseteq Z_0\times Z_1$ is absolute for transitive models of ZFC (e.g.\ analytic in $Z_0\times Z_1$). \end{enumerate} When these properties hold we say that $\mathbf{R}$ is a \emph{relational system of the reals}. In all the cases explicitly mentioned throughout this paper, $X$ and $Y$ are Polish spaces themselves and $R$ is Borel in $X\times Y$. In this case, we may harmlessly identify $X=Y=\omega^\omega$, and we call $\mathbf{R}$, or rather the characteristics $\mathfrak{b}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R})$, \emph{Blass-uniform} (cf.~\cite[\S2]{GKMS1}). \begin{example}\label{exp:blassunif}(\cite[2.2.2]{MR1234291} or \cite[\S4 \& \S5]{Blass}) The splitting number $\mathfrak{s}$ and the reaping number $\mathfrak{r}$ are Blass-uniform: Denote $\mathbf{R}_{\mathrm{sp}}:=\la2^\omega,[\omega]^{\aleph_0},R_{\mathrm{sp}}\rangle$ where $xR_{\mathrm{sp}} y$ iff $x{\upharpoonright}y$ is constant except in finitely many points of $y$.
Then $\mathfrak{s}=\mathfrak{b}(\mathbf{R}_{\mathrm{sp}})$ and $\mathfrak{r}=\mathfrak{d}(\mathbf{R}_{\mathrm{sp}})$.\footnote{It would be more natural to consider the relational system $\langle[\omega]^{\aleph_0},[\omega]^{\aleph_0},R\rangle$ where $xRy$ iff either $x\supseteq^* y$ or $\omega\smallsetminus x\supseteq^* y$, but $\mathbf{R}_{\mathrm{sp}}$ is more suitable in our proofs. It is not hard to see that both relational systems are Tukey-equivalent.} Also all Cicho\'n-characteristics are Blass-uniform. The Blass-uniform relational systems we use for these characteristics are (as in the constructions of Cicho\'n's Maximum) in some instances slightly different from the ``canonical'' ones. See e.g.~\cite[Ex.~2.16]{diegoetal},~\cite[Ex.~2.10]{modKST} and~\cite[\S1]{GKS} for the definition of the Blass-uniform relational systems corresponding to the Cicho\'n-characteristics. \end{example} As in \cite{GKMS2} we also look at relational systems $S=\langle S,S,\leq\rangle$ where $\leq$ is an upwards directed partial order on $S$. Here $\cp(S):=\mathfrak{b}(S)$ is the \emph{completeness of $S$}, and $\cf(S):=\mathfrak{d}(S)$ is the \emph{cofinality of $S$}. Recall that, whenever $S$ has no greatest element, $\cp(S)\leq\cf(S)$, and equality holds when the order is linear. The following notion is very useful for computing the values of cardinal characteristics (especially in forcing extensions). \begin{definition}[cf.~{\cite[\S1]{GKS}}]\label{def:COB} Fix a directed partial order $S=\langle S,\leq\rangle$ and a relational system $\mathbf{R}=\langle X,Y,R\rangle$.
Define the property:\medskip \noindent\textbf{Cone of bounds.}\\ $\mathsf{COB}_\mathbf{R}(S)$ means: There is a family $\bar{y}=\{y_i:i\in S\}\subseteq Y$ such that \[\forall x\in X\, \exists i_x\in S\, \forall j\geq i_x\, (x R y_j).\] When $L=\langle L,\leq\rangle$ is a linear order, we additionally define\medskip \noindent\textbf{Linear cofinally unbounded.}\\ $\mathsf{LCU}_\mathbf{R}(L)$ means: There is a family $\bar{x}=\{x_i : i\in L\}\subseteq X$ such that \[\forall y\in Y\,\exists i\in L\,\forall j\geq i\,(\neg(x_j R y)).\] \end{definition} In the following remarks we address very natural characterizations and consequences of these properties. \begin{remark}[Tukey connections and $\mathsf{COB}$]\label{rem:COB} Let $\bar{y}$ be a witness of $\mathsf{COB}_\mathbf{R}(S)$. By the definition of $\mathsf{COB}_\mathbf{R}(S)$ we have that the functions $f:X\to S$ and $g:S\to Y$, defined by $f(x):=i_x$ and $g(i):=y_i$, form a Tukey connection from $\mathbf{R}$ into $S$. So we conclude that \[\mathsf{COB}_\mathbf{R}(S)\text{\ holds iff }\mathbf{R}\leq_{\mathrm{T}} S,\] where $\leq_{\mathrm{T}}$ denotes the Tukey order. \end{remark} \begin{remark}[Duality and $\mathsf{LCU}$]\label{rem:LCU} Let $\mathbf{R}=\langle X,Y,R\rangle$ be a relational system. The \emph{dual of $\mathbf{R}$} is the relational system $\mathbf{R}^\perp:=\langle Y,X,R^\perp\rangle$ where $uR^\perp v \Leftrightarrow \neg(v R u)$. It is clear that $\mathfrak{d}(\mathbf{R}^\perp)=\mathfrak{b}(\mathbf{R})$ and $\mathfrak{b}(\mathbf{R}^\perp)=\mathfrak{d}(\mathbf{R})$.
Also, given a linear order $L$, \[\mathsf{LCU}_\mathbf{R}(L)\text{\ iff }\mathsf{COB}_{\mathbf{R}^\perp}(L).\] Hence, by Remark~\ref{rem:COB}, \[\mathsf{LCU}_\mathbf{R}(L)\text{\ iff }\mathbf{R}^\perp\leq_{\mathrm{T}} L.\] When $L$ has no greatest element, $L^\perp$ is Tukey-equivalent to $L$, so \[\mathsf{LCU}_\mathbf{R}(L)\text{\ iff }L\leq_{\mathrm{T}} \mathbf{R}.\] Although $\mathsf{LCU}$ is a particular case of $\mathsf{COB}$, they are used with different roles in our applications, so it is more practical to use different notations. \end{remark} As a direct consequence of these remarks: \begin{lemma}[cf.~{\cite[\S1]{GKS}}]\label{lem:COBbounds} Let $\mathbf{R}$ be a relational system, $S$ a directed partial order and let $L$ be a linear order without greatest element. Then \begin{enumerate}[(a)] \item $\mathsf{COB}_\mathbf{R}(S)$ implies $\cp(S)\leq\mathfrak{b}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R})\leq\cf(S)$. \item $\mathsf{LCU}_\mathbf{R}(L)$ implies $\mathfrak{b}(\mathbf{R})\leq\cp(L)=\cf(L)\leq\mathfrak{d}(\mathbf{R})$. \end{enumerate} \end{lemma} In our applications we aim to force $\mathsf{COB}_\mathbf{R}(S)$ and $\mathsf{LCU}_\mathbf{R}(L)$ for a given relational system of the reals $\mathbf{R}$; this will help us compute the value of $\mathfrak{b}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R})$ in generic extensions. For this purpose, the following variation of Definition~\ref{def:COB} is very practical. \begin{definition}[{\cite{GKMS2}}]\label{def:COBforcing} Let $\mathbf{R}=\langle X,Y,R\rangle$ be a relational system of the reals, $S=\langle S,\leq_S\rangle$ a directed partial order, $L=\langle L,\leq_L\rangle$ a linear order, and let $\mathbb{P}$ be a forcing notion. 
Define the following properties.\medskip \noindent$\mathsf{COB}_\mathbf{R}(\mathbb{P},S)$: There is a family $\dot{\bar{y}}=\{\dot{y}_i:i\in S\}$ of $\mathbb{P}$-names of members of $Y^{V^\mathbb{P}}$ such that, for any $\mathbb{P}$-name $\dot{x}$ of a member of $X^{V^\mathbb{P}}$ there is some $i\in S$ such that \[\Vdash_\mathbb{P} \forall j\geq_S i\, (\dot{x}R\dot{y}_j).\] \noindent$\mathsf{LCU}_\mathbf{R}(\mathbb{P},L)$: There is a family $\dot{\bar{x}}=\{\dot{x}_i:i\in L\}$ of $\mathbb{P}$-names of members of $X^{V^\mathbb{P}}$ such that, for any $\mathbb{P}$-name $\dot{y}$ of a member of $Y^{V^\mathbb{P}}$ there is some $i\in L$ such that \[\Vdash_\mathbb{P} \forall j\geq_L i\, (\neg(\dot{x}_jR\dot{y})).\] \end{definition} \begin{remark}\label{rem:COB2} Concerning the properties $\mathsf{COB}_\mathbf{R}(\mathbb{P},S)$ and $\mathsf{LCU}_\mathbf{R}(\mathbb{P},L)$, the relational system $\mathbf{R}$ (i.e., both base sets as well as the relation) is interpreted in the generic extension (this is why we required these objects to be definable), while $S$ and $L$ are taken as sets in the ground model (not interpreted). It is clear that $\mathsf{COB}_\mathbf{R}(\mathbb{P},S)$ implies $\Vdash_\mathbb{P}\mathsf{COB}_\mathbf{R}(S)$. Although the converse is not true in general, it holds in the cases we are interested in, when $\mathbb{P}$ is ccc and $\cp(S)$ is uncountable. More precisely, if $\cp(S)$ is uncountable and $\mathbb{P}$ is $\cp(S)$-cc then $\mathsf{COB}_\mathbf{R}(\mathbb{P},S)$ is equivalent to $\Vdash_\mathbb{P}\mathsf{COB}_\mathbf{R}(S)$. Moreover, $\mathbb{P}$ forces $\cp(S)^{V^\mathbb{P}}=\cp(S)^V$ and $\cf(S)^{V^\mathbb{P}}\leq\cf(S)^V$, so, by Lemma~\ref{lem:COBbounds}, in the generic extension $\mathsf{COB}_\mathbf{R}(S)$ implies $\cp(S)^V\leq\mathfrak{b}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R})\leq\cf(S)^V$.
Likewise, $\mathsf{LCU}_\mathbf{R}(\mathbb{P},L)$ implies $\Vdash_\mathbb{P}\mathsf{LCU}_\mathbf{R}(L)$, and the converse holds whenever $L$ has no greatest element, $\cf(L)$ is uncountable and $\mathbb{P}$ is $\cf(L)$-cc. However, the restriction ``$\cp(S)$ is uncountable and $\mathbb{P}$ is $\cp(S)$-cc'' is not required for the following result. \end{remark} \begin{lemma}[{\cite[Lemma~1.3]{GKMS2}}]\label{lem:COBforbd} Let $\mathbf{R}$ be a relational system of the reals, $S$ a directed partial order without greatest element, and let $\mathbb{P}$ be a forcing notion. If $\mu=\cp(S)^V$ and $\lambda=\cf(S)^V$, then \begin{enumerate}[(a)] \item $\mathsf{COB}_\mathbf{R}(\mathbb{P},S)$ implies $\Vdash_\mathbb{P}$``$\mu\leq\mathfrak{b}(\mathbf{R})$ and $\mathfrak{d}(\mathbf{R})\leq|\lambda|$". \item If $L=S$ is a linear order, then $\mathsf{LCU}_\mathbf{R}(\mathbb{P},L)$ implies \[\Vdash_\mathbb{P}\text{``}\mathfrak{b}(\mathbf{R})\leq|\lambda|\leq\lambda\leq\mathfrak{d}(\mathbf{R})\text{"}.\] \end{enumerate} \end{lemma} \section{Preserving splitting families with symmetric iterations}\label{sec:splpres} \subsection{The single forcing \texorpdfstring{$\mathbb{G}_\mathbf{B}$}{GB}} Using suitable $2$-graphs, we define a poset which will be used as a factor of the forcing adding the splitting families we aim to preserve. See Figure~\ref{fig:GB} for a graphic description of this forcing. \begin{definition}\label{DefGB} Let $\mathbf{B}=\langle B,R_0,R_1\rangle$ be a suitable $2$-graph. Define the forcing $\mathbb{G}_\mathbf{B}$ whose conditions are functions $p:F_p\times n_p\to\{0,1\}$ where $F_p\in[B]^{<\aleph_0}$ and $n_p<\omega$ (also demand that $F_p=\emptyset$ iff $n_p=0$). The order is defined by $q\leq p$ iff \begin{enumerate}[(i)] \item $p\subseteq q$, \item for each $k\in[n_p,n_q)$, the map $F_p\to 2$, $a\mapsto q(a,k)$ respects $\mathbf{B}$, that is, if $e\in\{0,1\}$, $a,b\in F_p$, and $a R_e b$, then $\{q(a,k),q(b,k)\}\neq\{e\}$.
\end{enumerate} For $a\in B$ denote by $\dot{\eta}_a$ the name of the generic real added at $a$, that is, $\mathbb{G}_\mathbf{B}$ forces that, for any $k<\omega$, $\dot{\eta}_a(k)=e$ iff $p(a,k)=e$ for some $p$ in the generic set. For $p\in\mathbb{G}_\mathbf{B}$ denote $\supp p:=F_p$. \end{definition} \begin{figure} \subfloat[] {\resizebox{0.45\textwidth}{!}{ \includegraphics{GBcond.pdf} }} \hspace*{\fill} \subfloat[] {\resizebox{0.45\textwidth}{!}{ \includegraphics{GBorder.pdf} }} \caption{A condition $p\in \mathbb{G}_\mathbf{B}$ is a binary function as in (A), whose domain is a finite square $F_p\times n_p$. The order is illustrated in (B): a condition $q$ is stronger than $p$ if $q$ extends $p$ and the new (horizontal) binary sequences between $n_p$ and $n_q$ respect $\mathbf{B}{\upharpoonright}F_p$.} \label{fig:GB} \end{figure} \begin{lemma}\label{GBprop1} Let $\mathbf{B}=\langle B,R_0,R_1\rangle$ be a suitable $2$-graph. Then: \begin{enumerate}[(a)] \item $\mathbb{G}_\mathbf{B}$ is $\sigma$-centered. \item For any $a\in B$, $\mathbb{G}_\mathbf{B}$ forces that $\dot{\eta}_a$ is Cohen over $V$. \item Any $p\in\mathbb{G}_\mathbf{B}$ forces that, for any $k\geq n_p$, the map $F_p\to 2$, $a\mapsto \dot{\eta}_a(k)$ respects $\mathbf{B}$, that is, if $e\in\{0,1\}$, $a,b\in F_p$ and $a R_e b$, then $\dot{\eta}_a(k)$ and $\dot{\eta}_b(k)$ cannot both be $e$ at the same time. \item\label{item:dafterall} Assume for $i\in\{1,2\}$: \begin{itemize} \item $e\in\{0,1\}$, $p_i\in\mathbb{G}_\mathbf{B}$, $c_i\in F_{p_i}$, $c_1 R_e c_2$, \item $\mathbb{Q}$ is a poset, $\mathbb{G}_\mathbf{B}\lessdot\mathbb{Q}$, \item $\dot b$ is a $\mathbb{Q}$-name of an infinite subset of $\omega$, \item $q_i\leq p_i$ in $\mathbb{Q}$ and $q_i\Vdash_\mathbb{Q}\dot\eta_{c_i}{\restriction}\dot b\equiv e$, \end{itemize} Then $q_1$ and $q_2$ are incompatible. 
\item\label{item:notd} If $f:B\to B$ is a $\mathbf{B}$-automorphism, then $\hat f:\mathbb{G}_\mathbf{B}\to \mathbb{G}_\mathbf{B}$ defined by $\hat f(p)(\alpha,n) = p(f^{-1}(\alpha), n)$ (where $F_{\hat{f}(p)}:=f[F_p]$), is a p.o.-automorphism. \end{enumerate} \end{lemma} \begin{proof} To see (a), first note that since $|B\times\omega|=\aleph_1$, by Engelking--Kar{\l}owicz~\cite{EngKarl} there is a countable set $H\subseteq 2^{B\times\omega}$ such that any finite partial function from $B\times\omega$ into $2$ can be extended by some member of $H$. For $h\in H$ and $n<\omega$, let $C_{h,n}:=\{p\in\mathbb{G}_\mathbf{B}:p\subseteq h\text{\ and }n_p=n\}$. It is clear that $C_{h,n}$ is centered and $\mathbb{G}_\mathbf{B}=\bigcup_{h\in H}\bigcup_{n<\omega}C_{h,n}$, so $\mathbb{G}_\mathbf{B}$ is $\sigma$-centered.\smallskip \noindent (b): Consider Cohen forcing $\mathbb{C}:=2^{<\omega}$ ordered by end-extension. For $a\in B$ define $\mathrm{pr}_a:\mathbb{G}_\mathbf{B}\to\mathbb{C}$ such that, for any $p\in\mathbb{G}_\mathbf{B}$, $\mathrm{pr}_a(p):=\langle p(a,k) : k<n_p\rangle$ if $a\in\supp p$, or $\mathrm{pr}_a(p)$ is the empty sequence otherwise. It is enough to show that $\mathrm{pr}_a$ is a forcing projection, that is, \begin{enumerate}[(i)] \item for any $p,q\in\mathbb{G}_\mathbf{B}$ if $q\supseteq p$ then $\mathrm{pr}_a(q)\supseteq\mathrm{pr}_a(p)$, \item for any $p\in\mathbb{G}_\mathbf{B}$ and $s\in\mathbb{C}$, if $s\supseteq\mathrm{pr}_a(p)$ then there is some $q\supseteq p$ in $\mathbb{G}_\mathbf{B}$ such that $\mathrm{pr}_a(q)\supseteq s$ (even $\mathrm{pr}_a(q)=s$), \item $\mathrm{pr}_a[\mathbb{G}_\mathbf{B}]$ is dense in $\mathbb{C}$ (even $\mathrm{pr}_a$ is onto).
\end{enumerate} Property (i) is easy, (ii) follows by Definition~\ref{DefSbG}(v), and (iii) follows by (ii) and the fact that $\mathrm{pr}_a(\emptyset)=\langle\ \rangle$.\smallskip \noindent (c): By the definition of the order of $\mathbb{G}_\mathbf{B}$.\smallskip \noindent (\ref{item:dafterall}): Assume towards a contradiction that some $q\in\mathbb{Q}$ is stronger than $q_1$ and $q_2$, so $q\Vdash$``$\{k<\omega:\, \dot\eta_{c_1}(k)=\dot\eta_{c_2}(k)=e\}$ is infinite". Hence, there is some $p\in\mathbb{G}_\mathbf{B}$ stronger than $p_1$ and $p_2$ forcing the same, but this contradicts (c) because $c_1,c_2\in F_p$ and $c_1 R_e c_2$.\smallskip \noindent (\ref{item:notd}) is straightforward. \end{proof} \begin{remark}\label{rem:blalb} The obvious restriction of $\mathbb{G}_\mathbf{B}$ to, say, the first two coordinates, is not a projection, and $\mathbb{G}_\mathbf{B}$ is not a FS iteration of length $\omega_1$ in any natural way. Assume, e.g., we restrict to $\{0,1\}\subseteq B=\omega_1$, and $\mathbf{B}$ contains an $e$-colored edge from node $e$ to node $2$ for $e\in\{0,1\}$. Start with a condition $p: \{0,1,2\}\times n\to 2$ (e.g.\ with $n=1$), restrict it to $p^-=p{\restriction} \{0,1\}$ and extend it to $p'\in \mathbb{G}_{\mathbf{B}{\restriction}\{0,1\}}$ by setting $p'(e,n)=e$ for $e\in\{0,1\}$. Then there is no $q\in \mathbb{G}_\mathbf{B}$, $q\le p$, compatible with $p'$. \end{remark} We will use FS iterations where the first step is given by a FS product of posets of the form $\mathbb{G}_\mathbf{B}$ as above. It is clear that, if $\mathbf{B}$ is a S2G in the ground model, then it is still a S2G in any forcing extension preserving $\omega_1$. On the other hand, constructing $\mathbb{G}_\mathbf{B}$ from $\mathbf{B}$ is absolute for transitive models of ZFC, so any finite support product of posets of the form $\mathbb{G}_\mathbf{B}$ is forcing equivalent to their finite support iteration (as long as the sequence of $2$-graphs lives in the ground model).
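To illustrate the order of $\mathbb{G}_\mathbf{B}$ from Definition~\ref{DefGB} in a minimal toy configuration (complementing Remark~\ref{rem:blalb}): suppose $a R_0 b$ in $\mathbf{B}$ and $p\in\mathbb{G}_\mathbf{B}$ has $F_p=\{a,b\}$ and $n_p=1$. A condition $q\leq p$ with $n_q=2$ may choose the new row $\langle q(a,1),q(b,1)\rangle$ freely among \[\langle 0,1\rangle,\qquad\langle 1,0\rangle,\qquad\langle 1,1\rangle,\] but not $\langle 0,0\rangle$, since $\{q(a,1),q(b,1)\}=\{0\}$ is forbidden by Definition~\ref{DefGB}(ii); the values in row $0$, being part of $p$ itself, are unconstrained. This is exactly the mechanism behind Lemma~\ref{GBprop1}(c).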
\subsection{Suitable iterations, nice names and automorphisms} We now introduce some notions associated with these iterations, relevant for the preservation of splitting families. From this point on, products of ordinals (such as $\omega_1\pi$) should be interpreted as ordinal products. \begin{definition}\label{DefSI} A \emph{suitable iteration} is defined by the following objects: \begin{enumerate}[(I)] \item A cardinal $\pi_0>0$. \item For each $\delta<\pi_0$, a S2G $\mathbf{B}_\delta=\langle B_{\delta},R_{\delta,0},R_{\delta,1}\rangle$ with $B_\delta:=[\omega_1\delta,\omega_1(\delta+1))$, \item an ordinal $\pi\geq\pi_1:=\omega_1\pi_0$, \item a FS ccc iteration $\mathbb{P}$ of length $1+(\pi-\pi_1)$ where the first iterand is the FS product of the $\mathbb{G}_{\mathbf{B}_\delta}$ for $\delta<\pi_0$, called $\mathbb{P}_{\pi_1}$, and the following iterands are indexed by $\xi\in \pi\smallsetminus \pi_1$ and are ccc posets called $\dot{\mathbb{Q}}_\xi$. \end{enumerate} As usual, we denote by $\mathbb{P}_\xi$ the result of the iteration up to $\xi$ (for $\pi_1\le \xi\le \pi$), and use $\mathbb{P}$ to denote either $\mathbb{P}_{\pi}$ or the whole iteration (or its definition). See Figure~\ref{fig:iteration} for an illustration. \end{definition} \begin{remark} Note that we could also view $\mathbb{P}_{\pi_1}$ as (the result of) a FS-iteration of length $\pi_0$ (instead of length $1$, as we do in the definition). Then we would get an iteration $\mathbb{P}$ of length $\pi_0+(\pi-\pi_1)$. However, $\mathbb{P}_{\pi_1}$ is not a FS iteration of length $\pi_1$, at least not with natural iterands; see Remark~\ref{rem:blalb}. \end{remark} Let us mention some notation: \begin{notation} \begin{enumerate}[(1)] \item A real-number-poset is a poset whose universe is a subset of the set of real numbers. For simplicity, we identify the ``set of real numbers'' with the power set of $\omega$.
\item For notational simplicity we will often identify $\mathbb{P}_{\zeta+1}$ (a set of partial functions) with $\mathbb{P}_\zeta*\dot{\mathbb{Q}}_\zeta$ (a set of pairs $(p,q)$ with $p\in \mathbb{P}_\zeta$ and $p\Vdash q\in \dot{\mathbb{Q}}_\zeta$). \item Similarly, we will not distinguish between sequences of names and names of sequences. \end{enumerate} \end{notation} We now define the ``support'' $\supp(p)\subseteq \pi$ of a condition $p$ (as opposed to the domain $\dom(p)$, which is, as we are dealing with a FS iteration, a finite subset of the index set $\{0\}\cup (\pi\smallsetminus \pi_1)$). We will also define the ``history'' $H$ of a name and of a condition: \begin{definition}\label{def:suppH} Let $\mathbb{P}$ be a suitable iteration. \begin{enumerate}[(1)] \item\label{FSPsupp} For $p\in\mathbb{P}_{\pi_1}$ set $\supp(p):=\bigcup_{\delta\in\dom p}\dom(p(\delta))\subseteq\pi_1$. For $p\in\mathbb{P}$, set $\supp p:=\supp(p(0))\cup(\dom(p)\smallsetminus \{0\})$ (or just $\dom(p)$, if $0\notin\dom(p)$).\footnote{Recall that according to our indexing, $\dom(p)$ is a finite subset of $\{0\}\cup (\pi\smallsetminus \pi_1)$ (where we interpret a FS condition $p$ as a partial function on this index set with finite domain $\dom(p)$). Recall that $q:=p(0)\in \mathbb{P}_{\pi_1}$, which is the FS product of $\mathbb{G}_{\mathbf{B}_\delta}$ for $\delta<\pi_0$. So $q$ has a finite domain $\dom(q)\subseteq \pi_0$, and if $\delta\in\dom(q)$, then $q(\delta)\in\mathbb{G}_{\mathbf{B}_\delta}$, so $X_\delta=\dom(q(\delta))$ (in the sense of the forcing $\mathbb{G}_{\mathbf{B}_\delta}$) is a finite subset of $[\omega_1\delta,\omega_1(\delta+1))$. According to our definition, $\supp(q)=\bigcup_{\delta\in \dom(q)} X_\delta$.} \item\label{hist} For $p\in\mathbb{P}$ and a $\mathbb{P}$-name $\tau$, we define $H(p)\subseteq\pi$ and $H(\tau)\subseteq\pi$ as follows: \begin{enumerate}[(i)] \item For $p\in\mathbb{P}_{\pi_1}$, $H(p):=\supp p$.
\end{enumerate} For $\xi\ge \pi_1$ we define $H$ by recursion on $\xi$ for $p\in \mathbb{P}_\xi$ and for a $\mathbb{P}_\xi$-name $\tau$. (We assume that $H(r)$ has been defined for all $r\in\mathbb{P}_\zeta$ for $\pi_1\leq \zeta<\xi$ and $H(\sigma)$ for all $\mathbb{P}_\zeta$-names for $\pi_1\le \zeta<\xi$): \begin{enumerate}[(i)] \setcounter{enumii}{1} \item For $\xi=\zeta+1$ and $p\in\mathbb{P}_{\zeta+1}$, \[H(p):=\left\{\begin{array}{ll} H(p{\upharpoonright}\zeta) & \text{if $\zeta\notin\supp p$,} \\ H(p{\upharpoonright}\zeta)\cup\{\zeta\}\cup H(p(\zeta)) & \text{if $\zeta\in\supp p$.} \end{array}\right.\] (Here, $H(p(\zeta))$ is defined because $p(\zeta)$ is a $\mathbb{P}_\zeta$-name.) \item When $\xi>\pi_1$ is limit and $p\in\mathbb{P}_\xi$, then $H(p)$ has already been defined (because $p\in\mathbb{P}_\zeta$ for some $\zeta<\xi$). \item For any $\mathbb{P}_\xi$-name $\tau$ define (by $\in$-recursion on $\tau$) \[H(\tau):=\bigcup\{H(\sigma)\cup H(p):(\sigma,p)\in\tau\}.\] \end{enumerate} \end{enumerate} \end{definition} Note that $H(\check x)=\emptyset$ for any standard name $\check x$.\footnote{A standard name $\check x = \{(\check y, \mathbbm 1): y\in x\}$ (for $x\in V$) hereditarily only uses the weakest condition $\mathbbm 1$, which in our case (an iteration) is the empty partial function; accordingly $H(\check x)=\emptyset$. If the reader prefers a different formal definition of FS iteration, then they should modify the definition of $H$ to make sure that $H(\check x)=\emptyset$.} \begin{comment} A suitable iteration can trivially equivalently be interpreted in two ways: We can view it as FS iteration of ccc posets, where we start with an FS product (which is equivalent to a special kind of FS iteration) of $\pi_0$ many posets as in Definition~\ref{DefGB}. In this interpretation, we have a FS iteration of length $\pi_0+(\pi-\pi_1)$. 
We can also interpret it as a two-stage iteration: First the FS product of the $\pi_0$ many posets, followed by an FS iteration; so the whole iteration has length $1+(\pi-\pi_1)$. \end{comment} \begin{figure} \includegraphics[width=\textwidth]{iteration.pdf} \caption{A suitable iteration. $\pi_1=\omega_1\pi_0$ is partitioned into $\pi_0$-many intervals of length $\omega_1$, and $B_\delta:=[\omega_1\delta,\omega_1(\delta+1))$, the set of vertices of the graph $\mathbf{B}_\delta$, is the $\delta$-th interval of this partition. A suitable iteration is a FS product of the $\mathbb{G}_{\mathbf{B}_\delta}$ for $\delta<\pi_0$, followed by a FS iteration of ccc posets. The iterands of the FS iteration that follows are indexed by $\xi\in[\pi_1,\pi)$.} \label{fig:iteration} \end{figure} \begin{remark*} $H$ is not a ``robust'' notion: $\Vdash\tau=\tau'$ does not imply $H(\tau)=H(\tau')$. Still, it is a very natural and useful notion, which has appeared (in slightly different contexts) many times in forcing theory: If $\tau$ is a $\mathbb{P}_\pi$-name, then $H(\tau)\subseteq\pi$ is the set of coordinates the name $\tau$ ``depends on''; more concretely, $\tau$ can be calculated (by a function defined in $V$) from the sequence of generic objects at the indices in $H(\tau)$. \\ In the case of FS iterations where all iterands are real-number-posets (as in~\cite{ShCov,GKS}), $H(p)$ is countable for $p$ in a dense set, and ``hereditarily nice names'' for reals will also have countable history. In this paper we have to use hereditarily ${<}\lambda$-names (even for nice names of reals); the reason is indicated in Remark~\ref{rem:lambdarequired}. \end{remark*} Let us fix some notation regarding the well-known ``nice names'': \begin{definition}\label{Defnicenm} Let $A$ and $B$ be subsets of $\mathbb{P}$.
\begin{enumerate}[(1)] \item A $\mathbb{P}$-name $\dot r$ is a \emph{nice name for a subset of $\omega$, determined by $A$}, if $\dot r$ has the form $\bigcup_{n\in \omega} \{(\check n,q): q \in A_n\}$, where each $A_n$ is a (possibly empty) antichain in $\mathbb{P}$, and $A=\bigcup_{n\in\omega} A_n$. \item Analogously, $\dot Q$ is a \emph{nice name for a real-number-poset of size ${<}\lambda$, determined by $B$}, if there is a $\mu<\lambda$ such that $\dot Q$ is a sequence $\langle \dot{r}_i\rangle_{i\in\mu}$ of nice names for subsets of $\omega$ determined by $A_i$, together with a sequence $\langle \dot{x}_{i,j}\rangle_{i,j\in \mu}$ of nice names for elements in $\{0,1\}$ depending on an antichain $A'_{i,j}$ (where $\dot{x}_{i,j}=1$ codes $\dot{r}_i\le_{\dot{Q}} \dot{r}_j$),\footnote{A nice name $\dot{x}$ of a member of $\{0,1\}$ depending on an antichain $C\subseteq\mathbb{P}$ (allowed to be empty) has the form $\dot{x}=\{(\check{0},p):p\in C\}$. Note that $p\Vdash\dot{x}=1$ for all $p\in C$, and $q\Vdash\dot{x}=0$ for any $q\in\mathbb{P}$ incompatible with all the members of $C$. Moreover, $H(\dot{x})=\bigcup_{p\in C}H(p)$.} and $B=\bigcup_{i\in\mu} A_i \cup \bigcup_{i,j\in\mu} A'_{i,j}$. \end{enumerate} \end{definition} So in this case \begin{equation}\label{eq:Hotherspecial} H(\dot r)=\bigcup_{p\in A}H(p),\text{ and } H(\dot Q)=\bigcup_{p\in B}H(p). \end{equation} It is well known that every name of a subset of $\omega$ has an equivalent nice name. Moreover, as we can choose the conditions of the antichains in any given dense set, we get the following: \begin{fact}\label{fact:basicnice} (As $\mathbb{P}$ is ccc) Let $D\subseteq \mathbb{P}$ be dense and let $\lambda$ be a cardinal with uncountable cofinality. \begin{enumerate}[(a)] \item\label{item:nicereal} For any $\mathbb{P}$-name of a real there is an equivalent nice name determined by $A\subseteq D$ with $|A|\le\aleph_0$.
\item\label{item:niceQ} For any name of a poset of size ${<}\lambda$ consisting of reals, there is an equivalent nice name determined by a set $B\subseteq D$ with $|B|<\lambda$. \end{enumerate} \end{fact} Every automorphism of $\mathbf{B}$ induces an automorphism of $\mathbb{G}_\mathbf{B}$; see Lemma~\ref{GBprop1}(\ref{item:notd}). Therefore, a $\pi_0$-sequence $h$ of such automorphisms induces an automorphism of the (FS) product $\mathbb{P}_{\pi_1}$. Such an automorphism can sometimes be naturally extended to the whole iteration $\mathbb{P}$ (which will allow isomorphism-of-names arguments and subsequently show $\mathsf{LCU}_\textrm{sp}$). What do we mean by ``naturally extend''? Recall that, whenever $f:P\to P$ is an automorphism on some poset $P$, and $\tau$ is a $P$-name, $f$ sends $\tau$ to the $P$-name \[f^*(\tau):=\{(f^*(\sigma),f(p)):(\sigma,p)\in\tau\}.\] Also, $(f^{-1})^*(f^*(\tau))=\tau$; and $p\Vdash\varphi(\tau)$ iff $f(p)\Vdash\varphi(f^*(\tau))$ whenever $p\in P$ and $\varphi(x)$ is a formula. If $\dot{Q}$ is a $P$-name and $\Vdash_P f^*(\dot{Q})=\dot{Q}$, then we can trivially extend $f$ to $P*\dot{Q}$. We say that $\mathbb{P}$ is $h$-symmetric if this is the case at all steps of the iteration: \begin{definition}\label{DefAutom} Let $\mathbb{P}$ be a suitable iteration. \begin{enumerate} \item\label{t2aut} A bijection $h:\pi_1\to\pi_1$ is a \emph{$2$G-automorphism} if, for each $\delta<\pi_0$, $h{\upharpoonright}B_\delta$ is an automorphism of $\mathbf{B}_\delta$. \item Such an $h$ defines an automorphism $\hat{h}_{\pi_1}:\mathbb{P}_{\pi_1}\to\mathbb{P}_{\pi_1}$ by $\hat{h}_{\pi_1}(p):=\langle \hat{f}_\delta(p(\delta)):\delta\in\dom p\rangle$, where $f_\delta:=h{\restriction}[\omega_1\delta,\omega_1(\delta+1))$ is the automorphism of $\mathbf{B}_\delta$ induced by $h$, and $\hat f_\delta$ is defined as in Lemma~\ref{GBprop1}(\ref{item:notd}).
\item We say $\mathbb{P}$ is $h$-\emph{symmetric} if the following inductive construction defines $\hat{h}_\xi:\mathbb{P}_\xi\to\mathbb{P}_\xi$ for all $\pi_1\le \xi\le \pi$: \begin{enumerate}[(i)] \item For $\xi=\zeta+1$, we require that $\Vdash_{\mathbb{P}_\zeta} {\hat{h}_\zeta}^*(\dot{\mathbb{Q}}_\zeta)=\dot{\mathbb{Q}}_\zeta$. (Otherwise the construction fails.) We then define $\hat{h}_{\zeta+1}:\mathbb{P}_{\zeta+1}\to\mathbb{P}_{\zeta+1}$ by $\hat{h}_{\zeta+1}(p{\upharpoonright}\zeta,p(\zeta))=(\hat{h}_\zeta(p{\upharpoonright}\zeta),{\hat{h}_\zeta}^*(p(\zeta)))$. \item For $\xi>\pi_1$ limit, set $\hat{h}_\xi:=\bigcup_{\zeta<\xi}\hat{h}_\zeta$. \end{enumerate} In this case set $\hat{h}:=\hat{h}_\pi$, which is an automorphism of $\mathbb{P}$. \item For any $\delta<\pi_0$ and any pair $(a,b)\in B_\delta\times B_\delta$, fix a $2$G-automorphism $h^\delta_{a,b}$ such that $h^\delta_{a,b}(a)=b$ and $h^\delta_{a,b}{\upharpoonright}B_\zeta$ is the identity for any $\zeta\ne \delta$. We can pick such $h^\delta_{a,b}$ by Definition~\ref{DefSbG}(vi). \item Let $\Hgroup^*$ be the group generated by the $h^\delta_{a,b}$ above. So $|\Hgroup^*|=\max\{\pi_0,\aleph_1\}$. Note also that for all $h\in \Hgroup^*$ and $\delta<\pi_0$ we have $h[B_\delta]=B_\delta$, and that $\supp(h):=\bigcup\{B_\delta:\, h{\restriction}B_\delta\ne\textrm{id}_{B_\delta},\ \delta<\pi_0\}$ has size ${\le}\aleph_1$. \item We say that $\mathbb{P}$ is \emph{symmetric} if $\mathbb{P}$ is $h$-symmetric for every $h\in \Hgroup^*$. \end{enumerate} \end{definition} In isomorphism-of-names arguments it is relevant to know when a condition or a name remains unchanged after applying an automorphism $\hat h$. The following states a sufficient condition: \begin{lemma}\label{automident} Assume that $\mathbb{P}$ is $h$-symmetric and $\pi_1\leq\xi\leq\pi$. \begin{enumerate}[(a)] \item If $p\in\mathbb{P}_\xi$ and $h{\upharpoonright}(H(p)\cap \pi_1)$ is the identity, then $\hat{h}_\xi(p)=p$.
\item If $\tau$ is a $\mathbb{P}_\xi$-name and $h{\upharpoonright}(H(\tau)\cap \pi_1)$ is the identity, then ${\hat{h}_\xi}^*(\tau)=\tau$. \item Let $g:=h^{-1}$. Then $\mathbb{P}$ is $g$-symmetric and $\hat{g}_\xi=\hat{h}^{-1}_\xi$. \end{enumerate} \end{lemma} \begin{proof} We show the three statements by induction on $\xi$. For (a), we argue by cases. First assume $\xi=\pi_1$. If $p\in\mathbb{P}_{\pi_1}$ then $H(p)=\supp p$, and whenever $h$ is the identity on $\supp p$, it is clear that $\hat{h}_{\pi_1}(p)=p$. The limit step is also immediate (there are no new conditions, and for names use $\in$-induction). For the successor step $\xi=\zeta+1$, assume $p\in\mathbb{P}_{\zeta+1}$ and that $h$ is the identity on $H(p)\cap\pi_1$. If $\zeta\notin\supp p$, then we have $p\in\mathbb{P}_\zeta$, so $\hat{h}_{\zeta+1}(p)=\hat{h}_\zeta(p)=p$ by the induction hypothesis. So assume $\zeta\in\supp p$. Then $H(p)=H(p{\upharpoonright}\zeta)\cup\{\zeta\}\cup H(p(\zeta))$, so by the induction hypothesis $\hat{h}_\zeta(p{\upharpoonright}\zeta)=p{\upharpoonright}\zeta$ and ${\hat{h}_\zeta}^*(p(\zeta))=p(\zeta)$, thus $\hat{h}_{\zeta+1}(p)=p$. We now show (b) by $\in$-induction on $\tau$. If $(\sigma,p)\in\tau$ then $H(\sigma)\cup H(p)\subseteq H(\tau)$, so by the induction hypothesis and (a), ${\hat{h}_\xi}^*(\sigma)=\sigma$ and $\hat{h}_\xi(p)=p$. Hence \[{\hat{h}_\xi}^*(\tau)=\{({\hat{h}_\xi}^*(\sigma),\hat{h}_\xi(p)):(\sigma,p)\in\tau\} =\{(\sigma,p):(\sigma,p)\in\tau\}=\tau.\] For (c), the steps $\xi=\pi_1$ and $\xi>\pi_1$ limit are easy, so we deal with the successor step $\xi=\zeta+1$. So assume that $\hat{g}_\zeta$ is defined and $\hat{g}_\zeta=\hat{h}^{-1}_\zeta$.
Since $\hat{h}_\xi$ is defined, $\Vdash_{\mathbb{P}_\zeta}\hat{h}^*_\zeta(\dot{\mathbb{Q}}_\zeta)=\dot{\mathbb{Q}}_\zeta$, which implies $\Vdash_{\mathbb{P}_\zeta}\hat{g}_\zeta^*(\dot{\mathbb{Q}}_\zeta)=\dot{\mathbb{Q}}_\zeta$, so $\hat{g}_{\zeta+1}$ is defined; and for any $p\in\mathbb{P}_\xi$, $\hat{g}_\xi(\hat{h}_\xi(p))=(\hat{g}_\zeta(\hat{h}_\zeta(p{\upharpoonright\zeta})),\hat{g}_\zeta^*(\hat{h}_\zeta^*(p(\zeta))))=p$, so $\hat{g}_\xi=\hat{h}^{-1}_\xi$. \end{proof} \subsection{A digression: Self-indexed products} How do we construct a symmetric iteration $\mathbb{P}$? We have to make sure that at each step $\zeta$ the iterand $\dot{\mathbb{Q}}_\zeta$ is invariant under $\hat h$ for all $h\in \Hgroup^*$. One case that will be useful: $\dot{\mathbb{Q}}_\zeta$ is a (ccc) FS product such that whenever $\dot Q$ is one of the factors, then $\hat h^*(\dot Q)$ is also one. But there is a technical difficulty here: We need $\Vdash_{\mathbb{P}_\zeta} \hat h^*(\dot{\mathbb{Q}}_\zeta)=\dot{\mathbb{Q}}_\zeta$ (i.e., really equality, not just isomorphism, as we want to get an actual automorphism of $\mathbb{P}_{\zeta+1}$). This is not possible if we ``naively'' index the product with an ordinal. For example, assume $\dot Q_0$, $\dot Q_1$ are such that $\Vdash_{\mathbb{P}_\zeta} \hat h^*(\dot Q_i)=\dot Q_{1-i}\ne \dot Q_i$. Then $\dot Q_0\times \dot Q_1$ (the product with index set $\{0,1\}$) is not a valid choice for $\dot{\mathbb{Q}}_\zeta$, as $\Vdash_{\mathbb{P}_\zeta} \hat h^*(\dot Q_0\times \dot Q_1)=\dot Q_1\times \dot Q_0\ne \dot Q_0\times \dot Q_1$. So instead, we define (in the extension) the FS product $\prod \mathcal F$ of a set $\mathcal F$ of posets as the set of all finite partial functions $p$ from $\mathcal F$ into $\bigcup \mathcal F$ satisfying $p(Q)\in Q$ for all $Q\in \dom(p)$. We call this object the \emph{self-indexed product} of the set $\mathcal F$. In our framework, we start with a ground model set $\Xi_\zeta$ of $\mathbb{P}_\zeta$-names of posets.
In the $\mathbb{P}_\zeta$-extension we let $\mathcal F$ be the set of evaluations of the names in $\Xi_\zeta$, and let $\dot{\mathbb{Q}}_\zeta$ be the self-indexed product of $\mathcal F$. Assume that all automorphisms from $\Hgroup^*$ can be extended up to $\zeta$.\footnote{For this part, not all the properties of $\Hgroup^*$ are required; it is enough that $h\in\Hgroup^*$ implies $h^{-1}\in\Hgroup^*$.} We assume that $\Xi_\zeta$ is closed under each $h\in \Hgroup^*$, i.e., $\dot Q\in \Xi_\zeta$ implies $\hat h^*_\zeta(\dot Q)\in \Xi_\zeta$. As $\Xi_\zeta$ is also closed under the inverse of $h$, by Lemma~\ref{automident}(c) we even get $\hat h^*_\zeta[\Xi_\zeta]=\Xi_\zeta$. So in particular $\hat h^*_\zeta[\Xi_\zeta]$ and $\Xi_\zeta$ evaluate to the same set and thus yield the same self-indexed product, i.e., $\Vdash \hat h^*_\zeta(\dot{\mathbb{Q}}_\zeta)= \dot{\mathbb{Q}}_\zeta$. We record this fact for later reference: \begin{fact}\label{fact:specialH1} Assume that $\dot{\mathbb{Q}}_\zeta$ is a ``self-indexed'' product of $\Xi_\zeta$, and that $\dot Q\in \Xi_\zeta$ implies $\hat h^*_\zeta(\dot Q)\in \Xi_\zeta$ for all $h\in \Hgroup^*$. Then $\mathbb{P}_\zeta$ forces $\hat h^*_\zeta(\dot{\mathbb{Q}}_\zeta)= \dot{\mathbb{Q}}_\zeta$, so we can extend each $h\in\Hgroup^*$ to $\mathbb{P}_\zeta*\dot{\mathbb{Q}}_\zeta$. \end{fact} We additionally assume that each factor is (forced to be) a real-number-poset. Assume that $(p,q)\in \mathbb{P}_\zeta*\dot{\mathbb{Q}}_\zeta$. We can densely assume that $p$ decides the finite domain of $q$, more specifically, the\footnote{Or rather: \emph{a} finite set, as different names in $\Xi_\zeta$ might evaluate to the same object, i.e., index.} finite set $y\subseteq \Xi_\zeta$ such that $p$ forces that $\dom(q)$ is (the set of evaluations of) $y$. Also, for each $\dot Q\in y$, we can assume that $q(\dot Q)$ is a nice name for a real, determined by some $A_{\dot Q}$.
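To illustrate how the extended automorphism acts on such a pair $(p,q)$, consider again the two-factor situation above, where $\Vdash_{\mathbb{P}_\zeta}\hat h^*_\zeta(\dot Q_i)=\dot Q_{1-i}$ (this is only a sketch; we gloss over the distinction between names and their evaluations). If $p$ forces $\dom(q)=\{\dot Q_0\}$ and $q(\dot Q_0)=\dot r$, then transferring these forced statements along the automorphism yields
\[
\hat h_\zeta(p)\Vdash \dom(\hat h^*_\zeta(q))=\{\dot Q_1\}\ \text{ and }\ \hat h^*_\zeta(q)(\dot Q_1)=\hat h^*_\zeta(\dot r)\in\dot Q_1,
\]
so $\hat h^*_\zeta(q)$ is again (forced to be) a condition of the same self-indexed product: the index moves together with the value, which is exactly what fails for the ordinal-indexed product $\dot Q_0\times\dot Q_1$.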
As usual, we can use a given dense set $D \subseteq \mathbb{P}_\zeta$ instead of $\mathbb{P}_\zeta$. For later reference: \begin{fact}\label{fact:specialH2} Assume that $\dot{\mathbb{Q}}_\zeta$ is a ``self-indexed'' product of $\Xi_\zeta$ as described above, that each factor is forced to be a set of reals, and that $D\subseteq \mathbb{P}_\zeta$ is dense. If $(p,q)\in \mathbb{P}_\zeta*\dot{\mathbb{Q}}_\zeta$, then there is a $(p',q')\leq (p,q)$ such that $p'\in D$ decides the (finite) $\dom(q')$, and each $q'(\dot Q)$ is a nice name determined by some $A_{\dot Q}\subseteq D$. So in particular \begin{equation}\label{eq:specialH} H((p',q')) = H(p')\cup \{\zeta\} \cup \bigcup_{\dot Q\in\dom(q')}\bigg(H(\dot Q)\cup \bigcup_{r\in A_{\dot Q}} H(r)\bigg). \end{equation} \end{fact} \begin{remark}\label{rem:lambdarequired} This is the reason hereditarily countable nice names are not sufficient in our setting to describe reals: Even the \emph{index} $\dot Q$ in such a product $\dot{\mathbb{Q}}_\zeta$ is too complicated. However, as all the self-indexed products $\dot{\mathbb{Q}}_\zeta$ we use will have factors $\dot Q$ of size ${<}\lambda$, it turns out we can restrict ourselves to hereditarily ${<}\lambda$-names (this will be the dense set $\mathbb{P}^*_\zeta$ of Definition~\ref{DefSstidy}). \end{remark} \subsection{Symmetric small history iterations preserve splitting families} We are finally ready to prove the central fact about preservation of splitting families. \begin{definition}\label{Defnice} Let $\lambda$ be an uncountable cardinal. \begin{enumerate}[(1)] \item A condition $q\in \mathbb{P}$ is \emph{$\lambda$-small}, if $|\{\delta<\pi_0:H(q)\cap B_\delta\neq\emptyset\}|<\lambda$. \item A suitable iteration ${\mathbb{P}}$ has \emph{$\lambda$-small history} if, for any $p\in\mathbb{P}$, there is a $\lambda$-small $q\le p$. 
\end{enumerate} \end{definition} In particular, if $\mathbb{P}$ has $\lambda$-small history and $\dot x$ is a name of a subset of $\omega$, then there is an equivalent nice name $\dot b$ which only uses $\lambda$-small conditions. Consequently, \begin{equation}\label{eq:nice} |\{\delta<\pi_0:H(\dot b)\cap B_\delta\neq\emptyset\}|<\mu \end{equation} for any cardinal $\mu\ge\lambda$ with uncountable cofinality. \begin{theorem}\label{PresSplit} Let $\mathbb{P}$ be a symmetric suitable iteration with $\lambda$-small history. Assume $\aleph_1\leq\lambda\leq\mu\le \pi_0$ are cardinals with $\mu$ regular. Then $\mathsf{LCU}_{\mathbf{R}_{\mathrm{sp}}}(\mathbb{P}_\pi,\mu)$ holds, and it is witnessed by $\{\dot{\eta}_{\omega_1\delta}: \delta<\mu\}$. \end{theorem} \begin{proof} Towards a contradiction, assume that there are $p\in\mathbb{P}$ and a $\mathbb{P}$-name $\dot{b}$ of an infinite subset of $\omega$ such that \[p\Vdash|\{\delta<\mu:\dot{\eta}_{\omega_1\delta}{\upharpoonright}\dot{b}\text{\ is eventually constant}\}|=\mu.\] By the regularity of $\mu$, we can find $F\in[\mu]^\mu$, $n_0<\omega$ and $e\in\{0,1\}$ such that, for any $\delta\in F$, there is some $p_\delta\leq p$ in $\mathbb{P}$ such that $\omega_1\delta\in\supp(p_\delta)$ and $p_\delta\Vdash\dot{\eta}_{\omega_1\delta}{\upharpoonright}(\dot{b}\smallsetminus n_0)\equiv e$. We can assume that $\dot b$ is a nice name, more specifically, that~\eqref{eq:nice} holds, and we can also assume that $p$ is $\lambda$-small. So there is some $\delta_0\in F$ such that $B_{\delta_0}\cap(H(p)\cup H(\dot{b}))=\emptyset$. Put $a:=\omega_1\delta_0\in B_{\delta_0}$. By~\eqref{eq:S2G}, $a$ is contained in an uncountable $R_{\delta_0,e}$-complete $U\subseteq B_{\delta_0}$. Recall that by the definition of ``symmetric'', there is for each $c\in U$ a $2$G-automorphism $h^c\in \Hgroup^*$ such that $h^c(a)=c$ and such that $h^c{\upharpoonright}\mathbf{B}_\delta$ is the identity for all $\delta\neq\delta_0$.
Hence, by Lemma~\ref{automident}, $\hat{h}_\pi^c(p)=p$ and $(\hat{h}_\pi^c)^*(\dot{b})=\dot{b}$, therefore $p'_c:=\hat{h}_\pi^c(p_{\delta_0})\leq p$ and, since $(\hat{h}_\pi^c)^*(\dot{\eta}_a)=\dot{\eta}_c$, \[p'_c\Vdash\dot{\eta}_c{\upharpoonright}(\dot{b}\smallsetminus n_0)\equiv e.\] Lemma~\ref{GBprop1}(\ref{item:dafterall}) implies that $\langle p'_c:c\in U\rangle$ must be an antichain, which contradicts that $\mathbb{P}_\pi$ is ccc. \end{proof} \begin{remark} The same argument shows that, for any $g\in\prod_{\delta<\mu}B_\delta$ (in the ground model), $\{\dot{\eta}_{g(\delta)}:\delta<\mu\}$ witnesses $\mathsf{LCU}_{\mathbf{R}_{\mathrm{sp}}}(\mathbb{P}_\pi,\mu)$. \end{remark} \section{Suslin-\texorpdfstring{$\lambda$}{lambda}-small iterations}\label{sec:main1} \newcommand{\ac}{\mathrm{ac}} \newcommand{\an}{\mathrm{an}} \newcommand{\col}{\mathrm{col}} \newcommand{\up}{\mathrm{up}} \newcommand{\op}{\mathrm{op}} \newcommand{\un}{\mathrm{un}} \newcommand{\nice}{\mathrm{nice}} \newcommand{\ncp}{\mathrm{ncp}} We now investigate suitable iterations where the iterand $\dot{\mathbb{Q}}_\zeta$ at step $\zeta\in\pi\smallsetminus\pi_1$ (i.e., after the initial FS product) is \begin{enumerate} \item either a \emph{restricted (also called: partial)} Suslin ccc poset (e.g., random forcing evaluated in some $V^{\mathbb{P}^-_\zeta}$ for some complete subforcing $\mathbb{P}^-_\zeta$ of $\mathbb{P}_\zeta$); \item or the FS product of (in our application: at most $|\pi|$-many) ${<}\lambda$-size posets of reals. \end{enumerate} More formally: \begin{definition}\label{DefSs} Let $\lambda$ be an uncountable cardinal. A \emph{Suslin-$\lambda$-small} iteration (abbreviated S$\lambda$s) is a suitable iteration $\mathbb{P}$ with the following properties: \begin{enumerate}[({S}1)] \item $\pi\smallsetminus\pi_1$ is partitioned into two sets $\Sigma$ and $\Pi$.
\item\label{item:souslin} For $\xi\in \Sigma$, \begin{enumerate}[(i)] \item $\mathbb{P}^{-}_\xi$ is a complete subposet of $\mathbb{P}_\xi$, \item $\mathbb{S}_\xi$ is a definition of a Suslin ccc poset (with parameters in the ground model), \item $\dot{\mathbb{Q}}_\xi$ is a $\mathbb{P}_\xi$-name for $(\mathbb{S}_\xi)^{V^{\mathbb{P}^{-}_\xi}}$. \end{enumerate} \item\label{item:prod} For $\xi\in \Pi$, \begin{enumerate}[(i)] \item $\Xi_\xi$ is a set in the ground model, \item each element of $\Xi_\xi$ is a $\mathbb{P}_\xi$-name $\dot Q$ for a poset of size\footnote{That is, each element of $\Xi_\xi$ is forced to have size ${<}\lambda$, whereas the cardinality of $\Xi_\xi$ may be as large as we want.} ${<}\lambda$ consisting of reals, \item $\dot{\mathbb{Q}}_\xi$ is (the $\mathbb{P}_\xi$-name for) the FS product of $\Xi_\xi$. \end{enumerate} \end{enumerate} \end{definition} \begin{remark} Regarding (S\ref{item:prod}), recall that our setting requires that $\mathbb{P}_\xi$ forces $\dot{\mathbb{Q}}_\xi$ to be ccc (as suitable iterations have to be ccc). In contrast, in (S\ref{item:souslin}), $\dot{\mathbb{Q}}_\xi$ will always be ccc ``for free'' (in $V^{\mathbb{P}_\xi}$ as well as in $V^{\mathbb{P}^{-}_\xi}$), as it is an evaluation of a Suslin ccc definition (see~\cite{JSsuslin}). \end{remark} \begin{comment} Typically, S$\lambda$s iterations are constructed by recursion on $\xi\in[\pi_1,\pi]$. If $\mathbb{P}_\xi$ has already been constructed, we define $\mathbb{P}^*_\xi$ and $\mathbb{P}^-_\xi$ and, depending on whether we put $\xi$ in $\Sigma$ or in $\Pi$, we either choose a Suslin ccc poset $\mathbb{S}_\xi$ to construct $\mathbb{P}_{\xi+1}$ as indicated in (S\ref{item:souslin}), or construct $\mathbb{P}_{\xi+1}$ as in (S\ref{item:prod}). Limit steps are clear as well.
\begin{remark}\label{rem:P*} If ${\mathbb{P}}$ is a Suslin $\lambda$-small iteration and $\pi_1\leq\xi<\eta\leq\pi$ then $\mathbb{P}^*_{\xi}=\mathbb{P}_{\xi}\cap \mathbb{P}^*_\eta$. This can be easily proved by recursion on $\eta$. \end{remark} We will later show that such ${\mathbb{P}}$ has $\lambda$-small history, by induction on $\xi\in\pi$. Then, during the proof by induction, we can assume without loss of generality that $p\in\mathbb{P}_\pi$ satisfies the following properties (recall the Definition~\ref{Defnice} of ``nice name'' of a real and~\eqref{eq:nice}): \begin{enumerate}[(i)] \item For any $\xi\in\Sigma\cap\supp p$, $p(\xi)$ is a nice $\mathbb{P}^-_\xi$-name of a member of $\mathbb{S}_\xi$ satisfying~\eqref{eq:nice}. \item For any $\xi\in \Pi\cap\supp p$, $p{\upharpoonright}\xi\Vdash``\dom p(\xi)=d^p_\xi$" for some finite $d^p_\xi\subseteq\Xi_\xi$, and $p(\xi,\dot{Q})$ is a nice $\mathbb{P}^-_\xi$-name of (a code for) a member of $\dot{Q}$, again satisfying~\eqref{eq:nice}. \end{enumerate} We now define an extended support set, which will be the domain of a ``finer'' variant $H^*$ of the history: \begin{definition}\label{def:hstar} \begin{enumerate}[(1)] \item For any $\xi\in[\pi_1,\pi]$ denote $\xi^+:=\xi\cup\{(\alpha,\dot{Q}):\alpha\in\xi\cap \Pi,\ \dot{Q}\in\Xi_\alpha\}$. \item Say that $X\subseteq\pi^+$ is \emph{closed} if $X\cap\pi_1=\bigcup_{\delta\in K}B_\delta$ for some $K\subseteq\pi_0$. \end{enumerate} \end{definition} \end{comment} We now show that we can replace such an iteration $\langle \mathbb{P}'_\zeta,\dot{\mathbb{Q}}'_\zeta:\zeta\in\pi\rangle$ with an \emph{isomorphic} version $\langle\mathbb{P}_\zeta,\dot{\mathbb{Q}}_\zeta:\zeta\in\pi\rangle$: The only difference will be in steps $\zeta\in \Pi$, where we select (hereditarily) nice names for the factors $\dot Q\in \Xi_\zeta$ and make sure that $\dot{\mathbb{Q}}_\zeta$ is self-indexed. 
In addition, we will define a dense subset $\mathbb{P}^*$ of hereditarily $\lambda$-small conditions, an extended ``refined history domain'' $\pi^+$, and a ``refined history'' $H^*:\mathbb{P}^*\to \mathcal P(\pi^+)$. These are formalized in the following notions. \begin{definition}\label{DefSstidy} Let $\lambda$ be an uncountable cardinal. A \emph{tidy Suslin-$\lambda$-small} iteration is a Suslin-$\lambda$-small iteration $\mathbb{P}$ with the following additional components and properties: \begin{enumerate}[(1)] \item For $\xi\in\pi\smallsetminus\pi_1$, $\mathbb{P}^*_\xi$ is a dense subset of $\mathbb{P}_\xi$. \item $\mathbb{P}^*_{\pi_1}=\mathbb{P}_{\pi_1}$. \item If $\xi\in\Sigma$ and $p\in\mathbb{P}^*_{\xi+1}$ then $p{\restriction}\xi\in\mathbb{P}^*_\xi$, $p(\xi)$ is a nice $\mathbb{P}^*_\xi$-name of a real and $\Vdash_{\mathbb{P}_\xi}p(\xi)\in\mathbb{S}_\xi\cap V^{\mathbb{P}^-_\xi}$. \item For $\xi\in \Pi$, $\Xi_\xi$ is composed of nice $\mathbb{P}^*_\xi$-names for real-number-posets of size ${<}\lambda$ (see Definition~\ref{Defnicenm}(2)). In addition, if $p\in\mathbb{P}^*_{\xi+1}$ then the following is satisfied: \begin{enumerate}[(i)] \item $p{\upharpoonright}\xi\in\mathbb{P}^*_\xi$. \item $\dom p(\xi)$ is decided by $p{\upharpoonright}\xi$, that is, $p{\upharpoonright}\xi\Vdash_{\mathbb{P}^*_\xi}$``$\dom p(\xi)=d^p_\xi$'' for some finite $d^p_\xi\subseteq\Xi_\xi$. \item For each $\dot{Q}\in\dom p(\xi)$, $p(\xi,\dot{Q})$ is a nice $\mathbb{P}^*_\xi$-name of a real and $\Vdash_{\mathbb{P}^*_\xi}p(\xi,\dot{Q})\in\dot{Q}$. \item $p(\xi)=\langle p(\xi,\dot{Q}):\dot{Q}\in\dom p(\xi)\rangle$ (in particular, $p(\xi)$ is a $\mathbb{P}^*_\xi$-name). \end{enumerate} \item If $\pi_1\leq\xi<\pi$ then $\mathbb{P}^*_\xi\subseteq\mathbb{P}^*_{\xi+1}$. \item If $\gamma\in(\pi_1,\pi]$ is limit then $\mathbb{P}^*_\gamma=\bigcup_{\xi<\gamma}\mathbb{P}^*_\xi$. \end{enumerate} Denote $\mathbb{P}^*:=\mathbb{P}^*_\pi$.
\end{definition} Note that tidy S$\lambda$s iterations are coherent in the sense that $\mathbb{P}^*_\eta\cap\mathbb{P}_\xi=\mathbb{P}^*_\xi$ for any $\pi_1\leq\xi\leq\eta\leq\pi$. Conditions (5) and (6) were included to guarantee this. \begin{definition}\label{def:refhistory} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration. \begin{enumerate}[(1)] \item For $\pi_1\leq\xi\leq\pi$ define the \emph{refined history domain} $\xi^+:=\xi\cup\bigcup_{\zeta\in\xi\cap\Pi}\{\zeta\}\times\Xi_\zeta$. \item For $p\in\mathbb{P}^*$ and a $\mathbb{P}^*$-name $\tau$ we define the \emph{refined history} $H^*(p)\subseteq\pi^+$ and $H^*(\tau)\subseteq\pi^+$ as follows. For $\pi_1\leq\xi\leq\pi$ we define $H^*$ by recursion on $\xi$ for $p\in\mathbb{P}^*_\xi$ and for a $\mathbb{P}^*_\xi$-name $\tau$. \begin{enumerate}[(i)] \item For $p\in\mathbb{P}^*_{\pi_1}$, $H^*(p):=H(p)$. \item For $\xi=\zeta+1$ and $p\in\mathbb{P}^*_{\zeta+1}$, $H^*(p)=H^*(p{\restriction}\zeta)$ when $\zeta\notin\supp p$, otherwise: \begin{itemize} \item if $\zeta\in\Sigma$ then \[H^*(p):=H^*(p{\restriction}\zeta)\cup\{\zeta\}\cup H^*(p(\zeta));\] \item if $\zeta\in\Pi$ then \[H^*(p):=H^*(p{\restriction}\zeta)\cup\{\zeta\}\cup(\{\zeta\}\times\dom(p(\zeta)))\cup\bigcup_{\dot{Q}\in\dom(p(\zeta))}\bigl(H^*(\dot{Q})\cup H^*(p(\zeta,\dot Q))\bigr).\] \end{itemize} \item When $\xi>\pi_1$ is limit and $p\in\mathbb{P}^*_\xi$, then $H^*(p)$ has already been defined (because $p\in\mathbb{P}^*_\zeta$ for some $\zeta<\xi$). \item For any $\mathbb{P}^*_\xi$-name $\tau$ define, by $\in$-recursion, \[H^*(\tau):=\bigcup\{H^*(\sigma)\cup H^*(p):(\sigma,p)\in\tau\}.\] \end{enumerate} \end{enumerate} \end{definition} Tidy S$\lambda$s iterations have many features that ease their manipulation; in particular, they have $\lambda$-small history. \begin{lemma}\label{smallH} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration with $\lambda$ regular. Then, for any $p\in\mathbb{P}^*$: \begin{enumerate}[(a)] \item $|H^*(p)|<\lambda$. \item $H(p)=H^*(p)\cap\pi$.
\item $H(\tau)=H^*(\tau)\cap\pi$ for any $\mathbb{P}^*_\pi$-name $\tau$. \end{enumerate} In particular, $\mathbb{P}$ has $\lambda$-small history. \end{lemma} \begin{proof} We prove (a), (b) and (c) simultaneously for all $p\in\mathbb{P}^*_\xi$ by recursion on $\pi_1\leq\xi\leq\pi$. It is clear that (c) follows from (b). In the case $\xi=\pi_1$, $H^*(p)=\supp p=H(p)$, which is finite. For the successor step $\xi=\zeta+1$, assume $\zeta\in\supp p$ (the case $\zeta\notin\supp p$ is immediate). If $\zeta\in\Sigma$ then $p(\zeta)$ is a nice $\mathbb{P}^*_\zeta$-name of a real, so it is determined by some countable $A\subseteq \mathbb{P}^*_\zeta$. Hence \[ H^*(p) = H^*(p{\restriction}\zeta)\cup\{\zeta\}\cup \bigcup\{H^*(r):\, r\in A\} \] so, by induction hypothesis, $|H^*(p)|<\lambda$. Now assume $\zeta\in \Pi$. Since any $\dot Q\in\Xi_\zeta$ is a nice $\mathbb{P}^*_\zeta$-name for a real-number-poset of size ${<\lambda}$, it is determined by some $B_{\dot Q}$ of size ${<\lambda}$. Hence \[ H^*(\dot Q)=\bigcup_{s\in B_{\dot Q}}H^*(s),\text{\ and }|H^*(\dot Q)|<\lambda, \] the latter by induction hypothesis. On the other hand, for any $\dot Q\in\dom p(\zeta)$, $p(\zeta,\dot Q)$ is a nice $\mathbb{P}^*_\zeta$-name of a real, so it is determined by some countable $A_{\dot Q}\subseteq\mathbb{P}^*_\zeta$. Hence \[H^*(p(\zeta,\dot Q))=\bigcup_{r\in A_{\dot Q}}H^*(r),\] which has size ${<}\lambda$ by induction hypothesis. As \[H^*(p)=H^*(p{\upharpoonright}\zeta)\cup\{\zeta\}\cup(\{\zeta\}\times\dom p(\zeta)) \cup\bigcup_{\dot{Q}\in\dom p(\zeta)}(H^*(\dot{Q})\cup H^*(p(\zeta,\dot{Q}))),\] we get $|H^*(p)|<\lambda$. On the other hand, since $p(\zeta)=\langle p(\zeta,\dot Q): \dot Q\in\dom p(\zeta)\rangle$, \[H(p(\zeta))=\bigcup_{\dot{Q}\in\dom p(\zeta)}(H(\dot{Q})\cup H(p(\zeta,\dot{Q}))),\] so we can deduce (b). The limit step is immediate. \end{proof} As promised, we show that any Suslin-$\lambda$-small iteration is isomorphic to a tidy one.
\begin{comment} Again, note that $H^*$ is not robust at all: In particular, if $\mathbb{P}_\zeta\Vdash \dot Q'=\dot Q$, then $H^*(\dot Q')$ can be completely different from $H^*(\dot Q)$, and generally it will be ``arbitrarily large''. To get a useful notion (where we get in particular ``nice names'' for reals $\dot r$ such that $H^*(\dot r)$ is small), we will generally have to modify the S$\lambda$s-iteration in a ``trivial way'', more concretely, we have to replace inductively at each step $\zeta$ the iterand $\mathbb{Q}_\zeta$ by an equivalent name $\mathbb{Q}_\zeta'$ (i.e., $\mathbb{P}_\zeta\Vdash \mathbb{Q}_\zeta=\mathbb{Q}_\zeta'$).\footnote{Note that $\Vdash_\mathbb{P} Q=Q'$ implies that $P*Q$ and $P*Q'$ are literally the same forcing notion, as each of them contains all pairs $(p,q)$ with $p\Vdash q\in Q$ (below a certain rank).} But if we define $H^*$ using the new names $\mathbb{Q}_\zeta'$, we will get different values (and more suitable ones, if we choose the $\mathbb{Q}_\zeta'$ more suitable). In the following lemma we explicitly inductively give suitable choices for $\mathbb{Q}_\zeta$ (and when we mention $H^*$ we mean the $H^*$ referring to these choices), together with a dense subset $\mathbb{P}^*$ or $\mathbb{P}$ of ``nice'' names: \end{comment} \begin{comment} \begin{deflemma}\label{smallH} Let $\mathbb{P}'$ be a Suslin-$\lambda$-small iteration with $\lambda$ regular. Then there is an isomorphic iteration $(\mathbb{P}_\xi,\mathbb{Q}_\xi)_{\xi\in\pi}$, a dense set $\mathbb{P}^*$ of $\mathbb{P}$, a $\pi^+\supseteq \pi$ and a ``refined history'' $H^*:\mathbb{P}^*\to \mathcal P(\pi^+)$ such that for all $p\in\mathbb{P}^*$ the following is satisfied: \begin{enumerate}[(a)] \item $|H^*(p)|<\lambda$. \item $H(p)=H^*(p)\cap\pi$. \end{enumerate} For $X\subseteq \pi^+$, set $\mathbb{P}^*{\restriction}X:=\{p\in\mathbb{P}^*:\, H^*(p)\subseteq X\}$. 
\begin{enumerate}[(a)] \setcounter{enumi}{2} \item\label{item:restr} If $|X|<\mu$ for some regular $\aleph_1$-inaccessible\footnote{Recall that a cardinal $\mu$ is \emph{$\kappa$-inaccessible} if $\theta^\nu<\mu$ for any cardinals $\theta<\mu$ and $\nu<\kappa$.} $\mu$, then $|\mathbb{P}^*{\restriction}X|<\mu$. \end{enumerate} We call an iteration with these properties a \emph{tidy S$\lambda$s-iteration}. \end{deflemma} Note that generally $\mathbb{P}^*{\restriction} X$ will not be a complete subforcing of $\mathbb{P}^*$; but we will only be interested in the case where it is, see Lemma~\ref{smallrestr}. \end{comment} \begin{lemma}\label{lem:eqvtidy} If $\lambda$ is regular uncountable, then any Suslin-$\lambda$-small iteration is isomorphic to a tidy S$\lambda$s iteration. \end{lemma} \begin{proof} By recursion on $\pi_1\leq\xi\leq\pi$ we construct the tidy iteration up to $\mathbb{P}_\xi$, along with its components, and the isomorphism $i_\xi:\mathbb{P}'_\xi\to\mathbb{P}_\xi$. We also guarantee that $i_\xi$ extends $i_\zeta$ for any $\pi_1\leq\zeta<\xi$. \textbf{Case $\bm{\xi=\pi_1}$:} Set $\mathbb{P}^*_{\pi_1}=\mathbb{P}_{\pi_1}=\mathbb{P}'_{\pi_1}$ and let $i_{\pi_1}$ be the identity function. \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Sigma}$:} As $i_\zeta:\mathbb{P}'_\zeta\to\mathbb{P}_\zeta$ is an isomorphism, we let $\mathbb{P}^-_\zeta=i_{\zeta}[\mathbb{P}^{\prime-}_\zeta]$, which is clearly a complete subforcing of $\mathbb{P}_\zeta$, and evaluate $\mathbb{S}_\zeta$ accordingly. Note that $i_{\zeta}$ can be extended to an isomorphism $i_{\zeta+1}:\mathbb{P}'_{\zeta+1}\to\mathbb{P}_{\zeta+1}$ in a natural way. We define $\mathbb{P}_{\zeta+1}^*$ as the set of pairs $(p,\dot q)$ where $p\in \mathbb{P}_\zeta^*$, $\dot q$ is a nice $\mathbb{P}^*_\zeta$-name for a real, and $p \Vdash \dot q\in \mathbb{S}_\zeta\cap V^{\mathbb{P}_\zeta^-}$. This is dense according to Fact~\ref{fact:basicnice}(\ref{item:nicereal}).
\textbf{Case $\bm{\xi=\zeta+1}$ with $\zeta\in \Pi$:} Fix a $\dot Q'$ in $\Xi'_\zeta$. As $i_\zeta^*(\dot Q')$ is forced by $\mathbb{P}_\zeta$ to have size ${<}\lambda$, according to Fact~\ref{fact:basicnice}(\ref{item:niceQ}), there is an equivalent $\mathbb{P}^*_\zeta$-nice name $\dot Q$ determined by $B_{\dot Q}\subseteq \mathbb{P}^*_\zeta$ of size ${<}\lambda$. Let $\Xi_\zeta$ be the set of all these names, and define $\dot{\mathbb{Q}}_\zeta$ to be the self-indexed FS product of the $\dot Q$ in $\Xi_\zeta$. We can obtain the isomorphism $i_{\zeta+1}$ in a natural way. We define $\mathbb{P}_{\zeta+1}^*$ to consist of the $(p,q)$ as in Fact~\ref{fact:specialH2} (using $D=\mathbb{P}_\zeta^*$). \textbf{Case $\xi>\pi_1$ limit:} As $\mathbb{P}$ is a FS iteration, we (have to) set $\mathbb{P}_\xi=\bigcup_{\zeta<\xi}\mathbb{P}_\zeta$; and we set $\mathbb{P}^*_\xi:=\bigcup_{\zeta<\xi}\mathbb{P}^*_\zeta$ and $i_\xi:=\bigcup_{\zeta<\xi}i_\zeta$. \end{proof} For later reference, note: Assume that we can extend some $2$G-automorphism $f$ to $\mathbb{P}_\zeta$, and that $\dot Q\in \Xi_\zeta$. Set $\supp(f):=\bigcup\{B_\delta:\, f{\restriction} B_\delta\ne\textrm{id}_{B_\delta}\}$. Then \begin{equation}\label{eq:unclear} \supp(f)\cap H^*(\dot Q)=\emptyset\text{ implies } \hat f^*_\zeta(\dot Q)=\dot Q. \end{equation} This follows from Lemmas~\ref{automident}(b) and~\ref{smallH}(c). \begin{comment} \begin{proof} By induction on $\pi_1\leq\xi\leq\pi$ we construct $\mathbb{P}^*_\xi\subseteq \mathbb{P}_\xi$ dense (and ``coherent'' in the sense that $\mathbb{P}^*_\zeta=\mathbb{P}^*_\xi\cap \mathbb{P}_\zeta$ for all $\zeta<\xi$), as well as $\xi^+$ and $H^*: \mathbb{P}^*_\xi\to \xi^+$ (again, coherent), and show (a)--(c). \textbf{Case $\bm{\xi=\pi_1}$:} Set $\mathbb{P}^*_{\pi_1}=\mathbb{P}_{\pi_1}=\mathbb{P}'_{\pi_1}$, $\pi_1^+=\pi_1$, and $H^*(p):=\supp(p)=H(p)$, which satisfies (b). As $\supp(p)$ is finite, we get (a).
For nonempty $X$, $|\mathbb{P}^*_{\pi_1}{\restriction} X|=|\omega\times X|<\mu$, which shows~(\ref{item:restr}). \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Sigma}$:} As $\mathbb{P}_\zeta$ is isomorphic to $\mathbb{P}'_\zeta$, we can interpret $\mathbb{P}^-_\zeta$ as complete subforcing of $\mathbb{P}_\zeta$, and evaluate $\mathbb{S}_\zeta$ accordingly. We define $\mathbb{P}_{\zeta+1}^*$ as the set of pairs $(p,q)$ where $p\in \mathbb{P}_\zeta^*$, $q$ is a nice name for a real determined by some countable $A\subseteq \mathbb{P}_\zeta^*$, and $p \Vdash q\in V^{\mathbb{P}_\zeta^-}\cap \mathbb{S}_\zeta$. This is dense according to Fact~\ref{fact:basicnice}(\ref{item:nicereal}). We set $\xi^+:=\zeta^+\cup \{\zeta\}$, and for $(p,q)$ as above, we define \[ H^*((p,q)) := H^*(p)\cup\{\xi\}\cup \bigcup\{H^*(r):\, r\in A\}. \] By induction, (a) and (b) are satisfied: for (b), note that $q$ is a nice name and see~\eqref{eq:Hotherspecial}. To see~(\ref{item:restr}), note that $(p,q)\in \mathbb{P}_{\zeta+1}^*{\restriction} (X\cup\{\zeta\})$ is equivalent to $H^*(p)\subseteq X$ and $H^*(r)\subseteq X$ for all $r\in A$, i.e., to $A\cup\{p\}\subseteq \mathbb{P}_{\zeta}^*{\restriction} X$. Due to ccc, there are at most $|\mathbb{P}^*_\zeta{\restriction} X|^{\aleph_0}<\mu$ many possibilities for such $(p,q)\in \mathbb{P}_{\zeta+1}^*$. \textbf{Case $\bm{\xi=\zeta+1}$ with $\zeta\in \Pi$:} Fix a $\dot Q'$ in $\Xi'_\zeta$, and interpret it as $\mathbb{P}_\zeta$-name. As $\dot Q'$ is forced to be of size ${<}\lambda$. According to Fact~\ref{fact:basicnice}(\ref{item:niceQ}), % there is an equivalent nice name $\dot Q$ determined by $B_{\dot Q}\subseteq \mathbb{P}^*_\zeta$ of size ${<}\lambda$. Let $\Xi_\zeta$ be the set of all these names, and define $\mathbb{Q}_\zeta$ to be the self-indexed FS product of the $\dot Q$ in $\Xi_\zeta$. 
We define $\mathbb{P}_{\zeta+1}^*$ to consist of the $(p,q)$ as in Fact~\ref{fact:specialH2} (using $D=\mathbb{P}_\zeta^*$), and set \[ \xi^+:=\zeta^+\cup\{\zeta\}\cup (\zeta\times \Xi_\zeta), \] and for $\dot Q\in \Xi_\zeta$ and $(p,q)\in \mathbb{P}^*_\xi$ we set \begin{multline*} H^*_{\dot Q}:=\bigcup\{H^*(s):\, s\in B_{\dot Q}\} \\ H^*((p,q)) = H^*(p)\cup\{\zeta\}\cup (\{\zeta\}\times\dom(q))\cup \bigcup_{\dot Q\in\dom(q)}(H^*_{\dot Q} \cup \bigcup_{r\in A_{\dot Q}} H^*(r)) \end{multline*} It is again clear that we satisfy (a) and (b); for (b) see~\eqref{eq:specialH}. In particular, \begin{equation}\label{eq:unclear1} H^*_{\dot Q}\cap \zeta=H(\dot Q) \text{ and }|H^*_{\dot Q}|<\lambda \end{equation} (\ref{item:restr}) is similar to the previous case: $q$ is determined by: The finite set $\dom(q)\subseteq (\Xi_\zeta\cap X)$ (${\le}X<\mu$ many possibilities) and the nice names $q(\dot Q)$ (${\le}|\mathbb{P}^*_\zeta{\restriction} X|^{\aleph_0}<\mu$ many possibilities). For later reference, let us note: Assume that we can extend some automorphism $f\in\Hgroup^*$ to $\mathbb{P}_\zeta$, and that $\dot Q\in \Xi_\zeta$. Set $\supp(f):=\bigcup\{B_\delta:\, f{\restriction} B_\delta\ne\textrm{Id}\}$. Then \begin{equation}\label{eq:unclear} \supp(f)\cap H^*_{\dot Q}=\emptyset\text{ implies } \hat f^*(\dot Q)=\dot Q. \end{equation} This follows from~\eqref{eq:unclear1} and from Lemma~\ref{automident}(a). \textbf{Case $\xi>\pi_1$ limit:} As $\mathbb{P}$ is a FS iteration, we (have to) set $\mathbb{P}_\xi=\bigcup_{\zeta<\xi}\mathbb{P}_\zeta$; and we set $\mathbb{P}^*_\xi:=\bigcup_{\zeta<\xi}\mathbb{P}^*_\zeta$. Clearly $H^*$ still satisfies~(a) and~(b). To see~(\ref{item:restr}), first assume $\cof(\xi)\ge \mu$. Then $\xi^+\cap X = \zeta^+\cap X$ for some $\zeta<\xi^+$, and then $\mathbb{P}^*_\xi{\restriction} X=\mathbb{P}^*_\zeta{\restriction} X$. 
If on the other hand $\cof(\xi)<\mu$, pick a cofinal subset $F$ of $\pi_1$ of size ${<}\mu$, and note that $\mathbb{P}^*_\xi{\restriction} X= \bigcup_{\zeta\in F}\mathbb{P}^*_\zeta{\restriction} X$, which has size ${<}\mu$. \end{proof} \end{comment} \begin{comment} When $\xi>\pi_1$ is limit and $p\in\mathbb{P}^*_\xi$, $H^*(p)$ has already been defined (because $p\in\mathbb{P}^*_\eta$ for some $\eta<\xi$). For a $\mathbb{P}^*_\xi$-name $\tau$ define (by $\in$-recursion on $\tau$) \[ H^*(\tau):=\bigcup\{H^*(\sigma)\cup H^*(p):(\sigma,p)\in\tau\}. \] So let us show (a) and (b): Assume $\zeta\in\dom(p)$ (otherwise there is nothing to do). If $\zeta\in\Sigma$ then $p(\zeta)$ is a nice $\mathbb{P}_\zeta^*$-name of a real, so it depends on countably many conditions and, by induction hypothesis, $|H^*(p(\zeta))|<\lambda$ which gives (a). And (b) (and thus also (c)) follows by the inductive assumption (c) for $\mathbb{P}_\zeta^*$-names. We will have to replace $\mathbb{Q}'_\zeta$, or rather each $\dot Q$ in $\Xi_\zeta$, with an equivalent name (to get a reasonable $H^*$): Recall that \footnote{For notational simplicity we identify $\mathbb{P}_{\zeta+1}$ with $\mathbb{P}_\zeta*\mathbb{Q}_\zeta$.} $H^*((p,q))$ is \[H^*(p)\cup\{\zeta\}\cup(\{\zeta\}\times\dom q) \cup\bigcup_{\dot{Q}\in\dom q}(H^*(\dot{Q})\cup H^*(q(\dot{Q}))).\] $p$ determines $\dom(q)$, and each $q(\dot Q)$ is a nice $\mathbb{P}^*_\zeta$-name of a real (for $\dot Q\in \dom(q)$), and each $\dot{Q}\in\Xi_\zeta$ is a ${<}\lambda$-sequence of nice $\mathbb{P}^*_\zeta$-names for reals. This shows (a). And $H^*((p,q))\cap \pi$ is \[\bigg( (H^*(p)\cap \pi) \cup \{\zeta\} \cup \bigcup_{\dot{Q}\in\dom q} \biggl( (H^*(\dot{Q})\cap \pi)\cup (H^*(q(\dot{Q}))\cap \pi) \biggr) \biggr),\] which, by induction, is $H((p,q))$.\footnote{Well, formally you have to be careful to use names for ${<}\lambda$-sequences in such a way that \dots fill} \end{comment} In our applications, $\mathbb{P}^-_\xi$ has the following form. 
\begin{definition}\label{def:supprestr} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration. For any $X\subseteq\pi^+$, define $\mathbb{P}^*{\upharpoonright}X:=\{p\in\mathbb{P}^*_\pi: H^*(p)\subseteq X\}$. \end{definition} Note that in general $\mathbb{P}^*{\restriction} X$ will not be a complete subforcing of $\mathbb{P}^*$; but we will only be interested in the case where it is (see Lemma~\ref{smallrestr}). \begin{lemma}\label{item:restr} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration with $\lambda$ uncountable regular, and let $\mu\geq\lambda$ be regular and $\aleph_1$-inaccessible\footnote{Recall that a cardinal $\mu$ is \emph{$\kappa$-inaccessible} if $\theta^\nu<\mu$ for any cardinals $\theta<\mu$ and $\nu<\kappa$.}. If $X\subseteq\pi^+$ and $|X|<\mu$ then $|\mathbb{P}^*{\upharpoonright}X|< \mu$. \end{lemma} \begin{proof} By induction on $\xi\in[\pi_1,\pi]$ we show that, whenever $X\subseteq\xi^+$ has size ${<}\mu$, $|\mathbb{P}^*{\upharpoonright}X|<\mu$. For $\xi=\pi_1$, it is clear that $|\mathbb{P}^*{\upharpoonright}X|=\max\{|\omega\times X|,1\}<\mu$. For limit $\xi>\pi_1$, $\mathbb{P}^*{\upharpoonright}X=\bigcup_{\eta\in c}\mathbb{P}^*{\upharpoonright}(X\cap \eta^+)$ where $c$ is a cofinal subset of $\xi$ of size $\cf(\xi)$. If $\cf(\xi)<\mu$ then $|\mathbb{P}^*{\upharpoonright}X|<\mu$ because it is a union of ${<}\mu$ many sets of size ${<}\mu$; if $\cf(\xi)\geq\mu$ then $X\subseteq\eta^+$ for some $\eta<\xi$, so $|\mathbb{P}^*{\upharpoonright}X|<\mu$ by induction hypothesis. For the successor step $\xi=\zeta+1$, assume $X\subseteq(\zeta+1)^+$ and $X\nsubseteq\zeta^+$ (the non-trivial case). Put $X_0:=X\cap\zeta^+$. By induction hypothesis, $|\mathbb{P}^*{\upharpoonright}X_0|<\mu$ and, since $\mu$ is $\aleph_1$-inaccessible, there are at most $|\mathbb{P}^*{\upharpoonright}X_0|^{\aleph_0}<\mu$ many nice $\mathbb{P}^*{\upharpoonright}X_0$-names of reals. Let $p\in\mathbb{P}^*{\upharpoonright}X$.
If $\zeta\in \Sigma$ then $p(\zeta)$ is a nice $\mathbb{P}^*{\upharpoonright}X_0$-name of a real; if $\zeta\in \Pi$, then $p(\zeta)$ is determined by a finite partial function from $(\{\zeta\}\times\Xi_\zeta)\cap X$ into the set of nice $\mathbb{P}^*{\upharpoonright}X_0$-names of reals, and there are ${<}\mu$-many such finite partial functions. Hence, $|\mathbb{P}^*{\upharpoonright}X|<\mu$. \end{proof} \begin{corollary}\label{cor:small} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration with $\lambda$ regular. \begin{enumerate}[(a)] \item\label{item:quaxa} $\mathbb{P}$ has $\lambda$-small history. \item\label{item:quaxb} Every $p\in \mathbb{P}^*$ is an element of $\mathbb{P}^*{\restriction} X$ for some $X\subseteq\pi^+$ of size ${<}\lambda$. \item\label{item:quaxc} For every $\mathbb{P}$-name of a real there is an equivalent $\mathbb{P}^*{\restriction} X$-name for some $X\subseteq\pi^+$ of size ${<}\lambda$. \item\label{item:quaxd} Assume that $|\pi|^{\aleph_0}=|\pi|$, that $\lambda\le |\pi|^+$, and that $|\Xi_\zeta|\le |\pi|$ for each $\zeta\in \Pi$. Then $|\mathbb{P}^*|\le |\pi|$. \end{enumerate} \end{corollary} \begin{proof} (\ref{item:quaxa}) follows from Lemma~\ref{smallH}(a),(b); (\ref{item:quaxb}) follows from Lemma~\ref{smallH}(a) (using $X:=H^*(p)$); and (\ref{item:quaxc}) follows from (\ref{item:quaxb}) (use a nice $\mathbb{P}^*$-name for a real). For (\ref{item:quaxd}), set $\mu=|\pi|^+$ (the cardinal successor). Note that $\mu$ is $\aleph_1$-inaccessible because $|\pi|^{\aleph_0}=|\pi|$, and $|\pi^+|\le |\pi|\times \sup_{\zeta\in\Pi}\{|\Xi_\zeta|\}\le |\pi|<\mu$. Therefore $|\mathbb{P}^*|=|\mathbb{P}^*{\restriction} \pi^+|<\mu$ by Lemma~\ref{item:restr} (for $X=\pi^+$). \end{proof} In our applications, $\mathbb{P}^-_\xi=\mathbb{P}^*{\upharpoonright}C_\xi$ with $C_\xi\subseteq\xi^+$ for all $\xi\in\Sigma$.
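For orientation, the counting in the successor step of the proof of Lemma~\ref{item:restr} can be summarized in a single (rough) estimate, with $X_0=X\cap\zeta^+$ as there: a condition $p\in\mathbb{P}^*{\upharpoonright}X$ is determined by $p{\upharpoonright}\zeta\in\mathbb{P}^*{\upharpoonright}X_0$ together with $p(\zeta)$, so
\[
|\mathbb{P}^*{\upharpoonright}X|\;\le\;
|\mathbb{P}^*{\upharpoonright}X_0|\cdot
\bigl(|X|\cdot |\mathbb{P}^*{\upharpoonright}X_0|^{\aleph_0}\bigr)^{{<}\omega}
\;<\;\mu,
\]
where the factor $|\mathbb{P}^*{\upharpoonright}X_0|^{\aleph_0}$ bounds the number of nice names of reals (this is where the $\aleph_1$-inaccessibility of $\mu$ is used), and the exponent ${<}\omega$ accounts for the finite partial functions appearing at a $\Pi$-step.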
We will now show how to build symmetric S$\lambda$s-iterations: \begin{comment} , but first let us note: \begin{lemma}\label{lem:UNCLEAR} Assume each $h\in \Hgroup^*$ can be extended to $\mathbb{P}_\zeta$. \begin{enumerate} \item As in FILL, let $\dot Q$ be a nice $\mathbb{P}^*_\zeta$-name for a poset of size ${<\lambda}$, whose universe is a set of reals, and define $H^*(\dot Q)$ as in FILL. Set $X_{\dot Q}:=\bigcup\{B_\delta: \delta\in\pi_0, B_\delta\cap H^*(\dot Q)\neq \emptyset\}$. If $h_1$ and $h_2$ agree on $X_{\dot Q}$, then $\hat h_1^*({\dot Q})=\hat h_2^*({\dot Q})$. \item\label{item:point} If $C\subseteq \zeta^+$, $|C|<\mu$ for $\mu$ uncountable, and $\mathfrak Q$ is a set of names as in (1) of size ${<}\mu$, such that $H^*(\dot Q)\subseteq C$ for all $\dot Q\in \mathfrak Q$, then $\{\hat h^*(\dot Q):\, h\in\Hgroup^*, \dot Q\in \mathfrak Q\}$ has size ${<}\mu$. \end{enumerate} \end{lemma} \begin{proof} For (1): Set $g:=h_1\circ h_2^{-1}$, so $g{\restriction} X_{\dot Q}$ is the identity, now use \ref{automident}(b). For (2): Set $C':=\bigcup{B_\delta:\, B_\delta\cap C\neq \emptyset}$. Set $\Hgroup':=\{h\in \Hgroup^*: h\text{ is the identity outside }C'\}$. Note that for each $h\in\Hgroup^*$ there is a $g\in \Hgroup'$ such that $h$ and $g$ agree on $C'$, so $\hat g^*(\dot Q) = \hat h^*(\dot Q)$ for all $\dot Q\in\mathfrak Q$. Also, $|\Hgroup'|\le |C|\times |\omega_1|<\mu$. \end{proof} \end{comment} \begin{definition}\label{def:closed} Let $\mathbb{P}$ be a tidy S$\lambda$s iteration, and let $h:\pi_1\to\pi_1$ be a $2$G-automorphism. \begin{enumerate}[(1)] \item Let $\xi\in\Pi$. We say that \emph{$\Xi_\xi$ is closed}, if, whenever $h\in\Hgroup^*$ and $\hat{h}_\xi$ can be defined (see Definition~\ref{DefAutom}), $\dot Q\in\Xi_\xi$ implies $\hat{h}^*_\xi(\dot Q)\in\Xi_\xi$ (where $\Hgroup^*$ is the group of $2$G-automorphisms fixed in Definition~\ref{DefAutom}(5)). 
\item We say that $C\subseteq\pi^+$ is \emph{closed} if, for any $h\in\Hgroup^*$, it satisfies: \begin{enumerate}[(i)] \item For any $\delta<\pi_0$, $B_\delta\cap C\neq\emptyset$ implies $B_\delta\subseteq C$. \item For any $\xi\in\Pi$, whenever $\hat{h}_\xi$ can be defined, if $(\xi,\dot{Q})\in C$ then $(\xi,\hat{h}^*_\xi(\dot Q))\in C$ and $\xi\in C$. \end{enumerate} \end{enumerate} \end{definition} \begin{lemma}\label{lem:kjgeqwtr} Assume that $\mathbb{P}$ is a tidy S$\lambda$s iteration such that the following requirements are satisfied: \begin{enumerate}[(I)] \item\label{item:blaprod} For any $\xi\in\Pi$, $\Xi_\xi$ is closed. \item\label{item:blasigma} For any $\xi\in\Sigma$, $\mathbb{P}^-_\xi=\mathbb{P}^*{\restriction}C_\xi$ where $C_\xi\subseteq\xi^+$ is closed. \end{enumerate} Then we get: \begin{enumerate}[(a)] \item\label{item:symmetric} $\mathbb{P}$ is symmetric (i.e., $h$-symmetric for all $h\in\Hgroup^*$). \item $\hat h[\mathbb{P}^*{\restriction} C]= \mathbb{P}^*{\restriction} C$ for all closed $C\subseteq \pi^+$. \end{enumerate} \end{lemma} Of course we will also have to make sure that the assumption ``$\mathbb{P}$ is a tidy S$\lambda$s-iteration'' is satisfied. The nontrivial points of these assumptions are: \begin{enumerate} \item[(I-b)] For $\zeta\in \Pi$ the FS product $\mathbb{Q}_\zeta$ is ccc. \\ (In our case this will be trivial, as all factors $\dot Q$ will be Knaster.) \item[(II-b)] For $\zeta\in \Sigma$, $\mathbb{P}^*{\restriction} C_\zeta$ is a complete subforcing of $\mathbb{P}^*$. \\ We will see in Lemma~\ref{smallrestr} how to achieve this.
\end{enumerate} \begin{proof} By induction on $\xi\in[\pi_1,\pi]$ we show that $\hat{h}_\xi$ can be defined for any $h\in\Hgroup^*$ (towards (a)), and that (b) is valid for any closed $C\subseteq\xi^+$.\footnote{In this proof we only use that $h\in\Hgroup^*$ implies $h^{-1}\in\Hgroup^*$.} \textbf{Case $\bm{\xi=\pi_1}$:} It is clear that $\hat{h}_{\pi_1}$ can be defined; (b) is clear because $h[B_\delta]=B_\delta$ for any $\delta<\pi_0$ and $h\in\Hgroup^*$. \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Sigma}$:} (a): Note that $\Vdash\hat h^*_\zeta(\mathbb{Q}_\zeta)= \hat h^*_\zeta(\mathbb{S}_\zeta^{V^{\mathbb{P}^*{\restriction} C_\zeta}})=\mathbb{S}_\zeta^{V^{\hat h_\zeta[\mathbb{P}^*{\restriction} C_\zeta]}}$ (as $\mathbb{S}_\zeta$ only uses parameters from the ground model), which is $\dot{\mathbb{Q}}_\zeta$ by induction hypothesis (as we assume that $C_\zeta$ is closed). So we can extend $\hat h_\zeta$ to $\hat h_{\zeta+1}$. (b): Note that $\xi^+=\zeta^+\cup\{\zeta\}$, so if $C\subseteq \xi^+$ is closed (and not already a subset of $\zeta^+$), then $C=C'\cup \{\zeta\}$ with $C'$ closed, and $(p,\dot q) \in\mathbb{P}^*{\restriction} C$ means that $p\in\mathbb{P}^*{\restriction}C'$ and $\dot q$ is a nice $\mathbb{P}^*{\restriction} C'$-name for an element of $\dot{\mathbb{Q}}_\zeta$. Then $\hat h^*_\zeta(\dot q)$ is a nice $\hat h_\zeta[\mathbb{P}^*{\restriction} C']$-name for an element of $\hat h^*_\zeta(\dot{\mathbb{Q}}_\zeta)$, which is by induction hypothesis a nice $\mathbb{P}^*{\restriction} C'$-name for an element of $\dot{\mathbb{Q}}_\zeta$, i.e., $\hat h_{\zeta+1}((p,\dot q))\in\mathbb{P}^*{\restriction} C$. This shows that $\hat h_\xi[\mathbb{P}^*{\restriction} C]\subseteq \mathbb{P}^*{\restriction} C$. As this is also true for the inverse of $h$ (because $h^{-1}\in\Hgroup^*$), by Lemma~\ref{automident}(c) we get equality. 
\textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Pi}$:} First note that, by induction hypothesis, $\hat h_\zeta[\mathbb{P}^*_\zeta]=\mathbb{P}^*_\zeta$ because $\mathbb{P}^*_\zeta=\mathbb{P}^*{\restriction}\zeta^+$ and $\zeta^+$ is closed. (a): Since $\dot{\mathbb{Q}}_\zeta$ is a $\mathbb{P}^*_\zeta$-name for the self-indexed FS product of the (evaluated) set $\Xi_\zeta=\{\dot Q:\, \dot Q\in \Xi_\zeta\}$, $\hat h^*_\zeta(\dot{\mathbb{Q}}_\zeta)$ is the $\mathbb{P}^*_\zeta$-name for the self-indexed FS product of the (evaluated) set $\hat h^*_\zeta[\Xi_\zeta]=\{\hat h^*_\zeta(\dot Q):\, \dot Q\in \Xi_\zeta\}$. But as the ground model set of names $\{\hat h^*_\zeta(\dot Q):\, \dot Q\in \Xi_\zeta\}$ is identical to the ground model set of names $\Xi_\zeta$, their evaluations are identical as well (because $\Xi_\zeta$ is closed, in particular under $h^{-1}$, and by Lemma~\ref{automident}(c)). In other words, $\hat h^*_\zeta(\dot{\mathbb{Q}}_\zeta)=\dot{\mathbb{Q}}_\zeta$, and $\hat h_\zeta$ can be extended to $\mathbb{P}_\xi$. (b): Assume that $C\subseteq \xi^+=\zeta^+\cup\{\zeta\}\cup(\{\zeta\}\times \Xi_\zeta)$ is closed, and that $(p,\dot q)\in \mathbb{P}^*{\restriction} C$ (but to avoid the trivial case, not in $\mathbb{P}^*_\zeta$). This means that $p\in \mathbb{P}^*{\restriction} C'$ for $C'=C\cap \zeta^+$, and it determines $\dom(\dot q)=\{\dot Q_1,\dots , \dot Q_n\}$, such that all $(\zeta, \dot Q_i)$ are in $C$ (and each $\dot q(\dot Q_i)$ is a nice $\mathbb{P}^*{\restriction}C'$-name). Then $\hat h_\zeta(p)\in \mathbb{P}^*{\restriction} C'$ by induction hypothesis, and it determines $\dom(\hat h^*_\zeta(\dot q))=\{\hat h^*_\zeta(\dot Q_1),\dots , \hat h^*_\zeta(\dot Q_n)\}$ (and each $\hat h^*_\zeta(\dot q)(\dot Q_i)$ is a nice $\mathbb{P}^*{\restriction}C'$-name). Accordingly $\hat h_\xi((p,\dot q))\in \mathbb{P}^*{\restriction} C$ as required.
We conclude that $\hat h_\xi[\mathbb{P}^*{\restriction}C]\subseteq\mathbb{P}^*{\restriction}C$, but equality holds because the same is true for $h^{-1}$. \textbf{Case $\bm{\xi}$ limit:} By induction hypothesis, $\hat{h}_\zeta$ is defined for all $\zeta<\xi$, so $\hat h_\xi$ is defined (as its union). On the other hand, if $C\subseteq\xi^+$ is closed then $C=\bigcup_{\zeta<\xi}C\cap\zeta^+$ where each $C\cap\zeta^+$ is closed, so $\hat h_\xi[\mathbb{P}^*{\restriction}C]=\mathbb{P}^*{\restriction}C$ by induction hypothesis. \end{proof} We record a few facts about closed sets. \begin{lemma}\label{lem:closure} Let $\mathbb{P}$ be a symmetric tidy S$\lambda$s iteration. Then: \begin{enumerate}[(a)] \item\label{item:sjlgclosed} The union of closed sets is closed. \item If $A\subseteq \pi^+$ has size ${<}\mu$, with $\mu\ge\max\{\lambda,\aleph_2\}$ regular, then the closure $\overline A$ of $A$ (the smallest closed set containing $A$) has size ${<}\mu$. \end{enumerate} \end{lemma} \begin{proof} Property (a) is straightforward. We show by induction on $\xi\in[\pi_1,\pi]$ that (b) holds for any $A\subseteq\xi^+$. \textbf{Case $\bm{\xi=\pi_1}$:} $\overline A=\bigcup\{B_\delta:\, B_\delta\cap A\neq \emptyset\}$. So $|\overline A|=\aleph_1\times |A|<\mu$. \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Sigma}$:} If $A\subseteq \xi^+$ has size ${<}\mu$, then $\overline A\subseteq \overline{(A\cap \zeta^+)}\cup \{\zeta\}$ has size ${<}\mu$. \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Pi}$:} For $h\in\Hgroup^*$, set $\supp(h)=\bigcup\{B_\delta:\, h{\restriction} B_\delta\neq \textrm{id}_{B_\delta}\}$. Let $A^*$ be the closure of \[ \bigcup\{ H^*(\dot Q)\cap\pi_1:\, (\zeta,\dot Q)\in A\}. \] $H^*(\dot Q)$ has size ${<}\lambda \le \mu$ by Lemma~\ref{smallH}(c), and therefore also the set $A^*$ has size ${<}\mu$.
It is clear that $\overline A\subseteq \overline{(A\cap \zeta^+)}\cup \{\zeta\} \cup X$ for \[ X:=\{(\zeta, \hat h^*(\dot Q)):\, (\zeta,\dot Q)\in A, h\in\Hgroup^*\}. \] Since $\Hgroup^*$ is a group, $\{\zeta\}\cup X$ is closed. We claim that we get the same set $X$ if we replace $\Hgroup^*$ with \[ \Hgroup':=\{g\in \Hgroup^*:\, \supp(g)\subseteq A^*\}. \] As $\Hgroup'$ has size ${<}\mu$ (recall that $|\supp(g)|\leq\aleph_1$ for any $g\in\Hgroup^*$), we get $|\overline A|<\mu$, as required. Note that for $f\in \Hgroup^*$ and $(\zeta,\dot Q)\in A$, by~\eqref{eq:unclear} $\supp(f)\cap A^*=\emptyset$ implies $\hat f^*_\zeta(\dot Q)=\dot Q$. And for $h\in \Hgroup^*$, there is a $g\in \Hgroup'$ such that $f:= g^{-1}\circ h$ satisfies $\supp(f)\cap A^*=\emptyset$. (Basically, $g{\restriction}A^*=h{\restriction} A^*$ and $g{\restriction}(\pi_1\smallsetminus A^*)$ is the identity.) So $\hat f^*(\dot Q)=\dot Q$, which implies \[ \hat g^*(\dot Q)=\hat g^*(\hat f^*(\dot Q))=\hat h^*(\dot Q), \] as required. \textbf{Case $\bm{\xi}$ limit:} If $\cof(\xi)\ge \mu$, then $A\subseteq \zeta^+$ for some $\zeta<\xi$, so $|\overline A|<\mu$ by induction hypothesis. Otherwise, $\overline A=\bigcup_{\zeta\in I}\overline{A\cap \zeta^+}$ for some cofinal $I\subseteq\xi$ of size $\cof(\xi)<\mu$, so again $|\overline A|<\mu$. \end{proof} \begin{comment} \begin{deflemma}\label{lem:kjgeqwtr} Assume that $\mathbb{P}$ is a tidy S$\lambda$s iteration (for regular $\lambda$) such that (inductively) the following requirements are satisfied (where ``closed'' is defined in the same induction): \begin{enumerate}[(1)] \item For $\xi=\pi_1$: \emph{Define} $C\subseteq \pi_1$ to be closed, if $B_\delta\cap C\neq\emptyset$ implies $B_\delta\subseteq C$ for $\delta\in\pi_0$. \item\label{item:blasigma} For $\xi=\zeta+1$ with $\zeta\in \Sigma$: \emph{Require} that $\mathbb{P}^-_\zeta=\mathbb{P}^*{\upharpoonright}C_\zeta$ for some closed $C_\zeta$. \\ \emph{Define} $C\subseteq \xi^+$ to be closed iff $C\cap \zeta^+$ is closed.
\item\label{item:blaprod} For $\xi=\zeta+1$ with $\zeta\in \Pi$: \emph{Require} that $\Xi_\zeta$ is a set of names closed under $h$ for all $h\in \Hgroup^*$ (i.e., $\dot Q\in \Xi_\zeta$ implies $\hat h^*(\dot Q)\in \Xi_\zeta$). \\ \emph{Define} $C\subseteq \xi^+$ to be closed iff $C\cap \zeta^+$ is closed and $\{(\zeta,\dot Q)\}\in C$ implies $\{(\zeta,\hat h^*(\dot Q))\}\in C$ for all $h\in \Hgroup^*$. \item For $\xi$ limit: \emph{Define} $C\subseteq \xi^+$ to be closed if all $C\cap \zeta^+$ are closed for $\zeta<\xi$. \end{enumerate} Then (inductively) we get: \begin{enumerate}[(a)] \item\label{item:symmetric} Each $h\in \Hgroup^*$ can be extended to any $\mathbb{P}_\xi$. \\ In other words: $\mathbb{P}$ is symmetric. \item $\hat h[\mathbb{P}^*_\xi{\restriction} C]= \mathbb{P}^*_\xi{\restriction} C$ for all closed $C\subseteq \xi^+$. \item If $A\subseteq \pi^+$ has size ${<}\mu$, $\mu\ge\lambda$ uncountable regular, then the closure $\overline A$ of $A$ (the smallest closed set containing $A$) has size ${<}\mu$. \item\label{item:sjlgclosed} The increasing union of closed sets is closed. \end{enumerate} \end{deflemma} Of course we will have to also make sure that the assumption ``$\mathbb{P}$ is a tidy S$\lambda$s-iteration'' is satisfied. The nontrivial points of these assumptions are: \begin{enumerate}[(1)] \item[(\ref{item:blasigma}b)] For $\zeta\in \Sigma$, $\mathbb{P}^*_\zeta{\restriction} C_\zeta$ is a complete subforcing of $\mathbb{P}^*$. \\ We will see in Lemma~\ref{smallrestr} how to achieve this. \item[(\ref{item:blaprod}b)] For $\zeta\in \Pi$ the FS product $\mathbb{Q}_\zeta$ is ccc. \\ (In our case this will be trivial, as all factors $\dot Q$ will be Knaster). \end{enumerate} \begin{proof} \textbf{Case $\bm{\xi=\pi_1}$:} $\overline A=\bigcup\{B_\delta:\, B_\delta\cap A\neq \emptyset\}$. So $|\overline A|=\aleph_1\times |A|<\mu$. 
\textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Sigma}$:} (a): Note that $\Vdash\hat h^*(\mathbb{Q}_\zeta)= \hat h^*(\mathbb{S}_\zeta^{V^{\mathbb{P}^*_\xi{\restriction} C_\zeta}})=\mathbb{S}_\zeta^{V^{\hat h^*(\mathbb{P}^*_\xi{\restriction} C_\zeta)}}$ (as $\mathbb{S}_\zeta$ only uses parameters from the ground model), which is $\mathbb{Q}_\zeta$ by induction (as we assume that $C_\zeta$ is closed). So we can extend $\hat h$. (b): $\xi^+=\zeta^+\cup\{\alpha\}$, so if $C\subseteq \xi^+$ is closed (and not already subset of $\zeta^+$), then $C=C'\cup \{\zeta\}$ with $C'$ symmetric, and $(p,q) \in\mathbb{P}^*_\xi{\restriction} C$ means that $q$ is a nice $\mathbb{P}^*{\restriction} C'$-name for an element of $\mathbb{Q}_\zeta$. Then $\hat h^*(q)$ is a nice $\hat h[\mathbb{P}^*{\restriction} C']$-name for an element of $\hat h^*(\mathbb{Q}_\zeta)$, which is by induction a nice $\mathbb{P}^*{\restriction} C'$-name for an element of $\mathbb{Q}_\zeta$, i.e., in $\mathbb{P}^*_\xi{\restriction} C$. This shows that $\hat h[\mathbb{P}^*_\xi{\restriction} C]\subseteq \mathbb{P}^*_\xi{\restriction} C$ (and as this is also true for the inverse of $h$, we get equality). (c): If $A\subseteq \xi^+$ is of size ${<}\mu$, then $\overline A\supseteq \overline{(A\cap \zeta^+)}\cup \{\zeta\}$ has size ${<}\mu$. \textbf{Case $\bm{\xi=\zeta+1}$ with $\bm{\zeta\in \Pi}$:} (a): $\mathbb{Q}_\zeta$ is a $\mathbb{P}_\zeta$-name for the self-indexed FS product of the (evaluated) set $\Xi_\zeta=\{\dot Q:\, \dot Q\in \Xi_\zeta\}$, so $\hat h^*(\mathbb{Q}_\zeta)$ is the $\mathbb{P}_\zeta$-name for the self-indexed FS product of the (evaluated) set $\{\hat h^*(\dot Q):\, \dot Q\in \Xi_\zeta\}$. But as the ground model set of names $\{\hat h^*(\dot Q):\, \dot Q\in \Xi_\zeta\}$ is identical to the ground model set of names $\Xi_\zeta$, their evaluations are identical as well. In other words, $\hat h^*(\mathbb{Q}_\zeta)=\mathbb{Q}_\zeta$, and $h$ can be extended to $\xi$. 
(b): Assume that $C\subseteq \xi^+=\zeta^+\cup\{\zeta\}\cup({\zeta}\times \Xi_\zeta)$ is symmetric, and that $(p,q)\in \mathbb{P}^*_\xi{\restriction} C$ (but to avoid the trivial case, not in $\mathbb{P}^*_\zeta$). This means that $p\in \mathbb{P}^*_\zeta{\restriction} C'$ for $C'=C\cap \zeta^+$, and determines $\dom(q)=\{\dot Q_1,\dots , \dot Q_n\}$, such that all $(\zeta, \dot Q_i)$ are in $C$ (and each $q(\dot Q_i)$ is a nice name). Then $\hat h(p)\in \mathbb{P}^*_\zeta{\restriction} C'$ by induction, determines $\dom(h^*(q))=\{\hat h^*(\dot Q_1),\dots , \hat h^*(\dot Q_n)\}$ (and each $h^*(q)(\dot Q_i)$ is a nice name). Accordingly $\hat h((p,q))\in \mathbb{P}^*_\xi{\restriction} C$ as required. (c): For $h\in\Hgroup^*$, set $\supp(h)=\bigcup\{B_\delta:\, h{\restriction} B_\delta\neq \textrm{Id}\}$. Set \[ A^*=\bigcup\{ H^*_{\dot Q}\cap\pi_1:\, (\zeta,\dot Q)\in A\}. \] $H^*_{\dot Q}$ has size ${<}\lambda \le \mu$ by~\eqref{eq:unclear1}, and therefore also the set $A^*$ has size ${<}\mu$. It is clear that $\overline A\subseteq \overline{(A\cap \zeta^+)}\cup \{\zeta\} \cup X$ for \[ X:=\{(\zeta, \hat h^*(\dot Q)):\, (\zeta,\dot Q)\in A, h\in\Hgroup^*\}, \] and we claim that we get the same set if we replace $\Hgroup^*$ with \[ \Hgroup':=\{g\in \Hgroup^*:\, \supp(g)\subseteq A^*\} \] As $\Hgroup'$ has size ${<}\mu$, we get then $|\overline A|<\mu$, as required. Note that for $f\in \Hgroup^*$ and $(\zeta,\dot Q)\in A$, by~\eqref{eq:unclear} $\supp(f)\cap A^*=\emptyset$ implies $\hat f^*(\dot Q)=\dot Q$. And for $h\in \Hgroup^*$, there is a $g\in \Hgroup'$ such that $f:= g^{-1}\circ h$ satisfies $\supp(f)\cap A^*=0$. (Basically, $g=h{\restriction} A^*$.) So $\hat f^*(\dot Q)=\dot Q$, which implies \[ \hat g^*(\dot Q)=\hat g^*(\hat f^*(\dot Q))=\hat h^*(\dot Q), \] as required. \textbf{Case $\bm{\xi}$ limit:} If $\cof(\xi)\ge \mu$, then $A\subseteq \zeta^+$ for some $\zeta<\xi$, so $|\overline A|<\mu$ by induction. 
Otherwise, $|\overline A|=\bigcup_{\zeta\in I}\overline{A\cap \zeta^+}$, so again $|\overline A|<\mu$. \end{proof} \end{comment} \begin{lemma}\label{lem:allthefactors} To satisfy assumption (\ref{item:blaprod}) of Lemma~\ref{lem:kjgeqwtr} for $\xi\in \Pi$, the following is sufficient (assuming that (I) and (II) are satisfied for all $\zeta<\xi$): \begin{enumerate}[(a)] \item[(\ref{item:blaprod}')] For some formula $\varphi(x,y)$ using only parameters from the ground model and some $\kappa_\xi\leq\lambda$, $\Xi_\xi$ is the set of all nice $\mathbb{P}^*_\xi$-names $\dot Q$ for ${<}\kappa_\xi$-sized forcings consisting of reals such that $\Vdash_{\mathbb{P}^-_\xi} \varphi(\dot Q,\xi)$. \end{enumerate} \end{lemma} \begin{proof} By the assumption and Lemma~\ref{lem:kjgeqwtr}, $\mathbb{P}_\xi$ is symmetric and $\hat h_\xi[\mathbb{P}_\xi^*]=\mathbb{P}_\xi^*$ for any $h\in\Hgroup^*$ (because $\mathbb{P}_\xi^*=\mathbb{P}^*{\upharpoonright}\xi^+$). Let $\dot Q$ be such a nice $\mathbb{P}^*_\xi$-name. Then $\hat h^*_\xi(\dot Q)$ is also a nice $\mathbb{P}^*_\xi$-name, and $\Vdash_{\mathbb{P}^*_\xi} \varphi(\hat h^*_\xi(\dot Q),\xi)$ as $\varphi$ only uses ground model parameters (i.e., standard names). Hence $\hat h^*_\xi(\dot Q)\in\Xi_\xi$, i.e., $\Xi_\xi$ is closed. \end{proof} \begin{comment} \begin{lemma Let ${\mathbb{P}}$ be a S$\lambda$-s iteration, $\lambda$ uncountable regular. We can find a ${}'$ equivalent to ${}$ such that for any $p\in\mathbb{P}^{{}'}$ there is a $q\le_{\mathbb{P}'} p$ with \begin{enumerate}[(a)] \item $|H^*(q)|<\lambda$. \item $H^*(q)\supseteq H(q)$. \item For any $\xi\in\Sigma\cap\supp p$, $q(\xi)$ is a nice $\mathbb{P}^{\prime-}_\xi$-name of a member of $\mathbb{S}_\xi$ satisfying $|H^*(q(\xi))|<\lambda$. \item For any $\xi\in \Pi\cap\supp q$, $q{\upharpoonright}\xi\Vdash``\dom q(\xi)=d^q_\xi$" for some finite $d^q_\xi\subseteq\Xi_\xi$, and for all $\dot Q\in d^q_\xi$, the name $q(\xi,\dot{Q})$ is a nice $\mathbb{P}^-_\xi$-name of (a code for) a member of $\dot{Q}$, again satisfying $|H^*(q(\xi,\dot Q))|<\lambda$.
\end{enumerate} In particular, $|H(q)\cap \pi_1|<\lambda$; i.e., ${}'$ has $\lambda$-small history. \end{lemma} The only modification required (to get ${}'$ from ${}$) is to recursively replace the ``arbitrary'' names $\dot Q\in \Xi_\xi$ with equivalent, suitably ``nice'' names; otherwise $H^*(\dot Q)$, and therefore $H^*(p)$ for all $p$ ``depending on $\dot Q$'' will be too big. (Note that $H^*$ of two equivalent names can be very different.) \begin{proof} By induction on $\xi\ge \pi_1$: In case $\xi=\pi_1$, $H^*(p)=\supp p=H(p)$, which is finite, thus (a) and (b) are satisfied. The limit step is trivial as there are no additional statements to be shown. So assume that $\xi=\zeta+1$ is a successor. Case I: $\zeta\in \Pi$. By induction, $\mathbb{P}'_\zeta$ is canonically equivalent to $\mathbb{P}_\zeta$, and we have to define the $\mathbb{P}'_\zeta$-name $\dot Q'$ which is equivalent to $\dot Q$ (for each $\dot Q\in \Xi_\zeta$). We know that $\dot Q$ is forced to be a poset of size ${<}\lambda$, so (by ccc) it has size at most $\mu<\lambda$, i.e., there is a sequence $(\tau_i)_{i\in \mu}$ of names of reals such that $\dot Q$ is forced to be the set of (evaluations as $H(\omega_1)$-elements) of the $\tau_i$. Find an equivalent nice name $\tau'_i$ for each $\tau_i$, such that each condition $p$ appearing in $\tau'_i$ satisfies $|H^*(p)|<\lambda$ (this is possible by induction). So $|H^*(\tau'_i)|<\lambda$. Let $\dot Q'$ be the poset consisting of the $\tau'_i$, so $|H^*(\dot Q')|<\lambda$. (Actually we also have to deal with the order, in the same way.) So we are done defining $\Xi'_\xi$, and need to show (a), (b) and (d) for Case I. (d) is clear: We pick an equivalent nice name for $q(\xi,\dot Q)$ such that each condition in it is in the dense set satisfying (a). For (a) and (b), assume that $p\in P_\xi$ with $\zeta\in\dom(p)$. First find $q\leq p{\restriction}\zeta$ satisfying (a)--(d), and find $q(\zeta)$ equivalent to $p(\zeta)$ satisfying (d). 
Then (a) and (b) follow by induction and definition of $H$ and $H^*$. Case II: $\zeta\in \Sigma$ is similar (but simpler). It remains to show (a) (for both cases I and II). We can assume that $\zeta\in\dom(p)$, and (by (d) or (e)) that $|H^*()|<\lambda$. , so it depends on countably many conditions and, by induction hypothesis, $|H^*(p(\zeta))|<\lambda$; if $\zeta\in \Pi$ then $H^*(p)$ is \[H^*(p{\upharpoonright}\zeta)\cup\{\zeta\}\cup(\{\zeta\}\times\dom p(\zeta)) \cup\bigcup_{\dot{Q}\in\dom p(\zeta)}(H^*(\dot{Q})\cup H^*(p(\zeta,\dot{Q}))).\] Since any $\mathbb{P}_\zeta$-name of a member of $H_{\aleph_1}$ depends on countably many conditions and each $\dot{Q}\in\Xi_\zeta$ depends on ${<}\lambda$-many $\mathbb{P}_\zeta$-names of members of $H_{\aleph_1}$, $|H^*(\dot{Q})|<\lambda$ and $|H^*(p(\zeta,\dot{Q}))|<\lambda$ for any $\dot{Q}\in\dom p(\zeta)$. Hence, $|H^*(p)|<\lambda$. On the other hand, since $p(\zeta)$ is determined by $\dot{Q}$ and $p(\zeta,\dot{Q})$ for all $\dot{Q}\in\dom p(\zeta)$, we deduce (b). \end{proof} \begin{lemma Let ${\mathbb{P}}$ be a S$\lambda$-s iteration with $\lambda$ regular. Then, for any $p\in\mathbb{P}^*_\pi$: \begin{enumerate}[(a)] \item $|H^*(p)|<\lambda$. \item $H(p)=H^*(p)\cap\pi$. \item $H(\tau)=H^*(\tau)\cap\pi$ for any $\mathbb{P}^*_\pi$-name $\tau$. \end{enumerate} In particular, ${\mathbb{P}}$ has $\lambda$-small history. \end{lemma} \begin{proof} We prove (a), (b) and (c) simultaneously for all $p\in\mathbb{P}^*_\xi$ by recursion on $\pi_1\leq\xi\leq\pi$. It is clear that (c) follows from (b). In the case $\xi=\pi_1$, $H^*(p)=\supp p=H(p)$, which is finite. For the successor step $\xi=\zeta+1$, assume $\zeta\in\supp p$. 
If $\zeta\in\Sigma$ then $p(\zeta)$ is a nice $\mathbb{P}^-_\zeta$-name of a real, so it depends on countably many conditions and, by induction hypothesis, $|H^*(p(\zeta))|<\lambda$; if $\zeta\in \Pi$ then $H^*(p)$ is \[H^*(p{\upharpoonright}\zeta)\cup\{\zeta\}\cup(\{\zeta\}\times\dom p(\zeta)) \cup\bigcup_{\dot{Q}\in\dom p(\zeta)}(H^*(\dot{Q})\cup H^*(p(\zeta,\dot{Q}))).\] Since any $\mathbb{P}^-_\zeta$-name of a real depends on countably many conditions and each $\dot{Q}\in\Xi_\zeta$ depends on ${<}\lambda$-many $\mathbb{P}^-_\zeta$-names of reals, $|H^*(\dot{Q})|<\lambda$ and $|H^*(p(\zeta,\dot{Q}))|<\lambda$ for any $\dot{Q}\in\dom p(\zeta)$. Hence, $|H^*(p)|<\lambda$. On the other hand, since by (S4)(iv) $p(\zeta)$ is determined by $\dot{Q}$ and $p(\zeta,\dot{Q})$ for all $\dot{Q}\in\dom p(\xi)$, we deduce (b). The limit step is immediate. \end{proof} The $\lambda$-small dense subset $\mathbb{P}^*$ constructed in Lemma~\ref{smallH} allows us to define the following subforcings (note that for $p\in \mathbb{P} \smallsetminus \mathbb{P}^*$ the history $H^*(p)$ does not make much sense): \begin{definition}\label{def:supprestr} For any $A\subseteq\pi^+$, define $\mathbb{P}^*{\upharpoonright}A:=\{p\in\mathbb{P}^*_\pi: H^*(p)\subseteq A\}$. \end{definition} The challenge is to construct $C_\xi$ such that $\mathbb{P}^*{\upharpoonright}C_\xi\lessdot\mathbb{P}^*_\xi$; as Note that in general, $\mathbb{P}^*{\upharpoonright}A$ is not a complete subposet of $\mathbb{P}^*$. \end{comment} As mentioned, we need closed $C\subseteq \pi^+$ that define complete subforcings. For this we use the following result: \begin{lemma}\label{smallrestr} Let $\mathbb{P}$ be as in Lemma~\ref{lem:kjgeqwtr}, and let $\mu>\lambda$ be regular and $\aleph_1$-inaccessible. Then: \begin{enumerate}[(a)] \item For $A\subseteq\pi^+$ of size ${<}\mu$ there is some closed $C\supseteq A$ of size ${<}\mu$, such that $\mathbb{P}^*{\upharpoonright}C\lessdot\mathbb{P}^*$.
\item The closed sets $C\in [\pi^+]^{{<}\mu}$ that satisfy $\mathbb{P}^*{\upharpoonright}C\lessdot\mathbb{P}^*$ form a $\lambda$-club. \end{enumerate} \end{lemma} \begin{proof} \begin{comment} We first show (a) by induction on $\xi\in[\pi_1,\pi]$. For $\xi=\pi_1$, it is clear that $|\mathbb{P}^*{\upharpoonright}A|=\max\{|\omega\times A|,1\}<\mu$. For limit $\xi>\pi_1$, $\mathbb{P}^*{\upharpoonright}A=\bigcup_{\eta\in c}\mathbb{P}^*{\upharpoonright}(A\cap \eta^+)$ where $c$ is a cofinal subset of $\xi$ of size $\cf(\xi)$. If $\cf(\xi)<\mu$ then $\mathbb{P}^*{\upharpoonright}A$ has ${<}\mu$ because it is a union of ${<}\mu$ many sets of size ${<}\mu$; if $\cf(\xi)\geq\mu$ then $A\subseteq\eta^+$ for some $\eta<\xi$, so $|\mathbb{P}^*{\upharpoonright}A|<\mu$ by induction hypothesis. For the inductive step, assume $A\subseteq(\xi+1)^+$ and $A\nsubseteq\xi^+$ (the non-trivial case). Put $A_0:=A\cap\xi^+$. By induction hypothesis, $|\mathbb{P}^*{\upharpoonright}A_0|<\mu$ and, since $\mu$ is $\aleph_1$-inaccessible, there are at most $|\mathbb{P}^*{\upharpoonright}A_0|^{\aleph_0}<\mu$ many nice $\mathbb{P}^*{\upharpoonright}A_0$-names of reals. Let $p\in\mathbb{P}^*{\upharpoonright}A$. If $\xi\in \Sigma$ then $p(\xi)$ is a nice $\mathbb{P}^*{\upharpoonright}A_0$-name of a real; if $\xi\in \Pi$, then $p(\xi)$ is determined by a finite partial function from $(\{\xi\}\times\Xi_\xi)\cap A$ into the set of nice $\mathbb{P}^*{\upharpoonright}A_0$-names of reals, and there are ${<}\mu$-many such finite partial functions. Hence, $|\mathbb{P}^*{\upharpoonright}A|<\mu$. We now show (b). \end{comment} (a) Using Corollary~\ref{cor:small}(\ref{item:quaxb}) we can fix a function $f:(\mathbb{P}^*_\pi)^2\to [\pi^+]^{<\lambda}$ such that if $p$ and $q$ are compatible then there is some $r\leq p,q$ in $\mathbb{P}^*{\restriction} f(p,q)$. 
Also, we can fix a function $g:(\mathbb{P}^*_\pi)^{{\leq}\omega}\to[\pi^+]^{<\lambda}$ such that, whenever $\bar{p}=\langle p_n:n<w\rangle$ ($w\leq\omega$) is a non-empty antichain but \textbf{not} maximal in $\mathbb{P}_\pi$, then there is some $q\in \mathbb{P}^*{\restriction} g(\bar{p})$ with $q\perp p_n$ for any $n<w$. By recursion on $\alpha<\lambda$, define \[A'_\alpha:=A\cup A_{{<}\alpha}\cup\bigcup_{p,q\in\mathbb{P}^*{\upharpoonright}A_{{<}\alpha}}f(p,q)\cup\bigcup_{\bar{p}\in(\mathbb{P}^*{\upharpoonright}A_{{<}\alpha})^{{\leq}\omega}} g(\bar{p})\] where $A_{{<}\alpha}:=\bigcup_{\xi<\alpha}A_\xi$; and let $A_\alpha:=\overline{A'_\alpha}$ be the closure (see Lemma~\ref{lem:closure}). So $|A_\alpha|<\mu$, and we can set $C:=\bigcup_{\alpha<\lambda}A_\alpha$, which is closed by Lemma~\ref{lem:closure}(\ref{item:sjlgclosed}) and as desired, because any countable sequence in $\mathbb{P}^*{\upharpoonright}C$ is already a countable sequence in $\mathbb{P}^*{\upharpoonright}A_\alpha$ for some $\alpha<\lambda$ (as $\lambda$ is regular uncountable). (b) Let $\langle B_i : i\in\lambda\rangle$ be an increasing sequence of closed subsets of $\pi^+$ such that $\mathbb{P}^*{\upharpoonright}B_i$ is a complete subforcing of $\mathbb{P}^*$. Set $B:=\bigcup_{i\in\lambda}B_i$. According to Lemma~\ref{lem:closure}(\ref{item:sjlgclosed}), $B$ is closed. Assume that $A\subseteq \mathbb{P}^*{\upharpoonright}B$ is a maximal antichain. Any two distinct $p,q\in A$ lie in $\mathbb{P}^*{\upharpoonright}B_i$ for some $i$ and are incompatible there; since $\mathbb{P}^*{\upharpoonright}B_i\lessdot\mathbb{P}^*$, they are incompatible in $\mathbb{P}^*$. Due to ccc, $A$ is countable, and by Corollary~\ref{cor:small}(\ref{item:quaxb}) there is an $i<\lambda$ such that $A\subseteq \mathbb{P}^*{\upharpoonright}B_i$. Moreover, $A$ is maximal in $\mathbb{P}^*{\upharpoonright}B_i$: any $r\in\mathbb{P}^*{\upharpoonright}B_i$ is compatible with some element of $A$ in $\mathbb{P}^*{\upharpoonright}B$, hence in $\mathbb{P}^*$, and thus, by completeness, in $\mathbb{P}^*{\upharpoonright}B_i$. Therefore $A$ is maximal in $\mathbb{P}^*$. \end{proof} \begin{corollary}\label{smallrestr2} Under the same hypotheses as in Lemma~\ref{smallrestr}, if $P\subseteq\mathbb{P}^*_\pi$ has size ${<}\mu$, then there is some $\mathbb{P}^-\lessdot\mathbb{P}^*_\pi$ of size~${<}\mu$ such that $P\subseteq\mathbb{P}^-$.
\end{corollary} \begin{proof} Apply Lemma~\ref{smallrestr} to $A:=\bigcup_{p\in P}H^*(p)$. \end{proof} We now summarize what we already know about the construction that we are going to perform in the next section: \begin{corollary}\label{cor:summary} Let $\lambda$ be regular uncountable and assume $\lambda\le |\pi|$, $|\pi|^{<\lambda}=|\pi|$, and that $\pi\smallsetminus \pi_1$ is partitioned into $\Sigma$ and $\Pi$. We inductively construct a (tidy) S$\lambda$s iteration $\mathbb{P}$ as follows: \begin{itemize} \item[$(\Sigma)$] At step $\zeta\in \Sigma$, we pick a (definition of a) Suslin-ccc forcing $\mathbb{S}_\zeta$, some $\kappa_\zeta>\lambda$, and some $C_\zeta$ in the $\lambda$-club subset of $[\zeta^+]^{<\kappa_\zeta}$ given by Lemma~\ref{smallrestr}(b), and set $\mathbb{Q}_\zeta=\mathbb{S}_\zeta^{V^{\mathbb{P}^*{\restriction} C_\zeta}}$. (So $\mathbb{P}^-_\zeta=\mathbb{P}^*{\upharpoonright}C_\zeta$.) \item[$(\Pi)$] Fix a formula $\varphi(x,y)$ with parameters in the ground model. At step $\zeta\in\Pi$, pick some regular uncountable $\kappa_\zeta\le\lambda$ and let $\dot{\mathbb{Q}}_\zeta$ be (a suitable name for) the FS product of all Knaster real-number-posets of size ${<}\kappa_\zeta$ satisfying $\varphi(x,\zeta)$. \end{itemize} Then (inductively) $\mathbb{P}_\xi$ is a well-defined ccc forcing for $\pi_1\leq\xi\leq\pi$, and \begin{enumerate}[(a)] \item $\mathsf{LCU}_{\textnormal{sp}}(\mathbb{P}_\xi,\mu)$ holds for any regular $\lambda\le\mu\le\pi_0$. \item $\mathbb{P}_\xi$ forces that the continuum has size $|\pi|$. \end{enumerate} \end{corollary} \begin{proof} Each $\dot{\mathbb{Q}}_\zeta$ is forced to be ccc (by either absoluteness or the Knaster assumption), so we get a valid iteration (and we assume that we choose the names for the iterands such that we get a tidy S$\lambda$s-iteration). $\mathsf{LCU}$ follows from Lemmas~\ref{lem:allthefactors} and \ref{lem:kjgeqwtr}(\ref{item:symmetric}), and Theorem~\ref{PresSplit}.
For the size of the continuum, we use Corollary~\ref{cor:small}(\ref{item:quaxd}) to show by induction that $|\mathbb{P}^*_\xi|\le|\pi|$: Assume this is already the case for $\mathbb{P}^*_\zeta$; for $\zeta\in \Pi$, $\Xi_\zeta$ consists of nice $\mathbb{P}^*_\zeta$-names for ${<}\lambda$-sized real-number-forcings, and there are only $|\mathbb{P}^*_\zeta|^{<\lambda}\le|\pi|$ many such nice names. \end{proof} \section{The forcing construction for the left hand side}\label{sec:left} In this section, we prove the first step of the main theorem:\ Theorem~\ref{thm:step3}, which gives the independence results for the left hand side. After the work we have done in the previous sections, this is basically a simple variant of the construction in~\cite{GKS}. \subsection{Preliminaries.} \begin{notation}\label{not:RLCU} Denote \[(\mathfrak{b}_i,\mathfrak{d}_i):=\left\{\begin{array}{ll} ({\ensuremath{\add(\Null)}},{\ensuremath{\cof(\Null)}}) & \text{if $i=1$,}\\ ({\ensuremath{\cov(\Null)}},{\ensuremath{\non(\Null)}}) & \text{if $i=2$,}\\ (\mathfrak{b},\mathfrak{d}) & \text{if $i=3$,}\\ ({\ensuremath{\non(\Meager)}},{\ensuremath{\cov(\Meager)}}) & \text{if $i=4$,}\\ (\mathfrak{s},\mathfrak{r}) & \text{if $i=\mathrm{sp}$.} \end{array}\right.\] As in~\cite{GKS,GKMS2}, for $i\in\{1,2,3,4,{\mathrm{sp}}\}$ consider Blass-uniform relational systems $\mathbf{R}^{\mathsf{LCU}}_i$ and $\mathbf{R}^{\mathsf{COB}}_i$ such that, following Example~\ref{exp:blassunif}, $\mathbf{R}^\mathsf{LCU}_{\mathrm{sp}}=\mathbf{R}^\mathsf{COB}_{\mathrm{sp}}=\mathbf{R}_{\mathrm{sp}}$ and ZFC proves\footnote{In more detail, $\mathbf{R}^{\mathsf{LCU}}_i=\mathbf{R}^{\mathsf{COB}}_i$ except when $i=2$.
If we follow~\cite{diegoetal} we can also consider $\mathbf{R}^{\mathsf{LCU}}_2=\mathbf{R}^{\mathsf{COB}}_2$.} \[\mathfrak{b}(\mathbf{R}^{\mathsf{COB}}_i)\leq\mathfrak{b}_i\leq\mathfrak{b}(\mathbf{R}^{\mathsf{LCU}}_i)\text{\ and }\mathfrak{d}(\mathbf{R}^{\mathsf{LCU}}_i)\leq\mathfrak{d}_i\leq\mathfrak{d}(\mathbf{R}^{\mathsf{COB}}_i).\] We abbreviate $\mathsf{COB}_{\mathbf{R}^\mathsf{COB}_i}$ by $\mathsf{COB}_i$, and $\mathsf{LCU}_{\mathbf{R}^\mathsf{LCU}_i}$ by $\mathsf{LCU}_i$. \end{notation} For completeness, we review the posets we use in our construction. \begin{definition} Define the following forcing notions (where the forcing in item ($i$) is designed to increase $\mathfrak{b}_i$): \begin{enumerate}[(1)] \item \emph{Amoeba forcing $\mathbb{A}$} is the poset whose conditions are subtrees $T\subseteq 2^{<\omega}$ without maximal nodes such that $[T]$, the set of branches of $T$, has measure ${<}\frac{1}{2}$ (with respect to the Lebesgue measure of $2^\omega$). The order is $\supseteq$. \item \emph{Random forcing $\mathbb{B}$} is the poset whose conditions are subtrees $T\subseteq 2^{<\omega}$ without maximal nodes such that $[T]$ has positive measure. The order is $\subseteq$. \item \emph{Hechler forcing} is $\mathbb{D}:=\omega^{<\omega}\times\omega^\omega$ ordered by $(t,y)\leq(s,x)$ iff $s\subseteq t$, $x\leq y$ (pointwise) and $t(i)\geq x(i)$ for all $i\in|t|\smallsetminus|s|$. \item \emph{Eventually different forcing} is \[\mathbb{E}:=\omega^{<\omega}\times\bigcup_{n<\omega}\big([\omega]^{\leq n}\big)^\omega\] ordered by $(t,\psi)\leq(s,\varphi)$ iff $s\subseteq t$, $\forall i<\omega(\varphi(i)\subseteq\psi(i))$ and $t(i)\notin\varphi(i)$ for all $i\in|t|\smallsetminus |s|$. \item[(sp)] Let $F$ be a base of a (free) filter on $\omega$.
\emph{Mathias-Prikry forcing on $F$} is $\mathbb{M}_F:=\{(s,x)\in[\omega]^{<\aleph_0}\times F: \max(s)<\min(x)\}$ (here $\max(\emptyset):=-1$) ordered by $(t,y)\leq(s,x)$ if $s\subseteq t$, $y\subseteq x$ and $t\smallsetminus s\subseteq x$. \end{enumerate} \end{definition} For each of the posets above it is easy to construct a 1-1 function from the poset into $\omega^\omega$. So, until the end of this section, the posets above are seen as subsets of $\omega^\omega$. Moreover, the posets (1)--(4) are Suslin ccc, and they can be identified with Borel subsets of $\omega^\omega$ (and the order is Borel as well). In the proof of Goal~\ref{mainstepI} we deal with special restrictions of the posets (1)--(4) to sets of reals of the following form. \begin{definition}\label{def:elementary} Let $\lambda\geq\aleph_1$ be a cardinal. Say that $E\subseteq \omega^\omega$ is \emph{$\lambda$-elementary} if $E=\omega^\omega\cap N$ for some regular $\chi\geq(2^{\aleph_0})^+$ and some $N\preceq \mathcal H_\chi$ of size ${<}\lambda$, where $\mathcal H_\chi$ denotes the collection of sets hereditarily of size ${<}\chi$. \end{definition} We look at posets of the form $\mathbb{S}\cap E$ where $\mathbb{S}$ is a poset as in (1)--(4) and $E\subseteq\omega^\omega$ is $\lambda$-elementary. Note that, whenever $\chi\geq(2^{\aleph_0})^+$, $N\preceq \mathcal H_\chi$ and $E=\omega^\omega\cap N$, we have $\mathbb{S}\cap E=\mathbb{S}^N$. Therefore: \begin{fact}\label{fc:Suslinrestr} Let $E\subseteq\omega^\omega$ be $\lambda$-elementary for some cardinal $\lambda\geq\aleph_1$. Then: \begin{enumerate}[(1)] \item The poset $\mathbb{A}\cap E$ adds a (code of a) Borel measure zero set that contains all Borel null sets with Borel code in $E$. \item The generic real added by $\mathbb{B}\cap E$ avoids all Borel null sets with Borel code in $E$. \item The generic real added by $\mathbb{D}\cap E$ dominates all the functions in $E$. \item The generic real added by $\mathbb{E}\cap E$ is eventually different from all the functions in $E$.
\end{enumerate} \end{fact} We now show how to modify the forcing construction in~\cite[\S4 \& \S5]{GKMS1} to include $\mathsf{LCU}_{\mathrm{sp}}$ and $\mathsf{COB}_{\mathrm{sp}}$, by performing a construction according to the previous section, in particular to Corollary~\ref{cor:summary}. We will assume the following: \begin{assumption}\label{asm:bla} $k_0\in[2,\omega]$; $\lambda_\mathfrak{m}\leq\lambda_1\leq\lambda_2\leq\lambda_3<\lambda_4$ are uncountable regular cardinals, $\lambda_5\geq\lambda_4$ is a cardinal, $\lambda_3=\chi^+$, $\lambda_{\mathfrak{m}}\leq\lambda_{\mathrm{sp}}\leq\lambda_3$ regular, such that $\chi^{<{\chi}}=\chi\geq\aleph_1$, $\lambda_5^{<\lambda_4}=\lambda_5$, and $\lambda_i$ is $\aleph_1$-inaccessible whenever $\lambda_i>\lambda_{\mathrm{sp}}$ and $1\leq i\leq 4$. \end{assumption} Our intention is to show the following: \begin{goal}\label{mainstepI} There is a ccc poset $\mathbb{P}$ of size $\lambda_5$ such that, for any $i\in\{1,2,3,4,{\mathrm{sp}}\}$, \begin{enumerate}[(a)] \item $\mathsf{LCU}_i(\mathbb{P},\theta)$ holds for any regular $\lambda_i\leq\theta\leq\lambda_5$. \item There is some directed $S_i$ with $\cp(S_i)=\lambda_i$ and $|S_i|=\lambda_5$ such that $\mathsf{COB}_i(\mathbb{P},S_i)$ holds. \item $\mathbb{P}$ forces $\mathfrak{p}=\mathfrak{s}=\lambda_{\mathrm{sp}}$ and $\mathfrak{c}=\lambda_5$. \item $\mathbb{P}$ forces $\mathfrak{m}_k=\aleph_1$ for any $k\in[1,k_0)$, and $\mathfrak{m}_k=\lambda_\mathfrak{m}$ for any $k\in[k_0,\omega]$.\footnote{Note that $\lambda_\mathfrak{m}=\aleph_1$ is allowed.} \end{enumerate} \end{goal} The way to achieve this is parallel to \cite[\S1]{GKS}: As a first step we give the ``basic construction'' in Lemma~\ref{lem:step1}, using ``simple bookkeeping'' (which is described by parameters $\bar C=\langle C_\alpha\rangle_{\alpha\in\Sigma}$ in the ground model). This gives us everything apart from $\mathsf{LCU}_3$ (i.e., we do not claim that $\mathfrak{b}$ remains small).
This first step contains the only new aspect of the construction: As we use a variant of the construction according to the previous section, we get $\mathsf{LCU}_\mathrm{sp}$. The next steps are just as in \cite[\S1.3 \& \S1.4]{GKS}. In Lemma~\ref{lem:step2} we remark: Assuming $2^\chi\geq\lambda_5$ (in addition to Assumption~\ref{asm:bla}), we can choose the bookkeeping parameters $\bar C$ in such a way that the resulting forcing satisfies $\mathsf{LCU}_3$ and thus all of Goal~\ref{mainstepI}. And finally we show Theorem~\ref{thm:step3}: without the assumption $2^\chi\geq\lambda_5$ (while assuming Assumption~\ref{asm:bla}) we can also get all of Goal~\ref{mainstepI}. Why do we need to suppress the assumption $2^\chi\geq\lambda_5$ from Lemma~\ref{lem:step2}? Because we can then additionally control the right-hand side characteristics in Section~\ref{sec:15}, using the method of elementary submodels from~\cite{GKMS2}. \medskip In the following proof, we deal with the case $2\leq k_0<\omega$ and $\lambda_\mathfrak{m}>\aleph_1$. In Section~\ref{ss:remaining} we mention the necessary changes for the remaining cases. \medskip \subsection{The basic forcing construction.} To each $1\leq i\leq 4$ associate a Suslin ccc poset as follows: $\mathbb{S}_1=\mathbb{A}$, $\mathbb{S}_2=\mathbb{B}$, $\mathbb{S}_3=\mathbb{D}$, and $\mathbb{S}_4=\mathbb{E}$. Set $\lambda:=\lambda_{\mathrm{sp}}$. Let $i^*$ be the minimal $i$ such that $\lambda_i>\lambda$. Note that $1\le i^*\le 4$. Set $I_1:=\{i^*,\dots, 4\}$ and $I_0:=\{\mlabel,\plabel\}\cup (\{1,2,3,4\}\smallsetminus I_1)$. Set $\pi_0:=\lambda_5$ (so $\pi_1=\omega_1\cdot\lambda_5$), and $\pi:=\pi_1+\lambda_5+\lambda_5$. Partition the final $\lambda_5$-interval of $\pi$, i.e. $\pi\smallsetminus (\pi_1+\lambda_5)$, into sets $\Pi_i$ ($i\in I_0$) and $\Sigma_i$ ($i\in I_1$), each of size $\lambda_5$.
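For orientation, the index layout just described can be summarized in a single display (this merely restates the definitions above): \[\pi \;=\; \underbrace{\pi_1}_{\text{initial part}} \;+\; \underbrace{\lambda_5}_{\text{Cohen stages }[\pi_1,\,\pi_1+\lambda_5)} \;+\; \underbrace{\lambda_5}_{\text{tail, partitioned into the }\Pi_i\ (i\in I_0)\text{ and }\Sigma_i\ (i\in I_1)}.\]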
We construct a tidy S$\lambda$-s iteration, using $\Sigma:=\{\pi_1+\alpha:\, \alpha<\lambda_5\}\cup \bigcup_{i\in I_1} \Sigma_i$ and $\Pi:=\bigcup_{i\in I_0} \Pi_i$. We will satisfy the requirements of Corollary~\ref{cor:summary}, so in particular inductively we will have $|\mathbb{P}^*_\xi|=\lambda_5$ (and so ${\mathbb{P}_\xi}\Vdash\mathfrak{c}=\lambda_5$) for all $\xi$. \begin{comment} also determines , $\mathbb{P}^*_\xi$ and $\mathbb{P}^-_\xi$ by recursion on $\xi\in[\pi_1,\pi)$ (which also determines $\mathbb{P}=\mathbb{P}_\pi$, and $\mathbb{P}^*_\pi$, as the respective unions). We will have that $|\Xi_\zeta|\le \lambda_5$ for each $\zeta\in \Pi$, so according to Corollary~\ref{cor:small}(4), we get $|\mathbb{P}^*_\xi|=\lambda_5$ (and $\Vdash_{\mathbb{P}_\xi}\mathfrak{c}=\lambda_5$) for all $\xi$. We point out that $\mathbb{P}^-_\xi$ is defined in the step $\xi+1$ (and then used to define $\mathbb{Q}_\xi$). In the first step set $\mathbb{P}^*_{\pi_1}:=\mathbb{P}_{\pi_1}$. Limit steps $\xi$ are always determined, because we require $\mathbb{P}^*_\xi=\bigcup_{\zeta<\xi}\mathbb{P}^*_\zeta$ when $\xi\in(\pi_1,\pi]$ and the same for $\mathbb{P}_\xi$ (by Definitions~\ref{DefSs}(S6), and because $\mathbb{P}$ is a FS iteration). So assume that $\xi=\zeta+1\in (\pi_1,\pi)$ and that we have already defined $\mathbb{P}_\zeta$ and $\mathbb{P}^*_\zeta$. We now define a ${\mathbb{P}}$-closed set $C_\zeta \subseteq \zeta^+$ such that the following is complete subforcing of $\mathbb{P}_\zeta$: \begin{equation}\label{eq:wejhwet} \mathbb{P}^-_\zeta:=\mathbb{P}^*_\zeta{\upharpoonright}C_\zeta, \end{equation} we then define $\dot{\mathbb{Q}}_\zeta$ (which in turn defines $\mathbb{P}_{\zeta+1}:=\mathbb{P}_\zeta\ast\dot{\mathbb{Q}}_\zeta$) and finally define $\mathbb{P}^*_{\zeta+1}$. \end{comment} \begin{enumerate}[({I}1)] \item At stage $\zeta\in[\pi_1,\pi_1+\lambda_5)$ (in particular, $\zeta\in\Sigma$), we just add Cohen reals. 
More formally, to fit our framework, we set $\mathbb{S}_\zeta=\mathbb{C}=\omega^{<\omega}$ (Cohen forcing). Let $C_\zeta:=\emptyset$, which is closed and satisfies that $\mathbb{P}^-_\zeta:=\mathbb{P}^*_\zeta{\restriction} C_\zeta$ (i.e., the set containing only the empty condition) is a complete subforcing of $\mathbb{P}^*_\zeta$. And $\mathbb{S}_\zeta^{V^{\mathbb{P}^-_\zeta}}$ is Cohen forcing in the ground model, which is Cohen forcing in any extension by absoluteness. \item Assume $\zeta\in \Pi_i$ (for some $i\in I_0$). \begin{enumerate}[(i)] \item When $i=\mlabel$, let $\Xi_\zeta$ be the family of all nice $\mathbb{P}^*_\zeta$-names of real-number-posets of size ${<}\lambda_\mathfrak{m}$ that are forced (by $\mathbb{P}^*_\zeta$) to be $k_0$-Knaster. \item When $i=\plabel$, let $\Xi_\zeta$ be the family of all nice $\mathbb{P}^*_\zeta$-names of real-number-posets of size ${<}\lambda$ that are forced to be $\sigma$-centered. \item When $i\in\{1,2,3\}\cap I_0$, we consider $\Xi_\zeta$ as the family of nice $\mathbb{P}^*_\zeta$-names of all smaller-than-$\lambda_i$ versions of $\mathbb{S}_i$ in the $\mathbb{P}^*_\zeta$-extension, i.e., the forcings of the form \begin{equation*} Q=\mathbb{S}_i\cap E\text{ where $E$ is $\lambda_i$-elementary} \end{equation*} as in Definition~\ref{def:elementary}. Note that $\mathbb{S}_i$, and therefore also every variant $\mathbb{S}_i\cap E$, is $\sigma$-linked and therefore Knaster. \end{enumerate} \begin{comment} Let $\dot{\mathbb{Q}}_\zeta$ be a $\mathbb{P}^*_\zeta$-name of the FS product of all the members of $\Xi_\zeta$. Clearly $\mathbb{P}^*_\zeta$ forces that $\dot{\mathbb{Q}}$ is the FS product of $\mathbb{S}_i\cap E$ for every $\lambda_i$-elementary $E\subseteq\omega^\omega$. Define $p\in\mathbb{P}^*_{\zeta+1}$ iff $p$ satisfies (i)--(iv) of (S4) in Definition~\ref{DefSs}, while omitting (ii)--(iv) when $\zeta\notin\supp(p)$. It is clear that $\mathbb{P}^*_{\zeta+1}$ is dense in $\mathbb{P}_{\zeta+1}$.
\end{comment} \item If $\zeta\in \Sigma_i$ (for some $i\in I_1$, so $\lambda_i>\lambda$), we pick (by suitable bookkeeping) a $C_\zeta\subseteq\zeta^+$ as in Lemma~\ref{smallrestr}(b). I.e., $|C_\zeta| < \lambda_i$, $\mathbb{P}^-_\zeta:=\mathbb{P}^*{\upharpoonright}C_\zeta\lessdot\mathbb{P}^*_\zeta$, and we set $\mathbb{S}_\zeta:=\mathbb{S}_i^{V^{\mathbb{P}^-_\zeta}}$. (Here, suitable bookkeeping just means: For every $K\in [\pi^+]^{<\lambda_i}$ there is some index $\zeta\in\Sigma_i$ such that $C_\zeta\supseteq K$.) \begin{comment} For (I3), we declare $p\in\mathbb{P}^*_{\zeta+1}$ iff $p{\upharpoonright\zeta}\in\mathbb{P}^*_\zeta$ and, whenever $\zeta\in\supp(p)$, $p(\zeta)$ is a nice $\mathbb{P}^-_\zeta$-name of a real that is forced to be a member of $\mathbb{S}_i$. \end{comment} \end{enumerate} \begin{comment} For all cases (I1)--(I3), it is easy to check that $\mathbb{P}^*_{\xi}$ is a dense subset of $\mathbb{P}_{\xi}$ of size $\lambda_5$.\medskip \end{comment} We can now show that the construction does what we want, apart from keeping $\mathfrak{b}$ small. First let us note that sometimes it is more convenient to view $\mathbb{P}$ as a FS ccc iteration, where we first add the $\lambda_5$-many $\mathbb{G}_\mathbf{B}$ forcings (of size $\aleph_1$), then the $\lambda_5$-many Cohen reals, and then the rest of the iteration, where we interpret each FS product $\dot{\mathbb{Q}}_\zeta$ for $\zeta\in \Pi_i$ as a FS iteration with index set $\lambda_5=|\Xi_\zeta|$.
So all in all we can represent $\mathbb{P}$ as a FS iteration \begin{equation}\label{eq:noprod} \langle P'_\alpha,\dot Q'_\alpha\rangle_{\alpha\in \delta'} \text{ of length } \delta'=\lambda_5 + \lambda_5 + \sum_{\zeta\in \pi\smallsetminus (\pi_1+\lambda_5)}\delta'_\zeta \text{, with } \delta'_\zeta:=\begin{cases} \lambda_5 & \text{if } \zeta\in\Pi,\\ 1 & \text{otherwise.} \end{cases} \end{equation} For each $\alpha<\lambda_5+\lambda_5$, $|\dot Q'_\alpha|\le\aleph_1$, and for each $\alpha\ge \lambda_5+\lambda_5$ in $\delta'$, we say that $\dot Q'_\alpha$ ``is of type $i$'' for $i\in\{\textrm{m},\textrm{p},1,2,3,4\}$, if either $\dot Q'_\alpha=\dot{\mathbb{Q}}_\zeta$ for the respective $\zeta\in \Sigma_i$, or if $\dot Q'_\alpha$ is a factor $\dot Q$ of $\dot{\mathbb{Q}}_\zeta$ for the respective $\zeta\in \Pi_i$. Note that $P'_{\lambda_5}=\mathbb{P}_{\pi_1}$. \begin{lemma}\label{lem:step1} The construction above satisfies Goal~\ref{mainstepI}, apart from possibly $\mathsf{LCU}_3$. \end{lemma} \begin{proof} \textbf{$\bm{\mathfrak{c}=\lambda_5}$}, as we use a construction following Corollary~\ref{cor:summary}.\smallskip \textbf{Item (a) for $\bm{i=\mathrm{sp}}$}, i.e., $\mathsf{LCU}_\textrm{sp}$: This also follows from Corollary~\ref{cor:summary}, and implies $\Vdash_{\mathbb{P}} \mathfrak{s}\le \lambda$.\smallskip \textbf{$\bm{\mathfrak{p}=\mathfrak{s}=\lambda}$}: To see $\mathfrak{p}\ge\lambda$ it is enough to show (in fact, this is equivalent by Bell's theorem~\cite{MR643555}): For every $\sigma$-centered poset $Q'$ of size ${<}\lambda$ (and contained in $\omega^\omega$), and any collection $D$ of size ${<}\lambda$ of dense subsets of $Q'$, there is a $Q'$-generic set over $D$. Any such $Q'$ and $D$ are forced to be already in the $\mathbb{P}_\alpha$-extension for some $\alpha<\pi$. Pick some $\zeta\in\Pi_{\textrm{p}}$ larger than $\alpha$.
Then a name $\dot Q$ for $Q'$ is used as a factor of $\dot{\mathbb{Q}}_\zeta$, i.e., in $\mathbb{P}_{\zeta+1}$ there is a $\dot Q$-generic object (over $D$). ZFC shows $\mathfrak{p}\le\mathfrak{s}$, and as $\mathfrak{s}\le\lambda$ we get equality.\smallskip \textbf{Item (a) for $\bm{i\in\{1,2,4\}}$}, i.e., $\mathsf{LCU}_i$: This is exactly as in~\cite[\S1.2]{GKS}. For this argument we interpret $\mathbb{P}$ as the iteration $\langle P'_\alpha,\dot Q'_\alpha\rangle_{\alpha\in\delta'}$ of~\eqref{eq:noprod}. However, we work in the $\mathbb{P}_{\pi_1}$-extension (i.e., the $P'_{\lambda_5}$-extension). So we investigate the forcing which first adds $\lambda_5$ many Cohens, and then performs a FS iteration of the iterands $Q'_\alpha$. As in~\cite[\S1.2]{GKS}, we now argue that each such $Q'_\alpha$ is $(\mathbf{R}^\mathsf{LCU}_i,\lambda_i)$-good. So let us quickly check the cases (they are all summarized in~\cite[Lemma~1.6]{GKS}, and use results from \cite{JS90}, \cite{Kam89}, \cite{Bre91}). To get $(\mathbf{R}^\mathsf{LCU}_1,\lambda_1)$-goodness: \begin{itemize} \item If $Q'_\alpha$ is of type $\mlabel$ or type $1$, then $Q'_\alpha$ has size ${<}\lambda_1$ and thus is $(\mathbf{R}^\mathsf{LCU}_i,\lambda_1)$-good (for any $i\in\{1,2,3,4\}$). \item If $Q'_\alpha$ is of type $\plabel$, $3$ or $4$, then $Q'_\alpha$ is $\sigma$-centered, and therefore $(\mathbf{R}^\mathsf{LCU}_1,\aleph_1)$-good. \item If $Q'_\alpha$ is of type $2$, then it is a subalgebra of the measure algebra, and thus $(\mathbf{R}^\mathsf{LCU}_1,\aleph_1)$-good. \end{itemize} For $(\mathbf{R}^\mathsf{LCU}_2,\lambda_2)$-goodness the argument is even simpler: All $Q'_\alpha$ have size ${<}\lambda_2$ or are $\sigma$-centered; and for $(\mathbf{R}^\mathsf{LCU}_4,\lambda_4)$-goodness the argument is trivial, as all $Q'_\alpha$ have size ${<}\lambda_4$.
So this argument shows that, in the intermediate model $V^{\mathbb{P}_{\pi_1}}$, the rest $P'$ of the forcing satisfies $\mathsf{LCU}_i(P',\lambda_i)$, witnessed by the Cohen reals $\{\eta_\alpha: \alpha\in [\pi_1,\pi_1+\lambda_i)\}$. This implies by definition of $\mathsf{LCU}$ that in the ground model $\mathsf{LCU}_i(\mathbb{P},\lambda_i)$ holds, witnessed by the same Cohen reals.\smallskip \textbf{Item (b) for $\bm{i\in I_1}$}, i.e., $\mathsf{COB}_i$: This is also basically the same as in \cite[\S1.2]{GKS}, where this time we argue from the ground model $V$, not the intermediate model $V^{\mathbb{P}_{\pi_1}}$. We define the partial order $S_i$ to have domain $\Sigma_i$, ordered by $\zeta_1\le_{S_i} \zeta_2$ iff $C_{\zeta_1}\subseteq C_{\zeta_2}$. Note that $C_\zeta$ is in $[\pi^+]^{<\lambda_i}$, $|\pi^+|=\lambda_5$, and our book-keeping ensures that $S_i$ is ${<}\lambda_i$-directed. Corollary~\ref{cor:small}(\ref{item:quaxc}) together with the fact that $\lambda\le \lambda_i$ shows that our bookkeeping will catch every real in the $\mathbb{P}$-extension. Therefore $S_i$, and the generics added at stages in $S_i$, witness the COB property.\smallskip \textbf{Item (b) for $\bm{i\in \bm{I_0}\cap\{1,2,3\}}$}: This is very similar: Let $S_i$ be the set of pairs $(\zeta,\dot E)$ such that $\zeta\in\Pi_i$ and $\dot E$ is a nice $\mathbb{P}^*_\zeta$-name of a $\lambda_i$-elementary subset of $\omega^\omega$. We order $S_i$ as follows: $(\xi_1,\dot E_1)\le_i(\xi_2,\dot E_2)$ iff $\xi_1\leq\xi_2$ and the empty condition forces that $\dot E_1\subseteq \dot E_2$. For $(\zeta,\dot E)\in S_i$, $\mathbb{S}_i\cap\dot E$ forms part of the FS product $\dot{\mathbb{Q}}_\zeta$, so $\mathbb{P}_{\zeta+1}$ adds a $\mathbb{S}_i\cap\dot E$-generic object $\dot{y}_{\zeta,\dot E}$ as in Fact~\ref{fc:Suslinrestr}. We show that $S_i$ and $\{\dot{y}_{\zeta,\dot E}:\, (\zeta,\dot E)\in S_i\}$ witness $\mathsf{COB}_i$. 
Let $\dot r$ be a $\mathbb{P}^*$-name of a real, then $\dot r$ is a $\mathbb{P}^*_{\xi_0}$-name of a real for some $\xi_0<\pi$, and there is some $\dot E_0$ such that $(\xi_0,\dot E_0)\in S_i$ and $\Vdash_{\mathbb{P}_{\xi_0}}\dot r\in\dot E_0$. Hence, whenever $(\xi,\dot E)\in S_i$ is above $(\xi_0,\dot E_0)$, $\Vdash_\mathbb{P} \dot r\in\dot E$ so $\dot y_{\xi,\dot E}$ is generic over $\dot r$. And for any ${<}\lambda_i$-sequence $\langle E_j:j\in J\rangle$ of nice names for $\lambda_i$-elementary sets $E_j$ we can find a nice name for a $\lambda_i$-elementary set $E\supseteq \bigcup_{j\in J} E_j$. This shows that $S_i$ is ${<}\lambda_i$-directed.\smallskip \begin{comment} Due to Theorem~\ref{PresSplit} it is enough to show that the set of all $2$G-automorphisms of ${\mathbb{P}}$ is compatible with ${\mathbb{P}}$ (it is obvious that ${\mathbb{P}}$ is S$\lambda$-s, hence it has $\lambda$-small history). Let $h$ be a $2$G-automorphism. We show, by induction on $\xi\in[\pi_1,\pi]$, that $\hat{h}_\xi$ can defined and that $\hat{h}_\xi{\upharpoonright}(\mathbb{P}^*{\upharpoonright}C)$ is an automorphism on $\mathbb{P}^*{\upharpoonright}C$ for any ${\mathbb{P}}$-closed $C\subseteq\xi^+$. For the basic step $\xi=\pi_1$ it is clear that $\hat{h}_{\pi_1}$ can be defined. Whenever $C\subseteq\pi_1^+=\pi_1$ is ${\mathbb{P}}$-closed, we get $h[C]=C$, so $\hat{h}_{\pi_1}(p)\in\mathbb{P}^*{\upharpoonright}C$ for any $p\in\mathbb{P}^*{\upharpoonright}C$ because $\mathbb{P}^*_{\pi_1}=\mathbb{P}_{\pi_1}$ and $H^*(\hat{h}_{\pi_1}(p))=H(\hat{h}_{\pi_1}(p))=h[H(p)]\subseteq C$. Conversely, if $q:=\hat{h}_{\pi_1}(p)\in\mathbb{P}^*{\upharpoonright}C$ then $p=\hat{h}_{\pi_1}^{-1}(q)=\hat{g}_{\pi_1}(q)\in\mathbb{P}^*{\upharpoonright}C$ by applying the previous argument to $g:=h^{-1}$ (see Lemma~\ref{automident}(c)). The limit step $\xi>\pi_1$ is clear, so it remains to look at the successor step $\xi=\zeta+1$. 
Since $C_{\zeta}$ is ${\mathbb{P}}$-closed, by induction hypothesis $\hat{h}_{\zeta}{\upharpoonright}(\mathbb{P}^*{\upharpoonright} C_{\zeta})$ is an automorphism on $\mathbb{P}^*{\upharpoonright}C_{\zeta}=\mathbb{P}^-_{\zeta}$. This implies that $\Vdash_{\mathbb{P}_{\zeta}}\hat{h}_{\zeta}^*[\dot{\mathbb{Q}}_{\zeta}]=\dot{\mathbb{Q}}_{\zeta}=\mathbb{S}_{\zeta}^{V^{\mathbb{P}^-_{\zeta}}}$ when $\zeta\in \Sigma$. When $\zeta\in \Pi$ it is clear that both $\dot{\mathbb{Q}}_{\zeta}$ and $\hat{h}^*_{\zeta}(\dot{\mathbb{Q}}_{\zeta})$ are forced to be one of the following: \begin{itemize} \item the FS product of all the $k_0$-Knaster posets of size ${<}\lambda_\mathfrak{m}$ contained in $\omega^\omega$ that belong to $V^{\mathbb{P}^-_{\zeta}}$; \item the FS product of all the $\sigma$-centered posets of size ${<}\lambda$ contained in $\omega^\omega$ that belong to $V^{\mathbb{P}^-_{\zeta}}$; \item for some $i\in[1,3]\cap I_0$, the FS product of all posets of the form $\mathbb{S}_i\cap E$ for some $\lambda_i$-elementary subset $E\in V^{\mathbb{P}^-_{\zeta}}$ of $\omega^\omega$. \end{itemize} So $\Vdash_{\mathbb{P}_{\zeta}}\hat{h}_{\zeta}^*[\dot{\mathbb{Q}}_{\zeta}]=\dot{\mathbb{Q}}_{\zeta}$. Therefore, we can define $\hat{h}_{\zeta+1}$. Now assume that $C\subseteq\xi^+$ is ${\mathbb{P}}$-closed with $\zeta\in C$ (the non-trivial case). If $p\in\mathbb{P}^*{\upharpoonright}C$ then, by induction hypothesis, $\hat{h}_{\zeta}(p{\upharpoonright}\zeta)\in\mathbb{P}^*{\upharpoonright}(C\cap\zeta^+)$ and $H^*(\hat{h}^*_{\pi_1}(p(\zeta)))\subseteq C\cap\xi^+_0$ since $H^*(p(\zeta))\subseteq C\cap\zeta^+$ (that is, $p(\zeta)$ only depends on conditions in $\mathbb{P}^*{\upharpoonright}(C\cap\zeta^+)$, so does $\hat{h}^*_{\zeta}(p(\zeta))$). It is routine to check that $\hat{h}^*_{\zeta}(p(\zeta))$ satisfies the conditions to conclude $\hat{h}_{\xi}(p)=(\hat{h}_{\zeta}(p{\upharpoonright}\zeta),\hat{h}^*_{\zeta}(p(\zeta)))\in\mathbb{P}^*{\upharpoonright}C$. 
Conversely, if $q:=\hat{h}_\xi(p)\in\mathbb{P}{\upharpoonright}C$ then $p=\hat{h}_{\xi}^{-1}(q)=\hat{g}_{\xi}(q)\in\mathbb{P}^*{\upharpoonright}C$ by applying the previous argument to $g:=h^{-1}$ (see Lemma~\ref{automident}(c)). \end{comment} \textbf{Item (b) for $\bm{i=\mathrm{sp}}$,} i.e., $\mathsf{COB}_\textrm{sp}$: This is basically the same: Among the $\sigma$-centered forcings that we use as factors at a step $\zeta$ of type $\plabel$, there are Mathias-Prikry forcings $\mathbb{M}_{\dot F}$ on (free) filter bases of size ${<}\lambda$. In more detail: Assume $\dot F$ is a $\mathbb{P}^*_\zeta$-name for a filter base of size ${<}\lambda$, so set $\dot Q:=\mathbb{M}_{\dot F}$. Then $\dot Q$ is $\sigma$-centered and adds a real which is not split by any set in $\dot F$. So let $S_{\mathrm{sp}}$ be the set of pairs $(\zeta,\dot F)$ such that $\zeta\in\Pi_{\plabel}$ and $\dot F$ is a nice $\mathbb{P}^*_\zeta$-name of a filter base of size ${<}\lambda$. Set $(\xi_1,\dot F_1)\le_{\mathrm{sp}}(\xi_2,\dot F_2)$ iff $\xi_1\leq\xi_2$ and the empty condition forces that $\dot F_1\subseteq \dot F_2\cup\dot F^d_2$, where $F^d:=\{\omega\smallsetminus x:x\in F\}$. For $(\xi,\dot F)\in S_{\mathrm{sp}}$, let $\dot{y}_{\xi,\dot F}$ be the $\mathbb{P}_{\xi+1}$-name of the generic real added by $\mathbb{M}_{\dot F}$. It follows that $S_{\mathrm{sp}}$ and $\{\dot{y}_{\xi,\dot F}:\, (\xi,\dot F)\in S_{\mathrm{sp}}\}$ witness $\mathsf{COB}_{\mathrm{sp}}$. \smallskip \textbf{Item (d)} is exactly the same as in~\cite[Lemma~4.8]{GKMS1}. \end{proof} \begin{comment} For each $\alpha\in \Pi_{\plabel}$ let \[\mathcal{F}_\alpha:=\big\{F\in[\mathrm{ncp}_{\mathbb{P}^*_{\xi_\alpha}}(\langle\omega:n<\omega\rangle)]^{<\lambda}\ :\ \Vdash_{\mathbb{P}^*_{\xi_\alpha}}\Tilde{F}\text{\ is a base of a free filter on }\omega\big\},\] where $\xi_\alpha=\pi_1+\lambda_5+\alpha$.
Let \[S_{\mathrm{sp}}:=\bigcup_{\alpha\in \Pi_0}\mathcal{F}_\alpha\] ordered by $F\leq_{\mathrm{sp}}F'$ iff $\mathbb{P}_\pi\Vdash\Tilde{F}\subseteq\Tilde{F}'$. It is clear that $\mathfrak{b}(S_{\mathrm{sp}})=\lambda$ and $|S_{\mathrm{sp}}|=\lambda_5$. For each $F\in S_{\mathrm{sp}}$ let $\dot{y}_F$ be the name of the generic real added by $\mathbb{M}_{\Tilde{F}}$. Note that this poset is part of the FS product $\dot{\mathbb{Q}}_{\xi_\alpha}$ because it is $\sigma$-centered of size ${<}\lambda$. It is not hard to see that $\{\dot{y}_F\ :\ F\in S_{\mathrm{sp}}\}$ witnesses $\mathsf{COB}_{\mathrm{sp}}(\mathbb{P}_\pi,S_{\mathrm{sp}})$. \end{comment} To guarantee $\mathfrak{b}\le\lambda_3$, we have to make sure that the large iterands (i.e., the forcings of size $\ge\lambda_3$) do not destroy $\mathsf{LCU}_3$ (small forcings are, as usual, harmless). In our construction, the only large forcings are the partial eventually different forcings at steps $\zeta\in\Sigma_4$. For these forcings, we introduced ultrafilter-limits in~\cite{GKS} (based on~\cite{GMS}) and use them to preserve $\mathsf{LCU}_3$. The same argument works here. \begin{remark} Note that in the proof of Lemma~\ref{lem:step1} we do not require the hypotheses $\chi=\chi^{{<}\chi}$ and $\lambda_3=\chi^+$ from Assumption~\ref{asm:bla}. These will be used to guarantee $\mathsf{LCU}_3$ in the following subsection. If in Assumption~\ref{asm:bla} we consider $\lambda_3=\lambda_4$ (instead of $\lambda_3<\lambda_4$), then the same proof of Lemma~\ref{lem:step1} guarantees Goal~\ref{mainstepI} in full (i.e., including $\mathsf{LCU}_3$). When $\lambda_{\mathrm{sp}}=\lambda_4$, in the forcing construction above we have $\Sigma=[\pi_1,\pi_1+\lambda_5)$. \end{remark} \subsection{Dealing with~\texorpdfstring{$\mathfrak{b}$}{b}} \begin{lemma}\label{lem:step2} In addition to Assumption~\ref{asm:bla} we suppose that $2^\chi\geq\lambda_5$.
Then we can choose $C_\zeta$ for all $\zeta\in \Sigma_4$ such that $\mathsf{LCU}_3(\mathbb{P},\kappa)$ holds for all regular $\kappa\in[\lambda_3,\lambda_5]$. Moreover, in the inductive construction, for each $\zeta\in \Sigma_4$ there is a $\lambda$-club in $[\zeta^+]^{<\lambda_4}$ such that we can choose $C_\zeta$ from this club set. \end{lemma} \begin{proof} This is analogous to~\cite[\S1.3]{GKS}, in particular to Lemma/Construction 1.30. We will only remark on the required changes. Again we interpret $\mathbb{P}$ as in~\eqref{eq:noprod}. We work from the ground model, not in the intermediate $\mathbb{P}_{\pi_1}$-extension. Accordingly, we have to incorporate the initial segment of the iteration $\mathbb{P}_{\pi_1}=P'_{\lambda_5}$ into the argument. This is no problem, as we just have to deal with another type of small forcing, the $\dot Q'_\alpha$ for $\alpha<\lambda_5$, which all have size $\aleph_1$. Of course, $E':=\mathbb{E}^{V^{\mathbb{P}^*_\zeta{\restriction} C_\zeta}}$ is closed under conjunctions of conditions, i.e., satisfies the assumptions of \cite[Fact~1.25]{GKS}. And instead of ``ground model code sequences'' we use ``nice $\mathbb{P}^*_\zeta{\restriction} C_\zeta$-names''. The crucial part of the old proof is \cite[Lemma~1.30(d)]{GKS}. There, we use the notation $w_\alpha\subseteq \alpha$, and $Q_\alpha$ are those $\mathbb{E}$-conditions that can be calculated in a Borel way from the generics with indices in $w_\alpha$, i.e., $Q_\alpha=\mathbb{E}\cap V^{\mathbb{P}^*{\restriction}w_\alpha}$; and we show that the set of ``suitable'' $w_\alpha$ is an $\omega_1$-club in $[\alpha]^{<\lambda_4}$, where ``suitable'' means: If we have a ground-model-sequence of (nice) $Q_\alpha$-names, then the $D_\alpha^\varepsilon$-limit (a well-defined condition in eventually different forcing) is also an element of $Q_\alpha$ (for all $\varepsilon\in\chi$).
The same argument gives us the following for our new framework: We can perform the construction of Lemma~\ref{lem:step1} and, at all indices $\zeta$ of type $4$, the set of ``suitable'' $C_\zeta\in [\zeta^+]^{<\lambda_4}$ is a $\lambda$-club, where suitable now means the following (recall that we have $\dot{\mathbb{Q}}_\zeta=\mathbb{E}^{V^{\mathbb{P}^*_\zeta{\restriction} C_\zeta}}$): For any sequence of nice $\mathbb{P}^*_\zeta{\restriction} C_\zeta$-names for elements of $\mathbb{E}$,\footnote{Note: as $|C_\zeta|<\lambda_4$, and $\lambda_4$ is $\aleph_1$-inaccessible, there are ${<}\lambda_4$ many such sequences, cf.~Lemma~\ref{smallH} (and~\ref{item:restr}).} the $D_\alpha^\varepsilon$-limit of this sequence is forced to be in $\dot{\mathbb{Q}}_\zeta$ as well. Here, we only get a $\lambda$-club and not an $\omega_1$-club, as only for increasing unions of length $\lambda$ do we have \[ \bigcup_{i\in\lambda} \bigl(\mathbb{P}^*_\zeta{\restriction} C_i \bigr) = \mathbb{P}^*_\zeta{\restriction}\bigg( \bigcup_{i\in\lambda} C_i\bigg). \] Also, we now have to choose $C_\zeta$ not only in this $\lambda$-club, but in the intersection with the $\lambda$-club of Lemma~\ref{smallrestr}(b) (so that we get a closed $C_\zeta$ such that $\mathbb{P}^*_\zeta{\restriction} C_\zeta\lessdot \mathbb{P}^*_\zeta$ as required for our construction). The same argument as in the old proof (Lemma 1.31 there) then shows: Whenever all $C_\zeta$ are chosen ``suitably'' (for all $\zeta$ of type $4$), we get $\mathsf{LCU}_3$. \end{proof} \begin{theorem}\label{thm:step3} Assumption~\ref{asm:bla} is enough to find a $\mathbb{P}$ as required for Goal~\ref{mainstepI}. \end{theorem} \begin{proof} Let $R$ be the poset of partial functions $r:\chi\times\lambda_5\to\{0,1\}$ with domain of size ${<}\chi$ (ordered by extension). As we assume $\chi^{<\chi}=\chi$, this poset is $\chi^+$-cc, and obviously ${<}\chi$-closed, so it does not change any cofinalities.
As in the old proof, at each step $\zeta$ of type $4$ in the inductive construction of $\mathbb{P}$, we can go into the $R$-extension of the ground model, use Lemma~\ref{lem:step2} to get a suitable $C_\zeta^0$ (above some initial set given by the usual book-keeping), and find in $V$ some $\tilde C_\zeta^0$ such that $C_\zeta^0$ is forced to be a subset of it. Now we iterate this $\lambda$ many times (not just $\omega_1$ times as in the old proof), taking unions at limits, and use the fact that the ``suitable'' parameters $C_\zeta$ are closed under $\lambda$-unions (they form a $\lambda$-club in $[\zeta^+]^{<\lambda_4}$). This way we get a sequence of parameters $C_\zeta$ \emph{in the ground model} such that, if we define in the $R$-extension a forcing $\mathbb{P}'$ using these parameters, we get $\mathsf{LCU}_3(\mathbb{P}',\kappa)$; a simple absoluteness argument~\cite[Lemma~1.33]{GKS} then shows that these parameters already define in $V$ a forcing $\mathbb{P}$ with $\mathsf{LCU}_3(\mathbb{P},\kappa)$. Note: We do not interpret $\Xi_\zeta$ (for $\zeta\in\Pi$) in the $R$-extension, but use it with the same meaning it has in $V$. So $\mathbb{P}'$ may not be symmetric in the $R$-extension, but this is not important here: We are only interested in $\mathsf{LCU}_3(\mathbb{P}',\kappa)$ in this argument, and we do not claim that $\mathbb{P}'$ in the $R$-extension satisfies the other properties we have already shown for $\mathbb{P}$. And for $\mathsf{LCU}_3(\mathbb{P}',\kappa)$, any iterand that has size ${<}\lambda_3$ is unproblematic. \end{proof} \begin{remark}\label{rem:limitb} It is not necessary to restrict $\lambda_3$ to a successor cardinal in Assumption~\ref{asm:bla}. To allow regular $\lambda_3$ in general, we forget about $\chi$ in Assumption~\ref{asm:bla} and just assume that $\lambda_3^{<\lambda_3}=\lambda_3>\aleph_1$.
In this way, Lemma~\ref{lem:step2} is valid by assuming $2^{\lambda_3}\geq\lambda_5$ instead, and Theorem~\ref{thm:step3} is true when replacing $\chi$ by $\lambda_3$ in the proof (i.e., $R$ gets modified and it forces $2^{\lambda_3}\geq\lambda_5$). No further changes in the proofs (even in those from~\cite{GKS}) are needed to justify this. On the other hand, can we allow $\lambda_3=\aleph_1$ in Assumption~\ref{asm:bla}? (So all cardinals except $\lambda_4$ and $\lambda_5$ are $\aleph_1$.) Although we can make the construction in this case, now the forcings $\mathbb{G}_{\mathbf{B}_\delta}$ have size $\lambda_3=\aleph_1$, so they could destroy $\mathsf{LCU}_3(\mathbb{P},\aleph_1)$. An alternative to deal with this problem is to perform a similar iteration with $\pi_0=0$ (so $\pi_1=0$, that is, no initial FS product of $\mathbb{G}_{\mathbf{B}}$ is used) and guarantee $\mathsf{LCU}_{\mathbf{R}^*}(\mathbb{P},\kappa)$ for any regular $\kappa\in[\aleph_1,\lambda_5]$ with the methods of this subsection (i.e.\ the methods from~\cite[\S1.3 \& \S1.4]{GKS}) adapted to $\mathbf{R}^*$, where $\mathbf{R}^*$ is the Blass-uniform relational system from~\cite{Ksplit} (see also~\cite[Example~2.19]{MejMod}) such that $\mathfrak{b}(\mathbf{R}^*)=\max\{\mathfrak{b},\mathfrak{s}\}$ and $\mathfrak{d}(\mathbf{R}^*)=\min\{\mathfrak{d},\mathfrak{r}\}$. \end{remark} \subsection{The other constellations for the Knaster numbers}\label{ss:remaining} So far we have assumed that $\lambda_\mathfrak{m}>\aleph_1$ and that $k_0<\omega$. We now remark on how to prove the other cases:\smallskip \emph{Case $\lambda_\mathfrak{m}=\aleph_1$.} We only change $I_0:=\{\plabel\}\cup\{i\in[1,4]:\lambda_i\leq\lambda\}$ (so (I2)(i) is excluded in the construction). Check details in~\cite[Lemma~4.8]{GKMS1}. 
Note that here the value of $k_0$ is irrelevant.\smallskip \emph{Case $k_0=\omega$ and $\lambda_\mathfrak{m}>\aleph_1$.} Force with $P_{\mathrm{cal},\lambda_\mathfrak{m}}\ast\mathbb{P}$ where $P_{\mathrm{cal},\lambda_\mathfrak{m}}$ is the precaliber $\aleph_1$ poset from~\cite[\S5]{GKMS1} and $\mathbb{P}$ is the forcing resulting from the construction above (in the $P_{\mathrm{cal},\lambda_\mathfrak{m}}$-extension).\footnote{For $i=\mlabel$, recall that ``$\omega$-Knaster" abbreviates ``precaliber $\aleph_1$".} \subsection{The alternative order of the left side} The construction of~\cite[\S2]{KeShTa:1131} for the alternative order of the left side of Cicho\'n's diagram can also be adapted in the situation of the previous theorems. This is just interchanging the order of the values of $\mathfrak{b}$ and ${\ensuremath{\cov(\Null)}}$, that is, instead of forcing ${\ensuremath{\cov(\Null)}}=\lambda_2\leq\mathfrak{b}=\lambda_3$, we force $\mathfrak{b}=\lambda_3<{\ensuremath{\cov(\Null)}}=\lambda_2$. See also \cite{modKST} for the weakening of the hypothesis GCH: \begin{theorem}\label{altmainstepI} Theorem~\ref{thm:step3} (and Goal~\ref{mainstepI}) is still valid when, in Assumption~\ref{asm:bla}, we replace $\lambda_1\leq\lambda_2\leq\lambda_3<\lambda_4$ by $\lambda_1\leq\lambda_3<\lambda_2<\lambda_4$.\footnote{As in~\cite[\S2]{KeShTa:1131}, the relational system $\mathbf{R}^\mathsf{LCU}_2$ corresponding to this result is not the same as the one for Theorem~\ref{thm:step3}. Although this is a relational system of the reals, it is not Blass-uniform.} \end{theorem} Remark~\ref{rem:limitb} also applies in this situation. \section{15 values}\label{sec:15} In this section, we review some tools from~\cite{GKMS2,GKMS1} and show how they are used to control the cardinal characteristics other than $\mathfrak{s}$. We describe the forcing constructions but we omit the details in the proofs, since these are exactly as in the cited references. 
We use the notions of \emph{$\mathfrak{m}$-like\ cardinal characteristic} and \emph{$\mathfrak{h}$-like\ characteristic} from~\cite[\S3]{GKMS1}. We do not need to recall their definitions; it suffices to know some of their properties, and that the cardinals $\mathfrak{m}_k$ ($1\leq k\leq \omega$) are $\mathfrak{m}$-like, $\mathfrak{h}$ and $\mathfrak{g}$ are $\mathfrak{h}$-like, and $\mathfrak{p}$ and $\mathfrak{t}$ are of both types. \begin{lemma}[{\cite[Cor.~3.5]{GKMS1}}]\label{lem:mlike} Let $\kappa$ be an uncountable regular cardinal, $\lambda$ a cardinal, $\mathfrak{x}$ a cardinal characteristic, and let $\mathbb{P}$ be a $\kappa$-cc poset that forces $\mathfrak{x}=\lambda$ (so $\lambda$ is a cardinal in the $\mathbb{P}$-extension). If $M\preceq \mathcal H_\chi$ (with $\chi$ a large enough regular cardinal) is ${<}\kappa$-closed and contains (as elements) $\mathbb{P},\kappa,\lambda$ and the parameters of the definition of $\mathfrak{x}$, then $\mathbb{P}\cap M$ is a complete subposet of $\mathbb{P}$ and: \begin{enumerate}[(i)] \item If $\mathfrak{x}$ is $\mathfrak{m}$-like\ and $\lambda\geq\kappa$, then $\mathbb{P}\cap M\Vdash \mathfrak{x}\geq\kappa$. \item If $\mathfrak{x}$ is $\mathfrak{m}$-like\ and $\lambda<\kappa$, then $\mathbb{P}\cap M\Vdash \mathfrak{x}=\lambda$. \item If $\mathfrak{x}$ is $\mathfrak{h}$-like, then $\mathbb{P}\cap M\Vdash\mathfrak{x}\leq|\lambda\cap M|$. \end{enumerate} \end{lemma} \begin{lemma}[{\cite[Lemma~6.3]{GKMS1}}]\label{lem:gfrak} Assume: \begin{enumerate}[(1)] \item $\kappa\le\nu$ are uncountable regular cardinals, $\mathbb{P}$ is a $\kappa$-cc poset. \item $\mu=\mu^{<\kappa}\geq\nu$ and $\mathbb{P}$ forces $\mathfrak{c}>\mu$.
\item For some relational systems of the reals $\mathbf{R}^1_i$ ($i\in I_1$) and some regular $\lambda^1_i\leq\mu$: $\mathbb{P}$ forces $\mathsf{LCU}_{\mathbf{R}^1_i}(\lambda^1_i)$ \item For some relational systems of the reals $\mathbf{R}^2_i$ ($i\in I_2$), and some directed order $S^2_i$ with $\mathfrak{b}(S^2_i)=\lambda^2_i\leq\mu$ and $|S^2_i|\leq\vartheta^2_i\leq\mu$: $\mathbb{P}$ forces $\mathsf{COB}_{\mathbf{R}^2_i}(S^2_i)$. \item For some $\mathfrak{m}$-like\ characteristics $\mathfrak y_j$ ($j\in J$) and $\lambda_j<\kappa$: $\Vdash_\mathbb{P} \mathfrak y_j=\lambda_j$. \item For some $\mathfrak{m}$-like\ characteristics $\mathfrak y'_k$ ($k\in K$): $\Vdash_\mathbb{P} \mathfrak{y}'_k\geq\kappa$. \item $|I_1\cup I_2\cup J\cup K|\leq\mu$. \end{enumerate} Then there is a complete subforcing $\mathbb{P}^*$ of $\mathbb{P}$ of size $\mu$ forcing: \begin{enumerate}[(a)] \item $\mathfrak y_j=\lambda_j$, $\mathfrak{y}'_k\geq\kappa$, $\mathsf{LCU}_{R^1_i}(\lambda^1_i)$ and $\mathsf{COB}_{R^2_{i'}}(\lambda^2_{i'},\vartheta^2_{i'})$ for all $i\in I_1$, $i'\in I_2$, $j\in J$ and $k\in K$; \item $\mathfrak{c}=\mu$ and $\mathfrak{g}\leq\nu$. \end{enumerate} \end{lemma} We are now ready to prove the main result of this paper. We use Notation~\ref{not:RLCU} and the following assumption for all the results in this section. \begin{assumption}\label{hyp:11} \ \begin{enumerate}[(1)] \item $\mu_\mathfrak{m}\leq\mu_\mathfrak{p}\leq\mu_{0}\leq\mu_1\leq\mu_2\leq\ldots\leq\mu_8$ are uncountable regular. \item $\mu_9\geq\mu_8$ is a cardinal such that $\mu_9^{{<}\mu_{0}}=\mu_9$. \item\label{newcard} $0\leq i_0\leq2$, $\mu_\mathrm{sp}\in[\mu_{i_0},\mu_{i_0+1}]$ and $\mu_\mathfrak{r}\in[\mu_{8-i_0},\mu_{9-i_0}]$ are regular. 
\item\label{11card} There are eleven regular cardinals $\theta_0>\cdots> \theta_{10}>\mu_9$ such that $\theta_i^{{<}\theta_i}=\theta_i$ for any $i<11$, $\theta_i$ is $\aleph_1$-inaccessible for $i\in\{1,3,5,7\}$, $\theta_3=\chi_3^+$ and $\chi_3=\chi_3^{<\chi_3}$.\footnote{% % We could further weaken the assumption depending on the value $i_0$. E.g., in case $i_0=1$, $\theta_i$ is required $\aleph_1$-inaccessible only for $i\in\{1,3,5\}$. Also, it is enough that $\theta_0^{{<}\theta_1}=\theta_0$ (here, $\theta_0$ could be singular), and $\theta_3$ is not needed successor according to Remark~\ref{rem:limitb}. For more pedantic weakenings, see~\cite[Rem.~3.5]{GKMS2}.% % } \end{enumerate} \end{assumption} Note that, under GCH, assumption (\ref{11card}) is irrelevant, and $\mu_9^{{<}\mu_{0}}=\mu_9$ is equivalent to $\cof(\mu_9)\geq\mu_{0}$. The Main Theorem for Figure~\ref{fig:cichonorders}(A) is proved in two steps through the following two results. \begin{theorem}\label{mainstep2} Under Assumption~\ref{hyp:11}, for any $k_0\in[2,\omega]$ there is a ccc poset $\mathbb{P}^1$ such that, for any $i\in\{1,2,3,4,\mathrm{sp}\}$, \begin{enumerate}[(a)] \item $\mathsf{LCU}_i(\mathbb{P}^1,\theta)$ holds for $\theta\in\{\mu_i,\mu_{9-i}\}$, where $\mu_{\mathrm{sp}}:=\mu_\mathfrak{s}$ and $\mu_{9-\mathrm{sp}}:=\mu_\mathfrak{r}$. \item There is some directed $S_i$ with $\cp(S_i)=\mu_i$ and $\cf(S_i)=\mu_{9-i}$ such that $\mathsf{COB}_i(\mathbb{P}^1,S_i)$ holds. \item $\mathbb{P}^1$ forces $\mathfrak{p}=\mathfrak{g}=\mu_0$ and $\mathfrak{c}=\mu_9$. \item $\mathbb{P}^1$ forces $\mathfrak{m}_k=\aleph_1$ for any $k\in[1,k_0)$, and $\mathfrak{m}_k=\mu_\mathfrak{m}$ for any $k\in[k_0,\omega]$. \end{enumerate} \end{theorem} \begin{proof} We deal with the case $i_0=1$, that is, $\mu_1\leq\mu_\mathrm{sp}\leq\mu_2$ (any other case is similar). 
We rewrite the sequence \[ \xymatrix@=0.1ex{ \mu_1&\leq&\mu_\mathrm{sp}&\leq&\mu_2&\leq& \mu_3&\leq&\mu_4&\leq&\mu_5&\leq& \mu_6&\leq&\mu_{\mathfrak{r}}&\leq&\mu_7&\leq&\mu_8&\leq&\mu_9&\text{as} \\ \vartheta_{10}&\leq&\vartheta_8&\leq&\vartheta_6&\leq&\vartheta_4&\leq&\vartheta_2&\leq&\vartheta_1&\leq&\vartheta_3&\leq&\vartheta_5&\leq&\vartheta_7&\leq&\vartheta_9&\leq&\vartheta_{11}, } \] and let $\langle\theta_j:j<11\rangle$ be cardinals as in Assumption~\ref{hyp:11}(\ref{11card}) ordered by \[\vartheta_{11}<\theta_{10}<\theta_9<\cdots<\theta_0\] as shown in Figure~\ref{fig:setup2}. \begin{figure} \resizebox{\textwidth}{!}{$ \xymatrix@=5ex{ & & &\txt{$\innitialmark{\cfrak}{=}\theta_0$} \\ \txt{$\innitialmark{\covN}{=}\theta_5$} \ar@{.}[r]\ar[dr]_-{\textstyle\theta_4} &\txt{$\innitialmark{\nonM}{=}\theta_1 $} \ar@{.}[r]\ar[rru] &*+[F.]{\phantom{\lambda}} \ar@{=}@[Gray][r] &*+[F.]{\phantom{\lambda}} \ar@{=}@[Gray][u] \\ \txt{$\mathfrak{p}^{\mathrm{pre}}{=}\mathfrak{s}^{\mathrm{pre}}{=}\theta_7$} \ar[u]^-{\textstyle\theta_6} &\txt{$\innitialmark{\bfrak}{=}\theta_3$} \ar@{.}[r]\ar[u]_-{\textstyle\theta_2} &*+[F.]{\phantom{\lambda}} \ar@{=}@[Gray][u] \\ \txt{$\innitialmark{\addN}{=}\theta_9$} \ar@{.}[r]\ar[u]^-{\textstyle\theta_8} &*+[F.]{\phantom{\lambda}} \ar@{.}[r]\ar@{=}@[Gray][u] &*+[F.]{\phantom{\lambda}} \ar@{=}@[Gray][r]\ar@{=}@[Gray][u] &*+[F.]{\phantom{\lambda}} \ar@{=}@[Gray][uu] \\ & & &\txt{$\mathfrak{c}{=}\vartheta_{11}$} \ar[ulll]^-{\textstyle\theta_{10}} \\ \txt{${\ensuremath{\cov(\Null)}}{=}\vartheta_6$} \ar@{.}[r]\ar[rd] &\txt{${\ensuremath{\non(\Meager)}}{=}\vartheta_2$} \ar@{.}[r]\ar[ddr] &*+[F.]{\phantom{\lambda}} \ar@{.}[r] &\txt{${\ensuremath{\cof(\Null)}}{=}\vartheta_{9}$} \ar[u] \\ \txt{$\mathfrak{s}{=}\vartheta_8$} \ar[u] &\txt{$\mathfrak{b}{=}\vartheta_4$} \ar@{.}[r]\ar[u] &\txt{$\mathfrak{d}{=}\vartheta_3$} \ar@{=}@[Gray][u]\ar[dr] &\txt{$\mathfrak{r}{=}\vartheta_7$} \ar[u] \\ \txt{${\ensuremath{\add(\Null)}}{=}\vartheta_{10}$} 
\ar@{.}[r]\ar[u] &*+[F.]{\phantom{\lambda}}\ar@{.}[r] \ar@{=}@[Gray][u] &\txt{${\ensuremath{\cov(\Meager)}}{=}\vartheta_1$} \ar@{.}[r]\ar[u] &\txt{${\ensuremath{\non(\Null)}}{=}\vartheta_5$} \ar[u] \\ \txt{$\mathfrak{p}{=}\mathfrak{g}{=}\mu_0$} \ar[u] &\txt{$\mathfrak{m}_{k_0}{=}\mu_{\mathfrak{m}}$} \ar[l] &\aleph_1 \ar[l] } $} \caption{The cardinals $\vartheta_n$ and $\theta_n$ are increasing along the arrows. The upper diagram shows the situation forced by $\mathbb{P}^0$, and the lower diagram shows the one forced by $\mathbb{P}^1$. ($\mathfrak{s}$ can be anywhere between $\mathfrak{p}$ and $\mathfrak{b}$.)}\label{fig:setup2} \end{figure} Let $\mathbb{P}^0$ be the ccc poset obtained by application of Theorem~\ref{thm:step3} to $\lambda_\mathfrak{m}=\mu_\mathfrak{m}$, $\lambda_1=\theta_9$, $\lambda_{\mathrm{sp}}=\theta_7$, $\lambda_2=\theta_5$, $\lambda_3=\theta_3$, $\lambda_4=\theta_1$ and $\lambda_5=\theta_0$. In particular, this forces the top diagram of Figure~\ref{fig:setup2} and item (d). We show how to construct a complete subforcing of $\mathbb{P}^0$ that satisfies the statement of the theorem, in particular, it forces the bottom diagram of Figure~\ref{fig:setup2}. For $1\leq n\leq 10$ and $\alpha<\vartheta_n$ define $M_{n,\alpha}$ fulfilling: \begin{itemize} \item $M_{n,\alpha}\preceq \mathcal H_\chi$ (for a fixed large enough regular $\chi$) and it contains (as elements) the sequences of $\theta$'s and $\vartheta$'s, $\mathbb{P}^0$ and the directed sets associated with the $\mathsf{COB}$ properties forced by $\mathbb{P}^0$. \item The sequences $\langle M_{m,\xi} : \xi<\vartheta_m \rangle$ for $1\leq m<n$ and $\langle M_{n,\xi}:\xi<\alpha\rangle$ belong to $M_{n,\alpha}$. \item $M_{n,\alpha}$ is ${<}\theta_n$ closed of size $\theta_n$. \end{itemize} Set $M_n:=\bigcup_{\alpha<\vartheta_n}M_{n,\alpha}$ and $M^+:=\bigcap_{n=1}^{10} M_n$. 
Exactly as in the proof of~\cite[Thm.~3.1]{GKMS2} one can show that $M^+\preceq \mathcal H_\chi$, $M^+$ is ${<}\vartheta_{10}$-closed, and $\mathbb{P}':=\mathbb{P}^0\cap M^+$ is a ccc poset that forces (a), (b) and $\mathfrak{c}=\theta_{10}$. Even more, $\mathbb{P}'$ forces (d) and $\mathfrak{p}\geq\vartheta_{10}$ by Lemma~\ref{lem:mlike}. The desired poset is a complete subposet $\mathbb{P}^1$ of $\mathbb{P}'$ of size $\vartheta_{11}$ obtained by direct application of Lemma~\ref{lem:gfrak} (to $\kappa=\nu=\mu_0$ and $\mu=\vartheta_{11}$). \end{proof} \begin{theorem}\label{mainfinal} Under Assumption~\ref{hyp:11}, for any $k_0\in[2,\omega]$ there is a cofinality-preserving poset $\mathbb{P}$ such that, for any $i\in\{1,2,3,4,\mathrm{sp}\}$, it satisfies (a), (b) and (d), and $\mathbb{P}$ forces $\mathfrak{p}=\mu_\mathfrak{p}$, $\mathfrak{h}=\mathfrak{g}=\mu_0$ and $\mathfrak{c}=\mu_9$. \end{theorem} \begin{proof} Let $\mathbb{Q}:=\mu_\mathfrak{p}^{{<\mu_\mathfrak{p}}}$ ordered by end extension, and let $\mathbb{P}^1$ be the poset constructed in Theorem~\ref{mainstep2}. Exactly as in the proof of~\cite[Thm.~7.4]{GKMS1}, $\mathbb{P}:=\mathbb{P}^1\times\mathbb{Q}$ is as required. \end{proof} In the same way, we can prove the Main Theorem corresponding to Figure~\ref{fig:cichonorders}(B). In this case, the initial forcing $\mathbb{P}^0$ is obtained from Theorem~\ref{altmainstepI}. \begin{theorem}\label{altmainfinal} Both Theorems~\ref{mainstep2} and~\ref{mainfinal} are valid when Assumption~\ref{hyp:11} is modified in the following way: \begin{enumerate}[(i)] \item We replace the order of the regular cardinals in (1) by \[\mu_\mathfrak{m}\leq\mu_\mathfrak{p}\leq\mu_0\leq\mu_1\leq\mu_3\leq\mu_2\leq\mu_4\leq\mu_5\leq\mu_7\leq\mu_6\leq\mu_8.\] \item In (\ref{newcard}), we consider $i_0\in\{0,1\}$, but $\mu_{\mathrm{sp}}\in[\mu_1,\mu_3]$ and $\mu_\mathfrak{r}\in[\mu_6,\mu_8]$ when $i_0=1$.
\item In (\ref{11card}), instead of $\theta_3=\chi_3^+$ and $\chi_3^{<\chi_3}=\chi_3$, assume $\theta_5=\chi_5^+$ and $\chi_5^{<\chi_5}=\chi_5$. \end{enumerate} \end{theorem} \section{Discussions}\label{sec:disc} One obvious question is: \begin{question} How to separate additional cardinals from Figure~\ref{fig:all20}? \end{question} Another one: \begin{question} How to get other orderings, where ${\ensuremath{\non(\Meager)}}>{\ensuremath{\cov(\Meager)}}$? \end{question} This is not possible with FS ccc iterations, as any such iteration whose length has uncountable cofinality $\delta$ forces ${\ensuremath{\non(\Meager)}}\le\delta\le{\ensuremath{\cov(\Meager)}}$, so alternative methods are required. A creature forcing method based on the notion of decisiveness~\cite{MR2499421,MR2864397} has been developed in~\cite{MR3696076} to separate five characteristics in Cicho\'n's diagram, but this method is restricted to $\omega^\omega$-bounding forcings, i.e., it results in $\mathfrak{d}=\omega_1$. An unbounded decisive creature construction might be promising. Alternatively, Brendle proposed a method of shattered iterations,\footnote{J.\ Brendle, personal communication.} which may also be a way to solve this problem. \begin{question} Are our main results (specifically, Theorems~\ref{thm:step3},~\ref{altmainstepI},~\ref{mainstep2},~\ref{mainfinal} and~\ref{altmainfinal}) valid for $k_0=1$? I.e., can we force $\mathfrak{m}>\aleph_1$? \end{question} For $k_0\geq 2$ there was no problem including, in our iterations, FS products of $k_0$-Knaster posets, since they are still $k_0$-Knaster (hence ccc), but we cannot just use FS products of ccc posets because they do not produce ccc posets in general. In particular, we do not know how to modify Theorem~\ref{thm:step3} to force $\mathfrak{m}>\aleph_1$. \begin{question} Is it consistent with $\mathrm{ZFC}$ that $\mathfrak{b}<\mathfrak{s}<{\ensuremath{\non(\Meager)}}<{\ensuremath{\cov(\Meager)}}$?
\end{question} In this paper we always have $\mathfrak{s}\leq\mathfrak{b}$; forcing $\mathfrak{s}>\mathfrak{b}$ is much more difficult, since Mathias--Prikry posets may add dominating reals. Shelah~\cite{SSCR} proved the consistency of $\mathfrak{b}=\aleph_1<\mathfrak{s}=\mathfrak{c}=\aleph_2$ by a countable support iteration of proper posets. Much later, Brendle and Fischer~\cite{VFJB11} constructed an FS iteration via a matrix iteration to force $\aleph_1<\mathfrak{b}=\kappa<\mathfrak{s}=\mathfrak{c}=\lambda$ for arbitrarily chosen regular $\kappa<\lambda$. However, in this latter model, ${\ensuremath{\non(\Meager)}}={\ensuremath{\cov(\Meager)}}=\mathfrak{c}$. It is not clear how to combine Brendle and Fischer's methods with ours to produce a poset answering the previous question.
\section{Introduction} In the past decade, multi-label learning has attracted considerable attention in the fields of neural networks and machine learning \cite{charte2014li-mlc,liu2015on,liu2017an,Li2018A,shen2018multilabel}. In this problem, instances (e.g., images, documents) are assumed to be associated with a set of labels instead of a single label. To handle this setting, multi-label learning aims to learn a series of classifiers, one per label, which together project an instance into a fixed-size label vector. In fact, multi-label learning is a special case of multi-output learning problems, where each label can be regarded as an output. So far, multi-label learning has been widely applied to image classification \cite{Tan2015Learning}, text classification \cite{liu2017deep}, music instrument recognition \cite{xioufis2011multilabel}, and so on. In this paper, we focus on solving the image classification problem by leveraging the multi-label learning technique. To date, many multi-label learning approaches for image classification have been proposed \cite{Tan2015Learning,Li2014Multi,Luo2015Multiview,Wang2016CNN,Wei2016HCP,Zhao2016Regional,li2017improving,shen2018compact}. Simply speaking, multi-label image classification can be achieved by casting the task into several binary classification subproblems, where each subproblem is to predict whether the image is relevant to the corresponding label. This kind of method treats different labels as independent. However, in practice, there are often correlations among labels; e.g., in a landscape image, blue sky and white clouds often appear simultaneously. Both empirically and theoretically, taking advantage of such correlations during learning can help predict test images more accurately \cite{Gao2013On,li2016self-paced,Wang2017Multi,li2019dynamic}.
Therefore, current mainstream approaches attempt to learn correlations among multiple labels based on training data and incorporate such correlations into the learning process to improve model performance \cite{Zhang2007ML}. Here we briefly survey some typical algorithms (for a complete review, please refer to \cite{Zhang2014A}). In \cite{Zhang2007ML}, a multi-label lazy learning approach, called ML-kNN, was presented, which utilized the maximum a posteriori (MAP) principle to determine label sets for unseen instances. Authors in \cite{Luo2015Multiview} proposed a multi-view framework to fuse different kinds of features, and explored the complementary properties of different views for matrix completion based multi-label classification. Huang and Zhou \cite{Huang2012Multi} proposed a method called multi-label learning using local correlation (MLLOC), which incorporated global discrimination fitting and local correlation sensitivity into a unified framework. Furthermore, Li et al. \cite{Li2018A} extended \cite{Huang2012Multi} into a self-paced framework, where the instances and labels were simultaneously learnt in an easy-to-hard fashion. Recently, deep learning has achieved very promising results in various image applications, including object recognition/detection \cite{Ren2015Faster} and semantic segmentation \cite{Long2017Fully}, to name a few. In the meantime, some deep learning methods have also been proposed for solving multi-label image classification problems \cite{Tan2015Learning,Wang2016CNN,Li2016Correlated,Chen2017Recurrent,Yeh2017Learning,Wang2017Multi,he2018reinforced,liu2018multilabel,shen2018deep}. For example, in \cite{Wang2016CNN}, authors proposed a CNN-RNN framework to jointly capture the semantic dependency among labels and the image-label relevance. Authors in \cite{Zhu2017Learning} proposed a spatial regularization network to generate class-related attention maps and capture both spatial and semantic label dependencies.
In \cite{Yeh2017Learning}, authors integrated the deep canonical correlation analysis and an autoencoder in a unified DNN architecture for image classification. The method in \cite{Chen2017Recurrent} introduced a recurrent attention mechanism to locate attentional and contextual regions for multi-label prediction. Different from the above methods, in this paper, we propose a novel framework for multi-label image classification, which is based on \underline{RE}construction regularized \underline{T}wo-way \underline{D}eep distance \underline{M}etric (RETDM) learning. Specifically, we first attempt to learn an embedding space, where original images and labels are embedded via a Convolutional Neural Network (CNN) and a Deep Neural Network (DNN), respectively. Through these two networks, we expect that the dependencies among image features and among labels can both be discovered. In order to capture the correlations between images and labels on the embedded space, a \emph{two-way} distance metric learning strategy is presented. Figure \ref{twoway} illustrates the idea behind the two-way distance metric learning strategy. In the embedding space, we want the distance between an input image embedding vector and its label embedding vector to be smaller than the distances between the image embedding vector and the embedding vectors of the label's nearest neighbors, as shown in Figure \ref{twoway}(a). In the meantime, as demonstrated in Figure \ref{twoway}(b), we also want the distance between the image embedding vector and its corresponding label embedding vector to be smaller than the distances between the label embedding vector and the embedding vectors of images whose labels are nearest neighbors of the target label. In this way, nearby instances with different labels are pushed apart. Finally, a reconstruction network is incorporated into the framework as a regularization term to make the learnt embedding space more representative.
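To make the two-way criterion concrete, the following is a minimal numpy sketch of a triplet-style hinge loss that implements both directions; the function name, the margin value, and the way neighbor embeddings are supplied are our own illustrative assumptions, not the exact training objective of RETDM.

```python
import numpy as np

def twoway_metric_loss(f_img, g_lbl, g_nn_lbls, f_nn_imgs, margin=1.0):
    """Illustrative two-way hinge loss for one training image.

    f_img     : (d,)   embedding of the input image
    g_lbl     : (d,)   embedding of its target label vector
    g_nn_lbls : (k, d) embeddings of the label's k nearest-neighbor labels
    f_nn_imgs : (k, d) embeddings of images carrying those neighbor labels
    """
    d_pos = np.linalg.norm(f_img - g_lbl)
    # Way 1: the image should be closer to its own label embedding
    # than to the embeddings of the neighbor labels.
    d_neg_lbl = np.linalg.norm(g_nn_lbls - f_img, axis=1)
    loss1 = np.maximum(0.0, margin + d_pos - d_neg_lbl).sum()
    # Way 2: the label should be closer to its own image embedding
    # than to the image embeddings of the neighbor labels.
    d_neg_img = np.linalg.norm(f_nn_imgs - g_lbl, axis=1)
    loss2 = np.maximum(0.0, margin + d_pos - d_neg_img).sum()
    return loss1 + loss2
```

The hinge terms are zero once every negative pair is at least `margin` farther apart than the positive pair, which is exactly the "push nearby instances with different labels apart" behavior described above.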
\begin{figure} \centering {\includegraphics[width=0.95\linewidth]{example.pdf}} \caption{ Schematic illustration of the main idea behind two-way distance metric learning. The x-y plane denotes the 2D embedding space. } \label{twoway} \end{figure} Compared with state-of-the-art multi-label image classification methods, the proposed framework has the following advantages: \begin{itemize} \item An end-to-end trainable framework is proposed to integrate comprehensive distance metric learning into deep learning for multi-label image classification. \item We present a two-way distance metric learning strategy based on two different views for capturing the correlations between images and labels, which is tailored for multi-label image classification. \item A reconstruction-error-based loss function is introduced to regularize the label embedding space for further improving model performance. \end{itemize} We evaluate the proposed framework with extensive experiments on publicly available multi-label image datasets including scene, mirflickr, and NUS-WIDE. Experimental results demonstrate that the proposed method achieves significantly better performance compared to the state-of-the-art multi-label classification methods. The remainder of this paper is organized as follows. Related work is reviewed in Section II, and the proposed RETDM is introduced in Section III. Extensive experiments are presented in Section IV, followed by conclusions in Section V. \section{Related Work} Our work is related to three lines of active research: 1) Shallow metric learning for multi-label prediction; 2) Deep metric learning for other image applications; 3) Deep learning for multi-label image classification. \smallskip \noindent\textbf{Shallow metric learning for multi-label prediction:} \cite{Jin2010Learning} proposed a distance metric learning approach for multi-instance multi-label learning.
The authors presented an iterative algorithm by alternating between the step of estimating instance-label association and the step of learning distance metrics from the estimated association. In \cite{Zhang2012Maximum}, a maximum margin output coding (MMOC) formulation was proposed to learn a distance metric for capturing the correlations between inputs and outputs. Although MMOC has shown promising results for multi-label prediction, it requires an expensive decoding procedure to recover multiple labels of each testing instance. To avoid this issue, \cite{Liu2015Large,Liu2018Metric} incorporated $k$ nearest neighbor (kNN) constraints into a distance metric formulation, and provided a generalization error bound analysis to show that their method can converge to the optimal solution. \cite{gouk2016learning} introduced linear and nonlinear distance metric learning methods, which aimed at improving the performance of kNN for multi-label data. In \cite{verma2016a}, a novel metric learning framework was presented to integrate class-specific distance metrics and explicitly take into account inter-class correlations for multi-label prediction. All the methods mentioned above aim to learn various shallow distance metric models for multi-label tasks. However, they do not incorporate deep learning, a very powerful tool for image analysis, into their framework. \smallskip \noindent\textbf{Deep metric learning for other applications:} Up to now, there have been many deep metric learning approaches proposed for various image tasks. For example, \cite{Chopra2005Learning} proposed a Siamese Network to learn complex similarity metrics for face verification. The learning process minimized a discriminative loss function that drove the distance to be small for pairs of faces from the same individual, and large for pairs from different individuals. Now the Siamese Network has been very popular for numerous applications beyond face verification \cite{bertinetto2016fully,Shen2017Deep}. 
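As a point of reference, the pairwise objective described above can be sketched as follows; this is the widely used hinge variant of the contrastive loss (the exact parametrization in \cite{Chopra2005Learning} differs), with an illustrative margin of our choosing.

```python
import numpy as np

def contrastive_loss(x1, x2, same, margin=1.0):
    """Sketch of a Siamese contrastive loss (hinge variant).

    x1, x2 : embeddings of the two inputs
    same   : True if both inputs share the same identity/label
    """
    d = np.linalg.norm(x1 - x2)
    if same:
        # same identity: penalize any distance, pulling the pair together
        return 0.5 * d ** 2
    # different identities: push the pair apart, but only up to the margin
    return 0.5 * max(0.0, margin - d) ** 2
```

Minimizing this over many pairs drives distances of matching pairs toward zero while pushing non-matching pairs to at least `margin` apart, which is the behavior the Siamese Network training described above relies on.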
In \cite{Schroff2015FaceNet}, a triplet loss was introduced to directly learn an embedding into a Euclidean space for face verification, and two kinds of strategies for triplet selection were provided during training. \cite{sohn2016improved} proposed a new metric learning objective called multi-class $N$-pair loss. The proposed objective function generalized the triplet loss by allowing joint comparisons among more than one negative example, and reduced the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy. \cite{Song2016Deep} described a deep feature embedding and metric learning algorithm for image clustering/retrieval. The authors defined a novel structured prediction objective on the lifted pairwise distance matrix within the batch during the neural network training. Soon afterwards, \cite{song2017deep} further proposed a novel framework for image clustering/retrieval which optimized the deep metric embedding with a learnable clustering function and a clustering metric in an end-to-end fashion. Moreover, \cite{Movshovitzattias2017No} presented two proxy assignment schemes for optimizing the triplet loss on a different space of triplets, so that the computational cost of training DNN models can be reduced and the accuracy of the model can be improved. In addition, in \cite{Wang2017Deep}, a novel angular loss was introduced based on the angle constraints of the triplet triangle instead of distance constraints. Although these methods have shown encouraging results for various applications, they cannot be directly applied to multi-label image classification. This is because they do not consider the correlations among labels at all. \begin{figure*} \centering {\includegraphics[width=0.95\linewidth]{my_framework.pdf}} \caption{ Overall framework of our approach.
(Left) The embedding net (EN) consists of a convolutional neural network (CNN) and a deep neural network (DNN) to map images and labels to a latent space, respectively. VGG-16 is used as the CNN base model. The DNN consists of two fully-connected layers. The dimension $d$ of the latent space is 512. (Right) The deep metric net (DMN) contains three modules: one two-way distance metric module to learn the correlations between images and labels based on the latent space; one reconstruction module to regularize the label embedding space; one classification module used to make the image embedding space discriminative and predict unseen testing images. $\mathbf{y}$ is the target label of image $\mathbf{I}$; $\mathbf{y}(\Theta)$ denotes the set of $\mathbf{y}$'s $k$ nearest neighbors; $\mathbf{I}(\Theta)$ denotes the set of images corresponding to $\mathbf{y}$'s $k$ nearest neighbors. } \label{framework} \end{figure*} \smallskip \noindent\textbf{Deep learning for multi-label image classification:} Recently, deep learning has been gradually applied to multi-label image classification. For example, \cite{Gong2014Deep} proposed to use ranking to train deep convolutional neural networks for multi-label image annotation problems. In \cite{Tan2015Learning}, authors proposed a clique generating machine to learn graph structures, so as to exploit label dependency for multi-label image classification. The method in \cite{Wang2016CNN} formulated a CNN-RNN framework to jointly characterize the semantic dependency among labels and the image-label relevance. \cite{Zhu2017Learning} further proposed a Spatial Regularization Network that generated class-related attention maps and captured both spatial and semantic label dependencies. In \cite{Yeh2017Learning}, authors proposed Canonical Correlated Autoencoder (C2AE) for solving the task of multi-label classification.
They integrated deep canonical correlation analysis and an autoencoder in a unified DNN model, and introduced loss functions sensitive to cross-label dependency. \cite{Chen2017Recurrent} introduced a recurrent attention mechanism into generic multi-label image classification for locating the attentional and contextual regions relevant to classification. The authors of \cite{liu2018multilabel} boosted classification by distilling the unique knowledge from weakly-supervised detection into classification using only image-level annotations, and obtained promising results. \section{Proposed Method} We propose a deep neural network for multi-label classification, which takes advantage of distance metric learning to capture the dependency among image features, the dependency among labels, and the correlations between images and labels. The overall framework of our approach is shown in Figure \ref{framework}. It consists of two main network structures: the embedding net (EN) and the deep metric net (DMN). The embedding net is used to embed images and labels into a latent space, and it consists of a convolutional neural network (CNN) and a deep neural network (DNN). The deep metric net (DMN) contains three modules: a two-way distance metric module, tailored for multi-label image classification, that learns the correlations between images and labels over the latent space; a reconstruction module that regularizes the label embedding space; and a classification module that makes the image embedding space more discriminative and conducts predictions on unseen testing images. The whole framework is trained in an end-to-end manner. Let $\mathcal{S}=\{\mathbf{I}_i, \mathbf{y}_i\}_{i=1}^n$ denote a set of training data. $\mathbf{I}_i$ is the $i$-th input image with ground-truth labels $\mathbf{y}_i = [y_i^1 , y_i^2 , ..., y_i^m]^T$, where $y_i^j$ is a binary indicator.
$y_i^j = 1$ indicates that image $\mathbf{I}_i$ is tagged with the $j$-th label, and $y_i^j = 0$ otherwise. $n$ and $m$ denote the number of training images and the number of possible labels, respectively. The goal of multi-label image classification is to learn a series of classifiers mapping $\mathbf{I}_i$ to a vector $\mathbf{\widehat{y}}_i$, such that $\mathbf{\widehat{y}}_i$ is as close to $\mathbf{y}_i$ as possible. To achieve this goal, a simple model is to learn a projection matrix by minimizing the following logistic loss function: \begin{align}\label{aa} \min \mathcal{L}(\mathbf{I}_i,\mathbf{y}_i)= \sum_{i=1}^n \sum_{j=1}^m \log\left(1 + \exp\left(-\tilde{y}_i^j\, [\mathbf{P} g(\mathbf{I}_i)]_j\right)\right) \end{align} where $\tilde{y}_i^j = 2y_i^j-1 \in \{-1,+1\}$ is the signed version of $y_i^j$, $[\cdot]_j$ denotes the $j$-th component, and $g(\mathbf{I}_i)$ denotes the feature representation of image $\mathbf{I}_i$; $g(\cdot)$ can be an arbitrary hand-crafted or deep-learning-based feature extractor. $\mathbf{P}$ is a learnable matrix that projects $g(\mathbf{I}_i)$ to a new feature space. However, Eq. (\ref{aa}) treats all labels as independent, thus ignoring the dependency among labels. In this paper, we utilize two deep neural networks to embed images and labels, respectively, into a latent space for discovering input dependency and label dependency simultaneously. Based on the embedded space, a two-way distance metric is learned to capture the correlations between images and labels for multi-label image classification. \smallskip \noindent \textbf{Learning an embedding space for images and labels} \noindent To obtain the image embedding vector, image $\mathbf{I}_i$ is first resized to $W\times H$ and fed into a convolutional neural network. In this paper, we use VGG-16 \cite{Simonyan14c} as the base model, and $W\times H$ is thus set to $224\times 224$. We drop the last fully-connected layer of the base model and change the number of hidden units of the second fully-connected layer to 512.
Then the image embedding vector $f_{\mathbf{I}_i}\in \mathbf{R}^d$ can be represented by: \begin{align} f_{\mathbf{I}_i} = \Phi_{cnn} (\mathbf{I}_i, \theta_{cnn}) \end{align} where $\Phi_{cnn}$ denotes the architecture of the image embedding network, and $\theta_{cnn}$ denotes its learned parameters. To obtain the label embedding vector, $\mathbf{y}_i$ is fed into a deep neural network consisting of two fully-connected layers with 512 hidden units each. The label embedding vector can be represented by \begin{align} f_{\mathbf{y}_i} = \Phi_{dnn} (\mathbf{y}_i, \theta_{dnn}) \end{align} where $f_{\mathbf{y}_i} \in \mathbf{R}^d$ has the same dimension as $f_{\mathbf{I}_i}$, $\Phi_{dnn}$ denotes the architecture of the label embedding network, and $\theta_{dnn}$ denotes its learnable parameters. Through the networks $\Phi_{cnn}$ and $\Phi_{dnn}$, we expect to learn an embedding space that discovers dependencies among image features and among labels, respectively. In the meantime, it is desired that the correlations between images and labels can be captured in this latent space. To reach this goal, we integrate three modules operating on the latent space into the whole framework: a two-way distance metric module specially designed for multi-label image classification, a reconstruction module aimed at regularizing the embedding space, and a classification module used for predicting labels during the inference phase.
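To make the two embedding branches concrete, the following NumPy sketch maps a pre-extracted image feature vector and a binary label vector into the same $d=512$-dimensional latent space. The random weights, the 4096-dimensional image feature, and the single-layer image branch are illustrative stand-ins only; in our model the image branch is the learned VGG-16-based network $\Phi_{cnn}$, and both branches are trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512          # latent dimension, as in the paper
m = 24           # number of labels (e.g. mirflickr)
feat_dim = 4096  # stand-in for a VGG-16 fully-connected feature size

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in random weights; in the actual model these are the learned
# parameters theta_cnn (on top of VGG-16) and theta_dnn.
W_img = rng.normal(0, 0.01, (feat_dim, d))
W1_lab = rng.normal(0, 0.01, (m, d))
W2_lab = rng.normal(0, 0.01, (d, d))

def embed_image(g_I):
    """Phi_cnn: image feature -> d-dimensional latent vector."""
    return relu(g_I @ W_img)

def embed_label(y):
    """Phi_dnn: binary label vector -> d-dimensional latent vector
    (two fully-connected layers with 512 hidden units each)."""
    return relu(relu(y @ W1_lab) @ W2_lab)

g_I = rng.normal(size=feat_dim)    # pretend CNN feature of an image
y = np.zeros(m); y[[2, 5]] = 1.0   # image tagged with labels 2 and 5

f_I, f_y = embed_image(g_I), embed_label(y)
print(f_I.shape, f_y.shape)        # both live in the same R^d
```

Because both branches end in the same $\mathbf{R}^d$, distances such as $d(f_{\mathbf{I}}, f_{\mathbf{y}})$ are directly comparable, which is what the metric module below relies on.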
\smallskip \noindent \textbf{Two-way Distance Metric Module} \noindent Based on the embedding space, we expect to capture the correlations between images and labels and integrate them into the learning process. Concretely, the distance between an image embedding vector and its corresponding label embedding vector should be smaller than (i) the distances between the image embedding vector and the embedding vectors of the target label's nearest neighbors, and (ii) the distances between the label embedding vector and the embeddings of images whose labels are the target label's nearest neighbors. In light of this, we propose a two-way strategy for deep distance metric learning. We first describe one-way deep distance metric learning in detail, as shown in Figure \ref{oneway}. Let ${\mathbf{y}}$ be the label of image ${\mathbf{I}}$. $f_{\mathbf{I}}$ denotes the embedding vector of image $\mathbf{I}$; $f_{\mathbf{y}}$ and $f_{\mathbf{y(\Theta)}}$ denote the embedding vectors of label $\mathbf{y}$ and its $k$ nearest neighbors $\mathbf{y(\Theta)}$, respectively. In order for the distance between $f_{\mathbf{I}}$ and $f_{\mathbf{y}}$ to be smaller than the distances between $f_{\mathbf{I}}$ and $f_{\mathbf{y(\Theta)}}$, the following constraints should be satisfied: \begin{align} d(f_{\mathbf{I}},f_{\mathbf{y}})<d(f_{\mathbf{I}},f_{\mathbf{y}_i}), \forall \mathbf{y}_i\in \mathbf{y}(\Theta)\label{constraint} \end{align} where $d(\cdot,\cdot)$ is an arbitrary distance function; we use the Euclidean distance in our experiments. \begin{figure} \centering {\includegraphics[width=0.997\linewidth]{distance_metric.pdf}} \caption{ An illustration of one-way deep distance metric learning. } \label{oneway} \end{figure} In order to satisfy the constraints in (\ref{constraint}), we formulate them as a multi-class classification problem.
The vector $[(f_{\mathbf{I}},f_{\mathbf{y}}), (f_{\mathbf{I}},f_{\mathbf{y}^1}),\ldots, (f_{\mathbf{I}},f_{\mathbf{y}^k})]$ is regarded as a sample, where $\mathbf{y}^i\in \mathbf{y}(\Theta)$ and $\cup_{i=1}^k \mathbf{y}^i =\mathbf{y}(\Theta)$. The new sample's label vector is $[1, 0, \ldots, 0]$, and the distance values are taken as its feature representation. After obtaining the distances between the image embedding vector $f_{\mathbf{I}}$ and the label embedding vectors $f_{\mathbf{y}}$ and $f_{\mathbf{y}(\Theta)}$, we can calculate the similarity scores between the image and the labels by \begin{align} sim(\mathbf{I},{\mathbf{y}^i})=-\,d(f_{\mathbf{I}},f_{{\mathbf{y}^i}}), \forall {\mathbf{y}^i}\in \{\mathbf{y} \cup \mathbf{y}(\Theta)\} \end{align} Our goal is to maximize the score $sim(\mathbf{I},{\mathbf{y}})$ while minimizing the scores $sim(\mathbf{I},{\mathbf{y}(\Theta)})$, so that the distance between image $\mathbf{I}$ and its target output $\mathbf{y}$ is the smallest in the embedding space.
Thus, given these similarity scores, our network produces a distribution for image $\mathbf{I}$ based on a softmax over the scores in the embedding space: \begin{align} p(z=1|f_{\mathbf{I}},f_{\mathbf{y}},f_{\mathbf{y}(\Theta)})=\frac{\exp (sim(\mathbf{I},\mathbf{y}))}{\sum_{{\mathbf{y}^i}\in \{\mathbf{y}\cup\mathbf{y}(\Theta)\}} \exp (sim(\mathbf{I},{\mathbf{y}^i }))} \nonumber \end{align} Learning then proceeds by minimizing the negative log-probability \begin{align} \mathcal{L}_{1}= - \text{log}\ p(z=1|f_{\mathbf{I}},f_{\mathbf{y}},f_{\mathbf{y}(\Theta)}) \nonumber \end{align} In order to apply this penalty only when $d(f_{\mathbf{I}},f_{\mathbf{y}})$ is greater than or equal to some $d(f_{\mathbf{I}},f_{\mathbf{y}^i})$, we propose to minimize the following loss function: \begin{align}\label{loss11} \mathcal{J}_{1}=\left\{ \begin{aligned} &\mathcal{L}_{1}, \ \ \text{if} \ d(f_{\mathbf{I}},f_{\mathbf{y}})\geq d(f_{\mathbf{I}},f_{\mathbf{y}^i}), \exists \mathbf{y}^i\in \mathbf{y}(\Theta)\\ &0,\ \ \text{otherwise} \end{aligned} \right. \end{align} For the second-way distance metric learning, we expect the distance between $f_{\mathbf{y}}$ and $f_{\mathbf{I}}$ to be smaller than the distances between $f_{\mathbf{y}}$ and $f_{\mathbf{I}(\Theta)}$, where $\mathbf{I}(\Theta)$ denotes the set of images whose labels are $\mathbf{y}$'s $k$ nearest neighbors. The whole process is similar to that of the first-way distance metric learning. Therefore, we can obtain another probability distribution over the new similarity scores: \begin{align} p(z=1|f_{\mathbf{I}},f_{\mathbf{y}},f_{\mathbf{I}(\Theta)})=\frac{\exp (sim(\mathbf{y},\mathbf{I}))}{\sum_{{\mathbf{I}^i}\in \{\mathbf{I}\cup\mathbf{I}(\Theta)\}} \exp (sim(\mathbf{y},{\mathbf{I}^i}))} \nonumber \end{align} where $sim(\mathbf{y},\mathbf{I})= -\,d (f_{\mathbf{y}}, f_{\mathbf{I}})$, and $sim(\mathbf{y},{\mathbf{I}^i})=-\,d(f_{\mathbf{y}},f_{\mathbf{I}^i})$.
Thus, we minimize another loss function \begin{align}\label{loss12} \mathcal{J}_{2}=\left\{ \begin{aligned} &\mathcal{L}_{2}, \ \ \text{if} \ d(f_{\mathbf{I}},f_{\mathbf{y}})\geq d(f_{\mathbf{I}^i},f_{\mathbf{y}}), \exists \mathbf{I}^i\in \mathbf{I}(\Theta)\\ &0,\ \ \text{otherwise} \end{aligned} \right. \end{align} where \begin{align} \mathcal{L}_{2}= - \text{log}\ p(z=1|f_{\mathbf{I}},f_{\mathbf{y}}, f_{\mathbf{I}(\Theta)}) \nonumber \end{align} Based on Eq. (\ref{loss11}) and Eq. (\ref{loss12}), we finally obtain a joint loss function for metric learning: \begin{align}\label{loss1} \mathcal{J}_{metric}= \mathcal{J}_{1} + \lambda \mathcal{J}_{2} \end{align} where $\lambda\geq 0$ is a hyper-parameter; in our experiments we simply set $\lambda = 1$. Through two-way distance metric learning, the latent space becomes more informative and the correlations between images and labels are well captured, which benefits multi-label image classification. \smallskip \noindent \textbf{Classification Module and Reconstruction Module} \noindent In the field of spatio-temporal data mining, combining a reconstruction loss with classification/regression tasks has been explored in recent studies \cite{Correlated,kieu2018distinguishing,kieu2018outlier}. Motivated by this, we jointly optimize the reconstruction loss and the classification loss for multi-label image classification. In order to make the embedding space of images more discriminative, a classification module is introduced into the framework, which conducts binary classification for each of the $m$ labels. It consists of one fully connected layer, followed by a sigmoid layer for each category.
\begin{align} \widehat{\mathbf{y}} = \Phi_{cls}(f_{\mathbf{I}}, \theta_{cls}), \widehat{\mathbf{y}}\in \mathbf{R}^m \end{align} where $\theta_{cls}$ denotes the learned parameters of the classification module, and $\widehat{\mathbf{y}}=[\widehat{{y}}_1, \ldots, \widehat{{y}}_m]^T$ is the vector of predicted label confidences. Prediction errors are measured via the binary cross-entropy between $\widehat{\mathbf{y}}$ and the ground truth $\mathbf{y}$: \begin{align}\label{loss2} {\mathcal{J}_{cls}} = -\sum_{i=1}^m (y_i \text{log} (\widehat{{y}}_i) + (1-y_i) \text{log} (1-\widehat{{y}}_i)) \end{align} Based on the label embedding space, a reconstruction module is incorporated into the whole architecture, so as to make the embedding more representative. The reconstruction module contains one fully connected layer for recovering $\mathbf{y}$. The reconstructed output $\bar{\mathbf{y}}$ can be expressed as: \begin{align} \bar{\mathbf{y}} = \Phi_{rec}(f_{\mathbf{y}}, \theta_{rec}), \bar{\mathbf{y}}\in \mathbf{R}^m \end{align} where $\theta_{rec}$ denotes the parameters of the reconstruction module, and $\bar{\mathbf{y}}=[\bar{{y}}_1, \ldots, \bar{{y}}_m]^T$ is the reconstructed output. We measure the reconstruction error through the mean square error (MSE) between $\bar{\mathbf{y}}$ and the ground truth $\mathbf{y}$: \begin{align}\label{loss3} {\mathcal{J}_{rec}} = \frac{1}{m}\sum_{i=1}^m (y_i -\bar{y}_i)^2 \end{align} \smallskip \noindent \textbf{Overall Network and Training Scheme} \noindent Based on these three modules, we propose to jointly minimize the following loss function: \begin{align} \mathcal{J}= \mathcal{J}_{cls}+\alpha \mathcal{J}_{metric} +\beta \mathcal{J}_{rec} \end{align} where $\alpha$ and $\beta$ are two hyper-parameters balancing the three losses. The network is trained in the following steps. First, we fine-tune only the classification net on the target dataset, i.e.
setting $\alpha=\beta=0$ (the CNN is pre-trained on the 1000-class classification task of the ImageNet dataset \cite{Deng2009ImageNet}). Both $\Phi_{cnn}(\mathbf{I}_i,\theta_{cnn})$ and $\Phi_{cls}(f_{\mathbf{I}},\theta_{cls})$ are learned with the cross-entropy loss. Second, we fix $\Phi_{cnn}(\mathbf{I}_i,\theta_{cnn})$ and $\Phi_{cls}(f_{\mathbf{I}},\theta_{cls})$, and focus on training $\Phi_{dnn}(\mathbf{y}_i,\theta_{dnn})$ and $\Phi_{rec}(f_{\mathbf{y}},\theta_{rec})$ with the loss $\alpha \mathcal{J}_{metric} +\beta \mathcal{J}_{rec}$. Finally, the whole network is jointly fine-tuned with the loss $\mathcal{J}_{cls}+\alpha \mathcal{J}_{metric} +\beta \mathcal{J}_{rec}$. Our deep neural network is implemented with the PyTorch library\footnote{https://pytorch.org/}. In the experiments, we adopt the image augmentation strategies suggested in \cite{Wang2015Towards}, a powerful tool to reduce the risk of over-fitting: the input images are first resized to $256\times 256$ and then cropped at the four corners and the center, and the resulting crops are further cropped to $224\times 224$. We employ the stochastic gradient descent algorithm for training, with a batch size of 256, a momentum of 0.9, and a weight decay of 0.0001. The initial learning rate is set to 0.1 and decreased to 1/10 of the previous value whenever the validation loss saturates, until 0.01 or the maximum number of epochs is reached. We train our model on 8 NVIDIA Titan XP GPUs. For testing, we simply resize all images to $224\times 224$ and conduct single-crop evaluation. \section{Experiment} In this section, in order to sufficiently verify the effectiveness of our method, RETDM, we evaluate it on three publicly available image datasets: scene \cite{Maron1998Multiple}, mirflickr \cite{huiskes08}, and Microsoft COCO \cite{Lin2014Microsoft}. Table \ref{detail} lists the details of the three datasets, which are widely used for evaluating multi-label image classification algorithms.
Experimental results demonstrate that our proposed RETDM significantly outperforms the state-of-the-art methods on all three datasets, and generalizes well to different types of labels. \begin{table} \caption{Details of the used datasets.} \centering \begin{tabular}{|c|c|c|} \hline {Dataset} & Number of Images & Number of Labels \\ \hline {scene} & 2,000 & 5 \\ \hline {mirflickr} & 25,000 & 24 \\ \hline {MS-COCO} & 123,287 & 80 \\ \hline \end{tabular} \label{detail} \end{table} \begin{figure} \centering {\includegraphics[width=0.95\linewidth]{true_predict.pdf}} \caption{One example image from the scene (left), mirflickr (middle) and MS-COCO (right) datasets, with the ground-truth annotations and our model's predictions. } \label{predict_true} \end{figure} \subsection{Experimental Datasets} The scene image dataset consists of 2,000 natural scene images, where a set of labels is manually annotated for each image. There are five possible class labels: desert, mountains, sea, sunset and trees. On average, each image is associated with 1.24 class labels. An example image with its annotations and predictions is shown on the left side of Fig. \ref{predict_true}. The mirflickr image dataset \cite{huiskes08} contains 25,000 high-quality images that are representative of a generic domain. There are 24 possible labels in total, for instance ``sky'', ``water'', ``sea'', ``clouds'', and so on. The average number of labels per image is 8.94, and the dataset contains 1386 tags which occur in at least 20 images. An example image with its annotations and predictions is shown in the middle of Fig. \ref{predict_true}. The Microsoft COCO (MS-COCO) dataset \cite{Lin2014Microsoft} is an image recognition, segmentation, and captioning dataset. Following \cite{Wang2016CNN}, we use it to evaluate multi-label learning algorithms. The training set is composed of 82,783 images, which contain common objects in the scenes.
There are 80 classes, with about 2.9 labels per image. Another 40,504 images are employed as testing data in the experiments. The number of labels per image varies considerably on this dataset. An example image with its annotations and predictions is shown on the right side of Fig. \ref{predict_true}. For the scene and mirflickr datasets, 80\% of the images are randomly chosen as training data, and the rest are used for testing. On the MS-COCO dataset, we use the same training/testing split as \cite{Wang2016CNN} for a fair comparison. \begin{table*} \caption{Quantitative results by our proposed RETDM and compared methods on the scene dataset.} \centering \begin{tabular}{c|c|c|c|c|c|c||c|c|c|c|c} \hline {Method} & C-P & C-R & C-F1 & O-P & O-R & O-F1 & hamming loss & ranking loss & coverage & one error& average precision \\ \hline {LMMO-kNN} & 0.453 & 0.725 & 0.576 & 0.412 & 0.705 & 0.613 & 0.3985 & 0.6867 & 2.1450 & 0.7075 &0.5060 \\ {MLSPL} & 0.425 & 0.682 & 0.563 & 0.402 & 0.699 & 0.610 & 0.4155 & 0.7214 & 2.2687 & 0.7089 &0.4561 \\ {CL} & \textbf{0.973} & 0.136 & 0.152 & 0.867 & 0.142 & 0.242 & 0.8603 & 0.2225 & 1.7225 & \textbf{0.0275} &0.6162 \\ BCE & 0.906 & 0.861 & 0.883 & \textbf{0.906} & 0.869 & 0.883 & 0.1331 & \textbf{0.0580} & 0.6200 &0.0475& 0.9223 \\ C2AE & 0.380 & 0.710 & 0.414 & 0.343 & 0.725 & 0.465 & 0.5552 & 0.8235 & 1.6675& 0.2625& 0.6241 \\ \hline RETDM & {0.893} & \textbf{0.879} & \textbf{0.885} & 0.892 & \textbf{0.877} & \textbf{0.884} & \textbf{0.1217} & \textbf{0.0580}& \textbf{0.5850}& 0.0425& \textbf{0.9285}\\ \hline \end{tabular} \label{scene} \end{table*} \begin{table*} \caption{Quantitative results by our proposed RETDM and compared methods on the mirflickr dataset.} \centering \begin{tabular}{c|c|c|c|c|c|c||c|c|c|c|c} \hline {Method} & C-P & C-R & C-F1 & O-P & O-R & O-F1 & hamming loss & ranking loss & one error & coverage & average precision \\ \hline {LMMO-kNN} & 0.433 &
0.602 & 0.463 & 0.577 & 0.615 & 0.601 & 0.9013 & 0.4025 & 0.0677 & 13.1261 &0.5664 \\ {MLSPL} & 0.458 & 0.652 & 0.481 & 0.592 & 0.628 & 0.619 & 0.8235 & 0.3417 & 0.0518 & 13.0482 &0.5822 \\ {CL} & 0.995 & 0.012 & 0.018 & 0.872 & 0.032 & 0.062 & 0.9626 & 0.1498 & 20.2414 & 0.0167 &0.1964 \\ BCE & 0.735 &0.677& 0.704& 0.782& 0.739& 0.760& 0.2758& 0.0717 & 0.0344& 11.0742& 0.7076\\ C2AE & 0.476 &0.616 & 0.505 &0.613 & 0.630 & 0.621 & 0.9029& 0.3869 & 0.0660 &12.9678& 0.5767 \\ \hline RETDM & 0.756 & 0.661 & 0.705 & 0.811 & 0.749 & 0.778 & 0.2645 & 0.0659 & 0.0296 & 10.8320 & 0.7275 \\ \hline \end{tabular} \label{mirflickr} \end{table*} \begin{table*} \caption{Quantitative results by our proposed RETDM and compared methods on the MS-COCO dataset.} \label{performance} \centering \begin{tabular}{c|c|c|c|c|c|c||c|c|c|c|c} \hline {Method} & C-P & C-R & C-F1 & O-P & O-R & O-F1 & hamming loss & ranking loss & coverage& one error& average precision \\ \hline {LMMO-kNN} & 0.298 & 0.379 & 0.342 & 0.401 & 0.438 & 0.410 & 0.5123 & 0.5582 & 37.4352 & 0.1734 &0.3877 \\ {MLSPL} & 0.289 & 0.367 & 0.332 & 0.387 & 0.403 & 0.399 & 0.5334 & 0.5584 & 37.5626 & 0.1866 &0.3781 \\ {CNN-Softmax} & 0.590 & \textbf{0.570} & 0.580 & 0.602 &0.621 & 0.611 &-& -& -& -& - \\ {CNN-WARP} & 0.593 & 0.525 & 0.557 & 0.598 & 0.614 & 0.607 &-& -& -& -& - \\ {CNN-RNN} & 0.660 &0.556 &0.604 &0.692&\textbf{ 0.664} &0.678 &-& -& -& -& - \\ BCE & \textbf{0.804} & 0.547 & 0.651 & 0.815 &0.605 & {0.695} &0.3357& 0.0191& 30.6551& 0.0243& 0.6584\\ C2AE & 0.428 & 0.411 & 0.409 & 0.474 & 0.528 & 0.499 & 0.4560& 0.4809& 36.1900 &0.1416& 0.4512 \\ \hline RETDM & 0.799 &0.555 & \textbf{0.655} &\textbf{0.819} & 0.611 & \textbf{0.700} &\textbf{0.3295} &\textbf{0.0189} &\textbf{30.3492}& \textbf{0.0219} & \textbf{0.6628}\\ \hline \end{tabular} \label{coco} \end{table*} \subsection{Experimental Setting} To verify the effectiveness of RETDM, we compare it with the following related methods: \begin{itemize} \item LMMO-kNN 
\cite{Liu2018Metric,Liu2015Large}\footnote{The code is downloaded from the authors' homepage: https://sites.google.com/site/weiweiliuhomepage/.}: large margin multi-output metric learning with $k$ nearest neighbor constraints (LMMO-kNN), a shallow metric learning paradigm that incorporates predefined loss functions to learn the embedding space under $k$ nearest neighbor constraints. \item MLSPL \cite{Li2018A}: MLSPL integrates a self-paced learning strategy to learn instances and labels in an easy-to-hard fashion; it was proposed recently and shows promising results for multi-label learning. \item C2AE \cite{Yeh2017Learning}\footnote{The code is downloaded from https://github.com/yankeesrules/C2AE.}: the Canonical Correlated AutoEncoder integrates deep canonical correlation analysis (DCCA) and an autoencoder in a unified DNN model, and was recently proposed for multi-label learning. \item CL \cite{Wen2016A}: CL is a powerful feature learning approach for face recognition that simultaneously learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers. CL can be regarded as a deep distance metric learning method, so we compare with it in the experiments. \item BCE: it treats each label independently and trains one classifier per label. In the experiments, we use VGG-16 with the binary cross-entropy loss as the network architecture. \end{itemize} Our method has several parameters, such as $\alpha$, $\beta$, and the number of nearest neighbors $k$, that need to be set in advance. The parameters $\alpha$ and $\beta$ are chosen by cross-validation. The number of nearest neighbors $k$ is set to 10 throughout the experiments. In order to sufficiently verify our method, we adopt the extensive set of criteria mentioned in \cite{Wu2017A}.
We first evaluate the effectiveness of the proposed approach with the following five criteria: hamming loss, ranking loss, one error, coverage, and average precision. These criteria are commonly used for evaluating multi-label learning algorithms \cite{Zhou2008Multi,Huang2013Fast}. \begin{itemize} \item Hamming loss: evaluates how many times a sample-label pair is misclassified, i.e., a wrong label is predicted. The performance is perfect when the hamming loss equals zero; the smaller the hamming loss, the better the model. \item Ranking loss: evaluates the average fraction of label pairs that are not ordered correctly for a sample. The performance is perfect when the ranking loss equals zero; the smaller the ranking loss, the better the model. \item One error: measures how many times the top-ranked label is not a correct label of the sample. The performance is perfect when the one error equals zero; the smaller the one error, the better the model. \item Coverage: measures how far down the ranked list of labels one needs to go, on average, in order to cover all the correct labels of the sample. The smaller the coverage, the better the model. \item Average precision: evaluates the average fraction of correct labels ranked above a particular threshold. The performance is perfect when the average precision equals one; the larger the average precision, the better the model. \end{itemize} Besides the above five criteria, we also compute the macro precision (denoted as ``C-P''), micro precision (``O-P''), macro recall (``C-R''), micro recall (``O-R''), macro F1-measure (``C-F1''), and micro F1-measure (``O-F1'').
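As a concrete illustration, four of the ranking-based criteria listed above can be computed as in the following NumPy sketch. The score matrix, the ground-truth labels, and the 0.5 threshold are toy values chosen purely for illustration; coverage is measured here as a 0-indexed rank, and average precision is omitted for brevity.

```python
import numpy as np

# Toy ground-truth labels and predicted scores (2 samples, 4 labels).
Y = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0]])
S = np.array([[0.9, 0.2, 0.6, 0.4],
              [0.3, 0.8, 0.1, 0.2]])
Yhat = (S >= 0.5).astype(int)  # hypothetical 0.5 decision threshold

def hamming_loss(Y, Yhat):
    # Fraction of misclassified sample-label pairs.
    return float(np.mean(Y != Yhat))

def ranking_loss(Y, S):
    # Average fraction of (relevant, irrelevant) label pairs
    # that are wrongly ordered (ties counted as errors).
    losses = []
    for y, s in zip(Y, S):
        pos, neg = s[y == 1], s[y == 0]
        losses.append((pos[:, None] <= neg[None, :]).mean())
    return float(np.mean(losses))

def one_error(Y, S):
    # How often the top-ranked label is not a true label.
    top = S.argmax(axis=1)
    return float(np.mean(Y[np.arange(len(Y)), top] == 0))

def coverage(Y, S):
    # Steps down the ranked label list needed to cover all
    # true labels (0-indexed rank of the worst-ranked true label).
    ranks = (-S).argsort(axis=1).argsort(axis=1)  # 0 = best rank
    return float(np.mean([ranks[i][Y[i] == 1].max()
                          for i in range(len(Y))]))

print(hamming_loss(Y, Yhat), ranking_loss(Y, S),
      one_error(Y, S), coverage(Y, S))
```

On this toy input every true label outscores every false one, so the first three criteria are 0.0 and the coverage is 0.5 (ranks 1 and 0 for the two samples); conventions differ slightly across papers (e.g. 0- vs 1-indexed coverage), so these are illustrative definitions rather than the exact evaluation code.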
``C-P'' is evaluated by averaging per-class precisions, while ``O-P'' is an overall measure that counts true predictions for all images over all labels. ``C-R'' and ``O-R'' are evaluated similarly. The F1 scores (``C-F1'' and ``O-F1'') are the harmonic means of the corresponding precision and recall scores. Note that since the above criteria measure the performance of a model from different aspects, it is difficult for one algorithm to outperform another on every one of them. However, in our experiments, our method outperforms the other state-of-the-art methods on most of the criteria. \subsection{Experimental Results} \begin{figure*}[htb] \centering \subfigure[C-P]{\includegraphics[width=0.32\linewidth]{cp1.pdf}} \subfigure[C-R]{\includegraphics[width=0.32\linewidth]{cr1.pdf}} \subfigure[C-F1]{\includegraphics[width=0.32\linewidth]{cf1.pdf}} \subfigure[O-P]{\includegraphics[width=0.32\linewidth]{op1.pdf}} \subfigure[O-R]{\includegraphics[width=0.32\linewidth]{or1.pdf}} \subfigure[O-F1]{\includegraphics[width=0.32\linewidth]{of1.pdf}} \caption{Sensitivity study of the parameters in terms of ``C-P'', ``C-R'', ``C-F1'', ``O-P'', ``O-R'', and ``O-F1'' on the mirflickr dataset.} \label{sensitive} \end{figure*} \begin{table*} \caption{Effectiveness of the two-way distance metric module on the MS-COCO dataset.} \centering \begin{tabular}{c|c|c|c|c|c|c||c|c|c|c|c} \hline {Method} & C-P & C-R & C-F1 & O-P & O-R & O-F1 & hamming loss & ranking loss & coverage & one error& average precision \\ \hline RETDM (one-way) & \textbf{0.806} & 0.545 & 0.651 &\textbf{ 0.828} & 0.601 & {0.696} & 0.3389& \textbf{0.0189}& 30.9184&\textbf{0.0215}& 0.6575 \\ \hline RETDM & 0.799 &\textbf{0.555} & \textbf{0.655 }&0.819 & \textbf{0.611} & \textbf{0.700} &\textbf{0.3295} &\textbf{0.0189} &\textbf{30.3492}& 0.0219 & \textbf{0.6628}\\ \hline \end{tabular} \label{freq} \end{table*} We first test the general performance of our method RETDM on the three image datasets.
Tables I-III summarize the results of the different methods in terms of all eleven evaluation criteria. From these tables, we can see that our method RETDM significantly outperforms LMMO-kNN and MLSPL on all three datasets, which indicates that deep network models indeed perform better than shallow models in the scenario of multi-label image classification. In the meantime, RETDM achieves better results than C2AE and BCE, showing that our method can better exploit label correlations to improve performance. Finally, RETDM beats CL on all three datasets, which demonstrates that our deep metric learning model can learn a more discriminative distance metric for multi-label image classification. In addition, the performance of CL is quite unstable across all criteria, indicating that deep metric learning methods designed for other applications cannot be directly applied to multi-label classification. Note that we do not run CL on the MS-COCO dataset because of its prohibitive training cost. In our paper, we propose a two-way distance metric learning module; here we verify its effectiveness. To do this, we conduct an additional experiment on the MS-COCO dataset in which we use only the one-way strategy to learn the metric, i.e., the loss function $\mathcal{J}_{metric}$ is equal to $\mathcal{J}_{1}$ in Eq. (\ref{loss1}). We refer to this variant as ``RETDM (one-way)''. The results are listed in Table IV. From the table, we can see that RETDM is better than RETDM (one-way) on most of the criteria, which illustrates that the two-way strategy benefits the multi-label classification problem. We also study the sensitivity of the parameters $\alpha$ and $\beta$ in our algorithm on the mirflickr dataset. Fig. \ref{sensitive} shows the results: our method is not sensitive to $\alpha$ and $\beta$ over wide ranges. Fig. \ref{convergence} shows the convergence curve of our method on the mirflickr dataset. As in Fig.
\ref{convergence}, we can see that RETDM converges quickly, after only about 20 epochs. \begin{figure} \centering \subfigure[mirflickr]{\includegraphics[width=0.47\linewidth]{mir.pdf}} \subfigure[MS-COCO]{\includegraphics[width=0.5\linewidth]{co.pdf}} \caption{Convergence analysis on the mirflickr and MS-COCO datasets. } \label{convergence} \end{figure} \section{Conclusion} In this paper, we proposed a novel deep distance metric learning framework for multi-label image classification, named RETDM. First, RETDM learns a latent space that embeds images and labels via two independent deep neural networks, so that the input dependency and the output dependency can be well captured. On top of this space, a reconstruction-regularized deep metric network was presented to mine the correlations between inputs and outputs and to make the embedded space more discriminative. Extensive evaluations on the scene, mirflickr, and MS-COCO datasets showed that our proposed RETDM significantly outperforms the state-of-the-art methods. Several interesting directions remain for future work. For example, we could leverage the input's nearest neighbors in our approach: RETDM learns a two-way distance metric based on the target label's $k$ nearest neighbors, as well as the $k$ input images whose labels are the $k$ nearest neighbors of the target label. Symmetrically, an input image's $k$ nearest neighbors and their corresponding target labels could also be incorporated into our method. A potential issue is that we would need to find an input image's $k$ global nearest neighbors at each iteration, which would increase the computational burden. \bibliographystyle{IEEEtran}
\section{Introduction} Transmission spectroscopy has proven to be a highly successful method for probing the atmospheres of close-in exoplanets, allowing us to infer the chemical composition and physical structure of a planet's atmosphere without needing to spatially resolve the planet and star. During primary transit, when a planet crosses the disk of a star from the point of view of an observer, a small fraction of the stellar light is filtered through the annulus of the planet's atmosphere \citep{2001ApJ...553.1006B,2000ApJ...537..916S} with the observed transit depth increasing at wavelengths corresponding to strong atomic and molecular absorption. The transit depth as a function of wavelength (conventionally measured as a planet-to-star radius ratio) is known as a transmission spectrum and is sensitive to compositions along the day-night terminator of the planet. Transit and radial velocity surveys have revealed that a significant subset of exoplanetary systems are surprisingly unlike anything found in our own Solar System, and show a remarkably diverse range of properties. This includes the discovery of highly irradiated hot-Jupiters \citep[e.g.][]{1995Natur.378..355M,2000ApJ...529L..41H,2000ApJ...529L..45C} - gas giants with masses similar to Jupiter orbiting extraordinarily close to their host stars - which exhibit a wide variety of transmission spectra and a continuum from clear to cloudy atmospheres \citep{2016Natur.529...59S}. 
Substantial progress in the area of exoplanet atmospheric characterisation was first achieved using space-based instruments such as the \textit{Hubble Space Telescope} (HST) \citep[e.g.][]{2002ApJ...568..377C,2008MNRAS.385..109P,2012MNRAS.422.2477H,2012ApJ...747...35B,2013MNRAS.432.2917P,2014Natur.505...69K,2015MNRAS.447..463N} and \textit{Spitzer Space Telescope} \citep[e.g.][]{2005ApJ...626..523C,2005Natur.434..740D,2007Natur.447..183K,2007ApJ...661L.191B,2013ApJ...776L..25D}, but ground-based observations, utilising multi-object differential spectrophotometry, have been rapidly catching up with their own significant contributions \citep[e.g.][]{Snellen_2008,2008ApJ...673L..87R,2010Natur.468..669B,2013A&A...559A..33C,2013MNRAS.436.2974G,2013MNRAS.428.3680G,2016MNRAS.463.2922K,2016A&A...587A..67L,2016A&A...590A.100M,2014Sci...346..838S}. The importance of ground-based observations for exoplanetary science is set to continue well into the era of the upcoming James Webb Space Telescope (JWST) by complementing the newly acquired near- and mid-IR observations with those obtained in the optical regime. Here we report ground-based transmission spectroscopy results for the ultra-hot Jupiter WASP-103b using the FOcal Reducer and Spectrograph (FORS2) mounted on the European Southern Observatory's (ESO) Very Large Telescope (VLT). FORS2 is a general-purpose imager, spectrograph and polarimeter \citep{1998Msngr..94....1A} which has been shown to offer improved performance for exoplanet spectroscopy after undergoing an upgrade to its Linear Atmospheric Dispersion Corrector \citep{2016SPIE.9908E..2BB}, with detections of Na and K absorption and scattering by clouds and hazes in multiple exoplanet atmospheres \citep[e.g.][]{2015A&A...576L..11S,2016ApJ...832..191N,2018Natur.557..526N}. 
Our results are part of a large, ground-based, comparative survey which aims to study the chemical compositions and occurrence rates of clouds and hazes over the full range of mass and temperature regimes \citep[e.g.][]{2016ApJ...832..191N,2017MNRAS.467.4591G,2020MNRAS.tmp.1223C}. WASP-103b is an ultra-short period (P = 0.9\,d), highly irradiated (\textit{T}$_\mathrm{eq}$\,$\approx$\,2500\,K) hot-Jupiter discovered by \citet{2014A&A...562L...3G}. It has a mass and radius significantly larger than Jupiter - 1.49\,\textit{M}$_\mathrm{J}$ and 1.53\,\textit{R}$_\mathrm{J}$ respectively - and transits a late F-type (V\,$\approx$\,12.1) main-sequence star. At a separation of less than 1.2 times the Roche limit WASP-103b is expected to be in the late stages of orbital decay and close to tidal disruption \citep[e.g.][]{Matsumura_2010,2017AJ....154....4P}. \citet{Staab_2016} measured the chromospheric activity of WASP-103 finding marginal evidence that it was higher than expected from the system age (log(\textit{R$'$}$_\mathrm{HK}$) = -4.57). \citet{Pass_2019} found a dayside effective temperature of $\approx$\,3200\,K using Gaussian process regression on WFC3 and Spitzer secondary eclipse depth measurements. Meanwhile, \citet{Garhart_2020} calculated an effective temperature of $\approx$\,2500\,K and measured brightness temperatures in the 3.6 micron and 4.5 micron Spitzer bands of $\approx$\,2800\,K and $\approx$\,3100\,K respectively. Follow-up observations by \citet{2015MNRAS.447..711S} revealed a strong wavelength-dependent slope in their broad-band optical transmission spectrum which they concluded was too steep to be caused by Rayleigh scattering processes alone. A re-analysis of the same data by \citet{2016MNRAS.463...37S} accounting for the flux contamination of a previously unknown companion star \citep{2015A&A...579A.129W} instead showed a minimum around 760\,nm and increasing opacity towards both the blue and red. 
This overall picture was subsequently confirmed by \citet{2018MNRAS.474.2334D} from an independent global analysis including a large fraction of the same archival transit light curves. This surprising V-shaped transmission spectrum cannot be easily explained by theoretical models and nor is it confirmed by higher-resolution observations with Gemini/GMOS, which instead showed signs of enhanced absorption in the cores of the Na and K features \citep{2017A&A...606A..18L} and no evidence for a Rayleigh scattering signature, suggesting that WASP-103b might possess a largely clear atmosphere at the terminator region. However, since they did not have any data bluewards of 550\,nm they were unable to conclusively rule out the presence of a scattering slope. In the near-IR, \citet{2017AJ....153...34C} found a featureless emission spectrum using HST/WFC3 which was indistinguishable from that due to an isothermal atmosphere and could be explained by either a thermal inversion layer or clouds and/or hazes in the upper atmosphere and suggested the need for additional optical observations in order to differentiate between these possible explanations. \citet{2018AJ....156...17K} observed a featureless transmission spectrum between 1.15 and 1.65\,$\mu$m with WFC3/Spitzer at the 1\,$\sigma$ level after correcting for nightside emission and determined that their phase-resolved spectra were consistent with blackbody emission at all orbital phases, attributing the lack of detection of dayside spectral features of water to partial H$_2$O dissociation. This paper is structured as follows: we describe our observations and data reduction steps in Section 2 and detail our light curve analysis and contaminant correction in Section 3; in Section 4 we describe our atmospheric modelling approach and discuss our results in Section 5. Finally, we offer our conclusions in Section 6. 
\section{FORS2 Observations and Data Reduction} We observed a single transit of the hot-Jupiter WASP-103b during the night of 2017 May 1st with the FORS2 spectrograph mounted on the 8.2\,m `Antu' telescope of the VLT at the European Southern Observatory, Paranal, Chile, as part of the large program 199.C-0467 (PI: Nikolov). Our transit was observed using the GRIS600B (hereafter 600B) grating covering the spectral range of 320\,--\,620\,nm with a total of 174 science exposures of 80\,s each, covering a total period of 310 minutes with a readout time of $\sim$\,30\,s. FORS2 consists of two 2k\,$\times$\,4k CCDs separated by a small detector gap with an image scale of 0.25$''$\,/\,pixel in 2\,$\times$\,2 binning mode, corresponding to a field of view of 6.8\,$\times$\,6.8 arcminutes. Observations of the target and two comparison stars were carried out simultaneously in multi-object (MXU) spectroscopy mode. We used a custom mask consisting of broad slits accurately centred on the positions of WASP-103 and the comparison stars with a width of 22$''$ and length of 120$''$ to reduce differential slit losses from seeing variations and guiding inaccuracies. We found that one of our comparison stars was significantly fainter than the other and so we excluded it from our analysis, using only the brighter of the two. The FWHM of the spectral trace was typically $\sim$\,3 pixels but reached a maximum of $\sim$\,7 pixels towards the very beginning of the observations, resulting in a seeing-limited resolution of R\,$\approx$\,450\,--\,1050, with airmass varying from a maximum of 1.87 at the commencement of observations down to 1.18. We used the FORS2 pipeline for standard bias and flat field corrections with relevant calibration frames taken before and after the science exposures. However, we found that neither of these corrections had a significant influence on our conclusions and therefore we proceeded using only the raw frames for our final analysis.
Spectral extraction was performed in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation}/PyRAF\footnote{PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA} using a custom pipeline, summing the flux within an aperture of radius 15 pixels after background subtraction (we found that a radius of 15 pixels resulted in the lowest average uncertainties for our transmission spectrum). We estimated the background contribution by taking the median value in a region located 80\,--\,100 pixels either side of the spectral trace. Example spectra of WASP-103 and the reference star are shown in Figure 1. Wavelength calibration was performed using arc lamp exposures taken with a calibration mask which is identical to the science mask but with narrower 1$''$ slit widths, providing arcs with narrower features for more precise calibration. We accounted for shifts in the dispersion direction by cross-correlating the target spectra using the H$\beta$ line after normalising the continua, and then cross-correlating again between the target and comparison star using the same feature. We then used the measured x-shifts to realign all spectra to the reference spectrum's wavelength scale. To check that our results were not overly sensitive to the specific choice of feature we also tried extracting the x-shifts by cross-correlating using the Na feature, but found that this had little impact on our final transmission spectrum; we therefore present our results using only the H$\beta$ alignment.
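The realignment described above can be sketched as a simple discrete cross-correlation. The snippet below is an illustrative reconstruction only, not the pipeline's actual code, and the helper name `measure_shift` is hypothetical:

```python
import numpy as np

def measure_shift(reference, spectrum, max_lag=20):
    """Estimate the pixel shift of `spectrum` relative to `reference`
    by locating the peak of their cross-correlation. Both inputs should
    be continuum-normalised cutouts around a strong feature (e.g. the
    H-beta line). Returns the lag that best realigns `spectrum` onto
    `reference` when applied with np.roll."""
    ref = reference - reference.mean()
    spec = spectrum - spectrum.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    # np.roll wraps around; acceptable for a cutout much wider than the shift
    cc = [np.sum(ref * np.roll(spec, lag)) for lag in lags]
    return int(lags[np.argmax(cc)])
```

In practice a sub-pixel shift would be obtained by fitting a parabola to the cross-correlation peak; the integer version above is sufficient to convey the idea.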
We found that the wavelength solution obtained from the reduction pipeline resulted in small residual offsets between our target and comparison star, and so we constructed an alternative solution using a set of well-resolved lines in the mean spectrum (after realignment), fitting a Gaussian to each of these lines to accurately determine the line centres. We used a Gaussian process (GP) to fit the measured line centres. GPs are routinely used within the machine learning community for Bayesian non-parametric regression problems and were introduced by \citet{2012MNRAS.419.2683G} for the analysis of systematics in exoplanet time-series. We discuss our implementation of GPs in Section 3.1. We also tried fitting using a second-order polynomial but obtained near identical results. In principle we could fit with a higher order polynomial, but this is unlikely to alter our final transmission spectrum given that the changes are small compared to our bin widths, and we proceeded using the wavelength solution derived from the GP fit. \begin{figure} \includegraphics[width=\columnwidth]{examplespec.pdf} \caption{Example spectra of the target (black) and one reference star (red). Coloured regions indicate the spectral bins used for extraction of the white light curve (grey), spectroscopic light curves (blue) and the high-resolution bins centred around the Na feature (magenta).} \label{fig:examplespec} \end{figure} The time-series spectra were then used to construct the white light curve by summing the flux of each stellar spectrum over a broad wavelength range as shown in Figure 1, and dividing the target star's flux by the comparison star's flux, thereby correcting for the effects of atmospheric transparency variations. We also constructed multiple `spectral' light curves by integrating over the narrower channels also shown in Figure 1 and discussed in Section 3.2.
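The differential-spectrophotometry step just described (sum over a channel, divide target by comparison) could be sketched as follows; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def differential_light_curve(target_spectra, comparison_spectra,
                             wavelengths, band):
    """Sum each time-series spectrum over a wavelength channel and divide
    target by comparison to remove atmospheric transparency variations.

    target_spectra, comparison_spectra : (n_exposures, n_pixels) arrays
    wavelengths : (n_pixels,) wavelength of each pixel
    band : (lo, hi) limits of the extraction channel
    """
    in_band = (wavelengths >= band[0]) & (wavelengths < band[1])
    target = target_spectra[:, in_band].sum(axis=1)
    comparison = comparison_spectra[:, in_band].sum(axis=1)
    lc = target / comparison
    return lc / np.median(lc)  # normalise to an out-of-transit level of ~1
```

A broad band gives the white light curve; the same function applied to the narrow channels of Figure 1 gives the spectral light curves.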
We tried varying the total number of channels used in our analysis and found that, while this altered the resolution and signal-to-noise of our results, it did not significantly influence our conclusions. In the end we chose to extract a total of 15 individual wavelength channels and the resulting white light curve and spectral light curves are shown in Figures 2 and 3. We also calculated the theoretical noise for our white light curve and each of our spectral light curves, including the contributions from photon noise, read noise and the sky background. The average electron counts per exposure for the white light curve was $\approx$\,9\,$\times$\,10$^{7}$ for both the target and comparison star resulting in time-averaged theoretical precision in the relative flux per exposure of $\approx$\,1.4\,$\times\,$10$^{-4}$. The average electron counts per exposure for the spectral light curves varied from $\approx$\,2\,$\times\,$10$^{6}$ to $\approx$\,1\,$\times\,$10$^{7}$ for the target star and $\approx$\,3\,$\times\,$10$^{6}$ to $\approx$\,9\,$\times\,$10$^{6}$ for the comparison. The time-averaged theoretical precision in the relative flux per exposure for the spectral light curves therefore ranges from $\approx$\,4\,$\times$\,10$^{-4}$ to $\approx$\,9\,$\times$\,10$^{-4}$. Finally, we also extracted auxiliary measurements from the target and comparison spectra, including the shifts in the dispersion and cross-dispersion axes and the width of the spectral trace. Such measurements can in principle be used to attempt to investigate the cause of the instrument systematics in the light curves \citep[e.g.][]{2001ApJ...553.1006B,2003nicm.rept....1G,2007A&A...476.1347P,2008arXiv0812.1844S,2010Natur.464.1161S,2012A&A...542A...4G,2013MNRAS.434.3252H,Nikolov_2016}, however in our case we found no obvious correlations between the auxiliary measurements and the form of the systematics. 
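The quoted theoretical precisions follow, to good approximation, from Poisson statistics alone. The sketch below keeps only the photon-noise terms (the full budget in the text also includes read noise and sky background, omitted here for brevity) and approximately reproduces the quoted numbers:

```python
import numpy as np

def relative_flux_precision(n_target, n_comparison):
    """Photon-noise-limited fractional uncertainty on the flux ratio
    F_target / F_comparison: Poisson noise sqrt(N) on each star,
    added in quadrature (read noise and sky background omitted)."""
    return np.sqrt(1.0 / n_target + 1.0 / n_comparison)

# White light curve: ~9e7 electrons per exposure for each star
print(relative_flux_precision(9e7, 9e7))   # ~1.5e-4
# Faintest spectral channel: ~2e6 (target) and ~3e6 (comparison) electrons
print(relative_flux_precision(2e6, 3e6))   # ~9e-4
```

The photon-only estimate of $\sim$1.5\,$\times$\,10$^{-4}$ for the white light curve is consistent with the quoted $\approx$\,1.4\,$\times$\,10$^{-4}$, and the faintest spectral channel recovers the quoted upper end of $\approx$\,9\,$\times$\,10$^{-4}$.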
We used the PyLDTk toolkit \citep{2015MNRAS.453.3821P}, which uses the spectral libraries of \citet{2013A&A...553A...6H}, to determine the limb darkening parameters for the spectral response functions (adopting the stellar values for WASP-103), and used the system parameters and uncertainties for WASP-103b given in the discovery paper \citep{2014A&A...562L...3G}. \section{Analysis} \subsection{White Light Curve Analysis} Rather than impose a prespecified parametric form to describe the unknown instrumental systematics, we follow the procedure described by \citet{2012MNRAS.419.2683G} and use a time-dependent GP\footnote{For the implementation of our Bayesian inference we made extensive use of the Python modules \textbf{GeaPea} and \textbf{Infer} which are freely available from \url{https://github.com/nealegibson}} to model the systematics as a stochastic process simultaneously with a deterministic transit model derived from the equations of \citet{2002ApJ...580L.171M}. This approach leads to a much more flexible model for the instrumental effects, and being intrinsically Bayesian, automatically helps mitigate against the possibility of over-fitting. In our case a GP defines a joint Gaussian probability distribution around a transit mean function given by: \begin{equation} p(\bmath f| \bmath t,\bphi,\btheta) = \mathcal{N} \left (T(\bmath t,\bphi) , \boldsymbol{\Sigma} (\bmath t,\btheta) \right). \end{equation} where $\bmath t$ is the vector of time measurements, \textit{T} is the transit function depending on $\bmath t$ and the transit parameters $\bphi$, $\bmath f$ is the vector of flux measurements and $\boldsymbol{\Sigma}$ is the covariance matrix which is a function of $\bmath t$ and the hyperparameters $\btheta$. 
GPs are capable of including multiple inputs such as the optical state parameters which describe the behaviour of the instrument, however given that we found no obvious correlations between the instrumental systematics and auxiliary measurements we proceeded to model the systematics as time-correlated noise only. The instrumental systematics are fully described by the covariance matrix which describes the correlation between data points, and the covariance matrix itself is populated by the covariance function, also known as a kernel (see \citet{3569} for a detailed discussion of kernels), with parameters $\boldsymbol{\theta}$. For our analysis we used the Mat\'ern 3/2 kernel defined as: \begin{equation} k({t}_n, {t}_m | \btheta) = \xi^2 \left( 1+{\sqrt{3}\,\eta\,\Delta t} \right) \exp \left( -{\sqrt{3}\,\eta\,\Delta t}\right) + \delta_{nm}\sigma^2, \end{equation} where $\xi$ specifies the maximum covariance or height scale, $\Delta$\,t is the time difference of observations, $\eta$ is the inverse characteristic length scale, $\delta_{nm}$ is the Kronecker delta and $\sigma$ specifies the white noise (assumed to be identical for all data points). The Mat\'ern 3/2 kernel can be viewed as a less smooth version of the more commonly employed squared exponential kernel and our choice was mainly motivated by the arguments outlined in \citet{2013MNRAS.436.2974G}. As a check we also ran the same analysis using the squared exponential kernel but found that this had little effect on the final results. The posterior probability distribution is then obtained by specifying priors for the hyperparameters of the model and multiplying by the marginal likelihood (in practice we use log priors and the log marginal likelihood). Our mean function is the deterministic transit model assuming a circular orbit and the two parameter quadratic limb darkening law of \citet{2000A&A...363.1081C} with coefficients \textit{c}$_\mathrm{1}$ and \textit{c}$_\mathrm{2}$. 
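Equations (1) and (2) can be written as a minimal numpy sketch. The actual analysis used the \textbf{GeaPea} and \textbf{Infer} modules; the function names below are illustrative, and the transit mean function is passed in pre-computed:

```python
import numpy as np

def matern32_kernel(t, xi, eta, sigma):
    """Covariance matrix of Eq. (2): Matern 3/2 with height scale xi and
    inverse length scale eta, plus a white-noise term sigma**2 on the
    diagonal (the Kronecker delta)."""
    dt = np.abs(t[:, None] - t[None, :])
    arg = np.sqrt(3.0) * eta * dt
    K = xi ** 2 * (1.0 + arg) * np.exp(-arg)
    K[np.diag_indices_from(K)] += sigma ** 2
    return K

def gp_log_likelihood(flux, transit_mean, K):
    """Log marginal likelihood of the flux under the joint Gaussian of
    Eq. (1), with the transit model as the GP mean function."""
    r = flux - transit_mean
    L = np.linalg.cholesky(K)                # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return (-0.5 * r @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(r) * np.log(2.0 * np.pi))
```

Optimising or sampling this quantity over the transit parameters and the (log) hyperparameters is what the white light curve fit in Section 3.1 amounts to.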
In our analysis we held the value of the period fixed to that reported by \citet{2014A&A...562L...3G} and fit for the central transit time (\textit{T}$_\mathrm{c}$), planet-to-star radius ratio ($\rho$ = \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$) and a further two parameters of a linear baseline model of time (\textit{f}$_{\mathrm{oot}}$, \textit{T}$_{\mathrm{grad}}$). We chose to fix the values for the system scale (\textit{a}/\textit{R}$_\mathrm{\star}$) and the impact parameter \textit{b} at the tightly constrained values reported in \citet{2015MNRAS.447..711S} and set a Gaussian prior for the planet-to-star radius ratio also using their reported value. This was to help facilitate a direct comparison with the results from \citet{2018AJ....156...17K} who adopted these parameter values for their analysis. This constrains the white light curve parameters to previously derived values and enables us to recover a more accurate systematics model. As a test, we also performed an independent fit to our white light curve to check the validity of our assumed parameter values. Both of the parameters which we chose to fix in our analysis were found to be consistent within 1\,$\sigma$ to those of \citet{2015MNRAS.447..711S}, though the measured planet-to-star radius ratio was found to be slightly higher (within 2\,$\sigma$). However, since our retrieval accounts for an offset in the planet-to-star radius ratio between the different datasets (see Section 4.2 for a description of our atmospheric retrieval using AURA), we don't expect this discrepancy to significantly affect our results. In addition, common-mode corrections can lead to biases in the mean level of the transmission spectrum for each instrument/transit if the correction is inaccurate. This does not affect the relative transmission spectrum for each individual transit observation, but can lead to offsets between datasets which should be taken into account in the interpretation. 
We also placed Gaussian priors on the limb darkening parameters \textit{c}$_\mathrm{1}$ and \textit{c}$_\mathrm{2}$ with a mean and uncertainty determined from the best fit values from PyLDTk, and additionally restricted their values to ensure that the brightness of the stellar surface is positive with a monotonically decreasing intensity profile using the following boundary conditions \citep[e.g.][]{Kipping_2013}: \begin{eqnarray} \textit{c}_\mathrm{1}+\textit{c}_\mathrm{2} < 1,\nonumber \\ \textit{c}_\mathrm{1} > 0,\nonumber \\ \textit{c}_\mathrm{1}+2\textit{c}_\mathrm{2} > 0. \end{eqnarray} As another check we also repeated our analysis having fixed the limb darkening parameters to their best fit values but found this did not affect the conclusions of our study. We summarise the assumed values for the white light curve in Table 1. The kernel hyperparameters are variable in our fit but we fit for log $\xi$ and log $\eta$ with uniform priors in log space which is the natural parameterisation for scale parameters \citep[e.g.][]{2013MNRAS.428.3680G,2013MNRAS.436.2974G}. We also constrain the length scale to be no smaller than the cadence of our observations and no larger than twice the total duration with lower frequency systematics being accounted for in the baseline function. \begin{figure} \includegraphics[width=\columnwidth]{wlc.pdf} \caption{White light curve of WASP-103b obtained with the 600B grism. The red line shows the best fit model with blue shading indicating plus/minus two standard deviations. The green line shows the systematics model derived from the GP fit. Residuals are indicated below the light curve. We clip any points over 4\,$\sigma$ from the fit, but preserve them for the common-mode correction (shown in magenta, see Section 3.1). 
} \label{fig:wlc} \end{figure} Our best-fitting model for the white light curve is obtained by optimising the posterior over the transit and kernel parameters using a differential evolution algorithm with the values from \citet{2014A&A...562L...3G} and \citet{2015MNRAS.447..711S} as the starting point, and then fine-tuning our estimated values using a Nelder-Mead simplex algorithm. Due to some high-frequency systematics (most likely caused by thin clouds), which occur before and at the beginning of ingress, we clip any data points over 4\,$\sigma$ from our white light curve fits, to avoid biasing our systematics model towards short length scales. Nonetheless, the clipped points are retained in the common-mode correction, to correct similar high-frequency systematics also present in the spectroscopic light curves. We verified that this process did not significantly affect our final transmission spectrum by obtaining near identical results when the points are included in the GP fit. We then marginalise our posterior distribution using a Markov-Chain Monte-Carlo (MCMC) method to obtain uncertainty estimates for our parameters. For each of our light curves we used 4 independent chains of length 80,000, discarding the first 40$\%$ of samples in the chain and checking for mutual convergence using the Gelman-Rubin statistic. We derive our best-fit systematics model by separating the mean of our GP (conditioned on the observed data) from the transit model and use this for our common-mode correction for the spectroscopic light curves. The best fit white light curve model and derived systematics model and residuals are shown in Figure 2. \begin{table} \caption{Transit parameter values used in the fitting of the white light curve. 
The orbital period, system scale and impact parameter were held fixed and Gaussian priors were placed on the following parameters with the mean and standard deviations given below.} \label{tab1} \begin{tabular}{ll} \hline Parameter & Value\\ \hline \textit{P} & 0.925542 days (fixed)\\[2pt] \textit{a}/\textit{R}$_\mathrm{\star}$ & 2.999 (fixed)\\[2pt] \textit{b} & 0.14 (fixed)\\[2pt] \textit{$\rho$} & 0.1127 $\pm$ 0.0009\\[2pt] \textit{c}$_\mathrm{1}$ & 0.614 $\pm$ 0.004\\[2pt] \textit{c}$_\mathrm{2}$ & 0.102 $\pm$ 0.005\\[2pt] \hline \end{tabular} \end{table} \subsection{Spectroscopic Light Curve Analysis} \begin{figure*} \includegraphics[width=\textwidth]{slc1.pdf} \caption{Spectral light curves for the 600B grism corresponding to the broad spectral channels shown in Figure 1. The left panel shows the raw light curves before correction. The middle panel shows the light curves with best fit GP model after the common-mode correction. The right panel shows the residuals from the best-fit model.} \label{fig:slc1} \end{figure*} For the spectroscopic light curves, we first extracted individual low-resolution channels using uniform bins with a width of 150\,{\AA} as shown in Figure 1. In total we extracted 15 of these low-resolution channels and the resulting light curves are shown in the left panel of Figure 3. The spectroscopic light curves are corrupted by significant systematics which are similar in shape to that seen for the white light curve, and which are mainly invariant in wavelength. This allows us to correct the spectral light curves prior to model fitting by dividing through by the common-mode correction which we derive from the white light curve. We also subtract the residuals from the white light curve and its best-fitting model to remove any remaining high-frequency systematics. 
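The common-mode correction just described amounts to a one-line operation per channel; the sketch below is illustrative (the systematics model and residuals are those derived from the white light curve GP fit of Section 3.1):

```python
import numpy as np

def common_mode_correct(spectral_lc, white_systematics, white_residuals):
    """Apply the common-mode correction: divide a spectroscopic light
    curve by the systematics model derived from the white light curve,
    then subtract the white-light residuals to suppress the remaining
    high-frequency structure (all arrays share the same time sampling)."""
    return spectral_lc / white_systematics - white_residuals
```

Because the same correction is applied to every channel, relative radius ratios between channels are preserved, which is why the correction improves precision without biasing the shape of the transmission spectrum.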
This process removes much of the common signal in the light curves and results in significant improvements to the precision of our transmission spectrum without affecting the relative value of the planet-to-star radius ratio. After correction we fit each spectroscopic light curve using the same method outlined above for the white light curve except we fix the central transit time to the best fit value inferred from the white light curve analysis, and allow the planet-to-star radius ratio, limb darkening parameters, normalisation parameters and kernel hyperparameters to vary for each light curve fit. We set broad normal priors for the limb darkening coefficients centred at the best fit values determined using PyLDTk, but increased the uncertainties for our prior to have a standard deviation of 0.1 in order to allow a greater degree of flexibility in our model. We again optimise the transit and kernel parameters using a differential evolution algorithm and fine-tuned using a Nelder-Mead simplex algorithm. We clip any outliers over 4\,$\sigma$ from our predictive distribution for each individual fit (typically only 1-2 points for each light curve) before running the same MCMC procedure to explore our posterior distribution as for the white light curve. The best fit GP models are shown in Figure 3 and we summarise the derived planet-to-star radius ratios and associated uncertainties in Tables 2 and 3. \begin{table} \caption{Transmission spectrum for WASP-103b recovered from the FORS2 low-resolution spectroscopic light curves. 
} \label{tab2} \begin{tabular}{lcc} \hline Wavelength & Radius Ratio & Limb Darkening\\ Centre [Range] ({\AA}) & \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$ & \textit{c}$_\mathrm{1}$ \hspace{0.5cm} \textit{c}$_\mathrm{2}$\\ \hline 3943 [3868-4018] & 0.11352 $\pm$ 0.00292 & 0.860\hspace{0.3cm}-0.066\\ 4093 [4018-4168] & 0.11076 $\pm$ 0.00100 & 0.797\hspace{0.3cm}0.020\\ 4243 [4168-4318] & 0.11241 $\pm$ 0.00140 & 0.856\hspace{0.3cm}-0.057 \\ 4393 [4318-4468] & 0.11162 $\pm$ 0.00105 & 0.749\hspace{0.3cm}0.035\\ 4543 [4468-4618] & 0.11095 $\pm$ 0.00068 & 0.730\hspace{0.3cm}0.053\\ 4693 [4618-4768] & 0.11205 $\pm$ 0.00047 & 0.696\hspace{0.3cm}0.078\\ 4843 [4768-4918] & 0.11168 $\pm$ 0.00085 & 0.631\hspace{0.3cm}0.111\\ 4993 [4918-5068] & 0.11280 $\pm$ 0.00059 & 0.645\hspace{0.3cm}0.093\\ 5143 [5068-5218] & 0.11309 $\pm$ 0.00049 & 0.624\hspace{0.3cm}0.094\\ 5293 [5218-5368] & 0.11176 $\pm$ 0.00062 & 0.600\hspace{0.3cm}0.107\\ 5443 [5368-5518] & 0.11330 $\pm$ 0.00045 & 0.581\hspace{0.3cm}0.109\\ 5593 [5518-5668] & 0.11143 $\pm$ 0.00057 & 0.563\hspace{0.3cm}0.119\\ 5743 [5668-5818] & 0.11103 $\pm$ 0.00055 & 0.544\hspace{0.3cm}0.125\\ 5893 [5818-5968] & 0.11163 $\pm$ 0.00101 & 0.529\hspace{0.3cm}0.128 \\ 6043 [5968-6118] & 0.11106 $\pm$ 0.00092 & 0.516\hspace{0.3cm}0.129\\ \hline \end{tabular} \end{table} \begin{table} \caption{Transmission spectrum for WASP-103b recovered from the FORS2 high-resolution spectroscopic light curves centered on the Na feature. 
} \label{tab3} \begin{tabular}{lcc} \hline Wavelength & Radius Ratio & Limb Darkening\\ Centre [Range] ({\AA}) & \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$ & \textit{c}$_\mathrm{1}$\ \hspace{0.5cm} \textit{c}$_\mathrm{2}$\\ \hline 5833 [5818-5848] & 0.11154 $\pm$ 0.00117 & 0.535\hspace{0.3cm}0.130\\ 5863 [5848-5878] & 0.11240 $\pm$ 0.00105 & 0.525\hspace{0.3cm}0.131\\ 5893 [5878-5908] & 0.11329 $\pm$ 0.00137 & 0.535\hspace{0.3cm}0.121\\ 5923 [5908-5938] & 0.11187 $\pm$ 0.00117 & 0.528\hspace{0.3cm}0.130\\ 5953 [5938-5968] & 0.11112 $\pm$ 0.00177 & 0.524\hspace{0.3cm}0.129\\ \hline \end{tabular} \end{table} \subsection{Investigation of sodium feature} \begin{figure*} \includegraphics[width=\textwidth]{slc2.pdf} \caption{Same as Figure 3, showing the five additional light curves extracted using the high-resolution channels shown in Figure 1 centred around the Na feature.} \label{fig:slc2} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{fors_spec.pdf} \caption{FORS2 transmission spectrum of WASP-103b. The top panel shows the contaminant corrected spectrum. The blue points are the results for the low-resolution light curves. The red triangle is the central high-resolution channel centred on Na and the grey triangles show the high-resolution channels located either side. The grey dashed lines correspond to the mean of the transmission spectrum plus and minus 3 atmospheric scale heights. The gold line shows an example model assuming a clear atmosphere at the terminator. We do not attempt to fit this model to the data, but simply over plot it for reference. The brown dashed line in the lower panel gives an indication of the applied contaminant correction. 
} \label{fig:fors_spec} \end{figure*} Enhanced absorption from the alkali metals Na and K has been detected for a range of exoplanets as pronounced features in their transmission spectra \citep[e.g.][]{2002ApJ...568..377C,2011MNRAS.416.1443S,2012MNRAS.422.2477H,2014MNRAS.437...46N,2015A&A...577A..62W,2015MNRAS.446.2428S,2016ApJ...832..191N,2018Natur.557..526N}, whilst many others have shown only partially or highly attenuated features or even completely featureless spectra due to the presence of clouds and/or hazes in the upper atmosphere \citep[e.g.][]{2013MNRAS.436.2974G,2013ApJ...778..183L,2016A&A...587A..67L}. Despite possessing both a high temperature and large radius, WASP-103b nevertheless represents a challenging target for transmission spectroscopy observations, with the amplitude of potential absorption features predicted to be intrinsically small due to its high mass and density. However, signs of enhanced Na and K absorption have previously been observed using Gemini/GMOS by \citet{2017A&A...606A..18L} and we attempted to confirm the Na feature in our FORS2 data by extracting five additional high-resolution light curves in addition to the low-resolution light curves described above. For our high-resolution channels we used 30\,{\AA} bins with the central bin placed at the mid point of the Na doublet at 5892.9\,{\AA} and two bins either side in the neighbouring continuum. Only Na is covered by the 600B grism and so we are unable to search for additional K absorption using the FORS2 dataset. The resulting light curves show similar systematic features as for the white light curve and low-resolution light curves and so we fitted each of these light curves following the same steps as before, applying the same common-mode correction. The narrow spectral bins used for extraction are shown in Figure 1 and the corresponding light curves in Figure 4. 
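The claim that potential features are intrinsically small can be checked with an order-of-magnitude estimate of the pressure scale height, $H = k_\mathrm{B}T/(\mu m_\mathrm{H} g)$, and the corresponding transit-depth change per scale height, $\approx 2 R_\mathrm{p} H / R_\star^2$. The sketch below uses only values quoted in this paper; a mean molecular weight of $\mu = 2.3$ (H$_2$/He) is an assumption, and the stellar radius is inferred from the measured radius ratio rather than taken from the literature:

```python
import numpy as np

# Physical constants and Jupiter values (SI)
G, K_B, M_HYDROGEN = 6.674e-11, 1.381e-23, 1.673e-27
M_JUP, R_JUP = 1.898e27, 7.149e7

def scale_height(t_eq, m_planet, r_planet, mu=2.3):
    """Atmospheric pressure scale height H = k*T / (mu * m_H * g)."""
    g = G * m_planet / r_planet ** 2
    return K_B * t_eq / (mu * M_HYDROGEN * g)

# WASP-103b values quoted in the text (1.49 M_J, 1.53 R_J, T_eq ~ 2500 K)
m_p, r_p = 1.49 * M_JUP, 1.53 * R_JUP
H = scale_height(2500.0, m_p, r_p)
r_star = r_p / 0.1127               # stellar radius implied by Rp/Rs
print(H / 1e3)                      # ~570 km
print(2.0 * r_p * H / r_star ** 2)  # ~1.3e-4 transit-depth change per H
```

A signal of order 10$^{-4}$ per scale height is comparable to the per-channel precision of the spectral light curves, which is why detecting atomic features in this atmosphere is challenging despite the high equilibrium temperature.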
\subsection{Correction for Contaminant Star} Previous observations \citep[e.g.][]{2015A&A...579A.129W,2016ApJ...827....8N,2017AJ....153...34C} have revealed that WASP-103 harbours a faint K5V companion star (\textit{T}$_\mathrm{eff}$\,$\approx$\,4400\,$\pm$\,200\,K) within 0.24$''$ and it is likely that this pair are gravitationally bound. Flux contamination from a blended companion has the potential to introduce additional wavelength-dependent effects in both transmission and emission spectra if not properly accounted for \citep[e.g.][]{2012ApJ...760..140C,2016A&A...587A..67L}. At such a small angular separation both of these stars are blended in our observations necessitating a contaminant correction which we implemented as follows: first we obtained theoretical PHOENIX spectra \citep{2013A&A...553A...6H} for WASP-103 and the companion interpolated to the stellar properties reported in \citet{2017AJ....153...34C}. We then estimate the flux contribution integrated over each of our wavelength channels due to the contaminant given by: \begin{equation} \frac{\textit{F}_\mathrm{cont}}{\textit{F}_\mathrm{W103}} = \left(\frac{\textit{R}_\mathrm{cont}}{\textit{R}_\mathrm{W103}}\right)^2\left(\frac{\textit{M}_\mathrm{cont}}{\textit{M}_\mathrm{W103}}\right), \end{equation} where \textit{M}$_\mathrm{cont}$ and \textit{M}$_\mathrm{W103}$ are the integrated model fluxes for each passband and \textit{R}$_\mathrm{cont}$/\textit{R}$_\mathrm{W103}$ is the contaminant to target radius ratio. Finally, we used the estimated flux contributions to apply dilution correction factors to each spectral bin in order to account for the contamination and the resulting flux ratios and decontaminated planet-to-star radius ratios are shown in Table 4. Our estimated values are consistent (over the overlapping wavelengths) with those calculated by \citet{2017A&A...606A..18L} who used a similar method to obtain their estimates. 
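The correction factors applied to the radius ratios are consistent with the standard dilution formula: adding a contaminant flux $F_\mathrm{cont}$ dilutes the observed transit depth by $(1 + F_\mathrm{cont}/F_\mathrm{W103})$, so the undiluted radius ratio scales by the square root of that factor. The sketch below (function name illustrative) reproduces the Table 4 values from the uncorrected Table 2 measurements:

```python
import numpy as np

def decontaminate_radius_ratio(rho_measured, flux_ratio):
    """Correct the measured planet-to-star radius ratio for dilution by a
    blended companion: the undiluted transit depth is deeper by a factor
    (1 + F_cont / F_W103), so rho scales by the square root of that factor."""
    return rho_measured * np.sqrt(1.0 + flux_ratio)

# The 5443 A channel: Table 2 gives rho = 0.11330, Table 4 a flux ratio of 0.044
print(decontaminate_radius_ratio(0.11330, 0.044))  # ~0.11577, as in Table 4
```

Applying the same factor to every channel of a given dataset shifts the spectrum vertically but leaves its wavelength-dependent shape essentially unchanged, as noted in the text.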
For our analysis we have simply applied the correction factors to our measured radius ratios and uncertainties and have not attempted to account for the small uncertainties which are introduced by these factors. There are a number of other models available which could be used to estimate the flux contamination, the specific choice of which could potentially result in different spectral shapes. \citet{2017A&A...606A..18L} estimated the uncertainties introduced by adopting the PHOENIX model of the K5V companion by carrying out a large number of simulations and found that, while the entire transmission spectrum may be subject to an overall offset of 0.0013 in \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$, any introduced wavelength-dependent slopes are small and therefore unlikely to significantly alter our results even in the most extreme case. The overall effect of our contaminant correction is to add a vertical shift to the transmission spectrum. An indication of the applied contaminant correction is shown in the lower panel of Figure 5. \begin{table} \caption{Calculated flux ratios used to derive the correction factors for each spectral bin and resulting decontaminated transmission spectrum. 
} \label{tab4} \begin{tabular}{lcc} \hline Wavelength & Flux Ratio & Radius Ratio\\ Centre [Range] ({\AA}) & \textit{F}$_\mathrm{cont}$/\textit{F}$_\mathrm{W103}$ & \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$ \\ \hline 3943 [3868-4018] & 0.019 & 0.11459 $\pm$ 0.00295\\ 4093 [4018-4168] & 0.021 & 0.11191 $\pm$ 0.00102\\ 4243 [4168-4318] & 0.024 & 0.11375 $\pm$ 0.00106 \\ 4393 [4318-4468]& 0.028 & 0.11317 $\pm$ 0.00115\\ 4543 [4468-4618] & 0.034 & 0.11283 $\pm$ 0.00070\\ 4693 [4618-4768] & 0.035 & 0.11399 $\pm$ 0.00048\\ 4843 [4768-4918] & 0.038 & 0.11378 $\pm$ 0.00087\\ 4993 [4918-5068] & 0.035 & 0.11476 $\pm$ 0.00060\\ 5143 [5068-5218] & 0.031 & 0.11483 $\pm$ 0.00050\\ 5293 [5218-5368] & 0.043 & 0.11414 $\pm$ 0.00064\\ 5443 [5368-5518] & 0.044 & 0.11577 $\pm$ 0.00046\\ 5593 [5518-5668] & 0.048 & 0.11408 $\pm$ 0.00059\\ 5743 [5668-5818] & 0.051 & 0.11383 $\pm$ 0.00056\\ 5893 [5818-5968] & 0.052 & 0.11449 $\pm$ 0.00104\\ 6043 [5968-6118] & 0.053& 0.11397 $\pm$ 0.00095\\ \hline central high-resolution channel\\ 5893 [5883-5903] & 0.048 & 0.11598 $\pm$ 0.00140\\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=\textwidth]{models.pdf} \caption{Combined optical-infrared transmission spectrum of WASP-103b obtained from FORS2, Gemini/GMOS, WFC3 and Spitzer observations. The blue points are the results from the low-resolution FORS2 light curves. The green and orange points are the results from \citet{2017A&A...606A..18L} and \citet{2018AJ....156...17K} respectively. The red triangles show the result for our high-resolution bin centred on the Na feature. The magenta triangles show the high-resolution Na and K measurements from \citet{2017A&A...606A..18L} (Na measurement slightly offset for clarity). The top panel shows the best-fit model (purple line) from our forward modelling along with the best-fit full cloud (crimson) and full haze (turquoise) models for comparison. 
Both of these models have been slightly offset from the best-fit for clarity. The middle panel shows the median fit (red line) from the retrieval analysis using AURA along with the 1 and 2 sigma significance contours (red/light red). The lower panel shows the median fit (purple line) from the retrieval analysis using NEMESIS along with the 1 and 2 sigma significance contours (purple/light purple). For all panels the dashed lines correspond to the mean of the transmission spectrum plus and minus 3 atmospheric scale heights.} \end{figure*} Our decontaminated FORS2 transmission spectrum for WASP-103b is shown in the top panel of Figure 5. Our results are consistent with a linear fit to the transmission spectrum with a $\chi^2$ value of 19.62 for 13 degrees of freedom or reduced $\chi^2$ of 1.51, and we calculate a low significance for the Na feature ($<$\,1.5\,$\sigma$). We discuss these results further in Section 5. \section{Atmospheric Modelling} \subsection{Goyal forward models} We combined the optical data from the FORS2 and GMOS observations with those obtained by \citet{2018AJ....156...17K} using WFC3 and Spitzer in the near-IR to produce a complete optical-infrared transmission spectrum of WASP-103b. Our system scale and inclination parameters were fixed to those assumed in the WFC3/Spitzer analysis and we applied a small offset to the GMOS spectrum ($\approx$\,--\,4\,$\times$\,10$^{-6}$\,\textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$) calculated using the overlapping FORS2/GMOS region to stitch the spectra together in the vertical direction, accounting for the difference in white light curve parameters. We compared the full transmission spectrum with a generic grid of forward models generated using the one-dimensional radiative-convective equilibrium code ATMO \citep[][]{Amundsen_2014,dr02400q,go01600j,2015ApJ...804L..17T,2016ApJ...817L..19T}.
Each model in the grid assumes chemical equilibrium abundances and isothermal pressure-temperature (p-T) profiles, with the entire grid exploring 24 equilibrium temperatures from 300\,--\,2600\,K in steps of 100\,K, six metallicities (0.1\,--\,200\,$\times$\,solar), four planetary gravities (5\,--\,50\,m\,s$^{-2}$), four C/O ratios (0.35\,--\,1.0) and four parameters each describing scattering hazes and uniform clouds. The haze parameter defines the wavelength-dependent Rayleigh scattering profile for small particles, whilst the cloud parameter defines the uniform grey scattering profile, simulating the effects of a cloud deck from 0 to 100$\%$ cloud opacity across all wavelengths. Each of the models considers H$_2$\,-\,H$_2$ and H$_2$\,-\,He collision-induced absorption (CIA) and opacities due to H$_2$O, CO$_2$, CO, CH$_4$, NH$_3$, Na, K, Li, Rb, Cs, TiO, VO, FeH, PH$_3$, H$_2$S, HCN, SO$_2$ and C$_2$H$_2$. The sources of these opacities and their pressure broadening parameters can be found in \citet{Amundsen_2014} and \citet{go01600j}, and we adopt the Na and K pressure-broadened line profiles from \citet{Burrows_2000}. The generic model grid is baselined for a Jupiter-radius planet around a Solar-radius star and each model in the grid can then be scaled based on the planetary radius, stellar radius and surface gravity of WASP-103b using the scaling relationship derived in \citet{go01600j}. We note that since we fit for the temperature, we do not scale to the planetary equilibrium temperature in the equation (i.e. the temperature terms cancel out). The generic model grid has been developed for two different condensation schemes: `local' and `rainout' condensation. In the `local condensation' scheme each model level is independent, with the chemical composition depending only on elemental abundances and the local conditions of pressure and temperature. In this scheme, any condensates which form will deplete elements only within that layer of the atmosphere.
In the `rainout' scenario, condensates which form will also deplete elements in all layers above, in addition to the local layer, and the chemical abundances of each layer will therefore also depend on all deeper layers of the atmosphere. Our best-fit model was found after scaling to the parameters of WASP-103b and using a least-squares minimization procedure with a free vertical offset in \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$ (see \citealt{go01600j} for further details of the grid parameters and implementation). Here we only consider the rainout condensation scenario, although we note that using the local condensation approach made little difference to our interpretation of the observed spectrum. The best-fit model is shown in the top panel of Figure 6 along with full cloud/full haze models for comparison. It favours a clear atmosphere with \textit{T}\,=\,1700\,K, super-solar metallicity [M/H]\,=\,+\,1.7, solar C/O ratio [C/O]\,=\,0.56 and planetary gravity \textit{g}\,=\,10\,m s$^{-2}$, corresponding to a $\chi^2$ of 81.46 for 44 degrees of freedom or reduced $\chi^2$ of 1.89. This model shows evidence of H$_2$O at 1.4 microns and no evidence of either clouds or hazes. We obtain a reduced $\chi^2$ of 3.01 and 1.99 for the full haze and full cloud models respectively (note that the models shown in the top panel of Figure 6 have been subsequently, and arbitrarily, shifted for clarity). \subsection{Atmospheric Retrieval with AURA} In addition to the forward modelling described above, we performed a Bayesian atmospheric retrieval on the full dataset to try to constrain the atmospheric composition and temperature structure at the day-night terminator of WASP-103b.
Our retrieval uses an adaptation of the retrieval code AURA \citep[]{Pinhas_2018,2019AJ....157..206W}, with the method having already been successfully implemented for a number of transmission spectra \citep[e.g.][]{Pinhas_2018,2019MNRAS.482.1485P,2019AJ....157..206W,Madhusudhan_2020}. Our atmospheric retrieval code consists of two components: a forward model to predict the atmospheric spectrum and an algorithm for statistical parameter estimation. The model solves line-by-line radiative transfer for a transmission geometry and assumes hydrostatic equilibrium, a plane-parallel atmosphere and uniform volume mixing ratios. In addition to CIA we include the sources of chemical opacity expected to be prominent in hot-Jupiter atmospheres in the observed spectral range: H$_2$O, Na, K, TiO, VO, AlO, HCN, CO and CO$_2$. Cross sections for these sources are calculated by \citet{Gandhi_2017} from a range of databases \citep[]{2010JQSRT.111.2139R,2013JQSRT.130....4R,2012JQSRT.113.1276R,2016JMoSp.327...73T} including EXOMOL \citep[]{2016JMoSp.327...73T,Yurchenko_2011,Barber_2013,Yurchenko_2014}, HITEMP \citep{2010JQSRT.111.2139R} and HITRAN \citep{2012JQSRT.113.1276R}. The model uses a parameterised, one-dimensional p-T profile with the atmosphere divided into three distinct zones defined by pressure values $P_{\rm 1}$, $P_{\rm 2}$ and $P_{\rm 3}$. $T_{\rm 0}$ (in Kelvin) defines the temperature at the top of the atmosphere, while $\alpha_{\rm 1}$ and $\alpha_{\rm 2}$ describe the gradient of the profile. The model also includes the a priori unknown reference pressure $P_{\rm ref}$ at $R_{\rm p}$. Our model also considers the contributions from homogeneous/inhomogeneous cloud/haze coverage \citep{2017MNRAS.469.1979M}. This is parameterised with a cloud-deck altitude ($P_{\rm cloud}$, in bars), a Rayleigh enhancement factor ($a$, a linear scaling of the opacity from H$_2$ Rayleigh scattering), and $\gamma$, which describes the Rayleigh scattering slope.
Finally, we include a term $\bar\phi$, which describes the terminator-averaged cloud/haze contribution. This varies between 0 (clear atmosphere) and 1 (fully cloudy atmosphere), where the forward model is computed both with and without the cloud/haze model before being averaged (with weighting governed by $\bar\phi$). For more details on these parameters and a detailed description of the method see \citet{Pinhas_2018}. A statistical sampling algorithm is used to infer the properties of the exoplanetary atmosphere, i.e.\ the posterior distributions of the forward model parameters and their credibility intervals. For our retrieval we used the Nested Sampling algorithm MultiNest \citep{Feroz_2009} implemented with PyMultiNest \citep{Buchner_2014} which, in addition to robust parameter estimation, also allows calculation of the Bayesian evidence term $\mathcal{Z}$, facilitating model comparison and the calculation of detection significances. Our abundances are presented as the average terminator volume mixing ratios $X_{\rm i}$\,=\,$n_{\rm i}$/$n_{\rm tot}$. The normalised abundance is relative to a `solar' value, i.e.\ that expected in equilibrium at the relevant temperature for an atmosphere with solar elemental abundances \citep[e.g.][]{Asplund_2009,2012ApJ...758...36M}. We employed a uniform prior between 800\,--\,2800\,K for the temperature at the top of the atmosphere. We follow a similar procedure to that outlined in \citet{Pinhas_2018} and set an upper limit on the prior for $T_{\rm 0}$ of a few hundred K above the equilibrium temperature (\textit{T}$_\mathrm{eq}$\,$\approx$\,2500\,K). The reason for this restriction is that the value of $T_{\rm 0}$ is expected to be significantly below the equilibrium temperature, and allowing for much higher values can lead to unphysical solutions (see Table A1 in the appendix for the full list of our assumed priors).
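The conversion of a Bayesian evidence difference into the `n-sigma' detection significances quoted below is not spelled out in the text; a commonly adopted prescription in retrieval work is the \citet[][]{Pinhas_2018}-style use of the Sellke et al. (2001) bound relating a Bayes factor to a p-value. The sketch below implements that prescription under this assumption, using only the standard library:

```python
import math

def _bisect(f, lo, hi, it=200):
    # Simple bisection root finder; assumes f(lo) and f(hi) bracket a root.
    flo = f(lo)
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sigma_from_lnB(ln_bayes_factor):
    """Convert a log Bayes factor (Delta ln Z between models with and
    without a species) into an equivalent Gaussian sigma level.

    NOTE: uses the Sellke et al. (2001) bound B = -1 / (e * p * ln p);
    which conversion the paper actually used is our assumption.
    """
    B = math.exp(ln_bayes_factor)
    # Solve B = -1/(e p ln p) for the p-value on p in (0, 1/e).
    p = _bisect(lambda q: -1.0 / (math.e * q * math.log(q)) - B,
                1e-12, 1.0 / math.e - 1e-9)
    # Sigma such that the two-sided Gaussian p-value equals p.
    return _bisect(lambda s: math.erfc(s / math.sqrt(2.0)) - p, 0.0, 40.0)
```

For instance, a Bayes factor of roughly 23 corresponds to a significance of about 3 sigma under this prescription.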
We also include an offset between the datasets as a free parameter in our retrieval to account for any remaining discrepancies in the overall levels. Our retrieval indicates a detection of H$_2$O with a significance of 4.0\,$\sigma$. We retrieve a terminator H$_2$O abundance of log(\textit{X}$_\mathrm{H_2O}$) = --1.73$_\mathrm{-0.55}^\mathrm{+0.38}$, corresponding to a $\sim$\,40\,$\times$ solar abundance composition. We also constrain the Na abundance to log(\textit{X}$_\mathrm{Na}$) = --3.02$^\mathrm{+0.98}_\mathrm{-2.43}$ though with a low significance (2.0\,$\sigma$) and constrain the terminator-averaged cloud/haze fraction $\Bar{\phi}$ to be 0.35$_\mathrm{-0.15}^\mathrm{+0.15}$. The retrieved H$_2$O and Na abundance estimates, solar-relative abundances and detection significances are listed in Table 5 and the median fit along with the 1 and 2 sigma confidence contours are shown in the middle panel of Figure 6. The complete atmospheric retrieval results including marginalised posterior probability densities are presented in the appendix. \begin{table} \caption{Retrieved terminator H$_2$O and Na abundances, solar-normalised abundances and detection significances using AURA. } \label{tab5} \begin{tabular}{lccc} \hline Species & Abundance & Normalised Abundance & Significance\\ \hline H$_2$O & -1.73$_\mathrm{-0.55}^\mathrm{+0.38}$ & 40 & 4.0-$\sigma$\\ Na & -3.02$_\mathrm{-2.43}^\mathrm{+0.98}$ & 600 & 2.0-$\sigma$\\ \hline \end{tabular} \end{table} \subsection{Atmospheric Retrieval with NEMESIS} We also compared the results from our AURA retrieval to those obtained using the NEMESIS radiative transfer and retrieval algorithm. NEMESIS (Non-linear optimal Estimator for MultivariatE spectral analySIS) was originally designed to model Solar System objects \citep{IRWIN20081136} but has since been modified for exoplanet atmospheres \citep[e.g.][]{Lee_2011,Barstow_2013,Lee_2014,Barstow_2016,Bruno_2019,Barstow_2020}. 
In its original form NEMESIS paired a fast correlated-k \citep{doi:10.1029/90JD01945} forward model with an efficient optimal estimation algorithm for parameter estimation \citep{doi:10.1142/3171}, though more recently this has been upgraded to take advantage of the PyMultiNest algorithm \citep{Krissansen_Totton_2018}, allowing for full exploration of non-Gaussian posterior distributions. As with our AURA retrieval we also include CIA, and the sources for our opacities are found in various databases including those from \citet{osti_5642348,osti_6904948}, \citet{borysowfm89,borysow97}, \citet{borysow02}, EXOMOL \citep{Chubb2020}, and NIST \citep{NaKlinelist}. We assume an isothermal p-T profile and represent clouds with the cloud top and base pressures and the index of the scattering slope as free parameters in the model, along with the total optical depth. We use similar prior ranges as for the AURA retrieval. We find that our results from NEMESIS are in excellent agreement with those obtained from AURA, with the log opacity of --3.98$_\mathrm{-3.70}^\mathrm{+3.72}$ being consistent with a clear atmosphere and terminator abundances for H$_2$O and Na of log(\textit{X}$_\mathrm{H_2O}$) = --1.33$_\mathrm{-0.58}^\mathrm{+0.22}$ and log(\textit{X}$_\mathrm{Na}$) = --3.25$_\mathrm{-4.34}^\mathrm{+1.43}$ respectively. The retrieved abundance estimates agree within 1\,$\sigma$ with the corresponding AURA estimates, giving us further confidence in the model fits. Our best-fit retrieval model along with the 1 and 2 sigma significance contours are shown in the lower panel of Figure 6, and the complete retrieval results including marginalised posterior probability densities are given in the appendix.
\section{Discussion} WASP-103b is an ultra-short period hot-Jupiter with a mass and radius of 1.49 \textit{M}$_\mathrm{J}$ and 1.53 \textit{R}$_\mathrm{J}$ respectively. It has an equilibrium temperature close to 2500\,K, a surface gravity $\approx$\,15\,m\,s$^{-2}$ and an atmospheric scale height $\approx$\,600\,km or 0.0006 \textit{R}$_\mathrm{p}$/\textit{R}$_\mathrm{\star}$. \citet{2015MNRAS.447..711S} observed a strong (7.3\,$\sigma$) wavelength-dependent slope in the optical and found that this held even after applying a correction for the contaminant star \citep{2016MNRAS.463...37S}. Conversely, \citet{2017A&A...606A..18L} observed signs of strong absorption in the cores of both alkali features using Gemini/GMOS but did not recover the V-shaped pattern or any evidence for a Rayleigh scattering signature. A study by \citet{2018AJ....156...17K} revealed a featureless nightside-corrected transmission spectrum which was consistent with a flat line fit within 1\,$\sigma$ across the WFC3 and Spitzer bands. Additionally, they determined that their phase-resolved spectra were consistent with blackbody emission and attributed the lack of detection of H$_2$O features to partial dissociation. Several recent studies \citep[e.g.][]{2019A&A...626A.133H,2019A&A...631A..79H} suggest that the large temperature gradients expected for ultra-hot Jupiters such as WASP-103b likely lead to cloud-free daysides but cloudy nightsides. Furthermore, the regions probed by transmission spectroscopy observations may not be homogeneously cloudy and can also feature strong morning/evening terminator asymmetries, with similar asymmetries expected for the amount of observable gas in these regions. \citet{Staab_2016} estimated the log(\textit{R$'$}$_\mathrm{HK}$) value for WASP-103, finding it to be $\approx$\,$-$4.57, indicating a higher level of chromospheric activity than expected from the system age.
However, despite this activity we do not expect the corresponding stellar heterogeneity to result in a measurable offset in our transmission spectrum given the spectral type (F8V) of the host star. In \citet{2019AJ....157...96R} the estimated contamination for an F8 dwarf is a factor of $\sim$\,1.001 to $\sim$\,1.002, corresponding to an offset in transit depth of $\sim$\,0.0025\% which is within the error of our measurements. \subsection{FORS2 Transmission Spectrum} Our decontaminated FORS2 transmission spectrum is shown in Figure 5 along with an example model which assumes a clear atmosphere at the terminator. The horizontal lines show the weighted average and plus and minus three scale heights, with one scale height corresponding to $\approx$\,600\,km or 1\,$\times$\,10$^{-3}$ in transit depth. Our uncertainties range from $\approx$\,5\,$\times$\,10$^{-4}$ at the centre of the grism to $\ga$\,2\,$\times$\,10$^{-3}$ at the edges, showing that our applied common-mode correction is most accurate over the central wavelength bands. We performed a least-squares fit for a horizontal line using a Levenberg--Marquardt algorithm and calculate a $\chi^2$ of 25.18 for 14 degrees of freedom or reduced $\chi^2$ of 1.80 for the decontaminated transmission spectrum. We find an improved fit for a linear model including an upwards slope, with a $\chi^2$ of 19.62 for 13 degrees of freedom or reduced $\chi^2$ of 1.51, though the improvement is insufficient to reject a featureless model. In both cases the major contribution to the $\chi^2$ value stems from a single outlier at $\sim$\,5500\,{\AA} which does not correspond to absorption from any of the species considered in our retrieval. Masking this single point in our calculation results in a reduced $\chi^2$ close to unity. We calculate the significance of the central high-resolution Na measurement to be $<$\,1.5\,$\sigma$.
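For the special case of a horizontal line, the weighted least-squares fit and its $\chi^2$ have a closed form (the best-fitting level is simply the inverse-variance weighted mean), so no iterative algorithm is strictly required. A sketch with hypothetical data, not the measured spectrum:

```python
import numpy as np

def flat_line_chi2(depth, err):
    """Fit a horizontal line to a spectrum by weighted least squares.
    Returns (chi2, reduced_chi2, best_level). For a constant model the
    optimum is the inverse-variance weighted mean (closed form)."""
    w = 1.0 / err**2
    level = float(np.sum(w * depth) / np.sum(w))
    chi2 = float(np.sum(w * (depth - level) ** 2))
    dof = depth.size - 1  # one fitted parameter
    return chi2, chi2 / dof, level

# Hypothetical four-point example with 5e-4 uncertainties:
k = np.array([0.1140, 0.1145, 0.1135, 0.1140])
chi2, red_chi2, level = flat_line_chi2(k, np.full(4, 5e-4))
```

The same closed form gives the optimal free vertical offset used when comparing the data against each grid model (applied to the data-minus-model residuals).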
To facilitate a more direct comparison to the results from \citet{2017A&A...606A..18L} we also extracted an additional high-resolution channel centred on the Na feature with a bin width of 20\,{\AA} (the smallest bin width used in their study). Whilst we do observe a slightly higher value for the planet-to-star radius ratio when compared to our 30\,{\AA} bin, we also recover a correspondingly larger uncertainty and calculate a significance of $\approx$\,1.2\,$\sigma$. We therefore conclude that we do not detect strong evidence of Na absorption in our FORS2 dataset. Additionally, we find no evidence for a wavelength-dependent slope towards the blue which would also appear to rule out the strong Rayleigh scattering signature previously inferred. Our measured value for the high-resolution Na channel is lower than that reported by \citet{2017A&A...606A..18L}, though the measurements agree within their 1\,$\sigma$ uncertainties. The significance of our measurement is also slightly reduced (down from $\approx$\,1.7\,$\sigma$ for GMOS to $\approx$\,1.2\,$\sigma$ for FORS2). One possible explanation for this systematic offset in the narrow-band feature is the different methods used to treat the instrumental systematics. In their analysis, \citet{2017A&A...606A..18L} used parametric baseline models along with a common noise model to account for the systematic effects and they performed model selection via the Bayesian information criterion (BIC) to optimally choose from amongst the possible models. The absence of a pressure broadened Na feature in our FORS2 transmission spectrum could most easily be explained by the presence of a high-altitude cloud deck which acts to obscure the lower regions of the planetary atmosphere and reduces the strength of the spectral features. However, neither the results of our forward modelling nor our retrieval analyses strongly support this conclusion, with each favouring a relatively clear atmosphere at the terminator. 
Our retrieved value ($\Bar{\phi} = 0.35_\mathrm{-0.15}^\mathrm{+0.15}$) for the cloud/haze fraction parameter using AURA is not consistent with a thick cloud deck acting as a grey absorber, though it perhaps indicates some degree of patchy or inhomogeneous clouds/hazes which may still contribute to the muting of spectral features in the FORS2 spectrum. Furthermore, due to its high mass and density, the amplitude of potential absorption features in the atmosphere of WASP-103b is predicted to be intrinsically small. Therefore the most likely explanation is that a combination of inadequate signal-to-noise and a relatively small atmospheric scale height makes the robust detection of Na in the atmosphere of WASP-103b challenging to obtain from the ground. A further possible explanation is that at such high temperatures Na is largely ionised in the part of the atmosphere that dominates the transmission light curves, leading to low abundances and a small Na feature. This scenario is similar to that found for WASP-18b \citep{2019A&A...626A.133H}, where the low-pressure parts of the terminators show more Na$^+$ than Na. In principle this could also be the case for K. A final possibility is that Na is not present in the atmosphere, though given its equilibrium temperature ($\approx$\,2500\,K) this explanation is less likely. Higher signal-to-noise observations would be useful to definitively choose between these possible scenarios, whilst higher resolution observations will be required to detect the Na core if clouds are present in the atmosphere. With sufficient signal-to-noise (e.g. with MIRI on JWST) it may be possible to verify the presence of cloud species in the mid-infrared where the cloud spectral signatures are more distinct, and perhaps even put constraints on their composition \citep[e.g.][]{2018AJ....155...29W}.
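The small-scale-height argument above can be checked with the numbers quoted in this section ($T_\mathrm{eq}\approx2500$\,K, $g\approx15$\,m\,s$^{-2}$) and an assumed H$_2$-dominated mean molecular weight of $\mu\approx2.3$ (our assumption, as the paper does not quote $\mu$):

```python
# Order-of-magnitude check of the quoted scale height (~600 km).
K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6726e-27     # proton mass, kg

def scale_height(T_eq, g, mu=2.3):
    """Isothermal atmospheric scale height H = k*T / (mu * m_H * g), in metres.
    mu = 2.3 assumes an H2/He-dominated atmosphere."""
    return K_B * T_eq / (mu * M_H * g)

H = scale_height(2500.0, 15.0)  # ~6e5 m, i.e. roughly 600 km
```

This reproduces the $\approx$\,600\,km quoted in the text, confirming why absorption features spanning a few scale heights are so small relative to the planet's radius.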
\subsection{Combined transmission spectrum} Figure 6 shows our full optical-infrared transmission spectrum for WASP-103b incorporating the FORS2, GMOS, WFC3 and Spitzer data. Our retrieval analyses indicate a detection of H$_2$O absorption in the near-IR with a significance of 4.0\,$\sigma$. All of our best-fit models favour a relatively clear atmosphere at the terminator region. Our best-fit forward model has greater than solar metallicity ([M/H] = +1.7) and a solar C/O ratio ([C/O] = 0.56). The retrieved temperature using AURA ($\sim$\,2300\,K) is significantly higher than the best-fit temperature from the Goyal grid ($\sim$\,1700\,K). One explanation for this is that at higher temperatures the Goyal grid predicts prominent TiO/VO absorption features in the optical, and these features are not well matched to the observed FORS2 and GMOS datasets; the model therefore tends to favour the highest temperature in the grid that does not result in strong TiO/VO absorption. Another possible explanation is that the Goyal grid uses isothermal p-T profiles whilst the AURA retrieval uses a parameterised profile. The majority of carbon-bearing species in the atmosphere are covered by the Spitzer observations, and the significant uncertainties in these measurements mean that we are unable to put any meaningful constraints on the carbon abundance, with the low retrieved abundances for CO/CO$_2$ possibly reflecting the lack of data in the relevant parts of the spectrum. It is important to acknowledge that the differing paradigms used in the modelling, i.e. equilibrium models vs free chemistry, lead to differing assumptions about the physical and chemical properties of the atmosphere.
On the one hand, equilibrium models may lack the full flexibility required to model the wide range of exoplanetary atmospheres which could deviate significantly from equilibrium assumptions \citep{Madhusudhan_2018}, whilst on the other it is possible that the free chemistry approach could result in some unphysical combinations of parameters. In their analysis \citet{2018AJ....156...17K} found that their nightside-corrected transmission spectrum was consistent with a flat line fit at the 1\,$\sigma$ level across the WFC3 and Spitzer bands and concluded that they did not detect any evidence of H$_2$O absorption. However, in that analysis they also found that their corrected spectrum was consistent with predictions from a general circulation model including H$_2$O features in the WFC3 bandpass and suggested that further high-precision observations might render these features detectable. One potential explanation for our contrasting result is that \citet{2018AJ....156...17K} only had access to the WFC3 and Spitzer data and their analysis did not include the additional optical data from FORS2 and GMOS. Having access to the full optical-infrared data is important for obtaining accurate retrievals, as precise modelling of the continuum is crucial for breaking degeneracies between model parameters \citep[]{Pinhas_2018,2018AJ....155...29W}. Furthermore, our retrieval method also includes the possibility of patchy cloud coverage, which can affect the shape of the H$_2$O feature in the WFC3 bandpass depending on the degree of cloud coverage \citep[e.g.][]{2016ApJ...820...78L}. We calculated the $\chi^2$ value for a featureless (flat) fit to the WFC3 data alone, finding a reduced $\chi^2$\,>\,2. We therefore conclude that our retrieval results provide reasonable evidence for H$_2$O absorption at the day-night terminator of WASP-103b.
\citet{2018AJ....156...17K} also demonstrated that WASP-103b shows poor heat redistribution between the day and night sides, so one possible explanation as to why they did not detect H$_2$O in their phase-curve observations is that the majority of H$_2$O on the dayside is thermally dissociated but still exists with measurable abundance on the cooler nightside and terminator of the planet. For example, this is the case for the similar ultra-hot Jupiter WASP-121b, which shows H$_2$O absorption at the day-night terminator \citep[]{Evans_2016,2018AJ....156..283E} and the same feature in emission on the dayside hemisphere \citep[]{Evans_2017}, although significantly weakened due to thermal dissociation \citep[]{Parmentier_2018,Mikal_Evans_2019}. We do not obtain particularly strong evidence for Na in our retrievals. However, our detection significance of 2.0\,$\sigma$, along with the previous Na detection in the GMOS band, means that neither can we definitively rule out its presence. Finally, we should also note that a number of studies have recently highlighted some of the challenges inherent in one-dimensional retrievals of ultra-hot Jupiters: \citet{Pluriel_2020} show that thermal dissociation and the strong day-to-night temperature gradient lead to a chemical composition dichotomy between the two hemispheres which can strongly bias retrieved abundances; \citet{2020ApJ...893L..43M} demonstrate that one-dimensional retrievals can significantly underestimate the temperature, particularly for ultra-hot Jupiters, resulting in an overestimate of the H$_2$O abundance and an underestimate of the H$^-$ abundance; whilst \citet{Irwin_2020} found that their 2.5-dimensional retrieval approach was more reliable for modelling phase curve observations of exoplanets compared to the one-dimensional approach.
\section{Conclusion} We have presented ground-based FORS2 observations of the highly irradiated hot-Jupiter WASP-103b covering one full transit, from which we extracted a transmission spectrum over the range $\approx$\,400\,--\,600\,nm using the technique of differential spectrophotometry. We used a Gaussian process to simultaneously model the deterministic transit component and the instrumental systematics, avoiding the need to assume a particular functional form for the systematics. We applied the derived systematics model to our spectroscopic light curves as a common-mode correction to improve the precision of our transmission spectrum, reaching a typical precision of $\approx$\,2\,$\times$\,10$^{-4}$ in transit depth. We accounted for flux contamination due to a blended companion by applying a dilution correction factor to each spectral bin based on an estimate of the flux ratios derived from PHOENIX model spectra of WASP-103 and the contaminant star. Our analysis of the FORS2 data reveals a featureless spectrum across the full range of the observations and we find no evidence for either alkali metal absorption or Rayleigh scattering. The featureless FORS2 transmission spectrum and absence of broad absorption features could most easily be explained by either a low abundance of Na in the atmosphere, or the presence of optically thick, high-altitude clouds or other aerosols which cause broad-band extinction, masking the absorption signatures in the upper atmosphere, either by scattering or absorption, across the full range of the observations. To investigate this further we fit a grid of forward models and performed an atmospheric retrieval on the full optical-infrared spectrum incorporating the additional observations from GMOS, WFC3 and Spitzer. Our retrieval indicates a detection of H$_2$O at the 4.0\,$\sigma$ level and Na at the lower significance of 2.0\,$\sigma$.
We compared the results from our AURA retrieval with those obtained using NEMESIS finding excellent agreement between the two approaches. In all cases we find that a relatively clear atmosphere at the terminator provides the best fit to our data. We conclude that the most likely explanation for our featureless FORS2 spectrum is due to a combination of low signal-to-noise and the inherently small scale height of WASP-103b, although patchy/inhomogeneous clouds or hazes may still play a part in damping the absorption features in the optical. Additional observations at high signal-to-noise might be able to resolve the Na feature whilst high-resolution observations will be required to detect the narrow Na core if clouds/hazes are present in the atmosphere of WASP-103b. \section*{Acknowledgements} We are extremely grateful to the anonymous referee for careful reading of the manuscript. This work is based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 199.C-0467. J.W. would like to acknowledge funding from the Northern Ireland Department for the Economy. N.P.G. gratefully acknowledges support from Science Foundation Ireland and the Royal Society in the form of a University Research Fellowship. A.L.C. is funded by an STFC studentship. We thank Patrick Irwin for the use of NEMESIS. We are grateful to the developers of the NumPy, SciPy, Matplotlib, iPython and Astropy packages, which were used extensively in this work \citep{jones2001scipy,2007CSE.....9...90H,2007CSE.....9c..21P,2013A&A...558A..33A,2020SciPy-NMeth}. \section*{Data availability} The data in this article are available from the ESO Science Archive Facility (\url{http://archive.eso.org}) with program ID 199.C-0467. The data products generated from the raw data are available upon request from the author. \bibliographystyle{mnras}
\section{Introduction} As a white dwarf cools down, the fully ionized plasma that makes up its core eventually becomes so correlated that a first-order phase transition occurs, leading to the formation of a solid core. Although this phenomenon was predicted more than 50 years ago \citep{vanhorn1968}, its unequivocal observational signature has only recently been brought to light using data from the \textit{Gaia} satellite \citep{tremblay2019}. This signature comes in the form of a pile-up of objects in the cooling sequence of evolving white dwarfs \citep[see also][]{bergeron2019}. The core crystallization of white dwarfs is accompanied by the release of latent heat and of gravitational energy from the change in the C/O abundance profile \citep{mochkovitch1983,garciaberro1988,isern1997,fontaine2001,althaus2010}. These two phenomena temporarily slow down the evolution of solidifying white dwarfs, leading to a pile-up of objects at the luminosities where the phase transition takes place. Detecting this pile-up for normal-mass white dwarfs (\mbox{$\sim\,0.6\,M_\odot$}) is a delicate exercise, as crystallization occurs at the same time as convective coupling, another important event in the evolution of a white dwarf. Convective coupling refers to the contact of the superficial convection zone with the degenerate and highly conductive interior. It temporarily slows down the evolution of the white dwarf \citep[][Figure~5]{fontaine2001}, thereby masking the delay caused by crystallization. For more massive objects, however, crystallization starts well before convective coupling, which allowed \cite{tremblay2019} to unambiguously attribute to core crystallization the pile-up structure detected in the luminosity function of DA white dwarfs with masses between 0.9 and 1.1\,$M_{\odot}$.
Using white dwarf population simulations, \cite{tremblay2019} showed that the observed pile-up of massive white dwarfs is roughly consistent with the predictions of theoretical evolution sequences and that both latent heat release and C/O phase separation are needed to explain the observations.\footnote{We note that earlier results already supported the occurrence of O sedimentation in white dwarfs \citep{garciaberro2010}.} However, discrepancies between the models and the observations were noticed: (1) the crystallization bump is predicted to start too early and (2) the amplitude of the pile-up is significantly underestimated \citep[see also][]{kilic2020}. Such problems are a source of concern for the use of white dwarfs as cosmochronometers \citep{winget1987,oswalt1996,fontaine2001,kalirai2012,hansen2013,tremblay2014,kilic2017,isern2019,fantin2019}. Core crystallization is a significant event in the evolution of a white dwarf, and an accurate description of this phenomenon is needed to generate reliable theoretical cooling sequences. In this letter, we present a new C/O phase diagram aimed at improving the modeling of the pile-up structure discovered by \cite{tremblay2019}. We describe our phase diagram in Section~\ref{sec:dphase} and our white dwarf population simulations in Section~\ref{sec:simul}, where we show the impact of our improved description of the phase transition on the luminosity function of massive hydrogen-atmosphere white dwarfs. \section{C/O phase diagram} \label{sec:dphase} The exact shape of the C/O phase diagram is crucial to determine the impact of core crystallization on white dwarf cooling \citep[e.g.,][]{althaus2012}. The position of the liquidus dictates the temperature at the liquid--solid transition, and the separation between the liquidus and the solidus, $\Delta x_{\scriptscriptstyle \rm O}$, determines the importance of O sedimentation during crystallization.
Several versions of the C/O phase diagram already exist in the literature, each with their particular limitations. Early calculations used density-functional methods \citep{segretain1993}, which are intrinsically more approximate than modern simulation techniques. Other studies relied on analytic fits to Monte Carlo (MC) simulations \citep{ogata1993,medin2010}. This approach can be delicate due to the use of approximate linear mixing rules and to the sensitivity of the phase diagram to the somewhat arbitrary choices that are made to construct the analytic functions used to interpolate the MC data \citep{dewitt1996}. Finally, molecular dynamics (MD) simulations were used to directly simulate the phase transition \citep{horowitz2010}. Challenges associated with those MD methods include the existence of finite-size effects and their extreme computational cost, which prohibits the detailed, well-sampled mapping of the phase diagram that is needed for white dwarf modeling. We have computed a new, accurate phase diagram by adapting the Gibbs--Duhem integration technique coupled to MC simulations \citep{kofke1993a,kofke1993b} to plasmas. This advanced approach, previously only used for mixtures of neutral particles, was specifically designed to address the limitations of the other phase diagram mapping techniques outlined above. It consists of calculating the phase diagram by integrating at constant pressure the Clapeyron equation $dT/d\xi$ along the liquid--solid coexistence curve in the temperature ($T$) -- fugacity fraction ($\xi$) space. The Clapeyron equation depends on the enthalpies and the compositions of the liquid and solid phases, which we calculate with Monte Carlo simulations in the semi-grand canonical ensemble, i.e., at constant pressure, temperature, number of ions and fugacity fraction.
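The integration itself is a standard ordinary-differential-equation problem. A minimal sketch of the predictor--corrector stepping used in Gibbs--Duhem integration, with the Clapeyron right-hand side $dT/d\xi$ supplied as a callable (in the actual method its value at each point is measured from the semi-grand canonical Monte Carlo simulations; any concrete callable passed here is purely illustrative):

```python
import numpy as np

def gibbs_duhem_trace(clapeyron_rhs, T0, xi_grid):
    """Trace a coexistence curve T(xi) by integrating dT/dxi along the
    liquid--solid coexistence line with a trapezoidal predictor-corrector,
    in the spirit of Kofke-style Gibbs-Duhem integration.

    clapeyron_rhs(T, xi) returns dT/dxi; in practice it would be evaluated
    from the enthalpy and composition differences of the two coexisting
    phases, measured by Monte Carlo simulation at the current (T, xi).
    """
    T = np.empty(len(xi_grid))
    T[0] = T0
    for k in range(len(xi_grid) - 1):
        h = xi_grid[k + 1] - xi_grid[k]
        f0 = clapeyron_rhs(T[k], xi_grid[k])
        T_pred = T[k] + h * f0                 # Euler predictor
        f1 = clapeyron_rhs(T_pred, xi_grid[k + 1])
        T[k + 1] = T[k] + 0.5 * h * (f0 + f1)  # trapezoidal corrector
    return T
```

Checking that the traced curve is smooth and reaches a known end point (here, the melting temperature of the pure plasma at $\xi = 1$) is a natural consistency test of such an integration.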
The details of our adaptation of the method to plasmas and analytical fits to our C/O phase diagram for an accurate implementation in white dwarf evolution codes will be published elsewhere. \begin{figure*} \centering \includegraphics[width=1.3\columnwidth]{CO.pdf} \caption{C/O phase diagram. Both the liquidus (above which the plasma is entirely liquid) and the solidus (below which it is entirely solid) are shown. The horizontal axis gives the number fraction of O, $x_{\scriptscriptstyle \rm O} = N_{\rm O}/ ( N_{\rm C} + N_{\rm O})$, and the vertical axis corresponds to the ratio between the temperature and the melting temperature of a pure carbon plasma. Our results, in red, are compared to those of \cite{segretain1993}, \cite{medin2010} and \cite{horowitz2010}. The region where an azeotrope is predicted is enlarged in the upper-left corner.} \label{fig:dphase} \end{figure*} Our new C/O phase diagram is shown in Figure~\ref{fig:dphase}, where we compare it to previous calculations. Qualitatively, it is close to those of \cite{medin2010} and \cite{horowitz2010}, with a similar azeotropic form. However, the quantitative differences are important in the context of white dwarf modeling. For instance, the narrow separation between the liquidus and the solidus in the phase diagram of \cite{horowitz2010} would lead to a slight underestimation of O sedimentation. We are confident that our new phase diagram is the most accurate as it is free of many of the limitations and approximations of previous studies. 
In particular, (1) we directly integrate the MC simulations along the coexistence curve, removing the need for arbitrary fit functions to a sparse set of simulations; (2) the relatively low cost of MC simulations allows a very fine sampling of the diagram; (3) finite-size effects are easily mitigated since there is no need to simulate a liquid--solid interface as with MD methods; (4) all calculations are performed at constant pressure and not at constant volume as in all other approaches (phase transitions occur at constant pressure); (5) the relativistic electron jellium is explicitly included in the MC simulations; (6) screening of the ion--ion interactions by relativistic electrons is accounted for; and (7) the numerical precision of the MC simulations and Gibbs--Duhem integration is revealed by the smoothness of the final coexistence curve and the recovery of the exact melting temperature at the end point of the integration at $x_{\scriptscriptstyle \rm O}=1$ (see Figure~\ref{fig:dphase}). Finally, our approach provides a full description of the thermodynamics of the phase transition. \section{Pile-up in the cooling sequence} \label{sec:simul} We have computed new evolution sequences for massive hydrogen-atmosphere white dwarfs in order to test our updated phase diagram against the observed core crystallization pile-up. To do so, we used STELUM, the Montreal white dwarf evolution code \citep{brassard2018}. 
The constitutive physics is identical to what is described in \cite{fontaine2001}, except that (1) we now use the \cite{cassisi2007} conductive opacities, which we correct in the moderately coupled and moderately degenerate regime \citep{blouin2020} following the new theory of \cite{shaffer2020}; (2) the plasma coupling parameter at the phase transition is given by our new C/O phase diagram; (3) the release of gravitational energy due to O sedimentation is implemented following \cite{isern1997,isern2000}; (4) diffusion of $^{22}$Ne in the liquid phase is included. We assume an initially chemically homogeneous core with $X({\rm C}) = X({\rm O}) = 0.49$ and $X(^{22}{\rm Ne})=0.02$ (consistent with the results of \citealt{salaris2010} for $M_{\star} \approx 1\,M_{\odot}$) and we use an envelope stratification given by $M_{\rm H}/M_{\star}=10^{-4}$ and $M_{\rm He}/M_{\star}=10^{-2}$ (the canonical values for DA stars). We find that the cooling delay imposed by C/O phase separation is $1.0\,$Gyr at $\log L/L_{\odot}=-4.5$ for our $0.9\,M_{\odot}$ sequence. This is close to the value obtained by \citet[Figure 5]{althaus2012} using the \cite{horowitz2010} phase diagram, which is unsurprising given the similarity between both phase diagrams (Figure~\ref{fig:dphase}). In order to compare our cooling sequences to the luminosity function given in \cite{tremblay2019}, we have developed our own MC population synthesis code. We use the initial mass function of \cite{salpeter1955}, main-sequence lifetimes from \cite{hurley2000}, the initial--final mass relation of \cite{cummings2018} and synthetic photometry from state-of-the-art atmosphere models\footnote{\url{http://www.astro.umontreal.ca/~bergeron/CoolingModels/}} \citep{bergeron1995,kowalski2006,tremblay2011,blouin2018}. Our slightly different choice of main-sequence lifetimes and initial--final mass relation from those of \cite{tremblay2019} has no effect on the results presented below.
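As an illustration of one ingredient of such a population synthesis, progenitor masses can be drawn from the \cite{salpeter1955} initial mass function by inverse-transform sampling. A minimal sketch (the mass limits and seed below are illustrative, not the values used in our simulations):

```python
import numpy as np

def sample_salpeter(n, m_min=0.8, m_max=8.0, alpha=2.35, seed=None):
    """Draw n progenitor masses (solar masses) from a Salpeter IMF,
    dN/dM proportional to M**(-alpha), by inverse-transform sampling
    of the integrated power law."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    g = 1.0 - alpha                # exponent after integrating the IMF
    a, b = m_min**g, m_max**g
    return (a + u * (b - a)) ** (1.0 / g)
```

In a full simulation, each sampled mass would then be assigned a formation time, evolved through its main-sequence lifetime, and mapped to a white dwarf mass through the initial--final mass relation.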
Figure~\ref{fig:wdlf} compares our theoretical luminosity function for hydrogen-atmosphere white dwarfs between 0.9 and 1.1\,$M_{\odot}$ (black solid line) to the data of \citet[in red]{tremblay2019}. We have assumed a constant stellar formation rate and a 10\,Gyr age for the Galactic disk. The new C/O phase diagram leads to a luminosity function that is very close to that obtained by \cite{tremblay2019}. The magnitude of the crystallization pile-up is very similar and the fit to the low-luminosity cut-off is nearly identical. In this regard, we note that the corrections to the \cite{cassisi2007} opacities in the regime of moderate Coulomb coupling and partial electron degeneracy \citep{blouin2020}---which significantly increase the conductivity of the H envelope---were crucial to obtain a cut-off close to the observations. \cite{tremblay2019} were able to reproduce the cut-off without those corrections because they were relying on older calculations \citep{hubbard1969,itoh1983,mitake1984} that predict higher conductivities than \cite{cassisi2007} in the core and in the He envelope \citep{salaris2013}. One notable difference between our luminosity function and that of \cite{tremblay2019} is that the bump associated with crystallization starts later in our simulations ($\log L/L_{\odot} \approx -2.6$ instead of $-2.3$), bringing the theoretical luminosity function closer to the data. This shift is due to the fact that the Coulomb coupling parameter at the liquid--solid transition for an equimassic C/O mixture is $\Gamma \approx 215$ according to our phase diagram, while \cite{tremblay2019} used the one-component plasma result of $\Gamma=175$ \citep{potekhin2000}. We note that this problem does not affect evolution codes that already include a detailed treatment of crystallization \citep[e.g.,][]{althaus2012}; however, only a comparison with \cite{tremblay2019} is possible at this point, since no other study has yet attempted to model the {\it Gaia} crystallization pile-up.
\begin{figure} \centering \includegraphics[width=\columnwidth]{WDLF.pdf} \caption{The lower panel displays luminosity functions for hydrogen-atmosphere white dwarfs with masses between 0.9 and 1.1\,$M_{\odot}$. Observational data from \cite{tremblay2019} are shown in red and their population synthesis is represented by the dotted curve. Results from our own simulations are shown as dashed and solid lines (with and without an attempt to simulate the effect of $^{22} {\rm Ne}$ phase separation, respectively). The normalization is arbitrary and we use the \textit{Gaia} $G$ magnitude limit given in \cite{gentile2019}. The upper panel shows the evolution of the fraction of the core that is crystallized for $1.1$, $1.0$ and $0.9\,M_{\odot}$ sequences.} \label{fig:wdlf} \end{figure} We have performed the most accurate calculation of the classical C/O phase diagram to date. Because it is in good agreement with previous calculations, we can conclude that the physics of the solidification of the C/O plasma in white dwarfs is by and large well understood. This implies that the excess in the luminosity function at $\log L/L_{\odot} \approx -3.3$ above the prediction from C/O crystallization is likely due to a separate mechanism. The gravitational settling of $^{22}$Ne in the C/O liquid phase is another source of cooling delay in white dwarfs \citep{bildsten2001,garcia-berro2008,camisassa2016}. It is already included in our sequences, but we note that its importance may be underestimated given remaining uncertainties on the initial $X(^{22}{\rm Ne})$ profile and on the diffusion coefficients \citep[e.g.,][]{cheng2019}. However, we checked that arbitrarily increasing the importance of $^{22}$Ne settling does not lead to a more prominent crystallization bump and worsens the overall agreement with the observations by reducing the number of objects that have had the time to evolve to lower luminosities. 
At $\log L/L_{\odot} \approx -3.3$, a significant portion of the core is already solidified (see the upper panel of Figure~\ref{fig:wdlf}), meaning that $^{22}$Ne diffusion is already largely stopped. Therefore, any change to the treatment of $^{22}$Ne settling---whether from the initial $X(^{22}{\rm Ne})$ profile or the diffusion coefficients---is unlikely to solve the discrepancy at $\log L/L_{\odot} \approx -3.3$. Another possibly important cooling delay may arise from the phase separation of $^{22}$Ne during crystallization \citep{isern1991,althaus2010}. Our current best understanding is that at the small $^{22}$Ne concentrations typical of C/O white dwarfs ($\sim 1$\% by number), the presence of $^{22}$Ne should not affect the phase diagram, except near the azeotropic point of the C/O/Ne phase diagram. Thus the crystallization of the C/O core initially proceeds as in the case without $^{22}$Ne with no redistribution of neon ions between the solid and liquid phases. After a significant fraction of the core has crystallized, the temperature approaches the azeotropic point and the existing calculations indicate that the liquid phase is enriched in $^{22}$Ne relative to the solid \citep{segretain1996,garcia-berro2016}. The $^{22}$Ne-poor solid is lighter than the surrounding liquid and floats upward where it eventually melts. This gradually displaces the $^{22}$Ne-rich liquid downward toward the solid--liquid interface until the azeotropic composition is reached, thereby releasing a considerable amount of gravitational energy. Given our very limited knowledge of the ternary C/O/Ne phase diagram \citep{segretain1996,hughto2012}, this effect cannot be quantitatively implemented in our evolution models. However, we note that our current understanding of $^{22}$Ne phase separation is remarkably consistent with the missing cooling delay. 
In Figure~\ref{fig:wdlf} we show the luminosity function obtained by adding an artificial 0.6\,Gyr delay when 60\% of the core is crystallized. Those parameters are entirely consistent with those found in preliminary studies \citep{segretain1996,garcia-berro2016} and yield an excellent fit to the crystallization pile-up.\footnote{The additional cooling delay from $^{22}$Ne phase separation worsens the fit to the low-luminosity cut-off, but this may simply be due to the unrealistic assumption that $^{22}$Ne phase separation has the same importance for all stars in our simulation. In particular, the old stars that form the cut-off of the luminosity function likely contain less $^{22}$Ne than the younger ones that form the crystallization bump, since the $^{22}$Ne abundance of a white dwarf increases for higher metallicity progenitors. Alternatively, this mismatch could be due to our assumption on the age of the Galactic disk.} Based on the current, yet limited, knowledge of the C/O/Ne phase diagram, we propose that the phase separation of $^{22}$Ne in the advanced stage of crystallization significantly contributes to the pile-up in the luminosity function of $0.9-1.1\,M_{\odot}$ white dwarfs (Figure~\ref{fig:wdlf}). Finally, we speculate that the cooling anomaly for very massive white dwarfs ($1.08-1.23\,M_{\odot}$) identified by \cite{cheng2019}---where roughly 6\% of objects are affected by an unexplained $\approx 8\,{\rm Gyr}$ cooling delay---may also be at least partially explained by $^{22}$Ne phase separation rather than only by $^{22}$Ne diffusion in the liquid phase as originally suggested. In fact, the energy source responsible for the unexplained cooling delay has an effect that is highly peaked on the crystallization sequence. We found that such a peaked effect is unlikely to arise from simple diffusion alone (which is inhibited by crystallization), while it can realistically be expected from $^{22}$Ne phase separation.
Of course, a 0.6\,Gyr delay is too small to explain the findings of \cite{cheng2019}, but this delay could be much more important if the initial $^{22}$Ne concentration is higher. A significant fraction of the massive objects of Cheng et al.'s sample must come from double white dwarf mergers and, interestingly, additional $^{22}$Ne is expected to be formed during merger events \citep{staff2012}. This would mean that those objects have a higher $^{22}$Ne abundance than the usual $X(^{22}{\rm Ne})=0.01-0.02$ (and a different distribution throughout the core), possibly leading to a longer cooling delay due to the phase separation of neon during crystallization. By removing any remaining uncertainties on the classical C/O phase diagram, we have shown that the pile-up detected in the \textit{Gaia} cooling sequence cannot be explained by latent heat release and O sedimentation alone. $^{22}$Ne phase separation appears to play a crucial role in the formation of the excess of massive white dwarfs observed at $\log L/L_\odot \approx -3.3$. Our results highlight the need for a complete and accurate ternary C/O/Ne phase diagram to establish quantitatively the importance of $^{22}$Ne phase separation in white dwarf evolution. We plan to generalize our newly developed Gibbs--Duhem integration method to three-component mixtures to address this problem. \begin{acknowledgements} Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20190624PRD2. This work was performed under the auspices of the U.S. Department of Energy under Contract No. 89233218CNA000001. \end{acknowledgements} \bibliographystyle{aa}
\subsection{Dual Loss Functions} To optimize the entire denoising process, we consider a {\it dual loss} that measures the quality of both the subsampling and the final point cloud reconstruction. That is, we have two loss functions: 1) a loss function $\mathcal{L}_{\mathrm{sample}}$ to quantify the distance between the subsampled and pre-filtered set $\hat{{\mathbf S}}$ and the ground truth point cloud ${\mathbf P}_\mathrm{gt}$, which explicitly reduces the noise in $\hat{{\mathbf S}}$ but is not required for the convergence of the training; 2) a loss function $\mathcal{L}_{\mathrm{rec}}$ to quantify the distance between the finally reconstructed point cloud ${\widetilde{\mathbf P}}$ and the ground truth ${\mathbf P}_\mathrm{gt}$. Formally, our network can be trained end-to-end by minimizing \begin{equation} \min_{\Theta} \mathcal{L}_{\mathrm{sample}} + \mathcal{L}_{\mathrm{rec}}, \end{equation} where $\Theta$ denotes the learnable parameters in the network. We choose the Chamfer distance (CD) \cite{fan2017pointsetgen} as $\mathcal{L}_{\mathrm{sample}}$, since $\hat{{\mathbf S}}$ and ${\mathbf P}_\mathrm{gt}$ contain different numbers of points, {\it i.e.}, $|\hat{{\mathbf S}}| < |{\mathbf P}_\mathrm{gt}|$. It is defined as \begin{equation} \label{eq:cd_train} \mathcal{L}_{\mathrm{sample}} = \mathcal{L}_{\mathrm{CD}}(\hat{{\mathbf S}},{\mathbf P}_\mathrm{gt})= \frac{1}{\left|\hat{{\mathbf S}}\right|} \sum_{{\mathbf p} \in \hat{{\mathbf S}}} \min _{{\mathbf q} \in {\mathbf P}_\mathrm{gt}}\| {\mathbf p} - {\mathbf q}\|_2^2+\frac{1}{\left|{\mathbf P}_\mathrm{gt}\right|} \sum_{{\mathbf q} \in {\mathbf P}_\mathrm{gt}} \min _{{\mathbf p} \in \hat{{\mathbf S}}}\| {\mathbf q} - {\mathbf p}\|_2^2. \end{equation} This loss term improves the denoising quality by explicitly optimizing the sampled and pre-filtered set $\hat{{\mathbf S}}$, but is optional for the network training. In particular, this loss is not adopted when the network is trained in an unsupervised manner.
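The Chamfer distance between sets of unequal size can be evaluated directly. A minimal NumPy sketch of the symmetric Chamfer distance as in the equation above (brute-force pairwise distances, fine for small point sets):

```python
import numpy as np

def chamfer_distance(S, P):
    """Symmetric Chamfer distance between point sets S (M x 3) and
    P (N x 3): the mean squared distance from each point to its nearest
    neighbour in the other set, summed over both directions."""
    S = np.asarray(S, dtype=float)
    P = np.asarray(P, dtype=float)
    # (M, N) matrix of pairwise squared Euclidean distances
    d2 = ((S[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical sets give zero; for example, `chamfer_distance([[0,0,0]], [[0,0,0],[1,0,0]])` is $0 + (0+1)/2 = 0.5$.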
$\mathcal{L}_{\mathrm{rec}}$ can be supervised or unsupervised, as discussed below. \textbf{Supervised} $\mathcal{L}_{\mathrm{rec}}$. We choose the Earth Mover's distance (EMD) \cite{fan2017pointsetgen} as the supervised form of $\mathcal{L}_{\mathrm{rec}}$, which has been shown to be superior to the Chamfer distance in terms of visual quality \cite{liu2019morphing, achlioptas2018learning}. The Earth Mover's distance is only defined when two point clouds have the {\it same} number of points. Fortunately, the denoising task naturally satisfies this requirement. The EMD loss measuring the distance between the denoised point cloud ${\widetilde{\mathbf P}}$ and the ground truth point cloud ${\mathbf P}_\mathrm{gt}$ is given by: \begin{equation} \mathcal{L}_{\mathrm{rec}}=\mathcal{L}_{\mathrm{EMD}}\left({\widetilde{\mathbf P}}, {\mathbf P}_\mathrm{gt}\right)=\min _{\varphi: {\widetilde{\mathbf P}} \rightarrow {\mathbf P}_\mathrm{gt}} \frac{1}{N} \sum_{{\mathbf p} \in {\widetilde{\mathbf P}}}\|{\mathbf p}-\varphi({\mathbf p})\|_{2}^2, \end{equation} where $N = |{\widetilde{\mathbf P}}| = |{\mathbf P}_\mathrm{gt}|$, and $\varphi$ is a bijection. Note that previous works on denoising \cite{PCN2020, TotalDenoising2019} often suffer from the clustering effect of points, which is alleviated by introducing a repulsion loss. Our architecture does not suffer from this problem thanks to the one-to-one correspondence of points in $\mathcal{L}_\mathrm{EMD}$. \newline \noindent\textbf{Unsupervised} $\mathcal{L}_{\mathrm{rec}}$. Our network can also be trained in an unsupervised fashion. Leveraging the unsupervised denoising loss in \cite{TotalDenoising2019}, we design an unsupervised loss tailored to our manifold-reconstruction-based denoising. The key observation is that points with a denser neighborhood are closer to the underlying clean surface, and may thus be regarded as ground truth points for training a denoiser.
In \cite{TotalDenoising2019}, the unsupervised denoising loss is defined as \begin{equation} \mathcal{L}_{\mathrm{U}} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{{\mathbf q} \sim P({\mathbf q} | {\mathbf p}_i)} \| f({\mathbf p}_i) - {\mathbf q} \|, \end{equation} where $P({\mathbf q}|{\mathbf p}_i)$ is a prior capturing the probability that a point ${\mathbf q}$ from the noisy point cloud is the underlying clean point of the given ${\mathbf p}_i$ in the noisy point cloud. Empirically, $P({\mathbf q}|{\mathbf p}_i)$ is defined as $ P({\mathbf q}|{\mathbf p}_i) \propto \exp ( - \frac{\| {\mathbf q} - {\mathbf p}_i \|_2^2}{2\sigma^2} )$, so that sampling points from the noisy input point cloud ${\mathbf P}$ according to $P({\mathbf q}|{\mathbf p}_i)$ produces points that are closer to the underlying clean surface with high probability \cite{TotalDenoising2019}. $f(\cdot)$ represents the denoiser that maps a noisy point ${\mathbf p}_i$ to a denoised point ${\mathbf q}_i$. It is a bijection between the noisy point cloud ${\mathbf P}$ and the output point cloud ${\widetilde{\mathbf P}}$. This bijection can be naturally established in previous deep-learning based denoising methods such as PointCleanNet \cite{PCN2020} and Total Denoising \cite{TotalDenoising2019}, because these methods predict the displacement of each point. However, there is no natural one-to-one correspondence between ${\mathbf P}$ and ${\widetilde{\mathbf P}}$ in our method based on patch manifolds. Hence, we seek to construct one by \begin{equation} f=\arg\min _{f: {\mathbf P} \rightarrow {\widetilde{\mathbf P}}} \sum_{{\mathbf p} \in {\mathbf P}}\|f({\mathbf p}) - {\mathbf p}\|_{2}. \end{equation} Note that, since the loss term $\mathcal{L}_\mathrm{sample}$ requires the supervision of ground truth data, we disable $\mathcal{L}_\mathrm{sample}$ and only use the unsupervised $\mathcal{L}_\mathrm{U}$ in the unsupervised training setting.
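Both the supervised EMD loss and the correspondence $f$ above are minimum-cost bijections between equal-size point sets. For tiny examples the optimum can be found exactly by enumerating permutations; a minimal sketch (enumeration is factorial in $N$, so real implementations rely on approximate or combinatorial assignment solvers):

```python
import itertools
import numpy as np

def emd_loss(P_out, P_gt):
    """Exact EMD between equal-size point sets: the minimum over all
    bijections of the mean squared matched distance. Brute force, so
    only usable for very small N."""
    A = np.asarray(P_out, dtype=float)
    B = np.asarray(P_gt, dtype=float)
    n = len(A)
    # (n, n) matrix of pairwise squared distances
    cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    best = min(sum(cost[i, perm[i]] for i in range(n))
               for perm in itertools.permutations(range(n)))
    return best / n
```

The same enumeration with unsquared distances yields the bijection $f$ used by the unsupervised loss.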
\section{Introduction} \label{sec:introduction} Recent advances in depth sensing, laser scanning and image processing have enabled convenient acquisition of 3D point clouds from real-world scenes\footnote{Commercial products include Microsoft Kinect (2010-2014), Intel RealSense (2015-), Velodyne LiDAR (2007-2020), LiDAR scanner of Apple iPad Pro (2020), {\it etc.}}. Point clouds consist of discrete 3D points irregularly sampled from continuous surfaces, and are used in a wide range of applications such as autonomous driving, robotics and immersive tele-presence. Nevertheless, they are often contaminated by noise due to the inherent limitations of scanning devices or matching ambiguities in the reconstruction from images, which significantly affects downstream understanding tasks since the underlying structures are deformed. Hence, point cloud denoising is crucial to relevant 3D vision applications; it is also challenging due to the irregular and unordered characteristics of point clouds. Previous point cloud denoising methods include non-deep-learning based methods \cite{bilat, jetsfit2005, MRPCA2017, GLR2019, projbased2018, featuregraph2020} and deep-learning based methods \cite{NeuralProj2019, PCN2020, TotalDenoising2019, ECNet2018}. We focus on the class of deep-learning based methods, which have achieved promising denoising results thanks to the advent of neural network architectures crafted for point clouds \cite{qi2017pointnet,qi2017pointnet2,yang2018foldingnet,achlioptas2018learning,ECNet2018, te2018rgcnn,PPPU2019, yu2018pu, wang2019dynamic, gao2020graphter}. Neural Projection \cite{NeuralProj2019}, PointCleanNet \cite{PCN2020} and Total Denoising \cite{TotalDenoising2019} are pioneers of deep-learning based point cloud denoising approaches.
In general, these methods infer the displacement of noisy points from the underlying surface and reconstruct {\it points}, which, however, are not designed to recover the surface explicitly and may lead to sub-optimal denoising results. To this end, inspired by the fact that a point cloud is typically a representation of some underlying surface or 2D manifold via a set of sampled points, we propose to explicitly learn the underlying {\it manifold} of a noisy point cloud for reconstruction, aiming to capture intrinsic structures in point clouds. As demonstrated in Fig.~\ref{fig:teaser}, the key idea is to sample a subset of points with low noise ({\it i.e.}, closer to the clean surface) via differentiable pooling, and then reconstruct the underlying manifold from these points and their embedded neighborhood features. By resampling on the reconstructed manifold, we obtain a denoised point cloud. In particular, we present an autoencoder-like neural network for differentiable manifold reconstruction. At the encoder, we learn both local and non-local features of each point, which embed the representations of local surfaces. Based on the learned features, we sample points that are closer to the underlying surfaces (less noise perturbation) via the proposed adaptive {\it differentiable pooling} operation, which narrows down the latent space for reconstructing the underlying manifold. These sampled points are pre-filtered and retained, while the other points are discarded. At the decoder, we infer the underlying manifold by transforming each sampled point along with the embedded neighborhood feature to a local surface centered around the point---referred to as a {\it ``patch manifold''}. By sampling on such patch manifolds, we finally obtain a denoised point set which captures intrinsic structures of the underlying surface. Further, we design an unsupervised training loss, so that our network can be trained in either an unsupervised or supervised fashion.
Experiments show that our method significantly outperforms state-of-the-art denoising methods, especially at high noise levels. To summarize, the contributions of our paper include \begin{itemize} \item We propose a differentiable manifold reconstruction paradigm for point cloud denoising, aiming to learn the underlying manifold of a noisy point cloud via an autoencoder-like framework. \item We propose an adaptive differentiable pooling operator on point clouds, which samples points that are closer to the underlying surfaces and thus narrows down the latent space for reconstructing the underlying manifold. \item We infer the underlying manifold by transforming each sampled point along with the embedded feature of its neighborhood to a local surface centered around the point---a patch manifold. \item We design an unsupervised training loss, so that our network can be trained in either an unsupervised or supervised fashion. \end{itemize} \section{Related Work} \subsection{Non-deep-learning Based Point Cloud Denoising} \label{sec:related-nondl} Non-deep-learning based point cloud denoising methods have been extensively studied; they mainly include local-surface-fitting based methods, sparsity based methods and graph based methods. \begin{itemize} \item \textbf{Local-surface-fitting based methods.} These methods approximate the point cloud with a smooth surface and then project the noisy points onto the fitted surface. \cite{MLS2001} proposes a moving least squares (MLS) projection operator to calculate the optimal fitting surface of the point cloud. Similarly, other surface fitting methods have been proposed for point cloud denoising, such as jet-fitting with re-projection \cite{jetsfit2005} and bilateral filtering \cite{bilat}, which take into account both point coordinates and normals. However, these methods are often sensitive to outliers.
\item \textbf{Sparsity based methods.} These methods are based on sparse representation theory \cite{xu2015sparsity, sparsecoding2010, sun2015denoising}. They generally reconstruct normal vectors by solving an optimization problem with sparse regularization and then update the position of points based on the reconstructed normals. Moving Robust Principal Component Analysis (MRPCA) \cite{MRPCA2017} is a recently proposed sparsity-based method. However, the performance tends to degrade when the noise level is high, due to over-smoothing or over-sharpening. \item \textbf{Graph based methods.} These methods represent point clouds on graphs and perform denoising via graph filters \cite{graphbased2015, graphbased2018, GLR2019, featuregraph2020,Hu2020gsp}. In \cite{graphbased2015}, the input point cloud is represented as a signal on a $k$-nearest-neighbor graph and then denoised via a convex optimization problem regularized by the gradient of the graph. In \cite{GLR2019}, Graph Laplacian Regularization (GLR) of low dimensional manifold models is proposed for point cloud denoising. \end{itemize} \subsection{Deep-learning Based Point Cloud Denoising} With the advent of point-based neural networks \cite{qi2017pointnet, qi2017pointnet2, wang2019dynamic}, deep point cloud denoising has received increasing attention. Existing deep learning based methods generally predict the displacement of each point in a noisy point cloud via neural networks, and apply the inverse displacement to each point. Among them, Neural Projection \cite{NeuralProj2019} employs PointNet \cite{qi2017pointnet} to predict the tangent plane at each point, and projects points to the tangent planes. However, training a Neural Projection denoiser requires access not only to clean point clouds but also to the normal vectors of each point. PointCleanNet \cite{PCN2020} predicts the displacement of points from the clean surface via PCPNet \cite{guerrero2018pcpnet}---a variant of PointNet.
It is trained end-to-end by minimizing the $\ell 2$ distance between the denoised point cloud and the ground truth, which does not require access to normal vectors. PointCleanNet outperforms some classical denoising methods including bilateral filtering and jet fitting. The main defects of PointCleanNet are its sensitivity to outliers and its tendency to shrink point clouds. Total Denoising \cite{TotalDenoising2019} is the first unsupervised deep learning method for point cloud denoising. It is based on the assumption that points with denser surroundings are closer to the underlying surface. Hence, it introduces a spatial prior that steers convergence towards the underlying surface without the supervision of ground truth point clouds. However, the unsupervised denoiser is sensitive to outliers and may shrink point clouds. In addition to denoising networks, some other neural network architectures involve point cloud consolidation, which includes denoising but is often only applicable to trivial noise. PointProNet \cite{PointProNets2018} projects patches in the point cloud into 2D height maps and leverages a 2D CNN to denoise and upsample them. EC-Net \cite{ECNet2018} and 3PU \cite{PPPU2019} mainly focus on upsampling, and have been shown to be robust against trivial noise. These consolidation methods are generally prone to failure when the noise level is high \cite{PCN2020}. \section{Method} In this section, we present our method for learning the underlying manifold for point cloud denoising. We start with an overview of our key ideas, and then elaborate on the proposed differentiable manifold reconstruction. Finally, we present our loss functions and provide further analysis of our method.
\subsection{Overview} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/backbone.pdf} \caption{Illustration of the proposed point cloud denoising framework.} \Description{The neural network.} \label{fig:framework} \end{figure*} Given an input point cloud ${\mathbf P} \in \mathbb{R}^{N \times 3} $ corrupted by noise, our network produces a clean point cloud ${\widetilde{\mathbf P}} \in \mathbb{R}^{N \times 3} $. As illustrated in Fig.~\ref{fig:framework}, we propose an autoencoder-like network architecture for denoising. \begin{itemize} \item \textbf{Representation Encoder $\mathcal E$.} $\mathcal E$ samples from ${\mathbf P}$, via differentiable pooling, a subset of $M$ points ${\mathbf S} \in \mathbb{R}^{M \times 3}$ that are perturbed by less noise. Specifically, $\mathcal E$ consists of a feature extraction unit and a differentiable downsampling (pooling) unit. The feature extraction unit produces features that encode both local and non-local geometry at each point of ${\mathbf P}$. The extracted features are then fed into the differentiable pooling operator---essentially a downsampling unit that identifies points that are closer to the underlying surface, leading to a subset of points ${\mathbf S}$. \item \textbf{Manifold Reconstruction Decoder $\mathcal D$.} $\mathcal D$ first infers the underlying manifold from ${\mathbf S}$ and then samples on the inferred manifold to produce the denoised point set ${\widetilde{\mathbf P}} \in \mathbb{R}^{N \times 3} $. We transform each point in ${\mathbf S}$ along with the embedded neighborhood feature to a local surface centered around each point---a {\it patch manifold}. By sampling multiple times on each patch manifold, we reconstruct a clean point cloud ${\widetilde{\mathbf P}}$. \end{itemize} Further, we propose a dual supervised loss function as well as an unsupervised loss, so that our network can be trained end-to-end in an unsupervised or supervised fashion.
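The overall encoder-decoder data flow can be sketched in a few lines of NumPy. Every learned component (features, scores, offsets) is replaced here by a random or trivial stand-in, so the sketch only illustrates the tensor shapes and the 2x upsampling at the decoder, not the actual network:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, F = 1024, 512, 32        # input points, pooled points, feature dim

def encoder(P):
    """Stand-in for E: per-point features + top-M pooling (random here)."""
    X = rng.standard_normal((len(P), F))   # placeholder for learned features
    scores = X @ rng.standard_normal(F)    # placeholder for the score MLP
    idx = np.argsort(-scores)[:M]          # keep the M highest-scoring points
    return P[idx], X[idx]

def decoder(S, Y, samples_per_point=2):
    """Stand-in for D: emit two points near each pooled point."""
    offsets = 0.01 * rng.standard_normal((len(S), samples_per_point, 3))
    return (S[:, None, :] + offsets).reshape(-1, 3)

P = rng.standard_normal((N, 3))            # noisy input cloud
S, Y = encoder(P)
P_denoised = decoder(S, Y)                 # same cardinality as the input
```

With $M = N/2$ and two samples per pooled point, the output has exactly as many points as the input, matching the setting described above.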
\subsection{Representation Encoder with Differentiable Pooling} \label{sec:encoder} The representation encoder consists of a feature extraction unit and a differentiable pooling unit, which we discuss in detail as follows. \subsubsection{Feature Extraction Unit} The feature extraction unit consists of multiple dynamic graph convolution layers, building on DGCNN \cite{wang2019dynamic}. Given features ${\mathbf X}^{\ell}=\{{\mathbf x}_i^{\ell}\}_{i=1}^N \in \mathbb{R}^{N \times F^\ell}$ in the $\ell$th layer, the $(\ell+1)$th layer first constructs a $k$-Nearest-Neighbor ($k$-NN) graph based on the Euclidean distance between features, and then performs edge convolution \cite{wang2019dynamic} on the graph: \begin{equation} \label{eq:edgeconv} {\mathbf x}_i^{\ell+1} = G_{\ell}({\mathbf X}^{\ell}) = \operatorname{ReLU}\big( \max_{j \in \mathcal{N}(i)} H_\theta({\mathbf x}_i^{\ell}, {\mathbf x}_j^{\ell} - {\mathbf x}_i^{\ell}) \big). \end{equation} Here, $H_\theta$ is a densely connected multi-layer perceptron (MLP) parameterized by $\theta$, $\mathcal{N}(i)$ denotes the neighborhood of point $i$, and $\max$ is the element-wise max pooling function. To capture higher-order dependencies, multiple dynamic graph convolution layers are chained and connected densely within a feature extraction unit \cite{PPPU2019, liu2019densepoint, huang2017densely, li2019deepgcns}: \begin{equation} \label{eq:graphconv} {\mathbf X}^{\ell} = G_{\ell} \big([ {\mathbf X}^{\ell-1}, \ldots, {\mathbf X}^1, {\mathbf X}^0 ]\big), \end{equation} where $[\ldots]$ denotes the concatenation operation, and ${\mathbf X}^0$ is the input feature. As described above, we adopt dense connections both within and between graph convolution layers. Within graph convolution layers, the MLP $H_\theta$ is densely connected --- each fully connected (FC) layer's output is passed to all subsequent FC layers. Between graph convolution layers, the features output by each layer are fed to all subsequent layers.
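As a concrete illustration of the edge convolution in Eq.~\eqref{eq:edgeconv}, the sketch below replaces the MLP $H_\theta$ with a single random linear map and builds the feature-space $k$-NN graph by brute force; both simplifications are assumptions made for readability:

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_conv(X, k=8, F_out=16):
    """x_i' = ReLU( max_{j in N(i)} H([x_i, x_j - x_i]) ), with H a
    single random linear map standing in for the MLP H_theta."""
    N, F = X.shape
    W = rng.standard_normal((2 * F, F_out)) / np.sqrt(2 * F)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise distances
    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]            # k-NN graph, skip self
    out = np.empty((N, F_out))
    for i in range(N):
        # edge features [x_i, x_j - x_i] for every neighbor j of point i
        e = np.concatenate([np.tile(X[i], (k, 1)), X[nbrs[i]] - X[i]], axis=1)
        out[i] = (e @ W).max(axis=0)                     # element-wise max pool
    return np.maximum(out, 0.0)                          # ReLU

X0 = rng.standard_normal((64, 8))   # toy per-point input features
X1 = edge_conv(X0)
```

Because the graph is rebuilt from the current features at every layer, neighborhoods are dynamic: points that are close in feature space, not necessarily in 3D space, exchange information.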
Dense connections reduce the number of the network's parameters and produce features with richer contextual information \cite{PPPU2019,liu2019densepoint}. In addition, we assemble multiple feature extraction units with different values of $k$ for the $k$-NN graphs in parallel to obtain features of different scales, and concatenate them before passing them to downstream networks. The final output is a feature matrix ${\mathbf X} \in \mathbb{R}^{N \times F}$, where $N$ denotes the number of points and $F$ denotes the dimension of features. \subsubsection{Differentiable Pooling Operator} Having extracted multi-scale features from the input point cloud ${\mathbf P}$, we propose a differentiable pooling operator on point clouds to sample a subset of points ${\mathbf S}$ from ${\mathbf P}$ adaptively. Ideally, the operator will learn to identify points that are closer to the underlying surface, which capture the surface structure better and thus will be used to reconstruct the underlying manifold at the decoder. Different from existing pooling techniques that often employ hand-crafted rules such as random sampling and farthest point sampling \cite{qi2017pointnet2}, our differentiable pooling learns the optimal downsampling strategy adaptively during the training process. Now we formulate the differentiable pooling operator. Given the learned feature ${\mathbf X} \in \mathbb{R}^{N \times F}$ of the input point cloud ${\mathbf P}$ obtained from the feature extraction unit, our pooling operator first computes a score for each point: \begin{equation} \label{eq:score} {\mathbf s} = \operatorname{Score}({\mathbf X}), \end{equation} where $\operatorname{Score}(\cdot)$ is the score function implemented by an MLP that produces a score vector ${\mathbf s} \in \mathbb{R}^{N\times 1}$. During the end-to-end training process, the score function learns to assign higher scores to points closer to the underlying surface and lower scores to points perturbed by large noise.
Points in ${\mathbf P}$ that have top-$M$ ($M<N$) scores will be retained, while the others will be discarded: \begin{equation} \label{eq:gpool10} {\mathbf i} = \operatorname{arg \ top}_M ({\mathbf s}), \end{equation} \begin{equation} \label{eq:gpool1} {\mathbf S} = {\mathbf P}[{\mathbf i}], \end{equation} where ${\mathbf i}$ is the index vector of the top-$M$ points and ${\mathbf S} \in \mathbb{R}^{M\times 3}$ is the downsampled point set. In the experiments, we set $M=\frac{N}{2}$ without loss of generality. To make the score function differentiable so as to be trained by back propagation \cite{gao2019gpool}, we deploy the following gate operation on the features of the sampled point set ${\mathbf X}[{\mathbf i}]$ to acquire the features ${\mathbf Y}$ of ${\mathbf S}$: \begin{equation} \label{eq:gpool2} {\mathbf Y} = {\mathbf X}[{\mathbf i}] \odot \operatorname{sigmoid}({\mathbf s}[{\mathbf i}] \cdot \mathbf{1}^{1\times F}), \end{equation} where ${\mathbf Y} \in \mathbb{R}^{M \times F}$ is the feature matrix of ${\mathbf S}$ after the above gate operation, ${\mathbf X}[{\mathbf i}] \in \mathbb{R}^{M \times F} $ is the feature matrix of ${\mathbf S}$ before the gate operation, ${\mathbf s}[{\mathbf i}] \in \mathbb{R}^{M \times 1}$ is the score vector of the retained points, and $\odot$ denotes element-wise multiplication. To further reduce the noise variance of the sampled point set ${\mathbf S}$, we perform pre-filtering on ${\mathbf S}$: \begin{equation} \label{eq:gpool3} \hat{{\mathbf S}} = {\mathbf S} + \Delta {\mathbf S} , \\ \end{equation} \begin{equation} \Delta {\mathbf S} = \operatorname{MLP}({\mathbf Y}), \end{equation} where $\hat{{\mathbf S}}, \Delta {\mathbf S} \in \mathbb{R}^{M\times 3}$, and $\Delta {\mathbf S}$ is produced by an MLP that takes the feature matrix ${\mathbf Y}$ as input. 
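The pooling pipeline of Eqs.~\eqref{eq:score}--\eqref{eq:gpool3} can be condensed into a short sketch. The score and pre-filtering MLPs are random linear stand-ins (an assumption), but the top-$M$ selection and the sigmoid gate follow the formulation above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def differentiable_pool(P, X, M):
    """Score -> top-M selection -> feature gating -> pre-filtering."""
    N, F = X.shape
    w_score = rng.standard_normal(F)             # stand-in for the score MLP
    W_filt = 0.01 * rng.standard_normal((F, 3))  # stand-in for the filter MLP
    s = X @ w_score                              # one score per point
    idx = np.argsort(-s)[:M]                     # indices of the top-M scores
    S = P[idx]
    Y = X[idx] * sigmoid(s[idx])[:, None]        # gate lets gradients reach s
    S_hat = S + Y @ W_filt                       # pre-filtering offset
    return S_hat, Y, idx

P = rng.standard_normal((100, 3))
X = rng.standard_normal((100, 32))
S_hat, Y, idx = differentiable_pool(P, X, M=50)
```

The hard top-$M$ selection itself has no useful gradient; it is the sigmoid gate on the retained features that lets the training signal flow back into the score function.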
The pre-filtering term $\Delta {\mathbf S}$ moves each point in ${\mathbf S}$ closer to the underlying surface, which will lead to more accurate manifold reconstruction at the decoder to be discussed. \subsection{Manifold Reconstruction Decoder} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/patch.pdf} \caption{Illustration of the patch manifold reconstruction and resampling. Note that ${\widetilde{\mathbf P}}$ is resampled from the manifolds, so there is no strict point-to-point correspondence between $\hat{{\mathbf S}}$ and ${\widetilde{\mathbf P}}$.} \Description{Patch manifold reconstruction.} \label{fig:patch_m} \end{figure} The manifold reconstruction decoder transforms each point in the pre-filtered low-noise point set $\hat{{\mathbf S}}$ along with its embedded neighborhood feature matrix ${\mathbf Y}$ into a local surface centered around the point---referred to as a {\it patch manifold}. Afterwards, we upsample $\hat{{\mathbf S}}$ to a denoised point cloud ${\widetilde{\mathbf P}} \in \mathbb{R}^{N \times 3}$ based on the inferred patch manifolds. The whole process is illustrated in Fig.~\ref{fig:patch_m}. As discussed in Sec.~\ref{sec:encoder}, a feature vector ${\mathbf y}_i \in \mathbb{R}^F$ encodes the geometry of the {\it neighborhood surface} surrounding the point ${\mathbf p}_i \in \hat{{\mathbf S}}$, so that ${\mathbf y}_i$ can be transformed into a manifold that describes the local underlying surface around ${\mathbf p}_i$. We refer to such a locally defined manifold as a {\it patch manifold} around ${\mathbf p}_i$. Formally, we first define a 2D manifold $\mathcal{M}$ embedded in the 3D space parameterized by some feature vector ${\mathbf y}$ as: \begin{equation} \mathcal{M}(u, v; {\mathbf y}) : [-1,1] \times [-1, 1] \rightarrow \mathbb{R}^3, \label{eq:p_manifold} \end{equation} where $(u,v)$ is some point in the 2D rectangular area $[-1, 1]^2$.
Eq.~\eqref{eq:p_manifold} maps the 2D rectangle to an arbitrarily shaped patch manifold parameterized by ${\mathbf y}$. Such a mapping allows us to draw samples from the arbitrarily shaped patch manifold $\mathcal{M}$ in the following way: we first draw samples from the uniform distribution over $[-1, 1]^2$ and then transform them into the 3D space via the mapping. Having defined a mapping to manifold $\mathcal{M}$, it is natural to define the patch manifold $\mathcal{M}_i$ around each point ${\mathbf p}_i$ in $\hat{{\mathbf S}}$ as: \begin{equation} \mathcal{M}_i(u,v;{\mathbf y}_i) = {\mathbf p}_i + \mathcal{M}(u, v; {\mathbf y}_i), \end{equation} which moves the constructed manifold $\mathcal{M}(u, v; {\mathbf y}_i)$ to a local surface centered at ${\mathbf p}_i$. Now we have a set of patch manifolds $\{\mathcal{M}_i | {\mathbf p}_i \in \hat{{\mathbf S}}\}_{i=1}^M$, which characterize the underlying surface of the point cloud. By sampling on these $M$ patch manifolds, we can obtain the denoised point set ${\widetilde{\mathbf P}}$. Specifically, we assume the number of points in the subsampled point set is half of that in the input point set, {\it i.e.}, $M = |\hat{{\mathbf S}}| = \frac{1}{2}|{\mathbf P}|$. In order to acquire a denoised point set ${\widetilde{\mathbf P}}$ that has the same size as the input point set ${\mathbf P}$, we need to sample twice on each patch manifold. Hence, it is essentially an upsampling process. In practice, the parameterized patch manifold $\mathcal{M}_i(u, v; {\mathbf y}_i)$ is implemented by an MLP: \begin{equation} \mathcal{M}_i(u,v;{\mathbf y}_i) = \operatorname{MLP}_{\mathcal{M}}([u, v, {\mathbf y}_{i}]). \end{equation} We choose the MLP implementation because it is a universal function approximator \cite{leshno1993multilayer} which is expressive enough to approximate arbitrarily shaped manifolds.
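A minimal sketch of this patch manifold parameterization, with $\operatorname{MLP}_{\mathcal{M}}$ replaced by a random linear map (an illustrative assumption): drawing two uniform $(u, v)$ samples per point upsamples the $M$ pooled points back to $2M$ output points.

```python
import numpy as np

rng = np.random.default_rng(0)
M, F = 50, 16
W = 0.1 * rng.standard_normal((F + 2, 3))     # stand-in for MLP_M

def patch_manifold(u, v, y, p):
    """M_i(u, v; y_i) = p_i + MLP([u, v, y_i])."""
    return p + np.concatenate(([u, v], y)) @ W

S_hat = rng.standard_normal((M, 3))           # pre-filtered pooled points
Yfeat = rng.standard_normal((M, F))           # their neighborhood features
uv = rng.uniform(-1.0, 1.0, size=(M, 2, 2))   # two (u, v) draws per point
P_out = np.array([patch_manifold(uv[i, j, 0], uv[i, j, 1], Yfeat[i], S_hat[i])
                  for i in range(M) for j in range(2)])
```

Note that all patch manifolds share one mapping; only the conditioning feature ${\mathbf y}_i$ and the center ${\mathbf p}_i$ differ per point.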
Then, we sample two points from each patch manifold $\mathcal{M}_i(u, v; {\mathbf y}_i)$, leading to a denoised point cloud: \begin{equation} {\widetilde{\mathbf P}} = \begin{pmatrix} {\mathbf p}_1 + \operatorname{MLP}_\mathcal{M}([u_{11}, v_{11}, {\mathbf y}_1]) \\ {\mathbf p}_1 + \operatorname{MLP}_\mathcal{M}([u_{12}, v_{12}, {\mathbf y}_1]) \\ \vdots \\ {\mathbf p}_M + \operatorname{MLP}_\mathcal{M}([u_{M1}, v_{M1}, {\mathbf y}_M]) \\ {\mathbf p}_M + \operatorname{MLP}_\mathcal{M}([u_{M2}, v_{M2}, {\mathbf y}_M]) \end{pmatrix}. \end{equation} To summarize, by learning a parameterized patch manifold $\mathcal{M}_i(u,v;{\mathbf y}_i)$, $i = 1, \ldots, M = |\hat{{\mathbf S}}|$, for each point ${\mathbf p}_i$ in $\hat{{\mathbf S}}$ and sampling on each patch manifold, we reconstruct a clean point cloud from the noisy input. \subsection{Loss Functions} We present loss functions for supervised training and unsupervised training respectively. \subsubsection{Supervised Training Loss} We consider a {\it dual loss} in the setting of supervised training to measure the quality of both subsampling and final point cloud reconstruction. That is, we have two parts in the supervised loss function, including 1) a loss function $\mathcal{L}_{\mathrm{sample}}$ to quantify the distance between the subsampled and pre-filtered set $\hat{{\mathbf S}}$ and the ground truth point cloud ${\mathbf P}_\mathrm{gt}$, which explicitly reduces the noise in $\hat{{\mathbf S}}$ but is not required for the convergence of the training; 2) a loss function $\mathcal{L}_{\mathrm{rec}}$ to quantify the distance between the finally reconstructed point cloud ${\widetilde{\mathbf P}}$ and the ground truth ${\mathbf P}_\mathrm{gt}$. Formally, our network can be trained end-to-end in a supervised fashion by minimizing \begin{equation} \min_{\Theta} \mathcal{L}_{\mathrm{sample}} + \mathcal{L}_{\mathrm{rec}}, \end{equation} where $\Theta$ denotes the learnable parameters in the network.
We choose the Chamfer distance (CD) \cite{fan2017pointsetgen} as $\mathcal{L}_{\mathrm{sample}}$, since $\hat{{\mathbf S}}$ and ${\mathbf P}_\mathrm{gt}$ have different numbers of points, {\it i.e.}, $|\hat{{\mathbf S}}| < |{\mathbf P}_\mathrm{gt}|$. It is defined as \begin{equation} \label{eq:cd_train} \mathcal{L}_{\mathrm{sample}} = \mathcal{L}_{\mathrm{CD}}(\hat{{\mathbf S}},{\mathbf P}_\mathrm{gt})= \frac{1}{\left|\hat{{\mathbf S}}\right|} \sum_{{\mathbf p} \in \hat{{\mathbf S}}} \min _{{\mathbf q} \in {\mathbf P}_\mathrm{gt}}\| {\mathbf p} - {\mathbf q}\|_2^2+\frac{1}{\left|{\mathbf P}_\mathrm{gt}\right|} \sum_{{\mathbf q} \in {\mathbf P}_\mathrm{gt}} \min _{{\mathbf p} \in \hat{{\mathbf S}}}\| {\mathbf q} - {\mathbf p}\|_2^2. \end{equation} This loss term improves the denoising quality by explicitly optimizing the sampled and pre-filtered set $\hat{{\mathbf S}}$, but is optional for the network training. We choose the Earth Mover's distance (EMD) \cite{fan2017pointsetgen} as $\mathcal{L}_{\mathrm{rec}}$, which has been shown to be superior to the Chamfer distance in terms of visual quality \cite{liu2019morphing, achlioptas2018learning}. The Earth Mover's distance is defined only when two point clouds have the {\it same} number of points. Fortunately, the denoising task naturally satisfies this requirement. The EMD loss measuring the distance between the denoised point cloud ${\widetilde{\mathbf P}}$ and the ground truth point cloud ${\mathbf P}_\mathrm{gt}$ is given by: \begin{equation} \mathcal{L}_{\mathrm{rec}}=\mathcal{L}_{\mathrm{EMD}}\left({\widetilde{\mathbf P}}, {\mathbf P}_\mathrm{gt}\right)=\min _{\varphi: {\widetilde{\mathbf P}} \rightarrow {\mathbf P}_\mathrm{gt}} \frac{1}{N} \sum_{{\mathbf p} \in {\widetilde{\mathbf P}}}\|{\mathbf p}-\varphi({\mathbf p})\|_{2}^2, \end{equation} where $N = |{\widetilde{\mathbf P}}| = |{\mathbf P}_\mathrm{gt}|$, and $\varphi$ is a bijection.
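Both training losses can be written compactly in NumPy. The Chamfer loss follows Eq.~\eqref{eq:cd_train}; for the EMD, the brute-force search over bijections below is only viable for tiny toy sets and merely stands in for a proper assignment solver:

```python
import numpy as np
from itertools import permutations

def chamfer(A, B):
    """Squared-L2 Chamfer distance; |A| and |B| may differ."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def emd(A, B):
    """EMD for equal-sized sets: best bijection, found by brute force here."""
    assert len(A) == len(B)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rows = np.arange(len(A))
    return min(d2[rows, list(p)].mean() for p in permutations(range(len(B))))

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
B = A + 0.1   # a slightly shifted copy of A
```

The Chamfer term tolerates the size mismatch $|\hat{{\mathbf S}}| < |{\mathbf P}_\mathrm{gt}|$, while the EMD term exploits the equal sizes of ${\widetilde{\mathbf P}}$ and ${\mathbf P}_\mathrm{gt}$ to enforce a one-to-one matching.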
Note that, previous works on denoising \cite{PCN2020, TotalDenoising2019} often suffer from the clustering effect of points, which is alleviated by introducing a repulsion loss. Our architecture does not suffer from this problem thanks to the one-to-one correspondence of points in $\mathcal{L}_\mathrm{EMD}$. \subsubsection{Unsupervised Training Loss} Our network can also be trained in an unsupervised fashion. Leveraging the unsupervised denoising loss in \cite{TotalDenoising2019}, we design an unsupervised loss tailored for our manifold reconstruction based denoising. The key observation is that points with a denser neighborhood are closer to the underlying clean surface, which may be regarded as ground truth points for training a denoiser. In \cite{TotalDenoising2019}, the unsupervised denoising loss is defined as \begin{equation} \label{eq:unsup} \mathcal{L}_{\mathrm{U}} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{{\mathbf q} \sim P({\mathbf q} | {\mathbf p}_i)} \| f({\mathbf p}_i) - {\mathbf q} \|, \end{equation} where $P({\mathbf q}|{\mathbf p}_i)$ is a prior capturing the probability that a point ${\mathbf q}$ from the noisy point cloud is the underlying clean point of the given ${\mathbf p}_i$ in the noisy point cloud. Empirically, $P({\mathbf q}|{\mathbf p}_i)$ is defined as $ P({\mathbf q}|{\mathbf p}_i) \propto \exp ( - \frac{\| {\mathbf q} - {\mathbf p}_i \|_2^2}{2\sigma^2} )$, so that sampling points from the noisy input point cloud ${\mathbf P}$ according to $P({\mathbf q}|{\mathbf p}_i)$ produces points that are closer to the underlying clean surface with high probability \cite{TotalDenoising2019}. $f(\cdot)$ represents the denoiser that maps a noisy point ${\mathbf p}_i$ to a denoised point ${\mathbf q}_i$. It is a bijection between the noisy point cloud ${\mathbf P}$ and the output point cloud ${\widetilde{\mathbf P}}$. 
This bijection can be naturally established in previous deep-learning based denoising methods such as PointCleanNet \cite{PCN2020} and Total Denoising \cite{TotalDenoising2019}, because these methods predict the displacement of each point. However, there is no natural one-to-one correspondence between ${\mathbf P}$ and ${\widetilde{\mathbf P}}$ in our method based on patch manifolds. Hence, we seek to construct one by \begin{equation} f=\arg\min _{f: {\mathbf P} \rightarrow {\widetilde{\mathbf P}}} \sum_{{\mathbf p} \in {\mathbf P}}\|f({\mathbf p}) - {\mathbf p}\|_{2}. \end{equation} Having established the bijection $f$ between ${\mathbf P}$ and ${\widetilde{\mathbf P}}$, the unsupervised loss $\mathcal{L}_{\mathrm{U}}$ in Eq.~\eqref{eq:unsup} can be computed. \subsection{Analysis} Intuitively, our method can be regarded as a generalization of local-surface-fitting based denoising methods. As discussed in Section~\ref{sec:related-nondl}, local-surface-fitting based methods divide the point cloud into patches and fit each patch via approximations. The {\it patch manifold} defined in our method is essentially a local surface surrounding some point in the subsampled point set $\hat{{\mathbf S}}$, which is analogous to patches in local-surface-fitting based methods. Our manifold reconstruction decoder leverages neural networks to infer the shape of patch manifolds, which is analogous to patch fitting. Another intuitive interpretation of our method is that our differentiable pooling layer is analogous to a low-pass filter which removes high-frequency components ({\it i.e.}, noise), while the manifold reconstruction is similar to high-pass filtering which recovers details from the embedded neighborhood features to avoid over-smoothing. \begin{table*} \caption{Comparison of denoising algorithms. Each resolution and noise level is evaluated by 60 point clouds of different shapes from our collected test dataset, which is a subset of ModelNet-40.
} \input{tables/different_methods} \label{tab:comp} \end{table*} \section{Experimental Results} \label{sec:experiments} \begin{table} \caption{Comparison of different denoising methods on the point clouds generated by simulated LiDAR scanning with realistic LiDAR noise. LiDAR noise is an unseen noise pattern to our denoiser since we train our denoiser only on Gaussian noise. Results show that our denoiser is \textit{\textbf{effective in generalizing to unseen noise patterns}}, and its generalizability is better than that of other denoisers.} \input{tables/blensor} \label{tab:blensor} \end{table} In this section, we compare our method quantitatively and qualitatively with state-of-the-art denoising methods. \subsection{Experimental Setup} \noindent \textbf{Dataset.} For training, we collected 13 classes from ModelNet-40 \cite{wu2015modelnet}, with 7 different meshes per class. We use Poisson disk sampling to sample points from the meshes, at resolution levels ranging from 10K to 50K points. The point clouds are then perturbed by Gaussian noise with standard deviation from 1\% to 3\% of the bounding box diagonal. Due to limited GPU memory, we split the point clouds into patches consisting of 1024 points and feed them into the neural network. For testing, we collected 20 classes with 3 meshes each, which are different from the training set. Similarly, we use Poisson disk sampling at resolution levels of 20K and 50K points to generate point clouds and perturb them by Gaussian noise with standard deviation of 1\%, 2\%, 2.5\% and 3\% of the bounding box diagonal, leading to 8 resolution-noise settings, each with 60 point clouds. Furthermore, to examine the generalizability of our method to unseen noise patterns, we also generate 60 noisy point clouds via LiDAR simulators. The simulator we use is the simulation package Blensor \cite{gschwandtner2011blensor}, which can produce more realistic point clouds and noise. We use Velodyne HDL-64E2 as the scanner model in simulations.
Similar to training, we split the point clouds into patches using the K-means algorithm, and feed them separately into the denoiser. For qualitative evaluation, we additionally use the {\it Paris-rue-Madame} dataset \cite{serna2014paris}, which is obtained from the real world via laser scanners. \newline \noindent \textbf{Metrics.} We use the Chamfer distance (CD) \cite{fan2017pointsetgen} between the ground truth point cloud ${\mathbf P_\mathrm{gt}}$ and the output point cloud ${\widetilde{\mathbf P}}$ as an evaluation metric: \begin{equation} \label{eq:cd_eval} \mathcal{C}({\mathbf P_\mathrm{gt}},{\widetilde{\mathbf P}}) = \frac{1}{\left| {\widetilde{\mathbf P}} \right|} \sum_{{\mathbf q} \in {\widetilde{\mathbf P}}} \min _{{\mathbf p} \in {\mathbf P_\mathrm{gt}}}\|{\mathbf q} - {\mathbf p}\|_2 + \frac{1}{\left| {\mathbf P_\mathrm{gt}} \right|} \sum_{{\mathbf p} \in {\mathbf P_\mathrm{gt}}} \min _{{\mathbf q} \in {\widetilde{\mathbf P}}}\|{\mathbf p} - {\mathbf q}\|_2, \end{equation} where the first term measures the distance from each output point to the target surface, and the second term intuitively rewards an even distribution of the output point cloud on the target surface \cite{PCN2020}. Note that, in Eq.~\eqref{eq:cd_eval}, we use the $\ell 2$ distance, different from the \textit{squared} $\ell 2$ distance used in Eq.~\eqref{eq:cd_train}, which is a term in the loss function. This is because computing the $\ell 2$ distance involves a square root operation, which is not preferable in the loss function due to its numerical instability.
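The evaluation metric of Eq.~\eqref{eq:cd_eval} (plain $\ell 2$, not squared) reduces to a few lines; brute-force nearest-neighbor search is used here instead of a spatial index, purely for clarity:

```python
import numpy as np

def chamfer_eval(P_gt, P_out):
    """Evaluation-time Chamfer distance: un-squared L2 nearest-neighbor
    distances, averaged in both directions."""
    d = np.sqrt(((P_out[:, None, :] - P_gt[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean() + d.min(axis=0).mean()

gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
out = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, -0.1]])
err = chamfer_eval(gt, out)   # each point lies 0.1 away from its match
```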
As our method aims to reconstruct the underlying surface, we also use the point-to-surface distance (P2S): \begin{equation} \mathcal{P}({\widetilde{\mathbf P}}, \mathcal{S}) = \frac{1}{| {\widetilde{\mathbf P}} |} \sum_{{\mathbf p} \in {\widetilde{\mathbf P}}} \min_{{\mathbf q} \in \mathcal{S}} \| {\mathbf p} - {\mathbf q} \|_2, \end{equation} where $\mathcal{S}$ is the underlying surface of the ground truth point cloud ${\mathbf P_\mathrm{gt}}$. These two metrics measure the distance between the denoised point cloud and the ground truth one, with smaller values indicating better results. \textbf{Iterative denoising.} For point clouds at higher noise levels, best possible results are obtained by iterative denoising ({\it i.e.}, feeding the output of the network as the input again), which is similar to previous neural denoising methods such as PCNet and TotalDn. However, compared to them, our method requires much fewer iterations to get the best possible results. We tune the number of iterations for PCNet, TotalDn and our denoiser, and find that for 1\% Gaussian noise, only 1 iteration is required for our denoiser, while 8 iterations are required for PCNet and TotalDn. \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/visualization.pdf} \caption{Visual comparison of denoising methods. (a) Simulated scanner noise. (b) 1\% Gaussian noise. (c) 2\% Gaussian noise.} \Description{Visual comparison.} \label{fig:visual} \end{figure*} \subsection{Quantitative Results} We compare both supervised and unsupervised versions of our method quantitatively to state-of-the-art deep-learning based denoising methods as well as non-deep-learning based methods, including PointCleanNet (PCNet) \cite{PCN2020}, TotalDenoising (TotalDn) \cite{TotalDenoising2019}, bilateral filter \cite{bilat}, Jet fitting \cite{jetsfit2005}, MRPCA \cite{MRPCA2017} and GLR \cite{GLR2019}. 
For each resolution and noise level, we compute the Chamfer distances (CD) and the point-to-surface (P2S) distances based on the 60 point clouds. Table \ref{tab:comp} shows that the supervised version of our method significantly outperforms previous deep-learning based methods as well as non-deep-learning denoisers. The unsupervised version is inferior to its supervised counterpart, but still outperforms Total Denoising, which is also unsupervised, at higher noise levels. Also, the unsupervised version performs better than non-deep-learning denoisers at 2\%, 2.5\% and 3\% noise levels. In general, our method outperforms previous denoising methods, and is more robust to high noise levels. To examine our method's generalizability, we also conduct evaluations on point clouds perturbed by simulated LiDAR noise. Table \ref{tab:blensor} shows that while our denoiser is trained on Gaussian noise, it is effective in generalizing to the unseen LiDAR noise pattern and performs much better than previous methods. \textbf{Discussion on results under different metrics.} We notice that the superiority of our method is more significant when measured by the point-to-surface distance (P2S), compared to the Chamfer distance (CD), which is essentially a point-to-point distance. This is because our denoiser reconstructs the underlying manifold of the point cloud and resamples on it. Resampling on the manifold does not guarantee that the newly sampled points are close to the points from the original point cloud, which may lead to comparatively larger point-to-point distances. However, the point-to-point distance may not reflect the quality of surface reconstruction well, while the point-to-surface distance generally provides a better measurement as point clouds are representations of 3D surfaces. Also, \cite{javaheri2017subjective} finds that the point-to-surface distance is more correlated with subjective evaluation of denoising results.
Our method's significant superiority in the point-to-surface distance indicates that its results are more visually preferable than those of previous methods, as discussed below. \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/ruemadame.pdf} \caption{Qualitative results of our denoiser on the real world dataset \textit{Paris-rue-Madame}.} \Description{The neural network.} \label{fig:rue} \end{figure*} \subsection{Qualitative Results} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/downsample.pdf} \caption{Visualization of the intermediate subsampled point set. Our differentiable pooling operator is effective in sampling points with lower noise.} \Description{Visualization of subsampling} \label{fig:sample} \end{figure} We demonstrate the comparison of visual denoising results under simulated scanner noise and Gaussian noise with different noise levels in Figure \ref{fig:visual}. The reconstruction error of each point is measured by the point-to-surface distance. Points with smaller errors are colored blue, while points with larger errors are colored yellow, as indicated in the color bar. The figure shows that our results are much cleaner and exhibit more visually pleasing surfaces than other methods, especially at higher noise levels. Specifically, our method is more robust to outliers compared to the other two deep-learning based methods TotalDn and PCNet. Compared to the two state-of-the-art non-deep-learning methods MRPCA and GLR, our method explicitly reconstructs the geometry of the underlying surface and thus can produce results with lower bias. In summary, the qualitative results in Figure \ref{fig:visual} are in line with the quantitative results in Tables \ref{tab:comp} and \ref{tab:blensor}. Further, we conduct qualitative studies on the real world dataset {\it Paris-rue-Madame}. Note that the ground truth point cloud is unknown, so the error of each point cannot be visualized as in the synthesized datasets.
As demonstrated in Figure \ref{fig:rue}, our denoising result is much cleaner and smoother than that of PCNet, while details are well preserved. This validates that our method is effective in generalizing to real-world datasets. In addition, we visualize the intermediate subsampled point set output by the differentiable pooling layer in Fig. \ref{fig:sample}. The figure reveals that our differentiable pooling layer is effective in sampling points with lower noise, which provides a good initialization for the reconstruction of patch manifolds. \subsection{Ablation Studies} \begin{table} \caption{Ablation studies. All the proposed components contribute positively to the performance.} \input{tables/ablations2p} \label{tab:ablations} \end{table} We conduct progressive ablation studies to evaluate the contribution of each component: \begin{enumerate} \item \textbf{Differentiable pooling.} We replace the differentiable pooling layer with a static pooling layer, which downsamples point clouds by random sampling. \item \textbf{Dual loss functions.} We remove the Chamfer loss ($\mathcal{L}_\mathrm{CD}$) that explicitly measures the quality of pooling (sampling) and pre-filtering, and employ only the EMD loss ($\mathcal{L}_\mathrm{EMD}$) for final reconstruction. \end{enumerate} The evaluation is based on point clouds of 50K points with 2\% Gaussian noise in our test set. As shown in Table \ref{tab:ablations}, all components contribute positively to the full model. The differentiable pooling enables the denoiser to sample points with lower noise perturbation, relieving the burden on pre-filtering, since the noise level of the input to the pre-filtering layer is lowered. The dual loss functions explicitly guide the pre-filtering layer to learn to denoise, leading to a more accurate subset of points that characterizes the underlying manifold, eventually improving the quality of manifold reconstruction.
In summary, the above two components boost the performance of manifold reconstruction, resulting in better denoising output. \section{Conclusion} In this paper, we propose a novel paradigm of learning the underlying manifold of a noisy point cloud from differentiably subsampled points. We sample points that tend to be closer to the underlying surfaces via an adaptive differentiable pooling operation. Then, we infer patch manifolds by transforming each sampled point along with its embedded neighborhood feature to a local surface. By sampling on each patch manifold, we reconstruct a clean point cloud that captures the intrinsic structure. Our network can be trained end-to-end in either a supervised or unsupervised fashion. Extensive experiments demonstrate the superiority of our method compared to the state-of-the-art methods under both synthetic noise and real-world noise.
{ "timestamp": "2020-08-11T02:16:37", "yymm": "2007", "arxiv_id": "2007.13551", "language": "en", "url": "https://arxiv.org/abs/2007.13551" }
\section{Introduction} Core-collapse supernova (CCSN) explosions are among the most energetic astrophysical phenomena in which neutrino emission is a major effect~\cite{Janka:2012wk, Burrows:2012ew}. Neutrino flavor evolution in CCSNe is a very rich and nonlinear phenomenon in which neutrinos can experience collective oscillations due to the high density of the ambient neutrino gas in the SN environment~\cite{Pastor:2002we,duan:2006an, duan:2006jv, duan:2010bg}. In this paper, we study collective neutrino oscillations in the presence of SN turbulent matter density fluctuations, which, as discussed below, can significantly impact the physics of neutrino oscillations in CCSNe. Collective neutrino oscillations could significantly impact the physics of CCSNe. On the one hand, they could influence the SN dynamics and the nucleosynthesis of heavy elements~\cite{Qian:1996xt} in the SN environment by modifying the neutrino and antineutrino energy spectra and, consequently, their interaction rates. On the other hand, an understanding of collective neutrino oscillations is crucial for future observations of galactic CCSN neutrino signals~\cite{Scholberg:2012id, GalloRosso:2018ugl} and the upcoming measurements of the diffuse supernova neutrino background~\cite{Beacom:2010kk}. The first studies on collective neutrino oscillations in CCSNe were carried out in maximally symmetric models, e.g., the stationary spherically symmetric \emph{neutrino bulb} model~\cite{duan:2006an, duan:2006jv, duan:2007sh, dasgupta:2009mg, duan:2010bg, Galais:2011gh, Duan:2007bt, Galais:2011gh, Duan:2015cqa}. Within these simplistic models it was observed that the onset of collective neutrino oscillations can be at radii much smaller than that of the conversions induced by ordinary matter via the Mikheyev-Smirnov-Wolfenstein (MSW) mechanism (at least in CCSNe with iron cores).
Despite this, collective oscillations were still found to be suppressed in very deep SN regions due to the presence of high neutrino/matter densities~\cite{Duan:2010bf,Sarikas:2011am,Chakraborty:2011nf}. However, it was then realized that in multidimensional (multi-D) time-dependent SN models, these suppressions can be lifted thanks to the breaking of spatial/temporal symmetries~\cite{raffelt:2013rqa, duan:2013kba, duan:2014gfa, abbar:2015mca, Abbar:2015fwa, chakraborty:2015tfa, Chakraborty:2016yeg, Mirizzi:2015fva, Martin:2019kgi, Mangano:2014zda, Martin:2019dof}. Yet, in any realistic SN model, the physical conditions change so quickly that any unstable mode becomes stable before neutrinos can experience significant flavor conversions~\cite{Chakraborty:2016yeg}. This means that in spite of the existence of flavor instabilities, significant flavor conversions should be unlikely to occur in the deepest regions of the SN core. Nevertheless, it was then recognized that neutrinos can also experience the so-called \emph{fast} flavor conversions on scales much shorter than those of traditional (slow) modes~\cite{Sawyer:2005jk, Sawyer:2015dsa, Chakraborty:2016lct, Izaguirre:2016gsx, Wu:2017qpc, Capozzi:2017gqd, Richers:2019grc, Abbar:2017pkh, Abbar:2018beu, Capozzi:2018clo, Martin:2019gxb, Capozzi:2019lso, Doring:2019axc, Johns:2019izj, Shalgar:2019qwg, Cherry:2019vkv, Chakraborty:2019wxe, Abbar:2020fcl, Capozzi:2020kge, Xiong:2020ntn, Bhattacharyya:2020dhu, Shalgar:2020xns}. The fast scales are determined by the neutrino number density, $n_\nu$, and can be as short as a few cm in the deepest SN zones, as opposed to those of slow modes, which are determined by the vacuum frequency, $\omega = \Delta m^2/2E$, and occur on scales of $\sim$ a few km (for $\Delta m^2_{\rm{atm}}$ and $E=10$ MeV neutrinos).
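The quoted slow-mode scale can be checked with a back-of-the-envelope conversion to natural units (a sketch; the value of $\Delta m^2_{\rm{atm}}$ is the standard approximate one, not taken from this paper):

```python
# Estimate the slow-mode vacuum frequency omega = dm^2 / (2E) for the
# atmospheric mass splitting and a 10 MeV neutrino, then convert to km^-1.
HBAR_C_EV_M = 1.97327e-7      # hbar*c in eV*m, converts eV to inverse metres
dm2_atm_eV2 = 2.5e-3          # eV^2, approximate atmospheric splitting
E_eV = 10e6                   # 10 MeV

omega_eV = dm2_atm_eV2 / (2 * E_eV)          # vacuum frequency in eV
omega_per_km = omega_eV / HBAR_C_EV_M * 1e3  # km^-1
scale_km = 1 / omega_per_km                  # corresponding length scale
print(f"omega ~ {omega_per_km:.2f} km^-1, scale ~ {scale_km:.1f} km")
```

This gives $\omega \sim 0.6\ \mathrm{km}^{-1}$, i.e.\ a scale of order a few km, consistent with the statement above.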
Besides their phenomenological importance, perhaps the most remarkable physical consequence of fast modes is that they can lead to the occurrence of collective neutrino oscillations in the deepest regions of the SN core. This is because they occur on scales short enough that the unstable modes can experience significant flavor conversions before the physical conditions vary significantly. In spite of their importance, fast modes do not seem to be a generic feature of CCSNe and, even if they exist, they are thought to be present only in a finite region of the SN core~\cite{Abbar:2018shq, Abbar:2019zoq, DelfanAzari:2019tez, Nagakura:2019sig, Morinaga:2019wsv, Glas:2019ijo}. Additionally, fast modes may also be less likely to occur in non-exploding SN models~\cite{Abbar:2020qpi}. Turbulence plays a crucial role in CCSNe~\cite{Abdikamalov:2014oba, Mabanta:2017kyb,Couch:2014kza,Radice:2017kmj,Meakin:2006uh}. The impact of turbulent density fluctuations on neutrino oscillations has been extensively studied in 1D models~\cite{Ma:2018key, Lund:2013uta, Reid:2011zz,Fogli:2006xy, Cherry:2011fm,Patton:2014lza,Yang:2015oya,Friedland:2006ta, Kneller:2007kg,Kneller:2010sc,Borriello:2013tha}, where it can induce flavor conversions through parametric resonances. Here, we demonstrate that the presence of turbulence in CCSNe can also induce \emph{collective} neutrino flavor conversion modes via an entirely different mechanism, i.e., the \emph{leakage} of flavor instabilities between different Fourier modes. This novel effect can significantly influence neutrino flavor evolution in the SN environment and, in particular, it can lead to the presence of traditional (slow) collective neutrino oscillations in the deepest SN regions even in the absence of fast modes. What makes this novel effect more promising is that it survives even for tiny turbulence amplitudes.
\section{Linear Stability Analysis} We start by deriving the equation of neutrino flavor evolution in the linear regime, in the two-flavor scenario where the flavor content of \textit{a} neutrino can be described as \begin{align} \varrho = \frac{f_{\nu_e} + f_{\nu_x}}{2} + \frac{f_{\nu_e} - f_{\nu_x}}{2} \begin{bmatrix} s & S \\ S^* & -s \end{bmatrix}, \end{align} where $f_\nu$'s are the initial neutrino occupation numbers, and $S$ and $s$ carry information on neutrino flavor coherence and conversion, respectively. In the absence of collisions, the flavor evolution of the neutrino gas can be described by the Liouville-von Neumann equation ($c=\hbar=1$)~\cite{Sigl:1992fn,Strack:2005ux,Cardall:2007zw,Volpe:2013jgr, Vlasenko:2013fja} \begin{equation} i (\partial_t + \mathbf{v} \cdot \bm{\nabla}) \varrho_{E, \mathbf{v}} = \left[ \frac{\mathsf{M}^2}{2E} + \frac{ \lambda}{2}\sigma_3 + \mathsf{H}_{\nu \nu, \mathbf{v}}, \varrho_{E, \mathbf{v}}\right], \label{eq:EOM1} \end{equation} where $\mathbf{v}$ is the neutrino velocity and $\lambda = \sqrt2 G_{\mathrm{F}} n_e$ is the matter contribution to the neutrino Hamiltonian~\cite{Wolfenstein:1977ue,Mikheev:1986gs}. Here, the energies and occupation numbers are taken to be positive for neutrinos and negative for antineutrinos, $\sigma_3$ is the third Pauli matrix and \begin{equation} \mathsf{H}_{\nu \nu, \mathbf{v}} = \sqrt2 G_{\mathrm{F}} \int_{-\infty}^{\infty} \frac{E'^2\mathrm{d}E'}{(2\pi)^3} \int\! \mathrm{d}\mathbf{v}' \varrho_{E',\mathbf{v}'}(1- \mathbf{v} \cdot \mathbf{v}') \end{equation} is the contribution from the neutrino-neutrino forward scattering \cite{Fuller:1987aa,Notzold:1988kx,Pantaleone:1992xh}. We are here interested in the flavor stability analysis of neutrinos in the linear regime, where the flavor conversion is still insignificant and one has $s \simeq1$ and $|S| \ll 1$.
By only keeping terms of $\mathcal{O}(|S|)$ in Eq.~(\ref{eq:EOM1}), one reaches~\cite{Banerjee:2011fj, Vaananen:2013qja} \begin{equation}\label{eq:EOM} i (\partial_t + \mathbf{v} \cdot \bm{\nabla}) S_{E,\mathbf{v}} = \big( \omega + \lambda + \Lambda_{\nu \nu, \mathbf{v}} \big) S_{E,\mathbf{v}} - h_{\nu \nu, \mathbf{v}}, \end{equation} where, with the definition $g_{E, \mathbf{v}} = 2\varrho_{E, \mathbf{v}}^{00}(t=0)$, \begin{align} h_{\nu \nu, \mathbf{v}} &= \sqrt2 G_{\mathrm{F}} \int_{-\infty}^{\infty} \frac{E'^2\mathrm{d}E'}{(2\pi)^3}\! \int\! \mathrm{d}\mathbf{v}' g_{E', \mathbf{v}'} S_{E',\mathbf{v'}} ( 1- \mathbf{v} \cdot \mathbf{v}'), \nonumber\\ \Lambda_{\nu \nu, \mathbf{v}} &= \sqrt2 G_{\mathrm{F}} \int_{-\infty}^{\infty} \frac{E'^2\mathrm{d}E'}{(2\pi)^3}\! \int\! \mathrm{d}\mathbf{v}' g_{E', \mathbf{v}'} ( 1- \mathbf{v} \cdot \mathbf{v}'). \end{align} Eq.~(\ref{eq:EOM}) provides a linear set of equations for which one can try collective solutions of the form $S_{E, \mathbf{v}} = Q^{\Omega, \mathbf{k}}_{E, \mathbf{v}} e^{-i\Omega t+i\mathbf{k\cdot x}}$ where $\Omega$ and $\mathbf{k}$ satisfy the dispersion relation (DR) equation corresponding to Eq.~(\ref{eq:EOM}). In a homogenous medium, this leads to \begin{equation}\label{eq:lin} (-\Omega + \mathbf{v} \cdot \mathbf{k} + \omega + \lambda + \Lambda_{\nu \nu, \mathbf{v}}) Q^{\Omega, \mathbf{k}}_{E, \mathbf{v}} = h^{\Omega, \mathbf{k}}_{\nu \nu, \mathbf{v}}. 
\end{equation} Note that different Fourier modes are decoupled, which means that $\mathbf{k}$ is just a \emph{parameter} here and one only needs to find the functional form of $Q^{\Omega, \mathbf{k}}_{E, \mathbf{v}}$ in the $E-\mathbf{v}$ space for a solution of the DR equation. \section{Turbulent Matter Fluctuations} It simply follows from Eq.~(\ref{eq:lin}) that in a homogenous medium where the matter density is constant, the matter potential $\lambda$ can be absorbed into the real part of $\Omega$ and therefore does not affect the stability condition of the dense neutrino gas. However, if the matter is not constant and spatial density fluctuations are present, Eq.~(\ref{eq:lin}) changes to \begin{equation}\label{eq:turb} (-\Omega + \mathbf{v} \cdot \mathbf{k} + \omega + \Lambda_{\nu \nu, \mathbf{v}}) Q_{E, \mathbf{v}}^{\Omega,\mathbf{k}} +\int\! \frac{\mathrm{d}^3 \mathbf{k'}}{(2\pi)^3}\ \lambda_{\mathbf{k'}} Q_{E,\mathbf{v}}^{\Omega, \mathbf{k-k'}} = h^{\Omega, \mathbf{k}}_{\nu \nu, \mathbf{v}}, \end{equation} where $\lambda_{\mathbf{k}}$ is the Fourier component of the matter potential~\cite{Airen:2018nvp}. Note that, most remarkably, different Fourier modes are now coupled through the turbulence-induced convolution term and simple plane waves are no longer eigenvectors of Eq.~(\ref{eq:turb}). This implies that in order to solve Eq.~(\ref{eq:turb}), one should also consider the distribution of $Q_{E, \mathbf{v}}^{\Omega, \mathbf{k}}$ in Fourier space, because $\mathbf{k}$ is no longer a parameter and the eigenvectors of Eq.~(\ref{eq:turb}) can now have contributions from a range of $\mathbf{k}$'s. \begin{figure*}[t!]
\centering \begin{center} \includegraphics*[width=.9\textwidth, trim=10 10 10 10, clip]{Fig1.pdf} \end{center} \caption{ Left and middle panels: Overall shape of the eigenvectors of Eq.~(\ref{eq:turb}), corresponding to the unstable mode with the maximum growth rate for two representative values of $\mu$ and for $\mathcal{C} = 10^{-4}$ and $\mathcal{C} = 10^{-6}$. The shaded area indicates the unstable region in the homogenous case (which is extremely narrow in the left panel). Here, we have used Eq.~(\ref{eq:profile}) to relate matter to neutrino number density. Right panel: Evolution of the flavor coherence term in the linear regime (Eq.~(\ref{eq:EOM})) for the representative $k = 10^3\ \mathrm{km}^{-1}$ Fourier mode in a declining $\mu$. Here, we solve Eq.~(\ref{eq:EOM}) numerically for a discrete set of Fourier modes with $\Delta k = 10^3$ km$^{-1}$ and assuming $\lambda_0=300\mu$.} \label{fig:1} \end{figure*} In the following, we assume a Kolmogorov-like spectrum for turbulence, where the matter density features power-law fluctuations~\cite{frisch1995turbulence} on a range of scales between the dissipation scale, $\lambda_{\mathrm{diss}}$ (here $\lambda_{\mathrm{diss}} \ll 10^{-10}$~km~\cite{Abdikamalov:2014oba}), below which the turbulent energy is efficiently dissipated by viscosity, and the cutoff scale, $\lambda_{\mathrm{cut}}$, which is determined by the shock radius $R_\mathrm{s}$ so that $\lambda_{\mathrm{cut}} \sim 2 R_\mathrm{s}$.
To be specific, we take the turbulent matter fluctuations to have the form \begin{equation}\label{eq:Kolm} \lambda(x) = \lambda_0\big( 1+ \mathcal{C} \sum_{k\neq0} \xi_k \cos(kx+\eta_k) \big), \end{equation} where $\lambda_0$ is the average matter potential (the zeroth mode), $\eta_k$ is a random phase and $\mathcal{C} = \mathfrak{C}/ \mathfrak{C}_{\mathrm{N}}$ with $\mathfrak{C}$ and $\mathfrak{C}_{\mathrm{N}}$ being a constant coefficient and a normalization factor defined as $\mathfrak{C}_{\mathrm{N}} = (\sum_{k\neq0} \xi_k^2)^{1/2}$, respectively. Here $\mathcal{C}$ is the most meaningful parameter, which specifies the relative turbulence amplitude on scales $\sim \lambda_{\mathrm{cut}}$. In addition, $\xi_k$ is assumed to have a Kolmogorov distribution \begin{equation} \xi_k = \bigg(\frac{k}{k_{\mathrm{cut}}}\bigg)^{-\alpha/2}, \end{equation} with $k_{\mathrm{cut}} = 2\pi/\lambda_{\mathrm{cut}}$, which is fixed to be $k_{\mathrm{cut}} = 0.01\ \mathrm{km}^{-1}$ in our calculations. We also set $\alpha = 5/3$, though our results do not depend qualitatively on the value of $\alpha$ for reasonable choices. With these choices, one has $\lambda_k \sim \mathcal{C}\lambda_0 \big(k/k_{\mathrm{cut}}\big)^{-\alpha/2}$. \section{Two-Beam Model} We study neutrino flavor instabilities in a stationary 2D two-beam, monochromatic neutrino gas first studied in Ref.~\cite{duan:2014gfa} (this stationary model is chosen for illustrative purposes; otherwise see Appendix~\ref{temporal} for the turbulence effect on temporal instabilities). Such a model can be used to describe the SN geometry at some distance from the SN core~\cite{Chakraborty:2016yeg}, where a periodic boundary condition is imposed in the transverse plane (along the $x$-axis in our model) and we study the evolution of the neutrino gas along the $z$-axis, which can also be interpreted as the radial direction in spherical coordinates.
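A realization of the fluctuation spectrum of Eq.~(\ref{eq:Kolm}) can be sampled numerically as follows (an illustrative sketch; the number of retained modes and the random seed are our choices, not values from the paper):

```python
import numpy as np

def turbulent_potential(x, lam0, C, k_cut=0.01, alpha=5/3, n_modes=100, seed=0):
    """Sample lambda(x) = lam0 * (1 + C * sum_k xi_k cos(k x + eta_k))
    with the Kolmogorov-like amplitudes xi_k = (k / k_cut)**(-alpha / 2).

    x is in km and k_cut in km^-1; n_modes and seed are illustrative
    choices made for this sketch.
    """
    rng = np.random.default_rng(seed)
    k = k_cut * np.arange(1, n_modes + 1)       # discrete modes above the cutoff
    xi = (k / k_cut) ** (-alpha / 2)            # Kolmogorov amplitudes
    eta = rng.uniform(0.0, 2 * np.pi, n_modes)  # random phases eta_k
    fluct = (xi[:, None] * np.cos(np.outer(k, x) + eta[:, None])).sum(axis=0)
    return lam0 * (1.0 + C * fluct)

x = np.linspace(0.0, 1000.0, 2048)              # km
lam = turbulent_potential(x, lam0=1.5e4, C=1e-4)
# Fluctuations stay at roughly the 1e-4 level relative to lam0.
assert np.max(np.abs(lam / 1.5e4 - 1.0)) < 1e-2
```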
The mono-energetic $\nu_e$ and $\bar{\nu}_e$ beams ($\omega = \pm1$) are assumed to be emitted with $\mathbf{v}_{\pm} = (\pm u,0,v_z)$ where $u = \sqrt{1-v_z^2}$ with $v_z=1/2$ (corresponding to an opening angle of $2\pi/3$ between the two beams) and the ratio between the number densities is fixed to be $n_{\bar{\nu}_e}/n_{\nu_e} = 0.7$. We solve Eq.~(\ref{eq:turb}) for our stationary model ($\Omega=0$) to find unstable modes in the $z$-direction (where the imaginary part of $k_z$ is positive) and the $E-\mathbf{v}$ distributions of the corresponding eigenvectors as a function of the Fourier mode in the $x$-direction, $k$ (hereafter we drop the subscripts $x$ and superscripts $\Omega$). In addition, for the small turbulence amplitudes considered here, one can safely ignore the turbulence in the $z$ (longitudinal) direction since it is completely suppressed by the other terms in the equation of motion. This implies that Fourier modes are only coupled in the $x$ (transverse) direction. Note that a similar suppression does not exist for the turbulence in the $x$-direction since there is no other term able to compete with the turbulence (coupling) effect. To illustrate how turbulent matter density fluctuations impact the stability of a dense neutrino medium, in Fig.~\ref{fig:1} we show the overall shape of the eigenvectors of Eq.~(\ref{eq:turb}), defined as \begin{equation} |Q^{k}| = \bigg(\sum_{E, \mathbf{v}} |Q_{E, \mathbf{v}}^{k}|^2 \bigg)^{1/2}, \end{equation} corresponding to the unstable mode with the maximum growth rate, where the eigenvectors are normalized to have unit length. In the left panel, we first consider a neutrino gas with a relatively low neutrino number density, $\mu=\sqrt2G_{\mathrm{F}}n_{\nu_e}=50\ \mathrm{km}^{-1}$. For such values of $\mu$, only very low Fourier modes are unstable in a homogenous neutrino gas. However, the instability structure changes dramatically in a turbulent medium.
As expected, there is a dominant peak with $|Q^k|\simeq 1$ at $k\simeq0$. But due to the presence of turbulent matter fluctuations, one can clearly see that the instability can now \textit{leak} to much higher Fourier modes (and make them unstable) which are otherwise completely stable in the homogenous case. It is of utmost importance to note that in spite of its small amplitude (for the tiny turbulence amplitudes used here), the leakage of instabilities can entirely change the stability condition of the neutrino gas. This simply comes from the fact that, as far as the flavor stability is concerned, the amplitude (of the unstable modes) is not relevant since any unstable mode can grow exponentially with a growth rate of $\sim 10$~km$^{-1}$ (for slow modes). Thus, even unstable modes with amplitude $|Q^k| \sim 10^{-9}$ can get activated within only $\sim 2$ km. This implies that, surprisingly, even a tiny amount of turbulence in matter would be enough to have a notable influence and make the whole range of (relevant) Fourier modes unstable with reasonable initial amplitudes. This is very different from the turbulence-induced parametric resonances, where turbulent matter fluctuations can only generate a noticeable effect when the turbulence amplitude is considerably large~\cite{Lund:2013uta}. To the best of our knowledge, the flavor instability leakage is the only physical effect in CCSNe which is sensitive to such tiny turbulence amplitudes. Indeed, the turbulence effect behaves here like a switch-on effect. Thus, one might be tempted to interpret the leakage phenomenon as an example of the effect of \emph{background} symmetry breaking in a dense neutrino gas. Note that in the absence of turbulence, $|Q^k|$ should be a $\delta$-function in Fourier space.
The turbulence-induced leakage amplitude is almost independent of $\mu$ and depends only on the density fluctuation amplitude (see Appendix~\ref{app1}) \begin{equation} \mathrm{leakage\ of}\quad k_0 \rightarrow k_0 \pm k:\quad \frac{|Q^{k_0 \pm k}|}{|Q^{k_0}|} \sim \frac{\lambda_{k}}{k}. \end{equation} By using this expression, one can easily estimate the leakage amplitude for a given matter density and turbulence amplitude. In the middle panel of Fig.~\ref{fig:1}, an example of the instability leakage for a high neutrino number density, $\mu=10^4$~km$^{-1}$, is presented. For such a value of $\mu$, which is expected in the SN zones close to the surface of the PNS, only very large $k$'s are unstable in the homogenous case. However, the instability leaks to small $k$'s in the presence of turbulence. In particular, the leakage amplitude for a given turbulence amplitude is much larger in this case since the matter density is quite large in the vicinity of the PNS. Although the form of the eigenvectors of Eq.~(\ref{eq:turb}) changes significantly in a turbulent medium, its eigenvalues do not change noticeably, at least for the small turbulence amplitudes tried here. This implies that this novel effect, observed for constant $\mu$'s, might still be superficial unless it also leaves its imprint on the solutions of Eq.~(\ref{eq:EOM}) for a realistic SN profile where $\mu$ is changing. But this is exactly where the power of the leakage mechanism is best manifested, as illustrated in the right panel of Fig.~\ref{fig:1}. Here, to provide a flavor of this effect, we show the evolution of the $k=10^3$~km$^{-1}$ Fourier mode in a model in which the neutrino number density varies as $\mu(z) = 10^4 \mathrm{exp}(-0.3 z)$~km$^{-1}$ (note that $\mu$ changes very rapidly and goes from $10^4$ to $10$~km$^{-1}$ within only $\sim20$ km).
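The leakage estimate given above can be evaluated numerically; here is a sketch with the representative parameters used in this section ($\mu = 50\ \mathrm{km}^{-1}$, $\lambda_0 = 300\mu$, $\mathcal{C}=10^{-4}$, all quoted in the text):

```python
# Order-of-magnitude estimate of the leakage amplitude,
#   |Q^{k0 +/- k}| / |Q^{k0}| ~ lambda_k / k,
# with lambda_k ~ C * lambda0 * (k / k_cut)**(-alpha/2).
alpha, k_cut = 5 / 3, 0.01     # Kolmogorov index and cutoff mode [km^-1]
mu = 50.0                      # neutrino potential [km^-1]
lam0 = 300 * mu                # average matter potential [km^-1]
C = 1e-4                       # relative turbulence amplitude

k = 1e3                        # target Fourier mode [km^-1]
lam_k = C * lam0 * (k / k_cut) ** (-alpha / 2)
leakage = lam_k / k
print(f"leakage amplitude ~ {leakage:.1e}")
```

For these numbers the leakage is of order $10^{-7}$, comfortably above the $|Q^k| \sim 10^{-9}$ level that, as argued above, already suffices for the mode to activate within a couple of km.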
As can be clearly seen, the final amplitude of the Fourier modes (at the point they become dominant) in the presence of turbulence can be larger than that of the homogenous gas by many orders of magnitude. This is due to the fact that all the relevant Fourier modes can always grow exponentially in a turbulent medium, in contrast to the homogenous gas in which each Fourier mode has a certain range of instability. This behavior is completely consistent with, and predictable from, what was already observed in the left and middle panels of Fig.~\ref{fig:1}, and shows that the assessment based on the shape of the eigenvectors of Eq.~(\ref{eq:turb}) can be very useful in providing sufficient insight into how Fourier modes grow. Note that the exact turbulence-induced enhancement in the amplitude of a Fourier mode depends on the duration over which the turbulence influences its evolution, which can be much longer for realistic SN profiles~\cite{unpublished2}. The evolution of the neutrino gas here is adiabatic to a very good degree, in the sense that the scales on which the eigenvectors of Eq.~(\ref{eq:turb}) grow (exponentially) are much shorter than those of variations in $\mu$ (or in the shape of the eigenvectors themselves), i.e., $\kappa^{-1} \ll \mu/(\rm{d}\mu/\rm{d}r)$. One can then better understand the behavior observed in the right panel of Fig.~\ref{fig:1} in an analytical way, assuming perfect adiabaticity. In the perfectly adiabatic limit, the solution of Eq.~(\ref{eq:EOM}) at each step $z=z_0+\Delta z$ can be obtained analytically from the one at $z=z_0$ by $S(z_0+\Delta z) = \sum_i c_i \Psi_i e^{i (k_z)_i \Delta z} $ where $\Psi_i$ and $(k_z)_i$ are the eigenvectors (which form a complete basis) and eigenvalues of the Hamiltonian of Eq.~(\ref{eq:turb}) at $z=z_0$ and $c_i$'s are the expansion coefficients of $S(z_0) = \sum_i c_i \Psi_i$. Such an analytical adiabatic solution (red curve) shows very good agreement with the numerical solution of Eq.~(\ref{eq:EOM}).
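The adiabatic propagation rule $S(z_0+\Delta z) = \sum_i c_i \Psi_i e^{i (k_z)_i \Delta z}$ can be written compactly in code; this is a toy sketch on a small dense random matrix, not the actual multi-mode solver:

```python
import numpy as np

def adiabatic_step(S, H, dz):
    """Propagate S over dz assuming perfect adiabaticity:
    S(z0 + dz) = sum_i c_i Psi_i exp(i k_i dz), where (k_i, Psi_i) are the
    eigenpairs of the local Hamiltonian H and c_i expand S(z0) in that basis."""
    k, Psi = np.linalg.eig(H)        # local eigenvalues and eigenvectors
    c = np.linalg.solve(Psi, S)      # expansion coefficients of S(z0)
    return Psi @ (c * np.exp(1j * k * dz))

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S0 = rng.normal(size=4) + 1j * rng.normal(size=4)
# dz = 0 must act as the identity.
assert np.allclose(adiabatic_step(S0, H, 0.0), S0)
```

Complex eigenvalues $k_i$ with a positive imaginary part produce the exponential growth discussed above.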
One should note that the key point here is that the eigenvectors of Eq.~(\ref{eq:turb}) at two different steps \emph{are not exactly linearly independent} of each other. In other words, each $\Psi_i^{\rm{new}}$ has contributions from all $\Psi_j^{\rm{old}}$'s, i.e., $\Psi_i^{\rm{new}} = \sum_j c_{ij} \Psi_j^{\rm{old}}$ with $c_{ij}$ being roughly the turbulence amplitude. This means that any unstable mode grows from an enhanced initial value (acquired during the growth of the previously unstable modes), which in turn ensures that all the Fourier modes always grow exponentially during the propagation of neutrinos. This is entirely in contrast to the homogenous case, where the new unstable modes at each point are totally linearly independent of the old ones at a previous point and therefore, any exponential growth is present only within a certain period. \section{Discussion and Conclusions} The turbulence-induced leakage of flavor instabilities implies that the notion of a $\mu-k$ instability band (see, e.g., Fig.~3 in Ref.~\cite{abbar:2015mca}) developed in a homogenous neutrino gas is not very useful in a turbulent medium, where what distinguishes different Fourier modes is actually only their initial amplitudes, $|Q^k|$. One can immediately observe that the leakage mechanism can well address one of the biggest issues with slow instabilities in the deepest SN zones. In particular, it dismisses the necessity of the occurrence of fast modes in order to observe significant flavor conversions near the PNS. To demonstrate this idea, in Fig.~\ref{fig:2} we show the instability footprints of two representative Fourier modes as a function of the distance from the SN core and the turbulence amplitude, $\mathcal{C}$, during the accretion phase of a CCSN\footnote{ Here we \emph{define} the unstable region for a given Fourier mode by requiring $|Q^k|>10^{-13}$ for the eigenvector of the mode with maximum growth rate.
This is to ensure that the activation scale of a given mode is shorter than the variation scales of $\mu$. Otherwise, such boundaries for the unstable regions are entirely artificial.}. Here we take a matter/neutrino density profile approximately similar to that of Ref.~\cite{Chakraborty:2016yeg}, in which \begin{equation}\label{eq:profile} \mu(r) = \mu_R (R/r)^4 \quad \mathrm{and}\quad \lambda(r) = \lambda_R (R/r)^3, \end{equation} where $\mu_R$ and $\lambda_R$ are the neutrino and matter densities on the surface of the neutrinosphere, $R$, respectively, for which we have used $R=15$~km, $\mu_R = 2\times10^5\ \mathrm{km}^{-1}$ and $\lambda_R = 6\times10^7\ \mathrm{km}^{-1}$ (corresponding to a matter density of $\rho\simeq 3\times 10^{11}\ \mathrm{g\ cm}^{-3}$). For very small turbulence amplitudes, the instability zones can be extremely narrow, especially at small radii, which prevents any significant flavor conversions therein. However, as $\mathcal{C}$ increases, the Fourier modes become unstable at all radii, which means that they can grow by many orders of magnitude (as in the right panel of Fig.~\ref{fig:1}) and easily enter the nonlinear regime. Hence, \emph{no matter} whether fast modes exist or not, collective neutrino oscillations can occur within just a few km above the PNS. In addition, unlike fast modes, which can only exist in small SN regions and are less likely to occur in non-exploding models, turbulence-induced flavor conversion modes are ubiquitous and generic. This could have an important impact on the SN dynamics and the nucleosynthesis of heavy elements in CCSNe. \begin{figure}[tb!] \centering \begin{center} \includegraphics*[width=.45\textwidth, trim= 10 10 10 10, clip]{Fig2.pdf} \end{center} \caption{Instability regions (shaded areas) of two representative Fourier modes, $k=0$ and $3\times10^5\ \mathrm{km}^{-1}$, as a function of the distance from the SN core and the turbulence amplitude.
Here we have employed the two-beam model described in the text. } \label{fig:2} \end{figure} Apart from the crucial impact of turbulence on the flavor stability of a dense neutrino gas, its presence is also important in providing initial seeds for the unstable modes. Specifically, the turbulence term in Eq.~(\ref{eq:EOM}) transfers the initial seed from $k_0$ to $k_0 \pm k$ on scales $\sim \lambda_{ k}^{-1}$, or more accurately, $\sim \mathrm{max}\{\lambda_{k'}^{-1} k/ k'\}$ where the maximum is taken over all turbulence modes. Apart from its impact on slow modes, which was discussed here, turbulence can also affect fast neutrino flavor conversion modes (see Appendix~\ref{fast}). However, although a similar leakage can occur therein, the leakage mechanism does not seem to change the DR of fast modes. In the above discussions, we have only considered the effects of spatial density fluctuations on the spatial instabilities. However, a similar effect should also be expected for temporal instabilities, as shown in Appendix~\ref{temporal}. Indeed, the leakage effect has nothing to do with the eigenvalues of the DR equation or the nature of the instabilities; it arises only due to the presence of coupling among different eigenvectors. Additionally, as discussed in Appendix~\ref{fast}, such an effect even exists for stable solutions (real eigenvalues of the DR equation). Similarly, temporal fluctuations of the matter density can also couple different temporal frequencies. Although extremely rapid temporal density variations are necessary to observe any noticeable effect, this could still be plausible considering the small required turbulence amplitudes. Moreover, while we have only considered the effects of turbulence on flavor instability in CCSNe, a similar effect can be expected in any dense neutrino environment where matter density fluctuations are present, such as neutron star merger remnant accretion disks.
Our study is meant only to serve as an introduction to this novel issue and further research is necessary to provide a better understanding of its physical implications. This is yet another time that the rich physics of neutrino flavor evolution in dense neutrino media surprises us. \section*{Acknowledgments} I am deeply indebted to Georg Raffelt and Huaiyu Duan for their comments on the manuscript, their kind encouragements and many insightful discussions during the development of this work. I am also grateful to Cristina Volpe, Ernazar Abdikamalov and Shashank Shalgar for useful conversations/communications and to Hans-Thomas Janka and Francesco Capozzi for reading the manuscript and their comments. I acknowledge partial support by the Deutsche Forschungsgemeinschaft (DFG) through Grant No. SFB 1258 (Collaborative Research Center Neutrinos, Dark Matter, Messengers).
{ "timestamp": "2021-02-10T02:01:28", "yymm": "2007", "arxiv_id": "2007.13655", "language": "en", "url": "https://arxiv.org/abs/2007.13655" }
\section{Introduction}\label{section_intro} Active Galactic Nuclei (AGN) are one of the most energetic phenomena in the universe. The supermassive black hole in the galactic center grows by actively accreting gas and dust from its surroundings. The temperature of the accreting mass depends on the distance from the black hole, as gravitational potential energy is the primary source of heat. At some distance from the center, the `sublimation radius', temperatures are cool enough for dust to survive. The dust absorbs the optical and UV emission from the accretion process and converts it into infrared radiation. Under the classic AGN unification scheme, the apparent difference between type I and type II AGN is primarily the result of angle-dependent obscuration by a thick obscurer \citep{Antonucci}, commonly interpreted and modelled as a ``dusty torus'' \citep[e.g.][]{1986ApJ...308L..55K,1992ApJ...401...99P,1994MNRAS.268..235G,Nenkova2002,2005A&A...437..861S,2006MNRAS.366..767F,2006A&A...452..459H}. There is now convincing evidence, however, that the physical state of the obscurer is dynamically and structurally more complex: several observations revealed that an equatorial dense region is accompanied by polar dusty outflows \citep[e.g.][]{hoenig2012,hoenig2013,tristram2014,leftley2018}, possibly extending up to 100 pc \citep{asmus2016} and participating in obscuration \citep[e.g.][]{ricci2017,hoenig2019}. IR interferometry has the capability to provide more detail about the actual geometry of the dust structure, specifically in nearby sources such as the Circinus Galaxy. Mid-IR photometric and interferometric observations of the nucleus of this galaxy have been reproduced with a model based on a compact dusty disc and an extended dusty outflow in the form of a hyperboloid \citep{circinus,circinus2}. It has been suggested that the basic driving mechanism for launching such winds is radiation pressure on dust \citep[e.g.][]{hoenig2012}.
Since the dust opacity in the IR is $\gtrsim$10 times the Thomson opacity, the coupling between the radiation pressure and dusty gas is much greater than with dust-free gas, even in low-luminosity AGN \citep{pierkrolik}. Along with these polar features, the close environment of accreting supermassive black holes may be shaped by radiative feedback \citep{ricci2017}. Hence, radiative hydrodynamic (RHD) simulations have been carried out, focusing on the role of radiation pressure \citep[e.g.][]{chan1,chan2,wada,david,dorodnitsyn2012,dorodnitsyn2016,namekata2016}. Although there is general consensus that outflows naturally emerge in the presence of radiative processes, these simulations use different implementations and assumptions for solving the RHD equations and may yield different causes and strengths for the wind. Infrared radiation pressure from the dusty medium itself adds a noticeable computational cost to these simulations, unless strong approximations are invoked. This paper lays out a simplified semi-analytical framework to calculate the effect of infrared radiation pressure on dusty gas. We consider the gas to be clumpy and treat the clumps as ``test particles'' with physical properties obtained by photo-ionisation simulations. This approximation allows us to characterise the role of IR radiation pressure on the distribution of material around the AGN. The simulations include a treatment of gravity, radiation pressure from the AGN and the re-radiation from the hot dust itself, using an ad-hoc geometric setup. The goal is to reproduce qualitative and quantitative properties of the dusty region and compare them to observations. \section{The model}\label{section_model} In this section we lay out our analytical framework to quantitatively assess the spatial distribution of the forces acting in the parsec-scale dusty environment of an AGN. The basic setup of the model is shown in figure \ref{setup}.
In the following, we consider a geometrically thin disk consisting of dense clumps of gas and dust. The physical properties of these clumps are inspired by \citet{namekata}, who performed RHD simulations of gas clouds exposed to AGN radiation. It has been shown that dust clumps or clump fragments can survive under such extreme conditions if they have hydrogen density values in the range 10$^{6.5}$ - 10$^{8}$ cm$^{-3}$. Throughout this paper, we assume a hydrogen number density of $n_H=10^7\,\mathrm{cm}^{-3}$. We investigate column densities from 10$^{22}$ to 10$^{24}$ cm$^{-2}$. The lower limit of 10$^{22}$ cm$^{-2}$ corresponds to the regime where $\tau_{NIR}\sim 1$, which is a condition required in order to have effective infrared radiation driving \citep[][and references therein]{krolik2007}. We stop at $N_{H}=$ 10$^{24}$ cm$^{-2}$ because any larger column density cannot be efficiently accelerated by the radiation pressure and would exhibit a behaviour similar to that of $N_{H}=$ 10$^{24}$ cm$^{-2}$, as we shall see below. Under these assumptions, we consider the radiation and gravity force terms acting on individual clouds which sit at a certain distance from the AGN and the disk. Each of these clouds is approximated as a dusty test particle, and so we will use the terms ``cloud'' and ``dusty particle'' interchangeably. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{model.png} \caption{Schematic setup of the model. The main two components are the AGN and the dusty disk, both generating radiation pressure on a dusty cloud and acting against the gravity force of the AGN.}\label{setup} \end{figure} \subsection{AGN radiation field} \label{agnfield} The most dynamically important component of radiation transfer in a dusty environment is the absorption of the strong optical/UV radiation from the accretion disk by dust.
A radiation field with a given monochromatic flux $F_{\nu}$ at frequency $\nu$ exerts an acceleration $k_\nu F_\nu/c$, where $k_\nu$ is the opacity with dimensions of area per unit mass. For a cloud sitting at a certain distance $d$, the monochromatic flux has the form \begin{equation} F_{\nu}=\frac{L_{\mathrm{AGN; \nu}}}{4\pi d^{2}} \end{equation} where $L_{\mathrm{AGN; \nu}}$ is the monochromatic luminosity of the AGN, which integrates over frequency to the bolometric luminosity $L_{\mathrm{AGN}}$. Then, the net acceleration experienced by the cloud is \begin{equation} \label{aagn1} {a}_{\mathrm{AGN}}=\frac{ \int_{\mathrm{OUV}} k_{\mathrm{\nu}}L_{\mathrm{AGN; \nu}} d\mathrm{\nu}}{4\pi c d^{2}} \end{equation} where OUV denotes the range of optical/UV frequencies of interest here. In a dense cloud, dust and gas are tightly coupled via collisions, so that the radiation pressure force on the dust is transferred to the cloud as a whole. Accordingly, we treat the cloud as a single entity and approximate the frequency-dependent opacity by the geometric opacity of the cloud. If the cloud has radius $R_{\mathrm{cl}}$ and mass $m_{\mathrm{cl}}$, the opacity of the cloud $k_{\mathrm{cl}}$ can be expressed as \begin{equation} k_{\mathrm{cl}}= \frac{\pi R_{\mathrm{cl}}^{2}}{m_{\mathrm{cl}}} \end{equation} i.e.\ the cloud geometrical cross section divided by its mass. Therefore, Eq. \eqref{aagn1} reduces to \begin{equation} \label{aagn} {a}_{\mathrm{AGN}}= k_{\mathrm{cl}} \frac{ L_{\mathrm{AGN}}}{ 4\pi c d^{2}} \end{equation} showing a direct proportionality between the radiative acceleration from the central AGN and its total luminosity. The cloud mass $m_{\mathrm{cl}}$ can be written as $m_{\mathrm{cl}} = m_{\mathrm{p}} n_{\mathrm{H}} \frac{4}{3} \pi R_{\mathrm{cl}}^{3}$, with $n_{\mathrm{H}}$ being the hydrogen number density of the cloud. At the same time, the radius of the cloud can be estimated as $R_{\mathrm{cl}}=N_{\mathrm{H}}/2n_{\mathrm{H}}$.
Then, \begin{equation} \label{opacity} k_{\mathrm{cl}}=\frac{3}{2}\frac{1}{m_{\mathrm{p}}N_{\mathrm{H}}} \quad. \end{equation} \subsection{Gravity} The gravitational acceleration has the same $d^{-2}$-dependence as the AGN radiation pressure, but points in the opposite direction, i.e.\ towards the central black hole. Therefore, it is convenient to express these two kinds of central forces in terms of their strength ratio. In doing so, some classical definitions must be introduced. One is the Eddington limit, defined as the luminosity capable of balancing the gravity of a mass $M$ \begin{equation} L_{\mathrm{Edd}}= 4\pi c GM\frac{1}{k}\quad . \end{equation} For a fully ionized gas, the opacity $k$ can be expressed as $k=\sigma_{\mathrm{T}}/{m_{\mathrm{p}}}$, where $\sigma_{\mathrm{T}}$ is the Thomson cross section. Also, we assume that gravity is dominated by the black hole mass so that $M=M_{\mathrm{BH}}$. Then, one can introduce the Eddington ratio \begin{equation} \lambda_{\mathrm{Edd}} =\frac{L_{AGN}}{L_{\mathrm{Edd}}} \end{equation} allowing us to express the ratio between the AGN radiative acceleration and gravity as \begin{equation} \label{rag2} \frac{a_{\mathrm{AGN}}}{a_{\mathrm{g}}}=\frac{3}{2}\frac{ \lambda_{\mathrm{Edd}}}{\sigma_{\mathrm{T}} N_{\mathrm{H}}}\quad . \end{equation} This relatively simple equation highlights one fundamental point: clouds in the force field of an AGN are accelerated in a way that depends only on the ratio $\frac{\lambda_{\mathrm{Edd}}}{N_{\mathrm{H}}}$ (for optically thick clouds that are only partially ionised). In particular, the more powerful the AGN (i.e.\ the higher $\lambda_{\mathrm{Edd}}$), the more likely it is to drive a wind. Second, the more material sits in the cloud (i.e.\ the higher $N_{\mathrm{H}}$), the less prone the cloud is to being uplifted, as it is more strongly bound by gravity. With the use of Eq. \eqref{aagn} and Eq. \eqref{rag2} we can start to visualize the field of the AGN.
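Equations \eqref{opacity} and \eqref{rag2} are straightforward to evaluate numerically. The following Python sketch is our own illustration (the cgs constants and the test values are not taken from any fit, only chosen for orientation):

```python
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_P = 1.673e-24       # proton mass [g]

def cloud_opacity(N_H):
    """Geometric cloud opacity, Eq. (opacity): k_cl = (3/2) / (m_p * N_H)  [cm^2 g^-1]."""
    return 1.5 / (M_P * N_H)

def agn_to_gravity_ratio(lambda_edd, N_H):
    """Eq. (rag2): a_AGN / a_g = (3/2) * lambda_Edd / (sigma_T * N_H)."""
    return 1.5 * lambda_edd / (SIGMA_T * N_H)

# Illustrative Seyfert-like cloud: lambda_Edd = 0.05, N_H = 1e23 cm^-2
ratio = agn_to_gravity_ratio(0.05, 1e23)   # slightly above unity: marginal wind
```

For $N_{\mathrm{H}}=10^{23}$ cm$^{-2}$ and $\lambda_{\mathrm{Edd}}=0.05$ the ratio comes out just above unity, while for $N_{\mathrm{H}}=10^{24}$ cm$^{-2}$ at the same Eddington ratio gravity wins by almost an order of magnitude.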
Figure \ref{grav_agn} shows the resulting distribution in a 2D grid in the plane above the disk. The disk is traced by the grey area and each point in the grid represents a dusty cloud. The acceleration in cm s$^{-2}$ experienced by this cloud is displayed for varying column density $N_{\mathrm{H}}$ and Eddington ratio $\lambda_{\mathrm{Edd}}$ of the AGN system. Using the fact that the flux at the sublimation radius $r_\mathrm{sub}$ is constant \citep{elitzur97}, all the distances are scaled with respect to $r_\mathrm{sub}$, making our model independent of the AGN luminosity. \begin{figure*}[!htb] \centering \includegraphics[width=0.99\textwidth]{full_no_ir.png} \caption{\label{grav_agn}Spatial distribution of the optical/UV radiative + gravity acceleration field in the plane perpendicular to the disk, traced by the grey region.} \end{figure*} The resulting radiation field acquires the expected radially symmetric structure. Low column density dusty particles are strongly subjected to the AGN radiation pressure, and outflow radially. Very high column density clouds usually do not outflow, unless one considers quasar-like Eddington ratios $\lambda_{\mathrm{Edd}} \gtrsim 0.15$. At this limiting value the clouds still inflow, though only very weakly, as the two forces nearly balance. \subsection{The disk model} We assume that the AGN is surrounded by a geometrically thick dusty disk, as suggested by the disk+wind model \citep{hoenigkishimoto2017}. The disk absorbs the OUV radiation from the central AGN and re-emits in the IR. We will focus our treatment on the immediate environment close to the sublimation temperature, as this is the region where the dust is hottest and has the greatest flux. Lower local temperatures and obscuration will significantly reduce the effect of the infrared radiation pressure at larger scales.
\subsubsection{Temperature profile} \label{temperature} Following \cite{hoenig2011}, the luminosity absorbed by a dust grain at a distance $r$ from a source of radiation is \begin{equation} L_{\mathrm{abs}}=16\pi r^2 Q_{\mathrm{abs;P}}(T)\sigma_{SB}T^4 \end{equation} where $Q_{\mathrm{abs;P}}(T)$ is the Planck mean absorption efficiency. For astronomical dust, the latter follows a power law in temperature in the infrared, $Q_{\mathrm{abs;P}}(T)\propto T^{1.6}$. For clouds directly exposed to the AGN radiation pressure, we can solve for the grain temperature and obtain \begin{equation} \label{tg} T(r) = T_{\mathrm{sub}} \left(\frac{r}{r_{\mathrm{sub}}}\right)^{-2/5.6}\quad . \end{equation} \subsection{Opacities} \label{opacities and Cloudy model} In order to derive the infrared emission from the disk, we need the opacities of the dusty clouds. For this, we used version c17.00 of the photoionization code Cloudy \citep{2017RMxAA..53..385F}. For the input spectra we adopted a modified version of Cloudy's built-in AGN command, as in \citet{david}. The intensity assumed is the intensity at the sublimation radius. This has been determined by running several Cloudy simulations with fixed luminosity (we used $L_{\mathrm{AGN}}=2.2 \times 10^{43}$ erg s$^{-1}$) and finding the distance of the cloud for which the illuminated face is at $T_\mathrm{sub}=1750$ K, characteristic of large graphite grains. The value obtained for the incident sublimation intensity is $I_{\mathrm{0}}=5.6 \times 10^{7}$ erg cm$^{-2}$ s$^{-1}$, corresponding to a sublimation radius of $r_\mathrm{sub}= 0.03$\,pc. Once the shape and intensity of the incident radiation field have been set, we vary the column density of the illuminated medium and obtain the corresponding set of opacities. \subsection{Obscuration} Assuming that the disk is clumpy, we can use the formalism of \citet{Nenkova2002} to address the obscuration affecting its re-emission.
As mentioned before, we assume that all clouds are identical with constant hydrogen density $n_{\mathrm{H}}=10^{7}$\,cm$^{-3}$, varying only the column density. Our disk extends from $r_\mathrm{sub}$ to the outer radius $r_\mathrm{out}$ (in units of $r_\mathrm{sub}$). The number of clouds per unit length $N_{\mathrm{c}}(r,z)$ can be expressed in a cylindrical coordinate system with separable functions of the vertical height $z$ and the radial distance $r$. The resulting distribution has the form \begin{equation} \label{nc} N_{\mathrm{c}}(r,z) = \mathcal{C} \eta (z) N_{\mathrm{0}} r^{-1} \end{equation} where $\mathcal{C}=1/\ln r_\mathrm{out}$ is a normalization constant, $N_\mathrm{0}$ is the number of clouds along the equatorial plane of the disk, and $\eta(z)$ represents the vertical distribution of the clouds. The latter is assumed to have a smooth boundary in the form of a Gaussian \begin{equation} \eta (z) = e^{-z^2/2H^2} \end{equation} mimicking an isothermal disk. To minimize the number of free parameters, we have explicitly fixed the radial power-law exponent to $-1$, $N_\mathrm{0}=7$, $r_\mathrm{out}=30$ $r_\mathrm{sub}$ and $H=0.3$ $r_\mathrm{sub}$. This choice is consistent with clumpy torus modeling and does not affect the general conclusions we draw. If we define $\mathcal{N}(s^{'},s)= \int_{s}^{s'}N_{{\mathrm{c}}}ds$, the probability that a photon travelling from $s'$ to $s$ escapes absorption along its path is then $P_{\mathrm{esc}}\simeq \mathrm{exp}(-\mathcal{N}\tau_{\mathrm{\nu}})$ for an optical depth $\tau_{\mathrm{\nu}}<1$ (such as for infrared photons) and $P_{\mathrm{esc}}\simeq \mathrm{exp}(-\mathcal{N})$ in the opposite limit $\tau_{\mathrm{\nu}}>1$ (UV photons). This means that the radiative acceleration acting on a cloud will be modeled differently depending on wavelength.
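As a consistency check of this clumpy prescription, the short sketch below (our own, using the parameter values fixed in the text) integrates Eq. \eqref{nc} along the equator, recovering exactly $N_0$ clouds, and implements the two-regime escape probability:

```python
import math

N0, R_OUT, H = 7.0, 30.0, 0.3        # N_0, r_out and H (lengths in units of r_sub)
C = 1.0 / math.log(R_OUT)            # normalisation constant C = 1/ln(r_out)

def clouds_per_unit_length(r, z):
    """Eq. (nc): N_c(r, z) = C * eta(z) * N_0 * r^-1 with a Gaussian vertical profile."""
    return C * math.exp(-z * z / (2.0 * H * H)) * N0 / r

def escape_probability(n_traversed, tau_nu):
    """P_esc ~ exp(-N * tau) for tau < 1 (infrared) and exp(-N) for tau > 1 (UV)."""
    return math.exp(-n_traversed * min(tau_nu, 1.0))

# Midpoint-rule integral of N_c along the equator: should recover N_0 = 7 clouds
n_steps, total = 100000, 0.0
dr = (R_OUT - 1.0) / n_steps
for i in range(n_steps):
    r = 1.0 + (i + 0.5) * dr
    total += clouds_per_unit_length(r, 0.0) * dr
```

The same number of intervening clouds suppresses UV photons far more strongly than infrared ones, which is the asymmetry exploited in the next step.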
On the scales we are considering, we will assume that most of the emission and obscuration originates from large grains, as implied by observations and modelling of the dust sublimation region \citep[e.g.][]{kishimoto2007,kishimoto2011a,kishimoto2011b,hoenigkishimoto2017,garcia2017}. Thus, within the clumpy disk, the radiation is absorbed according to \begin{equation} \left\{ \begin{array}{ll} a_{\mathrm{AGN}} \xrightarrow{}a_{\mathrm{AGN}}\; e^{- \mathcal{N}} \\ \\ a_{\mathrm{ir}} \xrightarrow{}a_{\mathrm{ir}}\;e^{-0.1 \mathcal{N}} \end{array} \right. \quad . \end{equation} The factor of 0.1 for $a_{\mathrm{ir}}$ accounts for the fact that the opacities of large grains (about 1$\mu$m in size) in the near-IR are typically about a factor of 10 lower than in the optical/UV regime. Accordingly, the temperature profile is modified as $T(r) = T_{\mathrm{sub}} \left(\frac{r}{r_{\mathrm{sub}}}\right)^{-2/5.6} e^{-\mathcal{N}/5.6}$. The resulting curve is displayed in figure \ref{fig:temperature}. It presents a sharp drop out to 5 $r_{\mathrm{sub}}$ and is nearly constant throughout the rest of the disk. \begin{figure}[ht!] \includegraphics[width=0.47\textwidth]{temperature.png} \caption{Temperature variation inside the disk accounting for self-shielding.} \label{fig:temperature} \end{figure} \subsection{The infrared radiation field} \label{diskfield} Given the disk geometry of the emitting medium, we expect the disk to break the radial symmetry of the AGN radiation and gravity fields. To investigate this behavior, we model the dusty disk as a sequence of infinitesimally small annuli of width $dr$ radiating as black bodies, with the temperature varying radially according to Eq.~(\ref{tg}). The calculations have a setup similar to that of \citet{tajima}. The cloud is assumed to be a point $P$ with corresponding (Cartesian) coordinates $(x,0,z)$ as shown in figure \ref{analytic}. Consider an infinitesimal element of the dusty disk with polar coordinates $(r,\varphi,0)$.
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{ring.png} \caption{Geometry for the infrared emitting disk.}\label{analytic} \end{figure} The distance $D$ from the element to the point $P$ is \begin{equation} D=\sqrt{r^2+{x}^2+{z}^2-2\,x\,r\cos{\varphi}}\quad. \end{equation} Then, the acceleration components per unit frequency from a single disk element of size $d\varphi\, dr$ are \begin{align} da_{\mathrm{ir,x}}&=\frac{k_{\mathrm{\nu}}\pi B_{\mathrm{\nu}}(T)\, r}{4\pi c D^{2}}\,\frac{(x-r\cos{\varphi})\:d\nu\,d\varphi\,dr}{D} \label{fx} \\[10pt] da_{\mathrm{ir,y}}&=\frac{k_{\mathrm{\nu}}\pi B_{\mathrm{\nu}}(T)\, r}{4\pi c D^{2}}\,\frac{(y-r\sin{\varphi})\:d\nu\,d\varphi\,dr}{D} \label{fy} \\[10pt] da_{\mathrm{ir,z}}&=\frac{k_{\mathrm{\nu}}\pi B_{\mathrm{\nu}}(T)\, r}{4\pi c D^{2}}\,\frac{z\:d\nu\,d\varphi\,dr}{D}\label{fz}\quad. \end{align} We numerically integrate expressions (\ref{fx}), (\ref{fy}) and (\ref{fz}) at fixed grid points and visualize them in figure \ref{disk}. It is now clear that the radial symmetry is broken: the disk geometry introduces a vertical force component at the inner radius, which is also the strongest component, since the thermal emission peaks at the sublimation radius. At larger distances, obscuration effects significantly reduce the disk emission strength. \begin{figure} \centering \includegraphics[width=0.46\textwidth]{diskfield.png} \caption{Spatial distribution and strength in cm s$^{-2}$ of the radiative acceleration due to the infrared emission of the dusty disk.} \label{disk} \end{figure} We then add the disk contribution to the gravity + AGN radiation field derived in section \ref{agnfield} and discuss the consequences in the next section. \begin{figure*}[ht!]
\begin{center} \includegraphics[width=\textwidth]{full.png} \caption{\label{full} Acceleration field accounting for the total gravity + AGN and IR radiation.} \end{center} \end{figure*} \subsection{The prevalence of polar dusty winds} \label{irwins} Combining gravity, AGN radiation pressure, and IR radiation pressure leads to the force field shown in figure \ref{full}. This needs to be compared to figure \ref{grav_agn}, without the IR radiation pressure, to appreciate the influence of the IR radiation field. Overall, dusty particles with $N_{\mathrm{H}}=10^{22}$ cm$^{-2}$ are strongly accelerated by the AGN radiation with its characteristic radial profile, and the infrared contribution does not significantly affect this scenario. At $N_{\mathrm{H}}=10^{23}$ cm$^{-2}$ and for $\lambda_{\mathrm{Edd}}=0.05$ we observe the emergence of a wind from particles at the base of the disk that are driven more vertically than radially, indicating that the infrared radiation, rather than the AGN, is initiating the wind. At this column density value, lower Eddington ratios $\lambda_{\mathrm{Edd}}<0.05$ are dominated by gravity and do not exhibit outflows. Furthermore, a wind starts to weakly emerge for column densities of $N_{\mathrm{H}}=10^{24}$ cm$^{-2}$ and $\lambda_{\mathrm{Edd}}=0.15$, where we do not observe any outflow without the dust re-radiation. Values of the Eddington ratio larger than $\lambda_{\mathrm{Edd}}=0.15$ will enhance the AGN radiation pressure and favour the emergence of a wind. \newline \indent We investigate more quantitatively the observed change in configuration, favouring the disk contribution, for the regions [$\lambda_{\mathrm{Edd}}=0.05$, $N_{\mathrm{H}}=10^{23}$ cm$^{-2}$] and [$\lambda_{\mathrm{Edd}}=0.15$, $N_{\mathrm{H}}=10^{24}$ cm$^{-2}$]. In fact, one can show that these values are close to the limit where the radiative acceleration from the AGN balances gravity, so that only the infrared dominates.
To highlight the parameter space for which this balance happens, we can use Eq. \eqref{rag2} and equate it to unity, obtaining a linear relationship between $\lambda_{\mathrm{Edd}}$ and $N_{\mathrm{H}}$ \begin{equation} \frac{a_{\mathrm{AGN}}}{a_{\mathrm{g}}}\equiv 1\quad \xrightarrow{}{}\quad \lambda_{\mathrm{Edd}}\left(N_{\mathrm{H}}\right)=\frac{2}{3}\sigma_{\mathrm{T}}N_{\mathrm{H}}\quad . \end{equation} Results are displayed in figure \ref{ratiofig} and are consistent with our hypothesis. We only show the column density range corresponding to Eddington ratios falling in the Seyfert regime. Note that, while these sets of values might represent an infrared initiated dusty wind, one has to perform numerical simulations to capture the final structure, which is the subject of the next section. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{ratiofig.png} \caption{Critical values of Eddington ratios and column densities for which $a_{\mathrm{AGN}}\equiv{a_{\mathrm{g}}}$ and the disk component dominates as the only net resultant force. Here $N_{\mathrm{H;23}}=N_{\mathrm{H}}/10^{23}$ cm$^{-2}$.} \label{ratiofig} \end{figure} \section{3D radiation-dynamical simulations} With the full 3D components of the total acceleration derived in the previous sections, we can explore how dusty gas clouds move in the dusty environment of AGN. A similar investigation has been carried out by \citet{Bannikova}, who derived the motion of test particles in the gravitational field of an AGN alone. Here, we account for both gravity and radiation forces. The equation of motion for a cloud at distance $d$ from the AGN is \begin{equation} \ddot{x}= -\frac{G M_{\mathrm{BH}}}{d^2} + k_{cl}\frac{L_{AGN}}{4\pi c d^2}+\frac{1}{c} \int k_{\mathrm{\nu}} F_{\mathrm{\nu}}^{\mathrm{ir}} d\mathrm{\nu} \end{equation} where $F_{\mathrm{\nu}}^{\mathrm{ir}}$ is the net infrared flux due to the disk, as derived in section \ref{diskfield}.
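Both the critical relation above and the radial equation of motion are simple to encode. The sketch below is our own (cgs units); the infrared term is left as a generic input, standing in for the frequency-integrated disk contribution of section \ref{diskfield}:

```python
import math

SIGMA_T = 6.652e-25    # Thomson cross section [cm^2]
M_P = 1.673e-24        # proton mass [g]
G = 6.674e-8           # gravitational constant [cgs]
C_LIGHT = 2.9979e10    # speed of light [cm s^-1]

def critical_eddington_ratio(N_H):
    """lambda_Edd(N_H) = (2/3) * sigma_T * N_H: a_AGN exactly balances gravity."""
    return (2.0 / 3.0) * SIGMA_T * N_H

def radial_acceleration(d, M_BH, L_AGN, N_H, a_ir=0.0):
    """ddot(x) = -G M_BH/d^2 + k_cl L_AGN/(4 pi c d^2) + a_ir  [cm s^-2].
    a_ir is a placeholder for the frequency-integrated infrared term."""
    k_cl = 1.5 / (M_P * N_H)     # Eq. (opacity)
    return -G * M_BH / d ** 2 + k_cl * L_AGN / (4.0 * math.pi * C_LIGHT * d ** 2) + a_ir
```

At $\lambda_{\mathrm{Edd}}=\lambda_{\mathrm{Edd}}(N_{\mathrm{H}})$ the first two terms cancel identically, so any residual motion is set by the infrared term alone; for $N_{\mathrm{H}}=10^{23}$ cm$^{-2}$ the critical Eddington ratio evaluates to $\approx 0.044$.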
The dynamical equations have been integrated using the standard leapfrog algorithm. We tested its accuracy in reproducing stable orbits against higher-order integration methods and found that it performs equally well. We used an adaptive time step $dt= \eta \sqrt{\frac{4\pi }{3}\frac{d}{\max(a_i)}}$, with $i$ running over all three accelerations: gravity, AGN and infrared. This is a generalisation of the time step commonly used for systems interacting through $d^{-2}$ forces. The assumption is that $dt=\eta t_{c}=\eta/\sqrt{G\;\rho} = \eta (4\pi d/3a)^{1/2}$, where $t_{c}$ is the characteristic timescale of interaction, $\eta$ is a scaling factor, $\rho$ is the mass density, $G$ the gravitational constant, $d$ is the distance to the mass and $a$ is the acceleration, which refers to gravity in this case. In our case, we consider the smallest interaction scale out of gravity and the radiative forces, and we fix $\eta=0.003$. \subsection{Radiation pressure induced sub-Keplerian rotation} \label{subkep} We balance the initial velocity profile for the dusty particles by including the contribution of both AGN and IR radiation pressure. The azimuthal velocity as a function of the cylindrical coordinate $r=\sqrt{x^2+y^2}$ is \begin{equation} v_{\mathrm{\phi}} = \sqrt{\frac{GM_{BH}}{r}-r\;a_{\mathrm{rad};r}} \label{eq:vkeff} \end{equation} where $a_{\mathrm{rad};r}$ includes both the AGN and infrared radiative accelerations. Note that there are no stable orbits when the radiative acceleration exceeds the gravitational acceleration. Particle trajectories in this case are more likely to be driven outward, in a way that depends on the combined influence of the disk and the AGN contribution for moderately to very high column densities, and fully radially for the specific light obscuration case $N_{\mathrm{H}}=10^{22}$ cm$^{-2}$, where the AGN force strongly dominates.
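The integration scheme and the initial velocity prescription described above can be sketched as follows. This is a minimal illustration of our own with central gravity only; in the full problem the AGN and infrared radiative accelerations are added inside the same acceleration routine:

```python
import math

def adaptive_dt(d, a_mag, eta=0.003):
    """dt = eta * sqrt((4 pi / 3) * d / max_i a_i), generalising the d^-2 time step."""
    return eta * math.sqrt(4.0 * math.pi / 3.0 * d / a_mag)

def v_phi_init(r, GM, a_rad_r):
    """Eq. (vkeff): v_phi = sqrt(GM/r - r * a_rad_r); no bound orbit if negative argument."""
    arg = GM / r - r * a_rad_r
    return math.sqrt(arg) if arg >= 0.0 else None

def gravity(pos, GM=1.0):
    """Central 1/d^2 gravity; the radiative terms would be added here."""
    d = math.sqrt(sum(c * c for c in pos))
    return [-GM * c / d ** 3 for c in pos], d

def integrate(pos, vel, n_steps, GM=1.0):
    """Kick-drift-kick leapfrog with the adaptive time step."""
    acc, d = gravity(pos, GM)
    for _ in range(n_steps):
        dt = adaptive_dt(d, math.sqrt(sum(a * a for a in acc)))
        vel = [v + 0.5 * dt * a for v, a in zip(vel, acc)]   # kick
        pos = [p + dt * v for p, v in zip(pos, vel)]         # drift
        acc, d = gravity(pos, GM)
        vel = [v + 0.5 * dt * a for v, a in zip(vel, acc)]   # kick
    return pos, vel
```

A circular test orbit ($GM=1$, $r=1$) keeps its radius to much better than a percent over a few orbits, mirroring the stability test against higher-order integrators mentioned above.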
If we define the rotation curve as $v_{\phi}\propto r^{-\beta}$, we can find an analytical expression for the velocity exponent $\beta$ \begin{equation} \beta \equiv - \frac{\partial \ln{v_{\mathrm{\phi}}}}{\partial \ln{r}} = \frac{1}{2}\left[1-\frac{\partial \ln\left(1-a_{\mathrm{rad};r}/a_{\mathrm{g};r}\right)}{\partial \ln{r}}\right] \end{equation} which reduces to the Keplerian exponent $0.5$ when $a_{\mathrm{rad}}=0$ and has no solution when $a_{\mathrm{rad};r}>a_{\mathrm{g};r}$. Trajectories in the latter case are discarded from the simulations. In figure \ref{vkeff} we plot the resultant curve for a column density $N_{\mathrm{H}}=10^{24}$ cm$^{-2}$. A general property at this column density value is that velocities become very small approaching r$_\mathrm{sub}$, a behaviour further emphasized at higher $\lambda_{\mathrm{Edd}}$. This trend is inverted beyond 1.6 r$_\mathrm{sub}$, where there is a remarkable departure from Keplerian motion, again strongly enhanced with the Eddington ratio. The value approached for $\lambda_{\mathrm{Edd}}=0.15$ at large distances is $\beta=0.39$. \begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{beta_esponent.png} \caption{Velocity exponent $\beta$ at the equatorial plane for column density $N_{\mathrm{H}}=10^{24}$ cm$^{-2}$ and different values of $\lambda_{\mathrm{Edd}}$. The grey dashed line traces the Keplerian value $\beta=0.5$. All the curves intersect and turn sub-Keplerian at a distance of 1.6 r$_\mathrm{sub}$.} \label{vkeff} \end{figure} \section{Results}\label{section_results} \begin{figure*}[ht!] \begin{center} \includegraphics[width=\textwidth]{ricci.png} \end{center} \caption{Example of dust and gas configurations for different values of the Eddington ratio and column density. } \label{simulations} \end{figure*} \subsection{Impact of infrared radiation pressure} The inclusion of the IR emitting disk has two major effects.
It introduces a more geometrically complex force term, featuring a strong vertical component which breaks the radial symmetry. This is due both to the disk geometry itself and to the strong local temperature variation, with the hottest contribution coming from the sublimation radius. It also boosts the outflow acceleration, making a wind emerge even for high column density material. In figure \ref{simulations} we show a set of simulations adopting one of the ``typical'' parameter combinations for which the vertical acceleration dominates over the radial one, taking as an example $\lambda_{\mathrm{Edd}}=0.09$ and $N_{\mathrm{H}}= 2 \times 10^{23}$ cm$^{-2}$ (see section \ref{irwins}). We also show how changes in the parameter values lead to different structures. The aim is to provide a sense of the diversity of the possible dynamical configurations. At lower Eddington ratios, $\lambda_{\mathrm{Edd}}=0.04$ and $N_{\mathrm{H}}= 2 \times 10^{23}$ cm$^{-2}$, material is driven away radially, being predominantly subjected to radiation pressure from the AGN. The uplift is suppressed at higher column density $N_{\mathrm{H}}= 10^{24}$ cm$^{-2}$, where gravity strongly dominates and all orbits are confined in a compact thick structure. The typical scenario giving an infrared dominated wind appears when $\lambda_{\mathrm{Edd}}=0.09$ and $N_{\mathrm{H}}= 2 \times 10^{23}$ cm$^{-2}$, but the final configuration holds for any values coinciding with, or close to, the parameter space derived in section \ref{irwins}. The cone assumes a funnel-like shape, rising vertically from the inner edge of the disk. At the same time, the dust re-radiation puffs up the disk in the region $1-5$ $r_{\mathrm{sub}}$. At higher column densities $N_{\mathrm{H}}= 10^{24}$ cm$^{-2}$, most of the orbits are bound, but the higher Eddington ratio (and hence radiation pressure) causes them to accelerate to larger scale heights or eventually to escape radially after 1-2 orbits.
\subsection{Sub-Keplerian motion on parsec scales} We employed an initially sub-Keplerian velocity profile that takes into account radiation pressure corrections to the gravitational potential. Our logic is similar to that of \cite{chan1,chan2}, who showed that sub-Keplerian rotation is necessary for maintaining a long-lived torus in the presence of strong radiation pressure. In the present work we systematize this idea by establishing a prescription for the velocity profile based on the relative strength of the total infrared+AGN radiation pressure with respect to gravity. A consequence of our approach is that some trajectories are naturally ruled out, as the initial velocity in Eq. \eqref{eq:vkeff} is defined by a square root whose argument cannot be negative. This occurs every time the radiative acceleration experienced by the clouds is stronger than gravity, i.e.\ for material with very low column density or in a very high accretion state. Orbits in this case are unstable, likely turning into a wind. In figure \ref{vkeff} we have shown the resultant profile for $N_{\mathrm{H}}= 10^{24}$ cm$^{-2}$, as the strong gravitational force allows the orbits to remain bound within the disk for a large range of Eddington ratios. At this column density value, our simulations show that orbits maintain a sub-Keplerian rotation in the region 1.6-5 $r_{\mathrm{sub}}$ and become even more sub-Keplerian as $\lambda_{\mathrm{Edd}}$ increases. The velocity exponent at large distances is $\beta\simeq 0.39$ for $\lambda_{\mathrm{Edd}}=0.15$, consistent with the rotation curve of maser spots observed at sub-parsec scales in NGC 1068. The inner part $r<1.6$ $r_{\mathrm{sub}}$ is characterized by very small velocities, so that the actual force field will initiate an outflowing or inflowing motion. Specifically for this example, and based on our previous analysis, $\lambda_{\mathrm{Edd}}\gtrsim0.15$ is then required to observe an outflow for clouds with $N_{\mathrm{H}}= 10^{24}$ cm$^{-2}$.
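The velocity exponent introduced in section \ref{subkep} can also be checked numerically. The snippet below differentiates $v_{\phi}$ by central differences for a toy $a_{\mathrm{rad};r}/a_{\mathrm{g};r}$ profile of our own choosing (strongest at the inner edge, loosely mimicking the infrared field) and recovers the analytic $\beta$:

```python
import math

def f_ratio(r):
    """Toy a_rad/a_g profile (our assumption): strongest near r = 1 (in r_sub units)."""
    return 0.5 * math.exp(-(r - 1.0))

def v_phi(r, GM=1.0):
    """v_phi = sqrt((GM/r) * (1 - a_rad/a_g)), equivalent to Eq. (vkeff)."""
    return math.sqrt(GM / r * (1.0 - f_ratio(r)))

def beta_numeric(r, h=1e-6):
    """beta = -dln(v_phi)/dln(r) via central differences."""
    return -(math.log(v_phi(r * (1.0 + h))) - math.log(v_phi(r * (1.0 - h)))) / (2.0 * h)

def beta_analytic(r):
    """0.5 * (1 - dln(1 - f)/dln r), evaluated in closed form for the toy profile."""
    f = f_ratio(r)
    return 0.5 * (1.0 - r * f / (1.0 - f))
```

With this profile the rotation is sub-Keplerian ($\beta<0.5$) where the radiative support is significant, in qualitative agreement with the behaviour discussed above.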
\begin{figure*}[ht!] \begin{center} \includegraphics[width=0.9\textwidth]{aniso2.png} \caption{\label{Anisotropic} Comparison of the isotropic and anisotropic radiation field with $\eta_a=10^{2}$. The left panel shows the AGN radiation field for $\mathrm{N_{H}= 3 \times 10^{23}\;cm^{-2}}$ in the isotropic (top) and anisotropic (bottom) cases. The right panel shows the corresponding simulations, taking as an example $\lambda_{\mathrm{Edd}}=0.13$.} \end{center} \end{figure*} \subsection{Effects of anisotropic accretion disk} In the previous sections, the model assumed isotropic radiation from the central AGN. In more realistic situations, the flattened geometry of the central radiation source causes the emission to emerge anisotropically \citep[e.g.][]{1987MNRAS.225...55N}. Radiation hydrodynamical simulations suggest that the anisotropy of the AGN can be equally important as the Eddington ratio in determining the dynamics of the outflow and disk \citep{david}, emphasizing the importance of evaluating the effects of the anisotropy. Recently, \citet{ishibashi2} used a static, analytic scheme to link the geometry of nuclear outflows to the anisotropy as determined by the black hole spin. Using our radiation-dynamical simulations, we can investigate the impact of anisotropic AGN radiation on the emergence of dusty winds. Following \citet{david}, we modify the AGN acceleration \begin{equation} \label{agn_anisotropy} {a}_{\mathrm{AGN}}\xrightarrow{}{a}_{\mathrm{AGN}} f(\theta) \end{equation} where $f(\theta)$ is the anisotropy function, defined as \begin{equation} f(\theta)=\frac{1+a\cos\theta+2a\cos^2\theta}{1+2a/3} \end{equation} with $a=(\eta_a-1)/3$, introducing the parameter $\eta_a$ as the ``anisotropy factor'', equal to the ratio between the polar and equatorial fluxes.
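The anisotropy function has two built-in properties worth noting: the polar-to-equatorial flux ratio equals $\eta_a$ by construction, and its solid-angle average is unity, so the bolometric luminosity is conserved. A quick numerical check (our own sketch):

```python
import math

def anisotropy(theta, eta_a):
    """f(theta) = (1 + a cos(theta) + 2a cos^2(theta)) / (1 + 2a/3), a = (eta_a - 1)/3.
    theta is measured from the polar axis."""
    a = (eta_a - 1.0) / 3.0
    mu = math.cos(theta)
    return (1.0 + a * mu + 2.0 * a * mu * mu) / (1.0 + 2.0 * a / 3.0)

def solid_angle_average(eta_a, n=20000):
    """<f> = (1/2) * integral of f over mu = cos(theta) in [-1, 1] (midpoint rule)."""
    dmu = 2.0 / n
    return 0.5 * sum(anisotropy(math.acos(-1.0 + (i + 0.5) * dmu), eta_a) * dmu
                     for i in range(n))
```

Because the average is unity for any $\eta_a$, increasing the anisotropy redistributes rather than adds radiative momentum, concentrating it towards the poles.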
In the left panel of figure \ref{Anisotropic} we show how the radial profile of the AGN radiation pressure is modified when $\eta_a=10^{2}$ for $\mathrm{N_{H}= 3 \times 10^{23}\;cm^{-2}}$, while in the right panel we show the corresponding full simulations, i.e. infrared+AGN radiation pressure and gravity, for $\lambda_{\mathrm{Edd}}=0.13$. The introduction of the anisotropic AGN radiation field produces a change in the outflow opening angle, featuring a wider cone with respect to the isotropic case. This is caused by two effects: first, as the AGN radiation is reduced in the plane of the dusty disk, the sublimation radius moves closer to the AGN and the AGN radiation pressure at this inner radius remains the same. As the dusty disk retains its temperature profile (the inner radius is still equivalent to the sublimation temperature), the IR radiation field also remains the same. Second, when a particle is swept upwards, the AGN radiation pressure starts to increase because of the $\theta$-dependence of the radiation profile, introducing a stronger radial component at the same scaled position as compared to the isotropic case. Therefore, the radial radiation pressure component from the AGN will be stronger than the more vertical component of the infrared radiation pressure, resulting in a wider cone. The disk configuration is not significantly modified, as the AGN radiation pressure does not penetrate deeply. Therefore, even though the gravitational force at the sublimation radius is larger in the anisotropic than in the isotropic case, the disk dynamics remain similar. 
\begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{dust_properties3.png} \caption{\label{dust_model} Simulations for $\mathrm{N_{H}= 3\times 10^{23}\;cm^{-2}}$ and $\lambda_{\mathrm{Edd}}=0.13$ for test runs with different dust properties.} \end{center} \end{figure*} \subsection{Effect of different dust properties} The infrared emission, and hence the infrared radiation pressure strength, is determined by the physical properties of the dust. Throughout this paper, we adopted the standard interstellar medium (ISM) dust grain model as set in \textsc{Cloudy}. It consists of graphite and silicate grains with sizes following the MRN distribution \citep{MRN}. The grain model also accounts for dust sublimation. In the following, we examine how our assumptions on the dust composition influence the properties of the dusty wind. For that, we set $\mathrm{N_{H}= 3\times 10^{23}\;cm^{-2}}$ and $\lambda_{\mathrm{Edd}}=0.13$. As a first test, we perform a simulation where we turn off dust sublimation. In this ``no sublimation'' run, we allow dust to overheat and exist beyond its critical sublimation temperature. In figure~\ref{dust_model}, we compare the standard ISM simulation (left) with the case of no dust sublimation (middle panel). As dust can heat to higher temperatures than before, the emitted infrared flux (and hence the infrared radiation pressure) will dramatically increase, since $p_\mathrm{rad} \propto T^{4+\gamma}$ \citep[with $\gamma$ being the power-law index of the drop of dust absorption efficiency in the near-IR as defined by][]{1987ApJ...320..537B}. With respect to the ``ISM dust'' run where dust sublimation is included, the disk appears much thicker, as the orbits of particles are puffed up to larger scale heights. At the same time, the wind cone becomes wider, as particles at larger distances from the sublimation radius are (mostly radially) driven into the wind.
Second, we consider a dust model that accounts for sublimation but includes large graphite grains only. This is motivated by near-infrared observations of nearby AGN that find high dust emissivities \citep[e.g.][]{kishimoto2011b,2020A&A...635A..92G}. Results are shown in figure \ref{dust_model} (right panel). Since graphites have on average larger opacities than the corresponding ISM dust composition, this will result in an enhanced infrared radiation pressure in both the disk and wind, as more radiation is re-emitted at the same temperature. This causes a stronger radial pressure and a wider cone, similarly to the run without dust sublimation, as the launching region becomes larger in the same manner. The disk is also thicker than for the standard ISM case, but the dynamics are less affected than in the ``no sublimation'' case. \section{Discussion}\label{section_discussion} \subsection{Comparison to radiation-regulated obscuration models of AGN} A recent X-ray study of a large sample of local AGN found that the obscuration covering factor strongly depends on the Eddington ratio \citep{ricci2017}. They conclude that radiation pressure on dusty gas is the main mechanism driving the distribution of the circumnuclear material, favouring a unifying radiation-regulated obscuration model \citep[see also][]{hoenig2019}. In particular, a constant Compton-thick obscuration ($\mathrm{N_{\mathrm{H}}>10^{24}\; cm^{-2}}$) is found with a small covering factor, together with a Compton-thin obscuration varying with the Eddington ratio. The latter has a large covering factor when $\lambda_{\mathrm{Edd}}<10^{-1.5}$ and then drops at larger $\lambda_{\mathrm{Edd}}$, with most of the material found in the form of an outflow. In the framework of Seyfert-like Eddington ratios, the elements emerging from our simulations strongly favour the obscuration structure proposed by \citet{ricci2017} and can be examined using figure \ref{simulations} as a reference.
Our choice to have a velocity profile dependent on radiation affects low column density material the most. Indeed, it is not possible to have bound orbits within the disk for $r \lesssim 5$ $r_{\mathrm{sub}}$ when $\mathrm{N_{H}\simeq 10^{22}\;cm^{-2}}$, since those particles are strongly subjected to the AGN radiation pressure and the square-root term in Eq. (\ref{eq:vkeff}) would become negative. In such a strong AGN radiation field, particles are likely to be driven radially outward, thus giving a larger covering factor of low column density material, as observed in \citet{ricci2017}. At moderate column density, $\mathrm{N_{H}\simeq 10^{23}\;cm^{-2}}$, the infrared radiation pressure becomes effective and polar outflows start to emerge. At the same time, higher column density material experiences a stronger gravitational pull, which keeps material bound within the disk, rotating according to the previously discussed sub-Keplerian profile. At larger $\mathrm{N_{H}}$, the uplift is effectively suppressed in cases where the Eddington ratio is in the Seyfert regime or below. Matter with $\mathrm{N_{H}\simeq 10^{24}\;cm^{-2}}$ will generally settle in the disk plane, forming a low covering factor of very Compton-thick material, as observed by \citet{ricci2017}. The idea that radiation pressure regulates the AGN obscuration properties has been further investigated \citep{ricci2017} by analysing how the observational data populate the ``$\mathrm{N_{H}}$-$\lambda_{\mathrm{Edd}}$'' plane, defined by the column density versus the Eddington ratio \citep[e.g.][]{fabian1}. AGN seem to avoid a wedge-like region starting at $\lambda_{\mathrm{Edd}}\sim10^{-1.5}$ and $\mathrm{N_{\mathrm{H}}\sim10^{22}\; cm^{-2}}$.
\citet{ishibashi} used an analytic model of AGN and infrared radiation pressure based on a spherical wind (and an approximation for the effect on clouds) and found that the forbidden region is dominated by strong radiation pressure, probably clearing out the material from around the AGN. We can test these analytic results with our dynamical model. Indeed, as shown in the previous sections, the parameter range of the ``forbidden region'' agrees with the parameters for which we found infrared-induced outflows to emerge. \subsection{Comparison to observations and models of Circinus} The idea that radiative feedback is driving the obscuration in AGN also affects the emitting dusty outflow structure, not just the obscuration properties. The clear detection of polar emission in Circinus \citep{tristram2014} and other sources served as key motivation for the presented study. The latest radiative transfer model of the high angular resolution data of Circinus is based on a compact dusty disc with a dusty, hyperbolic cone \citep{circinus,circinus2}. We tested the hyperboloid wind scenario of Circinus using the $\mathrm{\lambda_{Edd}}$ and $\mathrm{N_{H}}$ inferred in \cite{circinus2}. The authors assume a line-of-sight column density of $\mathrm{N_{H} \gtrsim 10^{24} \; cm^{-2}}$. We consider single clumps with $\mathrm{N_{H}=5 \times 10^{23} \; cm^{-2}}$, with the number of clouds along the equatorial plane being $N_{0}=7$. This provides a value consistent with the one estimated in \cite{circinus2}. The Eddington ratio reported is $\mathrm{\lambda_{Edd}=0.2}$. Based on the arguments given in section \ref{irwins}, our simulation will specifically consider $\mathrm{\lambda_{Edd}=0.22}$, as this illustrates the domain where the infrared radiation pressure dominates for clouds with $\mathrm{N_{H}=5 \times 10^{23} \; cm^{-2}}$. Results of our simulations are shown in figure \ref{circinus} with different inclinations with respect to the disk plane.
Overall, the structure achieved agrees well with the disk + hyperboloid polar wind scenario as depicted in \cite{circinus}. \begin{figure}[t!] \centering \includegraphics[trim={0 0 0 2cm},clip,width=0.46\textwidth]{circinus.png} \caption{Three-dimensional views of the proposed configuration for the Circinus-like structure, for the edge-on case ($Top$) and an inclination above the disk plane of 15$^{\circ}$ ($Bottom$). We used $\mathrm{\lambda_{Edd}=0.22}$ and a column density of $\mathrm{N_{H}=5 \times 10^{23} \; cm^{-2}}$. } \label{circinus} \end{figure} The half opening angle we found is 26$^{{\circ}}$ and the disk flaring angle is $\simeq 4^{{\circ}}$, both consistent with observations. The outer wall of the hyperboloid wind $r^{hyp}_{\mathrm{out}}$ is located in our simulations at 1.27 $\mathrm{r_{sub}}$, which corresponds to 0.05 pc.\footnote{Knowing that the inferred luminosity for Circinus is $L_{\mathrm{Circ}}=3.9 \times 10^{43}$ erg s$^{-1}$ \citep{tristram2014}, we can estimate the sublimation radius for Circinus to be 0.04 pc.} The latter is roughly 10 times lower than the value found in \cite{circinus}. We argue that the wind boundary we found can be pushed further away, to at least 0.2 pc, as we do observe an unstable wind region up to 5 $\mathrm{r_{sub}}$, where the trajectories receive a significant puff-up from radiation pressure or escape outward. This argument links back to the temperature profile assumed for the disk (sec. \ref{fig:temperature}). The distance 5 $\mathrm{r_{sub}}$ corresponds to a typical temperature of $\simeq 700$ K which, by Wien's law, corresponds in turn to emission peaking at $\sim$5 $\mathrm{\mu}$m. This traces exactly a key observational feature in AGN, namely the 3-5 $\mathrm{\mu}$m bump. Beyond 5-7 $\mathrm{r_{sub}}$ the temperature drops, which causes the infrared radiation pressure to drop as well, so that any uplift is suppressed.
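The Wien's-law argument above can be checked with a one-line calculation. The sketch below uses the standard Wien displacement constant; it simply verifies that dust near 700 K peaks in the 3-5 $\mathrm{\mu}$m range mentioned in the text.

```python
# Quick check of the Wien's-law argument: dust at ~700 K (the typical
# temperature near 5 r_sub) has a blackbody spectral peak within the
# 3-5 micron near-IR bump discussed in the text.
WIEN_B = 2897.8  # Wien displacement constant, micron * K

def peak_wavelength_um(T):
    """Peak wavelength (micron) of a blackbody at temperature T (K)."""
    return WIEN_B / T

print(f"T = 700 K  -> peak at {peak_wavelength_um(700):.1f} micron")
print(f"T = 1500 K -> peak at {peak_wavelength_um(1500):.1f} micron")
```

For 700 K this gives a peak near 4 $\mathrm{\mu}$m, i.e. within the observed 3-5 $\mathrm{\mu}$m bump; the hotter dust at the sublimation radius peaks near 2 $\mathrm{\mu}$m.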
Finally, Circinus is well known for its maser disk emission seen at $\sim 0.1 -0.4$ pc \citep{Greenhill}, with the disk rotation marginally sub-Keplerian. The observationally derived velocity exponent is roughly $\beta\sim0.45$. Interestingly, the distance at which these masers are found corresponds approximately to the region where we start to find bound orbits, i.e. 5 $\mathrm{r_{sub}}$, which can be considered a first point of consistency with observations. The value of the velocity exponent $\beta$ we find at a radius of 5 $\mathrm{r_{sub}}$ is $\beta \sim 0.33$, and it gradually approaches the Keplerian value at larger distances, producing approximately the observed mean velocity at the distances where most of the masers are located. These findings suggest that radiation pressure may affect the dynamics of the maser disks, which might explain the observed sub-Keplerian rotation velocities. \subsection{Relation to outflows emerging from dust-free regions} The presented dusty wind configuration might also create a link to the structure of outflows observed in AGN at much smaller, dust-free, scales. Interestingly, a very similar Circinus-like geometry of a funnel-shaped wind has been derived empirically by \citet{elvis2000} to explain the structure of outflows inside the sublimation radius. Additionally, it has been noted that the structure is subjected to luminosity-dependent changes, reducing the cylindrical part of the outflow or modifying its half opening angle. As suggested in \citet{hoenig2019}, both the high opacity and the mass content of the dust characterising the outflow at parsec scales are likely to define the boundary of the material closer to the accretion disk. If this is the case, the wind geometry reproduced by our simulations might as well provide insights into the observed outflows emerging from dust-free regions, making them dependent on the Eddington ratio rather than on the luminosity.
\section{Conclusions}\label{section_conclusions} We have presented the results of 3D numerical simulations of dusty gas clouds moving around an AGN, considering the infrared re-radiation of the hot dust itself. The aim of this work is to offer insights into the obscuration properties of AGN, with particular reference to the emergence of radiatively driven dusty winds. We first proposed a semi-analytical model based on a dense clumpy disk circumnuclear to a central AGN. We then considered the radiation pressure from the AGN in the optical/UV, gravity from the central black hole, and the radiation pressure in the infrared coming from the dusty disk. From our investigation, we have found several results: \begin{itemize} \item Infrared radiation pressure from a hot disk is sufficient to produce a polar wind in the hot inner regions of AGN and an overall puff-up along the entire disk surface. \vspace{2pt} \item The IR radiation pressure is most effective at launching a wind around a critical limit where the AGN radiation pressure approximately balances gravity from the central black hole, so that the IR pressure from the disk is the dominant component of the effective force. \vspace{2pt} \item Radiation-dynamical simulations show that it is possible to have a stable rotating structure if initial sub-Keplerian velocities are assumed. Those sub-Keplerian disk velocities are a consequence of both optical and infrared radiation pressure. \vspace{2pt} \item Our model favors radiation-regulated obscuration scenarios: the amount of material observed and the covering factor are shaped by the combination of the Eddington ratio and the column density of dusty clouds in the AGN vicinity. \vspace{2pt} \item We have been able to qualitatively reproduce high-angular-resolution infrared observations and radiative transfer modelling of the AGN in the Circinus Galaxy. Specifically, we replicate the hyperboloid shape of the wind proposed by \citet{circinus2}.
\vspace{2pt} \item We discussed the impact of an anisotropic radiation field. This mainly affects the outflow configuration, resulting in a wider cone, while the disk remains unchanged. \vspace{2pt} \item The assumed dust model is a fundamental factor in determining the disk and wind configuration, as it affects the local infrared radiation field strength. \end{itemize} The model presented here has deliberately been kept simple in order to highlight the role of infrared radiation pressure in shaping the AGN environment. An account of the fragmentation processes of optically thick clouds under radiation pressure is a separate issue beyond the scope of this paper. It is likely that dust clumps with the physical properties adopted in this work can survive in the strong radiation field of an AGN, as implied by \citet{namekata}. Hence, the basic characteristics of the resulting outflow should not be significantly changed by more elaborate treatments. \section*{Acknowledgements} This work has been supported by the European Research Council Grant ERC-StG-677117 DUST-IN-THE-WIND. \newpage \bibliographystyle{aasjournal}
\section{Introduction} \textit{Rheology} is a field of science dedicated to the study of the deformation and flow of matter, according to the etymological definition of the word coined by Prof. Bingham in 1920 \cite{Barnesycol1993}. \textit{Rheometry} is dedicated to determining experimentally the rheological properties of complex fluids under well-defined and simple flow conditions \cite{GalindoRosales2018}. These standard flows allow sharing and comparing rheological data, either for quality control of new formulations, for gaining insight into the internal structure of the material by comparing with data in the literature, or for validating constitutive equations \cite{Morrison2001}. Simple shear and extensional flows are complementary standard flow conditions typically considered for the characterisation of a complex fluid, since any complex flow can be split up into components of shear and extensional flows \cite{BarnesMaia2010}. The scientific instrument that allows imposing controlled flow conditions and measuring the response of the fluid is called a \textit{rheometer}. Simple shear flow conditions, which have only one non-zero component of the strain rate tensor \cite{Ovarlez2012}, can be achieved experimentally with relative ease, either by imposing a pressure difference in a closed channel/pipe (pressure-driven flow) or by imposing a relative velocity between two solid surfaces (drag flow) \cite{Macosko1994}; for this reason, shear rheometry was developed earlier than extensional rheometry \cite{Birdetal1987a,Galindoetal2013}. Pressure-driven rheometers are very useful for determining the viscosity of highly viscous materials; nevertheless, the lack of homogeneity in the deformation prevents their use for the characterisation of time-dependent behaviours \cite{Dealy1998}.
Moreover, rheometers based on drag flows, and particularly rotational rheometers, are much more versatile because they allow imposing different flow kinematics while preserving a homogeneous deformation in the fluid sample, consequently providing many different material functions; this explains their dominant presence in rheology laboratories worldwide. Rotational rheometers are equipped with a set of geometries, typically plate-plate, cone-plate and concentric cylinders, each of them having several key features that make them ideal for different kinds of fluids and different flow conditions. Concentric cylinders are adequate for low shear rates and low viscosity fluids, while the cone-plate allows a homogeneous shear rate throughout the whole sample volume, and the plate-plate allows different gaps to be used and very high shear rate values to be reached \cite{Mezger2002,Haake_libro}. All of them allow reaching reliable flow viscosity curves under steady shear flow. These steady shear viscosity curves are obtained by shearing the fluid sample at different shear rates until a steady state shear stress response is recorded, or vice versa. This is the typical curve, giving the viscosity as a function of shear rate, $\eta=f\left(\dot\gamma\right)$, used for characterising any complex fluid and fitting the corresponding Generalised Newtonian Fluid (GNF) model, e.g. Carreau, Cross, Bingham, etc. \cite{Morrison2001}, that will later be used in numerical studies. Although this is a correct methodology for characterising complex fluids and predicting their flow behaviour under steady flow conditions, it may produce misleading results when used for predicting the transient response of these fluids.
GNF models assume that there is no time delay between the applied shear rate and the change in the viscosity of the fluid; nevertheless, most complex fluids, from colloidal suspensions to polymer solutions, show memory effects, exhibiting a change in the viscosity that is not synchronised with the application of the shear rate. In other words, they exhibit a viscoelastic response, where the elastic component is represented by the response in phase with the deformation ($G'$) and the viscous component is given by the response in phase with the rate of deformation ($G''$). This is very relevant in the case of shear thickening fluids (STFs), which are typically used in shock absorbing applications where the flow conditions are intrinsically transient \cite{GalindoRosales2015326,FJGRLCDPatent,galindoApplSci2016}. Although STFs exhibit both viscoelastic moduli ($G'$ and $G''$) \cite{Goede2019}, they are typically characterised under steady shear flow conditions, and their viscosity is the key parameter used for designing shock absorbing devices \cite{GURGEN2016312, GURGEN2018, KHODADADI2019643}. It is hard to correlate the mechanical response of the shock absorbers at very short time scales with the rheological information obtained under steady flow conditions, especially since it has been widely reported that STFs cannot instantaneously change their rheological properties from liquid-like to solid-like due to their viscoelastic nature. The paradigm is currently changing, and new studies are being published reporting that the characterisation of STFs for improving the mechanical properties of composites should be done in terms of the \textit{instantaneous viscosity} instead of the steady state viscosity \cite{Pinto2017}. However, the problem is to determine how instantaneous the \textit{instantaneous viscosity} of an STF measured in a rotational rheometer actually is.
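The definitions of $G'$ and $G''$ given above can be made concrete with a minimal sketch of how the moduli are extracted from small-amplitude oscillatory shear data. The formulas are the standard textbook ones; the numerical values below are purely illustrative.

```python
import math

# Minimal sketch (illustrative, not from the paper): extracting the
# viscoelastic moduli G' and G'' from small-amplitude oscillatory shear.
# For a strain gamma(t) = gamma0*sin(w*t), the stress response lags by a
# phase angle delta: sigma(t) = sigma0*sin(w*t + delta). Then
#   G'  = (sigma0/gamma0)*cos(delta)   (in phase with the deformation)
#   G'' = (sigma0/gamma0)*sin(delta)   (in phase with the rate of deformation)
def moduli(sigma0, gamma0, delta):
    """Return (G', G'') from stress/strain amplitudes and phase angle (rad)."""
    g_star = sigma0 / gamma0  # magnitude of the complex modulus
    return g_star * math.cos(delta), g_star * math.sin(delta)

# delta = 0 -> purely elastic solid; delta = pi/2 -> purely viscous liquid.
g_prime, g_dprime = moduli(sigma0=100.0, gamma0=0.01, delta=math.pi / 4)
print(f"G' = {g_prime:.0f} Pa, G'' = {g_dprime:.0f} Pa")
```

At $\delta=\pi/4$ the two moduli are equal, the crossover condition often used to mark a liquid-to-solid-like transition.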
In rotational rheometers, there are two major limitations when characterising the mechanical response of complex fluids at short time scales: the instrument inertia and the fluid inertia. In the seminal book chapter by Ewoldt et al. \cite{Ewoldt2015}, it is clearly stated that instrument inertia limits the maximum frequency at which a rotational rheometer can provide reliable data ($\omega<\sqrt{\frac{GF_{\gamma}}{IF_{\tau}}}$, where $G$ is either $G^\prime$ or $G^{\prime\prime}$, and $\frac{IF_{\tau}}{F_{\gamma}}$ is the instrument inertia associated with the measurement geometry); they also clearly expose that instrument inertia affects the minimum acquisition time needed to provide reliable data in step tests, due to the instrument acceleration at short time scales ($t>\sqrt{2\frac{\eta^{+}F_{\tau}I}{F_{\gamma}}}$, where $\eta^{+}$ is the transient shear viscosity \cite{nomenclature}). Additionally, one has to consider the fluid inertia associated with secondary flows, due to curved streamlines and high velocities, as well as the presence of a wave propagating through the sample volume as a result of viscous momentum diffusion, elastic shear waves, or both, whose wavelength $l$ should be much greater than the geometry gap $D$ in order to avoid this artifact ($l\gg D$). Thus, it becomes evident that conventional rheometers have the disadvantage of being too massive, resulting in inertial problems and limiting their range of operation to relatively low frequencies. New approaches would be required to determine the \textit{instantaneous viscosity}. Sliding plate rheometers, conversely to rotational rheometers, have been reported to be a successful approach for characterising shear thinning fluids under ``large, rapid, transient shear deformations'' \cite{Dealy1998,Giacomin1989,ORTMAN2011884}, allowing high frequencies to be reached when scaling down the gap size (sliding microrheometers) \cite{CLASEN20041, Moon2008, Verbaan2015}.
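The two instrument-inertia limits quoted above from Ewoldt et al. can be evaluated directly. The sketch below implements those two expressions; all numerical values for the moduli, inertia and geometry factors are illustrative assumptions, not specifications of any particular rheometer.

```python
import math

# Sketch of the two instrument-inertia limits quoted in the text:
#   maximum reliable oscillation frequency: w_max = sqrt(G*F_gamma/(I*F_tau))
#   minimum reliable time in a step test:   t_min = sqrt(2*eta_plus*F_tau*I/F_gamma)
# All numbers below (G, I, F_tau, F_gamma, eta_plus) are illustrative
# assumptions chosen only to show the orders of magnitude involved.
def omega_max(G, I, F_tau, F_gamma):
    """Upper frequency limit (rad/s) set by instrument inertia."""
    return math.sqrt(G * F_gamma / (I * F_tau))

def t_min(eta_plus, I, F_tau, F_gamma):
    """Shortest reliable acquisition time (s) in a step test."""
    return math.sqrt(2.0 * eta_plus * F_tau * I / F_gamma)

# Assumed example: G = 10 Pa, I = 2e-5 kg m^2, F_tau = 3e4, F_gamma = 10.
print(f"omega_max ~ {omega_max(10.0, 2e-5, 3e4, 10.0):.1f} rad/s")
print(f"t_min     ~ {t_min(1.0, 2e-5, 3e4, 10.0)*1e3:.0f} ms")
```

Even with these modest assumed values, the shortest reliable step-test time is of the order of hundreds of milliseconds, far longer than the millisecond scales relevant to impact, which motivates the alternative approaches discussed next.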
Thanks to the plane Couette flow conditions, it is also possible to perform non-mechanical measurements simultaneously, e.g. neutron scattering analysis. However, not everything is perfect in slide rheometry, and wall-slip issues may arise \cite{Hatzikiriakos1991}. Although slide rheometry looks like a good approach for fast transient measurements, to the best of the authors' knowledge it has never been used for characterising shear thickening samples at short time scales, probably due to problems related to keeping the gap size constant, overloading of the transducer, shear fracture or even wall slip \cite{Dealy1998}. The approach in the sliding cylinders rheometer is similar to that of the sliding plate rheometer, but it prevents the edge effects and bearing friction issues. The sliding cylinders rheometer shares the same principle as the falling rod viscometer \cite{Dealy1998}, which is considered a precise method for measuring the absolute viscosity of Newtonian fluids ranging from $10^{-3}$ to $10^{7}$ Pa$\cdot$s \cite{debruyn}. In both cases, when the relative gap between the cylinders is very small, there is no need to know the constitutive equation of the fluid to calculate the shear strain and shear rate, as in the sliding plates rheometer \cite{Dealy1998}. In 1948, Bikerman \cite{BIKERMAN194875} proposed the \textit{penetroviscometer} as a new viscometer for determining the viscosity of Newtonian fluids with a viscosity between $100$ and $100,000$ Pa$\cdot$s under steady state conditions, with a remarkably low error, below 3.5\%. The configuration is similar to the falling rod viscometer in the sense that the fluid is contained between two coaxial cylinders, the outer one fixed and the inner one movable; but they differ in that the area of contact between the liquid and the inner cylinder is not constant in the \textit{penetroviscometer}.
Another difference is that in the viscometer proposed by Bikerman, when the inner cylinder moves downward, the fluid is forced to flow upward through the annular gap between the two cylinders. Bikerman's analysis was done for measurements at steady state and, to the best of the authors' knowledge, it has never been assessed for the measurement of the transient shear viscosity of complex fluids at large, rapid, transient shear deformations. From the experimental point of view, it would be extremely easy to convert a standard drop weight impact machine into a penetroviscometer, just by recording the position of the tip of the inner cylinder with time and measuring the force by means of a piezoelectric force transducer located at the shaft of the inner cylinder. In this way, the deformation and the stresses in the fluid would be decoupled, as in separate motor-transducer instruments, helping to avoid the instrument inertia effect \cite{Ewoldt2015}. In this work, we perform a numerical analysis of the usefulness of the penetroviscometer-like viscometer for measuring transient viscosities of complex fluids under large, rapid, transient shear deformations. Our attention is given to fluids with a viscosity ranging from $10^{-3}$ to $10^{3}$ Pa$\cdot$s, and we analyse different potential experimental conditions, such as the ratio between the radii of the concentric cylinders, the velocity of the inner cylinder (in case it were a control parameter experimentally) and the initial position of the inner cylinder's tip. \section{Materials and methods} \subsection{Geometry and initial conditions} The geometry consists of a stationary cylindrical reservoir with an inner diameter $D$, and a sliding cylinder (inner cylinder) coaxial with the reservoir and with a smaller diameter ($d$), which moves inside it in the direction of gravity ($g$).
The origin of the coordinate system is located at the axis of the cylinders and at the same height as the interface between the two fluids before the experiment starts. A sketch of the geometry is shown in Fig. \ref{fig:1}. Three different geometries are considered for this study, based on three different blockage ratios (BR=$d/D=1/1.5,~ 1/3$ and $1/6$). \begin{figure}[t] \centering {\label{fig: Sketch}\includegraphics[width=0.75\linewidth]{Sketch.eps}} \caption{Sketch of the penetroviscometer. The volume of the reservoir filled with blue represents the air, while the red color represents the liquid to be tested. The reference of the coordinate system is located at the interface of the liquid before the impact.} \label{fig:1} \end{figure} At time $t=0$, the tip of the inner cylinder is considered to be submerged in the liquid to a depth $z_{0}$, with $z_{0}=0,~ d, ~2d,~ 16d$. Its initial velocity is 1.2 m/s, which is a typical velocity value for drop weight impact machines. Additionally, both fluids are considered to be at rest and the height of the liquid is $h_{0}$. \subsection{Governing equations} The governing equations are the mass conservation equation (Eq. \ref{eq: Continuity}): \begin{equation} \label{eq: Continuity} \dfrac{\partial \rho}{\partial t} + \nabla \cdot \rho U =0, \end{equation} \noindent and the momentum conservation equation (Eq. \ref{eq: Momentum}): \begin{equation} \label{eq: Momentum} \dfrac{\partial \left( \rho U \right)}{\partial t} + \nabla\cdot \left( \rho UU \right) =-\nabla P + \nabla \cdot \tau + \rho g + f_s, \end{equation} \noindent where $U$ is the velocity vector shared between the two fluids in the entire domain, $P$ is the pressure, $\tau=2\mu S - 2 \mu \left( \nabla \cdot U \right) I /3$ is the deviatoric viscous stress tensor, $S=1/2 \left[\nabla U+ \left( \nabla U \right)^T \right]$ is the strain rate tensor, and $I$ is the identity tensor.
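As a minimal illustration of the tensor algebra entering the momentum equation, the following sketch (our own illustration, not part of the paper's solver) builds the strain rate tensor $S$ and the deviatoric stress $\tau$ from an arbitrary velocity gradient:

```python
import numpy as np

# Sketch of the constitutive relations in the momentum equation above:
#   S   = (grad_U + grad_U^T) / 2                 (strain rate tensor)
#   tau = 2*mu*S - (2/3)*mu*(div U)*I             (deviatoric viscous stress)
# The velocity gradient used below is an arbitrary illustrative example.
def deviatoric_stress(grad_U, mu):
    S = 0.5 * (grad_U + grad_U.T)
    div_U = np.trace(grad_U)  # divergence of U is the trace of grad_U
    return 2.0 * mu * S - (2.0 / 3.0) * mu * div_U * np.eye(3)

# Simple shear, du_x/dy = 10 s^-1, in a fluid with mu = 1 Pa.s:
grad_U = np.zeros((3, 3))
grad_U[0, 1] = 10.0
tau = deviatoric_stress(grad_U, mu=1.0)
print(tau)  # shear component tau_xy = mu * du_x/dy = 10 Pa; trace is zero
```

For this incompressible simple-shear example the trace of $\tau$ vanishes and the only non-zero components are the symmetric shear stresses $\tau_{xy}=\tau_{yx}=\mu\,\mathrm{d}u_x/\mathrm{d}y$.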
The liquid-air interface is treated with the volume of fluid (VOF) method, a powerful method to approximate free boundaries in finite-difference numerical simulations. It was proposed by Hirt and Nichols \citep{HirtN1981} and uses a scalar function ($\alpha$), called the volume fraction, to define whether the region is occupied by the liquid ($\alpha=1$), empty of that liquid ($\alpha=0$) or corresponds to a free surface ($0<\alpha<1$). Therefore, Eq. \ref{eq: Continuity} and Eq. \ref{eq: Momentum} are solved in combination with the transport equation for the volume fraction (Eq. \ref{eq: MixtureFraction}): \begin{equation} \label{eq: MixtureFraction} \dfrac{\partial \alpha}{\partial t} + \nabla \cdot \left( U \alpha \right)= 0, \end{equation} \noindent with $\alpha$ varying in the range $0 \leq \alpha \leq 1$. The surface tension force $f_s$ is calculated as follows \cite{Andersson2010}: \begin{equation} f_s=\sigma\left( \nabla \cdot \left( \dfrac{\nabla \alpha}{|\nabla \alpha|} \right) \right) \left( \nabla \alpha \right), \end{equation} \noindent where $\sigma$ is the surface tension coefficient and $\nabla \alpha=n$ is the vector normal to the interface \cite{Andersson2010}. The fluid properties, namely the density and the dynamic viscosity, are obtained at each computational cell using the volume fraction: \begin{equation} \rho=\alpha \rho_l+ \left( 1 - \alpha \right) \rho_a, \end{equation} \begin{equation} \mu=\alpha \mu_l+ \left( 1 - \alpha \right) \mu_a, \end{equation} \noindent where the indexes $l$ and $a$ refer to the liquid and the air, respectively. \subsection{Boundary conditions} Fig. \ref{fig:2} displays a slice of the computational domain and the boundaries. A no-slip boundary condition with zero velocity is imposed at the side and bottom walls of the reservoir. At the lateral surface and at the tip of the sliding cylinder, a no-slip boundary condition is again imposed, but with the velocity given by $v(t)$.
For the velocity at the outlet \codebox{pressureInletOutletVelocity} is imposed, in which zero-gradient is applied and the velocity is obtained from the patch-face normal component of the internal-cell value \cite{Weller1998}.\\ \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{boundaryConditions_all.eps} \caption{A slice of the computational domain and the boundaries, which are shown with different colors, for BR=1/3 and $z_0=d$.} \label{fig:2} \end{figure} For the pressure, a \codebox{zeroGradient} boundary condition is applied at all the boundaries except at the outlet, where a total pressure equal to zero is set. For the mesh motion, the velocity $v(t)$ is given to the sliding cylinder and its tip in the \codebox{pointMotionUz} file, while a no-slip boundary condition (\codebox{fixedValue uniform (0 0 0)}) is set at the reservoir. A uniform fixed value equal to zero is also used at the bottom and outlet. Three different functions for the sliding cylinder velocity $v(t)$ have been considered: \begin{itemize} \item Constant: $v(t)=1.2$ m/s; \item Linear: $v(t)=1.2-29.09 t$ m/s; \item Parabolic: $v(t)=1.2 -1.84\cdot 10^{-4}t -705 t^2$ m/s. \end{itemize} These velocity profiles have been considered based on the results of some preliminary experiments \cite{Rafael2017}, where the initial impact velocity was 1.2 m/s and the time evolution of the impactor's velocity followed a parabolic deceleration until it was stopped after $\sim40$ ms. The linear velocity profile was defined considering the same initial velocity and the same 40 ms stopping time (Fig. \ref{fig:velocityprofiles}). \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{velocityProfilesImpactor.eps} \caption{Different velocity profiles $v(t)$ considered for the impactor in the assessment of the penetroviscometer to measure the transient viscosity of the liquid.} \label{fig:velocityprofiles} \end{figure}\\ \begin{table}[h!]
\caption{Computational domain length, consisting of the length of the sliding cylinder at $t=0$ s and the length downstream of the impactor, number of cells in the stream-wise direction, and the ratio of the smallest to the largest cell ($R_c$) for the cases with $z_0=0$ and $z_0 \neq 0$.} \centering \begin{tabular}{c c c} \hline\hline Impactor's velocity & $L_{impactor_{t_0}}$ & $L_{downstream}$ \\ [0.5ex] & [cells, $R_c$] & [cells, $R_c$] \\ [0.5ex] \hline constant ($z_0=0$) & $10d$[320,0.8] & $16.4d$[80,0.05] \\ linear ($z_0=0$)& $5d$[160,0.8] & $16.4d$[80,0.05] \\ parabolic ($z_0=0$)& $6.7d$[213,0.8] & $16.4d$[80,0.05]\\ [0.5ex] \hline parabolic: $z_0=d$ & $7.7d$[120,0.8] & $16.4d$[40,0.05]\\ [0.5ex] parabolic: $z_0=2d$ & $8.7d$[136,0.8] & $16.4d$[40,0.05]\\ [0.5ex] parabolic: $z_0=16d$ & $22.7d$[320,0.8] & $16.4d$[40,0.05]\\ [0.5ex] \hline linear (doubled resolution) & $5d$[320,0.8] & $16.4d$[160,0.05]\\[0.5ex] \hline \end{tabular} \label{table: domain sizes} \end{table} \subsection{Numerical considerations} OpenFOAM 2.4.0 is used to numerically model the fluid flow between the two cylinders. Among OpenFOAM's solvers, \codebox{multiphaseInterDyMFoam} is chosen, as it imposes the motion of the sliding cylinder (impactor) inside the reservoir through a dynamic mesh and solves the Navier-Stokes equations for a multiphase flow, capturing the interface between the fluids with the volume of fluid (VOF) method discussed above. The two-phase flow consists of air ($\mu=1.8375\cdot10^{-6}$ Pa$\cdot$s and $\rho=1.225$ kg/m$^{3}$) and the liquid. The simulations are carried out starting with a very viscous liquid (silicon-1000), continued with a much less viscous fluid (silicon-1), and completed with water. The key properties of the three different liquids are given in Table \ref{table: liquids}.
\begin{table}[h!] \caption{Physical properties of the working fluids used in this study.} \centering \begin{tabular}{c c c c} \hline\hline \multirow{2}{*}{Fluid}& Dynamic viscosity & Density & Surface tension \\ [0.5ex] & $\mu$ [Pa$\cdot$s] & ($\rho$ [kg/m$^{3}$]) & ($\sigma$ [mN/m])\\ [0.5ex] \hline Silicon-1000 & 1000 & 970 & 35 \\ Silicon-1 & 0.970 & 970 & 35 \\ Water & 0.000997 & 997 & 70 \\ \hline \end{tabular} \label{table: liquids} \end{table} Cylindrical polar coordinates are considered ($r,z,\theta$). The structured mesh is generated using the blockMesh utility. The cross-section of the sliding cylinder is created and meshed as a combination of a $d/2 \times d/2$ square at the center, with a resolution of $20 \times 20$ cells, and the region between each side of the square and the impactor's circular edge is covered by 10 cells for the cases with BR=1/1.5 and $z_0=0$. A coarser grid is used for the cases with $z_0 \neq 0$: the same square size ($d/2 \times d/2$) is covered by $12 \times 12$ cells and the area between the square and the inner cylinder's edge is covered by 6 cells for the case with BR=1/1.5. The grid spacing is kept the same for the other two blockage ratios. Fig. \ref{fig:numericaldomain} shows some details of the numerical domain, and Table \ref{table: domain sizes} provides the domain length, the number of cells in the stream-wise ($z$) direction, and the ratio of the smallest to the largest cells. While the grid is uniform in the radial direction, a constant stretch ratio is applied in the stream-wise direction ($z$) to have the highest resolution at the impactor's tip, where the air/liquid interface is located for $z_0=0$. The simulations are carried out using a fixed time step $\Delta t=1.375$ $\mu$s, with a total simulation time of $41$ ms. Euler time integration is applied and the data is printed every $1.375$ ms. The Gauss linear discretization scheme is used for the derivatives, except for $\nabla \cdot \left (U \alpha \right)$, for which the Van Leer divergence scheme is employed.
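The three impactor velocity laws listed in the boundary-conditions subsection can be checked with a short numerical sketch, confirming that the linear and parabolic profiles both start at 1.2 m/s and decelerate to rest after roughly 40 ms, as stated in the text:

```python
# Sketch checking the three imposed impactor velocity profiles from the
# boundary-conditions subsection (coefficients taken from the text).
def v_constant(t):
    return 1.2                                  # m/s

def v_linear(t):
    return 1.2 - 29.09 * t                      # m/s, t in s

def v_parabolic(t):
    return 1.2 - 1.84e-4 * t - 705.0 * t**2     # m/s, t in s

t_stop = 0.0412  # s, ~40 ms stopping time quoted in the text
print(f"linear:    v({t_stop*1e3:.0f} ms) = {v_linear(t_stop):+.4f} m/s")
print(f"parabolic: v({t_stop*1e3:.0f} ms) = {v_parabolic(t_stop):+.4f} m/s")
```

Both non-constant profiles are within a few mm/s of zero at $t\approx41.2$ ms, consistent with the $\sim$40 ms deceleration observed in the preliminary experiments.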
\begin{figure}[h!] \centering \subfloat[Mesh corresponding to the fluid domain at the liquid-air interface.]{\label{fig:outlet}\includegraphics[width=0.4\linewidth]{outlet.eps}}\hfill \subfloat[Mesh corresponding to the fluid domain at the bottom of the reservoir.]{\label{fig:bottom}\includegraphics[width=0.4\linewidth]{bottom.eps}}\\ \subfloat[Mesh corresponding to the sliding cylinder's tip.]{\label{fig:impactorTip}\includegraphics[width=0.25\linewidth]{impactorsTip.eps}}\hfill \subfloat[A longitudinal cut view of the domain including the sliding cylinder.]{\label{fig:cutViewOfTheDomain} \includegraphics[width=0.6\linewidth]{halfDomain.eps}} \subfloat[Side view of the mesh used for the reservoir.]{\label{fig:sideViewReservoir} \includegraphics[width=\linewidth]{reservoir.eps}} \caption{Computational domain and mesh for the case in which BR=1/3 and $z_0=0$.} \label{fig:numericaldomain} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{U_coarseGrid_fineGrid.eps} \caption{Velocity profile near the sliding cylinder's tip for silicon-1000, BR=1/1.5, with a linear velocity imposed on the sliding cylinder; a comparison between the coarse and fine (doubled resolution) grids at $t \sim 2$ ms.} \label{fig: gridAnalysis} \end{figure} \begin{figure}[h!]
\centering \subfloat[]{\label{fig: Forces_silicon_BR1.5_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR1-5_silicon1000.eps}} \subfloat[]{\label{fig: Forces_silicon_BR3_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR3_silicon1000.eps}} \subfloat[]{\label{fig: Forces_silicon_BR6_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR6_silicon1000.eps}} \subfloat[]{\label{fig: Forces_silicon_BR1.5_nu0001_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR1-5_silicon1.eps}} \subfloat[]{\label{fig: Forces_silicon_BR3_nu0001_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR3_silicon1.eps}} \subfloat[]{\label{fig: Forces_silicon_BR6__nu0001_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR6_silicon1.eps}} \subfloat[]{\label{fig: Forces_water_BR1.5_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR1-5_water.eps}} \subfloat[]{\label{fig: Forces_water_BR3_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR3_water.eps}} \subfloat[]{\label{fig: Forces_water_BR6_z0=0}\includegraphics[width=0.33\linewidth]{Force_BR6_water.eps}} \caption{Contribution of the friction drag to the total force sensed by the inner cylinder for the cases in which $z_0=0$, for different blockage ratios and different velocity profiles imposed on the sliding cylinder: (a)-(c) silicon-1000, (d)-(f) silicon-1, and (g)-(i) water.} \label{fig:3} \end{figure} \begin{figure}[h!]
\centering \subfloat[Water, BR=1/1.5.]{\label{fig:alphaW-BR1.5}\includegraphics[width=0.9\linewidth]{Water_parabolic_alpha_BR1-5.eps}}\\ \subfloat[Water, BR=1/3.]{\label{fig:alphaW-BR3}\includegraphics[width=0.9\linewidth]{Water_parabolic_alpha_BR3.eps}}\\ \subfloat[Water, BR=1/6.]{\label{fig:alphaW-BR6}\includegraphics[width=0.9\linewidth]{Water_parabolic_alpha_BR6.eps}}\\ \subfloat[Silicon-1000, BR=1/3.]{\label{fig:alphaWS1000}\includegraphics[width=0.9\linewidth]{Silicon_parabolic_alpha_BR3.eps}} \caption{(a) to (c): Water splash when the impactor moves with a parabolic velocity profile, for blockage ratios 1/1.5, 1/3 and 1/6, respectively. (d) Silicon-1000 ($z_0=0$) for the case of BR=1/3.} \label{fig:5} \end{figure} \begin{figure}[h!] \centering \subfloat[]{\label{fig: Forces_silicon_parabolicVelocity_BR1.5_z0=d}\includegraphics[width=0.5\linewidth]{z0_BR1-5.eps}} \subfloat[]{\label{fig: Forces_silicon_parabolicVelocity_BR1.5_z0=2d}\includegraphics[width=0.5\linewidth]{z0_BR3.eps}} \subfloat[]{\label{fig: Forces_silicon_parabolicVelocity_BR1.5_z0=d2}\includegraphics[width=0.5\linewidth]{z0_BR6.eps}} \caption{Normalized force for silicon-1000 with an imposed parabolic velocity, at different blockage ratios and different $z_0\neq0$.} \label{fig:6} \end{figure} \begin{figure}[h!]
\centering \subfloat[]{\label{fig:alphaS1000d2BR1.5}\includegraphics[width=\linewidth]{Silicon_BR1-5_parabolic_alphad2.eps}} \subfloat[]{\label{fig:alphaS1000d2BR3}\includegraphics[width=\linewidth]{Silicon_BR3_parabolic_alphad2.eps}} \subfloat[]{\label{fig:alphaS1000d2BR6}\includegraphics[width=\linewidth]{Silicon_BR6_parabolic_alphad2.eps}} \caption{Influence of the blockage ratio on the contact between the inner cylinder and the liquid for the case of using silicon-1000, a parabolic velocity profile and $z_{0}=16d$: (a) BR=1/1.5, (b) BR=1/3 and (c) BR=1/6.} \label{fig:7} \end{figure} \section{Results and discussion} \subsection{Analysis of the forces} Consider a motor controlling the movement of the inner cylinder, with the response of the fluid measured experimentally by means of a piezoelectric one-component force sensor installed on the inner cylinder. The force measured by the sensor ($F_{s}$) would be the sum of different contributions: \begin{equation} \label{eq:FSI} F_{s}= F_{p} + F_{w}+F_{b}, \end{equation} \noindent where $F_{p}$ is the pressure drag, $F_{w}$ the friction drag and $F_{b}$ the buoyancy, all acting on the inner cylinder. Since it is experimentally impossible to decouple the contribution of these forces to the total measured force, if the penetroviscometer is intended to provide the instantaneous viscosity, the design should make the friction drag dominate over the other components, so that $F_{s}\approx F_{w}$. Fig. \ref{fig:3} shows the importance of the friction drag ($F_{w}$) with regard to the total force sensed by the inner cylinder ($F_{s}$; Eq. \ref{eq:FSI}) for the three different fluids, the three different blockage ratios and the three different velocity profiles imposed on the sliding cylinder.
Although, due to the normalisation, it cannot be seen that the values of the friction drag are also lower for the cases with lower viscosities, it is clearly observed that the friction drag is negligible for viscosities of the order of 1 mPa$\cdot$s, regardless of the blockage ratio and the velocity profile of the cylinder (Figs. \ref{fig: Forces_water_BR1.5_z0=0}-\ref{fig: Forces_water_BR6_z0=0}). This is supported by Figs. \ref{fig:alphaW-BR1.5}-\ref{fig:alphaW-BR6}, where the splash produced in the liquid avoids any contact with the lateral surface of the inner cylinder. Fig. \ref{fig:alphaWS1000} makes it evident that an increase in the viscosity of the liquid results in an increase of the contact area between the liquid and the inner cylinder and, therefore, $F_{w}$ becomes more relevant. Although $F_{w}$ increases with the viscosity and with smaller blockage ratios, none of the cases shown in Fig. \ref{fig:3} can be considered useful for calculating the instantaneous viscosity from the measurement of $F_{s}$, since $\frac{F_{w}}{F_{s}}\ll 1$ in all cases. In order to increase the value of $F_{w}$, it would be required to start the experiment with the tip of the inner cylinder submerged a distance $z_{0}>0$. Fig. \ref{fig:6} compares the friction and pressure drags for different values of $z_{0}$ and for the different blockage ratios, but only for the case of imposing a parabolic profile on the sliding cylinder and for silicon-1000. It can be observed that increasing $z_{0}$ up to a value of $16d$ results in a friction drag dominant over the pressure drag and the buoyancy for small blockage ratios (BR$\leq$1/3). Fig. \ref{fig:7} shows that it would be preferable to use a BR as small as possible, in order to minimize the interaction with the outer wall and assume $F_s \approx F_w$.
It can also be observed that, in these latter cases, there is an added-mass effect when the experiment is started, which results in an initial peak in the friction force that vanishes after $\sim1.3$ ms. \subsection{Inertial artefacts} When measuring with the penetroviscometer, inertia can interfere with the measurement of $F_{w}$, depending on the viscosity of the fluid sample and the geometry (i.e. $d$, $D$, BR and $z_{0}$), which may potentially lead to artificial results for the instantaneous viscosity (Section \ref{Section:InstVisco}). Figures \ref{fig:alphaW-BR1.5}-\ref{fig:alphaW-BR6} show the problems of measuring the instantaneous viscosity with the penetroviscometer for water-like fluids when $z_{0}=0$ at any BR, due to the lack of lateral contact between the sample and the inner cylinder. Figure \ref{fig:6} showed that increasing the viscosity and the value of $z_{0}$, and reducing the BR, will provide $F_{s}\approx F_{w}$; however, inertia problems arise at $t=0$ s due to the added-mass effect. An added-mass force is created when the mass of fluid surrounding a body is suddenly accelerated or decelerated. Unavoidably, additional fluid forces will act on the surfaces in contact with the fluid and the measurement of $F_{s}$ will be affected by these forces. Therefore, it is required to determine when these inertial artefacts occur and to define the range of reliability of the penetroviscometer for determining the instantaneous viscosity. This added-mass force will only appear at the beginning in those cases in which the sliding cylinder is already submerged ($z_{0}\neq 0$), and at the end of the experiment in those cases in which the velocity profile of the sliding cylinder is not constant.
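The magnitude of this start-up transient can be estimated with an order-of-magnitude sketch (an editorial illustration, not part of the original study). Here the added mass is taken as the liquid mass displaced by the submerged part of the cylinder, $m_{a}=\rho_{l}\pi d^{2}z_{0}/4$, the impactor is assumed to reach its peak velocity $v_{0}$ over a short ramp while the liquid is still at rest ($v_{l}\approx 0$), and $d$, $v_{0}$ and the ramp duration are assumed values:

```python
# Order-of-magnitude estimate of the initial added-mass force
# F_a ~ m_a * dv/dt, with m_a = rho_l * pi * d^2/4 * z0 the liquid mass
# displaced by the submerged cylinder. All numerical values are assumed
# for illustration (editorial sketch).
import math

rho_l = 970.0        # silicon-1000 density [kg/m^3]
d = 0.01             # inner-cylinder diameter [m], assumed
T = 41e-3            # experiment duration [s]
v0 = 10 * d / T      # peak velocity, assumed (travelled length ~10d)
t_ramp = 1e-4        # assumed start-up ramp duration [s]

def added_mass(z0):
    return rho_l * math.pi * d ** 2 / 4 * z0

def F_a_startup(z0):
    # F_a = m_a * d/dt(v - v_l) ~ m_a * v0 / t_ramp as t -> 0
    return added_mass(z0) * v0 / t_ramp

for z0 in (d, 2 * d, 16 * d):
    print(f"z0 = {z0 / d:4.0f} d  ->  F_a ~ {F_a_startup(z0):8.2f} N")
```

The linear scaling of $F_{a}$ with $z_{0}$ makes explicit why $z_{0}$ amplifies the initial added-mass peak.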
For the computation of this force, only the volume of the sliding cylinder submerged into the liquid will be considered, as the liquid is around three orders of magnitude denser than air: \begin{equation}\label{eq:addeddrag} F_{a}=m_{a} \frac{d}{dt}\left(v\left(t\right)-v_{l}\left(t\right)\right), \end{equation} \noindent where $m_{a}=\rho_{l}\frac{\pi d^2}{4}z\left(t\right)$ is the added mass, representing the equivalent mass of the flow field about the accelerating/decelerating body, $v \left(t\right)$ is the velocity of the inner cylinder and $v_{l}\left(t\right)$ is the velocity of the liquid surrounding it. It is also important to remember that $z\left(t\right)=z_{0}+\int_{0}^{t}v\left(t\right)dt$, and therefore $z_{0}$ acts as an amplifier of the added-mass force. From Eq.~\ref{eq:addeddrag} it can be observed that both at the very beginning, when $v_{l}\approx 0$ and $v(t)$ is at its maximum, and at the very end of the experiment, when $v_{l}$ is at its maximum and $v(t)\approx 0$, $|F_{a}|\gg F_{w}$. \subsection{Instantaneous viscosity}\label{Section:InstVisco} In order to calculate the instantaneous viscosity $\eta\left(t\right)$, it is only required to compute the shear stress at the wall of the inner cylinder $\tau_{w}\left(t\right)$ and the shear rate at the wall of the inner cylinder $\dot\gamma_{w}\left(t\right)$, and divide one by the other as in Eq. \ref{Eq:InstVisco}: \begin{equation} \eta\left(t\right)=\frac{\tau_{w}\left(t\right)}{\dot\gamma_{w}\left(t\right)} \label{Eq:InstVisco} \end{equation} Similarly to the calculation made in a rotational rheometer, in the penetroviscometer $\tau_{w}\left(t\right)$ is proportional to the force measured by the sensor $F_{s}\left(t\right)$ (Eq.
\ref{Eq:shearstress}): \begin{equation} \tau_{w}\left(t\right)=\frac{F_{s}\left(t\right)}{A\left(t\right)}=\frac{F_{s}\left(t\right)}{z\left(t\right)\pi d}=\frac{F_{s}\left(t\right)}{\left[z_{0}+\int^{t}_{0}v\left(t\right)dt\right]\pi d}, \label{Eq:shearstress} \end{equation} \noindent where the constant of proportionality ($\frac{1}{\left[z_{0}+\int^{t}_{0}v\left(t\right)dt\right]\pi d}$) depends exclusively on the experimental parameters $d$, $v\left(t\right)$ and $z_{0}$. \begin{figure}[t!] \centering \subfloat[$t=2.5$ ms]{\label{fig:velocprofileNum_t=1}\includegraphics[width=0.49\linewidth]{U_t1.eps}}\hfill \subfloat[$t=5$ ms]{\label{fig:velocprofileNum_t=2}\includegraphics[width=0.49\linewidth]{U_t2.eps}}\hfill \subfloat[$t=10$ ms]{\label{fig:velocprofileNum_t=3}\includegraphics[width=0.49\linewidth]{U_t3.eps}}\hfill \subfloat[$t=20$ ms]{\label{fig:velocprofileNum_t=4}\includegraphics[width=0.49\linewidth]{U_t4.eps}}\hfill \subfloat[$t=40$ ms]{\label{fig:velocprofileNum_t=5}\includegraphics[width=0.49\linewidth]{U_t5.eps}}\hfill \caption{Fluid velocity profile at different $z$-positions and different instants of time for the case of using silicon-1000, a parabolic velocity profile, $z_{0}=16d$ and BR$=1/6$.} \label{fig:velocityProfilesLiquid} \end{figure} \indent The same approach is applied to the shear rate, which will also be a function of the experimental parameters and of the velocity imposed on the inner cylinder. Fig. \ref{fig:velocityProfilesLiquid} shows the velocity profile within the fluid between the two cylinders at different times and at different $z$-positions in contact with the inner cylinder. It can be observed that the velocity profile does not depend on the $z$-position but only on the radial position $r$, and that it follows a quadratic expression ($v_{f}=a\left(r-\frac{d}{2}\right)^{2} + b\left(r-\frac{d}{2}\right) +c$).
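The closed-form coefficients reported below (Eqs. \ref{eq:a}--\ref{eq:c}) can be checked numerically against the outer no-slip condition and the flow-rate conservation of Eq. \ref{Eq:conservQ}. The following pure-Python check is an editorial sketch (the geometry and velocity values are illustrative); note that the printed coefficients correspond to an inner-wall fluid velocity of $+v(t)$, the overall sign being absorbed in Eq. \ref{Eq:InstVisco2}:

```python
# Numerical check of the closed-form profile coefficients a(t), b(t),
# c(t) in v_f(r) = a*(r-d/2)**2 + b*(r-d/2) + c. Verifies the outer
# no-slip condition v_f(D/2) = 0, the inner-wall speed |v_f(d/2)| = v,
# and the flow-rate condition Q = v*pi*d^2/4. Editorial sketch with
# illustrative (D, d, v) values.
import math

def coeffs(D, d, v):
    a = 8 * v * (D**2 + D*d - 5*d**2) / (D**4 - 2*D**3*d + 2*D*d**3 - d**4)
    b = -2 * v * (3*D**2 + 2*D*d - 11*d**2) / ((D - d) * (D**2 - d**2))
    c = v
    return a, b, c

def v_f(r, D, d, v):
    a, b, c = coeffs(D, d, v)
    s = r - d / 2
    return a * s**2 + b * s + c

def flow_rate(D, d, v, n=20000):
    # composite trapezoidal rule for Q = int_{d/2}^{D/2} v_f(r) 2*pi*r dr
    h = (D - d) / 2 / n
    q = 0.0
    for i in range(n + 1):
        r = d / 2 + i * h
        w = 0.5 if i in (0, n) else 1.0
        q += w * v_f(r, D, d, v) * 2 * math.pi * r
    return q * h

for D, d, v in [(0.06, 0.01, 1.0), (0.03, 0.01, 0.5)]:    # BR = 1/6, 1/3
    assert abs(v_f(D / 2, D, d, v)) < 1e-9 * v            # outer no-slip
    assert abs(abs(v_f(d / 2, D, d, v)) - v) < 1e-9 * v   # inner-wall speed
    assert abs(flow_rate(D, d, v) - v * math.pi * d**2 / 4) < 1e-8
print("no-slip and flow-rate conditions verified")
```

The quadratic profile and its coefficients are derived next.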
This velocity profile must satisfy the no-slip condition at both walls: $v_{f}\left(r=\frac{d}{2},t\right)=-v\left(t\right)$ and $v_{f}\left(r=\frac{D}{2},t\right)=0$; moreover, as the flow rate ($Q$) of fluid displaced by the tip of the inner cylinder must be conserved, the following conservation condition must also be satisfied: \begin{equation} Q=v\left(t\right)\pi d^{2}/4=\int^{D/2}_{d/2} v_{f}\left(r\right)2\pi r dr. \label{Eq:conservQ} \end{equation} \noindent In this way, the following linear system of equations must be solved in order to obtain the coefficients $a$, $b$ and $c$ of the velocity profile of the liquid: \begin{equation} \label{eq:system} \begin{bmatrix} \left(\frac{d}{2}\right)^{2} & \frac{d}{2} & 1 \\ \left(\frac{D}{2}\right)^{2} & \frac{D}{2} & 1 \\ \frac{1}{4}\left[\left(\frac{D}{2}\right)^{4}-\left(\frac{d}{2}\right)^{4}\right] & \frac{1}{3}\left[\left(\frac{D}{2}\right)^{3}-\left(\frac{d}{2}\right)^{3}\right] & \frac{1}{2}\left[\left(\frac{D}{2}\right)^{2}-\left(\frac{d}{2}\right)^{2}\right] \\ \end{bmatrix} \cdot \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} -v\left(t\right) \\ 0 \\ \frac{v\left(t\right)d^2}{8} \end{bmatrix} , \end{equation} \noindent which has the following solution: \begin{equation}\label{eq:a} a\left(t\right)=8v\left(t\right)\frac{D^2+Dd-5d^2}{\left(D^4-2D^3d+2Dd^3-d^4\right)}, \end{equation} \begin{equation}\label{eq:b} b\left(t\right)= -2v(t) \frac{\left( 3D^2+2Dd-11d^2 \right) }{\left( D-d \right) \left( D^2-d^2 \right)},\\ \end{equation} \begin{equation}\label{eq:c} c\left(t\right)=v(t). \end{equation} \begin{figure}[t!]
\centering \subfloat[$t=2.5$ ms]{\label{fig:velocprofileAnalyticalNumerical_t=1}\includegraphics[width=0.49\linewidth]{U1_numericalAnalytical.eps}}\hfill \subfloat[$t=5$ ms]{\label{fig:velocprofile_t=2}\includegraphics[width=0.49\linewidth]{U2_numericalAnalytical.eps}}\hfill \subfloat[$t=10$ ms]{\label{fig:velocprofile_t=3}\includegraphics[width=0.49\linewidth]{U3_numericalAnalytical.eps}}\hfill \subfloat[$t=20$ ms]{\label{fig:velocprofile_t=4}\includegraphics[width=0.49\linewidth]{U4_numericalAnalytical.eps}}\hfill \subfloat[$t=40$ ms]{\label{fig:velocprofile_t=5}\includegraphics[width=0.49\linewidth]{U5_numericalAnalytical.eps}}\hfill \caption{Comparison between the analytical and the numerical velocity profiles in the liquid at different $z$-positions for the case of using silicon-1000, a parabolic velocity profile, $z_{0}=16d$ and BR$=1/6$.} \label{fig:velocityProfilesAnVsNum} \end{figure} \noindent These coefficients depend on time ($t$), but not on the $z$-position. Fig. \ref{fig:velocityProfilesAnVsNum} compares the numerical and the analytical solutions for the velocity profile in the liquid contained between the two cylinders.\\ Once the velocity field in the fluid is known ($\vec{v_{f}}=v_{f}\left(r,t\right)\vec{e}_{z}$), the shear rate tensor can be calculated (Eq. \ref{eq:shearratetensor}). \begin{equation} \bar{\bar{\dot\gamma}}=\nabla\vec{v_{f}}+\nabla\vec{v_{f}}^{T}= \left(\begin{matrix} 0 & 0 & \frac{dv_{f}}{dr} \\ 0&0& 0 \\ \frac{dv_{f}}{dr} & 0 & 0\\ \end{matrix} \right) \label{eq:shearratetensor} \end{equation} Consequently, the shear rate at the wall of the inner cylinder is given by Eq. \ref{eq:shearratescalar}, which is only a function of the velocity of the inner cylinder and the dimensions of the concentric cylinders: \begin{equation} \label{eq:shearratescalar} \dot\gamma_{w}\left(t\right)=\left.
\frac{dv_{f}\left(t\right)}{dr} \right|_{r=\frac{d}{2}} =b\left(t\right)=-\frac{2v(t) \left( 3D^2+2Dd-11d^2 \right) }{\left( D-d \right) \left( D^2-d^2 \right)}. \end{equation} Thus, the instantaneous viscosity of Eq. \ref{Eq:InstVisco} can be rewritten as follows: \begin{equation} \eta\left(t\right)=-\frac{\tau_{w}\left(t\right)}{\dot\gamma_{w}\left(t\right)}=\frac{\left( D-d \right) \left( D^2-d^2 \right) F_s(t)}{2v(t) \left( 3D^2+2Dd-11d^2 \right) \left[z_{0}+\int^{t}_{0}v\left(t\right)dt\right]\pi d} \label{Eq:InstVisco2} \end{equation} Eq. \ref{Eq:InstVisco2} is also useful for defining the experimental setup of the penetroviscometer, as it contains all the involved experimental variables: $d$, $D$, $z_{0}$, $v$ and $F_{s}$. In this way, for example, once an estimate of the order of magnitude of the viscosity is available, considering BR $\sim 1/6$ and defining the velocity of the inner cylinder, it is possible to determine the required range of the force transducer. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{viscosity_error.eps} \caption{Relative error in the calculation of the viscosity by means of Eq. \ref{Eq:InstVisco2}.} \label{fig:12} \end{figure} \section{Conclusion and future works} Inspired by the penetroviscometer proposed by Bikerman\cite{BIKERMAN194875} more than 70 years ago, we have performed a numerical and analytical study to assess the usefulness of this kind of device for measuring the instantaneous viscosity curve of shear thickening fluids under impact conditions. To do so, we have considered Newtonian fluids ranging from $10^{-3}$ to $10^{3}$ Pa$\cdot$s, three different blockage ratios (BR=1/1.5, BR=1/3 and BR=1/6), three different impact velocity profiles (constant, linear and parabolic) and four different initial positions for the inner cylinder (at the air-liquid interface, $z_{0}=0$, and submerged into the fluid sample at $z_{0}=d$, $2d$ and $16d$).
From the experimental point of view, the fluid sample must be in contact with the lateral area of the inner cylinder in order to compute the shear stress at the wall, and this is only accomplished for highly viscous fluids ($\mu\gg1$ Pa$\cdot$s). Thus, the device is expected to be useful for shear thickening fluids. Additionally, in order to calculate the instantaneous viscosity, the shear stress over the lateral area of the inner cylinder must dominate over the pressure at the tip; the results reported in this work therefore recommend starting the experiments with the inner cylinder submerged into the fluid sample as much as possible, and making the gap between the two cylinders as large as possible. Moreover, inertial artefacts may be an issue, particularly at the beginning and at the end of the experiment, due to the added-mass effect; these experimental data should therefore be discarded. We ended up with an analytical expression (Eq. \ref{Eq:InstVisco2}) that is able to provide the instantaneous viscosity based on geometric parameters ($z_{0}$, $d$ and $D$), the velocity of the inner cylinder ($v\left(t\right)$) and the measured force ($F_{s}$). For the case of silicon-1000, a parabolic profile for the inner cylinder, $z_{0}=16d$, and BR=1/6, this device, using Eq. \ref{Eq:InstVisco2}, would be able to provide the instantaneous viscosity with an accuracy of $\approx 93\%$. In this sense, we consider that this device may provide useful experimental data to help in the development of impact protective devices and of new constitutive models accounting for the transient behavior of shear thickening fluids. However, in order to provide reliable data, the no-slip condition must be satisfied.
\section{Proof of \pref{lem:kahdisp}} \begin{proof} Let $A$ be the adjacency matrix of $W$. Let $P_{h-1}$ and $P_h$ be the orthogonal projections onto $X_{h-1}$ and onto $X_{h}$, respectively. Let $P_{\leq h-1}$ and $P_{\leq h}$ be the orthogonal projections onto $\textnormal{Ball}_{h-1}(X)$ and $\textnormal{Ball}_{h}(X)$, respectively. We need to show that \[ \frac{\|P_{h}g\|^2}{\|P_{h}s\|^2}\geq\frac{\|P_{h-1}g\|^2}{\|P_{h-1}s\|^2}. \] Call $A_h=P_{\leq h}AP_{\leq h}$ (so $A_h$ performs the adjacency operator on $\textnormal{Ball}_{h}(X)$). By the conditions of the lemma, we know that there are constants $\alpha,\beta$ and $\gamma$ such that \begin{equation}\label{eq:kahassume3} P_hA_hs=\gamma P_hs \end{equation} and \begin{equation}\label{eq:kahassume2} A_hP_hs=\alpha P_hs+\beta P_{h-1}s. \end{equation} By assumption, \begin{equation}\label{eq:kahassume} A_h s\leq \mu P_{\leq h-1}s+\gamma P_h s. \end{equation} Therefore, by applying $P_{\leq h-1}$ to both sides of \pref{eq:kahassume}, \begin{eqnarray*} P_{\leq h-1}A_hs&\leq& \mu P_{\leq h-1}s\\ &\leq&\mu P_{\leq h}s-\mu P_h s. \end{eqnarray*} Now we apply $A_h$ to both sides: \begin{align*} A_hP_{\leq h-1}A_hs&\leq \mu A_hs-\mu A_hP_{h}s\\ &\leq\mu A_hs-\mu(\alpha P_hs+\beta P_{h-1}s) & \textnormal{by }\pref{eq:kahassume2}\\ &\leq\left(\mu^2 P_{\leq h-1}+\mu(\gamma-\alpha)P_{h}-\mu\beta P_{h-1}\right)s.& \textnormal{by }\pref{eq:kahassume} \end{align*} Define the matrix $B:=\mu^2 P_{\leq h-1}+\mu(\gamma-\alpha)P_{h}-\mu\beta P_{h-1}-A_hP_{\leq h-1}A_h$. $B$ has no positive entries on the off diagonal. Take any eigenvector $\psi$ of $B$. Without loss of generality assume that $\psi$ has a positive entry. Then take $i=\argmax_u\psi(u)/s(u)$. As $\psi\leq (\psi(i)/s(i))s$, we have $(B\psi)(i)\geq \left(B(\psi(i)/s(i))s\right)(i)$. The quantity on the right is nonnegative, meaning that the eigenvalue associated with $\psi$ is nonnegative. As $\psi$ was arbitrary, $B$ is positive semidefinite.
Because $B$ is positive semidefinite, \begin{equation}\label{eq:quadform} g^*A_hP_{\leq h-1}A_hg\leq g^*\left(\mu^2 P_{\leq h-1}+\mu(\gamma-\alpha)P_{h}-\mu\beta P_{h-1}\right)g. \end{equation} For any orthogonal projection $P$, $P^2=P$. Therefore $g^*A_hP_{\leq h-1}A_hg=\|P_{\leq h-1}A_hg\|^2$, and \pref{eq:quadform} becomes \[ \|P_{\leq h-1}A_hg\|^2\leq \mu^2\|P_{\leq h-1}g\|^2+\mu(\gamma-\alpha)\|P_hg\|^2-\mu\beta\|P_{h-1}g\|^2. \] By assumption, $\|P_{\leq h}A_hg\|=\mu\|P_{\leq h}g\|$. Therefore \begin{equation}\label{eq:simpquadform} (\gamma-\alpha)\|P_hg\|^2\geq \beta\|P_{h-1}g\|^2. \end{equation} Moreover, as $A_h$ and $P_h$ are self adjoint, $s^*A_hP_hs=s^*P_hA_hs$, so $\alpha\|P_hs\|^2+\beta\|P_{h-1}s\|^2=\gamma\|P_hs\|^2$. Combining this with \pref{eq:simpquadform}, we obtain \pref{eq:kahlemma}. \end{proof} \section{Introduction} This paper is concerned with expander graphs, which are ubiquitous in theoretical computer science. A natural and highly well-studied quantity associated with a $d$-regular graph is its \emph{edge expansion}, defined as \[ \min_{|S|\leq \epsilon n} E(S,\overline S)/|S|, \] namely the minimum ratio of edges leaving a set $S$ to the size of $S$, over all $S$ of appropriately bounded size. While edge expansion is known to be intractable to compute, there are explicit constructions of good edge expanders, and it is closely related to the second largest magnitude eigenvalue of the adjacency matrix, also known as the \emph{spectral expansion} of a graph, via the expander mixing lemma and Cheeger's inequality \cite{Che1}. Spectral expansion is easily computable. In particular, an application of the expander mixing lemma proves that small enough sets in graphs with spectral expansion $o(d)$ have near-optimal edge expansion of $(1-o_d(1))d$.
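As a concrete illustration of the mixing bound (an editorial sketch, not part of the paper's argument), the inequality of \pref{lem:expander-mixing-lemma} can be checked exhaustively with $S=T$ on a small Ramanujan graph, the $3$-regular Petersen graph, whose adjacency eigenvalues are $3$, $1$ and $-2$, so that $\lambda(G)=2$:

```python
# Exhaustive check of the expander mixing lemma with S = T on the
# Petersen graph (3-regular, n = 10, lambda(G) = 2):
#   |e(S,S) - (d/n)|S|^2| <= lambda * |S|.
from itertools import combinations

n, d, lam = 10, 3, 2
# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i-(i+5)
edges = [(i, (i + 1) % 5) for i in range(5)]
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
edges += [(i, i + 5) for i in range(5)]

for k in range(n + 1):
    for S in combinations(range(n), k):
        Sset = set(S)
        # e(S,S) counts ordered pairs, so each edge inside S counts twice
        e = sum(2 for u, v in edges if u in Sset and v in Sset)
        assert abs(e - d / n * k ** 2) <= lam * k + 1e-9
print("expander mixing lemma holds for all", 2 ** n, "subsets")
```

The bound is loose for most subsets; it is sets far from random splits that approach it.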
A natural analog to edge expansion is \emph{vertex expansion}, defined as \[ \min_{|S|\leq \epsilon n} |\Gamma(S)|/|S| \] for some constant $\epsilon$, where $\Gamma(S)$ is the neighborhood of the set $S$ (potentially containing vertices of $S$). However, as difficult as edge expansion is to ascertain, vertex expansion has proven far more challenging. As witnessed by balls around a vertex, we cannot hope for vertex expansion greater than $d-1$. Therefore we call a graph a \textit{lossless vertex expander} if for every $\delta$ there exists an $\epsilon$ such that every set of size at most $\epsilon n$ has vertex expansion at least $d-1-\delta$. Lossless vertex expanders exist, since a random $d$-regular graph is one with high probability (see \cite[Theorem 4.16]{HLW} for a proof). However, no deterministic construction of such graphs is known. In an effort to understand lossless vertex expansion better and give explicit constructions, a natural question to ask is: \begin{displayquote} \emph{What properties of random graphs lead to lossless vertex expansion?} \end{displayquote} Since a random $d$-regular graph is near-Ramanujan with high probability \cite{Fri}, and since near-Ramanujan graphs have near-optimal edge expansion, it is natural to inquire whether spectral expansion has any implications for vertex expansion as well. Kahale \cite{Kah} showed that the spectral expansion gives a bound on the vertex expansion. Specifically, Ramanujan graphs (namely graphs with optimal spectral expansion) have vertex expansion at least $d/2$. While this is a nontrivial implication, it falls short of achieving the coveted \emph{losslessness} property. Kahale also proved that the bound of $d/2$ is tight. In particular, he exhibited an infinite family of near-Ramanujan graphs with vertex expansion $d/2$, which means spectral expansion alone is not sufficient for lossless vertex expansion.
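The ball obstruction above can be made concrete with a short computation (an editorial sketch): in the infinite $d$-regular tree, $|\textnormal{Ball}_h| = 1 + d\big((d-1)^h-1\big)/(d-2)$ and $\Gamma(\textnormal{Ball}_h) = \textnormal{Ball}_{h+1}$ (recall that neighborhoods here may contain vertices of the set itself), so the expansion ratio of balls decreases monotonically toward $d-1$:

```python
# Vertex expansion of balls in the infinite d-regular tree:
# |Ball_h| = 1 + d*((d-1)^h - 1)/(d-2), and Gamma(Ball_h) = Ball_{h+1},
# so Psi(Ball_h) = |Ball_{h+1}| / |Ball_h| tends to d - 1 from above.
def ball_size(d, h):
    # exact integer size of a radius-h ball in the d-regular tree
    return 1 + d * ((d - 1) ** h - 1) // (d - 2)

d = 4
ratios = [ball_size(d, h + 1) / ball_size(d, h) for h in range(1, 11)]
for h, ratio in enumerate(ratios, start=1):
    print(f"h = {h:2d}   Psi(Ball_h) = {ratio:.6f}")
assert all(r > d - 1 for r in ratios)                  # never below d - 1
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing
```

Balls in a finite graph of large girth behave like tree balls up to radius roughly half the girth, which is why $d-1$ is the natural target for losslessness.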
The occurrence of a copy of $K_{2,d}$\footnote{complete bipartite graph with $2$ vertices on one side and $d$ vertices on the other} as a subgraph is the obstruction to lossless vertex expansion in Kahale's example. Kahale's example deviates from a random graph in that it is highly unlikely for a random graph to contain a copy of $K_{2,d}$ as a subgraph. More generally, random graphs have the property that with high probability any two ``short'' cycles are far apart, which Kahale's example doesn't satisfy. Thus, it is natural to ask if the ``near-Ramanujan'' property in conjunction with the ``separatedness of cycles'' property of random graphs breaks past the $d/2$ barrier of Kahale. The ``separatedness of cycles'' property is especially interesting to consider since it is a key property of random graphs exploited in proofs of Alon's conjecture \cite{Fri,Bor}. A concrete question we can ask is: \begin{displayquote} \emph{Do Ramanujan graphs with $\Omega(\log_{d-1} n)$ girth have lossless vertex expansion?} \end{displayquote} An affirmative answer to the above question would prove that the Ramanujan graphs of Lubotzky, Phillips, and Sarnak \cite{LPS} are lossless vertex expanders. Towards answering the above question, we prove the following negative result: \begin{theorem} \label{thm:main-negative} For every $d = p+1$ with $p$ prime, there is an infinite family of $d$-regular graphs $G$ on $n$ vertices of girth $\ge\left(\frac{2}{3}-o_n(1)\right)\log_{d-1} n$ in which there is a set of vertices $U$ such that $|\Gamma(U)|\le(d+1)|U|/2$, $|U|\le n^{1/3}$, and $\max\{\lambda_2(G),-\lambda_n(G)\} \le 2\sqrt{d-1}+O(1/{\log_{d-1} n})$.
\end{theorem} We also complement the above with a positive result, which can be summarized as ``small enough sets in Ramanujan graphs expand nearly losslessly'': \begin{theorem} \label{thm:main-positive} Let $G$ be a $d$-regular Ramanujan graph with girth $C\log_{d-1}n$. Then every set $S$ of size $\le n^{\kappa}$ for $\kappa < \frac{C}{4}$ has vertex expansion $(1-o_d(1))d$. \end{theorem} \subsection{Technical overview} We give a brief description of how \pref{thm:main-negative} and \pref{thm:main-positive} are proved. \paragraph{Overview of proof of \pref{thm:main-negative}.} Our proof is inspired by Kahale's. At a high level, Kahale embeds a copy of $K_{2,d}$ within a Ramanujan graph. We proceed similarly, but instead of embedding a $K_{2,d}$, we embed a single subgraph $H$ that has high girth but is a lossy vertex expander, and show that if $H$ has size $n^\alpha$ for some $0 < \alpha\leq 1/3$, the overall graph is still near-Ramanujan. Our proof involves two steps: the first is to prove that the subgraph $H$ being embedded has spectral radius bounded by $2\sqrt{d-1}$, and the second is to prove that planting $H$ within a Ramanujan graph results in a near-Ramanujan graph. For the first step, we describe an infinite graph containing $H$ and bound its spectral radius via a trace moment method. The trace moment method involves bounding the number of closed walks satisfying certain properties within a graph, and is inspired by an encoding argument from Bordenave's proof of Friedman's theorem \cite{Bor}. The second step consists in proving that our method of embedding a copy of $H$ within a Ramanujan graph does not perturb the eigenvalues by a large amount. Towards doing so, we use the fact that the spectral radius of $H$ is bounded by $2\sqrt{d-1}$, in conjunction with Kahale's argument about dispersion of eigenvalues in high-girth graphs.
\paragraph{Overview of proof of \pref{thm:main-positive}.} We first prove that if a set $S$ in a Ramanujan graph has ``lossy'' vertex expansion, then we can construct a graph $H$ on vertex set $S$ such that (i) the girth of $H$ is at least half the girth of $G$, and (ii) the average degree of $H$ is ``high'' (in particular, the worse the vertex expansion of $S$, the higher the average degree of $H$). We then employ the irregular Moore bound, which gives a quantitative tradeoff between the average degree of a graph and its girth. In particular, this implies that a Ramanujan graph with ``lossy'' vertex expansion necessarily must have ``low'' girth. \subsection{Related work} \paragraph{Applications of vertex expanders.} There are many applications of expander graphs where having vertex expansion is particularly useful. For example, lossless expanders are of particular interest in the field of error correcting codes \cite{LMSS,SipSpiel,Spiel}. Lossless vertex expanders give linear error correcting codes that are decodable in linear time \cite{SipSpiel}. Guruswami, Lee and Razborov \cite{GLR} use bipartite vertex expanders to construct large subspaces of $\R^n$ where all vectors $x$ in the subspace satisfy $(\log n)^{-O(\log \log \log n)}||x||_2\leq ||x||_1\leq \sqrt n||x||_2$. \paragraph{Explicit constructions.} The constructions of Ramanujan graphs of \cite{LPS,Mar88,Mor}, of all degrees of the form $p^r+1$ for $p$ prime, as well as the construction of near-Ramanujan graphs of every degree of \cite{MOP}, have vertex expansion $\sim\frac{d}{2}$ just by virtue of being Ramanujan, via Kahale's result. In fact, no deterministic construction has improved upon the $d/2$ bound obtained from spectral information alone. In a remarkable work, Capalbo et al.
\cite{CRVW} exhibited an explicit construction of a bipartite graph where subsets of one side of the bipartite graph expand losslessly to the other, using a zig-zag product, so that the losslessness of a small, random-like graph boosts the expansion of a large, potentially lossy vertex-expanding graph. \paragraph{Quantum Ergodicity.} Quantum ergodicity is another area where both local and global properties of random-like graphs are used. In particular, Anantharaman and Le Masson \cite{AL} proved that graphs that have few short cycles (and are therefore close to high girth) and spectral expansion are quantum ergodic, which in this context means that the eigenvectors are equidistributed across vertices. Anantharaman, as well as Brooks, Le Masson, and Lindenstrauss, exhibited alternative proofs \cite{Anth, BLL}. The proof from \cite{BLL} shows that quantum ergodicity is equivalent to the mixing of a certain graphical operator. They then use high girth to show that this is equivalent to showing mixing on the infinite tree, and then expansion to show that the nonbacktracking operator mixes on the tree. \paragraph{Eigenvector delocalization.} Ganguly and Srivastava, and later Alon, Ganguly and Srivastava \cite{GS,AGS}, give a perturbation of the LPS graph similar to Kahale's argument, but instead of individual vertices, two trees are added and connected to the graph. By making the tree sufficiently deep and carefully connecting it to the rest of the graph, the authors create a graph that has high girth but contains localized eigenvectors. These graphs are also lossy vertex expanders. However, they show that these graphs cannot be Ramanujan, but rather have spectral radius at least $(2+c)\sqrt{d-1}$, where $c>0$ is a constant. Alon \cite{Alon} used eigenvector delocalization to create near-Ramanujan expanders of every degree by perturbing known constructions of Ramanujan or near-Ramanujan graphs.
Paredes \cite{PP} used similar techniques to remove short cycles in a graph while preserving expansion and used this to algorithmically construct graphs that are near-Ramanujan and also have girth $\Omega(\sqrt{\log n})$. \paragraph{Complexity of constraint satisfaction problems.} Proofs that it is hard for even linear-degree Sum-of-Squares to refute random 3XOR and 3SAT instances on $n$ variables \cite{Gri,Sch} rely on lossless vertex expansion of some sets in a graph underlying a random instance, which suggests a connection between deterministic algorithms for constructing lossless vertex expanders and algorithms for explicit hard instances for Sum-of-Squares. \section{Preliminaries} \subsection{Elementary graph theory} \begin{definition} The \textit{girth} $g(G)$ of a graph $G$ is the length of a shortest cycle in $G$. \end{definition} \begin{definition} For $G=(V,E)$, the \textit{valency} of $a\in V$ to $B\subset V$ is $|\Gamma(a)\cap B|$, where $\Gamma(S)$ for $S\subset V$ is the set of neighbors of $S$ in $G$. \end{definition} \begin{definition} The \textit{ball} of radius $h$ around a set $U\subset V$, denoted $\textnormal{Ball}_h(U)$, is the set of vertices of distance at most $h$ from $U$. \end{definition} \begin{definition} The \textit{vertex expansion} of a set $U\subset V$ is \[ \Psi(U):=\frac{|\Gamma(U)|}{|U|}. \] Similarly, the $\epsilon$-vertex expansion of a graph $G$ is: \[ \Psi_{\epsilon}(G)=\min_{|U|\leq \epsilon|V|}\Psi(U) \] where $U$ ranges over subsets of $V$, and $\epsilon$ is an arbitrary constant. \end{definition} \begin{definition} Given a graph $G$, we use $A_G$ to denote its adjacency matrix. When $G$ is a finite graph on $n$ vertices, the eigenvalues of $A_G$ can be ordered as $\lambda_1(G)\ge\lambda_2(G)\ge\dots\ge\lambda_n(G)$.
\end{definition} \begin{definition} We use $B_G$ to denote the \emph{nonbacktracking matrix} of a graph $G$, which is a matrix with rows and columns indexed by directed edges of $G$ defined as follows: \[ B[(u,v),(w,x)] = \begin{cases} 1 &\text{if $v = w$ and $u \ne x$}\\ 0 &\text{otherwise.} \end{cases} \] \end{definition} \begin{definition} The \emph{spectral expansion} of a finite graph $G$, denoted $\lambda(G)$, is defined as $\max\{\lambda_2(G), -\lambda_n(G)\}$, which can equivalently be described as the ``second largest absolute eigenvalue''. \end{definition} We now state the following standard fact known as the \emph{expander mixing lemma} (see \cite[Lemma 2.5]{HLW}). \begin{lemma}[Expander Mixing Lemma] \label{lem:expander-mixing-lemma} Let $G$ be a $d$-regular graph on $n$ vertices. For any two subsets of vertices, $S,T\subseteq V(G)$, let $e(S,T)$ be the number of pairs of vertices $(x,y)$ such that $x\in S, y\in T$ and $\{x,y\}$ is an edge in $G$. Then: \[ \left|e(S,T) - \frac{d}{n}|S|\cdot|T|\right| \le \lambda(G)\sqrt{|S|\cdot|T|}. \] \end{lemma} And finally, we state the ``irregular Moore bound'' of \cite{AHL}, which articulates a tradeoff between the average degree of a graph and its girth. \begin{lemma} \label{lem:irreg-moore-bound} Let $G$ be an $n$-vertex graph with average degree $d$. Then \[ g(G) \le 2\log_{d-1} n + 2. \] \end{lemma} \subsection{Operator theory} In this section, let $V$ be a countable set and $T:\ell_2(V)\to\ell_2(V)$ be a bounded linear operator. \begin{definition} The \emph{spectrum} of $T$, which we denote $\mathsf{spec}(T)$, is the set of all $\lambda\in\C$ such that $\lambda\Id-T$ is not invertible. \end{definition} \begin{definition} The \emph{spectral radius} of $T$, which we denote $\rho(T)$, is defined as $\sup\{|\lambda|:\lambda\in\mathsf{spec}(T)\}$.
\end{definition} \begin{fact} The \emph{operator norm} of $T$, which we write as $\|T\|$, is equal to $\sqrt{\rho(TT^*)}$ where $T^*$ is the adjoint of $T$.\footnote{Since $\ell_2(V)$ comes equipped with the inner product $\langle f, g\rangle \coloneqq \sum_{v\in V}f(v)g(v)$, $T^*$ is simply the ``transpose'' of $T$.} \end{fact} \begin{fact} $\rho(T) = \lim_{\ell\to\infty} \|T^\ell\|^{1/\ell}$. \end{fact} \begin{fact}[{Consequence of \cite[Theorem 6]{Que96}}] \label{fact:specrad-max-basis} Suppose $T$ is a self-adjoint operator, and $\Phi$ is a basis of $\ell_2(V)$. Then: \[ \rho(T) = \sup_{\phi\in\Phi}\limsup_{k\to\infty}|\langle \phi, T^k\phi\rangle|^{1/k}. \] \end{fact} \begin{fact} Let $A$ be any principal submatrix of $T$. Then $\rho(A)\le\rho(T)$. \end{fact} \begin{corollary} \label{cor:subgraph-less-specrad} If $H$ is a subgraph of a (possibly infinite) graph $G$, then $\rho(A_H)\le\rho(A_G)$. \end{corollary} \section{Infinite trees hanging from a biregular graph} Let $H$ be any $(2,d-1)$-biregular graph where the partition with degree-$(d-1)$ vertices is called $U$ and the partition with degree-$2$ vertices is called $V$. Let $X$ be the infinite graph constructed from $H$ in the following way: \begin{displayquote} At every vertex in $U$, the $(d-1)$-regular partition, glue an infinite tree where the root has degree $1$ and the remaining vertices have degree $d$. At every vertex in $V$, the $2$-regular partition, glue an infinite tree where the root has degree $d-2$ and every other vertex has degree $d$. \end{displayquote} Note that $X$ is a $d$-regular infinite graph. The main result of this section is: \begin{lemma} \label{lem:inf-graph-specrad} $\rho(A_X)\le 2\sqrt{d-1}$. \end{lemma} To prove \pref{lem:inf-graph-specrad}, we instead turn our attention to the nonbacktracking matrix of $X$, called $B_X$.
In particular, we bound $\rho(B_X)$ and then employ the Ihara--Bass formula of \cite{AFH} for infinite graphs to translate the bound on $\rho(B_X)$ into a bound on $\rho(A_X)$. Thus, we first prove: \begin{lemma} \label{lem:nb-matrix-bound} $\rho(B_X)\le\sqrt{d-1}$. \end{lemma} We use the following version of the Ihara--Bass formula of \cite{AFH} for infinite graphs. \begin{theorem} \label{thm:inf-IB} Let $G$ be a (possibly infinite) graph. Then \[ \mathsf{spec}(B_G) = \{\pm 1\}\cup\{\lambda:(D_G-\Id)-\lambda A_G + \lambda^2\Id\text{ is not invertible}\}. \] \end{theorem} An immediate corollary that we will use is: \begin{corollary} \label{cor:nb-bound-to-adj} Let $G$ be a $d$-regular graph. Then $\rho(B_G)\le\sqrt{d-1}$ implies that $\rho(A_G)\le 2\sqrt{d-1}$. \end{corollary} \begin{proof} If there is $\mu$ in $\mathsf{spec}(A_G)$ such that $|\mu|>2\sqrt{d-1}$, then $\mu\Id - A_G$ is not invertible. Consequently, by \pref{thm:inf-IB}, $\lambda = \frac{\mu+\mathrm{sgn}(\mu)\sqrt{\mu^2-4(d-1)}}{2}$, which has absolute value greater than $\sqrt{d-1}$, is in $\mathsf{spec}(B_G)$. \end{proof} In light of \pref{cor:nb-bound-to-adj}, we see that \pref{lem:nb-matrix-bound} implies \pref{lem:inf-graph-specrad}. Towards proving \pref{lem:nb-matrix-bound}, we first make a definition. \begin{definition} We call a walk $W$ an $(a\times b)$-linkage if it can be split into $a$ segments, each of which is a length-$b$ nonbacktracking walk. \end{definition} \begin{proof}[Proof of \pref{lem:nb-matrix-bound}.]
The first ingredient in the proof is the fact that for any $\ell \ge 0$, \[ \rho(B_X)^\ell \le \|B_X^\ell\| \] and thus \[ \rho(B_X) \le \limsup_{\ell\to\infty} \|B_X^{\ell}\|^{1/\ell}. \] Since $\|B_X^\ell\| = \sqrt{\|B_X^\ell(B_X^*)^\ell\|} = \sqrt{\rho(B_X^\ell(B_X^*)^\ell)}$, it suffices to bound $\rho(T)$ where $T := B_X^\ell(B_X^*)^\ell$ is a bounded self-adjoint operator, and hence by \pref{fact:specrad-max-basis}: \[ \rho(T) = \max_{uv\in \vec{E}(X)} \limsup_{k\to\infty}|\langle 1_{uv}, T^{k}1_{uv}\rangle|^{1/k}. \] The quantity $\langle 1_{uv}, T^k 1_{uv}\rangle$ is bounded by the number of $(2k\times(\ell+1))$-linkages that start and end at vertex $u$, which we can bound via an encoding argument. In particular, we will give an algorithm to uniquely encode such linkages and bound the total number of possible encodings. \paragraph{Encoding linkages.} Each length-$(\ell+1)$ nonbacktracking segment can be broken into $3$ consecutive phases (of which some can possibly be empty): the phase where distance to $H$ decreases on each step (Phase 1), the second phase where distance to $H$ does not change on each step (Phase 2), and the third phase where distance to $H$ increases on each step (Phase 3). We further break the third phase into two (possibly empty) subphases --- the first subphase where the distance to $u$ decreases on each step (Phase 3a), and the second subphase where the distance to $u$ increases on each step (Phase 3b). To encode the linkage, for each length-$(\ell+1)$ nonbacktracking segment we specify four numbers denoting the lengths of Phases $1$, $2$, $3$a, and $3$b. Note that Phase $2$ is nonempty only if it is contained in $H$. For each step $ab$ in Phase $2$ that goes from $U$ (the $(d-1)$-regular partition) to $V$ (the $2$-regular partition) we specify a number $i$ in $[d-1]$ such that $b$ is the $i$th neighbor of $a$ within $H$.
If the first step $ab$ in Phase $2$ is from $V$ to $U$, we specify a number in $[2]$ denoting if $b$ is the first or second neighbor of $a$. For each step $ab$ in Phase $3$b we specify a number $i$ in $[d-1]$ such that $b$ is the $i$th neighbor of $a$ that does not lie in the path between $u$ and $H$. \paragraph{Recovering linkages from encodings.} We recover a linkage from its encoding ``segment-by-segment''. Suppose the first $t$ segments have been recovered; we show how to recover the $(t+1)$-th segment. Let $x$ be the vertex the walk is at after it has traversed the first $t$ segments. The steps taken in Phase $1$ can be recovered from the length of the phase, since there is a unique path from any vertex to $H$. The steps in Phase $2$ alternate between stepping from $V$ to $U$ and from $U$ to $V$. It is easy to recover the first step of Phase $2$ as well as any step from $U$ to $V$; a step $ab$ from $V$ to $U$ that is not the first step of Phase $2$ is uniquely determined by the previous step, since $a$ has $2$ neighbors in $U$ and by the nonbacktracking nature of the walk there is only one choice for $b$. Note that Phase 3a is nonempty only if $u$ is not in $H$ and all the steps are contained in the same branch as $u$. Since there is a unique shortest path between the start vertex of Phase 3a and $u$, the steps taken in Phase 3a can be recovered from its length. Finally, it is easy to recover the steps taken in Phase 3b since they are explicitly given in the encoding. \paragraph{Counting encodings.} Now we turn our attention to bounding the total number of encodings. For given $\alpha,\beta \ge 0$ such that $\alpha+\beta = 2k(\ell+1)$, we first bound the number of walks such that $\alpha$ steps occur in Phase $2$ (i.e. are within $H$) and $\beta$ steps occur outside Phase $2$ (i.e. are outside $H$). Let $v_1,v_2,\dots, v_{2k(\ell+1)}$ be the sequence of vertices visited by the walk in order.
Since $d(v_1,H) = d(v_{2k(\ell+1)},H)$, $|d(v_i,H)-d(v_{i+1},H)|\le 1$ always and $|d(v_i,H)-d(v_{i+1},H)|=0$ for every step in Phase $2$, the number of steps of the walk that occur in Phase $3$ of their respective segments is at most $\frac{\beta}{2}$. In particular, the number of steps that occur in Phase $3$b of their respective segments is bounded by $\frac{\beta}{2}$. The following bounds hold: \begin{itemize} \item The number of possible encodings of the lengths of phases is bounded by $(\ell+1)^{8k}$. \item The number of possible encodings of the first step of Phase $2$ of each segment is bounded by $2^{2k}$. \item The number of possible encodings of the list of $U$-to-$V$ steps in Phase $2$ is bounded by $(d-1)^{\frac{\alpha+1}{2}}$ because the steps taken in Phase 2 alternate between going from $V$ to $U$ and from $U$ to $V$. \item The number of possible encodings of the list of steps in Phase $3$b is bounded by $k(\ell+1)(d-1)^{\frac{\beta}{2}}$. \end{itemize} The above bounds combined with the fact that there are at most $2k(\ell+1)$ choices for $(\alpha,\beta)$ pairs gives a bound on the number of $(2k\times(\ell+1))$-linkages of \[ 2(k(\ell+1))^2(\ell+1)^{8k}2^{2k}(d-1)^{\frac{\alpha+1}{2}}(d-1)^{\frac{\beta}{2}} \le 2(k(\ell+1))^2(\ell+1)^{8k}2^{2k}\sqrt{d-1}^{2k(\ell+1)+1}. \] Thus, \[ \rho(T) \le \limsup_{k\to\infty} \left(2(k(\ell+1))^2(\ell+1)^{8k}2^{2k}\sqrt{d-1}^{2k(\ell+1)+1}\right)^{1/k} = 4(\ell+1)^8\sqrt{d-1}^{2(\ell+1)}. \] Consequently, \[ \rho(B_X) \le \limsup_{\ell\to\infty} \rho(T)^{1/2\ell} \le \limsup_{\ell\to\infty} \left(4(\ell+1)^8\sqrt{d-1}^{2(\ell+1)}\right)^{1/{2\ell}} = \sqrt{d-1}. \] \end{proof} \section{High-girth near-Ramanujan graphs with lossy vertex expansion} We will plant a high-girth graph with low spectral radius within a $d$-regular Ramanujan graph. We will show that such a construction is a spectral expander, but has low vertex expansion. By $u\sim_G v$, we mean that $u$ and $v$ are adjacent in the graph $G$.
We will write $u\sim v$ when the graph is clear from context. Consider a $(2,d-1)$-biregular bipartite graph $H=(U,V,E)$, with vertex components $U$ and $V$. $U$ is the degree-$(d-1)$ component and $V$ the degree-$2$ component. Therefore if we define $\gamma:=|U|$, requiring $\gamma$ to be even, then $|V|=(d-1)\gamma/2$. Call the vertices of $U$ and $V$ $\{u_1,\ldots,u_\gamma\}$ and $\{v_1,\ldots, v_{\gamma(d-1)/2}\}$, respectively. We connect $U$ and $V$ in such a way as to maximize the girth of $H$. \begin{lemma}\label{lem:hgirth} \[ g(H)\geq 2\log_{d-1}\gamma. \] \end{lemma} \begin{proof} Because of the valency conditions on $H$, there is a graph $\wt H$ on $\gamma$ vertices $\{\wt u_1,\ldots,\wt u_\gamma\}$, where $\wt u_i\sim_{\wt H} \wt u_j$ if and only if $\exists v_k\in H$ such that $u_i\sim_{H} v_k$ and $u_j\sim_H v_k$. Namely, $U$ corresponds to the vertex set of $\wt H$, and $V$ corresponds to the edge set. $\wt H$ is $(d-1)$-regular, and, as paths in $\wt H$ of length $r$ correspond to paths of length $2r$ in $H$, $g(H)=2g(\wt H)$. By a result of Linial and Simkin \cite{LS}, there exists a graph $\wt H$ that has girth at least $c\log_{d-2}\gamma$, for any $c\in (0,1)$, assuming $\gamma$ is even. Therefore by setting $c=\log(d-2)/\log(d-1)$, we have that $g(\wt H)\geq \log_{d-1}\gamma$ and $g(H)\geq 2\log_{d-1}\gamma$. \end{proof} We add a new set of vertices $Q=\{q_1,\ldots, q_\gamma\}$ and add a matching between $Q$ and $U$, adding the edge $q_iu_i$ for $1\leq i\leq \gamma$. Similarly, we add another set of vertices $R=\{r_{i,j}\}, 1\leq i \leq \gamma(d-1)/2,1\leq j\leq d-2$. For each $1\leq i\leq \gamma(d-1)/2$, we then add an edge from $v_i$ to each of $r_{i,j}$ for $1\leq j\leq (d-2)$. We call $H'$ the graph on $U\cup V \cup Q\cup R$. At this point vertices of $U$ and $V$ have degree $d$, and vertices of $Q$ and $R$ have degree $1$. Also, note $\Psi(U)=(d+1)/2$.
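The valency bookkeeping above can be sanity-checked on a small instance. The sketch below is our own illustration (not part of the construction): it builds $H'$ for $d=4$, $\gamma=4$, taking $\wt H=K_4$ as the $(d-1)$-regular graph; the girth-maximizing choice of $H$ is irrelevant for the degree counts and for $\Psi(U)$.

```python
from itertools import combinations
from collections import defaultdict

def build_h_prime(d=4, gamma=4):
    """Build H' for d=4, gamma=4, using K_4 as the (d-1)-regular graph
    tilde-H (girth is ignored; only the valencies matter here)."""
    adj = defaultdict(set)

    def link(a, b):
        adj[a].add(b)
        adj[b].add(a)

    U = [('u', i) for i in range(gamma)]
    # One V-vertex per edge of tilde-H = K_gamma, joined to its two endpoints.
    V = [('v', e) for e in combinations(range(gamma), 2)]
    for (_, (i, j)) in V:
        link(('v', (i, j)), ('u', i))
        link(('v', (i, j)), ('u', j))
    # Q: a perfect matching q_i -- u_i.
    for i in range(gamma):
        link(('q', i), ('u', i))
    # R: d-2 pendant vertices attached to each v in V.
    for (_, e) in V:
        for j in range(d - 2):
            link(('r', (e, j)), ('v', e))
    return adj, U, V

adj, U, V = build_h_prime()
gamma_U = set().union(*(adj[u] for u in U)) - set(U)  # Gamma(U) = V cup Q
```

Running the checks confirms that every vertex of $U$ and $V$ has degree $d$ and that $\Psi(U)=|\Gamma(U)|/|U|=(d+1)/2$.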
We wish to embed $H'$ into a larger, high girth expander, and show that this new graph maintains high girth and expansion, even though the set $U$ is a lossy vertex expander. Our argument follows that of \cite[Section 5]{Kah}, but instead of embedding individual vertices, we will embed $H'$. \begin{figure} \[\includegraphics[height=7cm]{hprime.png}\] \caption{$H'$, with labeled components for $d=4$, $\gamma=4$. Note that $\Psi(U)=(d+1)/2$. To create $G'$, we connect $Q$ and $R$ to a well spaced matching in $G$.} \label{fig:hprime} \end{figure} \begin{theorem} For every $d = p+1$ for prime $p\geq 3$, there is an infinite family of $d$-regular graphs $G_m=(V_m,E_m)$ on $m$ vertices, such that $\exists U_m\subset V_m$ with $\Psi(U_m)=(d+1)/2$ for $|U_m|\leq m^{1/3}$, $g(G_m)=(\frac23-o_m(1))\log_{d-1} m$, and such that $\lambda(G_m)\leq 2\sqrt{d-1}+O(1/{\log_{d-1} m})$. \end{theorem} \begin{proof} By the result of Lubotzky, Phillips and Sarnak \cite{LPS}, for such $d$, there exists an infinite family of $d$-regular graphs, where graphs of $n$ vertices have girth $(\frac43-o_n(1)) \log_{d-1}n$ and have spectral expansion $\leq 2\sqrt{d-1}$. For a given graph $G=(V,E)$ of this type of size $n$, we attach $H'$ by removing a matching $M\subset E$, $M=\{(a_{1,1},a_{1,2}),\ldots, (a_{k,1},a_{k,2})\}$ for \begin{equation}\label{eq:kbound} k:=\gamma(d-1)(2+(d-1)(d-2))/4. \end{equation} We take a matching such that the pairwise distance between edges in the matching is maximized in $G$. \begin{lemma}\label{lem:spacing} In a $d$-regular graph on $n$ vertices, there exists a matching $M$ of size $k$ such that for every pair of edges $(a_{i_1,1},a_{i_1,2}),(a_{i_2,1},a_{i_2,2})\in M$, $i_1\neq i_2$, \[ d((a_{i_1,1},a_{i_1,2}),(a_{i_2,1},a_{i_2,2}))\geq \log_{d-1} n-\log_{d-1}\gamma-O_n(1). 
\] \end{lemma} \begin{proof} For a given pair of adjacent vertices $(a_{i,1},a_{i,2})$, as our graph is $d$-regular, there are at most $1+d\frac{(d-1)^r-1}{d-2}$ vertices at distance at most $r$ from $a_{i,1}$, and at most $(d-1)^r$ vertices at distance $r$ from $a_{i,2}$ and distance $r+1$ from $a_{i,1}$. Therefore for any $d\geq 4$, the number of edges at distance at most $r$ from a given edge is less than $4(d-1)^r$. We then greedily add edges by choosing an arbitrary edge with vertices at distance at least $r$ away from all already chosen edges. A $k$th such edge will exist as long as $4k(d-1)^r\leq n$. For our $k$ given in (\ref{eq:kbound}) we can set $r=\log_{d-1} n-\log_{d-1}\gamma-O_n(1)$. \end{proof} To connect $H'$ to $G$, we first delete the matching $M$. Then for every vertex of $Q$ and $R$, we add $d-1$ edges to the set of vertices of $M$, connecting to each vertex of $M$ exactly once. Namely, the induced subgraph on $(Q\cup R)\cup M$ is a $(d-1,1)$-biregular bipartite graph. Call $G'=(V',E')$ the new graph formed from $G$ and $H'$. We wish to show that $G'$ retains high girth and remains a good spectral expander. For the girth of $G'$, cycles are either completely contained in $H'$, completely contained in $G$, or a mix between the two. Cycles in $H'$ have length at least $2\log_{d-1}\gamma$ by \pref{lem:hgirth}. Cycles in $G$ have length at least $(\frac43-o_n(1))\log_{d-1}n$ by the construction of \cite{LPS}. For cycles that are a mix of $H'$ and $G$, we must go from one vertex of $H'$ to another vertex of $H'$ through $G$. Therefore by \pref{lem:spacing}, the length of such a cycle is at least $\log_{d-1}n-\log_{d-1}\gamma-O_n(1)$, giving \[g(G')\geq \min\{2\log_{d-1}\gamma,\log_{d-1}n-\log_{d-1}\gamma-O_n(1)\}.\] To show that the spectrum is not adversely affected, we follow the argument of \cite[Theorem 5.2]{Kah}, with some adjustments. For our new graph, assume that there is an eigenvector $g\perp\bf1$ corresponding to an eigenvalue $|\mu|>2\sqrt{d-1}$.
Call $A$ the adjacency matrix of $G'$, and $A_G$ the adjacency matrix of $G$. Then we have \[ g^*Ag=g_G^*A_Gg_G+g_{H'}^*Ag_{H'}-2\sum_{i=1}^k g(a_{i,1})g(a_{i,2})+\sum_{\substack{u\in Q\cup R\\a_{i,j}\in M\\u\sim a_{i,j}}} g(u) g(a_{i,j}) \] where $g_G$ and $g_{H'}$ are the projections of $g$ onto $G$ and $H'$, respectively. We know that \[|g_G^*A_Gg_G|\leq 2\sqrt{d-1}||g_G||^2+\frac{d}{n}\left(\sum_{u\in G} g(u)\right)^2\] by decomposing $g$ into parts parallel and perpendicular to the all-ones vector. By a combination of \pref{lem:inf-graph-specrad} and \pref{cor:subgraph-less-specrad}, the spectral radius of $H'$ is at most $2\sqrt{d-1}$, and therefore we have \[ |g_G^*A_Gg_G|+|g_{H'}^*A g_{H'}|\leq 2\sqrt{d-1}||g||^2+\frac{d}{n}\left(\sum_{u\in H'} g(u)\right)^2 \] as $\sum_G g(u)=-\sum_{H'}g(u)$, since $g\perp \textbf 1$. To show that $|\mu|\le 2\sqrt{d-1}+O(1/\log n)$, we then need to show \begin{align} \frac{1}{\|g\|^2}\left(\frac{d}{n}\left(\sum_{H'} g_{H'}(u)\right)^2-2\sum_{i=1}^k g(a_{i,1})g(a_{i,2})+\sum_{\substack{u\in Q\cup R\\a_{i,j}\in M\\u\sim a_{i,j}}} g(u)g(a_{i,j})\right) = O\left(\frac{1}{\log n}\right). \label{eq:small-interface} \end{align} The first term of \pref{eq:small-interface} can be bounded as \begin{equation} \frac dn\left(\sum_{H'} g_{H'}(u)\right)^2\leq \frac{d}{n}\left|H'\right|\|g_{H'}\|^2\leq \frac{\gamma(2+(d-1)(d-2))d}{2n}\|g_{H'}\|^2. \label{eq:first-term-above} \end{equation} The second term we can bound as \begin{equation} \left|2\sum_{i=1}^k g(a_{i,1})g(a_{i,2})\right|\leq \sum_{a_{i,j}\in M} g(a_{i,j})^2. \label{eq:second-term-above} \end{equation} Now we will bound the last term of \pref{eq:small-interface} using Cauchy--Schwarz. \begin{eqnarray} \left|\sum_{\substack{u\in Q\cup R\\a_{i,j}\in M\\u\sim a_{i,j}}} g(u)g(a_{i,j})\right| \leq \sqrt{(d-1)\sum_{u\in Q\cup R} g(u)^2}\sqrt{\sum_{a_{i,j}\in M} g(a_{i,j})^2}.
\label{eq:last-term-above} \end{eqnarray} We use the following lemma to bound the right hand sides of \pref{eq:second-term-above} and \pref{eq:last-term-above}. The lemma is a generalized version of \cite[Lemma 5.1]{Kah}. The result follows from the same proof, which we reproduce in the appendix for completeness. Here, for two vectors $a,b\in \R^n$, $a\leq b$ if $\forall i\in[n], a(i)\leq b(i)$. \begin{lemma}[Lemma 5.1 of \cite{Kah}] \label{lem:kahdisp} Consider a graph on a vertex set $W$, a subset $X$ of $W$, a positive integer $h$, and $s\in L^2(W)$. Let $X_i$ be the set of nodes at distance $i$ from $X$. Assume the following conditions hold: \begin{enumerate}[(1)] \item For $h-1\leq i,j\leq h$, all nodes in $X_i$ have the same number of neighbors in $X_j$. \item If $u\in X_{h-1}$ and $v\in X_{h}$ and $u\sim v$, then $s(u)/s(v)$ does not depend on the choices of $u$ and $v$. \item $s$ is nonnegative and $As\leq \mu s$ on $\textnormal{Ball}_{h-1}(X)$, where $\mu$ is a positive real number. \end{enumerate} Then for any $g\in L^2(W)$ such that $|Ag(u)|=\mu|g(u)|$ for $u\in \textnormal{Ball}_{h-1}(X)$, we have \begin{equation}\label{eq:kahlemma} \frac{\sum_{v\in X_h} g(v)^2}{\sum_{v\in X_h}s(v)^2}\geq \frac{\sum_{v\in X_{h-1}} g(v)^2}{\sum_{v\in X_{h-1}}s(v)^2}. \end{equation} \end{lemma} To use the lemma, we set $X_0=U\cup V$, and $h$ will vary from $2\leq h\leq \lfloor r/2\rfloor$. Assuming that the girth of $G'$ is at least $r$, the $\lfloor r/2\rfloor$ neighborhoods of each vertex do not overlap. Our test vector decays exponentially, with a small adjustment. \[ s(y)=\left\{ \begin{array}{cc} \frac{1}{(d-1)^{h/2}}& y\in X_{h,U} \\ \frac{2}{\sqrt{d-1}}-\frac{1}{(d-1)^{3/2}}& y\in X_{0,V}\\ \left(\frac{2}{d-2}-\frac{2}{(d-1)(d-2)}\right)\frac{1}{(d-1)^{(h-1)/2}}&y\in X_{h,V}, h\geq 1. \end{array}\right. \] For this assignment of values we have $As\leq (2\sqrt{d-1})s$. In fact, this inequality is sharp at all coordinates except for $y\in X_{1,V}$.
For this $s$, we have that $ \sum_{y\in X_h} s(y)^2 $ is constant for $h=1,\ldots, \lfloor r/2\rfloor$. Also, recall $Q\cup R=X_1$ and $M=X_2$. By \pref{lem:kahdisp}, as $g$ corresponds to an eigenvalue $|\mu|>2\sqrt{d-1}$, the mass on each of the first 2 layers of $X$ can only be at most $2/(r-2)$ of the total mass. Combining \pref{eq:first-term-above}, \pref{eq:second-term-above}, and \pref{eq:last-term-above}, we can bound \pref{eq:small-interface} as \begin{eqnarray*} \pref{eq:small-interface} &\leq& \frac{\gamma(2+(d-1)(d-2))d}{2n}\|g_{H'}\|^2+\sum_{a\in X_2} g(a)^2+\sqrt{(d-1)\sum_{u\in X_1} g(u)^2}\sqrt{\sum_{a\in X_2} g(a)^2}\\ &\leq& \left(\frac{\gamma(2+(d-1)(d-2))d}{2n}+(1+\sqrt{d-1})\frac{2}{r-2}\right)\|g\|^2. \end{eqnarray*} If we set $\gamma=n^{1/3}$ and $r=\frac{2}3\log_{d-1} n-O_n(1)$, for fixed $d$ this becomes \[ O\left(\frac{1}{\log n}\right)\|g\|^2, \] meaning that $\mu\leq 2\sqrt{d-1}+O(1/\log n)$. This also gives the desired bounds on vertex expansion and girth, by setting $U=U_m$. Because $|V'|=(1+o_n(1))n$, the bounds on $\Psi(U_m)$, $g(G')$ and $\lambda(G')$ given in terms of $n$ do not change when they are given in terms of $m$. \end{proof} \section{Lossless expansion of small sets} In this section, we prove that sufficiently small sets in a high-girth spectral expander expand losslessly. \begin{theorem} Let $G$ be a $d$-regular graph on $n$ vertices with girth at least $2\alpha\log_{d-1} n + 4$. Then for any set $S$ with $n^{\kappa}$ vertices, \[ \frac{|\partial S|}{|S|} \ge d - \lambda(G) - \frac{d^{2\kappa/\alpha}+1}{2} - \frac{d}{n^{1-\kappa}}. \] \end{theorem} \begin{proof} Let $S$ be a set of vertices of size $n^{\kappa}$ in $G$. Let $e_S$ denote the number of internal edges within $S$. Let $n_i$ denote the number of vertices in $\partial S$ that have $i$ edges from $S$ incident to them. Then: $|\partial S| = n_1 + n_2 + \dots + n_d$ and $|E(S,\partial S)| = n_1 + 2n_2 + \dots + dn_d$.
Note that $|E(S,\partial S)|$ is also equal to $d|S|-2e_S$. Now consider the graph $H(S)$ on vertex set $S$ and edge set given by the induced edges on $S$, along with new edges introduced by adding, for every vertex in $\partial S$ with exactly $i$ neighbors in $S$, an arbitrary spanning tree on those $i$ neighbors. The number of edges in $H(S)$ is equal to \[ e_S + n_2 + 2n_3 + \dots + (d-1)n_d = e_S + |E(S,\partial{S})| - |\partial S| = d|S| - e_S - |\partial S| \] and $g(H(S))\ge \frac{1}{2}g(G)\ge\alpha\log_{d-1} n + 2$. As a consequence of the expander mixing lemma (\pref{lem:expander-mixing-lemma}), \[ e_S \le \left(\lambda(G) + \frac{d|S|}{n}\right)|S|. \] Consequently, \[ |E(H(S))| \ge \left(d-\lambda(G)-\frac{d|S|}{n}\right)|S| - |\partial S|, \] which means the average degree is lower bounded by \[ 2\left(d-\lambda(G)-\frac{d|S|}{n}-\frac{|\partial S|}{|S|}\right). \] Thus by the irregular Moore bound (\pref{lem:irreg-moore-bound}), \[ g(H(S))\le\frac{2\log n^{\kappa}}{\log \left(2\left(d-\lambda(G)-\frac{d|S|}{n}-\frac{|\partial S|}{|S|}\right)-1\right)} + 2 \] and hence \[ \frac{\alpha}{\log (d-1)} \le \frac{2\kappa}{\log \left(2\left(d-\lambda(G)-\frac{d|S|}{n}-\frac{|\partial S|}{|S|}\right)-1\right)}. \] This implies \[ d-\lambda(G)-\frac{d|S|}{n}-\frac{|\partial S|}{|S|} - \frac{1}{2} \le \frac{d^{2\kappa/\alpha}}{2}, \] and finally by rearranging the above and plugging in $|S| = n^{\kappa}$ \[ \frac{|\partial S|}{|S|} \ge d - \lambda(G) - \frac{d^{2\kappa/\alpha}+1}{2} - \frac{d}{n^{1-\kappa}}. \] \end{proof} \begin{remark} If $G$ is an $n$-vertex $d$-regular Ramanujan graph with girth $\frac{4}{3}\log_{d-1}n$ (which is a condition satisfied by the Ramanujan graphs of \cite{LPS}) then for every set $S$ of size $n^{\kappa}$ for $\kappa<1/3$, \[ \frac{|\partial S|}{|S|} \ge d(1 - o_d(1)).
\] \end{remark} \section*{Acknowledgements} We would like to thank Shirshendu Ganguly and Nikhil Srivastava for their highly valuable insights, intuition, and comments. \bibliographystyle{alpha}
\section{Introduction} Collective synchronization in large ensembles of self-sustained oscillators is a pervasive phenomenon in nature and technology \cite{Win80,PRK01,Izh07}. The first successful attempt to model collective synchronization is due to Winfree \cite{Win67}. Relying on his intuition he devised a model where the only degrees of freedom were the oscillators' phases, and the coupling was uniform and global (i.e.~mean-field type). In the numerical simulations a macroscopic cluster of synchronized oscillators emerged spontaneously when either the natural frequencies of the oscillators were {\color{black} narrowly} distributed or the coupling was large enough. In mathematical language, the phases in the Winfree model are governed by a set of $N$ ordinary differential equations ($i=1,\ldots,N$): \begin{subequations} \label{winfree} \begin{eqnarray} \dot\theta_i=\omega_i+ {\tilde Q}(\theta_i)\, A, \label{wina}\\ A= \frac{\epsilon}{N} \sum_{j=1}^N P(\theta_j). \label{winb} \end{eqnarray} \end{subequations} Here $\omega_i$ is the natural frequency of the $i$-th oscillator, and $\epsilon>0$ is a parameter controlling the coupling strength. The $2\pi$-periodic function $P$ specifies the pulse shape. The function $\tilde Q$ is also $2\pi$-periodic and is either called infinitesimal (or linear) phase-response curve (iPRC), or sensitivity function \cite{Win80,Kur84,Izh07}. As already mentioned, the Winfree model relies on two assumptions: weak coupling and all-to-all geometry. Weak coupling permits, first of all, ignoring the oscillators' amplitudes: the limit cycles are strongly attracting compared to perturbations causing amplitudes to be strongly damped degrees of freedom. In addition, the effect of the mean field $A$ on the phase is exactly proportional to $A$ ---higher powers of $A$ are absent in Eq.~\eqref{wina}---, {\color{black} which} only holds in the limit of asymptotically small interactions \cite{Win80,Kur84,Izh07,sacre14,pietras19}. 
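For readers who wish to experiment, the dynamics \eqref{winfree} can be integrated directly. The following sketch is our own minimal illustration, not part of the model's analysis: it assumes the hypothetical choices $\tilde Q(\theta)=-\sin\theta$ and $P(\theta)=1+\cos\theta$, samples Lorentzian natural frequencies by inverse-transform sampling, uses plain Euler stepping, and reports the Kuramoto order parameter as a crude measure of coherence.

```python
import math, random

def simulate_winfree(N=100, eps=0.3, dt=0.01, steps=2000, seed=0,
                     omega0=1.0, delta=0.05):
    """Euler integration of the Winfree model with the illustrative
    (hypothetical) choices Q(theta) = -sin(theta), P(theta) = 1 + cos(theta)."""
    rng = random.Random(seed)
    # Lorentzian (Cauchy) natural frequencies via inverse-CDF sampling.
    omega = [omega0 + delta * math.tan(math.pi * (rng.random() - 0.5))
             for _ in range(N)]
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(N)]
    for _ in range(steps):
        # Mean field A = (eps/N) * sum_j P(theta_j).
        A = (eps / N) * sum(1.0 + math.cos(t) for t in theta)
        theta = [t + dt * (w - math.sin(t) * A) for t, w in zip(theta, omega)]
    # Kuramoto order parameter R in [0, 1] as a coherence measure.
    C = sum(math.cos(t) for t in theta) / N
    S = sum(math.sin(t) for t in theta) / N
    return math.hypot(C, S)
```

No claim is made here about the bifurcation structure; the sketch only shows how cheaply the mean-field dynamics can be explored numerically.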
In this work we generalize the Winfree model by considering nonlinear interactions. Mathematical tractability imposes certain restrictions on the distribution of the natural frequencies and on the class of ``non-infinitesimal'' (also called ``finite'' or ``non-linear'') PRCs, but we believe it is remarkable that such analytic solutions exist. Even if limited, this progress should be welcome given the relevance of PRC theory in theoretical neuroscience \cite{ET10,Boergers}, and recent experiments evidencing the {\color{black} insufficiency} of the linear approximation \cite{rode19,totz}. Our analysis is based on the so-called ``Ott-Antonsen (OA) theory'', which assumes a certain ansatz (the Poisson kernel) for the density of the phases in the thermodynamic limit ($N\to\infty$). The OA ansatz was initially applied to the Kuramoto model and its variants \cite{OA08,OA09}, but eventually found application in several systems of pulse-coupled oscillators: the original Winfree model \cite{PM14,gallego17} (and a variant with heterogeneous iPRCs \cite{PMG19}), ensembles of theta neurons \cite{LBS13,laing14,SLB14}, quadratic integrate-and-fire neurons \cite{MPR15,PM16,RP16}, and excitable active rotators \cite{okeeffe16,roulet16}. \section{Winfree model with non-infinitesimal PRC} We consider a modification of the Winfree model \eqref{winfree}, in which Eq.~\eqref{wina} is replaced by \begin{equation} \dot\theta_i=\omega_i+ Q(\theta_i,A),\qquad i=1,\ldots,N, \label{model} \end{equation} where $A$ is the mean field defined by \eqref{winb}. At the lowest order in $A$, the model \eqref{model} converges to the Winfree model \eqref{winfree}: $dQ(\theta_i,A)/dA|_{A=0}=\tilde Q(\theta_i)$. {\color{black} Assuming $Q(\theta,A)$ linear in $A$ is equivalent to approximating the isochrons of a limit cycle by straight lines (or hyperplanes if the dimensionality is larger than two) in the phase reduction procedure \cite{Kur84,Izh07}.
} \subsection{Non-infinitesimal PRC} Prior to specifying the PRC $Q$, we devote a few lines to the iPRCs. Traditionally, iPRCs are classified as type I or type II \cite{hansel95}. For type II, either an advance or a delay in the phase is possible depending upon the timing of the perturbation, while in the case of type I the timing of the perturbation does not change the sign of the phase shift. The canonical examples of each type \cite{Izh07,sacre14} are $\tilde Q(\theta)\propto1-\cos\theta$ for type I (e.g.~the theta neuron), and $\tilde Q(\theta)\propto\sin\theta$ for type II (e.g.~the Stuart-Landau oscillator). For non-infinitesimal PRCs, the previous classification falls short, as the character of $Q$ may change with the strength of the stimulus \cite{Izh07,sacre14}. The types of PRC we consider are conditioned by the applicability of the OA ansatz, as it enables a drastic dimensionality reduction. The OA ansatz imposes that no harmonics in $\theta$ beyond the first one are present in $Q(\theta,A)$. Still, the family of PRCs with only the first harmonic in $\theta$ is wide enough to make the problem nontrivial. As we shall adopt pulses $P(\theta)$ with peak value at $\theta=0$ (and multiples of $2\pi$), we impose the additional constraint $Q(0,A)=0$, motivated by the fact that the PRC vanishes at spiking/flashing times for most neurons \cite{reyes93,netoff05} and certain fireflies \cite{buck88,hanson78}. Therefore, we restrict to a family of PRCs of the form: \begin{equation} Q(\theta,A)=f_1(A) (1-\cos\theta) - f_2(A) \sin\theta, \label{PRC} \end{equation} where $f_1$ and $f_2$ are arbitrary functions of $A$, provided that $f_{1,2}(0)=0$ for obvious physical reasons. In analogy with the classification of iPRCs, we refer to the two terms in \eqref{PRC}, proportional to $(1-\cos\theta)$ and $\sin\theta$, as the type-I and the type-II components of the PRC, respectively.
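The constraints on the PRC family \eqref{PRC} are easy to verify mechanically. The sketch below is our own illustration: the amplitude functions $f_1$, $f_2$ are hypothetical choices (the text leaves them arbitrary), picked only to satisfy $f_{1,2}(0)=0$.

```python
import math

# Illustrative (hypothetical) amplitude functions with f1(0) = f2(0) = 0,
# as required below the PRC family; the text leaves f1, f2 arbitrary.
def f1(A):
    return A

def f2(A):
    return A + 0.5 * A ** 2

def Q(theta, A):
    """PRC family Q(theta, A) = f1(A)(1 - cos theta) - f2(A) sin theta:
    a type-I component plus a type-II component."""
    return f1(A) * (1.0 - math.cos(theta)) - f2(A) * math.sin(theta)
```

The checks confirm the constraint $Q(0,A)=0$ for all $A$, the vanishing of $Q$ at zero stimulus, and that at $\theta=\pi$ only the type-I component survives.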
\subsection{Pulse shape} In the study of the classical Winfree model several pulse shapes can be considered, see \cite{gallego17}. In this work, we adopt a ``rectified Poisson kernel''\cite{gallego17}: \begin{equation} P(\theta)=\frac{(1-r)(1+\cos\theta)}{1-2r\cos\theta +r^2} . \label{eq:pulse} \end{equation} This is a particularly convenient shape for the theoretical analysis below. $P(\theta)$ is a symmetric unimodal function in the interval $[-\pi,\pi]$ (with the normalization $\int_{-\pi}^{\pi} P(\theta) d\theta=2\pi$) that peaks at $\theta=0$ and vanishes at $\theta=\pm\pi$. Parameter $r$ is a real number allowing a continuous interpolation between a flat pulse for $r=-1$ and a Dirac-delta pulse, $P(\theta)=2\pi\delta(\theta)$, for $r=1$. In Fig.~\ref{fig:pulse} the pulse function $P(\theta)$ is depicted for three different values of $r$. \begin{figure} \centerline{\includegraphics[width=75mm,clip=true]{fig1.pdf}} \caption{Rectified-Poisson pulse \eqref{eq:pulse} for three values of parameter $r$.} \label{fig:pulse} \end{figure} \subsection{Natural frequencies} For the sake of achieving the maximal dimensionality reduction, we assume the natural frequencies to be distributed according to a Lorentzian distribution of half-width $\Delta$ centered at $\omega_0$: \begin{equation} g(\omega)= \frac{\Delta/\pi}{(\omega-\omega_0)^2+\Delta^2}. \label{lorentzian} \end{equation} \section{Ott-Antonsen theory} Once the building blocks of the model have been introduced, we apply the OA theory~\cite{OA08}. In this way we derive a complex-valued ODE reproducing the long-time evolution of the model at the macroscopic level. As the procedure is standard\cite{OA08,gallego17}, the readers interested in the final result are pointed to Eqs.~\eqref{eq:Z} and \eqref{hZ}. 
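Before proceeding, the building blocks introduced above can be checked numerically. The pulse \eqref{eq:pulse} is normalized to $2\pi$, vanishes at $\theta=\pm\pi$, and becomes flat as $r\to-1$; a short Python sketch:

```python
import numpy as np

def pulse(theta, r):
    # rectified Poisson kernel, Eq. (eq:pulse)
    return (1.0 - r) * (1.0 + np.cos(theta)) / (1.0 - 2.0 * r * np.cos(theta) + r**2)

theta = np.linspace(-np.pi, np.pi, 200001)
dth = theta[1] - theta[0]
for r in (-0.9, 0.0, 0.5, 0.9):
    P = pulse(theta, r)
    norm = np.sum((P[:-1] + P[1:]) / 2.0) * dth   # trapezoid rule
    assert abs(norm - 2.0 * np.pi) < 1e-3          # normalized to 2*pi
    assert abs(pulse(np.pi, r)) < 1e-12            # vanishes at theta = +/- pi
assert abs(pulse(0.3, -1.0) - 1.0) < 1e-12         # flat pulse in the r -> -1 limit
```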
First of all, one must realize that our model \eqref{model} belongs to a general class of oscillator systems of the form \begin{gather}\label{eq:winsys} \dot\theta_i(t)=\omega_i+B(t)+\mathrm{Im}\left[H(t)e^{-i\theta_i(t)}\right] , \end{gather} which can be analyzed with the OA ansatz \cite{OA08,OA09,OHA11,PD16}. Functions $B$ and $H$ may depend explicitly on time or indirectly through a mean field. For the PRC \eqref{PRC} we have \begin{equation}\label{BH} B(t)=f_1(A), \qquad H(t)=f_2(A)-if_1(A). \end{equation} In the thermodynamic limit we can define a phase density $F(\theta|\omega,t)$, such that $F(\theta|\omega,t)d\theta$ is the fraction of oscillators of frequency $\omega$ at time $t$, with phases in the interval $[\theta,\theta+d\theta]$. It is convenient to introduce the Fourier expansion of the density \begin{gather} F(\theta|\omega,t)=\sum_{m=-\infty}^\infty\alpha_m(\omega,t)e^{im\theta} ,\nonumber \end{gather} with $\alpha_{-m}=\alpha_m^*$. We notice as well that, by conservation of the number {\color{black} of} oscillators, $F$ satisfies the continuity equation: $\partial_tF+\partial_\theta(F\dot\theta)=0$, where $\dot\theta$ is the speed of an oscillator of natural frequency $\omega$. Inserting the Fourier series of $F$ into the continuity equation we get: \begin{equation}\label{am} \partial_t\alpha_m(\omega,t) =-im(\omega+B)\alpha_m+ \frac{m}{2}\left(H^*\alpha_{m-1}-H\alpha_{m+1}\right). \end{equation} A particular solution of this equation, the OA ansatz, is obtained equating the coefficient of $m$-th mode to the $m$-th power of the first mode: $\alpha_m=\alpha_1^m$. Hence, for the solution in this so-called OA manifold, we only need to consider the evolution of $\alpha_1\equiv\alpha$: \begin{equation}\label{eq:alpha} \partial_t\alpha(\omega,t) =-i(\omega+B)\alpha+ \frac{1}{2}\left(H^*-H\alpha^2\right). \end{equation} This is still an infinite set of coupled integro-differential equations. 
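That the ansatz indeed solves the full hierarchy can be verified in one line: substituting $\alpha_m=\alpha_1^m$ and using \eqref{eq:alpha},

```latex
\partial_t \alpha_1^m
 = m\,\alpha_1^{m-1}\,\partial_t\alpha_1
 = -im(\omega+B)\,\alpha_1^{m}
   + \frac{m}{2}\left(H^{*}\alpha_1^{m-1} - H\,\alpha_1^{m+1}\right),
```

which is exactly \eqref{am} once $\alpha_{m-1}=\alpha_1^{m-1}$ and $\alpha_{m+1}=\alpha_1^{m+1}$ are identified.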
A sharp reduction in the dimensionality of the problem is achieved for rational $g(\omega)$, and especially for the Lorentzian distribution \cite{OA08}. As the Kuramoto order parameter\cite{Kur75} $Z=\overline{e^{i\theta}}$ is related to $\alpha$ via $Z^*(t)=\int_{-\infty}^\infty \alpha(\omega,t) g(\omega) d\omega$, we can evaluate this integral by resorting to the {\color{black} residue} theorem, obtaining $Z^*(t)=\alpha(\omega_0-i\Delta,t)$. (This is the result of performing an analytic continuation of $\alpha$ from real to complex $\omega$, and evaluating $\alpha$ at the pole of $g(\omega)$ in the lower half $\omega$-plane.) Thus, setting $\omega=\omega_0-i\Delta$ in \eqref{eq:alpha}, we get a complex-valued ODE for the Kuramoto order parameter: \begin{equation}\label{eq:Z} \dot Z=(-\Delta+i\omega_0)Z-\frac{i}2f_1(A)(1-Z)^2+\frac12f_2(A)(1-Z^2), \end{equation} where $B$ and $H$ have been written in terms of $f_1$ and $f_2$ according to Eq.~\eqref{BH}. To close Eq.~\eqref{eq:Z} we need to express the mean field $A$ as a function of $Z$. For the pulse shape in Eq.~\eqref{eq:pulse} and a Lorentzian frequency distribution it can be proven (see \cite{gallego17} or the supplemental material of \cite{MP18}) that: \begin{equation}\label{hZ} A=\epsilon \, \mathrm{Re}\left(\frac{1+Z}{1-r \, Z}\right). \end{equation} {\color{black} Note that $0\le A \le A_{\mathrm{max}}$, where the maximal value $A_{\mathrm{max}}=2\epsilon/(1-r)$ is achieved if $Z=1$ (all oscillators exactly at $\theta_j=0$).} In addition to this, the central natural frequency $\omega_0$ is hereafter set to 1, as this can always be achieved by rescaling time and $f_{1,2}$. \section{Four illustrative PRC\lowercase{s}} Among the infinite set of functions $f_1(A)$ and $f_2(A)$ we selected a few illustrative case studies. In each of these cases the character of the PRC undergoes a crossover as $A$ grows: from one iPRC type to a different PRC type for large $A$.
We denote the limiting PRC at $A\to\infty$ as `asymptotic PRC' (aPRC). From now on, we apply the classical distinction between types I and II to both iPRCs and aPRCs. Recall that if the sign is the same for all $\theta$ we call the iPRC (or the aPRC) type I (implying $f_2=0$), while in the complementary case with $f_1=0$ we refer to the iPRC (or to the aPRC) as canonical type II, or simply type II. Notably, type II may either promote or impede synchronization depending on the sign of $f_2$. In turn, we distinguish between two subclasses of the type II: $\mathrm{II}_s$ ($f_2>0$) and $\mathrm{II}_r$ ($f_2<0$) corresponding to the synchronizing and repulsive interactions, respectively. \begin{table} \caption{\label{tabla} The four cases of non-infinitesimal PRCs analyzed in this paper. Functions $f_1$ and $f_2$ determine the PRC in \eqref{PRC} ($\sigma(A)$ is a crossover function, see \eqref{sigma}). The code in the last column indicates the iPRC and the asymptotic PRC at large $A$ (see text).} \begin{ruledtabular} \begin{tabular}{cccc} Case & $f_1(A)$& $f_2(A)$ & Code\\ \hline a & $\sigma(A)$ & $A \, \sigma(A)$ & $\mathrm{I}-\mathrm{II}_s$\\ b & $A\, \sigma(A)$ & $\sigma(A)$& $\mathrm{II}_s-\mathrm{I}$\\ c & $0$ & $(1-A)\sigma(A)$ & $\mathrm{II}_s-\mathrm{II}_r$\\ d & $0$ & $-(1-A)\sigma(A)$ & $\mathrm{II}_r-\mathrm{II}_s$\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centerline{\includegraphics[width=0.99\linewidth,clip=true]{fig2.pdf}} \caption{The non-infinitesimal PRCs analyzed in this work as a function of $\theta$ for four representative values of $A$, including $A=0^+$ (iPRC) and $A=8$ (resembling the aPRC). Panels (a) to (d) correspond to cases a to d, respectively (see Table \ref{tabla}). 
The code iPRC-aPRC is indicated in each panel.} \label{fig:prcs} \end{figure} As we are interested in introducing one crossover in the PRC between the iPRC and the aPRC, and have three fundamentally different types (I, $\mathrm{II}_s$, and $\mathrm{II}_r$), this gives six possible combinations. However, we shall consider only four of these iPRC-aPRC pairs, since type $\mathrm{II}_s$ is the only one that favors synchrony, and it must be included either in the iPRC or in the aPRC. Otherwise no synchronization phenomena are expected: type I is neutral and type $\mathrm{II}_r$ is repulsive. Hence, we focus on the four cases listed in Table \ref{tabla}, in which different PRC types characterize the small and large $A$ regimes. As a guide, in the fourth column of the Table we write a code X-Y, where X refers to the iPRC and Y to the aPRC. The saturation function $\sigma(A)$ in the Table has positive slope at $A=0$, and saturates at large $A$. In particular, we chose the following saturation function in our study: \begin{equation}\label{sigma} \sigma(A)=\frac{A}{1+A}. \end{equation} (Our results have been occasionally tested against the alternative choice $\sigma(A)=\tanh(A)$, finding no qualitative differences.) Graphical representations of the four PRCs (cases a to d){\color{black}, for four representative $A$ values,} are shown in Figs.~\ref{fig:prcs}(a)-(d). {\color{black} In each panel the PRC appears divided by $A$ as usual \cite{Izh07}, and the lack of overlap between the different curves evidences its nonlinearity.} \begin{figure} \centerline{\includegraphics[width=0.99\linewidth,clip=true]{fig3_v4.pdf}} \caption{(a) Raster plot ---a dot indicates at which time one oscillator phase crosses a multiple of $2\pi$--- for a population of $N=500$ and the non-infinitesimal PRC of case d, see Table I and Fig.~\ref{fig:prcs}(d). The initial condition is uniform $\theta_j(t=0)=0.01$ and parameters are $\Delta=0.01$, $r=0.9$, $\epsilon=0.4$.
The frequencies are deterministically drawn from a Lorentzian distribution: $\omega_i=\omega_0+\Delta \tan[\pi(2i-N-1)/(2N)]$. (c) The same as (a) but for a random initial distribution of phases. (b) and (d) depict the Kuramoto order parameter $Z(t)=N^{-1}\sum_j e^{i\theta_j(t)}$ for 50 t.u., once the simulations in (a) and (c), respectively, reached the stationary state. The red dashed line and the red cross in panels (b) and (d) are the periodic and fixed point attractors of Eq.~\eqref{eq:Z}, coexisting at the same parameter values.} \label{fig:raster} \end{figure} In the next section we obtain the phase diagrams corresponding to each of the four cases introduced here, based on the analysis of the complex-valued ordinary differential equation \eqref{eq:Z}. But before doing so, it is worth running direct simulations of the full system \eqref{model} and testing (and understanding) the correspondence with the solutions of Eq.~\eqref{eq:Z}. We simulated the full model in case d with $\omega_0=1$, heterogeneity parameter $\Delta=0.01$, pulse-shape parameter $r=0.9$ and coupling constant $\epsilon=0.4$. As may be seen in Fig.~\ref{fig:raster}, the population exhibits bistability between a desynchronized state and a synchronized state in which a macroscopic fraction of the oscillators share the same average frequency. This bistability is not surprising: the system is ``more synchronizing'' when already synchronized, since the aPRC is of type $\mathrm{II}_s$, while it is hardly synchronizable when already desynchronized, by virtue of the type $\mathrm{II}_r$ iPRC. In terms of $Z$, the synchronous solution is (approximately) a periodic orbit, while the desynchronized state exhibits only small fluctuations around a point due to finite size effects ($N=500$). The agreement with the stable fixed point and the stable limit cycle of Eq.~\eqref{eq:Z}, also represented in Figs.~\ref{fig:raster}(b) and \ref{fig:raster}(d), is excellent.
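The agreement just described can be reproduced by integrating the reduced equation \eqref{eq:Z} directly. A minimal Python sketch for case d at the parameters of Fig.~\ref{fig:raster} ($f_1=0$, $f_2(A)=-(1-A)\sigma(A)$, $\Delta=0.01$, $r=0.9$, $\epsilon=0.4$), started from the two initial conditions of the figure; the integration scheme and time spans are choices of this sketch, not of the paper:

```python
import numpy as np

delta, omega0, r, eps = 0.01, 1.0, 0.9, 0.4

def rhs(Z):
    A = eps * np.real((1.0 + Z) / (1.0 - r * Z))   # closure, Eq. (hZ)
    f2 = -(1.0 - A) * A / (1.0 + A)                # case d of Table I
    return (-delta + 1j * omega0) * Z + 0.5 * f2 * (1.0 - Z**2)

def integrate(Z, T, dt=0.01):
    out = np.empty(int(T / dt), dtype=complex)
    for k in range(out.size):                      # classical RK4 steps
        k1 = rhs(Z); k2 = rhs(Z + 0.5 * dt * k1)
        k3 = rhs(Z + 0.5 * dt * k2); k4 = rhs(Z + dt * k3)
        Z = Z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[k] = Z
    return out

sync = integrate(np.exp(0.01j), T=1000.0)   # all phases at 0.01, as in Fig. 3(a)
asyn = integrate(0.0 + 0.0j, T=1000.0)      # incoherent start (random phases)

tail_s, tail_a = np.abs(sync[-10000:]), np.abs(asyn[-10000:])
# bistability: a synchronized attractor (large |Z|) coexists with an
# asynchronous fixed point (small, nearly constant |Z|)
```

Under these assumptions the two runs settle onto the limit cycle and the fixed point shown in Figs.~\ref{fig:raster}(b) and (d), respectively.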
\section{Phase diagrams} In the {\color{black} remainder} of this paper we obtain the phase diagrams for the four reference cases by means of Eq.~\eqref{eq:Z}. As \eqref{eq:Z} is a (generic) planar system, the only possible attractors are fixed points and limit cycles. {\color{black} Their bifurcation loci, depicted in the phase diagrams below, have been obtained using the {\sc matcont} toolbox \cite{matcont} of {\sc matlab}.} Moreover, we recall that $Z$ is only physically meaningful inside the unit disk $|Z|\le1$, and therefore attractors and bifurcations occurring outside it are ignored. As seen in Fig.~\ref{fig:raster}, limit cycles correspond to synchronized solutions, in which a macroscopic part of the population rotates at the same average frequency. \begin{figure*} \centerline{\includegraphics[width=0.9\textwidth,clip=true]{fig4_v2.pdf}} \caption{Synchronization regions of the Winfree model in the $(\Delta,\varepsilon)$-plane for the four cases of PRCs described in Table~\ref{tabla}, and three different values of the parameter $r\in\{0.1,0.5,0.9\}$. {\color{black} Panels (a), (b), (c) and (d) correspond to cases a, b, c, and d, respectively.} For the value $r=0.9$, light shaded regions indicate where there is a stable limit cycle, corresponding to a macroscopic synchronized state. In the dark shaded regions, the limit cycle (synchronous state) coexists with a stable fixed point (asynchronous state). The codimension-two points are depicted by specific symbols: Generalized Hopf (GH-{$\blacktriangle$}), saddle-node separatrix-loop (SNSL-{$\blacksquare$}), Bogdanov-Takens (BT-$\bullet$), and transcritical (TC-{$\bigstar$}) bifurcations. The bifurcation corresponding to each line type is indicated in the legend of the respective panel. 
{\color{black} Insets in panels (c) and (d) are magnifications of the regions inside the respective rectangles.}} \label{fig:pd} \end{figure*} \subsection{Case a: $\mathrm{I}-\mathrm{II}_s$} In Fig.~\ref{fig:pd}(a) we show the phase diagram spanned by parameters $\Delta$ and $\varepsilon$. Bifurcation lines for three values of parameter $r$, controlling the pulse width, are depicted. The results almost replicate those in Ref.~\cite{gallego17} for the standard Winfree model with type $\mathrm{II}_s$ iPRC. Synchronization is found in two adjacent regions, in one of them (dark shaded) coexisting with a desynchronized state. (There exists a region (not shown) besides the bistability region where two desynchronized states coexist, see \cite{PM14,gallego17}). In contrast to the averaging approximation (the Kuramoto-Sakaguchi model), valid at small $\epsilon$ and $\Delta$, synchronization becomes impossible if the population is too heterogeneous (large $\Delta$). For small coupling (and heterogeneity), synchronization emerges from a supercritical Hopf bifurcation undergone by the desynchronized state, akin to the classical Kuramoto transition\cite{OA08}. This Hopf bifurcation line terminates at a double zero eigenvalue (Bogdanov-Takens, BT) point. A homoclinic (Hom) line emanates from the BT point limiting the coexistence region. As observed for the regular Winfree model \cite{PM14,gallego17} synchronization is more efficient for narrow pulses. The pulse width does not qualitatively change the phase diagram. The phase diagram only differs appreciably from those in ~\cite{gallego17} at the origin. We see that, due to the type-I iPRC, the Hopf line approaches the origin with an infinite slope. In particular, the asymptotic dependence of the critical $\epsilon_H$ on $\Delta$ {\color{black} follows an unusual square-root law with the frequency dispersion $\Delta$:} \begin{gather}\label{sqrt} \varepsilon_H=\sqrt{\frac{2\Delta}{1+r}}. 
\end{gather} This result can be deduced by deriving the Kuramoto-Sakaguchi model associated with model \eqref{model} via averaging or, alternatively, by keeping in \eqref{eq:Z} only the linear, rotationally invariant terms in $Z$ and equating the linear coefficient to $i\Omega$. \subsection{Case b: $\mathrm{II}_s-\mathrm{I}$} In case b, iPRC and aPRC are interchanged with respect to case a. This means that synchronization is favored at small coupling, but becomes increasingly difficult as the coupling grows. {\color{black} Accordingly, the phase diagram in Fig.~\ref{fig:pd}(b) shows the expected supercritical Hopf bifurcation line emanating as a straight line from the origin\cite{PM14,gallego17}: $\epsilon_H\propto\Delta+O(\Delta^2)$. At large $\epsilon$ there is a bistability region such that} the synchronized state disappears in a saddle-node bifurcation of limit cycles (SNLC). The locus of the SNLC is a line that emanates from a generalized Hopf (or Bautin) point (GH), and terminates at a point marked with a star on the $\epsilon$-axis of the phase diagram. The stars pinpoint the (equivariant) transcritical (TC) bifurcation \cite{ashwin92}, in which the fully synchronized state ($\theta_i(t)=\theta_j(t)$) of identical oscillators ($\Delta=0$) becomes unstable. For $r=0.9$, the instability of full synchronization takes place at $\epsilon_c=9.555\ldots$, far above the range of $\epsilon$ displayed in the phase diagram. The location of $\epsilon_c$ was not calculated using \eqref{eq:Z}, but {\color{black} by} directly looking for the stability threshold of the fully synchronized state, see Appendix. Finally, note that the synchronization region shrinks as the pulse becomes wider, but the phase diagram does not change qualitatively. \subsection{Case c: $\mathrm{II}_s-\mathrm{II}_r$} In this case the aPRC is repulsive, in contrast to case b where the aPRC is type I (i.e.~neutral in terms of synchronization).
In turn, the phase diagram in Fig.~\ref{fig:pd}(c) shows a quite small synchronization region (notice the scale of the axes). Synchronization is bounded exclusively by a supercritical Hopf bifurcation, save for broad pulses. In the latter case a GH point is found, and the Hopf bifurcation is subcritical to the left of it. {\color{black} Accordingly,} we find a bistability region bounded by a line of saddle-node bifurcations of limit cycles (SNLC) and a subcritical Hopf bifurcation{\color{black}, as in case b}. The precise value of $r$ below which the bistability region exists (i.e.~the GH point is present) is $r_*\simeq0.27891$. Note also the presence of a TC point in the phase diagram at $\Delta=0$, above which full synchrony destabilizes \footnote{As the ``OA manifold'' is not attracting for identical oscillators \cite{OA08,OA09,OHA11}, the resulting dynamics depends on the initial conditions. (For initial conditions in the OA manifold, such as purely random phases ($Z=0$), the system converges to a state of quasiperiodic partial synchronization \cite{Vre96,PR15}, a state in which $Z$ oscillates periodically while the individual oscillators behave quasiperiodically.)}. The transcritical bifurcation is not structurally stable, see e.g. Fig.~11 in \cite{crawford91}, and increasing $\Delta$ from $0$ may either leave no trace of bifurcation or ``decay'' into two saddle-node bifurcations of limit cycles. The latter scenario occurs for $r<0.27577\ldots$, see the bifurcation lines for $r=0.1$ in Fig.~\ref{fig:pd}(c), but in our case one of the bifurcations is not shown as it entails $|Z|>1$.
\subsection{Case d: $\mathrm{II}_r-\mathrm{II}_s$} Case d {\color{black} exhibits the most complex phase diagram among all those obtained here.} The aPRC is of type $\mathrm{II}_s$, as in case a, and (accordingly) the large $\epsilon$ region {\color{black} is organized by two codimension-two points: the Bogdanov-Takens (BT) and the saddle-node separatrix-loop (SNSL) points. The associated region of bistability between synchrony and asynchrony is bounded by} homoclinic, saddle-node and Hopf bifurcations. Remarkably, there is also a bistability region at small $\epsilon$ values for $r>r_*\simeq0.27891$ (recall the simulations in Fig.~\ref{fig:raster}), which is bounded by a subcritical Hopf bifurcation and a saddle-node bifurcation of limit cycles. In contrast to previous cases, this synchronization region is detached from the origin due to the repulsive character of the iPRC. To be more precise, the bottom corner of the lower bistability region, located at point TC, approaches the origin as $r\to1$. \section{Conclusions} In this work we have studied a non-trivial extension of the Winfree model in which the PRC is nonlinear in the mean field. If the PRC contains only the first harmonic of the angle, the OA ansatz permits a sharp dimensionality reduction. Among all possible dependencies of the PRC on the mean field, we have considered only those with a crossover between two different canonical components. In particular, we have analyzed four cases in which an attractive type-II component competes either against a repulsive type-II component or against a type-I component. Synchronization regions are peculiar for each case. Bistability between macroscopic synchronization and complete desynchronization is found in all cases (in case c, only for broad pulses), but in different relative locations in the $\Delta-\epsilon$ plane. Our results indicate that the nonlinearity of the PRC with respect to the forcing, by itself, is not enough to generate complex collective phenomena.
This is {\color{black} certain} for a Lorentzian distribution of frequencies since the reduced system is only two dimensional, irrespective of the exact form of $f_1(A)$ and $f_2(A)$. As happens in Kuramoto-like models, phenomena such as clustering or glassy dynamics may require multiple Fourier components \cite{okuda93} (in the PRC) or stronger heterogeneity \cite{IMS14}, respectively. Concerning collective chaos, other ingredients such as a time-varying coupling \cite{so11}, two interacting populations \cite{bick18} or multimodal frequency distributions \cite{cheng} appear to be imperative. Needless to say, our study is only a drop in the ocean of possible PRCs and model generalizations. For instance, relaxation oscillators \cite{sacre16} and bursting (neuronal) oscillators \cite{sherwood} have PRCs very different from the first-harmonic shape function in Eq.~\eqref{PRC}. Nevertheless, in spite of its limitations, we regard the model defined by Eqs.~\eqref{model} and \eqref{PRC} as a noteworthy example of a system in which the OA theory can be fully applied. \begin{acknowledgments} We acknowledge support by the Agencia Estatal de Investigaci\'on and Fondo Europeo de Desarrollo Regional under Project No.~FIS2016-74957-P (AEI/FEDER, EU). \end{acknowledgments} \section*{DATA AVAILABILITY} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Appendix: Identical Oscillators} If the oscillators are identical there is a fully synchronized solution $\theta_j(t)=\Psi(t)$. The dynamics of $\Psi$ obeys: \[ \dot\Psi=\omega_0 + f_1[\varepsilon P(\Psi)](1-\cos\Psi)-f_2[\varepsilon P(\Psi)]\sin\Psi. \] Next, we calculate the stability threshold of full synchrony, fixing $\omega_0=1$ as in the main text. In the thermodynamic limit ($N\to\infty$), we may perturb one oscillator, say the first one, without changing the mean field.
Hence, an infinitesimal perturbation $\delta\theta=\theta_1-\Psi$ obeys: \[ \dot{\delta\theta}=\lambda(\Psi)\, \delta\theta, \] where the multiplicative factor $\lambda(\Psi)=f_1[\varepsilon P(\Psi)]\sin\Psi-f_2[\varepsilon P(\Psi)]\cos\Psi$ depends on time through $\Psi(t)$. In order to obtain the average exponential growth (or contraction) rate of $\delta\theta$ we need to average over the variable $\Psi$, taking into account its density $\rho(\Psi)$. This means that the sign of the constant $\lambda$, given by \[ \lambda=\int_{-\pi}^{\pi} \lambda(\Psi) \rho(\Psi) d\Psi , \] determines the stability of the fully synchronized solution. If $\lambda$ is positive, the oscillator ``evaporates'' from the main cluster, i.e. full synchrony is unstable. The density $\rho(\Psi)$ is proportional to the inverse of the speed: $\rho(\Psi)\propto \dot\Psi^{-1}$. Imposing $\lambda=0$, we obtain the condition for the stability threshold of full synchrony: \[ \int_{-\pi}^{\pi} \frac{f_1[\varepsilon_c P(\Psi)]\sin\Psi-f_2[\varepsilon_c P(\Psi)]\cos\Psi} {1 + f_1[\varepsilon_c P(\Psi)](1-\cos\Psi)-f_2[\varepsilon_c P(\Psi)]\sin\Psi} d\Psi=0. \] This integral cannot be solved analytically, but the threshold coupling $\varepsilon_c$ is easily found numerically.
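A sketch of this numerical evaluation for case b ($f_1(A)=A\sigma(A)$, $f_2(A)=\sigma(A)$) with $r=0.9$, for which the main text quotes $\epsilon_c=9.555\ldots$; the grid size and bisection bracket are choices of this sketch:

```python
import numpy as np

# case b of Table I: f1(A) = A*sigma(A), f2(A) = sigma(A), sigma(A) = A/(1+A)
r, omega0 = 0.9, 1.0
f1 = lambda A: A**2 / (1.0 + A)
f2 = lambda A: A / (1.0 + A)

def pulse(psi):
    return (1.0 - r) * (1.0 + np.cos(psi)) / (1.0 - 2.0 * r * np.cos(psi) + r**2)

def mean_rate(eps, n=200001):
    # sign of int lambda(Psi) rho(Psi) dPsi, with rho proportional to 1/Psi-dot
    psi = np.linspace(-np.pi, np.pi, n)
    A = eps * pulse(psi)
    lam = f1(A) * np.sin(psi) - f2(A) * np.cos(psi)
    speed = omega0 + f1(A) * (1.0 - np.cos(psi)) - f2(A) * np.sin(psi)
    g = lam / speed
    return np.sum((g[:-1] + g[1:]) / 2.0) * (psi[1] - psi[0])

lo, hi = 0.5, 50.0          # full synchrony stable at lo, unstable at hi
for _ in range(50):         # bisection on the sign of the averaged rate
    mid = 0.5 * (lo + hi)
    if mean_rate(mid) < 0.0:
        lo = mid
    else:
        hi = mid
eps_c = 0.5 * (lo + hi)     # compare with eps_c = 9.555... quoted in the text
```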
\section{Introduction} A variety of wavelengths of the electromagnetic spectrum are typically involved in the fabrication of structures with details ranging from micro- to nano-scale, exploiting a plethora of techniques including holography, laser ablation and UV lithography \cite{infusino2012polycryps, sahin2014nanoscale, sze1985physics}. Large areas ($cm^2$) and features of hundreds of nanometers are at hand in systems like microfluidic channels, optical devices, MEMs, and transistors, just to name a few \cite{leclerc2004microfluidic, psaltis2006developing}. Pushing the resolution to the nanometric scale requires a different approach, like nanoimprinting and electron beam lithography, albeit at the expense of a reduced work area and a time-consuming procedure \cite{ferraro2018directional, sze1985physics}. In the last decades, the new frontier of nanometric fabrication has been embodied by two-photon direct laser writing (TP-DLW) lithography. By exploiting a nonlinear two-photon absorption process, the involved photoresin is cured only in the focal point of the laser, the voxel (short for volume pixel), thus considerably increasing the resolution of the realized nanostructures. However, nanotechnology still moves forward seeking hyper-resolution, and recent years have witnessed a very large number of attempts to further increase the fabrication performance. Among them, chemical reagents have been employed to improve the photoresin capabilities \cite{zhang2010designable}, and complex reaction procedures, such as the modification of the initiation and termination polymerization phases by means of a gain medium, have been considered. In the latter case, a resolution of $\sim \lambda/20$ has been achieved \cite{li2009achieving}. Results have also been achieved with the TP-DLW technique in terms of 3D micro-scale structures \cite{zhang2010designable}, for example improving fiber tip fabrication \cite{bratton2006recent} and realizing high-resolution 3D systems in hydrogels \cite{xing20153d}.
In the last decade, in terms of achieved resolution, a $100nm$ limit has been reached with the use of radical quenchers in femtosecond laser direct writing \cite{lee2008advances}, of $40nm$ by using an activation beam \cite{li2009achieving}, and of $25nm$ and $20nm$ by exploiting scan speed manipulation and the self-smoothing effect \cite{tan2007reduction, wu2010high} or by using photosensitive sol-gels \cite{passinger2007direct}. Finally, a technical expedient, like changing the interface height, can limit the single line width and height to around $100nm$ \cite{park2009two}. A completely different approach involves optical epsilon-near-zero ($\varepsilon_{NZ}$) nano-cavities in the metal/insulator/metal/insulator (MIMI) configuration \cite{liocolor}. These systems pave the way for the realization of very versatile devices with unusual optical features. In this manuscript, the extraordinary self-collimation of light enabled by a MIMI plasmonic metamaterial is proposed as a ground-breaking possibility to improve the resolution of TP-DLW lithography. In particular, nanostructures with typical sizes of a few tens of nanometers are within reach with a few minutes of writing time. In the following, it is shown how the metamaterial is exploited to successfully fabricate 1D gratings with a height adjustable from $5 nm$ to $50 nm$, and a nanometric 3D bas-relief of Da Vinci's portrait \textit{``Lady with an Ermine''} with dimensions of $50X80X0.5 \mu m$ and its full height divided into $25$ slices of $20nm$ thickness each. The proposed approach is characterized by a very fast patterning process, able to realize dielectric nanometric structures with almost any shape/form, such as ultra-thin diffractive optical elements \cite{chen2013reversible, chen2018broadband}. Moreover, the proposed MIMI can also be realized directly over the objective of the TP-DLW apparatus, enabling nanofabrication on different substrates.
These results are extremely important for industrial applications in several fields, such as anti-counterfeiting and flat optics. \section{Principle and Materials of Hyper Resolute Two Photon Lithography} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.2]{Fig_1_bis_2.png} \caption{Sketch of the TP-DLW process where the photosensitive resin is placed a) on the glass substrate, b) on the MIMI device working as substrate. The inset shows the proposed MIMI structure made of Ag/ZnO/Ag/ZnO, allowing the transmission of wavelengths of 390 and 780 nm. c) Resonant cone angle for the considered MIMI. d) Point spread function (PSF) for the four considered cases. The PSF reveals a beam shrinking of about $36 \%$ with the MIMI with respect to the glass. e-f) Transmitted spot through glass and MIMI at $\lambda=405 nm$ and g-h) at $\lambda=780 nm$, acquired through a beam profiler.} \label{1} \end{center} \end{figure} The TP-DLW apparatus used in this work exploits a femtosecond Ti:Sapphire laser ($\lambda = 780 nm$) connected to an inverted microscope. The laser beam is focused on the sample through a $63X$ objective with 1.4 numerical aperture (NA). 2D and 3D structures can be fabricated through the glass by dipping the objective into the photoresist (Fig. \ref{1}a). Since the writing process occurs plane by plane, to create the highest reliefs of a structure, the laser beam is attenuated while traveling through the already fabricated objects, generating smaller effective voxels. As illustrated in Fig. \ref{1}b, it is possible to drastically increase the resolution in terms of voxel sizes by using a MIMI directly deposited on a classical substrate used for the TP-DLW. The inset reports a schematic view of the MIMI, consisting of a silver layer with thickness $t_{Ag}=30nm$, a thicker ZnO layer working as an optical nano-cavity ($t_{cav}$), another silver layer with the same ($t_{Ag}$) thickness, and a final thin ZnO layer ($t_{ZnO}=30nm$).
In order to obtain a hyper-resolute TP-DLW process, it is necessary that the selected wavelengths ($\lambda=780nm$ and $\lambda=390nm$) are transmitted through the MIMI device. By using a Finite Element Method (FEM) model based on numerical ellipsometer analysis (NEA)\cite{lio2019comprehensive}, and by varying the thickness of the dielectric nano-cavity ($t_{cav}$), it is possible to retrieve the thickness value that yields the minimum in reflectance and the maximum in transmittance for a normally incident wave at the two above-mentioned wavelengths. In our case, the optimal cavity thickness is $160 nm$, which supports the plasmonic resonant modes marked with dashed white lines in the reflectance and transmittance maps reported in Figure S1a and b, respectively. A further validation of the occurrence of the double modes is provided by a modified effective medium theory (EMT), taking into account the dielectric constant and the thickness of each layer \cite{zeng1988effective, rousselle1993effective}. This analysis shows the double $\varepsilon$NZ behavior of the proposed MIMI device, with the real perpendicular $Re \left\{ \varepsilon_{\perp}\right \}$ and parallel $Re \left\{ \varepsilon_{\parallel}\right \}$ dielectric constants crossing zero at the two resonant wavelengths (Figure S1c in the supplementary material). In the case of off-normal light incidence on the same MIMI ($\theta_i$ varying from $0^\circ$ to $80^\circ$), the reflectance and transmittance curves are reported in Figures S1d-g. Experimental curves are measured by analyzing the fabricated MIMI by means of a W-VASE ellipsometer, while the numerical analysis is performed again with the NEA model. The good agreement between numerical and experimental results confirms the possibility of exploiting the MIMI for TP-DLW with a focused ($\lambda = 780 nm$) laser beam at normal incidence.
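For concreteness, the EMT mixing rules invoked above can be sketched as follows. The permittivity values in the demo are illustrative placeholders (the actual analysis uses the measured, dispersive Ag and ZnO data), and the mixing formulas are the standard parallel/perpendicular averages for deep-subwavelength layers:

```python
import numpy as np

def emt_uniaxial(eps, t):
    # standard effective-medium mixing for a layered (uniaxial) stack
    eps = np.asarray(eps, dtype=complex)
    f = np.asarray(t, dtype=float)
    f = f / f.sum()                      # fill fractions
    eps_par = np.sum(f * eps)            # in-plane: weighted arithmetic mean
    eps_perp = 1.0 / np.sum(f / eps)     # out-of-plane: weighted harmonic mean
    return eps_par, eps_perp

def resonance_cone_angle(eps_par, eps_perp):
    # |theta| = sqrt(-eps_par/eps_perp); real in the hyperbolic regime,
    # and zero at the epsilon-near-zero points
    return np.sqrt(-(eps_par + 0j) / eps_perp)

# equal-thickness metal/dielectric sanity check (hypothetical permittivities)
ep, et = emt_uniaxial([-2.0, 4.0], [1.0, 1.0])   # ep = 1, et = -8
theta = resonance_cone_angle(ep, et)             # sqrt(1/8) ~ 0.354 rad
```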
As reported in past studies \cite{mocella2009self, polles2011self, di2012digital, arlandis2012zero}, $\varepsilon_{NZ}$ metamaterials have the remarkable ability to collimate light. The propagation of light within these metamaterials can be rigorously described by means of the dyadic Green's function \cite{potemkin2012green}. Such an analysis confirms that light emitted in the direction of the extraordinary axis from a localized source placed on top of an $\varepsilon_{NZ}$ metamaterial propagates within the medium in the so-called resonance cone \cite{newman2013enhanced}. The resonance cone is visible as two lobes propagating through the medium, separated by a semi-angle $\theta_{res-cone}$, which is calculated as follows \cite{shekhar2014hyperbolic}: \begin{equation} \left | \theta_{res-cone} \right |= \sqrt{-\frac{\varepsilon_\parallel}{\varepsilon_\bot}} \end{equation} For the considered MIMI device, $\theta_{res-cone}$ has been calculated using the parallel and perpendicular dielectric constants retrieved with the EMT. As reported in Figure \ref{1}c, the MIMI device presents two points where the resonant cone angle is zero, meaning that, for $\lambda\sim 400nm$ and $\lambda\sim770nm$, the light passing through the MIMI remains completely collimated. Since both the EMT and the $\theta_{res-cone}$ calculation are independent of the incidence angle, the small mismatch between the desired and the evaluated wavelengths is not a concern. The $\theta_{res-cone}$ and the beam behavior are experimentally confirmed by the use of a home-built confocal setup (details reported in Figure S2). Each spot has been collected using a beam profiler (Thorlabs BC106N-VIS, spectral range from $350 nm$ to $1100 nm$). The MIMI is compared with a standard glass (glass coverslip $24X24X0.15 mm$) used in TP-DLW; the focused spot is measured in order to estimate the beam shrinking via the point spread function (PSF), as illustrated in Figure \ref{1}d.
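The FWHM used to quantify the beam shrinking can be extracted from a measured profile with a few lines. A sketch on a synthetic Gaussian profile, with numbers chosen to mimic the reported $1\,\mu m$ width and $36\%$ reduction (the actual profiles come from the beam profiler):

```python
import numpy as np

def fwhm(x, y):
    # full width at half maximum of a single-peaked profile,
    # with linear interpolation at the two half-maximum crossings
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

x = np.linspace(-5.0, 5.0, 2001)                # position (micrometers)
w = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # sigma for a 1 micron FWHM
glass = np.exp(-x**2 / (2.0 * w**2))            # synthetic glass-side profile
mimi = np.exp(-x**2 / (2.0 * (0.64 * w)**2))    # synthetic 36%-narrower profile
reduction = 100.0 * (1.0 - fwhm(x, mimi) / fwhm(x, glass))   # ~36
```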
The intensity maps are reported in Figures \ref{1}e-h. The beams with wavelengths $\lambda=405\,nm$ and $\lambda=780\,nm$ passing through the MIMI present a reduction of the initial full width at half maximum (FWHM) of $1\,\mu m$ and about $0.9\,\mu m$, respectively, which corresponds to a reduction of $36\%$ when collected with a 50x objective. With a 10x objective, the beam reduction is about $20\%$, as detailed in the field maps and PSFs reported in Figure S3. \section{Sample Fabrication and Characterization} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.2]{Fig_2.png} \caption{ AFM morphology of a grating fabricated with a Laser Power (LP) of $20\,mW$ and a Scan Speed (SS) of $4\,mm/s$ a) through the classical glass substrate and b) through the MIMI device. c) Comparison of the profiles of the two fabricated gratings. d) Grating realized through the MIMI device using an LP of $12\,mW$ and an SS of $4\,mm/s$, producing an element height of $\sim 5\pm2\,nm$. e) Height and width of all the fabricated gratings at different laser powers and scan speeds using the glass substrate (black symbols) and the MIMI (red symbols). f) Height as a function of the LP for the glass (black squares) and MIMI (red squares) substrates, showing a reduction of $\sim89\%$. } \label{2} \end{center} \end{figure} To demonstrate the hyper resolution in a TP-DLW process induced by the presence of a MIMI, an array of 1D gratings has been fabricated on top of the MIMI previously deposited on a glass coverslip substrate. In the reference case (plain glass), the interface selected to begin the laser writing process is the one between the resin and the glass, while in the presence of the MIMI the chosen interface is between the first silver layer and the glass. The use of an array permits the characterization of each element as a function of the laser power $LP$, which varies from $12\,mW$ to $\sim25\,mW$, and of the scan speed $SS$, which varies from $2\,mm/s$ to $10\,mm/s$. 
The topography analysis, conducted using an atomic force microscope (AFM) with high-precision tips (see details in the methods section), ensures very high-quality data. A comparison between the AFM measurements performed on the structures realized through a simple glass coverslip and through the MIMI is shown in Figure \ref{2}. The difference between the standard process and the process through the MIMI stands out immediately, the latter offering the possibility to trace clear, straight lines without defects (Figures \ref{2}a and b). A comparison between two fabricated gratings, using $LP = 20\,mW$ and $SS = 4\,mm/s$, has been made based on the raw images of the AFM profile. As illustrated in Figure \ref{2}c, a remarkable difference in the height and width of each grating element is present: a height $H\sim 420\,nm$ for the grating written through glass against an average $\tilde{H}\sim{30}\,nm$ for the one written through the MIMI, with a comparable reduction of the half width. In Figure \ref{2}d, a grating realized using the lowest laser power and the fastest scan speed is reported, producing an $\tilde{H}\sim{7}\,nm$ and a width $\tilde{W}\sim{150}\,nm$. In order to complete the comparison between the two substrates, the size of each grating element contained in the test array has been collected and reported in the graph shown in Figure \ref{2}e. The graph evidences a remarkable difference between the grating sizes produced through the glass and through the MIMI: the resolution is improved by $\sim89\%$ in height and $\sim50\%$ in width. A further validation has been done by comparing the produced element heights at different laser powers. The trend is similar for both substrates, with the difference between them remaining constant, as presented in Figure \ref{2}f. 
The proof of concept of the reliability of the proposed technique, which forces self-collimation of the beam during writing, is the fabrication of particularly complex three-dimensional TP-DLW test structures. The MIMI functionality has been challenged by producing a polymer bas-relief version of Da Vinci's famous portrait \textit{"Lady with an Ermine"} (Figure \ref{3}a). A bas-relief was chosen because the process involves 3D lithography and because this portrait contains very fine details that can only be reproduced with hyper resolution. The first step is to create a computer-aided design (CAD) of the portrait image to be used in the TP-DLW process. The obtained design is shown in Figure \ref{3}b. The full height of the portrait is chosen as $H_z= 500\,nm$, divided into $25$ slices of $20\,nm$ thickness each. The optical microscopy image, collected using unpolarized white light and a $40x$ objective, clearly delineates the contours of the portrait and details such as the face, dress, hand, ermine and necklace (Figure \ref{3}c). This high-quality optical image is also a consequence of the self-collimation of the impinging microscope light, which passes through the sample and reaches the objective experiencing the lens effect of the $\varepsilon_{NZ}$ metamaterial, as demonstrated in previous works\cite{fang2002imaging, liu2007far, casse2010super, zhao2011nanoscale, kim2015metamaterials}. The image of the whole bas-relief (Figure \ref{3}d-e) is obtained by means of a fluorescence confocal microscopy analysis (details in the methods section) performed with a slicing of $10\,nm$ along the z-axis. This imaging method allows the silhouette of the "Lady" to be easily recognized at the minimum height (Figure \ref{3}d). Then, moving up to $H_z\sim 0.5\,\mu m$, the frame shows the details present on top of the sample, as illustrated in Figure \ref{3}e. 
Finally, once the z-stack is completed, it is possible, using the proprietary software, to recombine all the acquired images, producing their full overlap and the final picture reported in Figure \ref{3}f. This test highlights the reliability of the hyper-resolute 3D TP-DLW, with a slice distance of $20\,nm$ and a high level of detail reproduced in a very small volume. This technique is foreseen as a valuable route to realize anti-counterfeiting tags in the recent research framework of physical unclonable functions. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{Fig_3.png} \caption{a) Original portrait and b) TP-DLW software model of the bas-relief \textit{"Lady with an Ermine"}. c) Optical image of the realized 3D sample collected with a 40X objective through the glass, illuminated from the top. d-f) Emission confocal images of the bas-relief. In particular, d) reports the base of the 3D print, showing the silhouette of the "Lady". e) Image collected close to the top ($H_z\sim 0.5 \mu m$) of the portrait; details such as the hand, the shoulders, and parts of the dress and face are visible. f) Whole-sample image reconstructed from the z-stack, highlighting the ability to "print" all details faithfully at the nanoscale. The scale-bar in each picture is $15 \mu m$. } \label{3} \end{center} \end{figure} \section{Conclusions} In this work, a novel technique is presented to significantly improve the performance, in terms of writing resolution, of a generic two-photon direct laser writing process. The diffraction-free ability of an $\varepsilon_{NZ}$ metamaterial enables an extraordinary collimation of the writing laser light and hence the hyper resolution of the TP-DLW. In the fabrication of test structures such as 1D gratings, a reduction of the voxel size of about $89\%$ in height and $50\%$ in slit width is observed, corresponding, for the grating height, to a reduction from $250\pm2\,nm$ to $5\pm2\,nm$. 
The proposed technique gives its best when more complex 3D structures are considered: a hyper-resolute bas-relief version of Da Vinci's famous portrait \textit{"Lady with an Ermine"} has been fabricated, with a full height of only $500\,nm$ divided into $25$ slices of $20\,nm$ thickness each. These results find immediate application in the trendsetting scenario of physical unclonable functions and flat optics. \section*{Methods} \textbf{Sample fabrication} The MIMI substrate is realized by DC sputtering deposition on a glass substrate (thickness $\sim 150\,\mu m$). The substrate is then placed inside the two-photon lithography apparatus, which defines the grating pattern or the desired shape in a drop of photo-resin placed on top of the MIMI device. Finally, the sample is developed in a bath of propylene glycol methyl ether acetate (PGMEA) for 25 minutes, then in a bath of isopropyl alcohol (IPA) for another 5 minutes. The sample remains immersed during the whole development process.\\ \textbf{AFM characterization} has been done using a confocal microscope Zeiss LSM 780 equipped with an AFM head-stage. The AFM measurements are performed with high-resolution tips with a precision of $\pm 2\,nm$. Each scan has been collected at a high resolution of $1024\times1024$ px in order to reduce any background noise. \\ \textbf{Fluorescence Confocal} The 3D image in fluorescence (see Figure \ref{3}) has been acquired using a confocal microscope equipped with a 3D piezoelectric scanner. A 488 nm laser was focused on the cover-slip through a 40x air objective, which allows a spatial resolution of 200 nm and a z-resolution of 10 nm. The emitted light collected by the objective is sent to a beam-splitter and filtered with an MBS T80/R20. Finally, it passes through a pin-hole and is collected by a 580-600 nm detector. \section*{Contributions} G.E.L. conceived the idea. G.E.L., T.R. and A.F. fabricated and fully characterized the structures. G.E.L. and A.F. performed the measurements and analyzed the data. 
R.C., A.DL. and M.G. devised the experiments and supervised the work. G.E.L. and A.F. prepared and wrote the manuscript with input from all authors. \section*{Acknowledgements} The authors thank the Infrastructure ``BeyondNano" (PONa3-00362) of CNR-Nanotec for the access to research instruments. They also thank the ``Area della Ricerca di Roma 2", Tor Vergata, for the access to the ICT Services (ARToV-CNR).
\section{Introduction} Surfactants are widely used to control the evaporation behavior of sessile droplets on a flat substrate~\cite{Sempels2013,Alvaro2016,Kwiecinski2019}. The motivation is driven by various applications in inkjet printing, surface coating and patterning~\cite{park2006control,kong2014}, which mainly aim to optimize the drying rate and the final deposition. The biggest challenge for a controlled uniform coating by droplet evaporation originates from the well-known ``coffee-stain effect"~\cite{deegan1997}. It has been shown that surfactant-induced Marangoni flow can play an essential role in suppressing this effect~\cite{Still2012,kim2016controlled}. In these studies, one of the most common ionic surfactants, sodium dodecyl sulfate (SDS)~\cite{Piret2002,Kumar2015,Choudhary2018}, is added to the system at small concentration, typically $\leq 1$ wt\%. The surfactants are therefore considered to remain soluble in the system during most of the evaporation lifetime. However, in many practical cases, the relevant liquids contain a high concentration of surfactants; e.g., liquid detergents can contain surfactant ingredients at up to 40\% by weight. Such high loading of surfactants may lead to undesired effects, such as separation and crystallization. SDS may crystallize in liquid solutions upon cooling~\cite{SMITH2004} or upon seeding with 1-dodecanol~\cite{SUMMERTON2016}. On the other hand, selective evaporation of the liquid components with larger volatilities can also lead to phase separation in multicomponent mixtures~\cite{Li2018,Kim2018,Li2019}. Consequently, the nonvolatile surfactant (SDS) is expected to separate from an evaporating liquid system by crystallization due to the preferential evaporation of the volatile liquids. Therefore, given the widespread use of SDS in evaporating droplet systems, its crystallization behaviour deserves a more detailed investigation. 
In this work, we study a multicomponent droplet system consisting of a mixture of glycerol, water, and SDS and let it evaporate in ambient air. SDS is not miscible with pure glycerol, but it does dissolve in glycerol-water mixtures for large enough water concentration ratios. This behavior qualitatively resembles the ternary ``ouzo'' system~\cite{Vitale2003} consisting of water, ethanol, and anise oil, in which oil microdroplets nucleate at low enough ethanol concentrations. Tan \textit{et al.}~\cite{Tan2016,Tan2017a} triggered this emulsification threshold by the selective evaporation of ethanol in an evaporating ouzo droplet. Similarly, the varying solubility of SDS in glycerol-water binary systems may also lead to phase separation due to the concentration change caused by the selective evaporation of water alone. In contrast to crystallization by cooling~\cite{SMITH2004,SUMMERTON2016}, here the oversaturation with SDS and the subsequent nucleation and growth of SDS crystals are caused by the preferential evaporation of water at room temperature~\cite{crystal2018}. To better understand the evaporation-induced crystallization in the mixture droplet system, two main questions need to be addressed: how does a surfactant-laden mixture droplet evaporate, and how can the crystallization during the evaporation be modelled? In this paper we address these questions. A typical snapshot of an evaporating droplet is shown in Fig.~\ref{fgr:capture}, where the two life phases can be distinguished: the evaporation phase and the crystallization phase. The focus of our study is on the dynamics of the evaporation and the kinetics of the crystallization, not on the micro-scale crystal morphology. 
\section{Experimental Methods} \subsection{Materials and preparation} The liquid solution was prepared with an initial composition of 78\% (w/w) Milli-Q water (Reference A+, Merck Millipore, 25$^\circ$), 19.6\% (w/w) glycerol (Sigma Aldrich, purity $\geq 98\%$) and 2.4\% (w/w) sodium dodecyl sulfate (Sigma Aldrich, purity 98\%). The initial concentration of SDS is 13 CMC (critical micelle concentration). Experiments were carried out on a transparent hydrophobic octadecyltrichlorosilane (OTS)-glass substrate~\cite{Peng2014}. The static contact angles of Milli-Q water and glycerol on the substrate are $105^{\circ} \pm 3^{\circ}$ and $90^{\circ} \pm 3^{\circ}$, respectively. The 50\%/50\% (w/w) glycerol-water binary droplet has a $95^{\circ}$ static contact angle. Prior to each experiment, the samples were cleaned by sonication in an ultrasonic bath of ethanol and subsequently in water, then dried under a flow of nitrogen gas. \subsection{Experimental setup} We performed two different experiments to separately study the evaporation phase and the crystallization phase. To study the evaporation behavior, the droplets were deposited on the substrate by a Hamilton 2 $\mu$L syringe, mounted vertically on a computer-controlled motorized pump, which allowed droplets of controlled volume to be dispensed through a needle. We measured the geometry of the deposited droplet by bright-field imaging in side view. The whole process was recorded by an OCA 15 (Dataphysics, Germany) contact angle device (Fig.~\ref{fgr:setup}.a): a CCD camera coupled to a microscope, which was back-illuminated by an LED light from the opposite side of the droplet. For the crystallization study, we observed the droplet in bottom view with a confocal microscope (Fig.~\ref{fgr:setup}.b). By focusing on the layer close to the substrate (at a $\approx 10\ \mu$m height), the dynamic growth of the crystals was visualized in a 2-dimensional view. 
The experiments were performed at a room temperature of 21.4 $\pm$ 1$^{\circ}$C and a relative humidity of 50$\%\pm 5\%$. These parameters were monitored and recorded for each measurement. \begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{setup.eps} \caption{A schematic sketch of the experimental setups. (a) The contact angle device contains a CCD camera with a microscope and an LED light source illuminating the droplet. (b) The droplet is illuminated from above and recorded by a camera equipped with a $10\times$ microscope objective underneath. The whole setup is part of a confocal microscope (Nikon A1 confocal laser microscope system, Nikon Corporation, Tokyo, Japan).} \label{fgr:setup} \end{figure} \subsection{Imaging analysis} For the side-view geometrical measurement, images were analyzed using a custom-made Matlab code to detect the droplet profile with sub-pixel accuracy~\cite{vanderbos2014}. The sizes of all droplets are smaller than the capillary length $\sqrt{\gamma/(\rho g)} \approx 2.7$~mm for the used liquids~\cite{cazabat2010}, where $\gamma \approx 70$~mN/m and $\rho \approx 10^3$ kg/m$^3$ are the surface tension and density of the mixture, and $g = 9.8$~m/s$^2$ is the gravitational acceleration. The detected profile is fitted to a spherical cap during the evaporation phase, which enables us to calculate the volume $V$ of the droplet with footprint radius $R$ and contact angle $\theta$. As shown in Fig.~\ref{fgr:capture}A, the dark blue solid line is the position of the substrate: the spherical shape above it is the sessile droplet, the one underneath is its reflection. For the documentation of the crystallization process from the bottom view, a manual detection with ImageJ was used to measure the crystallized area at every time instant, see details in the Supplementary Materials. 
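The capillary-length estimate quoted above follows directly from the stated values; a quick check:

```python
import math

# Values quoted in the text for the glycerol-water-SDS mixture.
gamma = 70e-3   # surface tension, N/m
rho = 1.0e3     # density, kg/m^3
g = 9.8         # gravitational acceleration, m/s^2

# Capillary length; droplets smaller than this are well approximated
# by spherical caps, justifying the fitting procedure.
l_cap = math.sqrt(gamma / (rho * g))  # ~2.7e-3 m
```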
\begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{capture.eps} \caption{Experimental snapshots of the evaporation and drying process of a typical drop on a flat surface. (A) and (B) show the evaporation phase of the drop: here, the drop retains a spherical cap shape; no crystallization occurs. (C) The final state of the drop: due to the crystallization of SDS in the bulk, the surface of the drop buckles and no longer remains spherical. The crystallization of the SDS shields the surface and brings the evaporation process to an end. The scale bar represents 0.5~mm.} \label{fgr:capture} \end{figure} \section{Experimental results} \subsection{Evaporation phase} The left column of Fig.~\ref{fgr:parameters} displays the temporal evolution of the drop-characterizing geometrical parameters for four droplets with different initial sizes: volume $V$ (A1), contact angle $\theta$ (B1), and footprint radius $R$ (C1). From the plots, it is evident that all the droplets evaporate following the ``stick-slide" mode~\cite{stauber2014,lohse2015rmp}, in which the droplet's footprint radius first remains constant until a critical contact angle is reached, after which the contact line starts to shrink. We only measure the volume until buckling occurs (as marked by the red circles in Fig.~\ref{fgr:parameters}A1, C1); after that, the droplet shape deforms, no regular shape is re-established, and accurate volume measurements become impossible. Fig.~\ref{fgr:evap_rate}A shows the average evaporation rate of various droplets in the first 30 sec after deposition, with initial volumes ranging from 0.12~$\mu$L to 2.40~$\mu$L. The evaporation rate monotonically increases with increasing droplet size, apart from fluctuations due to experimental uncertainties. 
\begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{parameters.eps} \caption{(A1,B1,C1) Measured temporal evolution of the geometrical parameters: volume $V$ (A1), contact angle $\theta$ (B1), and lateral size $R$ (C1). The red dots mark the moments when buckling occurred. (A2,B2,C2) Same parameters as in (A1,B1,C1), but now non-dimensional and plotted against the scaled time following Eq.~(\ref{eq:scaled_time}). The data collapse clearly shows the universality of the drop evaporation process. (A2) The final volume is controlled by the occurrence of crystallization, rather than by the liquid-vapor equilibrium relation, which is shown by the black dashed line.} \label{fgr:parameters} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{evap_rate.eps} \caption{(A) The initial rate of volume loss of the drop varies for different initial volumes. (B) The same data normalized by the initial volume, plotted against the initial volume. The straight line shows the scaling relation with slope -2/3, demonstrating good agreement with the experimental data.} \label{fgr:evap_rate} \end{figure} \subsection{Crystallization phase} Figure~\ref{fgr:crystal_large} shows the complete crystallization process of an evaporating surfactant-laden mixture droplet. The droplet starts to evaporate at time $t_0$. At approximately 50 sec, the first crystals appear near the contact line (CL) region. After a few more seconds, several crystals nucleate at the rim. They then grow and coalesce to form a larger piece and finally occupy the whole bulk of the droplet. Figure~\ref{fgr:crystal} presents a zoomed-in bottom view of the contact region of another evaporating surfactant-laden mixture droplet. Initially, the droplet is transparent with a smooth CL. After evaporating for 280 sec, a crystal nucleates near the CL and floats to the position labeled by the yellow circle. A few seconds later, more crystals nucleate at the rim, slightly deforming the CL. 
The nucleated crystals grow and coalesce with neighbouring crystals. Eventually, the whole droplet is occupied by the crystals and the CL deforms and is no longer smooth. Figure~\ref{fgr:JMAK}A shows the temporal evolution of the transformed fraction measured in a 2D bottom view for three different droplets. $X$ is the area fraction occupied by crystals and $t$ is the time elapsed after the first crystallization had been observed. The area fraction $X$ increases as the crystals grow, at a different rate for each droplet. \begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{crystal_large.eps} \caption{Bottom view of the complete lifetime of a drop. (A) The drop evaporates on the substrate with receding contact line. (B) The first crystal appears near the contact line region. (C) Several crystals nucleate and grow independently. (D) Growing crystals coalesce with neighbouring ones. (E) The crystals cover the whole drop and bring the evaporation to an end. (B to E) The contact line basically remains the same until the final state of the drop, but slightly deforms due to the buckling of the drop surface. The scale bar represents 50~$\mu$m.} \label{fgr:crystal_large} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.98\textwidth]{crystal.eps} \caption{Bottom-view snapshots of the contact region of an evaporating surfactant-binary drop. (A) The moment of deposition of the drop: the drop starts evaporating on the substrate. (B) A small crystal nucleates (yellow circle), floats and grows near the contact line. (C) The crystals heterogeneously nucleate at the contact line. (D) The nucleated crystals grow and merge with neighbouring crystals. (E) The crystallized SDS fully occupies the drop and eventually brings the evaporation to an end. 
The scale bar represents 20~$\mu$m.} \label{fgr:crystal} \end{figure} \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{JMAK.eps} \caption{(A) Measurement of the crystallized area fraction of three droplets against time. The analytical results (solid lines) according to Eq.~(\ref{eq:crys_X}) are also shown. (B) The circular regions with areas $\Delta_1$ and $\Delta_2$ represent the regions where nucleation can or cannot occur within time $t$, respectively. The orange region indicates the area transformed at time $t$, due to a nucleation at N. $S$ is an arbitrary point in $\Delta_1$ at distance $z$ from the boundary. Within time $t$, it must have been transformed by a nucleus on the boundary.} \label{fgr:JMAK} \end{figure} \section{Theoretical analysis} \subsection{Theory of mixture droplet evaporation} We first study the evaporation characteristics of the surfactant-laden mixture droplet. In general, for a droplet evaporating on a flat surface under ambient conditions and in the absence of any convection, the evaporation is fully controlled by the diffusion of the vapor away from the droplet~\cite{picknett1977,hu2006}: the liquid molecules change their phase and diffuse as vapor molecules into the surrounding air. 
Popov~\cite{popov2005} derived an analytical solution by exploiting the solution of the equivalent electrostatic problem of a charged lens-shaped conductor: \begin{equation} \centering \frac{\text{d}m}{\text{d}t} = -\pi DR(c_{\text{s}}-c_{\infty})f(\theta) \label{eq:popov} \end{equation} with \begin{equation} \centering f(\theta)=\frac{\text{sin}(\theta)}{1+\text{cos}(\theta)}+4\int_{0}^{\infty} \frac{1+\text{cosh}(2\theta\varepsilon)}{\text{sinh}(2\pi\varepsilon)}\text{tanh}[(\pi-\theta)\varepsilon] \text{d}\varepsilon \end{equation} with $m$ the droplet mass, $D$ the diffusion coefficient of the droplet liquid vapor in air, $c_s$ the saturated concentration of liquid vapor molecules, and $c_\infty$ the ambient concentration of the liquid vapor far away from the drop. For the evaporation of multicomponent droplets, we first employ the method suggested by Brenn~\cite{Brenn2007}, namely considering the total evaporation rate of the mixture droplet as the sum of the evaporation rates of the individual components. In our surfactant-laden glycerol-water droplet, glycerol and SDS are non-volatile under ambient conditions~\cite{Geballe2010}. Therefore, only the diffusive flux of water contributes to the total evaporation rate. The essential difference between the evaporation of pure droplets and multicomponent droplets is the vapor-liquid equilibrium: the non-volatile components in the system alter the saturated concentration of water vapor at the interface~\cite{Tan2016,Diddens2017a}. Raoult's law~\cite{Raoult1887} is used to calculate the saturated water vapor concentration of the binary system: $c_{w,\text{s}} = X_w c_{w,\text{s}}^{0}$, where $X_w$ is the mole fraction of water at the interface and $c_{w,\text{s}}^{0}$ is the saturated vapor concentration of pure water. However, Raoult's law assumes an idealized solution and as such ignores any interaction between the components. 
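Popov's geometry factor $f(\theta)$ in Eq.~(\ref{eq:popov}) can be evaluated numerically. A minimal sketch using simple trapezoidal integration; the known limit $f(\pi/2)=2$ (giving $\text{d}m/\text{d}t=-2\pi DR\Delta c$ for a hemispherical cap) serves as a sanity check:

```python
import math

def f_popov(theta, eps_max=20.0, n=100_000):
    """Trapezoidal evaluation of Popov's geometry factor f(theta).

    The integrand decays exponentially for theta < pi, so truncating the
    integral at eps_max = 20 is more than sufficient here."""
    def integrand(e):
        if e == 0.0:
            return (math.pi - theta) / math.pi  # finite limit as eps -> 0
        return ((1.0 + math.cosh(2.0 * theta * e))
                / math.sinh(2.0 * math.pi * e)
                * math.tanh((math.pi - theta) * e))
    h = eps_max / n
    total = 0.5 * (integrand(0.0) + integrand(eps_max))
    for i in range(1, n):
        total += integrand(i * h)
    return math.sin(theta) / (1.0 + math.cos(theta)) + 4.0 * total * h
```

Since $f$ grows monotonically with $\theta$, the evaporation rate per unit radius slowly decreases as the contact angle falls from 65$^\circ$ to 40$^\circ$ during the stick-slide evaporation.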
To overcome this limitation, the so-called activity coefficient $\psi$~\cite{chu_prosperetti_2016} was introduced to describe this interaction. In our case, it specifically addresses the interaction between water and the other components: $c_{\text{w,s}} = \psi_{\text{w}}X_\text{w} c_{\text{w,s}}^{0}$. By using the water activity coefficient $\psi_{\text{w}}$~\cite{marcolli2005} in the modified Raoult's law, we obtain a theoretical model to express the evaporation rate for the binary droplet: \begin{equation} \centering \frac{\text{d}m}{\text{d}t}=-\pi DR(\psi_{\text{w}}X_\text{w} c_{\text{w,s}}^{0}-c_{\text{w,}\infty})f(\theta). \label{eq:normal_popov} \end{equation} There is, however, one added complexity in our system: it is difficult to determine the exact $c_{\text{w,s}}$ without knowing the exact mole fraction of water, glycerol, and SDS molecules. Hence we cannot analytically predict the evaporation rate for each time instant. To compare different sets of experimental data, we rescale the measured droplet volume and time, by introducing the non-dimensional volume $\hat{V} = V/V_0$ and time $\hat{t} = t/\tau_c$, with $V(t)$ the measured droplet volume and $V_0$ its initial volume. $\tau_c$ is the characteristic timescale of the diffusive evaporation~\cite{gelderblom2011,lohse2015rmp}, which can also be read from Eq.~(\ref{eq:popov}), \begin{equation} \tau_c = \frac{\rho R_0^2}{D\Delta c}. \label{eq:scaled_time} \end{equation} Figures~\ref{fgr:parameters}A2,B2,C2 show that the rescaled experimental data for volume $V/V_0$, contact angle $\theta$ and footprint radius $R/R_0$ versus the dimensionless time $t/\tau_c$ follow a universal curve for all measured droplet sizes. The collapse of all the curves demonstrates that regardless of the initial size, the droplets with the same initial composition always follow the same evaporation behavior, with a universal evolution of all geometrical characteristics. 
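An order-of-magnitude estimate of the characteristic timescale $\tau_c$ of Eq.~(\ref{eq:scaled_time}) is straightforward. In the sketch below, the footprint radius is a typical value for the $\sim\mu$L droplets used here, and $D$ and the saturated vapor concentration are common literature values for water vapor in air near room temperature, not quantities measured in this work:

```python
rho = 1.0e3      # liquid density, kg/m^3 (from the text)
R0 = 0.5e-3      # initial footprint radius, m (typical value; assumed)
D = 2.5e-5       # diffusivity of water vapor in air, m^2/s (assumed)
c_sat = 1.9e-2   # saturated water-vapor concentration near 21 C, kg/m^3 (assumed)
RH = 0.50        # relative humidity, as in the experiments

delta_c = c_sat * (1.0 - RH)           # vapor concentration difference, kg/m^3
tau_c = rho * R0**2 / (D * delta_c)    # characteristic evaporation time, s
```

This gives $\tau_c \approx 10^3$ s, i.e. a drying time of order fifteen to twenty minutes, consistent with diffusion-limited evaporation of sub-microliter aqueous droplets.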
Based on this, we can conclude that not only the geometry but also the internal composition and its distribution evolve universally, independent of the droplet size. We also compare the initial evaporation rates for different initial volumes by introducing the dimensionless volume loss rate $\text{d}\hat{V}/\text{d}t = \text{d}(V/V_0)/\text{d}t$. According to Eq.~(\ref{eq:normal_popov}), the dimensionless initial evaporation rate is \begin{equation} \frac{\text{d}\hat{V}}{\text{d}t}\bigg|_{t=0} \propto \frac{D\Delta cR_0}{\rho V_0} \propto \frac{D\Delta c}{\rho V_0^{2/3}}. \label{eq:evap_rate} \end{equation} Based on the $V_0^{-2/3}$ proportionality of Eq.~(\ref{eq:evap_rate}), we rescale the experimental data of Fig.~\ref{fgr:evap_rate}A, see Fig.~\ref{fgr:evap_rate}B, and plot them on a double logarithmic scale. Indeed, the data follow the scaling law suggested by Eq.~(\ref{eq:evap_rate}), confirming our model assumptions. Besides controlling the evaporation rate, the model also yields the terminal state of the evaporation, which is reached when the saturated water vapor concentration equals the environmental concentration, $c_{w,s} = c_{w,\infty}$. Essentially, the evaporation stops when the active mole fraction of water equals the relative humidity $H$ of the surrounding air, $\psi_w X_w = H$. For the same reason as mentioned above, we only compare the experimental data with the analytical prediction for the glycerol-water binary system, ignoring the mole fraction of the surfactant. 
From the relative humidity $H$ measured in the experiments, we can calculate analytically the ``theoretical final volume" $V_t$ (see Supplementary Materials) as \begin{equation} V_t = \left(\frac{M_w}{M_g}\frac{H}{\psi_w-H}+\frac{\rho_w}{\rho_g}\right)\left(\frac{1-C_g}{C_g}+\frac{\rho_w}{\rho_g}\right)^{-1}V_0, \label{eq:final} \end{equation} where $M_g = 9.21 \times 10^{-2}$ kg/mol and $M_w = 1.8 \times 10^{-2}$ kg/mol are the molar masses of glycerol and water, respectively, $\rho_g = 1.226 \times 10^3$ kg/m$^3$ and $\rho_w = 0.997 \times 10^3$ kg/m$^3$ are their liquid densities at room temperature, and $C_g$ is the initial mass concentration of glycerol in each measurement. The final volume of the equilibrium state (dashed line in Fig.~\ref{fgr:parameters}A2) lies below the final volumes of all the droplets, which indicates that the shielding of water by the crystallized interface blocked any further evaporation before the system reached its equilibrium state. \subsection{Theory of 2-dimensional finite-system crystallization} As is well known, the evaporation rate has a singularity at the rim of the droplet, provided the contact angle is smaller than 90$^\circ$~\cite{deegan1997,Tan2016}, which is indeed the case in our system, where the contact angle decreases from 65$^\circ$ to 40$^\circ$ during the evaporation process. The singularity implies that the water depletes fastest at the rim, which leads to a locally higher glycerol concentration there. It is therefore also expected that crystal nucleation occurs first near the CL region, due to the highest degree of oversaturation of SDS. To model the crystallization, we employ a 2-dimensional model derived by extending the JMAK formalism~\cite{Avrami1939,Avrami1940,Avrami1941,Johnson1939,Kolmogorov1937} to a finite 2-dimensional system with non-uniform nucleation. Based on the spherical-cap shape of the droplet, the footprint area is circular and the nucleation starts near the contact line. 
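As a numerical check of the final-volume prediction of Eq.~(\ref{eq:final}): the sketch below takes $\psi_w \approx 1$ as an idealization (the actual water activity coefficient follows Ref.~\cite{marcolli2005}), together with the initial composition and ambient humidity reported in the experimental section.

```python
# Constants from the text; psi_w ~ 1 is an idealization (ideal solution).
M_w, M_g = 1.8e-2, 9.21e-2       # molar masses of water and glycerol, kg/mol
rho_w, rho_g = 0.997e3, 1.226e3  # liquid densities, kg/m^3
H = 0.50                         # relative humidity, as in the experiments
psi_w = 1.0                      # water activity coefficient (assumed ideal)
C_g = 0.196                      # initial glycerol mass fraction (preparation)

# Theoretical final volume relative to the initial volume, V_t / V_0.
V_ratio = ((M_w / M_g) * H / (psi_w - H) + rho_w / rho_g) \
          / ((1.0 - C_g) / C_g + rho_w / rho_g)
```

Under these assumptions, the droplet would have to shrink to roughly a fifth of its initial volume to reach vapor-liquid equilibrium, which is indeed below the measured final volumes, consistent with the crystallized interface arresting evaporation early.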
We assume that the crystallization process occurs within a circular region, and nucleation is permitted at $t = 0$ at various points on the perimeter of the area. Figure~\ref{fgr:JMAK}B shows the geometry of the two regions $\Delta_1$ and $\Delta_2$ within a circle with radius $R$ (drop radius): the $\Delta_2$ region is completely free of crystallization, while $\Delta_1$ is partially crystalline. The maximum growth radius of the crystals is given by $\Gamma t$, where $\Gamma$ is the constant growth rate. Weinberg~\cite{WEINBERG1991,WEINBERG1997} proposed an analytical model to describe the fraction $X(t)$ transformed at time $t$, namely \begin{equation} X(t) = [1 - (1-\Gamma t/R)^2] X_1(t), \label{eq:crys_X} \end{equation} where $X_1(t)$ represents the fraction which has crystallized in $\Delta_1$. It can be expressed as~\cite{WEINBERG1997} \begin{equation} X_{1}(t) = 1- \int_{1-y}^{1} \text{exp} \left[-2P_{1}R \text{cos}^{-1}\left(\frac{1+\Phi^{2}-y^{2}}{2\Phi}\right)\right] \Phi\text{d}\Phi \times \frac{1}{2}[1-(1-\Phi)^{2}]^{-1}, \label{eq:crys_X1} \end{equation} with $\Phi = (R-z)/R$ and $y = \Gamma t/R$. $z$ denotes the distance between an arbitrary point $S$ in the $\Delta_1$ region and the boundary. $P_1$ is the nucleation probability per unit length in region $\Delta_1$. We demonstrate that the transformation rate is more sensitive to the growth rate $\Gamma$ than to the seeding probability $P_1$, as shown in the Supplementary Materials. Here we set $P_1 = 1000~\mu$m$^{-1}$ by assuming a saturated nuclei density. We test this theory for the three cases in Fig.~\ref{fgr:JMAK}A with droplet footprint radii $R_1 = 146\ \mu$m, $R_2 = 110\ \mu$m and $R_3 = 86\ \mu$m. By fitting the theoretical curves to the experimental data, we obtain the growth rate as the fitting parameter. Quite consistently, the results are $\Gamma_1 = 2.48\ \mu$m/s, $\Gamma_2 = 2.42\ \mu$m/s, and $\Gamma_3 = 2.58\ \mu$m/s for the three analyzed cases. 
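In the saturated-nucleation limit assumed here (large $P_1$), every point within $\Gamma t$ of the contact line has crystallized, so $X_1 \to 1$ and Eq.~(\ref{eq:crys_X}) reduces to the area fraction of the annulus of width $\Gamma t$ at the boundary, $X = 1-(1-\Gamma t/R)^2$. A small Monte Carlo sketch (not the paper's fitting procedure) confirms this geometric factor:

```python
import math
import random

def transformed_fraction_mc(y, n=200_000, seed=0):
    """Monte Carlo estimate of the fraction of a unit disk lying within a
    distance y (= Gamma*t/R) of its boundary: with saturated nucleation on
    the contact line, every such point has crystallized by time t."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        r = math.sqrt(rng.random())  # radius of a point uniform in the unit disk
        if 1.0 - r <= y:
            hits += 1
    return hits / n

y = 0.5                          # Gamma * t / R
analytic = 1.0 - (1.0 - y) ** 2  # geometric prefactor of the transformed fraction
estimate = transformed_fraction_mc(y)
```

The full expression, with finite $P_1$ via $X_1(t)$, only slows this growth; the prefactor already captures the saturation of $X$ as the crystal front reaches the droplet center ($y \to 1$).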
From Fig.~\ref{fgr:crystal}C, we estimate the crystal growth rate in the early crystallization stage by measuring the rate of increase of the crystal size near the contact line within the yellow circle. We obtain the estimate $\Gamma \approx 25 \pm 5~\mu$m/10~s = 2.5 $\pm$ 0.5~$\mu$m/s, which is comparable to the values $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ obtained from our model. Even though we applied a 2D model to a 3D problem, the theoretical predictions show good agreement with the experimental data: the reason is that our droplet is relatively flat, with a contact angle of about 40$^\circ$ when crystallization occurs. \section{Conclusions and outlook} In summary, crystallization of sodium dodecyl sulfate induced by selective evaporation is observed in a surfactant-laden glycerol-water mixture droplet. We studied experimentally the dynamics of evaporation prior to the occurrence of crystal nucleation and, thereafter, the kinetics of crystallization. We applied a diffusion model extended by Raoult's law to describe the evaporation characteristics and could reveal a universal evaporation behavior, independent of the size of the droplets. Finally, we applied a 2-dimensional model building on the JMAK nucleation model to describe the kinetics of the crystallization. Thanks to the low contact angle, this model can successfully describe our experimental data on nucleation. Surfactants attract significant attention owing to their ubiquitous role in the fluid dynamics of both nature and technology~\cite{manikantan2020}. Our findings clearly show an unexpected consequence of using surfactants in such evaporating systems. This particularly holds for inkjet printing, where surfactants are extensively used.
As nearly all inks contain various components with different volatilities, the variations of the composition ratio caused by the selective evaporation of the more volatile components may lead to the segregation of surfactants in the form of liquid phase separation~\cite{Li2018} or crystallization. Our study may raise awareness of the need to use surfactants with caution in such multicomponent systems, which normally involve rich physicochemical processes~\cite{lohse2020}. Some issues remain open and unexplored. As the temperature can change the CMC of SDS in glycerol-water mixtures~\cite{ruiz2008}, does the crystallization behavior also depend on the temperature? How can the buckling behavior after the onset of crystallization be described? Another question concerns the morphology of the SDS crystals: e.g., is the crystal structure different from the one induced upon cooling? Such questions are of great interest in view of crystal chemistry, and it is worthwhile to further investigate such crystallization behavior from a microscopic perspective in the future. \begin{acknowledgement} We thank Shuai Li for valuable suggestions on the manuscript. This work is part of an Industrial Partnership Programme (IPP) of the Netherlands Organization for Scientific Research (NWO). This research programme is co-financed by Canon Production Printing Netherlands B.V., University of Twente and Eindhoven University of Technology. DL gratefully acknowledges support by his ERC Advanced Grant DDD (project number 740479). \end{acknowledgement}
\section{Introduction} There is a rich and successful history of describing manifolds with torus actions through discrete objects. Most famously, this is illustrated by the bijective correspondence between toric varieties and their fans, or, more specifically, the correspondence between symplectic toric manifolds and Delzant polytopes. In \cite{MR2431667} Masuda proved that the equivariant isomorphism type of a toric manifold, when considered as a complex variety, is determined by its integral equivariant cohomology ring (or equivalently by its GKM graph \cite{MR4038723}). The more ambitious cohomological rigidity problem, in its original form posed by Masuda--Suh in \cite[Problem 1]{MR2428362}, asks if a toric manifold is determined up to (nonequivariant) homeomorphism by its integral cohomology ring. While the problem in this form is unsolved as of today, several variants have been considered in the literature. For instance, when restricting attention to certain (generalized) Bott manifolds, it was seen to be true, see \cite[Sections 2.1 and 2.2]{MR2962979} and references therein. Motivated by the success in the toric case it is natural to look for similar results in a generalized setting. Prominent candidates are the classes of quasitoric manifolds and torus manifolds, which were also considered in the light of the cohomological rigidity problem in \cite{MR2928539, MR2846908, MR3030690} (note that the problem fails for torus manifolds, see \cite[Example 3.4]{MR2428362}). The main object of study in the present article is the class of so-called (integer) GKM manifolds, named after Goresky, Kottwitz and MacPherson \cite{MR1489894}, which in particular generalizes toric manifolds. These are compact manifolds with vanishing odd-degree (integral) cohomology, equipped with an action of a compact torus, whose fixed point set is finite, and whose one-skeleton is a union of invariant $2$-spheres.
To such actions one associates a labelled graph, its GKM graph, which encodes the one-skeleton in a combinatorial fashion, see Section \ref{sec:GKM}. Given a certain connectedness condition on the stabilizers, the GKM graph somewhat surprisingly encodes the entire integral equivariant and nonequivariant cohomology as well as all characteristic classes of the manifold (see \cite[Theorem 3.1 and Proposition 3.5]{MR4088417}). With regard to the previously mentioned rigidity problems, the naive follow-up question would be whether the GKM graph determines the (nonequivariant) homotopy type, the homeomorphism type, or even the diffeomorphism type. Note that this kind of question belongs in the simply-connected realm, since one cannot hope to encode the fundamental group in the GKM graph (see \cite[Section 2]{MR3030690}); furthermore, the motivating example of a toric manifold is also simply-connected. The purpose of this article is to give a counterexample to these questions and to discuss edge cases where results of the above type actually do hold. Our main result reads as follows: \begin{thm}\label{thm:einleitungmainthm} On the total spaces of the two $S^2$-bundles over $S^6$ there exist GKM $T^3$-actions with identical GKM graph. \end{thm} While the two spaces in question have identical integral cohomology rings and characteristic classes (as needs to be the case for any counterexample), they are not homotopy equivalent by Lemma \ref{lem:pi5} below, due to differing fifth homotopy groups. In particular this also shows that equivariant cohomological rigidity \cite[Theorem 1.1]{MR2431667} does not generalize to integer GKM manifolds.
Regarding geometric structures, we find an almost complex structure which is invariant under a two-dimensional subtorus (see Theorem \ref{thm:almostcomplex}): \begin{thm} The total space of the nontrivial $S^2$-bundle over $S^6$ admits an almost complex structure invariant under a circle action (even an action of the two-dimensional torus) with exactly four fixed points. \end{thm} This provides an example of a simply-connected $8$-manifold with an invariant almost complex circle action with four isolated fixed points which is not diffeomorphic to $S^2 \times S^6$. In \cite{MR3113861} Kustarev constructs many non-simply-connected $6$-manifolds with almost complex circle actions fixing exactly two points. Crossing these examples with $S^2$, one obtains non-simply-connected $8$-manifolds, each endowed with an almost complex circle action with four isolated fixed points. We remark that the significance of our example is due to the vanishing of the odd integral cohomology, which forces each even Betti number to be equal to one (recall that the Euler characteristic must be equal to the number of isolated fixed points). As a further remark we note that by \cite{2001.10699v1} our example is unitarily cobordant to $S^2 \times S^6$ furnished with its standard almost complex structure. While the above examples are of course not symplectic, slight modifications give rise to the following: \begin{thm} On the total spaces of two $\mathbb{C}P^1$-bundles over $\mathbb{C}P^3$ which are not homotopy equivalent, there exist Hamiltonian GKM $T^3$-actions with identical x-ray. \end{thm} The notion of x-ray encodes in particular the GKM graph and is recalled in Section \ref{sec:symplecticstructures}. As a consequence, we show in Proposition \ref{prop:nosymplecticrigidity} that symplectic cohomological rigidity, as defined in \cite{2002.12434v1}, does not hold for Hamiltonian integer GKM manifolds.
We also discuss positive answers to the rigidity question in special cases with respect to the number of fixed points, dimension, and complexity of the action, where the complexity of a $T^r$-action on $M^{2n}$ is defined as $n-r$. The situation for simply-connected integer GKM manifolds with connected stabilizers, as known to the authors, is depicted in the table below. In particular this shows that the counterexample from Theorem \ref{thm:einleitungmainthm} is simultaneously minimal with respect to all three of the above parameters. Details are discussed in Section \ref{sec:minimality}. Of the results in the table, only rows 4 and 6 are original to this paper. The first row follows from \cite[Theorem p.\ 537]{MR268911} (alternatively, the arguments in \cite[Section 6]{MR3030690} for $6$-dimensional torus manifolds are also applicable). The second row is a consequence of \cite[Theorem 3.1]{MR4088417}. The third row makes use of the generalized Poincar\'e conjecture \cite{MR137124} and is, together with the fourth row, discussed at the end of Section \ref{sec:minimality}. Note that we do not know if any exotic sphere admits an action of GKM type. In the remaining fifth row the first two columns are deduced from \cite[Theorem 3.4]{MR3355120} and the third follows from \cite[Theorem 4.1]{MR3030690} (for the latter we remark that equivariant homeomorphisms preserve the GKM graph up to isomorphism). 
\begin{center} \begin{tabular}{c|cC{3cm}cC{4cm}} type: & homotopy & homeomorphism & diffeomorphism & equivariant homotopy/homeo/diffeo\\ \hline $\dim\leq 4$ &yes & yes &yes & yes\\ $\dim\leq 6$ &yes & yes& yes& ?\\ $2$ fixed points &yes & yes & ?& ?\\ $3$ fixed points &yes & yes & yes& ?\\ complexity $0$ &yes& yes& no& ?\\ else& no & no & no & no \end{tabular} \end{center} \paragraph{Acknowledgements.} We are grateful to Yael Karshon and Susan Tolman for valuable remarks on a previous version of the paper, including a simplified construction of the action on the bundle $E$ in Section \ref{sec:construction}. We wish to thank Michael Wiemeler for several helpful comments. This work is part of a project funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 452427095. \section{GKM manifolds}\label{sec:GKM} In this paper we consider actions of compact tori $T$ on closed manifolds $M$. Given such an action, we denote its fixed point set by $M^T$, and its one-skeleton by $M_1=\{p\in M\mid \dim Tp\leq 1\}$. \begin{defn}[\cite{MR1489894}] We say that a $T$-action on a closed manifold $M$ is \emph{(integer) GKM} if \begin{enumerate} \item $H^{{\mathrm{odd}}}(M,\mathbb{Z})=0$ \item $M^T$ is finite \item $M_1$ is a finite union of $T$-invariant $2$-spheres. \end{enumerate} \end{defn} For any GKM $T$-action on $M$, the quotient $M_1/T$ is homeomorphic to a graph, with one vertex for each fixed point, and one edge for each invariant $2$-sphere. The tangent space of any such sphere in each of its two fixed points is an invariant subspace of the respective isotropy representation. We attach as a label to its edge in the graph the corresponding weight, considered as an element in $\mathbb{Z}_{\mathfrak{t}}^*/\pm 1$, where $\mathfrak{t}$ is the Lie algebra of $T$ and $\mathbb{Z}_{\mathfrak{t}}^*\subset \mathfrak{t}^*$ its weight lattice. This labelled graph will be called the \emph{GKM graph} of $M$. 
If $M$ is equipped with a $T$-invariant almost complex structure, then the above-mentioned weights are naturally elements of $\mathbb{Z}_\mathfrak{t}^*$. In this setting, we attach instead to any oriented edge the corresponding weight at the initial vertex, and sometimes speak about the \emph{signed GKM graph} of the action. \begin{ex} Any toric symplectic manifold is of GKM type. In this case, the GKM graph is given by the one-skeleton of the momentum polytope; the labels of the edges are given by the primitive vectors pointing in direction of its slopes. \end{ex} The (signed) GKM graph of an action is of relevance because under a certain connectedness assumption on the isotropy groups it encodes various topological properties of $M$ and the action, such as its (equivariant) cohomology algebra \cite[Corollary 2.2]{MR2790824}, and its (equivariant) characteristic classes \cite[Proposition 3.5]{MR4088417}. \section{The minimal example}\label{sec:construction} We construct two simply-connected GKM $T^3$-manifolds in dimension $8$ which are not (non-equivariantly) homotopy equivalent but have the same GKM graph. The underlying manifolds will be $S^2\times S^6$ as well as the total space of the non-trivial $S^2$-bundle over $S^6$. \subsection{The construction} We begin by recalling the construction and show that the total spaces of the two $S^2$-bundles over $S^6$ are indeed not homotopy equivalent. Principal $\mathrm{SU}(2)$-bundles over $S^6$ are given as pullbacks of the universal bundle $\mathrm{SU}(2)\rightarrow S^\infty\rightarrow \mathbb{H}P^\infty$ along maps $S^6\rightarrow \mathbb{H}P^\infty$. The space $\mathbb{H}P^\infty$ has a CW-structure with cells only in dimensions which are multiples of $4$. In particular its $7$-skeleton is $\mathbb{H}P^1=S^4$ and thus $S^4\hookrightarrow \mathbb{H}P^\infty$ induces an isomorphism on homotopy groups in dimensions up to $6$. 
We now use the nontrivial generator $f\colon S^6\rightarrow S^4$ of $\pi_6(S^4)\cong \mathbb{Z}_2$ to pull back the universal bundle. The resulting commutative diagram \[\xymatrix{ \mathrm{SU}(2)\ar[r]\ar[d] &P\ar[r]\ar[d] & S^6\ar[d]\\ \mathrm{SU}(2)\ar[r] & S^\infty \ar[r]& \mathbb{H}P^\infty }\] induces a commutative diagram \[\xymatrix{\cdots\ar[r]& \pi_6(S^6)\ar[r]\ar[d]^{f_*}& \pi_5(\mathrm{SU}(2)) \ar[r]\ar[d]^{\mathrm{id}}& \pi_5(P)\ar[r]\ar[d]& 0\ar[r]\ar[d]& \cdots\\ \cdots\ar[r]& \pi_6(\mathbb{H}P^\infty)\ar[r] & \pi_5(\mathrm{SU}(2))\ar[r]& 0 \ar[r]& \pi_5(\mathbb{H}P^\infty)\ar[r]&\cdots}\] in which the rows are the long exact homotopy sequences of the bundles in question. By the choice of $f$, the map $f_*\colon \pi_6(S^6)\rightarrow \pi_6(\mathbb{H}P^\infty)=\pi_6(S^4)$ is surjective. Furthermore, the connecting homomorphism $\pi_6(\mathbb{H}P^\infty)\rightarrow \pi_5(\mathrm{SU}(2))$ is an isomorphism since $S^\infty$ is contractible. As a consequence, the connecting homomorphism $\pi_6(S^6)\rightarrow \pi_5(\mathrm{SU}(2))$ is surjective. In combination with $\pi_5(S^6)=0$ this implies $\pi_5(P)=0$. Finally, we obtain a $\mathbb{C}P^1$-bundle $X\rightarrow S^6$ from $P$ by factoring out the fiberwise action of the circle $\mathrm{S}(\mathrm{U}(1)\times\mathrm{U}(1))$. Since this action is free, we obtain a fiber bundle $S^1\rightarrow P\rightarrow X$. Again, via the long exact homotopy sequence, this implies the following well-known \begin{lem}\label{lem:pi5} We have $\pi_5(X)=0$ whereas $\pi_5(S^2\times S^6)=\pi_5(S^2)=\mathbb{Z}_2$. \end{lem} We construct a GKM action on $X$. To do this, we show that the map $f\colon S^6\rightarrow S^4$ in the construction above can be chosen to be $T^3$-equivariant with respect to certain actions. Then we lift the action on $S^4$ to an action on the restricted universal bundle, which descends to the projectivization. In this way $X$ will be an equivariant pullback and will naturally carry a $T^3$-action.
Consider the $T^3$-action on $S^5\subset \mathbb{C}^3$ given by $(s,t,u)\cdot(v,w,z)=(sv,tw,uz)$. The subcircles $K_1=\{(s,s,1)\in T^3\}$, $K_2=\{(s,1,s)\in T^3\}$, and $K_3=\{(s,1,1)\in T^3\}$ generate $T^3$. \begin{lem}\label{lem:S5nachS4} The orbit space $S^5/K_1$ is homeomorphic to $S^4\subset \mathbb{C}\oplus\mathbb{R}\oplus \mathbb{C}$ in such a way that the induced action of $K_2$ corresponds to $(s,1,s)\cdot(v,h,w)=(sv,h,sw)$ and $K_3$ acts as $(s,1,1)\cdot (v,h,w)=(sv,h,w)$ for $(v,h,w)\in S^4$. \end{lem} \begin{proof} Recall the suspension homeomorphism $\Sigma S^n\rightarrow S^{n+1}$ given by \[[(a_0,\ldots,a_n),t]\mapsto (\varphi(t)a_0,\ldots,\varphi(t)a_n,t),\] where we set $\varphi(t)=\sqrt{1- t^2}$. Applying this twice, we see that the $K_1$-action corresponds to the doubly suspended diagonal action on $\Sigma^2 S^3$. In particular its orbit space is $\Sigma^2S^2=S^4$ as claimed. To prove the statement on the actions, we need to recall the explicit form of the homeomorphism $S^3/S^1=\mathbb{C}P^1\cong S^2$. Let $S^2_+=\{(v,h)\in S^2\subset \mathbb{C}\oplus \mathbb{R}~|~ h\geq 0\}$ be the upper hemisphere and denote by $A\subset S^2_+$ the equator. Then $S^2_+/A$ is homeomorphic to $\mathbb{C}P^1$ via $(v,h)\mapsto [v,h]$. On the other hand, $S^2_+/A$ is homeomorphic to $S^2$ by the stretching map $(v,h)\mapsto (\alpha(h)v,2h-1)$, where $\alpha(h)\in [0,1]$ is a suitable scaling factor. Finally, note that for ${z}\in \mathbb{C}$ with $|z|\leq 1$ and $t=\mathrm{Im}(z)$, $s={\varphi(t)}^{-1}{\mathrm{Re}(z)}$, we have $\varphi(s)\varphi(t)=\varphi(\vert z\vert)$. Thus, under the double suspension map, when explicitly defined as above, the preimage of a point $(v,w,z)\in S^5$ is given by $\left[({\varphi(\vert z\vert)}^{-1}v,{\varphi(\vert z\vert)}^{-1}w),s,t\right]$ with $s,t$ as above.
Piecing everything together we obtain \begin{align*} S^5\cong \Sigma^2S^3\rightarrow\Sigma^2\mathbb{C}P^1\cong \Sigma^2 S^2_+/A \cong \Sigma^2 S^2\cong S^4\subset \mathbb{C}\oplus\mathbb{R}\oplus \mathbb{C} \end{align*} given by \begin{alignat*}{2} (v,w,z)&\mapsto [(\varphi(|z|)^{-1}v,\varphi(|z|)^{-1}w),s,t] && \in \Sigma^2S^3 \\ & \mapsto [(\varphi(|z|)^{-1} \beta(w) v,\varphi(|z|)^{-1}|w|),s,t] && \in \Sigma^2 S^2_+/A \\ &\mapsto [\left(\alpha \left(\varphi(|z|)^{-1}|w|\right)\varphi(|z|)^{-1}\beta(w) v,2\varphi(|z|)^{-1}|w|-1\right),s,t]\quad && \in \Sigma^2 S^2 \\ &\mapsto \left(\alpha\left(\varphi(\vert z\vert)^{-1}\vert w\vert\right)\beta(w) v,2\vert w\vert-\varphi(\vert z\vert),z\right)&& \in S^4 , \end{alignat*} where $\beta(w)$ is $1$ if $w=0$ and $\overline{w}/\vert w\vert$ otherwise. Since $\vert z\vert$ is invariant under multiplication with $S^1$, we see that the identification $S^5/K_1\cong S^4$ above is equivariant with respect to $S^1$-multiplication in the first and third coordinate. \end{proof} We now consider the suspended $T^3$-action on $S^6$. \begin{prop}\label{prop:S6nachS4} The orbit space $S^6/(K_1K_2)$ is homeomorphic to $S^4$ in such a way that the orbit map $f:S^6\to S^6/(K_1K_2)\cong S^4$ is $T^3$-equivariant with respect to the $T^3$-action on $S^4$ in which $K_1$ and $K_2$ act trivially, while $K_3$ acts via $(s,1,1)\cdot (v,w,h)=(sv,w,h)$. Furthermore, $f$ defines a generator of $\pi_6(S^4)$. \end{prop} \begin{proof} As we have seen in the previous lemma, the projection $S^5\rightarrow S^5/K_1\cong S^4$ corresponds to the double suspension $\Sigma^2 g$ of the Hopf map $g\colon S^3\rightarrow S^2$. It also follows from the lemma that $S^4\rightarrow S^4/K_2\cong \Sigma \mathbb{C}P^1\cong \Sigma S^2\cong S^3$ can be identified with $\Sigma g$ and can be chosen $K_3$-equivariant with respect to the action $(s,1,1)\cdot (v,w)=(sv,w)$ on $S^3$.
Thus $S^5\rightarrow S^5/(K_1K_2)\cong S^3$ is the composition $\Sigma g\circ \Sigma^2 g$, which is known to give a generator of $\pi_5(S^3)$, cf. \cite[p. 475]{MR1867354}. The suspension is a generator of $\pi_6(S^4)$ by the Freudenthal suspension theorem and satisfies the equivariance property as claimed in the proposition. \end{proof} Consider the quaternionic Hopf bundle \[S^3\longrightarrow S^7\longrightarrow S^4\] which is given by dividing out the quaternionic diagonal action by right multiplication on $S^7\subset \mathbb{H}^2$ and identifying the orbit space $\mathbb{H}P^1$ with $S^4$. The orbit map factors as \[S^7\rightarrow \mathbb{C}P^3\rightarrow \mathbb{H}P^1\cong S^4\] where the first map divides by complex diagonal multiplication from the right (viewing $\mathbb{C}$ as a subset of $\mathbb{H}$). The second map is a fiber bundle with fiber $\mathbb{C}P^1$, and as the quaternionic Hopf bundle is the restriction of the universal $\mathrm{SU}(2)$-bundle $S^\infty\to \mathbb{H}P^\infty$ to $\mathbb{H}P^1$, by construction $X$ is the pullback of $\mathbb{C}P^3\rightarrow S^4$ along $f$. We will now construct compatible actions on $\mathbb{C}P^3\rightarrow S^4$ which pull back to $X$. On $S^7 \subset\mathbb{H}^2$ consider the $S^1$-action from the left defined by \[s\cdot (p,q)=(sp,sq)\] where $p,q\in \mathbb{H}$. This commutes with the actions of the complex and quaternionic diagonals from the right. In particular $\mathbb{C}P^3\rightarrow S^4$ is equivariant with respect to the induced circle action. \begin{prop}\label{prop:s1actionons4} On $S^4\subset \mathbb{C}^2\oplus \mathbb{R}$ this action can be identified with $s\cdot(v,w,h)=(s^2v,w,h)$. On the fibers of $\mathbb{C}P^3\rightarrow S^4$ over the fixed points of $S^4$, the induced action is equivariantly diffeomorphic to the rotation $s\cdot (v,h)=(s^2v,h)$ on $S^2\subset \mathbb{C}\oplus \mathbb{R}$. \end{prop} \begin{proof} We recall the identification $\mathbb{H}P^1\cong S^4$.
Denote by $S^4_+$ the hemisphere $\{(q,h)\in S^4\subset \mathbb{H}\oplus \mathbb{R}\mid h\geq 0\}$ and by $A$ the equator at height $0$. Now the map $(q,h)\mapsto [q: h]$ induces a homeomorphism $S_+^4/A\cong \mathbb{H}P^1$. For $h\in\mathbb{R}$ we have $s\cdot [q:h]=[sq:sh]=[sqs^{-1}:h]$. The conjugation action $s\cdot q=sqs^{-1}$ leaves the complex plane fixed and rotates the $j$-$k$-plane at double speed. Thus with the correct identification $S_+^4/A\cong S^4$ this is indeed the action described in the proposition. The fixed points are the points $[v:w]\in \mathbb{H}P^1$ with $v,w\in \mathbb{C}$. For $v,w\in \mathbb{C}$, not both zero, the map $q\mapsto [vq:wq]_\mathbb{C}$ induces a diffeomorphism from $S^3/S^1$ onto the fiber over $[v:w]$, where the quotient $S^3/S^1$ is by quaternionic multiplication with complex numbers from the right, and the notation $[\cdot:\cdot]_\mathbb{C}$ describes points in the quotient of $S^7$ by the complex diagonal action from the right. Using that $v,w\in \mathbb{C}$, the fiber inclusion $S^3/S^1\rightarrow \mathbb{C}P^3$ satisfies \[ sqS^1\mapsto [vsq: wsq]_\mathbb{C}=s\cdot[vq:wq]_\mathbb{C},\] and is thus equivariant with respect to quaternionic multiplication with $S^1$ from the left on $S^3/S^1$. This can be identified with the action from the proposition. \end{proof} The $S^1$-action on the bundle $\mathbb{C}P^3\rightarrow S^4$ has kernel generated by $-1$. We divide by the corresponding subgroup to make the action effective. Then we pull back the $S^1$-actions along the homomorphism $\psi\colon T^3\rightarrow S^1$, $(s,t,u)\rightarrow st^{-1}u^{-1}$. By Proposition \ref{prop:S6nachS4} the resulting $T^3$-action on $S^4$ is the one with respect to which the map $f\colon S^6\rightarrow S^4$ is equivariant. The space $X$ is thus the pullback of the $T^3$-equivariant bundle $\mathbb{C}P^3\rightarrow S^4$ along the $T^3$-equivariant map $f\colon S^6\rightarrow S^4$ and thus inherits a $T^3$-action. 
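The key computation in the proof above, that conjugation $q \mapsto sqs^{-1}$ by a unit complex number fixes the complex plane and rotates the $j$-$k$-plane at double speed, can be verified numerically. The following Python sketch (an illustration, not part of the argument) implements the Hamilton product directly.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

theta = 0.3                                             # arbitrary angle
s     = (math.cos(theta),  math.sin(theta), 0.0, 0.0)   # s = e^{i theta}
s_inv = (math.cos(theta), -math.sin(theta), 0.0, 0.0)
i, j  = (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)

# s i s^{-1} = i: the complex plane is fixed pointwise ...
print(qmul(qmul(s, i), s_inv))
# ... while s j s^{-1} = cos(2 theta) j + sin(2 theta) k:
# the j-k-plane is rotated at double speed.
print(qmul(qmul(s, j), s_inv))
```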
For later purposes we will need the following two remarks. \begin{rem}\label{rem:structure group} Since the total space of $\mathbb{C} P^{3} \to S^4$ is obtained by dividing out a subcircle action of $\mathrm{SU}(2)$, the $\mathbb{C} P^{1}$-bundle $\mathbb{C} P^{3} \to S^4$ has structure group $\mathrm{SU}(2)$, inherited from the principal $\mathrm{SU}(2)$-bundle $S^7 \to S^4$. Therefore the structure group of the pullback $\mathbb{C} P^{1}$-bundle $X \to S^6$ is $\mathrm{SU}(2)$ as well. \end{rem} \begin{rem} The map $f$ may be approximated $T^3$-equivariantly by a smooth map. This does not affect the pullback, so we obtain a smooth structure on $X$ with respect to which the $T^3$-action is smooth. \end{rem} \begin{thm}\label{thm:non-rigidity} The $1$-skeleton of the $T^3$-action on $X$ is $T^3$-equivariantly homeomorphic to the $1$-skeleton of the product $T^3$-action on $S^2\times S^6$, which acts on $S^2$ via $(s,t,u)\cdot(v,h)=(st^{-1}u^{-1}v,h)$ and on $S^6$ via $(s,t,u)\cdot(v,w,z,h)=(sv,tw,uz,h)$ for $v,w,z\in\mathbb{C}$, $h\in\mathbb{R}$. \end{thm} This has the following immediate \begin{cor} On $X$ and $S^2\times S^6$ there exist GKM $T^3$-actions with isomorphic GKM graphs.
\end{cor} The corresponding GKM graph is given by \begin{center} \begin{tikzpicture} \node (a) at (0,0)[circle,fill,inner sep=2pt] {}; \node (b) at (6,0)[circle,fill,inner sep=2pt]{}; \node at (3,1) {$(1,0,0)$}; \node at (3,0.3) {$(0,1,0)$}; \node at (3,-1) {$(0,0,1)$}; \draw (a) to[very thick, in=160, out=20] (b); \draw (a) to[very thick] (b); \draw (a) to[very thick, in=200, out=-20] (b); \node (c) at (0,4)[circle,fill,inner sep=2pt] {}; \node (d) at (6,4)[circle,fill,inner sep=2pt]{}; \node at (3,5) {$(1,0,0)$}; \node at (3,4.3) {$(0,1,0)$}; \node at (3,3) {$(0,0,1)$}; \draw (c) to[very thick, in=160, out=20] (d); \draw (c) to[very thick] (d); \draw (c) to[very thick, in=200, out=-20] (d); \draw (c) to[very thick] (a); \draw (d) to[very thick] (b); \node at (7.2,2) {$(1,-1,-1)$}; \node at (-1.2,2) {$(1,-1,-1)$}; \end{tikzpicture} \end{center} \begin{proof}[Proof of Theorem \ref{thm:non-rigidity}.] Denote by $A\subset S^6$ the $1$-skeleton of the $T^3$-action. The $1$-skeleton of $X$ is certainly contained in $p^{-1}(A)$, where $p\colon X\rightarrow S^6$ is the projection. By naturality of the pullback $p^{-1}(A)$ is the pullback of $\mathbb{C}P^3\rightarrow S^4$ along $f|_A$. Recall that $f$ was defined as the suspension of the orbit map $S^5\rightarrow S^5/(K_1K_2)$. The space $A$ is the suspension of the $1$-skeleton $B\subset S^5$ which consists of the three disjoint circles containing those elements $(v,w,z)\in S^5$ where two coordinates are zero. As derived in Lemma \ref{lem:S5nachS4}, the map $S^5\rightarrow S^5/K_1\cong S^4\subset \mathbb{C}\oplus\mathbb{R}\oplus\mathbb{C}$ can be explicitly described as \[(v,w,z)\mapsto \left(\alpha\left(\varphi(\vert z\vert)^{-1}\vert w\vert\right)\beta(w) v,2\vert w\vert-\varphi(\vert z\vert),z\right).\] From this it follows that under $S^5\rightarrow S^5/(K_1K_2)\cong S^3$, the space $B$ maps to three distinct points which are fixed by $K_3$. Thus the suspended map sends $A$ to three joined lines in $(S^4)^{K_3}$. 
From the description of the $K_3$-action (see also Proposition \ref{prop:S6nachS4}) we deduce that $ (S^4)^{K_3}=(S^4)^{T^3}\cong S^2$. As $f|_A$ is not surjective onto this $S^2$, it is homotopic within $S^2$ to the constant map at some point $x\in S^2$. This homotopy is automatically equivariant because $S^2$ is fixed under $T^3$. By homotopy invariance of the pullback, $p^{-1}(A)$ is $T^3$-equivariantly isomorphic to the diagonal action on $A\times F_x$, where $F_x$ denotes the fiber of $\mathbb{C}P^3\rightarrow S^4$ over $x$. In combination with Proposition \ref{prop:s1actionons4} it follows that the 1-skeleton of $X$ has the desired form. \end{proof} \subsection{Almost complex structures}\label{subsec:acs} The manifold $X$ does not admit a $T^3$-invariant almost complex structure, as no structure of a signed GKM graph that is compatible with the GKM graph of $X$ can admit a connection in the sense of \cite{MR1823050} (to see this, note that transport along a horizontal edge via a connection cannot map the other two horizontal edges onto themselves or one another, so they would both need to map to the single adjacent vertical edge). However, we now argue that there is an almost complex structure on $X$ that is invariant under a $2$-dimensional subtorus. We want to stress that this restricted action will not be GKM. Recall that $X$ is a $T^3$-equivariant $\mathbb{C} P^{1}$-bundle over $S^6$, where $T^3$ acts on $S^6$ in the standard way $(s,t,u)\,\cdot \,(v,w,z,h)= (sv,tw,uz,h)$ in $S^6 \subset \mathbb{C}^3 \oplus \mathbb{R}$. It is well known that $S^6$ admits an almost complex structure that is invariant under an action of the compact Lie group ${\mathrm{G}}_2$. The action of a maximal torus $T^2\subset {\mathrm{G}}_2$ on $S^6$ can be identified as a subaction of our $T^3$-action, namely as that of the subtorus $T^2:=\{(st,s^{-1},t)\}\subset T^3$, see e.g.\ \cite[Example 1.9.1]{MR1823050}.
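That the four fixed points survive the restriction to the subtorus $T^2 = \{(st,s^{-1},t)\}$, and even to a generic circle inside it, can be read off from the edge weights of the GKM graph displayed above: an extra fixed component would require some weight to vanish on the Lie algebra of the subtorus. A small Python sketch of this check (the circle direction at the end is an arbitrary generic choice for illustration):

```python
# Edge weights of the GKM graph of the T^3-actions constructed above
weights = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, -1, -1)]

# Lie algebra of the subtorus T^2 = {(st, s^-1, t)}, spanned by the
# s-direction (1, -1, 0) and the t-direction (1, 0, 1)
t2_basis = [(1, -1, 0), (1, 0, 1)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# No weight vanishes on all of t^2, so the T^2-fixed set equals the
# four T^3-fixed points.
for w in weights:
    assert any(dot(w, h) != 0 for h in t2_basis)

# The same holds for a generic circle in T^2, e.g. in the direction
# h1 + 2*h2 (an arbitrary choice):
h = tuple(a + 2 * b for a, b in zip(*t2_basis))
assert all(dot(w, h) != 0 for w in weights)
print("all weights restrict nontrivially on", h)
```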
Now, the tangent bundle $TX$ decomposes as $TX = \pi^* TS^6 \oplus V_F$, where $V_F$ is the subbundle consisting of the tangent spaces of the fibers of $X\rightarrow S^6$. On $\pi^*TS^6$ we obtain a $T^2$-invariant almost complex structure by pulling back the $T^2$-invariant almost complex structure from $S^6$. For $V_F$, consider a fiber of $X$ over $S^6$, which can be identified with $\mathbb{C} P^{1}$. We choose an $\mathrm{SU}(2)$-invariant almost complex structure on $\mathbb{C} P^{1}$, which defines an almost complex structure on the fiber. Since the structure group is $\mathrm{SU}(2)$, this definition does not depend on the identification of the fiber with $\mathbb{C} P^{1}$, cf. Remark \ref{rem:structure group}. This defines an almost complex structure on $V_F$. Moreover, the $T^3$-action leaves this almost complex structure invariant. To see this, recall that the $T^3$-action is induced by a circle action on $S^7$ which commutes with the action of $\mathrm{SU}(2)$ of the principal bundle (see the paragraph before Proposition \ref{prop:s1actionons4}). In particular this means that the circle action preserves the fibers of $\mathbb{C} P^{3} \to S^4$ and the transformations are given by elements of the structure group $\mathrm{SU}(2)$. Thus the circle action respects the almost complex structure on the fibers, and consequently so does the pullback $T^3$-action on $V_F$. Altogether one obtains a $T^2$-invariant almost complex structure on $X$ by restricting the $T^3$-action on $V_F$ to $T^2$. We thus arrive at the following theorem, which gives an example of an almost complex circle action on an $8$-manifold with four isolated fixed points such that the odd cohomology vanishes and which is not diffeomorphic to $S^2 \times S^6$.
\begin{thm}\label{thm:almostcomplex} The manifold $X$, which is not diffeomorphic to $S^2\times S^6$, admits an almost complex structure invariant under a circle action (even an action of the two-dimensional torus) with exactly four fixed points. \end{thm} \section{Minimality of the example}\label{sec:minimality} In this section we observe that our example is optimal with regard to three properties: complexity, dimension, and number of fixed points.\\ \noindent {\bf Complexity.} For an effective $T^r$-action with non-empty fixed point set on a compact $2n$-dimensional manifold one has $r\leq n$, and the number $n-r$ is called the complexity of the action. Thus the previously constructed $T^3$-action on $X$ is of complexity $1$. Complexity $0$ manifolds of the above type are also known as torus manifolds. Theorem 3.4 in \cite{MR3355120} states that the nonequivariant homeomorphism type of a simply-connected torus manifold $M$ with $H^{{\mathrm{odd}}}(M,\mathbb{Z})=0$ is encoded in the face poset of its orbit space together with a function that associates to each face the corresponding isotropy group. If we denote by $T$ the acting torus, then the face poset is -- as a partially ordered set -- equivalent to the closed orbit type stratification $\chi$, which is the collection of all connected components of $M^H$, where $H$ runs through all subgroups of $T$, ordered by inclusion. On this poset one considers the function that associates the respective principal isotropy group to a connected component of $M^H$. We stress that the homeomorphism type of $M$ is determined just by the combinatorial data of the poset together with the function which associates the isotropy groups, and does not require specific knowledge of the isotropy submanifolds. For GKM manifolds this information is encoded in the GKM graph: Let $M$ be a GKM $T$-manifold (not necessarily of complexity $0$) with GKM graph $\Gamma$.
For a subgroup $H\subset T$, set $\Gamma^H\subset \Gamma$ to be the minimal subgraph that contains all edges whose weights -- understood as elements in $\mathrm{Hom}(T, S^1)$ -- vanish on $H$. Define $\chi_\Gamma$ to be the set consisting of all connected components of subgraphs of the form $\Gamma^H$, partially ordered by inclusion. \begin{lem}\label{lem:stratification} There is a bijection $\chi\rightarrow \chi_\Gamma$ which preserves the partial ordering such that for each $N\in \chi$ its principal isotropy group is given by the intersection of all kernels of the weights of all edges emanating from a single vertex in the corresponding element of $\chi_\Gamma$. \end{lem} \begin{proof} The one-skeleton of $M^H$ is encoded in the minimal (possibly disconnected) subgraph $\Gamma^H\subset \Gamma$. As the odd cohomology of (every component of) $M^H$ vanishes, see \cite[Lemma 2.2]{MR2283418}, the $T$-action on every component of $M^H$ is again GKM and, since GKM graphs are always connected, the connected components of $\Gamma^H$ are the GKM graphs of the components of $M^H$. The principal isotropy type on a component of $M^H$ can be reconstructed in the way described in the lemma, since the weights at a fixed point determine the action on a neighbourhood. This also implies the injectivity of the correspondence: if $N\subset M^H$ and $N'\subset M^{H'}$ are connected components corresponding to the same element of $\chi_\Gamma$, then the principal isotropy groups of $N$ and $N'$ agree and are equal to some $U$ containing both $H$ and $H'$. Thus both are components of $M^U$ and have nonempty intersection, which implies $N=N'$. \end{proof} \noindent {\bf Dimension.} The main theorem in \cite{2003.11298v1} implies that for a simply-connected $6$-dimensional integer GKM manifold $M$ whose stabilizer groups satisfy a certain assumption -- which is satisfied if they are connected -- the GKM graph encodes the nonequivariant diffeomorphism type of $M$. 
In dimension $4$, even the equivariant diffeomorphism type is known to be encoded by \cite[Theorem p.\ 537]{MR268911} (or by the arguments for $6$-dimensional torus manifolds in \cite[Section 6]{MR3030690}). Thus, our $8$-dimensional examples of GKM manifolds that are not homotopy equivalent but have the same GKM graph have the lowest possible dimension with this property.\\ \noindent {\bf Number of fixed points.} Our example has exactly four fixed points. We would like to argue that this is the minimal possible number of fixed points in our situation. For a simply-connected integer GKM manifold of positive dimension we always have at least two fixed points. In the case of exactly two fixed points, the manifold has the integer homology of a sphere. In this situation, by collapsing the complement of a disc, one always finds a map to the standard sphere that induces an isomorphism in integer homology, which by the homology Whitehead theorem is a homotopy equivalence. From the (topological) generalized Poincar\'e conjecture \cite{MR137124} we deduce that the manifold has to be homeomorphic to the standard sphere. For three fixed points we have the following proposition: \begin{prop}For a simply-connected integer GKM manifold $M$ in arbitrary dimension, with three fixed points, the GKM graph determines the nonequivariant diffeomorphism type of $M$. \end{prop} \begin{proof} If the action has precisely three fixed points, then the integral homology of $M$ is $\mathbb{Z}$ in degrees $0$, $n$, and $2n$, where $2n$ is the dimension of $M$, and $0$ in all other degrees. In this case, by Corollary $B$ of \cite{MR2355782} the diffeomorphism type of $M$ is determined by the Pontryagin number $p_n^2(M)[M]\in \mathbb{Z}$. Note that the cohomology ring of $M$ is given by $\mathbb{Z}[x]/\langle x^3 \rangle$ with $x \in H^n(M)$, thus $M$ has a natural orientation given by $x^2 = (-x)^2$. Hence the Pontryagin number is encoded in the GKM graph of $M$ by \cite{MR4088417}. 
\end{proof} \section{The Hamiltonian example} We will construct a Hamiltonian $T^3$-action of GKM type on a space $Y$ which has the signed GKM graph of a diagonal action on $\mathbb{C}P^1\times \mathbb{C}P^3$, while $Y$ is not homotopy equivalent to the latter product. \subsection{The construction} The $T^3$-space $Y$ arises as an equivariant pullback of the bundle $X\rightarrow S^6$ along a map $k\colon \mathbb{C}P^3\rightarrow S^6$. Explicitly we define $k$ as the collapsing map $\mathbb{C}P^3\rightarrow \mathbb{C}P^3/\mathbb{C}P^2\cong S^6$, where $\mathbb{C}P^2$ is embedded as those points $[z_0:\ldots:z_3]\in \mathbb{C}P^3$ with $z_3=0$. The identification of the quotient space with $S^6$ can be done such that $k$ is equivariant with respect to the action \[(s,t,u)\cdot [z_0:z_1:z_2:z_3]=[sz_0:tz_1:uz_2:z_3]\] on $\mathbb{C}P^3$ and the standard $T^3$-action on $S^6\subset \mathbb{C}^3\oplus\mathbb{R}$. Explicitly, let $S^6_+=\{(v,w,z,h)\in S^6~|~ h\geq 0\}$ be the upper hemisphere and $A\subset S^6_+$ the equator. Then one has homeomorphisms \[\mathbb{C}P^3/\mathbb{C}P^2\leftarrow S^6_+/A\rightarrow S^6,\] where the first map is defined by $(v,w,z,h)\mapsto [v:w:z:h]$ and the second map is defined by stretching along the real coordinate as in the proof of Lemma \ref{lem:S5nachS4}. We define $Y$ as the pullback of the $T^3$-equivariant bundle $X\to S^6$ along $k$, which is the same as the pullback of the $S^1$-quotient $\mathbb{C}P^3\to S^4$ of the quaternionic Hopf bundle $S^7\to S^4$ along $f\circ k$, where $f$ is defined in Proposition \ref{prop:S6nachS4}. Since $k$ maps the one-skeleton of $\mathbb{C}P^3$ to the one-skeleton of $S^6$ and the restriction of $f$ to the one-skeleton of $S^6$ was shown to be equivariantly homotopic to a constant map, the same holds for the composition $f\circ k$ when restricted to the one skeleton of $\mathbb{C}P^3$. 
Analogously to Theorem \ref{thm:non-rigidity} we obtain \begin{thm}\label{thm:non-rigidity-symplectic} The $1$-skeleton of the $T^3$-action on $Y$ is $T^3$-equivariantly homeomorphic to the $1$-skeleton of the product $T^3$-action on $\mathbb{C}P^1\times \mathbb{C}P^3$ which acts on $\mathbb{C}P^1$ via $(s,t,u)\cdot [v,w]=[st^{-1}u^{-1}v,w]$ and on $\mathbb{C}P^3$ via $(s,t,u)\cdot [z_0:z_1:z_2:z_3]=[sz_0:tz_1:uz_2:z_3]$. \end{thm} It remains to prove that $Y$ is not homotopy equivalent to the product $\mathbb{C}P^1\times \mathbb{C}P^3$. While the strategy is largely analogous to that of Lemma \ref{lem:pi5}, the details are slightly more involved, drawing from classical results on homotopy groups of spheres. \begin{lem}\label{lem:fancyhomotopystuff} Let $g$ be the Hopf map $S^3\rightarrow S^2$ and $i\colon S^4\cong \mathbb{H}P^1\rightarrow \mathbb{H}P^\infty$ be the inclusion. Then $i\circ \Sigma^2 g\circ \Sigma^3 g\circ \Sigma^4 g$ defines a non-trivial element of $\pi_7(\mathbb{H}P^2)$. \end{lem} \begin{proof} As observed in Proposition \ref{prop:S6nachS4}, the map $\Sigma g\circ \Sigma^2 g$ is a generator of $\pi_5(S^3)$. In particular it induces a non-trivial map $\pi_5(S^5)\rightarrow \pi_5(S^3)$. Since $g$ is the projection of an $S^1$-bundle, it induces isomorphisms on higher homotopy groups, so the composition $g\circ \Sigma g\circ \Sigma^2g$ induces a non-trivial map on $\pi_5$. Hence it is a generator of $\pi_5(S^2)$. The next step is to show that its two-fold suspension $\Sigma^2 g\circ \Sigma^3 g\circ \Sigma^4 g$ is still non-trivial. This follows from the EHP-sequence \cite{MR83124} (or \cite{MR1867354} for a modern exposition) which involves the suspension homomorphism or rather its localization at $2$ in a long exact sequence. 
The part of the sequence we are interested in reads \[ \cdots\rightarrow \pi_5(S^2)_{(2)}\xrightarrow{E_{(2)}}\pi_6{(S^3)}_{(2)}\rightarrow \pi_6(S^5)_{(2)}\rightarrow \pi_4(S^2)_{(2)}\xrightarrow{E_{(2)}} \pi_5(S^3)_{(2)}\rightarrow\cdots \] where $E$ denotes the suspension homomorphism and the index $(2)$ denotes localization at $2$. Using that $\pi_5(S^2)=\mathbb{Z}_2$ (cf. \cite{MR37507}) and $\pi_6(S^5)=\mathbb{Z}_2$ by the Freudenthal suspension theorem, as well as $\pi_6(S^3)=\mathbb{Z}_{12}$ (cf. \cite{MR0143217}), the left part of the sequence becomes \[ \cdots\rightarrow\mathbb{Z}_2\xrightarrow{E_{(2)}} \mathbb{Z}_4\rightarrow \mathbb{Z}_2\rightarrow 0 \] since $E\colon \pi_4(S^2)\rightarrow \pi_5(S^3)$ is an isomorphism (as argued in Section \ref{sec:construction}, the two groups are $\mathbb{Z}_2$ and generated by $g\circ \Sigma g$ and $\Sigma g\circ\Sigma^2 g$, respectively). Thus $E\colon \pi_5(S^2)\rightarrow \pi_6(S^3)$ is injective because this holds for the localized map and $\pi_5(S^2)\rightarrow\pi_5(S^2)_{(2)}$ is an isomorphism. An analogous sequence exists for the suspension $E$ on $S^3$. For odd spheres, no localization is necessary and the relevant part of the sequence reads \[\cdots\rightarrow\pi_6(S^3)\xrightarrow{E} \pi_7(S^4)\rightarrow \pi_7(S^7)\rightarrow\cdots\] which is $\mathbb{Z}_{12}\rightarrow \mathbb{Z}_{12}\oplus \mathbb{Z}\rightarrow\mathbb{Z}$. Consequently the left-hand map is necessarily injective. In total we deduce that the map $\Sigma^2 g\circ \Sigma^3 g\circ \Sigma^4 g$, which is the double suspension of the generator of $\pi_5(S^2)$, defines a non-trivial element of order $2$ in $\pi_7(S^4)$. It remains to prove that it survives the inclusion into $\mathbb{H}P^\infty$. It suffices to consider the inclusion into the $8$-skeleton $\mathbb{H}P^2$. The latter arises from $S^4$ by attaching a single $8$-cell via the Hopf fibration $g'\colon S^7\rightarrow S^4$. 
By homotopy excision the kernel of $\pi_7(S^4)\rightarrow\pi_7(\mathbb{H}P^2)$ is generated by $g'$. From the long homotopy sequence of $S^3\rightarrow S^7\rightarrow S^4$ we deduce that $[g']\in\pi_7(S^4)$ is of infinite order. This implies that the class of $\Sigma^2 g\circ \Sigma^3 g\circ \Sigma^4 g$, which is torsion, is not contained in the kernel of $\pi_7(S^4)\rightarrow \pi_7(\mathbb{H}P^2)$. \end{proof} \begin{prop} We have $\pi_6(Y)= \mathbb{Z}_6$ whereas $\pi_6(\mathbb{C}P^1\times \mathbb{C}P^3)=\mathbb{Z}_{12}$. \end{prop} \begin{proof} Denote by $Q$ the pullback of the universal $\mathrm{SU}(2)$-bundle along the map $ i\circ f\circ k\colon \mathbb{C}P^3\rightarrow \mathbb{H}P^\infty$, where $i,f,k$ are as above. We have a principal $S^1$-bundle $S^1\rightarrow Q\rightarrow Y$ so it suffices to compute the homotopy groups of $Q$. There is a pullback diagram \[\xymatrix{ \mathrm{SU}(2)\ar[r]\ar[d] &Q\ar[r]\ar[d] & \mathbb{C}P^3\ar[d]^{i\circ f\circ k}\\ \mathrm{SU}(2)\ar[r] & S^\infty \ar[r]& \mathbb{H}P^\infty }\] which induces a commutative diagram \[\xymatrix{\cdots\ar[r]& \pi_7(\mathbb{C}P^3)\ar[r]\ar[d]^{(i\circ f\circ k)_*}& \pi_6(\mathrm{SU}(2)) \ar[r]\ar[d]^{\mathrm{id}}& \pi_6(Q)\ar[r]\ar[d]& 0\ar[r]\ar[d]& \cdots\\ \cdots\ar[r]& \pi_7(\mathbb{H}P^\infty)\ar[r] & \pi_6(\mathrm{SU}(2))\ar[r]& 0 \ar[r]& \pi_6(\mathbb{H}P^\infty)\ar[r]&\cdots}\] on long exact homotopy sequences. The group $\pi_7(\mathbb{C}P^3)$ is generated by the canonical projection $p\colon S^7\rightarrow \mathbb{C}P^3$. By \cite[Lemma 9.2]{MR270373} the composition $k\circ p$ is homotopic to the quadruple suspended Hopf map $\Sigma^4 g$. We have $f = \Sigma^2 g\circ \Sigma^3 g$ (cf. proof of Proposition \ref{prop:S6nachS4}) and thus by Lemma \ref{lem:fancyhomotopystuff} we conclude that $(i\circ f\circ k)_*([p])=[i\circ \Sigma^2 g\circ \Sigma^3 g\circ \Sigma^4 g]$ is a non-trivial element of order $2$ in $\pi_7(\mathbb{H}P^\infty)$. 
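Spelled out, the diagram chase above can be summarized in one line (a summary for the reader's convenience, using the identification of $\pi_7(\mathbb{H}P^\infty)$ with $\pi_6(\mathrm{SU}(2))$ coming from the lower exact sequence, and the fact that the image of $(i\circ f\circ k)_*$ is generated by an element of order $2$):
\[
\pi_6(Q)\cong \pi_6(\mathrm{SU}(2))\big/\operatorname{im}\bigl(\pi_7(\mathbb{C}P^3)\rightarrow \pi_6(\mathrm{SU}(2))\bigr)\cong \mathbb{Z}_{12}/\mathbb{Z}_2\cong\mathbb{Z}_6 .
\]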
Using the fact that $\pi_7(\mathbb{H}P^\infty)\rightarrow\pi_6(\mathrm{SU}(2))$ is an isomorphism and $\pi_6(\mathrm{SU}(2))=\mathbb{Z}_{12}$ we see that $\pi_6(Y)=\mathbb{Z}_{6}$. \end{proof} \subsection{Symplectic structures}\label{sec:symplecticstructures} Recall that the space $Y$ was defined as the total space of the pullback of an equivariant $\mathbb{C}P^1$-fiber bundle $\mathbb{C}P^3\to S^4$ along $f\circ k:\mathbb{C}P^3\to S^6\to S^4$. Let $\pi \colon Y \to \mathbb{C} P^{3}$ be the projection map; then by Thurston's construction \cite{MR0402764}, see also \cite[Theorem 6.1.4]{MR3674984}, $Y$ admits a symplectic structure $\omega = \omega_F+C\pi^\ast(\omega_B)$, where $\omega_B$ is the Fubini-Study form on $\mathbb{C} P^{3}$, $\omega_F$ is a closed $2$-form which restricts to the Fubini-Study form on the fibers (after identifying them with $\mathbb{C} P^1$ up to elements of the structure group), and $C>0$ is sufficiently large. Note that Thurston's construction is applicable since the Fubini-Study form on $\mathbb{C} P^{1}$ is invariant under the structure group of $\pi$ (cf. Remark \ref{rem:structure group}) and the inclusion of a fiber is surjective on cohomology due to the fact that the odd-dimensional cohomology of fiber and base vanishes. As argued in Section \ref{subsec:acs}, the $T^3$-action on $Y$ preserves the fibers of $\pi$, with the transformations between fibers corresponding to elements of the structure group. The restriction of $\omega_F$ to any fiber is invariant under the structure group. Thus if we average $\omega_F$ over $T^3$ we obtain a $T^3$-invariant closed $2$-form $\widetilde\omega_F$ whose restriction to any fiber agrees with that of $\omega_F$. Clearly $\omega_B$ is also invariant with respect to the $T^3$-action, hence $\widetilde \omega = \widetilde\omega_F + C\pi^\ast(\omega_B)$ is a $T^3$-invariant symplectic form (potentially replacing $C$ by a larger constant) on $Y$. 
In this way, a nonempty open set in the second cohomology of $Y$ can be realized as cohomology classes of $T^3$-invariant symplectic forms on $Y$. As $Y$ is simply-connected, the $T^3$-action on $Y$ is automatically Hamiltonian with respect to any of these symplectic forms. \begin{rem} It is possible to write the fiber bundle $\mathbb{C} P^{3} \to S^4$ as the projectivization of the vector bundle $E:=S^7 \times_{\mathrm{SU}(2)} \mathbb{C}^2 \to S^4$, where $\mathrm{SU}(2)$ acts on $\mathbb{C}^2$ via the standard representation. Thus, the space $Y$ is the projectivization of the $T^3$-equivariant complex vector bundle $(f\circ k)^*E\to \mathbb{C} P^3$. By \cite[Chapter I, \S 6]{MR2815674}, any complex vector bundle over $\mathbb{C} P^3$ admits a holomorphic structure, which may no longer be equivariant, even if the original bundle was equivariant. Consequently, by \cite[Proposition 3.18]{MR2451566} its projectivization admits a K\"ahler structure. It follows that the space $Y$ is (nonequivariantly) K\"ahler. We do not know if it admits a $T^3$-invariant K\"ahler structure. \end{rem} In this section, we compare the symplectic structures on $Y$ and $\mathbb{C} P^1\times \mathbb{C} P^3$ with regard to two aspects: their x-ray and the symplectic cohomological rigidity problem \cite{2002.12434v1}. We recall the notion of the x-ray of a Hamiltonian action of a torus $T$ on a manifold $M$ with momentum map $\mu\colon M\rightarrow \mathfrak{t}^*$: it is given by the datum of the closed orbit type stratification $\chi$ (as a partially ordered set), which consists of the connected components of all submanifolds $M^H\subset M$, where $H\subset T$ is a subgroup, together with the function that associates to $N\in \chi$ the polytope $\mu(N)\subset \mathfrak{t}^*$. Thus in the GKM case it encodes in particular the (signed) GKM graph (associated to an almost complex structure which is compatible with the symplectic form). 
However, the information on the lengths of the edges is lost when passing from the x-ray to the GKM graph. We have the following addendum to Theorem \ref{thm:non-rigidity-symplectic}: \begin{prop} On $Y$ and $\mathbb{C}P^1\times\mathbb{C}P^3$ there exist $T^3$-invariant symplectic forms with momentum maps whose x-rays coincide. \end{prop} \begin{proof} We have seen already that the one-skeletons of the two actions are equivariantly homeo\-morphic. It suffices to prove the existence of respective momentum maps which agree on the one-skeleton with respect to this homeomorphism: by Lemma \ref{lem:stratification}, $\chi$ is encoded in the GKM graph. For any $N\in \chi$, the momentum image $\mu(N)$ is determined by the image under $\mu$ of all fixed points that are contained in $N$. These are however also determined by the corresponding subgraph. In order to prove that there are momentum maps on $Y$ and $\mathbb{C}P^1\times\mathbb{C}P^3$ which coincide on the one-skeleton (with respect to a fixed homeomorphism), we start by investigating a momentum map $\mu$ of the symplectic form $\omega$ on $Y$ as constructed above. For a fixed $\omega$, $\mu$ is unique up to translation by some element in $\mathfrak{t}^*$. Let $A$ be the one-skeleton of the action on $Y$ and $B$ be the one-skeleton of $\mathbb{C}P^3$. Every invariant $2$-sphere in $A$ gets mapped by $\mu$ to an affine linear segment in $\mathfrak{t}^*$. The slope of this segment, when moving from a fixed point $p$ in a sphere to the other fixed point $q$, is determined up to sign by the weight of the sphere in $\mathfrak{t}^*/\pm$. The sign is determined by the orientation on the sphere which is induced by $\omega$. The projection $\pi\colon Y\rightarrow \mathbb{C}P^3$ maps some spheres in $A$ homeomorphically onto invariant spheres in $\mathbb{C} P^3$ (we call those horizontal) and is constant on the remaining spheres, which are precisely the fibers over the fixed points of $\mathbb{C}P^3$ (we call those vertical). 
From the construction of $\omega$ we see that $\pi$ is not necessarily a symplectomorphism on horizontal spheres (where spheres in $\mathbb{C}P^3$ are equipped with the restriction of $\omega_B$). However, if the constant $C$ is large enough, at least $\pi$ is orientation preserving on horizontal spheres. By Theorem \ref{thm:non-rigidity-symplectic}, the subspace of all horizontal spheres in $A$ decomposes into two connected components $A_h^+$ and $A_h^-$, each of which gets homeomorphically mapped to $B$ in an equivariant and sphere-wise orientation-preserving fashion. From the specific weights and orientations we deduce that $\mu|_{A_h^\pm}$ has to agree with $\mu_B\circ \pi$ up to translation and rescaling, where $\mu_B$ is a momentum map for $\omega_B$. Now the vertical spheres get mapped to parallel line segments of slope $\pm(1,-1,-1)$ in the standard basis of $\mathfrak{t}^*$. From the previous considerations we deduce that the signs of the slope must agree when moving on vertical spheres from $A_h^+$ to $A_h^-$. This also forces $\mu|_{A_h^+}$ and $\mu|_{A_h^-}$ to be scaled in the same way and implies that all the lengths of the segments that are the images of vertical spheres under $\mu$ agree. To sum up the above discussion, $\mu|_A$ is determined up to translation and global rescaling, as well as the length and sign of the vertical edges. The same considerations apply not only to $Y\rightarrow \mathbb{C}P^3$ but also to the trivial bundle $\mathbb{C}P^1\times\mathbb{C}P^3\rightarrow \mathbb{C}P^3$. Thus it remains to show that the above parameters can be manipulated in such a way that the momentum maps agree on the one-skeleton. Translation can be manipulated arbitrarily, and global rescaling of an associated momentum map is achieved by rescaling the symplectic form $\omega$. The sign of $\mu$ on vertical edges can be changed by replacing $\omega=\tilde{\omega}_F+C\omega_B$ with $-\tilde{\omega}_F+C\omega_B$. 
Finally, the length of the vertical edges is the only parameter that cannot be manipulated freely. However, we can make it arbitrarily short when compared to the basic edges by enlarging the constant $C$. Thus we can change the symplectic forms to make the momentum maps agree on the one-skeleton. \end{proof} We wish to demonstrate that $Y$ and $\mathbb{C} P^1\times \mathbb{C} P^3$ form a counterexample to the symplectic cohomological rigidity problem for integer GKM manifolds. The symplectic variant of the cohomological rigidity problem, as posed in \cite{2002.12434v1}, asks for families of symplectic manifolds that are distinguished by their cohomology ring and the cohomology classes of their symplectic structures. \begin{prop}\label{prop:nosymplecticrigidity} On $Y$ and $\mathbb{C} P^1\times \mathbb{C} P^3$ there exist symplectic forms that are intertwined via the isomorphism on cohomology induced by the equivariant isomorphism of the respective one-skeleta from Theorem \ref{thm:non-rigidity-symplectic}. \end{prop} \begin{proof} We observed at the beginning of the section that an open subset of the second cohomology of $Y$ is realized by symplectic structures on $Y$; on $\mathbb{C} P^1\times \mathbb{C} P^3$ the same is true for an open and dense subset of the second cohomology group. The assertion is immediate. \end{proof} We observe that even more is true: a closed equivariant extension of the symplectic form $\omega$ on a symplectic manifold $M$ with Hamiltonian $T$-action is an equivariant differential form of the form $\omega + \mu$, where $\mu$ is a momentum map of the $T$-action, see \cite[Example 4.16]{MR4025581}. One can choose appropriate momentum maps on $Y$ and $\mathbb{C} P^1\times \mathbb{C} P^3$ such that the isomorphism $H^2_{T^3}(Y) \cong H^2_{T^3}(\mathbb{C} P^1\times \mathbb{C} P^3)$ induced by the homeomorphism from Theorem \ref{thm:non-rigidity-symplectic} intertwines the corresponding equivariant cohomology classes. 
In other words, these examples are not equivariantly symplectically cohomologically rigid.
\section{Introduction} Recently, the proliferation of the Internet of Things (IoT) has triggered a surge in data traffic for future wireless networks. To alleviate such traffic conflicts in existing terrestrial infrastructures as well as to provide cloud functionalities on demand, multi-dimensional integrated networking has been envisioned as the inevitable network architecture for achieving worldwide connectivity and coverage \cite{D20Nat}. Fueled by such a big-data-driven scenario and increasing computing power, machine learning (ML)-enabled methods are appealing for providing low computational cost and extrapolating new features from environments \cite{LBH15DL}. However, the stringent requirement of stable/continuous network connections and the substantial energy consumed by a central controller pose rigorous challenges to centralized schemes composed of numerous intelligent mobile agents (e.g., unmanned aerial vehicles (UAVs)) \cite{ZS20arXiv}. In this direction, by decentralizing the central service and spreading its burden to edge devices, data computation and model training can be handled locally in real time. 
Additionally, powered by the decentralized data management mechanism, general regulations governing data privacy can be satisfied \cite{YY20privacy}. \begin{figure} [t] \centering \includegraphics[width=80mm]{UAV_FL111.pdf} \caption{Fully decentralized federated learning-assisted UAV communications.} \label{[system model]} \vskip -0.1 in \end{figure} In the context of decentralized learning, federated learning (FL) has been recognized as an emerging approach to collaborative model training via a topology of connected agents, while keeping the raw data locally dispersed \cite{K19Federated}. Towards this end, both privacy preservation and communication/computation efficiency can be guaranteed by leveraging a fully decentralized FL framework. Furthermore, to deploy such a fully decentralized FL system in production, the fleet of agents is expected to be capable of holding reliable peer-to-peer communication, which is the key enabler for employing an FL-based mechanism \cite{K19Federated}. For instance, referring to the 5G enhancement for UAV connections addressed in the latest 3GPP Release 17 \cite{20203GPPUAV} and the Flying Ad-Hoc Networks (FANETs) considered in IEEE 802.11 \cite{GJ15Survey}, the dynamic aerial network topology of UAV swarms can be built up to realize inter-node communications, which provides the suitability of investigating the FL-enabled UAV community \cite{ZS20arXiv}. Aiming at mitigating the impact of the dynamic communication environment on the reliability required in production systems, the exploration of FL-enabled networks has attracted intense interest towards realizing ultra-reliable low-latency communication (URLLC) \cite{SBSD20TCOMM}, although comprehensive research is still in its infancy. 
In \cite{EC20arxiv}, a neural network (NN) model employed at the base station (BS) is trained by gradient data collected from multiple users. However, the FL-based training process in \cite{EC20arxiv} is orchestrated by a central server, which implies a fragile state with a single point (i.e., the central server) of failure. The implementation of FL-based task optimization in wireless sensor networks is increasing in popularity for multiple local models \cite{ZS20arXiv}, \cite{L20arXiv}. The first work applying an FL scheme to UAV swarms is \cite{ZS20arXiv}, in which the joint power allocation and flying trajectory design of UAV swarms are provided. Moreover, a multi-dimensional contract-matching incentive mechanism for UAVs is designed by adopting an FL-based sensing and collaborative learning scheme \cite{L20arXiv}. Practical wisdom encourages the application of theoretical research in the real world, yet current FL-based research seldom considers real-time stochastic data collection and time-varying network topology. In this work, we propose an inexact stochastic parallel random walk alternating direction method of multipliers (ISPW-ADMM) algorithm that copes with decentralized FL tasks while maintaining high communication/learning efficiency and realizing enhanced privacy preservation. 
Besides, the proposed framework can also meet the challenges of time-varying connectivity graphs and stochastic data collection, with potentially fast convergence. For a specific on-board mission in practice, a robust beamforming (BF) design is first realized by adopting a local extreme learning machine (ELM) model. Then, all local models are supposed to reach the global consensus solution by integrating the decentralized FL framework. Through numerical results, the proposed algorithm is validated to be both communication and time efficient. \section{Fully Decentralized Federated Learning Framework} \label{[System Framework]} \subsection{Fully Decentralized Framework}\label{Fully_Decentralized_Framework} As illustrated in Fig. \ref{[system model]}, a swarm of traveling UAVs (agents) $(\mathcal{N} =\{1,...,N \})$ is considered to provide wireless services to the ground terminals in geographically distributed regions. With the aim of solving the decentralized consensus optimization problem in such a multi-agent system \cite{MY19IOT}, we have \begin{equation} \label{Centralized_Consecus} \min _{x\in { {\mathbb{R}}^{p}}}~\sum _{i=1}^{N}\mathbb{E}_{\bm \zeta_i}[f_{i}(x_i;\bm \zeta_i)],~s.t.~x_i=z,\forall i\in\mathcal{N}, \end{equation} with $f_i: \mathbb{R}^{p} \rightarrow \mathbb{R} $ being the local loss function of model weights $x_i\in\mathbb{R}^p$ privately held by UAV $i\in\mathcal{N}$. In FL tasks \cite{K19Federated}, the agents cooperate with each other to find a global model (i.e., $z$) through drawing sequences of independent and identically distributed (i.i.d.) observations from the random vector $\{\bm \zeta_i|i\in\mathcal{N} \}$. Specifically, $\bm\zeta_i$ obeys a fixed distribution $\text{P}_l$ out of the set $\textbf{P}=\{\text{P}_1,...,\text{P}_L| L\in\mathbb{N}^+\}$. Hereinafter, we denote $\mathbb{E}_{\bm\zeta_i}[f_i(x_i;\bm\zeta_i)]=f_i(x_i)$ for simplicity. 
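As a toy illustration of the consensus formulation (our own sketch, not part of the paper's system model; the quadratic losses below are hypothetical stand-ins for the local losses $f_i$), with $f_i(x)=\|x-a_i\|^2$ the solution of the consensus problem is simply the average of the private data points $a_i$:

```python
import numpy as np

# Hypothetical quadratic local losses f_i(x) = ||x - a_i||^2 for N agents,
# each holding private data a_i. The consensus solution of the global
# problem min_x sum_i f_i(x) is then the average of the a_i.
rng = np.random.default_rng(0)
N, p = 5, 3
a = rng.normal(size=(N, p))  # private data held by each agent

def global_loss(x):
    return sum(np.sum((x - a[i]) ** 2) for i in range(N))

x_star = a.mean(axis=0)  # closed-form consensus minimizer
# Sanity check: no small perturbation improves the global loss.
for _ in range(10):
    x_pert = x_star + 0.1 * rng.normal(size=p)
    assert global_loss(x_star) <= global_loss(x_pert)
```

Any decentralized scheme for this problem should therefore drive all local models toward this average in the quadratic case; the ADMM machinery that follows achieves this without any agent revealing its raw data.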
Referring to the fully decentralized manner presented as PW-ADMM in \cite{YY20CL}, problem (\ref{Centralized_Consecus}) can be equivalently rewritten as \vspace{-0.05in} \begin{equation} \label{Decentralized_Consecus} \begin{aligned} \min _{\boldsymbol {x}, \boldsymbol {z}}~\sum _{i=1}^{N}f_{i}(x_{i}), ~\quad s.t.~ {{\mathbbm {1}}_p}\otimes \frac {1}{M}\sum _{m=1}^{M}z_{m}-\boldsymbol {x}=\bm{0}, \end{aligned} \end{equation} with $\otimes$ denoting the Kronecker product, ${\mathbbm {1}}_p= [1, \cdots, 1]^T \in \mathbb{R}^{p}$ and $\boldsymbol{x}=[x_1,\cdots, x_N]\in \mathbb{R}^{pN}$. $\boldsymbol{z}=[z_1,\cdots, z_M]\in \mathbb{R}^{pM}$ denotes the tokens held by random walks $\mathcal{M}=\{1,...,M \}$. For FL in the decentralized manner, the global consensus solution shall be the average value of all the tokens, i.e., $\frac{1}{M}\sum_{m=1}^Mz_m$. Thus, the augmented Lagrangian for (\ref{Decentralized_Consecus}) is given by \begin{equation} \begin{aligned} \hspace{-0.5pc}\mathcal {L}_{\rho }(\boldsymbol {x},\boldsymbol {z},\boldsymbol {y})= \sum _{i=1}^{N}f_{i}(x_{i})+& \bigg\langle \boldsymbol {y }, {{\mathbbm {1}}_p}\otimes \frac {1}{M}\sum _{m=1}^{M}z_{m}-\boldsymbol {x} \bigg\rangle \\ + & \frac {\rho }{2} \bigg\Vert {{\mathbbm {1}}_p}\otimes \frac {1}{M}\sum _{m=1}^{M}z_{m}-\boldsymbol {x} \bigg\Vert^{2}, \end{aligned} \end{equation} where $ \boldsymbol {y }$ is the Lagrange multiplier and $\rho>0$ denotes a constant penalty parameter. Following the updates of synchronous inexact ADMM \cite{chang2014multi} and the proximal stochastic ADMM \cite{huang2018mini}, the solution to (\ref{Decentralized_Consecus}) can be obtained in iterations, wherein the updates for the $(k+1)$-th iteration follow \begin{subequations} \begin{align} &x_i^{k+1}:=\left\{\begin{aligned} &\arg \min_{x_i} \hat{\mathcal{L}}_{\rho,i }^{k} (x_i, \bm{z}^{k } , y_i^k ) ,~i=i_{m_{k+1}} ;\\ &x_i^{k },~\text{otherwise}; \end{aligned} \right. 
\label{xup1}\\ &y_i^{k+1}:=\left\{\begin{aligned} & y_i^{k } + \gamma\rho \bigg(\frac{1}{M}\sum_{m=1}^{M}z_m^{k }-x_{i}^{k +1} \bigg),~i=i_{m_{k+1}};\\ & y_i^{k },~\text{otherwise}; \end{aligned} \right. \label{yup1} \\ &z_m^{k+1}:=\left\{\begin{aligned} \label{zup1} &\arg \min_{z_m} \mathcal{L}_{\rho} (\bm{x}^{k+1},z_m, \bm{z}_{-m}^k ,\bm{y}^{k+1} ),~m=m_{k+1} ;\\ &z_m^{k}, \text{otherwise}; \end{aligned} \right. \end{align} \end{subequations} where $\bm{z}_{-m}^k=\{z_1^k,...,z_{m-1}^k,z_{m+1}^k,...,z_M^k \} $, and \begin{equation} \begin{aligned} &\hat{\mathcal{L}}_{\rho,i }^{k} (x_i, \bm{z}^{k } , y_i^k )= g_i(x_i^k;\bm{\zeta}_i^k)(x_i-x_i^k) \\ &~~~~~~~~+ \frac {\rho }{2} \bigg\| \frac {1}{M}\sum _{m=1}^{M}z_{m}^k-x_i +\frac{y_i^k}{\rho} \bigg\|^{2}+\frac{\tau}{2}\| x_i - x_i^k\|^2, \end{aligned} \end{equation} where $\tau$ and $\gamma$ are step sizes for the primal and dual updates, respectively, while $g_i(x_i; \bm{\zeta}_i)\triangleq \nabla f_i(x_i;\bm{\zeta}_i)$ is the stochastic gradient. According to \cite{YM19arXiv}, the convergence speed of ADMM with a first-order approximation may degrade compared to that of traditional ADMM. Besides, the stochastic nature of $g_i(\cdot)$ will introduce data variance into the primal update \cite{zheng2016fast}. To this end, a biased first-order moment estimate is further proposed to stabilize and speed up convergence for stochastic updates, that is, \begin{equation}\label{fbe} \mu_{i}^{k+1}:= \eta \mu_{i}^{k} + (1-\eta) g_{i} (x_{i}^k;\bm{\zeta}_{i}^k ),~i=i_{m_{k+1}}, \end{equation} where $\eta\in[0,1)$ denotes the exponential decay rate for the first-order moment estimate. So far, the updates of the tokens $\bm z$ are still performed in a centralized manner. 
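As a quick numerical illustration of the moment estimate (\ref{fbe}) (a sketch with synthetic gradients, not from the paper): the exponential moving average keeps the mean of i.i.d. stochastic gradients while substantially reducing their variance, which is what stabilizes the stochastic primal update.

```python
import numpy as np

# Sketch of the biased first-order moment estimate mu^{k+1} = eta*mu^k + (1-eta)*g^k,
# fed with synthetic i.i.d. gradient samples g^k = g_true + noise.
rng = np.random.default_rng(2)
eta = 0.9
g_true = np.array([1.0, -2.0])
mu = np.zeros(2)
history = []
for k in range(5000):
    g = g_true + rng.normal(scale=1.0, size=2)  # stochastic gradient sample
    mu = eta * mu + (1 - eta) * g               # moment estimate, cf. (6)
    history.append(mu.copy())
tail = np.array(history[1000:])                 # discard the biased burn-in
# The estimate concentrates around the true gradient ...
assert np.allclose(tail.mean(axis=0), g_true, atol=0.1)
# ... with variance well below that of the raw samples (variance 1.0 per coordinate).
assert tail.var(axis=0).max() < 0.5
```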
By initializing $\bm{z}^0= \bm{x}^0= \bm{y}^0=\bm{0}$ in (\ref{zup1}), it follows that the token can be updated \textit{incrementally} as \begin{equation}\label{zup} \begin{aligned} z_{m}^{k+2} = z_{m }^{k+1} + \frac{M}{N} \bigg[\bigg(x_i^{k+1}-\frac{y_i^{k+1}}{\rho} \bigg) - \bigg(x_i^{k }-\frac{y_i^{k }}{\rho} \bigg)\bigg], \end{aligned} \end{equation} where $m=m_{k+1}$ and $i=i_{m_{k+1}}$ denote the activated random walk and agent, respectively. That is, the update of $z_m$ does not require information from the other tokens $\bm{z}_{-m}$, so the updates can eventually be carried out in parallel. Following the conventional PW-ADMM \cite{YY20CL}, the updates (\ref{xup1})-(\ref{zup1}) can be executed asynchronously by approximating $\frac{1}{M}\sum_{m=1}^M z_m$ with the received token $z_m$ in (\ref{xup1}) and (\ref{yup1}). Since all agents and parallel random walks maintain independent update clocks, two counters $k_i$ and $s_m$ are introduced, corresponding to agent $i\in\mathcal{N}$ and random walk $m\in\mathcal{M}$, respectively. Different from the conventional PW-ADMM, the mobility of the UAV swarm implies a time-varying undirected graph $\mathcal{G}^{(t)}=( \mathcal{E}^{(t)},\mathcal{N} )$, where $t$ is the time stamp and $\mathcal{E}^{(t)}$ is the set of links at time $t$. Specifically, if agent $j$ travels within the communication range of agent $i$ at time $t$, we have $(i,j)\in\mathcal{E}^{(t)}$. Defining $\overline{\mathcal{N}}_i^{(t)}$ as the set of neighboring agents of agent $i$ at time $t$, we summarize the inexact stochastic PW-ADMM (ISPW-ADMM) in Algorithm 1. Note that with $M=1$, the parallel random walk token transmission reduces to the conventional random walk strategy \cite{MYHY20TSP}.
\begin{algorithm}[t] \caption{ISPW-ADMM } \begin{algorithmic}[1] \STATE \textbf{initialize}: $\{z^0 = x_i^0 = y_i^0= \mu_i^0=\bm{0}, k_i=0,\eta, \gamma |i\in\mathcal{N}\}$; \STATE \textbf{Algorithm for the $m$-th random walk:} \FOR{$s_m=0,1,...$} \STATE wait for token $z^{s_m}_m$ to arrive at agent $i=i_{s_m}$; \STATE draw i.i.d. samples $\bm{\zeta}_i^{k_i} \sim \overline{\text{P}}_{i}^{(t)}$ with $i=i_{s_m}$; \STATE update $\mu_{i}^{k_i+1}$ by (\ref{fbe}) with $i=i_{s_m}$; \STATE update $x_{i }^{k_i+1}$ by (\ref{xup}) with $i=i_{s_m}$; \begin{equation}\label{xup} \begin{aligned} x_{i }^{k_i+1}&:=\arg\min_{x_{i}} \big\langle \mu_{i }^{k_i+1}, x_{i }-x_{i }^{k_i} \big\rangle \\&~~~ +\frac{\rho}{2} \bigg\| z^{s_m}_m-x_{i} +\frac{y_{i}^{k_i} }{\rho} \bigg\|^{2} + \frac{\tau}{2}\|x_{i}-x_{i}^{k_i} \|^{2}; \end{aligned} \end{equation} \STATE update $y_{i}^{k_i+1}$ by (\ref{yup}) with $i=i_{s_m}$; \begin{equation}\label{yup} y_i^{k_i+1}:=y_i^{k_i} + \gamma \rho (z_m^{s_m} - x_i^{k_i+1}); \end{equation} \STATE update $z^{s_m+1}_m$ according to (\ref{zup}); \STATE set $k_i\gets k_i + 1$ with $i=i_{s_m}$; \STATE choose $i_{s_m+1}(\in \overline{\mathcal{N}}_{i}^{(t)} )$ according to $\bm{P}^{(t)}_{i}$ with $i=i_{s_m}$; \STATE send token $z_m^{s_m+1} $ to agent $i_{s_m+1} $; \ENDFOR \end{algorithmic} \end{algorithm} In ISPW-ADMM, after token $z_m^{s_m}$ arrives at agent $i=i_{s_m}$ via the $m$-th random walk, the collected samples $\bm \zeta_{i}^{k_i}\sim \overline{\text{P}}_i^{(t)}$ are used to train the local model $x_{i}$, where $\overline{\text{P}}_i^{(t)}\in \textbf{P}$ is determined by the location of UAV $i$ at time $t$. Moreover, the transition of token $z_m^{s_m+1}$ follows the embedded Markov chain with the time-varying probability matrix ${\bm{{P}}}_{i}^{(t)}$ \cite{YY20CL}.
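To make the token mechanism concrete, the sketch below runs a single-walk instance ($M=1$) of the above updates on scalar objectives $f_i(x)=(x-a_i)^2/2$, whose consensus minimizer is the mean of the $a_i$. It is an illustrative assumption-laden toy, not the paper's implementation: it uses the exact primal minimizer instead of the linearized stochastic step, and a uniform walk on a complete graph with self-loops.

```python
import random

# Toy single-walk (M = 1) token-based consensus on f_i(x) = (x - a_i)^2 / 2.
def walk_admm(a, rho=1.0, gamma=1.0, steps=3000, seed=0):
    rng = random.Random(seed)
    n = len(a)
    x = [0.0] * n   # local primal variables
    y = [0.0] * n   # local dual variables
    z = 0.0         # token carried by the walk
    i = 0           # agent currently holding the token
    for _ in range(steps):
        x_old, y_old = x[i], y[i]
        # exact x-update: argmin (x - a_i)^2/2 + rho/2 * (z - x + y_i/rho)^2
        x[i] = (a[i] + y[i] + rho * z) / (1.0 + rho)
        # dual ascent with the received token value
        y[i] = y[i] + gamma * rho * (z - x[i])
        # incremental token update, cf. the z-update above with M/N = 1/n
        z += ((x[i] - y[i] / rho) - (x_old - y_old / rho)) / n
        # token hops to a uniformly random agent (complete graph assumed)
        i = rng.randrange(n)
    return x, z

x, z = walk_admm([1.0, 2.0, 3.0, 4.0, 5.0])
```

With these settings both the local variables and the token approach $\mathrm{mean}(a)=3$, illustrating how the incrementally updated token tracks $\frac{1}{N}\sum_i(x_i-y_i/\rho)$ without ever gathering all local variables in one place.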
\subsection{Discussions} To integrate the fully decentralized framework with the FL model, each local NN model (i.e., $x_i$) first produces a local solution; the individual NN models then gradually reach the desired global solution over the dynamic connectivity graph $\mathcal{G}^{(t)}$. \begin{remark}\label{Rm1} ISPW-ADMM is applicable to any decentralized FL task over time-varying graphs. \end{remark} In what follows, we summarize how the proposed scheme meets the challenges of FL-based models with dynamic connections. \begin{itemize} \item[$-$] {\textit{High communication/learning efficiency:} } W-ADMM attains low communication cost by activating a single random node at a time, while PW-ADMM allows multiple walks in parallel to reduce the running time. Hence, following \cite{YY20CL}, the proposed ISPW-ADMM can trade off communication cost against running time.
\item[$-$] {\textit{Enhanced privacy preservation:} } To further enhance privacy preservation, partially homomorphic encryption \cite{Alex20encrop} can be applied to the transmitted tokens (i.e., $\bm z$) to protect the information exchanged between connected agents. \item[$-$] {\textit{Time-varying topology:} } Besides being more relevant to practical mobile communications, the multiple-random-walk mechanism in ISPW-ADMM allows each node to be traversed equally often in long-run updating, even over a dynamically connected graph. Consequently, the time-varying matrices (i.e., ${\bm{{P}}}_{i}^{(t)}$ and $\mathcal{G}^{(t)}$) do not severely degrade the resulting averaged global performance. \item[$-$] {\textit{Stochastic database:} } In view of the unbalanced/biased data collected by the traveling agents, the proposed scheme can potentially converge fast by exploiting stochastic gradients and the biased first-order moment estimate. \end{itemize} \section{FL-based beamforming design} Targeting a concrete on-board mission enabled by the proposed fully decentralized FL framework, this section first presents the local ELM model at a single UAV agent for robust BF design under noisy channel state information (CSI). With the multiple-random-walk mechanism of ISPW-ADMM stated in Section \ref{[System Framework]}, all local models gradually converge to consensus, so that the desired global BF design can eventually be realized under dynamic UAV swarms and the stochastic CSIs collected during traveling. {\iffalse In the context of ML, the BF algorithm designed by observing real CSIs is first provided as accurate label of the training data sample feeding into the selected ML model. After extrapolating data features from limited noisy CSIs, the trained NN can be adopted to predict the BF properly when meeting new CSIs in the same distribution.
\fi} \subsection{Beamforming Design} For UAV MIMO communications, the millimeter wave (mmWave) channel from UAV $i\in\mathcal{N}$ to the ground terminal is denoted by ${{\bf{H}}_{{{i}}}} \in {\mathbb{C}}^{N_r \times N_t}$, with $N_t$ and $N_r$ being the numbers of transmit and receive antennas equipped at the UAV and the ground node, respectively. Accordingly, the optimal fully digital (FD) beamforming ${\bf{F}}^{\text{opt}}$ shall be designed to maximize the achievable rate over the mmWave channel, that is, \begin{equation}\label{Rate} \begin{aligned} {\bf{F}}^{\text{opt}}=\arg\max_{{\bf{F}}} \log_2\left(\left\vert{\bf{I}}+{\rho_{{r}}} {\bf{H}}_i {\bf{F}} {\bf{F}}^{H} {\bf{H}}_i^{H} \right\vert\right), \end{aligned} \end{equation} where ${\rho_{{r}}}$ denotes the average received SNR. Based on the singular value decomposition (SVD) of ${\bf{H}}_i$, (\ref{Rate}) can be reformulated as \begin{equation}\label{SVD_Rate} \begin{aligned} {\bf{F}}^{\text{opt}}=\arg\max_{{\bf{F}}} \log_2\left(\left\vert{\bf{I}}+{\rho_{\rm{r}}} {\mathbf{\Sigma}}^2 {\mathbf{V}}^{H} {\bf{F}} {\bf{F}}^{H} {\mathbf{V}} \right\vert\right), \end{aligned} \end{equation} where ${\mathbf{\Sigma}}$ and ${\mathbf{V}}$ denote the diagonal singular-value matrix and the right unitary matrix of ${\bf{H}}_i$, respectively, which follow from ${\bf{H}}_i={\bf{U}} {\bf{\Sigma}} {\bf{V}}^{H} $. For the BF design with $N_s$ transmitted data streams, the right unitary matrix in (\ref{SVD_Rate}) can be partitioned as ${\bf{V}}=\left[{\bf{V}}^{(1)}, {\bf{V}}^{(2)}\right]$, with ${\bf{V}}^{(1)}\in {\mathbb{C}}^{N_t \times N_s}$ and ${\bf{V}}^{(2)}\in {\mathbb{C}}^{N_t \times (\text{rank}({\bf{H}}_i)-N_s)}$.
Thus, the optimal FD beamformer can simply be expressed as \begin{equation}\label{F_opt} \begin{aligned} {\bf{F}}^{\text{opt}}={\bf{V}}^{(1)}, \end{aligned} \end{equation} under the approximation ${{\bf{V}}^{(2)}}^{H} {\bf{F}}^{\text{opt}} \approx \bf{0}$ \cite{AR14TWC}. Even for the hybrid analog and digital BF design, the problem is equivalent to approaching the performance of the FD beamformer by minimizing the Frobenius norm of the gap between the two schemes, e.g., via the orthogonal matching pursuit (OMP) algorithm \cite{AR14TWC}. However, manipulating large matrices makes SVD-based algorithms ill-suited to the limited main memory of mobile drones, i.e., the complexity of computing the SVD of an $N_r \times N_t$ matrix is $ \mathcal{O}\left(N_t N_r \ \min(N_t, N_r)\right)$ \cite{SVD17arxiv}. Hence, it may not be flexible to apply traditional BF design to fast random-access communications with diverse observed CSIs in practice. \subsection{FL-based Robust BF Design} To circumvent the high complexity of optimization algorithms, it is imperative to build an ML framework with low computational complexity that is also capable of extrapolating new features from a limited set of noisy training data \cite{HYX20arXiv}. Among easily implementable models, the single layer feedforward neural network (SLFN) has demonstrated powerful potential for data regression and classification, with faster learning and less human intervention than conventional ML techniques. Furthermore, without the need to tune hidden-layer parameters, the ELM has been developed as a ``generalized'' SLFN with low complexity \cite{HG11}. Thus, given the hardware constraints of energy-limited devices (e.g., UAVs), the ELM has been verified as a fast technique for BF design in low-latency communications \cite{HYX20arXiv}. By deploying an ELM model at each UAV, the output weights $x_i$ of the ELM scheme at UAV $i$ shall be learned from a training database $\bm\zeta_{i(l,t)}$. Here, the index ${i(l,t)}$ is determined by the location of UAV $i$ at time $t$, whose data follow the distribution $\text{P}_l$. For simplicity, we denote $i={i(l,t)}$ and $\bm\zeta_i=\bm\zeta_{i(l,t)}$ hereinafter, given by \begin{equation}\label{Training_data} \begin{aligned} \bm\zeta_i =\{(\boldsymbol{S}_{i,d},\boldsymbol{T}_{i,d}) \vert~ d=1,..., D_{i} \}, \end{aligned} \end{equation} with $\boldsymbol{S}_{i,d}$ and $\boldsymbol{T}_{i,d}$ being the sample and target of the $d$-th training pair fed into the ML model at UAV $i$, respectively, and $D_i$ the number of training samples collected by UAV $i$. More specifically, the input format of the training samples to the ELM model is given by \begin{equation}\label{Training_sample} \begin{aligned} \boldsymbol{S}_{i,d}=\left[{\textrm{Re}}\left({\textrm{vec}}({{\bf{H}}^{(r,c)}_{{{i}}}})\right),\textrm{Im}\left({\textrm{vec}}({{\bf{H}}^{(r,c)}_{{{i}}}})\right)\right]^{T} \in \mathbb{R}^{2 N_r N_t}, \end{aligned} \end{equation} where $\textrm{Re}(\cdot)$ and $\textrm{Im}(\cdot)$ denote the real and imaginary parts, respectively. Moreover, we have \begin{equation} \{{\bf{H}}^{(r,c)}_{{{i}}}= \mathcal{CN}({{\bf{H}}^{(r)}_{{{i}}}}, \sigma^2_{\text{Train}})\vert r\in(1,...,R_i), c\in(1,..., C_i)\}, \end{equation} which are noisy channels generated from $R_i$ different realizations, with $C_i$ training samples per realization, so that UAV $i$ collects $D_i=R_i C_i$ samples in total.
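As a deliberately small-scale illustration of this training pipeline and of the regularized least-squares output layer used by the ELM, the sketch below generates noisy channel samples, computes SVD-based targets from the clean channels, and solves the local loss in closed form, $x_i=(\bm{W}_i^T\bm{W}_i+\lambda_e\bm{I})^{-1}\bm{W}_i^T\bm{T}_i$. The sizes, the average-power noise model, and the Gaussian hidden weights are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy sizes, chosen only for illustration
Nt, Nr, Ns, Q, lam = 4, 2, 2, 32, 1e-2
R, C, snr_train_db = 5, 10, 10

def f_opt(H, Ns):
    """Optimal FD beamformer: the Ns leading right singular vectors of H."""
    _, _, Vh = np.linalg.svd(H)
    return Vh.conj().T[:, :Ns]

samples, targets = [], []
for _ in range(R):
    H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    F = f_opt(H, Ns)
    t_vec = np.concatenate([F.real.ravel(), F.imag.ravel()])  # target T_{i,d}
    # simplified noise model: variance set from the average channel power
    sigma = np.sqrt(np.mean(np.abs(H) ** 2) / 10 ** (snr_train_db / 10))
    for _ in range(C):
        noise = sigma * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)
        Hn = H + noise
        samples.append(np.concatenate([Hn.real.ravel(), Hn.imag.ravel()]))  # sample S_{i,d}
        targets.append(t_vec)
S, T = np.array(samples), np.array(targets)

# random sigmoid hidden layer: a_q, b_q are drawn once and never tuned (ELM)
A = rng.standard_normal((Q, S.shape[1]))
b = rng.standard_normal(Q)
W = 1.0 / (1.0 + np.exp(-(S @ A.T + b)))  # hidden-layer output matrix

# closed-form ridge solution of the local loss f_i
x_out = np.linalg.solve(W.T @ W + lam * np.eye(Q), W.T @ T)
```

The closed-form solve is what makes the ELM attractive on hardware-limited UAVs: only the $Q\times 2N_tN_s$ output weights are learned, and they satisfy a single linear system.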
To specify the variance of the additive white Gaussian noise (AWGN) added to the desired signals, each channel entry satisfies $\text{SNR}_{\text{Train}} (\text{dB})=\big|{[{\bf{H}}^{(r,c)}_{{{i}}}]_{(.,.)}}\big|^2-[\sigma^2_{{\text{Train}}{(.,.)}}]$ in dB, with $\sigma^2_{\text{Train}}$ being the noise variance. Similarly, the target is obtained by substituting ${\bf{H}}^{(r)}_{{{i}}}$ into (\ref{F_opt}), that is \begin{equation}\label{Training_target} \begin{aligned} \boldsymbol{T}_{i,d}=\left[\text{Re}\left(\text{vec}\big({\bf{F}}_{i}^{\text{opt}{(r)}}\big)\right),\text{Im}\left({\text{vec}}\big({\bf{F}}_{i}^{\text{opt}{(r)}}\big)\right)\right] \in \mathbb{R}^{2 N_t N_s}. \end{aligned} \end{equation} Thus, accurate labels for the training samples are provided appropriately. Following the principles of ELM in \cite{HG11}, the output of an ELM model at UAV $i$ with $Q$ hidden nodes is given by \begin{equation}\label{ELM_output} \begin{aligned} \boldsymbol{Y}_{i,Q}(\boldsymbol{S}_{i,d})=\sum_{q=1}^{Q}x_{i,q} G_{i, q}(\boldsymbol{S}_{i,d})=\boldsymbol{G}_i(\boldsymbol{S}_{i,d})x_i, \end{aligned} \end{equation} where $x_i\in \mathbb{R}^{Q\times2N_tN_s}$ is the output weight matrix and $\boldsymbol{G}_i(\boldsymbol{S}_{i,d})$ denotes the feature mapping of the training input $\boldsymbol{S}_{i,d}$ from $2{N_r}{N_t}$ to $Q$ dimensions. Given the randomized weights $\{\mathbf{a}_{q}\}$ connecting the input nodes to the $q$-th hidden node and the biases $\{b_{q}\}$, any nonlinear piecewise continuous function can be used as the activation function of the hidden nodes \cite{HG11}. For example, the well-known Sigmoid function is given by \begin{equation}\label{Sigmoid} \begin{aligned} G_{i, q}(\mathbf{a}_{q}, b_{q}, \boldsymbol{S}_{i,d}) =(1+\exp(-(\mathbf{a}_{q}\boldsymbol{S}_{i,d}+b_{q})))^{-1}.
\end{aligned} \end{equation} To this end, we implement ISPW-ADMM with the local loss function \begin{equation}\label{optimum_LossF} \begin{aligned} f_{i}(x_i;\bm \zeta_i )= \frac{1}{2\lambda_{e}}\| \bm{W}_i x_i- \bm{T}_{i}\|^{2} + \frac{1}{2}\| x_i\|^{2}, \end{aligned} \end{equation} with $\boldsymbol{W}_i=[\bm{G}_i(\bm{S}_{i,1}),...,\bm{G}_i(\boldsymbol{S}_{i,D_i})]^T\in \mathbb{R}^{D_i\times Q}$ being the hidden-layer output matrix at agent $ i$, and $\lambda_e$ the tradeoff parameter between the separating margin and the training error. \begin{figure*}[t] \vskip -0.2 in \minipage{0.3333\textwidth} \includegraphics[width=\linewidth]{tu1} \caption{ Testing NMSE vs running time.}\label{[NMSE_Time]} \endminipage\hfill \minipage{0.3333\textwidth} \includegraphics[width=\linewidth]{tu2} \caption{Testing spectral efficiency vs $\text{SNR}_{\text{Test}}$.}\label{[Rate_SNR]} \endminipage\hfill \minipage{0.3333\textwidth}% \includegraphics[width=\linewidth]{tu3} \caption{ Testing NMSE vs communication energy.}\label{[NMSE_Commu]} \endminipage \vskip -0.2 in \end{figure*} \section{Simulation} To verify the practicality of the ISPW-ADMM scheme for energy-efficient and fast network access in UAV communications, we compare ISPW-ADMM with state-of-the-art decentralized optimization methods, including W-ADMM, PW-ADMM, decentralized gradient descent (DGD) ($\alpha=10^{-2}$) \cite{DGD}, and distributed ADMM (D-ADMM) ($\rho=2$) \cite{d-admm}. Unless otherwise specified, the parameter values are $N=10$, $M=2$, $L=2$, $N_t=16$, $ N_r=1$, $R_i=10$, $Q=200$, $\lambda_e=10^{-2}$, $\eta=0.95$, $\rho=2$, $\gamma=1$, and $\tau=10$. Without loss of generality, we assume $\text{SNR}_{\text{Train}}=\text{SNR}_{\text{Test}}$, i.e., the SNRs used for training and testing are the same.
In Fig. \ref{[NMSE_Time]}, the normalized mean square error (NMSE) performance of the different decentralized algorithms is presented versus running time. It can be observed that the proposed ISPW-ADMM-based training over the dynamic connectivity graph of the UAV swarm is the most time-efficient. Even when training over a statically connected UAV swarm, the proposed ISPW-ADMM scheme still performs better than the synchronous W-ADMM and PW-ADMM methods, because ISPW-ADMM with inexact updates outperforms the conventional stochastic synchronous and asynchronous methods. For the robust beamforming design in the proposed FL-enabled ELM learning model, the spectral efficiency achieved by the aforementioned decentralized methods is presented versus $\text{SNR}_{\text{Test}}$ in Fig. \ref{[Rate_SNR]}, under both perfect and imperfect CSI, with $\rho_r=20$ dB. Overall, the performance of all schemes increases with $\text{SNR}_\text{Test}$, except for the FD beamformer with perfect CSI, which is immune to noise. The benchmark is the imperfect-CSI-based FD scheme without ELM training. Since the ELM extracts new features from noisy databases, the ELM-based FD is robust against noisy channels, even with extremely large noise added. Moreover, since the channel classes in one region are fewer than those in multiple regions (e.g., two regions), the one-region case (denoted by Region-1) performs better than the two-region case (i.e., Region-2). From these results, one can find that the stochastic data-driven ISPW-ADMM algorithm achieves higher spectral efficiency than PW-ADMM, especially in the Region-2 case. The testing NMSE versus communication cost is shown in Fig. \ref{[NMSE_Commu]}; this is another vital metric for the energy-limited UAV community, as lower communication cost indicates higher communication efficiency.
Herein, we consider unicast transmission, and the communication cost of transmitting a $Q$-dimensional vector is normalized to one unit. It is clear that ISPW-ADMM is the most energy-efficient scheme compared with W-ADMM, DGD, and D-ADMM. This is because all links in DGD and D-ADMM are active in each iteration, which consumes more energy for information sharing. Thus, the proposed ISPW-ADMM can be effectively used to realize robust beamforming design for fully decentralized UAV communications. \section{Conclusion} In this work, we have proposed the multiple random walk mechanism for ISPW-ADMM-based consensus optimization, with which fully decentralized FL tasks over time-varying graphs can be solved while maintaining high communication/learning efficiency and enhanced privacy preservation. Moreover, given the unbalanced data collected in practice, the stochastic gradients and biased first-order moment estimation leveraged in ISPW-ADMM can guarantee fast convergence. Finally, a specific on-board mission, i.e., the ELM-enabled robust beamforming design, has been presented to further verify the effectiveness of ISPW-ADMM in wireless applications, as confirmed by the presented numerical results. \bibliographystyle{IEEEtran}
\section{Introduction} With the explosive growth of the Internet-of-Things, a massive number of users and devices will access wireless networks at the same time~\cite{B_19WireFuture,J_LiuMA18_R2,J_Zhang19prospective}. However, the large amount of data flowing over various wireless propagation channels poses significant privacy and security challenges to the next-generation wireless system design~\cite{J_19SurveyFutureNOMA}. Although physical layer security exploits the channel capacity difference between the legitimate user and the eavesdropper to protect the legitimate transmission, this technique, when applied in its primitive form, is vulnerable in multiple access systems since eavesdroppers have more targets to choose from~\cite{J_Wu18Survey}. Fortunately, the non-orthogonal multiple access (NOMA) technique provides a promising solution by serving a group of legitimate users through power-domain multiplexing~\cite{J_Wei_DN20}, thus generating artificial interference to the eavesdroppers~\cite{J_SeNOMA19_Zeng}. Pioneering works on NOMA with security considerations assume perfect channel state information (CSI) of the legitimate users' channels and imperfect CSI of the eavesdropper's channel~\cite{J_Sun18NOMA,J_Feng19SeNoma,J_YueSecure20NOMA}. Under this assumption, secure transmission with NOMA achieves a higher secure transmission rate (i.e., transmission rate with the secrecy outage probability constraint satisfied) than time-division and frequency-division multiple access~\cite{J_Wang19SecureNOMI}. While these results are encouraging, the assumption of perfect CSI of the legitimate users' channels is too strong in practice, especially with massive users, where a large number of CSIs from legitimate users need to be fed back to the base station (BS). As a result, limited feedback or vector quantization error is inevitable. Due to the uncertainty in the legitimate users' CSIs, an outage may also occur during the legitimate transmission~\cite{J_Wei17OptPower}.
To ensure that both the legitimate transmission outage probability and the secrecy outage probability are within tolerable levels, the secure transmission scheme design should incorporate two probabilistic constraints, which unfortunately do not admit closed-form expressions for further analysis. To overcome this challenge, this paper adopts the quantization cell approximation~\cite{J_Yoo_pdf07} and the Bernstein-type inequality~\cite{J_Bech09Berntein} to transform the intractable probabilistic constraints into deterministic ones. Furthermore, by introducing an auxiliary variable and leveraging the Lambert W function, the constraints are decoupled. Then, the resultant problem is readily solved via a block coordinate ascent approach~\cite{C_Xie18BCA}, where the branch-and-bound algorithm~\cite{J_Mult_FP01} and difference of convex (DC) programming~\cite{J_An05Opt} are respectively used to handle each subproblem. While the above algorithm is a workable solution, it does not scale well with network size. When the network is large, both the branch-and-bound algorithm and DC programming would be too time-consuming. To scale the transmission scheme to massive access, a fast first-order algorithm is further proposed by exploiting the block separability of the constraints under the alternating maximization framework~\cite{Bert1989Padc}. In particular, to replace the branch-and-bound algorithm, the multiple-ratio structure of the objective function is exploited, and a quadratic transform is employed to obtain an iterative algorithm that converges to a locally optimal solution. On the other hand, to get around the DC programming, a projected-gradient method~\cite{B_Beck09_AM} is employed to obtain a stationary point. It is proved that the overall first-order algorithm is guaranteed to converge.
Furthermore, simulation results demonstrate that the proposed first-order algorithm reduces the computation time by at least two orders of magnitude compared to the method employing branch-and-bound and DC programming, while achieving the same performance. Finally, simulation results show that the resultant transmission scheme achieves a significantly higher security guaranteed sum-rate than the orthogonal multiple access scheme and the NOMA scheme ignoring CSI uncertainty. The rest of this paper is organized as follows. The system model and the security guaranteed sum-rate maximization problem are formulated in Section~\ref{Sec:II}. Then, the probabilistic constraints are transformed into deterministic constraints in Section~\ref{Sec:III}. In Sections~\ref{Sec:IV} and~\ref{Sec:V}, a conventional solution and a first-order algorithm are respectively proposed. Simulation results are presented in Section~\ref{Sec:VI}. Finally, conclusions are drawn in Section~\ref{Sec:VII}. $\mathit{Notation:}$ Column vectors and matrices are denoted by lowercase and uppercase boldface letters, respectively. Conjugate transpose, transpose, Frobenius norm, trace, the modulus of a scalar and mathematical expectation are denoted by $(\cdot)^H$, $(\cdot)^T$, $\|\cdot\|_F$, $\mathrm{Tr}(\cdot)$, $|\cdot|$ and $\mathbb{E}\{\cdot\}$, respectively. The notations $[x]^+$ and $\mathrm{Pr}(\cdot)$ stand for $\max\{x,0\}$ and probability, respectively. $\mathcal{CN}\left(0,b\right)$ denotes the circularly symmetric complex normal distribution with zero mean and variance $b$, and $\mathrm{Exp}(\lambda)$ denotes the exponential distribution with mean $\lambda$. The principal branch of the Lambert W function is defined by $W_0(x)e^{W_0(x)}=x$ for $x\geq-1/e$ with $W_0(x)\geq-1$~\cite{Lambert_W96}.
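For reference, the principal branch $W_0$ can be evaluated numerically, e.g., with Newton's method on $we^w=x$. The short pure-Python routine below is an illustrative sketch; in practice a library routine such as SciPy's \texttt{lambertw} would be used.

```python
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W0 via Newton's method on w*exp(w) = x."""
    assert x >= -1.0 / math.e        # W0 is real only on [-1/e, inf)
    w = 0.0 if x < 1.0 else math.log(x)   # rough starting point
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w
```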
\section{System Model and Problem Formulation}\label{Sec:II} \begin{figure}[tb] \centering \includegraphics[scale=0.52]{Fig1.pdf} \caption{System model of downlink massive access secure NOMA network.}\label{fig:SystemModel} \end{figure} We consider a downlink secure multiple-input single-output (MISO) system as shown in Fig.~\ref{fig:SystemModel}. It consists of one $N$-antenna BS, $M$ clusters of single-antenna legitimate users with cluster $m$ containing $K_m$ users, and $J$ passive single-antenna non-colluding eavesdroppers (Eves) who potentially wiretap any message being sent. This paper considers the massive access setting in which $\sum_{m=1}^{M} K_m > M$ and assumes that all wireless channels experience quasi-static Rayleigh block fading (i.e., all channels follow zero-mean complex Gaussian distributions). The main channel (i.e., from the BS to user $k$ in cluster $m$) and the eavesdropper's channel (i.e., from the BS to Eve $j$) are respectively denoted by $d^{-\alpha/2}_{m,k}\mathbf{g}_{m,k}\in \mathbb{C}^{N\times 1}$ and $d^{-\alpha/2}_{e,j}\mathbf{g}_{e,j}\in \mathbb{C}^{N\times 1}$, where $\mathbf{g}_{m,k}\sim \mathcal{CN}(\mathbf{0},\mu^2_{m,k}\mathbf{I}_N),\mathbf{g}_{e,j}\sim \mathcal{CN}(\mathbf{0},\mu^2_{e,j}\mathbf{I}_N)$ are the small-scale fading vectors, $d_{m,k}$ and $d_{e,j}$ respectively denote the distances from the BS to user $k$ in cluster $m$ and to Eve $j$, and $\alpha$ denotes the path-loss exponent. At network initialization, a randomly generated codebook consisting of $M=2^B$ unit-norm vectors, each of length $N$ (denoted by $\{\hat{\mathbf{g}}_{m}\}_{m=1}^M$), is designed off-line and made known to both the BS and the users via a codebook distribution scheme~\cite{B_Clerckx13}. Then, the BS sends a sequence of training symbols to all users, who perform channel estimation to obtain knowledge of their own channels.
Since channel estimation error is negligible when the training sequence is long and when signal-to-noise ratio is high~\cite{J_YCestimation08}, it is assumed that the CSI of the main channel is accurate at the users. After channel quantization (using the codebook obtained earlier from the BS), each user conveys its channel direction information (CDI) using $B$ bits over the feedback channel to the BS. Due to the limited feedback per channel coherence block, the CSI of the main channel obtained at the BS is imperfect~\cite{J_Kout12DownSDMA}. But the BS can still group users into clusters based on the obtained CDI. Employing the well-known quantization cell approximation~\cite{J_Jindal06Finite}, the users having the maximum inner product between their small-scale fading vectors and $\hat{\mathbf{g}}_{m}$ are assigned to cluster $m$, and the value of $K_m$ is automatically obtained after the grouping. The normalized channel vector $\tilde{\mathbf{g}}_{m,k}:=\mathbf{g}_{m,k}/\|\mathbf{g}_{m,k}\|$ and $\hat{\mathbf{g}}_{m}$ are related by~\cite{J_Kout12DownSDMA} \begin{equation}\label{eq:rela_3_gk_channel_imperfect} \tilde{\mathbf{g}}_{m,k}=(\cos\beta_{m,k} ) \hat{\mathbf{g}}_{m}+(\sin \beta_{m,k}) {\mathbf{e}}_{m,k}, \end{equation} where ${\mathbf{e}}_{m,k}\in \mathbb{C}^{N\times 1}$ is the unit-norm quantization error vector isotropically distributed in the nullspace of $\hat{\mathbf{g}}_{m}$, and $\beta_{m,k}$ represents the angle between $\tilde{\mathbf{g}}_{m,k}$ and $\hat{\mathbf{g}}_{m}$ with $\sin^2 \beta_{m,k}$ being a random variable with variance determined by $B$~\cite{J_Jindal06Finite}. 
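The decomposition in~\eqref{eq:rela_3_gk_channel_imperfect} can be reproduced numerically. In the sketch below, the complex inner product $\hat{\mathbf{g}}_{m}^H\tilde{\mathbf{g}}_{m,k}$ plays the role of $\cos\beta_{m,k}$ (its phase is absorbed into the coefficient), and the error direction is obtained by projecting the codeword out of the channel; the sizes and the random codeword are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4  # number of BS antennas (toy size)

def unit(v):
    return v / np.linalg.norm(v)

g = unit(rng.standard_normal(N) + 1j * rng.standard_normal(N))      # normalized channel
g_hat = unit(rng.standard_normal(N) + 1j * rng.standard_normal(N))  # quantized codeword

c = np.vdot(g_hat, g)             # complex correlation, |c| = cos(beta)
r = g - c * g_hat                 # residual orthogonal to the codeword
sin_beta = np.linalg.norm(r)
e = r / sin_beta                  # unit quantization-error direction

g_rec = c * g_hat + sin_beta * e  # decomposition (phase absorbed in c)
```

By construction $\hat{\mathbf{g}}^H\mathbf{e}=0$ and $|\hat{\mathbf{g}}^H\tilde{\mathbf{g}}|^2+\sin^2\beta=1$, matching the isotropic-error model above.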
By employing NOMA transmission, the superimposed signal $\mathbf{x}$ transmitted by the BS is $\mathbf{x}=\sum_{m=1}^{M}\left(\mathbf{w}_m\sum_{k=1}^{K_m}\sqrt{P\theta_{m,k}}s_{m,k}\right)$, where $\mathbf{w}_m\in \mathbb{C}^{N\times 1}$ is the unit-norm beamforming vector for cluster $m$; $P$ represents the total transmit power; and $\theta_{m,k}$ and $s_{m,k}$ respectively denote the power allocation ratio and the information-bearing signal for user $k$ in cluster $m$, with $\mathbb{E}\{|s_{m,k}|^2\}=1$. The received signals at user $k$ in cluster $m$ and at Eve $j$ are respectively given by \begin{equation}\label{eq:user_signal_mk} y_{m,k}=d^{-\alpha/2}_{m,k}\mathbf{g}^H_{m,k} \sum_{i=1}^{M}\left(\mathbf{w}_i\sum_{v=1}^{K_i}\sqrt{P\theta_{i,v}}s_{i,v}\right)+n_{m,k}, \end{equation} \begin{equation} y_{e,j}=d^{-\alpha/2}_{e,j}\mathbf{g}^H_{e,j}\sum_{i=1}^{M}\left(\mathbf{w}_i\sum_{v=1}^{K_i}\sqrt{P\theta_{i,v}}s_{i,v}\right)+n_{e,j}, \end{equation} where $n_{m,k}\sim \mathcal{CN}(0,\sigma^2_b)$ and $n_{e,j}\sim \mathcal{CN}(0,\sigma^2_e)$ are the receiver noises at user $k$ in cluster $m$ and at Eve $j$, respectively. To suppress the inter-cluster interference, we employ zero-forcing beamforming based on $\{\hat{\mathbf{g}}_{m}\}_{m=1}^M$, and the normalized beamformer $\mathbf{w}_m$ for cluster $m$ is chosen to satisfy~\cite{J_Kout12DownSDMA,J_Wang19SecureNOMI} \begin{equation}\label{eq:zf_beamformer} \hat{\mathbf{g}}^H_n\mathbf{w}_m=0, \quad \forall n\neq m, ~ n\in\{1,\ldots,M\}. \end{equation} On the other hand, to suppress the intra-cluster interference, we can apply successive interference cancellation (SIC) in each cluster based on $\{d_{m,k}\}_{k=1}^{K_m}$~\cite{J_Yang16ImpefectCSI}. Since the user locations are fixed, the distances and path losses are deterministic, and $\{d_{m,k}\}_{k=1}^{K_m}$ can be obtained through the Global Positioning System (GPS) or estimated based on a signal propagation model~\cite{J_Andes95_Prog}. 
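A beamformer satisfying~\eqref{eq:zf_beamformer} can be constructed in several ways; a minimal Python sketch is given below, which projects $\hat{\mathbf{g}}_{m}$ onto the orthogonal complement of the other codebook vectors (one common construction, assumed here since the paper does not prescribe a particular one; $N$ and $M$ are illustrative, with $N>M-1$ assumed so the nullspace is nontrivial):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 4        # antennas and clusters (illustrative; N > M - 1 assumed)

# Random unit-norm codebook {g_hat_m}, stored as columns.
G_hat = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
G_hat /= np.linalg.norm(G_hat, axis=0)

def zf_beamformer(G_hat, m):
    """Unit-norm w_m with g_hat_n^H w_m = 0 for all n != m, obtained by
    projecting g_hat_m onto the orthogonal complement of the other
    codebook vectors."""
    A = np.delete(G_hat, m, axis=1)                  # the other g_hat_n
    P = np.eye(G_hat.shape[0]) - A @ np.linalg.pinv(A)  # nullspace projector
    w = P @ G_hat[:, m]
    return w / np.linalg.norm(w)

W = np.stack([zf_beamformer(G_hat, m) for m in range(M)], axis=1)
```

Each column of `W` then nulls the quantized directions of all other clusters, which is exactly the property used later to cancel the $\hat{\mathbf{g}}_{m}$-component of the inter-cluster interference.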
Without loss of generality, it is assumed that $d_{m,1}<\dots<d_{m,K_m}$. Furthermore, since the interference cancellation is conducted at the user side for power domain downlink NOMA~\cite{J_Ding17App}, perfect SIC can be achieved at the users~\cite{J_Yang16ImpefectCSI,J_Fang17Joint}. As a result, the received signal-to-interference-plus-noise ratio (SINR) at user $k$ in cluster $m$ is given by~\cite{J_Chen18NOMASecure} \begin{align} &\rho_{m,k} \nonumber\\ =&\frac{|\frac{\mathbf{g}^H_{m,k}\mathbf{w}_m}{\mu_{m,k}}|^2 \theta_{m,k}}{|\frac{\mathbf{g}^H_{m,k} \mathbf{w}_m}{\mu_{m,k}}|^2\sum\limits_{i=1}^{k-1}\theta_{m,i}+{P_m}\sum\limits_{v\neq m}|\frac{\mathbf{g}^H_{m,k}\mathbf{w}_v}{\mu_{m,k}}|^2+\frac{1}{\gamma_{m,k}}}\label{eq:phi_User}\\ =&\frac{|\frac{\mathbf{g}^H_{m,k}\mathbf{w}_m}{\mu_{m,k}} |^2\theta_{m,k}}{|\frac{{\mathbf{g}}^H_{m,k}\mathbf{w}_m}{\mu_{m,k}} |^2\!\sum\limits_{i=1}^{k-1}\!\theta_{m,i}\!+\!P_m\|\!\frac{\sin \!\beta_{m,k}{\mathbf{g}}_{m,k}}{\mu_{m,k}}\!\|^2\!\sum\limits_{v\neq m} \!|\mathbf{e}^H_{m,k}\! \mathbf{w}_v|^2\!+\!\frac{1}{\gamma_{m,k}}},\label{eq:SIR_User} \end{align} where~\eqref{eq:SIR_User} is obtained by putting~\eqref{eq:rela_3_gk_channel_imperfect} and~\eqref{eq:zf_beamformer} into~\eqref{eq:phi_User}, $\gamma_{m,k}=P\mu^2_{m,k}d^{-\alpha}_{m,k}/\sigma^2_b$, and $P_m=\sum_{k=1}^{K_m}\theta_{m,k}$ is the transmit power allocated to cluster $m$, which satisfies $\sum_{m=1}^{M}P_m=1$. In contrast, it is assumed that the Eves have no information about the SIC decoding order within a cluster. 
Accordingly, they cannot perform SIC within a cluster, and the corresponding SINR for eavesdropping on user $k$ in cluster $m$ at Eve $j$ is given by~\cite{J_Chen18NOMASecure} \begin{equation}\label{eq:SIR_Eve} q^j_{m,k}=\frac{|\frac{{\mathbf{g}}^H_{e,j} \mathbf{w}_m}{\mu_{e,j}}|^2\theta_{m,k}}{|\frac{{\mathbf{g}}^H_{e,j}\mathbf{w}_m}{\mu_{e,j}}|^2\sum\limits_{i\neq k}\theta_{m,i}+{P_m}\sum\limits_{v\neq m}|\frac{{\mathbf{g}}^H_{e,j}\mathbf{w}_v}{\mu_{e,j}}|^2+\frac{1}{\gamma_{e,j}}}, \end{equation} where $\gamma_{e,j}=P\mu^2_{e,j}d^{-\alpha}_{e,j}/\sigma^2_e$. In light of~\eqref{eq:SIR_User} and~\eqref{eq:SIR_Eve}, the channel capacity for user $k$ in cluster $m$ and the corresponding eavesdropping capacity at Eve $j$ are given by $\log_2\left(1+\rho_{m,k}\right)$ and $\log_2\left(1+q^j_{m,k}\right)$, respectively. For the main channel, the messages can be reliably received by user $k$ in cluster $m$ when the corresponding transmission rate $R_{m,k}$ satisfies $R_{m,k}\leq \log_2\left(1+\rho_{m,k}\right)$~\cite{J_Wynner75The}. However, since there is only one CDI feedback per cluster, the inter-cluster interference cannot be perfectly removed from the users' point of view. The residual interference is reflected by the term $\sum_{v\neq m} |\mathbf{e}^H_{m,k} \mathbf{w}_v|^2$ in~\eqref{eq:SIR_User}, and therefore $\log_2\left(1+\rho_{m,k}\right)$ is not exactly known at the BS. As a result, reliable transmission cannot always be guaranteed, because it is possible that $R_{m,k}>\log_2\left(1+\rho_{m,k}\right)$. Hence, the connection outage probability (COP) of user $k$ in cluster $m$ is expressed as~\cite{J_Hu18SecureIoT} \begin{equation}\label{eq:ROP_user_k_ori} \begin{split} \mathrm{COP}: \quad p^{m,k}_{co} &=\mathrm{Pr}\left\{R_{m,k}>\log_2\left(1+\rho_{m,k}\right)\right\}. 
\end{split} \end{equation} For Eve $j$'s channel, since the BS does not have its perfect CSI due to the passive nature of Eve $j$, the knowledge of $q^j_{m,k}$ is uncertain~\cite{J_Wang17SeMulti}. Consequently, a secrecy outage event occurs at the BS when $\log_2\left(1+q^j_{m,k}\right)$ exceeds the redundancy rate of user $k$ in cluster $m$, denoted by $D^j_{m,k}$, and the secrecy outage probability (SOP) of user $k$ in cluster $m$ for Eve $j$ is given by~\cite{J_Wang17SeMulti} \begin{equation}\label{eq:SOP_Cons} \begin{split} \mathrm{SOP}: \quad p^{m,k,j}_{so} &=\mathrm{Pr}\left\{D^j_{m,k}<\log_2\left(1+q^j_{m,k}\right)\right\}. \end{split} \end{equation} Considering the non-collaborative eavesdropping model, in which the Eves do not exchange their observations or outputs, the achievable secrecy rate for user $k$ in cluster $m$ is given by~\cite{J_multi08_eves} $\min\limits_{1\leq j\leq J}~[R_{m,k}-D^j_{m,k}]^+$, which is the minimum of the secrecy rates over all Eves. For user $k$ in cluster $m$, maximizing $1-p^{m,k}_{co}$ improves transmission reliability, while maximizing the achievable secrecy rate $\min\limits_{1\leq j\leq J}~[R_{m,k}-D^j_{m,k}]^+$ improves transmission security. Considering both the reliability and security requirements for all users, we aim to maximize $\sum_{m=1}^{M}\sum_{k=1}^{K_m}\left(1-p^{m,k}_{co}\right)\min\limits_{1\leq j\leq J}\left[R_{m,k}-D^j_{m,k}\right]^+$, which is the security guaranteed sum-rate. Since $p^{m,k}_{co}$ in~\eqref{eq:ROP_user_k_ori} is independent of $D^j_{m,k}$, the security guaranteed sum-rate is equivalent to $\min\limits_{1\leq j\leq J}\{ \sum_{m=1}^{M}\sum_{k=1}^{K_m}\left(1-p^{m,k}_{co}\right)\! [R_{m,k}-D^j_{m,k}]^+ \}$. Considering the outage probability constraints, the transmission design is thus given by the following optimization problem \begin{subequations}\label{eq:max_EE} \begin{align} \mathcal{P}0:\max_{\mathcal{S}}& \min\limits_{1\leq j\leq J}\left\{\! 
\sum_{m=1}^{M}\sum_{k=1}^{K_m}\left(1-p^{m,k}_{co}\right) \left[R_{m,k}-D^j_{m,k}\right]^+\!\right\}, \label{obj:P0_perform} \\ \mathrm{s.t.}\quad & p^{m,k}_{co} \leq \delta, ~\forall m,k, \label{P0:cop_initial} \\ & p^{m,k,j}_{so} \leq \varepsilon, ~ \forall m, k, j, \label{P0:SOP_initial} \\ & \sum_{k=1}^{K_m}\theta_{m,k}={P_m}, ~ \forall m, \label{eq:simplex_theta_power} \end{align} \end{subequations} where $\mathcal{S}=\{R_{m,k}\geq 0,D^j_{m,k}\geq 0,\theta_{m,k}\geq 0\}$, and $\delta\in(0,1)$ and $\varepsilon\in(0,1)$ are the predefined upper bounds representing the maximum tolerable COP and SOP, respectively.\footnote{This formulation can be easily extended to the scenario with user-specific and eavesdropper-specific upper bounds due to the parallel structure of our proposed algorithm shown in Section~\ref{Sec:IV}.} Problem $\mathcal{P}0$ provides an elegant formulation that captures the secure transmission performance via~\eqref{obj:P0_perform} and the reliability and secrecy outage requirements via~\eqref{P0:cop_initial} and~\eqref{P0:SOP_initial}. It is more general than previous formulations for secure NOMA transmission under perfect CSI of the main channel (i.e., setting $\delta=0$ and $J=1$ reduces $\mathcal{P}0$ to the formulation in~\cite{J_Wang19SecureNOMI}). By solving $\mathcal{P}0$, we obtain not only a transmission scheme $\mathcal{S}$ that maximizes the security guaranteed sum-rate, but also the maximum value of the security guaranteed sum-rate by evaluating the objective function at the optimal solution. However, the challenges in solving $\mathcal{P}0$ lie in the non-concavity of the objective function and the probabilistic constraints, which do not admit simple closed-form expressions. To circumvent these challenges, in the next section we derive a closed-form expression of the COP constraint and a tight approximation to the SOP constraint. 
\section{Handling the Probabilistic Constraints in $\mathcal{P}0$}\label{Sec:III} We first handle the outage probability of the main channel. Putting~\eqref{eq:SIR_User} into~\eqref{eq:ROP_user_k_ori}, $p^{m,k}_{co}$ is expressed as~\eqref{eq:COP_cf_inter} and admits the closed form~\eqref{eq:COP_cf_final}, both shown at the top of this page, \begin{figure*} \normalsize \begin{align} p^{m,k}_{co} &=\mathrm{Pr}\left\{R_{m,k}>\log_2\left(1+\rho_{m,k}\right)\right\} \nonumber\\ &=\mathrm{Pr}\left\{2^{R_{m,k}}-1\!>\!\frac{|\frac{\mathbf{g}^H_{m,k}\mathbf{w}_m}{\mu_{m,k}} |^2\theta_{m,k}}{|\frac{{\mathbf{g}}^H_{m,k}\mathbf{w}_m}{\mu_{m,k}} |^2\sum\limits_{i=1}^{k-1}\theta_{m,i}+{P_m}\|\frac{{\mathbf{g}}_{m,k}}{\mu_{m,k}}\|^2\sin^2 \beta_{m,k}\sum\limits_{v\neq m} |\mathbf{e}^H_{m,k} \mathbf{w}_v|^2 +\frac{1}{\gamma_{m,k}}}\right\} \label{eq:COP_cf_inter} \\ &=1-\exp\left(\frac{1-2^{R_{m,k}}}{\left(\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}\right)2\gamma_{m,k}}\right) \left(1+\frac{2^{R_{m,k}}-1}{\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}}\frac{{P_m}2^{-\frac{B}{N-1}}}{2}\right)^{1-M},\forall m, k \label{eq:COP_cf_final}, \end{align} \hrulefill \end{figure*} where~\eqref{eq:COP_cf_final} is derived in Appendix~\ref{ref:der_cop_mk}. 
On the other hand, putting~\eqref{eq:SIR_Eve} into~\eqref{eq:SOP_Cons}, $p^{m,k,j}_{so}$ is rewritten as \begin{align}\label{eq:SOP_origi} &p^{m,k,j}_{so} \nonumber \\ =&\mathrm{Pr}\left\{D^j_{m,k}<\log_2\left(1+q^j_{m,k}\right)\right\}\nonumber\\ =&\mathrm{Pr}\left\{\!2^{D^j_{m,k}}-1\!<\!\frac{|\!\frac{{\mathbf{g}}^H_{e,j}\mathbf{w}_m}{\mu_{e,j}} \!|^2\theta_{m,k}}{|\!\frac{{\mathbf{g}}^H_{e,j}\!\mathbf{w}_m}{\mu_{e,j}} \!|^2\!\sum\limits_{i\neq k}\theta_{m,i}\!+\!P_m\!\sum\limits_{v\neq m}\!|\!\frac{{\mathbf{g}}^H_{e,j}\!\mathbf{w}_v}{\mu_{e,j}}\!|^2\!+\!\frac{1}{\gamma_{e,j}}}\!\right\}\nonumber \\ =&\mathrm{Pr}\left\{2^{D^j_{m,k}}-1<{\frac{\mathbf{g}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{{\mathbf{g}}_{e,j}}{\mu_{e,j}}}\right\},~\forall m,k,j, \end{align} where $\mathbf{\Lambda}\in \mathbb{C}^{N\times N}$ is given by \begin{align}\label{eq:lamda_quadraticForm} \mathbf{\Lambda}= &\gamma_{e,j}\theta_{m,k}\mathbf{w}_m\mathbf{w}^H_m-\gamma_{e,j}\left(2^{D^j_{m,k}}-1\right)\mathbf{w}_m\mathbf{w}^H_m\sum_{i\neq k}\theta_{m,i}\nonumber \\ &-\gamma_{e,j}{P_m}\left(2^{D^j_{m,k}}-1\right)\sum_{v\neq m}\mathbf{w}_v\mathbf{w}^H_v. \end{align} It is observed that $p^{m,k,j}_{so}$ can be expressed via the cumulative distribution function (CDF) of an indefinite quadratic form in ${\mathbf{g}}_{e,j}/{\mu_{e,j}}\sim \mathcal{CN}(\mathbf{0},\mathbf{I}_N)$. Hence, a closed-form expression of~\eqref{eq:SOP_origi} is given by~\cite[eq. 30]{J_IQF_GausianYI16} \begin{equation}\label{eq:sop_inter_U-k} p^{m,k,j}_{so}=\sum_{i=1}^{N_p}\prod_{v\neq i}^{N}\left(1-\frac{\lambda_{v}}{\lambda_{i}}\right)^{-1}\!\exp\left(-\frac{2^{D^j_{m,k}}-1}{\lambda_{i}}\right)\!,\!\forall m,k,j, \end{equation} where $\{\lambda_{i}\}_{i=1}^N$ are the eigenvalues of $\mathbf{\Lambda}$ in descending order, and $N_p$ denotes the number of positive and distinct eigenvalues among $\{\lambda_i\}_{i=1}^N$. 
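The closed form in~\eqref{eq:sop_inter_U-k} is the tail probability of $\sum_i \lambda_i X_i$ with i.i.d.\ unit-mean exponential $X_i$ (since $|z_i|^2\sim\mathrm{Exp}(1)$ for $\mathbf{z}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_N)$), which can be cross-checked by Monte Carlo. A Python sketch with an illustrative spectrum and threshold (the expression, like~\eqref{eq:sop_inter_U-k}, assumes distinct eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(2)

def ccdf_quadratic_form(lams, x):
    """Pr{ sum_i lam_i * X_i > x } for i.i.d. X_i ~ Exp(1) and x > 0,
    i.e. the tail of g^H Lambda g with g ~ CN(0, I) and distinct
    eigenvalues lams."""
    lams = np.asarray(lams, dtype=float)
    p = 0.0
    for i, li in enumerate(lams):
        if li <= 0:
            continue           # only positive eigenvalues contribute
        others = np.delete(lams, i)
        p += np.exp(-x / li) / np.prod(1.0 - others / li)
    return p

# Illustrative indefinite spectrum and threshold x = 2^D - 1.
lams = [2.0, 0.7, -0.5, -1.3]
x = 1.5
p_closed = ccdf_quadratic_form(lams, x)

# Monte Carlo cross-check of the closed form.
X = rng.exponential(size=(400_000, len(lams)))
p_mc = np.mean(X @ np.asarray(lams) > x)
```

With this many samples, the empirical tail probability agrees with the closed form to within a few standard errors.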
By virtue of~\eqref{eq:COP_cf_final} and~\eqref{eq:sop_inter_U-k}, $\mathcal{P}0$ can be equivalently transformed into $\mathcal{P}1$, shown at the top of the next page. \begin{figure*} \normalsize \begin{subequations} \begin{align} \mathcal{P}1: \max_{\mathcal{S}}& \min\limits_{1\leq j\leq J}\left\{ \sum_{m=1}^{M}\sum_{k=1}^{K_m}\! \frac{\exp\left(-\frac{2^{R_{m,k}}-1}{\left(\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}\right)2\gamma_{m,k}}\right)[R_{m,k}-D^j_{m,k}]^+}{\left(1+\frac{2^{R_{m,k}}-1}{\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}}\frac{{P_m}2^{-\frac{B}{N-1}}}{2}\right)^{M-1}} \right\}, \label{obj:Sum_rate_P1} \\ \mathrm{s.t.}~ & \exp\left(\frac{1-2^{R_{m,k}}}{\left(\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}\right)2\gamma_{m,k}}\right)\! \left(1+\frac{(2^{R_{m,k}}-1){P_m}2^{-\frac{B}{N-1}}}{\left(\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}\right)2}\right)^{1-M} \!\geq\! 1- \delta, \forall m,k, \label{eq:pro_cons1_K} \\ &\sum_{i=1}^{N_p}\prod_{v\neq i}^{N}\left(1-\frac{\lambda_{v}}{\lambda_{i}}\right)^{-1}\exp\left(-\frac{2^{D^j_{m,k}}-1}{\lambda_{i}}\right) \leq \varepsilon, ~\forall m,k,j, \label{ineq:cons_SOP_CF} \\ & \sum_{k=1}^{K_m}\theta_{m,k}={P_m}, ~\forall m \end{align} \end{subequations} \hrulefill \end{figure*} Since $D^j_{m,k}$ is independent of $R_{m,k}$ and $\theta_{m,k}$, maximizing the objective function~\eqref{obj:Sum_rate_P1} is thereby equivalent to minimizing $D^j_{m,k}$. Notice that $D^j_{m,k}$ appears not only in the objective function~\eqref{obj:Sum_rate_P1}, but also through $\lambda_{i}$ in~\eqref{ineq:cons_SOP_CF}, making~\eqref{ineq:cons_SOP_CF} intractable. In order to proceed, we employ the following lemma, which provides a tighter constraint than~\eqref{ineq:cons_SOP_CF}. 
\begin{lemma}\label{lem:Bertin-Type} The following constraint is tighter than the SOP constraint~\eqref{ineq:cons_SOP_CF}: \begin{equation}\label{ineq:slack_SOP_trans} \begin{split} &2^{D^j_{m,k}}-1 \geq \frac{\theta_{m,k}}{\kappa_{m,k,j}+\sum\limits_{i\neq k}\theta_{m,i}}\geq 0, ~\forall m,k,j, \end{split} \end{equation} where $\kappa_{m,k,j}$ is given by \begin{equation}\label{eq:para_kmj} \begin{split} &\kappa_{m,k,j}\\ =&\frac{\frac{1}{\gamma_{e,j}} \!+\!{P_m}\mathrm{Tr}\left(\!\sum\limits_{v\neq m}\mathbf{w}_v\mathbf{w}^H_v\!\right)\!-\! {P_m}\sqrt{2\ln(\varepsilon_k^{-1})} \|\!\sum\limits_{v\neq m}\mathbf{w}_v\mathbf{w}^H_v\!\|_F }{\left(1+\ln(\varepsilon_k^{-1})+\sqrt{2\ln(\varepsilon_k^{-1})}\right)\mathrm{Tr}\left(\mathbf{w}_m\mathbf{w}^H_m\right)} \end{split} \end{equation} with tunable parameters $\{\varepsilon_k\in(0,1]\}_{k=1}^{K_m}$, which could be different from $\varepsilon$ in~\eqref{ineq:cons_SOP_CF}. \end{lemma} \begin{proof} Please see Appendix~\ref{Appd-proof:Lema1}. \end{proof} \noindent With~\eqref{ineq:cons_SOP_CF} replaced by~\eqref{ineq:slack_SOP_trans}, the minimum value of $D^j_{m,k}$ is obtained with equality in~\eqref{ineq:slack_SOP_trans} and is expressed as \begin{equation}\label{eq:opt_r_mk} \tilde{D}^j_{m,k}= \log_2\left(1+ \frac{\theta_{m,k}}{\kappa_{m,k,j} + \sum\limits_{i\neq k}\theta_{m,i}}\right),~\forall m,k,j. \end{equation} Since constraint~\eqref{ineq:slack_SOP_trans} is much tighter than~\eqref{ineq:cons_SOP_CF} in $\mathcal{P}1$, the obtained minimum $\tilde{D}^j_{m,k}$ is a conservative solution to $\mathcal{P}1$. To refine the solution, we can carefully adjust $\varepsilon_k$ to reduce the conservatism; the discussion on how to select $\varepsilon_k$ is deferred to the end of Section~\ref{Sec:V}. 
Although replacing the original SOP with a tighter constraint would result in a solution $\tilde{D}^j_{m,k}$, it is still challenging to solve $\mathcal{P}1$, since $R_{m,k}$ and $\theta_{m,k}$ are strongly coupled in the objective function and the non-convex constraint~\eqref{eq:pro_cons1_K}. To this end, we decouple the problem $\mathcal{P}1$ by introducing an auxiliary variable and provide a conventional solution in the next section. \section{Problem Decoupling and Conventional Solution}\label{Sec:IV} First, we introduce an auxiliary variable \begin{equation}\label{eq:aux_int_xi} \xi_{m,k}=\frac{2^{R_{m,k}}-1}{\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}}\geq 0,~\forall m,k. \end{equation} Putting $\xi_{m,k}$ and $\tilde{D}^j_{m,k}$ into~\eqref{obj:Sum_rate_P1}, the objective function in $\mathcal{P}1$ can be rewritten as \begin{equation}\label{eq:obj_trans_twoVars} \min\limits_{1\leq j\leq J}\left\{ \sum_{m=1}^{M} \sum_{k=1}^{K_m} \frac{\left[\log_2\left(\frac{1+\frac{\xi_{m,k}\theta_{m,k}}{1+\xi_{m,k}\sum\limits_{i=1}^{k-1}\theta_{m,i}}}{1+\frac{\theta_{m,k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\theta_{m,i}} }\right)\right]^+}{\exp\left(\frac{\xi_{m,k}}{2\gamma_{m,k}}\right)\!\left(1+\frac{\xi_{m,k}P_m}{2^{\frac{B}{N-1}+1}}\right)^{M-1}} \right\}. \end{equation} Furthermore, the constraint~\eqref{eq:pro_cons1_K} can be rewritten as \begin{equation}\label{ineq:COP_aux_P1} \exp\left(-\frac{\xi_{m,k}}{2\gamma_{m,k}}\right)\left(1\!+\!\frac{\xi_{m,k}P_m}{2^{\frac{B}{N-1}+1}}\right)^{1-M}\! \geq \! 1- \delta, ~\forall m,k. \end{equation} Based on~\eqref{ineq:COP_aux_P1}, we can establish the feasible set of $\xi_{m,k}$ with the following lemma, which is proved in Appendix~\ref{proof:lem-cf_z_ub}. 
\begin{lemma}\label{lem:clo-fom_zub} The feasible set of $\xi_{m,k}$ is $[0,\xi^{ub}_{m,k}]$, where $\xi^{ub}_{m,k}$ is given by \begin{equation}\label{eq:up_xi_CF} \begin{split} \xi^{ub}_{m,k}=&2\gamma_{m,k}(M-1)W_0\left(\frac{2^{\frac{B}{N-1}}\exp\left(\frac{2^{\frac{B}{N-1}}}{\gamma_{m,k}(M-1)P_m}\right)}{\gamma_{m,k}(M-1){P_m}(1-\delta)^{\frac{1}{M-1}}}\right)\\ &-\frac{2^{\frac{B}{N-1}+1}}{{P_m}}. \end{split} \end{equation} \end{lemma} With~\eqref{eq:obj_trans_twoVars}-\eqref{eq:up_xi_CF}, $\mathcal{P}1$ (after~\eqref{ineq:cons_SOP_CF} tightened by~\eqref{ineq:slack_SOP_trans}) is transformed into the following problem\footnote{Since $\mathcal{P}2$ is not equivalent to $ \mathcal{P}0$ due to a safe approximation, the final obtained solution to $\mathcal{P}2$ is a feasible solution to $\mathcal{P}0$.} \begin{subequations}\label{opt:P2_initial} \begin{align} \mathcal{P}2: \!\max_{\{{\xi}_{m,k},\theta_{m,k}\geq 0\}_{k=1}^{K_m}} & \! \min\limits_{1\leq j\leq J}\!\left\{ \sum\limits_{m=1}^{M} \sum_{k=1}^{K_m} \frac{\exp\left(-\frac{\xi_{m,k}}{2\gamma_{m,k}}\right) }{\left(1+\frac{\xi_{m,k}P_m}{2^{\frac{B}{N-1}+1}}\right)^{M-1}}\right. \nonumber \\ & \left. \times \left[\log_2\left(\frac{1+\frac{\xi_{m,k}\theta_{m,k}}{1+\xi_{m,k}\sum\limits_{i=1}^{k-1}\theta_{m,i}}}{1+\frac{\theta_{m,k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\theta_{m,i}} }\right)\right]^+ \right\}, \label{eq:obj_thr}\\ \mathrm{s.t.}\quad &0\leq \xi_{m,k} \leq \xi^{ub}_{m,k}, ~ \forall m,k, \label{ineq:ini_fea-xi} \\ & \sum_{k=1}^{K_m}\theta_{m,k}={P_m}, ~ \forall m. \label{eq:init-fea-the} \end{align} \end{subequations} From~\eqref{eq:para_kmj}, it is known that $\kappa_{m,k,j}$ is independent of $\xi_{m,k}$ and $\theta_{m,k}$. Hence, the operations of maximization and minimization in $ \mathcal{P}2$ can be interchanged. 
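The closed form in Lemma~\ref{lem:clo-fom_zub} can be verified numerically: substituting $\xi^{ub}_{m,k}$ back into the left-hand side of~\eqref{ineq:COP_aux_P1} should return exactly $1-\delta$. A self-contained Python sketch follows, with a simple Newton iteration standing in for the Lambert $W_0$ function and all parameter values illustrative rather than taken from the paper's simulations:

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch of the Lambert W function (w*e^w = z) for z >= 0,
    via Newton's method (scipy.special.lambertw would also work)."""
    w = math.log1p(z)          # reasonable starting point for z >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def cop_lhs(xi, gamma, P_m, M, B, N):
    """Left-hand side of the COP constraint rewritten in xi:
    exp(-xi/(2 gamma)) * (1 + xi P_m 2^{-B/(N-1)}/2)^{1-M}."""
    c = 2.0 ** (-B / (N - 1))
    return math.exp(-xi / (2.0 * gamma)) * (1.0 + xi * P_m * c / 2.0) ** (1 - M)

def xi_ub(gamma, P_m, M, B, N, delta):
    """Closed-form upper bound of Lemma 2."""
    c = 2.0 ** (B / (N - 1))
    arg = (c * math.exp(c / (gamma * (M - 1) * P_m))
           / (gamma * (M - 1) * P_m * (1.0 - delta) ** (1.0 / (M - 1))))
    return 2.0 * gamma * (M - 1) * lambert_w0(arg) - 2.0 * c / P_m

# Illustrative parameters (assumed for this check).
gamma, P_m, M, B, N, delta = 10.0, 0.25, 4, 6, 8, 0.1
xi_max = xi_ub(gamma, P_m, M, B, N, delta)
```

Since the left-hand side is decreasing in $\xi_{m,k}$, the constraint holds with equality at $\xi^{ub}_{m,k}$ and with slack for any smaller $\xi_{m,k}$.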
Furthermore, since $\{\kappa_{m,k,j}\}_{j=1}^J$ are independent of one another, we can solve $\mathcal{P}2$ by separately solving $J$ independent maximization problems and selecting the minimum value. Moreover, for subproblem $j$ (corresponding to Eve $j$), the objective function of $\mathcal{P}2$ contains a summation over $m$, and $\{{\xi}_{m,k},\theta_{m,k}\}_{m=1}^M$ are independent of one another. Hence, subproblem $j$ further reduces to $M$ parallel subproblems, where the subproblem of cluster $m$ with respect to Eve $j$ is expressed as $\mathcal{P}2^{[m,j]}$, shown at the top of the next page. \begin{figure*} \normalsize \begin{subequations}\label{opt:sub_inti} \begin{align} \mathcal{P}2^{[m,j]}: \max_{\{{\Xi}^{(m,j)}_{k},\Theta^{(m,j)}_{k}\geq 0\}_{k=1}^{K_m}} & \sum_{k=1}^{K_m} \frac{\exp\left(-\frac{\Xi^{(m,j)}_{k}}{2\gamma_{m,k}}\right)}{\left(1+\Xi^{(m,j)}_{k}\frac{{P_m}2^{-\frac{B}{N-1}}}{2}\right)^{M-1}} \left[\log_2\left(\frac{1+\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}}{1+\frac{\Theta^{(m,j)}_{k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}} }\right)\right]^+, \label{eq:max_thr}\\ \mathrm{s.t.}\quad &0\leq \Xi^{(m,j)}_{k} \leq \xi^{ub}_{m,k},~\forall k, \label{ineq:cons_xi_orthant} \\ & \sum_{k=1}^{K_m}\Theta^{(m,j)}_{k}={P_m}, \label{eq:simplex_theta} \end{align} \end{subequations} \hrulefill \end{figure*} Due to the interchange of minimization and maximization in $\mathcal{P}2$, and to emphasize that $m$ and $j$ are fixed in $\mathcal{P}2^{[m,j]}$, $\xi_{m,k}$ and $\theta_{m,k}$ are respectively relabeled as ${\Xi}^{(m,j)}_{k}$ and $\Theta^{(m,j)}_{k}$. Furthermore, since the feasible set of $\mathcal{P}2^{[m,j]}$ is a Cartesian product of closed convex sets, the objective function and constraints in $\mathcal{P}2^{[m,j]}$ are decoupled when either $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ or $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ is fixed. 
Accordingly, the optimization problem can be solved via the block coordinate ascent approach~\cite{C_Xie18BCA} by alternately updating $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ and $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$. To be specific, when fixing $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$, the subproblem of $\mathcal{P}2^{[m,j]}$ for updating $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ is formulated as the following multiple-ratio fractional programming (FP) problem~\cite{J_Tof17FP} \begin{subequations} \begin{align} \mathcal{Q}1: \quad & \max_{\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}} \quad \sum_{k=1}^{K_m} \frac{A_k(\Xi^{(m,j)}_{k})}{B_k(\Xi^{(m,j)}_{k})}, \label{obj:FP_Q1} \\ \mathrm{s.t.}\quad &0 \leq \Xi^{(m,j)}_{k} \leq \xi^{ub}_{m,k}, \quad \forall k, \label{ineq:set_xi_Q1} \end{align} \end{subequations} where $A_k(\Xi^{(m,j)}_{k})$ and $B_k(\Xi^{(m,j)}_{k})$ are given by \begin{equation} \begin{split} A_k(\Xi^{(m,j)}_{k})= \left[\log_2\left(\frac{1+\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}}{1+\frac{\Theta^{(m,j)}_{k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}} }\right)\right]^+, \end{split} \end{equation} \begin{equation} B_k(\Xi^{(m,j)}_{k})=\exp\left(\frac{\Xi^{(m,j)}_{k}}{2\gamma_{m,k}}\right) \left(1+\frac{\Xi^{(m,j)}_{k}P_m}{2^{\frac{B}{N-1}+1}}\right)^{M-1}. \end{equation} Since~\eqref{obj:FP_Q1} is a sum-of-ratios objective function, a number of approaches are able to handle this problem~\cite{J_Benson07FP,J_Bosh15SoR,J_Mult_FP01}. Among them, the branch-and-bound algorithm is a popular approach that systematically subdivides the compact feasible interval of $\Xi^{(m,j)}_{k}$ to locate the globally optimal solution of $\mathcal{Q}1$. However, the price of this method is its high computational complexity when $K_m$ is large. 
On the other hand, when fixing $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$, the subproblem of $\mathcal{P}2^{[m,j]}$ for updating $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ is formulated as \begin{equation}\label{opt:int_update_theta} \begin{split} \mathcal{D}1: &\max_{\{\Theta^{(m,j)}_{k} \geq 0\}_{k=1}^{K_m}} \sum_{k=1}^{K_m}\left[\!\log_2\left(\frac{1\!+\!\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}}{1\!+\!\frac{\Theta^{(m,j)}_{k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}}}\right)\!\right]^+, \\ & \mathrm{s.t.}~~ \sum_{k=1}^{K_m}\Theta^{(m,j)}_{k}={P_m}. \end{split} \end{equation} Notice that the objective function can be rewritten as $[F_1(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})-F_2(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})]^+$, where $F_1(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ and $F_2(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ are both concave functions and given by \begin{equation} \begin{split} F_1(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})=&\sum_{k=1}^{K_m} \log_2\left(1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k}\Theta^{(m,j)}_{i}\right)\\ &+\sum_{k=1}^{K_m}\log_2\left(\kappa_{m,k,j}+ \sum_{i\neq k}\Theta^{(m,j)}_{i}\right), \end{split} \end{equation} \begin{align} F_2(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})=&\sum_{k=1}^{K_m} \log_2\left(1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)\nonumber \\ &+\sum_{k=1}^{K_m}\log_2\left(\kappa_{m,k,j}+{P_m}\right). \end{align} As a result, the objective function of $\mathcal{D}1$ can be expressed in difference-of-convex (DC) form~\cite{J_An05Opt}, and $\mathcal{D}1$ is transformed into a DC programming problem. Then, by employing the convex-concave procedure (CCP)~\cite{J_Lipp16CCP}, a suboptimal solution of $\mathcal{D}1$ can be obtained. Since CCP needs to solve a convex quadratic problem by the interior-point method, its complexity order is $\mathcal{O}(K_m^3)$ in each iteration~\cite{Ben-TalA01}, leading to heavy computation when $K_m$ is large. 
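The identity behind the DC decomposition (that the $\mathcal{D}1$ objective equals $F_1-F_2$ on the simplex $\sum_k\Theta^{(m,j)}_{k}=P_m$, where $\sum_{i\neq k}\Theta^{(m,j)}_{i}=P_m-\Theta^{(m,j)}_{k}$) can be checked numerically. A Python sketch with assumed values of $\kappa_{m,k,j}$ and fixed $\Xi^{(m,j)}_{k}$:

```python
import numpy as np

rng = np.random.default_rng(3)
K, P_m = 5, 1.0                          # cluster size and power (illustrative)
kappa = rng.uniform(0.5, 2.0, K)         # kappa_{m,k,j} values (assumed)
xi = rng.uniform(0.1, 3.0, K)            # fixed Xi_k values (assumed)

def f_direct(theta):
    """D1 objective evaluated term by term (without the [.]^+ clipping)."""
    total = 0.0
    for k in range(K):
        head = np.sum(theta[:k])                         # sum_{i<k} theta_i
        num = 1.0 + xi[k] * theta[k] / (1.0 + xi[k] * head)
        den = 1.0 + theta[k] / (kappa[k] + (P_m - theta[k]))
        total += np.log2(num / den)
    return total

def f_dc(theta):
    """The same objective written as the DC decomposition F1 - F2."""
    csum = np.cumsum(theta)
    F1 = (np.sum(np.log2(1.0 + xi * csum))
          + np.sum(np.log2(kappa + (P_m - theta))))
    F2 = (np.sum(np.log2(1.0 + xi * (csum - theta)))
          + np.sum(np.log2(kappa + P_m)))
    return F1 - F2

theta = rng.dirichlet(np.ones(K)) * P_m  # a feasible power split
```

Both evaluations agree for any feasible power split, confirming that only the presentation, not the value, of the objective changes.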
\section{Massive Access with First-Order Algorithm}\label{Sec:V} To overcome the high computational complexity of the conventional method when the number of users $K_m$ in cluster $m$ is large, we introduce an efficient gradient-based algorithm in this section. Specifically, we provide a local optimal point for updating $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ when $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ is fixed and a stationary point for updating $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ when $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ is fixed, yielding an overall first-order algorithm under the framework of alternating maximization (AM)~\cite{J_Beck15AM}. The convergence property of the proposed algorithm is also proved in this section. \subsection{Updating $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$} To handle the multiple-ratio FP problem $\mathcal{Q}1$, we first establish the following theorem. \begin{theorem}\label{tem:FP_CCP_def} $\mathcal{Q}1$ is a multiple-ratio concave-convex FP problem with concave functions $A_k(\Xi^{(m,j)}_{k})$, convex functions $B_k(\Xi^{(m,j)}_{k})$, and a nonempty convex feasible set. \end{theorem} \begin{proof} Please see Appendix~\ref{proof:them-FP_condi}. \end{proof} \noindent Based on Theorem~\ref{tem:FP_CCP_def}, $\mathcal{Q}1$ can be equivalently transformed into~\cite{J_Yuwei18FP} \begin{subequations}\label{opt:upda_xi_org} \begin{align} \mathcal{Q}2: \!\! \max_{\{\Xi^{(m,j)}_{k},y_k \in \mathbb{R}\}_{k=1}^{K_m}} &\sum_{k=1}^{K_m}\!\underbrace{\left(\!2y_k\sqrt{ A_k(\Xi^{(m,j)}_{k})}-y_k^2B_k(\Xi^{(m,j)}_{k})\!\right)}_{:= g(\Xi^{(m,j)}_{k},y_k)}, \label{obj:q2_fun} \\ \mathrm{s.t.}\quad &0 \leq \Xi^{(m,j)}_{k} \leq \xi^{ub}_{m,k}, \quad \forall k, \end{align} \end{subequations} where $\{y_k\}_{k=1}^{K_m}$ are auxiliary variables. 
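The mechanics of this quadratic transform can be illustrated on a single ratio: for fixed $\Xi^{(m,j)}_{k}$, the surrogate $g$ is a concave quadratic in $y_k$ maximized at $y_k=\sqrt{A_k}/B_k$, at which point it recovers the original ratio $A_k/B_k$. A Python sketch with toy stand-ins for $A_k$ and $B_k$ (all parameter values are illustrative assumptions):

```python
import math

# Toy instances of the concave numerator A_k(Xi) and convex
# denominator B_k(Xi); parameter values are assumed for illustration.
def A(xi, theta_k=0.4, head=0.2, kappa=1.0, P_m=1.0):
    num = 1.0 + xi * theta_k / (1.0 + xi * head)
    den = 1.0 + theta_k / (kappa + P_m - theta_k)
    return max(math.log2(num / den), 0.0)

def B_den(xi, gamma=10.0, P_m=1.0, M=4, bits=6, N=8):
    c = 2.0 ** (-bits / (N - 1))
    return math.exp(xi / (2.0 * gamma)) * (1.0 + xi * P_m * c / 2.0) ** (M - 1)

def g(xi, y):
    """Quadratic-transform surrogate: 2 y sqrt(A) - y^2 B."""
    return 2.0 * y * math.sqrt(A(xi)) - y ** 2 * B_den(xi)

xi0 = 1.7                                 # any feasible point (assumed)
y_opt = math.sqrt(A(xi0)) / B_den(xi0)    # maximizing auxiliary variable
```

Maximizing over $y$ thus leaves the objective value of $\mathcal{Q}1$ unchanged, while making each term concave in $\Xi^{(m,j)}_{k}$ once $y_k$ is fixed.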
When $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ are fixed, the optimal $\{y^{\dagger}_k\}_{k=1}^{K_m}$ in $\mathcal{Q}2$ are derived as \begin{equation}\label{eq:update_aux_y} y_k^{\dagger}= \sqrt{A_k(\Xi^{(m,j)}_{k})}\Big/B_k(\Xi^{(m,j)}_{k}), \quad k = 1,\ldots,K_m. \end{equation} On the other hand, when $\{y_k\}_{k=1}^{K_m}$ are fixed, due to the concavity of each $A_k(\Xi^{(m,j)}_{k})$ and convexity of each $B_k(\Xi^{(m,j)}_{k})$ from Theorem~\ref{tem:FP_CCP_def}, $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)$ is concave in $\Xi^{(m,j)}_{k}$. As a result, $\mathcal{Q}2$ is a concave maximization problem over ${\Xi}^{(m,j)}_{k}$, and the optimal ${\Xi^{(m,j)}_{k}}^{\dagger}$ for maximizing $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)$ is summarized in the following property, which is proved in Appendix~\ref{prrof:lema_opt_Q2}. \begin{prop}\label{lem:sta_xi_mk_fp} The optimal $\{{\Xi^{(m,j)}_{k}}^{\dagger}\}_{k=1}^{K_m}$ in $\mathcal{Q}2$ are given by \begin{equation}\label{eq:opt_Xi-Q2} {\Xi^{(m,j)}_{k}}^{\dagger} = \min\left\{ {\Xi^{(m,j)}_{k}}^{\diamond}, \xi^{ub}_{m,k} \right\},\quad k = 1,\ldots,K_m, \end{equation} where ${\Xi^{(m,j)}_{k}}^{\diamond}$ satisfies~\eqref{eq:sta_xi_mk}, shown at the top of this page. 
\begin{figure*} \normalsize \begin{equation} \label{eq:sta_xi_mk} \frac{\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)^{-1} \log^{-\frac{1}{2}}_2\left(\frac{\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k}\Theta^{(m,j)}_{i}\right)\left(\kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i} \right)} {\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)(\kappa_{m,k,j} +{P_m})} \right) }{\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k}\Theta^{(m,j)}_{i}\right)\left(\frac{1}{2\gamma_{m,k}}+\frac{(M-1){P_m}2^{-\frac{B}{N-1}}}{2+{P_m}{\Xi^{(m,j)}_{k}}^{\diamond}2^{-\frac{B}{N-1}}}\right)B_k({\Xi^{(m,j)}_{k}}^{\diamond})\ln 2}=\frac{y_k^{\dagger}}{\Theta^{(m,j)}_{k}} \end{equation} \hrulefill \end{figure*} \end{prop} \noindent Based on Proposition~\ref{lem:sta_xi_mk_fp}, ${\Xi^{(m,j)}_{k}}^{\dagger}$ can be efficiently found via the bisection method. To sum up, the entire procedure for solving $\mathcal{Q}2$ is summarized in Algorithm~\ref{alg:CDM_iterative_RI}, which is essentially a cyclic block coordinate ascent method. Furthermore, since $g(\Xi^{(m,j)}_{k},y_k)$ is biconcave in $\Xi^{(m,j)}_{k}$ and $y_k$ with separable feasible sets, Algorithm~\ref{alg:CDM_iterative_RI} converges to a local optimal point of $\mathcal{Q}2$~\cite{J_Wright15}. On the other hand, it is observed that ${\Xi^{(m,j)}_{k}}$ maximizes the objective function of $\mathcal{Q}1$ if and only if ${\Xi^{(m,j)}_{k}}^{\dagger}$ together with $y_k^{\dagger}$ maximizes the objective function of $\mathcal{Q}2$. Hence, the transformed problem $\mathcal{Q}2$ has the same optimal objective value and solution as the original problem $\mathcal{Q}1$. Based on the above discussion, we have the following result for Algorithm~\ref{alg:CDM_iterative_RI}. 
\begin{algorithm}[H] \caption{Iterative method for solving $\mathcal{Q}2$} \label{alg:CDM_iterative_RI} \begin{algorithmic}[1] \STATE Initialize $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ to a feasible value. \REPEAT \STATE Update $\{y_k\}_{k=1}^{K_m}$ via~\eqref{eq:update_aux_y}. \STATE Update $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ via~\eqref{eq:opt_Xi-Q2}. \UNTIL the stopping criterion is satisfied. \end{algorithmic} \end{algorithm} \begin{theorem}\label{lem:opt_xi_alg1} Algorithm~\ref{alg:CDM_iterative_RI} consists of a sequence of concave optimization problems, with the corresponding solutions converging to a local optimal point of $\mathcal{Q}1$. \end{theorem} \subsection{Updating $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$} Notice that the objective function of $\mathcal{D}1$ must be non-negative at optimality. Hence, the pointwise maximum $[\cdot]^+$ can be dropped, and $\mathcal{D}1$ is equivalent to \begin{equation}\label{D2: obj_theta} \begin{split} \mathcal{D}2: &\!\max_{ \{\Theta^{(m,j)}_{k} \geq 0\}_{k=1}^{K_m}} \sum_{k=1}^{K_m}\underbrace{\! \log_2\left(\frac{1+\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}}{1+\frac{\Theta^{(m,j)}_{k}}{ \kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}}}\right)\!}_{:= f\left(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}\right)}, \\ &\mathrm{s.t.}~~ \sum_{k=1}^{K_m}\Theta^{(m,j)}_{k}={P_m}. \end{split} \end{equation} Since $\mathcal{D}2$ has a continuously differentiable objective function and a linear feasible set, it can be solved by the projected-gradient (PG) method~\cite{B_Bertseka97NP}, which alternately performs an unconstrained gradient ascent step and projects the unconstrained update onto the feasible set of the optimization problem. 
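The PG iteration can be sketched on a toy concave objective over the same kind of feasible set $\{\Theta\geq 0,\sum_k\Theta_k=P_m\}$. The Python sketch below uses a simple shift-and-clip projection (exact whenever no component needs clipping, which holds for the chosen toy data) and a fixed step size standing in for the Armijo rule; the objective is an illustrative stand-in, not the $\mathcal{D}2$ objective itself:

```python
import numpy as np

K, P_m = 4, 1.0
c = np.array([1.2, 1.0, 0.9, 0.8])   # toy concave objective coefficients

def f(theta):
    """Toy concave stand-in for the D2 objective."""
    return np.sum(np.log2(1.0 + c * theta))

def grad_f(theta):
    return c / ((1.0 + c * theta) * np.log(2.0))

def project(theta, P_m):
    """Shift-and-clip projection onto {theta >= 0, sum(theta) = P_m};
    exact whenever no component is clipped (the case for this data)."""
    shift = (np.sum(theta) - P_m) / len(theta)
    return np.maximum(theta - shift, 0.0)

theta = np.full(K, P_m / K)          # feasible starting point
vals = [f(theta)]
step = 0.1                            # fixed step (stand-in for Armijo)
for _ in range(200):
    theta = project(theta + step * grad_f(theta), P_m)
    vals.append(f(theta))
```

For this toy problem the fixed point coincides with the water-filling solution $\Theta_k=\nu-1/c_k$ with $\nu=(P_m+\sum_k 1/c_k)/K$, and the objective is nondecreasing along the iterates because the step size is below the inverse gradient-Lipschitz constant.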
To be specific, the update of $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ at the $l^{th}$ iteration is given by \begin{align}\label{eq:proj_grad} &{\Theta^{(m,j)}_{k}}^{\diamond}\left(l+\frac{1}{2}\right)\nonumber\\ &={\Theta^{(m,j)}_{k}}^{\diamond}(l)+\mathcal{I}\, \nabla_{\Theta^{(m,j)}_{k}} f, ~\forall k =1,\ldots,K_m, \end{align} where $\mathcal{I}$ is a constant step size chosen by the Armijo rule to guarantee convergence~\cite[Prop. 2.3.3]{B_Bertseka97NP}, and $\nabla_{\Theta^{(m,j)}_{k}} f$ is the partial derivative of $f(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ with respect to $\Theta^{(m,j)}_{k}$, $k\in \{1,\ldots,K_m\}$, with its explicit expression given in~\eqref{eq:clo_explicit_dif} of Appendix~\ref{lem:CF_theta_AM}. On the other hand, projecting ${\Theta^{(m,j)}_{k}}^{\diamond}(l+1/2)$ onto the feasible set of $\mathcal{D}2$ to find its nearest feasible point ${\Theta^{(m,j)}_{k}}^{\diamond}(l+1)$ amounts to the optimization problem \begin{align}\label{opt:proj_D2} &{\Theta^{(m,j)}_{k}}^{\diamond}(l+1)\nonumber \\ &=\mathop{\arg\min}_{\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}\in \mathcal{P}_{\mathcal{D}_2}}\! \sum_{k=1}^{K_m}\left\|\Theta^{(m,j)}_{k}\!-\!{\Theta^{(m,j)}_{k}}^{\diamond}\!\left(l+\frac{1}{2}\right)\right\|^2, \end{align} where $\mathcal{P}_{\mathcal{D}_2}=\{\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}| \sum_{k=1}^{K_m}\Theta^{(m,j)}_{k}={P_m},{\Theta}^{(m,j)}_{k}\geq 0\}$ is the feasible set of $\mathcal{D}2$. Since~\eqref{opt:proj_D2} is a convex optimization problem, a closed-form solution can be derived based on the Karush-Kuhn-Tucker (KKT) conditions and is given by the following property, which is proved in Appendix~\ref{proof:lemma_proj_gra_theta}. 
\begin{prop}\label{lem:Proj_gra_theta_k} The optimal solution of~\eqref{opt:proj_D2} is given by \begin{equation}\label{eq:pro_gra_optTheta} \begin{aligned} &{\Theta^{(m,j)}_{k}}^{\diamond}(l+1)\!=\!\Bigg[{\Theta^{(m,j)}_{k}}^{\diamond}\left(l+\frac{1}{2}\right)\\ &-\frac{1}{K_m}\left(\sum_{k=1}^{K_m}{\Theta^{(m,j)}_{k}}^{\diamond}\!\left(l+\frac{1}{2}\right)-{P_m}\right)\Bigg]^+\!,\forall k =1,\ldots,K_m. \end{aligned} \end{equation} \end{prop} \noindent Based on~\eqref{eq:proj_grad} and Proposition~\ref{lem:Proj_gra_theta_k}, we can iteratively update $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$, and the convergent point is guaranteed to be a stationary point of $\mathcal{D}2$~\cite[Prop. 2.3]{B_Bertseka97NP}. To sum up, the above PG method for solving $\mathcal{D}2$ is summarized in Algorithm~\ref{alg:dual_update_theta}. \begin{algorithm}[H] \caption{PG method for solving $\mathcal{D}2$} \begin{algorithmic}[1]\label{alg:dual_update_theta} \STATE Initialize with a feasible point $\{{\Theta^{(m,j)}_{k}}^{\diamond}(0)\}_{k=1}^{K_m}$ and set $l:=0$. \REPEAT \STATE Update $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ with the gradient iterate~\eqref{eq:proj_grad} and projection iterate~\eqref{eq:pro_gra_optTheta}. \STATE Update iteration: $l:=l+1$. \UNTIL Stopping criterion is satisfied. \end{algorithmic} \end{algorithm} \subsection{Tightness Refinement and Overall Algorithm} With the updates of $\{{\Xi}^{(m,j)}_{k}\}_{k=1}^{K_m}$ and $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ given by Algorithms~\ref{alg:CDM_iterative_RI} and~\ref{alg:dual_update_theta}, respectively, subproblem $ \mathcal{P}2^{[m,j]}$ can be solved under the AM framework, with the convergence revealed by the following theorem (proved in Appendix~\ref{proof_tem_convergence}).
\begin{theorem}\label{tem:conver_AM} For a given $\{\varepsilon_k\in(0,1]\}_{k=1}^{K_m}$, and starting from a feasible solution of $ \mathcal{P}2^{[m,j]}$, the sequence of solutions generated by alternately executing Algorithms~\ref{alg:CDM_iterative_RI} and~\ref{alg:dual_update_theta} converges to a stationary point of problem $ \mathcal{P}2^{[m,j]}$. \end{theorem} However, as pointed out by Theorem~\ref{tem:conver_AM}, the solution of $ \mathcal{P}2^{[m,j]}$ depends on the parameter $\varepsilon_k$ (via $\kappa_{m,k,j}$), which controls the tightness of the approximation in Lemma~\ref{lem:Bertin-Type}. In particular, if we solve $\mathcal{P}2^{[m,j]}$ with the tunable parameter $\varepsilon_k$ varying from $0$ to $1$ (with the other simulation settings detailed in Section~\ref{Sec:VI}) and calculate $p^{m,k,j}_{so}$ by putting the solution of $ \mathcal{P}2^{[m,j]}$ into~\eqref{eq:sop_inter_U-k}, we obtain $p^{m,k,j}_{so}$ as a function of $\varepsilon_k$, denoted by $p^{m,k,j}_{so}(\varepsilon_k)$; the results are shown in Fig.~\ref{fig:refine_Solution} for a few selected $\{m,k,j\}$. It is observed that $p^{m,k,j}_{so}$ after the safe approximation (labeled ``Approximation'') is usually far less than the tunable parameter $\varepsilon_k$, as shown by the gap between the diagonal black dotted line and the colored lines. However, since $p^{m,k,j}_{so}(\varepsilon_k)$ is nondecreasing in $\varepsilon_k$ as shown in Fig.~\ref{fig:refine_Solution}, we can use the bisection method~\cite{C_BoShen08Proba} to mitigate the performance loss and locate a proper $\varepsilon_k\in [\varepsilon,1]$ such that $p^{m,k,j}_{so}(\varepsilon_k)$ is close to the required $\varepsilon$.
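The bisection refinement described above can be sketched as follows. Here `p_so` is a hypothetical monotone surrogate for $p^{m,k,j}_{so}(\varepsilon_k)$, used for illustration only, since evaluating the true outage would require solving $\mathcal{P}2^{[m,j]}$; the only property relied upon is that the map $\varepsilon_k\mapsto p^{m,k,j}_{so}(\varepsilon_k)$ is nondecreasing.

```python
def refine_epsilon(p_so, eps, hi=1.0, tol=1e-4):
    """Bisection on the tunable parameter eps_k in [eps, 1] so that the
    achieved outage p_so(eps_k) approaches the required level eps,
    exploiting the nondecreasing property of p_so."""
    lo = eps
    while True:
        eps_k = 0.5 * (lo + hi)
        p = p_so(eps_k)
        if abs(p - eps) <= tol:
            return eps_k
        if p < eps:
            lo = eps_k      # approximation too conservative: enlarge eps_k
        else:
            hi = eps_k      # overshoot: shrink eps_k

# Toy monotone surrogate: the safe approximation achieves half the target level,
# so the refined parameter should converge to 2 * eps.
eps_k = refine_epsilon(lambda x: 0.5 * x, eps=0.1)
```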
\begin{figure} \centering \includegraphics[width=3.7in]{Fig2.eps} \caption{$p^{m,k,j}_{so}$ versus $\varepsilon_k$ with the basic simulation settings detailed in Section~\ref{Sec:VI}.}\label{fig:refine_Solution} \end{figure} Based on the above discussion, the proposed first-order algorithm for solving $ \mathcal{P}2^{[m,j]}$ with tightness parameter refinement is summarized in Algorithm~\ref{refinement method to P2}, and the refined results (labeled ``Refinement'') are shown in Fig.~\ref{fig:refine_Solution}. It is observed that $p^{m,k,j}_{so}$ after refinement is very close to $\varepsilon_k$, i.e., the conservativeness introduced by the approximation is almost eliminated. This indicates that although the solution set after the approximation in Lemma~\ref{lem:Bertin-Type} is smaller, the corresponding solution set after refinement approximates the original one very well. Notice that Algorithm~\ref{refinement method to P2} consists of an outer bisection iteration and an inner AM iteration. Since the outer bisection iteration must converge and the inner AM iteration converges according to Theorem~\ref{tem:conver_AM}, the overall Algorithm~\ref{refinement method to P2} is guaranteed to converge. The computational complexity of Algorithm~\ref{refinement method to P2} is dominated by the inner AM iteration (i.e., from step 6 to step 9 in Algorithm~\ref{refinement method to P2}). To be specific, the computational complexity of Algorithm~\ref{alg:CDM_iterative_RI} is dominated by step 4, which uses a bisection search to update ${\Xi^{(m,j)}_{k}}$ at each iteration. Hence, the complexity order for updating $\{{\Xi}^{(m,j)}_{k}\}_{k=1}^{K_m}$ is $\mathcal{O}\left(K_m\ln(1/\varsigma)\right)$~\cite{Ben-TalA01}, where $\varsigma>0$ denotes the predefined searching resolution of the bisection method. On the other hand, the computational complexity of Algorithm~\ref{alg:dual_update_theta} is dominated by step 3, i.e., the gradient iteration, which only involves first-order differentiation.
Therefore, the complexity order for updating $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ is $\mathcal{O}\left(K_m/\tau \right)$ with an accuracy of $\tau $~\cite{B_Bertseka97NP}. Since the complexity of Algorithm~\ref{refinement method to P2} is linear in $K_m$, it is suitable for massive access. Noticing that $\mathcal{P}2$ consists of $MJ$ parallel subproblems in the form of $ \mathcal{P}2^{[m,j]}$, the overall algorithm for solving $\mathcal{P}2$ can be implemented in a parallel manner and is summarized in Algorithm~\ref{alg:overall_am_P3}, where the modern multi-core computing architecture can be leveraged for speeding up the computation. \begin{algorithm}[H] \caption{Solution of $\mathcal{P}2^{[m,j]}$ with optimized tunable parameter $\{\varepsilon_k\}_{k=1}^{K_m}$} \begin{algorithmic}[1]\label{refinement method to P2} \STATE \textbf{input}: the required $\varepsilon$ and a predefined searching resolution $z$. \STATE Initialize $\varepsilon^k_{\min}=\varepsilon$, $\varepsilon^k_{\max}=1$. \REPEAT \STATE Update tunable parameter $\varepsilon_k = (\varepsilon^k_{\min}+\varepsilon^k_{\max})/2 $. \STATE Initialize the feasible point of $\{{\Xi}^{(m,j)}_{k},\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ based on~\eqref{ineq:cons_xi_orthant} and~\eqref{eq:simplex_theta}. \REPEAT \STATE Update $\{{\Xi}^{(m,j)}_{k}\}_{k=1}^{K_m}$ by using Algorithm~\ref{alg:CDM_iterative_RI}. \STATE Update $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ by using Algorithm~\ref{alg:dual_update_theta}. \UNTIL Stopping criterion is satisfied. \STATE Calculate $p^{m,k,j}_{so}(\varepsilon_k)$ by putting the solution of $\mathcal{P}2^{[m,j]}$ into~\eqref{eq:sop_inter_U-k} for all $k$. \STATE\textbf{if} ~{$p^{m,k,j}_{so}(\varepsilon_k)<\varepsilon$}, \textbf{then} ~{$\varepsilon^k_{\min}=\varepsilon_k$}, \STATE\textbf{else} ~{$\varepsilon^k_{\max}=\varepsilon_k$}. \STATE\textbf{end if} \UNTIL $|p^{m,k,j}_{so}(\varepsilon_k)-\varepsilon|\leq z$. 
\STATE\textbf{output}: The maximizer $\{{{\Xi}^{(m,j)}_{k}}^*,{\Theta^{(m,j)}_{k}}^*\}_{k=1}^{K_m}$ with optimized $\{\varepsilon_k\}_{k=1}^{K_m}$. \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{The overall algorithm for solving $\mathcal{P}2$ } \begin{algorithmic}[1]\label{alg:overall_am_P3} \STATE Solve $\mathcal{P}2^{[m,j]}$ in a parallel manner for all $m,j$ by using Algorithm~\ref{refinement method to P2}. \STATE Put $\{{{\Xi}^{(m,j)}_{k}}^*,{\Theta^{(m,j)}_{k}}^*\}_{k=1}^{K_m}$ for all $m,j$ into the objective function of~\eqref{eq:obj_thr}. \STATE Select $\hat{j}\in\{1,\ldots,J\}$ such that the objective function value of $\mathcal{P}2$ is minimized. \STATE The maximizer $\xi^*_{m,k}={{\Xi}^{(m,\hat{j})}_{k}}^*$ and $\theta^*_{m,k}={\Theta^{(m,\hat{j})}_{k}}^*$ for all $m,k$. \end{algorithmic} \end{algorithm} \section{Numerical Results and Discussions}\label{Sec:VI} In this section, we evaluate the secure transmission performance of the proposed algorithm through simulations. All simulations are performed in MATLAB R2017a on a Windows x64 desktop with a 3.2 GHz CPU and 16 GB RAM. Each point in the figures is obtained by averaging over 100 simulation trials. Unless otherwise specified, the following simulation set-up is kept throughout this section. We adopt a carrier frequency of 915 MHz and a carrier spacing of 200 kHz according to the 3GPP specification~\cite{LTE_carrierFQ13}. The path loss exponent is $\alpha = 2.5$ in the free space environment~\cite{B_Rappaport}. There are 100 users and 10 Eves in the whole system. All users are randomly located at distances between 1 m and 100 m, while the location of Eve $j$ is fixed at $d_{e,j}=10/j$ m. Once the large-scale fading parameters are generated, they are assumed to be known and fixed throughout the simulations. The small-scale fading vectors of all users and Eves are independently generated according to $ \mathcal{CN}(\mathbf{0},\mathbf{I}_N)$, i.e., $\mu_{m,k}=\mu_{e,j}=1$.
The power allocation to cluster $m$ is set to $P_m=1/M$. The noise power at each user is set to $\sigma^2_b=0$ dB, and the noise power at each eavesdropper is set to $\sigma^2_e=5$ dB. To avoid repeating figure descriptions, the settings for $(\delta,\varepsilon,M,N,J,P)$ are provided in the caption of each figure. \subsection{Performance of the Proposed First-Order Algorithm} Firstly, we demonstrate the convergence of Algorithm~\ref{refinement method to P2} for solving $\mathcal{P}2^{[m,j]}$ with fixed $\{\varepsilon_k=0.1\}_{k=1}^{K_m}$. Since Algorithm~\ref{refinement method to P2} consists of multiple layers of iterations, the stopping criterion for each layer is based on the relative change of two consecutive objective function values (e.g., being less than $10^{-4}$). The convergence of the inner loop of Algorithm~\ref{alg:CDM_iterative_RI} in terms of updating $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ is shown in Fig.~\ref{fig:Conver_alg1}. It is observed that the inner loop converges within 10 iterations under different values of $P$, which corroborates the results in Theorem~\ref{lem:opt_xi_alg1}. When fixing the number of clusters $M$, the power allocated to each cluster increases with increasing $P$. As a result, the objective function value of $g(\Xi^{(m,j)}_{k},y_k)$ in $\mathcal{Q}2$ increases as $P$ increases. On the other hand, the convergence property of Algorithm~\ref{alg:dual_update_theta} is shown in Fig.~\ref{fig:Conver_alg2}. It is shown that the PG method converges within 300 iterations under different values of $P$. To verify the convergence of Algorithm~\ref{refinement method to P2}, Fig.~\ref{fig:Conver_alg3} shows the objective function value of $\mathcal{P}2^{[m,j]}$ versus the inner AM iteration. It is observed that the AM converges rapidly within 25 iterations under different values of $P$, which corroborates the convergence result of Theorem~\ref{tem:conver_AM}.
\begin{figure*} \centering \subfigure[]{ \label{fig:Conver_alg1} \includegraphics[width=2.2in]{Fig3a.eps}} \hspace{0in} \subfigure[]{ \label{fig:Conver_alg2} \includegraphics[width=2.2in]{Fig3b.eps}} \hspace{0in} \subfigure[]{ \label{fig:Conver_alg3} \includegraphics[width=2.2in]{Fig3c.eps}} \caption{For a given $\{\varepsilon_k=0.1\}_{k=1}^{K_m}$ with $M=8$, $N=100$, $J=5$, $\delta=0.5$. (a)~The iterations of Algorithm 1.~(b)~The iterations of Algorithm 2.~(c)~The AM iterations in Algorithm 3.} \end{figure*} Next, to show the computational complexity advantage of the proposed algorithm for solving $\mathcal{P}2$, we compare Algorithm~\ref{alg:overall_am_P3} with the conventional method, which is the combination of the branch-and-bound algorithm and CCP. To be specific, $\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ is updated based on the branch-and-bound algorithm, with the initial value chosen according to the box constraint~\eqref{ineq:set_xi_Q1}; $\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$ is updated based on CCP with the interior-point method, with the initial value chosen as $\{\Theta^{(m,j)}_{k}=P_m/K_m\}_{k=1}^{K_m}$. Besides, the convergence tolerance and maximum number of iterations for the conventional method are set to $10^{-4}$ and 100, respectively. As shown in Fig.~\ref{fig:time_compare}, compared with the conventional method, the proposed Algorithm~\ref{alg:overall_am_P3} reduces the computation time by at least two orders of magnitude.\footnote{The proposed algorithm has the potential of leveraging the modern multi-core computing architecture and more efficient programming languages (e.g., C or assembly) to further speed up the computation in practical implementation.} On the other hand, Fig.~\ref{fig:Perf_alg_compare} shows that the proposed Algorithm~\ref{alg:overall_am_P3} achieves almost the same security guaranteed sum-rate as the conventional method under different values of $P$.
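Since the $MJ$ subproblems $\mathcal{P}2^{[m,j]}$ are mutually independent, the parallel implementation mentioned above can be sketched with a worker pool. The sketch below is illustrative only: `solve_subproblem` is a hypothetical placeholder for Algorithm~\ref{refinement method to P2} applied to one $(m,j)$ pair, and the point shown is merely that parallel and sequential mappings over independent subproblems yield identical, order-preserved results.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(m, j):
    """Hypothetical placeholder for solving P2^{[m,j]}; here it simply
    returns a dummy value identifying the subproblem instance."""
    return (m, j, m * 10 + j)

M, J = 4, 3
pairs = [(m, j) for m in range(M) for j in range(J)]

# The M*J subproblems are independent, so they can be mapped onto a pool
# of workers; `map` returns results in the same order as `pairs`.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(lambda mj: solve_subproblem(*mj), pairs))

sequential = [solve_subproblem(m, j) for m, j in pairs]
```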
Due to the complexity advantage, we only provide the solution to $\mathcal{P}2$ obtained via Algorithm~\ref{alg:overall_am_P3} in the following discussion. \subsection{Performance Comparisons with Other Multiplexing Schemes} To show the performance advantage of the proposed power-domain multiplexing scheme, we compare it with an orthogonal multiple access scheme~\cite{J_19XuTDMACompare}, where we employ zero-forcing beamforming among clusters and time division multiple access (TDMA) within each cluster. To begin with, we illustrate the impact of the COP constraint on system performance: all security guaranteed sum-rates are increasing in $\delta$, as shown in Fig.~\ref{fig:Thrpt_delta}. Moreover, the sum-rate advantage of the proposed scheme with respect to that of TDMA becomes more prominent as $\delta$ increases and then settles at a large margin. A heuristic explanation of this phenomenon is that $\Xi^{(m,j)}_{k}$ is no longer constrained by its upper bound $\xi^{ub}_{m,k}$ when $\delta$ is large according to~\eqref{eq:up_xi_CF}. When comparing the two schemes in terms of the number of clusters, Fig.~\ref{fig:Thr_feedbackBits} shows that the proposed scheme significantly improves the security guaranteed sum-rate of the TDMA scheme under different values of $M$. When fixing $P$, the power allocated to each cluster decreases as $M$ increases. As a result, the sum-rate decreases as $M$ increases. Furthermore, with an increase of $J$, the performance degrades due to more Eves in the system. To show the importance of considering imperfect CSI, we compare the proposed scheme with a NOMA scheme that does not account for imperfect CSI~\cite{J_Wang19SecureNOMI}. To make a fair comparison with~\cite{J_Wang19SecureNOMI}, we simulate both schemes under the same security requirement and select $J=1$. The security guaranteed sum-rates versus $\varepsilon$ and $P$ are provided in Fig.~\ref{fig:Thr_epsilon} and Fig.~\ref{fig:Througput_SNR}, respectively.
It can be seen that the proposed scheme always achieves significantly higher sum-rates than NOMA ignoring CSI uncertainty. Finally, Fig.~\ref{fig:Througput_SNR} shows that the security guaranteed sum-rate decreases as $N$ increases, which might seem counterintuitive. However, this is due to the coarser CDI when $N$ increases under a fixed $B$. This phenomenon can be seen in~\eqref{eq:obj_trans_twoVars}, where $B$ and $N$ appear in a ratio. This also suggests that using more feedback bits would remedy the performance loss. \begin{figure*} \centering \subfigure[]{ \label{fig:time_compare} \includegraphics[width=3in]{Fig4a.eps}} \hspace{0.0in} \subfigure[]{ \label{fig:Perf_alg_compare} \includegraphics[width=3in]{Fig4b.eps}} \caption{Performance comparison with the conventional method with $M=8$, $N=100$, $J=5$, $P=10\mathrm{dB}$, $\varepsilon=0.1$, $\delta=0.5$. (a)~Average computation time versus total number of users.~(b)~Security guaranteed sum-rate versus total number of users.} \end{figure*} \begin{figure*} \centering \subfigure[]{ \label{fig:Thrpt_delta} \includegraphics[width=3in]{Fig5a.eps}} \hspace{0.0in} \subfigure[]{ \label{fig:Thr_feedbackBits} \includegraphics[width=3in]{Fig5b.eps}} \hspace{0.0in} \caption{Comparison with TDMA scheme with $N = 100$: (a)~Security guaranteed sum-rates versus $\delta$: $M=8$, $J=5$, $\varepsilon=0.3$.~(b)~Security guaranteed sum-rates versus $M$: $P=-5\mathrm{dB}$, $\delta=0.3$, $\varepsilon=0.1$.} \end{figure*} \begin{figure*} \centering \subfigure[]{ \label{fig:Thr_epsilon} \includegraphics[width=3in]{Fig6a.eps}} \hspace{0.0in} \subfigure[]{ \label{fig:Througput_SNR} \includegraphics[width=3in]{Fig6b.eps}} \caption{Comparison with NOMA ignoring CSI uncertainty~\cite{J_Wang19SecureNOMI} with $M=8$:~(a)~Security guaranteed sum-rates versus $\varepsilon$: $N = 100$, $P=10\mathrm{dB}$.~(b)~Security guaranteed sum-rates versus $P$: $\varepsilon=0.1$, $\delta=0.3$.} \end{figure*} \section{Conclusion}\label{Sec:VII} This paper studied
the secure downlink NOMA transmission under imperfect CSI. To characterize the performance of this system, an efficient first-order algorithm was proposed to maximize the security guaranteed sum-rate under the constraints of outage probability and transmit power budget. Since the proposed first-order algorithm is Hessian-free, it has a linear complexity order with respect to the number of users in the system, making it ideal for massive access scenarios. Numerical results demonstrated that the proposed first-order algorithm achieves almost identical performance to the conventional method while saving at least two orders of magnitude in computation time. It also significantly improves the security guaranteed sum-rate compared with orthogonal multiple access transmission and with NOMA transmission that ignores CSI uncertainty. \appendices \section{Derivation of~\eqref{eq:COP_cf_final} }\label{ref:der_cop_mk} Based on~\eqref{eq:COP_cf_inter}, $p^{m,k}_{co}$ can be expressed as \begin{align} &p^{m,k}_{co}\nonumber\\ =&\mathrm{Pr}\left\{2^{R_{m,k}}-1\!>\!\frac{\mathcal{X}\theta_{m,k}}{\mathcal{X}\sum\limits_{i=1}^{k-1}\theta_{m,i}+{P_m}\mathcal{Y}+\frac{1}{\gamma_{m,k}}}\right\} \nonumber\\ =&\mathrm{Pr}\left\{\!\frac{2^{R_{m,k}}-1}{\theta_{m,k}\!-\!\left(2^{R_{m,k}}-1\right)\sum\limits_{i=1}^{k-1}\theta_{m,i}}\!>\!\frac{\mathcal{X}}{{P_m}\mathcal{Y}+\frac{1}{\gamma_{m,k}}}\!\right\} \label{eq:COP_CF_tempt}, \end{align} where $\mathcal{X}=|{\mathbf{g}}^H_{m,k}\mathbf{w}_m/\mu_{m,k}|^2\geq 0 $, $\mathcal{Y}=\|{\mathbf{g}}_{m,k}/\mu_{m,k}\|^2\sin^2 \beta_{m,k} \sum_{v\neq m}| \mathbf{e}^H_{m,k} \mathbf{w}_v|^2\geq 0$, and~\eqref{eq:COP_CF_tempt} is due to $\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum_{i=1}^{k-1}\theta_{m,i}>0$~\cite[eq. 8]{J_Ding14NOMA}; otherwise, $p^{m,k}_{co}$ is always one. Furthermore, based on the independence of the interference terms~\cite[eq. 24]{J_Kout12DownSDMA}, the variables $\mathcal{X}$ and $\mathcal{Y}$ are independent.
To obtain a closed-form expression of $p^{m,k}_{co}$, we first provide the probability density function (PDF) of $\mathcal{X}$. Denoting $\mathbf{\Psi}_{m,k}=\frac{{\mathbf{g}}_{m,k}}{\mu_{m,k}}\sim \mathcal{CN}(\mathbf{0},\mathbf{I}_N)$ and $\tilde{\mathbf{\Psi}}_{m,k}=\mathbf{\Psi}_{m,k}/\|\mathbf{\Psi}_{m,k}\|$, $\mathcal{X}$ can be rewritten as $\|{\mathbf{\Psi}}_{m,k}\|^2|\tilde{\mathbf{\Psi}}_{m,k}^H\mathbf{w}_m|^2$. Since the normalized beamformer $\mathbf{w}_m$ is determined by $\{\hat{\mathbf{g}}_n\}_{n\neq m}$ according to~\eqref{eq:zf_beamformer} and $\{\hat{\mathbf{g}}_n\}_{n\neq m}$ is independent of ${\mathbf{g}}_{m,k}$, ${\mathbf{g}}_{m,k}$ and $\mathbf{w}_m$ are independent. As a result, $\tilde{\mathbf{\Psi}}_{m,k}$ and $\mathbf{w}_m$ are independent unit vectors in the $N$-dimensional space. Based on~\cite[Lemma 1]{J_Roh06Se_Beamform}, the squared inner product of these two independent unit-norm random vectors, $X_1:=|\tilde{\mathbf{\Psi}}^H_{m,k}\mathbf{w}_m|^2$, follows $\mathrm{Beta}(1,N-1)$ and its PDF is $f_{X_1}(x_1)=(1-x_1)^{N-2}/Be(1,N-1), x_1\in [0,1]$, where $Be(x,y)$ is the beta function~\cite[eq. 8.380]{Table_Mathe00}. On the other hand, since $\mathbf{\Psi}_{m,k}\sim \mathcal{CN}(\mathbf{0},\mathbf{I}_N)$, $X_2:=\|\mathbf{\Psi}_{m,k}\|^2$ follows a $\chi^2$ distribution with $2N$ degrees of freedom, and its PDF is $f_{X_2}(x_2)=\frac{x_2^{N-1}e^{-x_2/2}}{2^N\Gamma(N)}, x_2\geq 0$, where $\Gamma(x)$ is the Gamma function~\cite[eq. 8.310]{Table_Mathe00}. Since $\mathcal{X}=X_1X_2$ and $X_1$ and $X_2$ are independent, the PDF of $\mathcal{X}$ is given by \begin{align}\label{eq:pdf-X} f_{\mathcal{X}}(x)&=\int_{x_2}\frac{1}{|x_2|}f_{X_2}(x_2)f_{X_1}\left(\frac{x}{x_2}\right)\mathrm{d}x_2 \nonumber\\ &=\frac{\int_{x}^{+\infty}(x_2-x)^{N-2}e^{-\frac{x_2}{2}}\mathrm{d}x_2}{Be(1,N-1)2^N\Gamma(N)}\nonumber \\ &=\frac{1}{2}e^{-\frac{x}{2}}, \quad x\geq 0. \end{align} Now, we derive the PDF of $\mathcal{Y}$.
The cumulative distribution function of $\sin^2 \beta_{m,k}$ is given by~\cite{J_Yoo_pdf07} \begin{equation}\label{eq:pdf_beta_mk} \begin{split} &F\left(\sin^2\! \beta_{m,k}\right)\\ =&\left\{\begin{array}{ll} \!2^B \left(\sin^2 \!\beta_{m,k}\right)^{N-1} , & \!\!\mathrm{if} ~0\leq \sin^2 \beta_{m,k} \leq 2^{-\frac{B}{N-1}}, \\ \!1, &\!\!\mathrm{if} ~\sin^2 \beta_{m,k}\geq 2^{-\frac{B}{N-1}}. \end{array}\right. \end{split} \end{equation} Hence, we have $\|\mathbf{\Psi}_{m,k}\|^2\sin^2 \beta_{m,k} \sim \mathcal{G}\left(N-1,2^{-\frac{B}{N-1}}\right)$ (gamma distribution with shape $N-1$ and scale $2^{-\frac{B}{N-1}}$)~\cite[Lemma 1]{J_Yoo_pdf07}. On the other hand, it is known that $ \mathbf{e}_{m,k}$ is a unit vector that has the same distribution as $\tilde{\mathbf{\Psi}}_{m,k}$. Moreover, the unit vector $\mathbf{w}_v$ is isotropic within the $N-1$ dimensional hyperplane and independent of $\mathbf{e}_{m,k}$. Based on~\cite[Lemma 2]{J_Jindal06Finite}, we have $| \mathbf{e}^H_{m,k} \mathbf{w}_v|^2\sim \mathrm{Beta}(1,N-2)$. Therefore, by applying~\cite[Lemma 1]{EJ_Zhang09Pro}, $\|\mathbf{\Psi}_{m,k}\|^2\sin^2 \beta_{m,k} |\mathbf{e}^H_{m,k} \mathbf{w}_v|^2 \sim \mathrm{Exp}\left(2^{\frac{B}{N-1}}\right)$. Since all terms $\{\mathbf{e}^H_{m,k} \mathbf{w}_v\}_{v\neq m}$ are independent of one another, $\mathcal{Y}$ is the sum of $(M-1)$ independent and identically exponentially distributed random variables. Therefore, $\mathcal{Y}\sim \mathcal{G}\left(M-1,2^{-\frac{B}{N-1}}\right)$ and its PDF is expressed as \begin{equation}\label{eq:pdf-Y} \begin{split} f_{\mathcal{Y}}(y)&= \frac{y^{M-2}\exp\left(-y2^{\frac{B}{N-1}}\right)}{2^{-\frac{B(M-1)}{N-1}}\Gamma(M-1)}, \quad y>0. 
\end{split} \end{equation} Finally, based on~\eqref{eq:pdf-X} and~\eqref{eq:pdf-Y},~\eqref{eq:COP_CF_tempt} is further derived as \begin{align} p^{m,k}_{co} &=\mathrm{Pr}\left\{\mathcal{X}-{P_m}I\mathcal{Y} < \frac{I}{\gamma_{m,k}}\right\}\nonumber \\ &=1-e^{-\frac{I}{2\gamma_{m,k}}} \int_{0}^{\infty} \frac{y^{M-2}e^{-y\left(2^{\frac{B}{N-1}}+\frac{{P_m}I}{2}\right)}}{2^{-\frac{B(M-1)}{N-1}}\Gamma(M-1)}\mathrm{d}y \nonumber \\ &=1-e^{-\frac{I}{2\gamma_{m,k}}}\left(1+\frac{{P_m}I2^{-\frac{B}{N-1}}}{2}\right)^{-(M-1)}, \label{eq:COP_appendix_CF} \end{align} where $I=\frac{2^{R_{m,k}}-1}{\theta_{m,k}-\left(2^{R_{m,k}}-1\right)\sum_{i=1}^{k-1}\theta_{m,i}}>0$, and~\eqref{eq:COP_appendix_CF} follows from~\cite[eq. 3.326]{Table_Mathe00}. By putting $I$ into~\eqref{eq:COP_appendix_CF}, $p^{m,k}_{co}$ is obtained as shown in~\eqref{eq:COP_cf_final}. \section{Proof of Lemma~\ref{lem:Bertin-Type} }\label{Appd-proof:Lema1} By putting~\eqref{eq:para_kmj} into~\eqref{ineq:slack_SOP_trans} and re-arranging the terms, we obtain~\eqref{ineq:prox_sop_temp}, shown at the top of the next page, \begin{figure*} \normalsize \begin{equation} \label{ineq:prox_sop_temp} \begin{split} &\frac{2^{D^j_{m,k}}-1}{\gamma_{e,j}}\geq \left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right)-{P_m}\left(2^{D^j_{m,k}}-1\right)\mathrm{Tr}\left(\mathbf{W}^{\bot}_m\right)\\ &+\sqrt{2\ln(\varepsilon_k^{-1})} \left(\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right)+{P_m}\left(2^{D^j_{m,k}}-1\right)\|\mathbf{W}^{\bot}_m\|_F \right)\\ &+\ln(\varepsilon_k^{-1})\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right) \end{split} \end{equation} \hrulefill \end{figure*} where $\mathbf{W}_m=\mathbf{w}_m\mathbf{w}^H_m$, $\mathbf{W}^{\bot}_m=\sum_{v\neq m}\mathbf{w}_v\mathbf{w}^H_v$. 
In order to proceed, we provide the following three facts based on~\eqref{eq:lamda_quadraticForm}. Denoting the largest eigenvalue of $\mathbf{\Lambda}$ by $\lambda_{\max}(\mathbf{\Lambda})$, we have \begin{align}\label{ineq:lamda_sop_f} &[\lambda_{\max}(\mathbf{\Lambda})]^+\nonumber\\ \leq&\lambda_{\max}\left(\gamma_{e,j}\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathbf{W}_m\right)\nonumber\\ \leq& \gamma_{e,j}\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right). \end{align} On the other hand, we have~\eqref{ineq:SOP_fea_f}, shown at the top of the next page, \begin{figure*} \normalsize \begin{equation} \label{ineq:SOP_fea_f} \begin{split} \|\mathbf{\Lambda}\|_F &\overset{\text{(a)}}{\leq} \gamma_{e,j}\left(\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\|\mathbf{W}_m\|_F +{P_m}\left(2^{D^j_{m,k}}-1\right)\|\mathbf{W}^{\bot}_m\|_F \right)\\ &\overset{\text{(b)}} {\leq}\gamma_{e,j} \left(\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right)+{P_m}\left(2^{D^j_{m,k}}-1\right)\|\mathbf{W}^{\bot}_m\|_F \right) \end{split} \end{equation} \hrulefill \end{figure*} where step (a) follows from the triangle inequality of the norm, and step (b) follows from $\|\mathbf{W}_m\|_F=\sqrt{\sum_{i=1}^{N}r^2_{i}}\leq \sum_{i=1}^{N}r_{i} = \mathrm{Tr}\left(\mathbf{W}_m\right)$ with $\{r_{i}\geq 0\}_{i=1}^N$ being the eigenvalues of $\mathbf{W}_m$. Furthermore, $\mathrm{Tr}\left(\mathbf{\Lambda}\right)$ is expressed as \begin{align}\label{eq:tr_SOP_aux} \mathrm{Tr}\left(\mathbf{\Lambda}\right)=&\gamma_{e,j}\left(\theta_{m,k}-\left(2^{D^j_{m,k}}-1\right)\sum_{i\neq k}\theta_{m,i}\right)\mathrm{Tr}\left(\mathbf{W}_m\right)\nonumber \\ &-\gamma_{e,j}{P_m}\left(2^{D^j_{m,k}}-1\right)\mathrm{Tr}\left(\mathbf{W}^{\bot}_m\right).
\end{align} Applying~\eqref{ineq:lamda_sop_f}-\eqref{eq:tr_SOP_aux} to~\eqref{ineq:prox_sop_temp}, we obtain \begin{equation}\label{ineq:SOP_BIT} \begin{split} &2^{D^j_{m,k}}-1\geq\\ &\mathrm{Tr}\left(\mathbf{\Lambda}\right)+\sqrt{2\ln(\varepsilon_k^{-1})}\|\mathbf{\Lambda}\|_F+\ln(\varepsilon_k^{-1})[\lambda_{\max}(\mathbf{\Lambda})]^+. \end{split} \end{equation} Comparing both sides of~\eqref{ineq:SOP_BIT} with the quadratic form $\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}$ and taking probabilities, we have \begin{equation}\label{ineq:Cond_relation} \begin{aligned} &\mathrm{Pr}\left\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}> 2^{D^j_{m,k}}-1\right\}\leq \mathrm{Pr}\Bigg\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}\geq \\ & \mathrm{Tr}(\mathbf{\Lambda})+\sqrt{2\ln(\varepsilon_k^{-1})}\|\mathbf{\Lambda}\|_F+\ln(\varepsilon_k^{-1}) [\lambda_{\max}(\mathbf{\Lambda})]^+\Bigg\}. \end{aligned} \end{equation} On the other hand, since ${\mathbf{g}}_{e,j}/{\mu_{e,j}}\sim \mathcal{CN}(\mathbf{0},\mathbf{I}_N)$, together with the Hermitian matrix $\mathbf{\Lambda}\in \mathbb{C}^{N\times N}$, for any $\epsilon\geq 0$, we have \begin{align}\label{ineq:BTI_Pro_int} &\mathrm{Pr}\left\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}\geq \mathrm{Tr}(\mathbf{\Lambda})+\sqrt{2\epsilon}\|\mathbf{\Lambda}\|_F+\epsilon [\lambda_{\max}(\mathbf{\Lambda})]^+\right\}\nonumber\\ &\leq \exp(-\epsilon), \end{align} which is the Bernstein-Type Inequality (BTI)~\cite{J_Bech09Berntein} and always holds.
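As a numerical sanity check of the BTI (not part of the paper's derivation), the sketch below takes an illustrative diagonal Hermitian $\mathbf{\Lambda}$, for which the quadratic form reduces to a weighted sum of i.i.d.\ unit-mean exponentials (since $|g_i|^2$ is unit-mean exponential for $g_i\sim\mathcal{CN}(0,1)$), and confirms empirically that the tail probability stays below $\exp(-\epsilon)$.

```python
import math
import random

random.seed(0)
lam = [2.0, 1.0, 0.5]          # eigenvalues of an illustrative diagonal Lambda
eps = 1.0                      # plays the role of ln(1/eps_k)

tr = sum(lam)                                  # Tr(Lambda)
fro = math.sqrt(sum(l * l for l in lam))       # ||Lambda||_F
lmax_plus = max(max(lam), 0.0)                 # [lambda_max(Lambda)]^+
thresh = tr + math.sqrt(2 * eps) * fro + eps * lmax_plus

# For g ~ CN(0, I) and diagonal Lambda: g^H Lambda g = sum_i lam_i * |g_i|^2,
# with |g_i|^2 i.i.d. exponential of unit mean.
trials = 20000
exceed = 0
for _ in range(trials):
    q = sum(l * random.expovariate(1.0) for l in lam)
    exceed += q >= thresh
tail = exceed / trials          # empirical tail probability; BTI says <= exp(-eps)
```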
By substituting $\epsilon=\ln(\varepsilon_k^{-1})$ into~\eqref{ineq:BTI_Pro_int}, we obtain \begin{equation}\label{ineq:BTI} \begin{aligned} \mathrm{Pr}\Bigg\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}\geq & \mathrm{Tr}(\mathbf{\Lambda})+\sqrt{2\ln(\varepsilon_k^{-1})}\|\mathbf{\Lambda}\|_F \\ &+\ln(\varepsilon_k^{-1}) [\lambda_{\max}(\mathbf{\Lambda})]^+\Bigg\}\leq \varepsilon_k \end{aligned} \end{equation} for any $\varepsilon_k\in(0,1]$. Substituting~\eqref{ineq:BTI} into~\eqref{ineq:Cond_relation}, we obtain \begin{equation}\label{eq:dir_SOP_BTI} \mathrm{Pr}\left\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}> 2^{D^j_{m,k}}-1\right\}\leq \varepsilon_k. \end{equation} As a result, $\mathrm{Pr}\left\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}> 2^{D^j_{m,k}}-1\right\}\leq \varepsilon$ holds when we set $\varepsilon_k=\varepsilon$. Applying the result of~\cite[eq. 30]{J_IQF_GausianYI16}, it can be shown that $\mathrm{Pr}\left\{\frac{{\mathbf{g}}_{e,j}^H}{\mu_{e,j}}\mathbf{\Lambda}\frac{\mathbf{g}_{e,j}}{\mu_{e,j}}> 2^{D^j_{m,k}}-1\right\}\leq \varepsilon$ is equivalent to~\eqref{ineq:cons_SOP_CF}. Therefore, if~\eqref{ineq:slack_SOP_trans} holds for any $\varepsilon_k\in(0,1]$, then~\eqref{ineq:cons_SOP_CF} holds, i.e.,~\eqref{ineq:slack_SOP_trans} is a tighter constraint than the SOP constraint of~\eqref{ineq:cons_SOP_CF}. \section{Proof of Lemma~\ref{lem:clo-fom_zub} }\label{proof:lem-cf_z_ub} By definition~\eqref{eq:aux_int_xi}, $\xi_{m,k}\geq 0$. Hence, we only need to find an upper bound of $\xi_{m,k}$ to determine its feasible set.
Denoting by $q\left(\xi_{m,k}\right)$ the left-hand side of~\eqref{ineq:COP_aux_P1}, the first-order derivative of $q\left(\xi_{m,k}\right)$ is given by \begin{equation} \begin{split} &q'\left(\xi_{m,k}\right) =-\frac{\left(\frac{2+\xi_{m,k}{P_m}2^{-\frac{B}{N-1}}}{4\gamma_{m,k}}+\frac{(M-1){P_m}2^{-\frac{B}{N-1}}}{2}\right)}{\exp\left(\frac{\xi_{m,k}}{2\gamma_{m,k}}\right)\left(1+\frac{\xi_{m,k}{P_m}2^{-\frac{B}{N-1}}}{2}\right)^{M}}, \end{split} \end{equation} which is negative for $M\geq 1$. Since $q\left(\xi_{m,k}\right)$ is decreasing and lower bounded by~\eqref{ineq:COP_aux_P1}, an upper bound of $\xi_{m,k}$, denoted by $\xi^{ub}_{m,k}$, is obtained by solving the following equation \begin{equation}\label{eq:inter_CF_ub_xi} \exp\left(-\frac{\xi^{ub}_{m,k}}{2\gamma_{m,k}}\right) \left(1+\xi^{ub}_{m,k}\frac{{P_m}2^{-\frac{B}{N-1}}}{2}\right)^{1-M}=1-\delta. \end{equation} By straightforward algebra,~\eqref{eq:inter_CF_ub_xi} can be further re-expressed as \begin{equation}\label{eq:inter_z-ub} \begin{split} \frac{2^{\frac{B}{N-1}}+\frac{{P_m}\xi^{ub}_{m,k}}{2}}{{P_m}\gamma_{m,k}(M-1)}& \exp\left(\frac{2^{\frac{B}{N-1}}+\frac{{P_m}\xi^{ub}_{m,k}}{2}}{{P_m}\gamma_{m,k}(M-1)}\right)\\ &=\frac{2^{\frac{B}{N-1}}\exp\left(\frac{2^{\frac{B}{N-1}}}{\gamma_{m,k}(M-1){P_m}}\right)}{\gamma_{m,k}(M-1){P_m}(1-\delta)^{\frac{1}{M-1}}}. \end{split} \end{equation} With the help of the principal branch of the Lambert W function~\cite{Lambert_W96},~\eqref{eq:inter_z-ub} is rewritten as \begin{equation}\label{eq:W0-up-Xi} W_0\left(\frac{2^{\frac{B}{N-1}}\exp\left(\frac{2^{\frac{B}{N-1}}}{\gamma_{m,k}(M-1){P_m}}\right)}{\gamma_{m,k}(M-1){P_m}(1-\delta)^{\frac{1}{M-1}}}\right)=\frac{2^{\frac{B}{N-1}}+\frac{{P_m}\xi^{ub}_{m,k}}{2}}{{P_m}\gamma_{m,k}(M-1)}. \end{equation} Re-arranging the terms in~\eqref{eq:W0-up-Xi} leads to~\eqref{eq:up_xi_CF}.
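The closed-form bound can be checked numerically. The sketch below evaluates $W_0$ with a short Newton iteration on $we^w=z$ (a standard stdlib-only approach), computes $\xi^{ub}_{m,k}$ from~\eqref{eq:W0-up-Xi}, and verifies that the result satisfies~\eqref{eq:inter_CF_ub_xi}; the parameter values are illustrative and not taken from the paper's simulation setup.

```python
import math

def lambert_w0(z, iters=50):
    """Principal branch W0 for z > 0, via Newton's method on w*e^w = z."""
    w = math.log(1.0 + z)                      # standard starting guess
    for _ in range(iters):
        e = math.exp(w)
        w -= (w * e - z) / (e * (w + 1.0))     # Newton step
    return w

# Illustrative parameters (gamma_{m,k}, M, P_m, B, N, delta).
gamma, M, P_m, B, N, delta = 5.0, 4, 0.25, 8, 5, 0.3
c = 2.0 ** (B / (N - 1))                       # 2^{B/(N-1)}
a = gamma * (M - 1) * P_m

# Argument of W0 in (eq:inter_z-ub), then xi_ub from (eq:W0-up-Xi):
# W0(z) = (c + P_m*xi/2) / a  =>  xi_ub = 2*gamma*(M-1)*W0(z) - 2*c/P_m.
z = (c / a) * math.exp(c / a) / (1.0 - delta) ** (1.0 / (M - 1))
xi_ub = 2.0 * gamma * (M - 1) * lambert_w0(z) - 2.0 * c / P_m

# Plugging xi_ub back into (eq:inter_CF_ub_xi) should recover 1 - delta.
lhs = math.exp(-xi_ub / (2 * gamma)) * (1 + xi_ub * P_m / (2 * c)) ** (1 - M)
```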
\section{Proof of Theorem~\ref{tem:FP_CCP_def} }\label{proof:them-FP_condi} Firstly, since the feasible set of $\mathcal{Q}1$ is determined by~\eqref{ineq:set_xi_Q1} with simple bound constraints, it must be a nonempty convex set. Secondly, to prove the concavity of $A_k(\Xi^{(m,j)}_{k})$, we rewrite $A_k(\Xi^{(m,j)}_{k})$ as $A_k(\Xi^{(m,j)}_{k})=\left[\log_2(1+E(\Xi^{(m,j)}_{k}))- \log_2 \left(1+ \frac{\Theta^{(m,j)}_{k}}{ \kappa_{m,k,j} + \sum_{i\neq k}\Theta^{(m,j)}_{i}}\right) \right]^+$, where $E(\Xi^{(m,j)}_{k})=\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k-1}\Theta^{(m,j)}_{i}}\geq 0$. The first-order and second-order derivatives of $E(\Xi^{(m,j)}_{k})$ are respectively given by \begin{equation}\label{eq:der_zk_xi-k} E'(\Xi^{(m,j)}_{k}) =\frac{\Theta^{(m,j)}_{k}}{\left(1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)^2}\geq 0, \end{equation} \begin{equation} E''(\Xi^{(m,j)}_{k}) =\frac{-2\Theta^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}{\left(1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)^3}\leq 0. \end{equation} Since $E''(\Xi^{(m,j)}_{k})\leq 0$ for $0 \leq \Xi^{(m,j)}_{k} \leq \xi^{ub}_{m,k}$, $E(\Xi^{(m,j)}_{k})$ is concave in $\Xi^{(m,j)}_{k}$. Furthermore, due to the concavity and non-decreasing property of the function $\log_2(1+x)$ for $x>0$, $\log_2(1+E(\Xi^{(m,j)}_{k}))-\log_2\left(1+\frac{\Theta^{(m,j)}_{k}}{\kappa_{m,k,j} + \sum_{i\neq k}\Theta^{(m,j)}_{i}}\right)$ is concave in $\Xi^{(m,j)}_{k}$. Since the pointwise maximum operation preserves concavity~\cite{Cov_Opt90}, $A_k(\Xi^{(m,j)}_{k})$ is concave in $\Xi^{(m,j)}_{k}$. Thirdly, to prove the convexity of $B_k(\Xi^{(m,j)}_{k})$, we rewrite $B_k(\Xi^{(m,j)}_{k})$ as $\hat{b}_k(\Xi^{(m,j)}_{k})\tilde{b}_k(\Xi^{(m,j)}_{k})$, where $\hat{b}_k(\Xi^{(m,j)}_{k}):=\exp\left(\frac{\Xi^{(m,j)}_{k}}{2\gamma_{m,k}}\right)$ and $\tilde{b}_k(\Xi^{(m,j)}_{k}):=\left(1+\Xi^{(m,j)}_{k}\frac{P_m}{2^{\frac{B}{N-1}+1}}\right)^{M-1}$.
It is obvious that $\hat{b}_k(\Xi^{(m,j)}_{k})$ is convex in $\Xi^{(m,j)}_{k}$. On the other hand, since $\tilde{b}_k(\Xi^{(m,j)}_{k})$ is the composition of the convex function $x^{M-1}$, $x\geq 0$, with an affine mapping, $\tilde{b}_k(\Xi^{(m,j)}_{k})$ inherits the convexity of $x^{M-1}$. Since $\hat{b}_k(\Xi^{(m,j)}_{k})$ and $\tilde{b}_k(\Xi^{(m,j)}_{k})$ are both positive, non-decreasing, and convex, their product $B_k(\Xi^{(m,j)}_{k})$ is convex~\cite{Cov_Opt90}. \section{Proof of Proposition~\ref{lem:sta_xi_mk_fp} }\label{prrof:lema_opt_Q2} The maximizer of $\mathcal{Q}2$ lies either at the stationary point ${\Xi^{(m,j)}_{k}}^{\diamond}$ or at a boundary point of the feasible range $[0, \xi^{ub}_{m,k}]$. Since $g(0,y^{\dagger}_k)=0$ and the objective function of $\mathcal{Q}2$ must be non-negative at optimality, the optimal ${\Xi^{(m,j)}_{k}}^{\dagger}$ cannot be 0. As a result, the optimal ${\Xi^{(m,j)}_{k}}^{\dagger}$ is either ${\Xi^{(m,j)}_{k}}^{\diamond}$ or $\xi^{ub}_{m,k}$. On the other hand, if ${\Xi^{(m,j)}_{k}}^{\diamond}>\xi^{ub}_{m,k}$, then ${\Xi^{(m,j)}_{k}}^{\dagger}=\xi^{ub}_{m,k}$ since the optimal ${\Xi^{(m,j)}_{k}}^{\dagger}\leq \xi^{ub}_{m,k}$. Otherwise, due to the concavity of $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)$ in $\Xi^{(m,j)}_{k}$, ${\Xi^{(m,j)}_{k}}^{\dagger}={\Xi^{(m,j)}_{k}}^{\diamond}$. Therefore, the optimal ${\Xi^{(m,j)}_{k}}^{\dagger}$ is obtained as shown in~\eqref{eq:opt_Xi-Q2}. Next, we determine the stationary point ${\Xi^{(m,j)}_{k}}^{\diamond}$.
The pointwise-maximum function $A_k(\Xi^{(m,j)}_{k})$ can be rewritten as $A_k(\Xi^{(m,j)}_{k})=\tilde{A}_k(\Xi^{(m,j)}_{k})\mathbb{I}\left(\Xi^{(m,j)}_{k}>\left(\kappa_{m,k,j} + \sum_{i= k+1}\Theta^{(m,j)}_{i}\right)^{-1}\right)$, where $\tilde{A}_k(\Xi^{(m,j)}_{k})$ is given by \begin{equation} \begin{split} \tilde{A}_k(\Xi^{(m,j)}_{k})=& \log_2\left(\frac{1+\frac{\Xi^{(m,j)}_{k}\Theta^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}}{1+\frac{\Theta^{(m,j)}_{k}}{\kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}}} \right), \end{split} \end{equation} and $\mathbb{I}(H)$ is the indicator function, with $\mathbb{I}(H)=1$ if the event \textit{H} occurs and $\mathbb{I}(H)=0$ otherwise. Note that if $A_k(\Xi^{(m,j)}_{k})=0$, then $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)=0$ and the objective function of $\mathcal{Q}2$ is identically 0. Hence, substituting $A_k(\Xi^{(m,j)}_{k})=\tilde{A}_k(\Xi^{(m,j)}_{k})$ into $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)$, the stationary point ${\Xi^{(m,j)}_{k}}^{\diamond}$ is the unique root of $\frac{\partial g(\Xi^{(m,j)}_{k},y^{\dagger}_k)}{\partial\Xi^{(m,j)}_{k}}=0$, which is equivalent to~\eqref{eq:der_1st_bis}, shown at the top of this page. \begin{figure*} \normalsize \begin{equation} \label{eq:der_1st_bis} \frac{\Theta^{(m,j)}_{k}\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)^{-1}\tilde{A}^{-1/2}_k({\Xi^{(m,j)}_{k}}^{\diamond})}{\left(1+{\Xi^{(m,j)}_{k}}^{\diamond}\sum\limits_{i=1}^{k}\Theta^{(m,j)}_{i}\right)B_k({\Xi^{(m,j)}_{k}}^{\diamond})\ln 2}\!=\!y_k^{\dagger} \left(\frac{1}{2\gamma_{m,k}}+\frac{(M-1){P_m}2^{-\frac{B}{N-1}}}{2+{P_m}{\Xi^{(m,j)}_{k}}^{\diamond}2^{-\frac{B}{N-1}}}\right) \end{equation} \hrulefill \end{figure*} Re-arranging the terms in~\eqref{eq:der_1st_bis} leads to~\eqref{eq:sta_xi_mk}.
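The case analysis above (take the stationary point unless it exceeds the upper bound, in which case the boundary is optimal) can be illustrated on a toy concave objective with $g(0)=0$. The function below is a stand-in for $g(\Xi^{(m,j)}_{k},y^{\dagger}_k)$, not the paper's objective:

```python
import math

def argmax_concave(stationary, ub):
    # Optimal point per the case analysis in the proposition: the stationary
    # point if feasible, otherwise the upper boundary of [0, ub]
    return ub if stationary > ub else stationary

# Toy concave objective with g(0) = 0; its stationary point is x = 1
def g(x):
    return math.log1p(x) - 0.5 * x

def grid_argmax(ub, steps=200000):
    # Brute-force maximization over a dense grid of [0, ub]
    xs = [ub * i / steps for i in range(steps + 1)]
    return max(xs, key=g)
```

For `ub = 0.5` (binding bound) the brute-force maximizer is the boundary point, and for `ub = 3.0` it is the interior stationary point, matching `argmax_concave(1.0, ub)` in both cases.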
\section{Derivation of the gradient of $f(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ }\label{lem:CF_theta_AM} From~\eqref{D2: obj_theta}, $f(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ can be rewritten as \begin{align} \label{eq:f_rewrite_sumform} &f(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})\nonumber\\ =&\underbrace{\log_2\left(1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k}\Theta^{(m,j)}_{i}\right)}_{:= f_1(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})}-\underbrace{\log_2\left(1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k-1}\Theta^{(m,j)}_{i}\right)}_{:= f_2(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})}\nonumber \\ &+\underbrace{\log_2\left(\kappa_{m,k,j} + \sum_{i\neq k}\Theta^{(m,j)}_{i}\right)}_{:= f_3(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})}-\log_2\left(\kappa_{m,k,j} +{P_m}\right). \end{align} Then, the gradients of $f_1$, $f_2$, and $f_3$ with respect to $\Theta^{(m,j)}_{i}$, $i\in \{1,\ldots,K_m\}$, are respectively derived as $\nabla_{\Theta^{(m,j)}_{i}} f_1= \frac{1}{\ln2}\frac{\Xi^{(m,j)}_{k}\mathbb{I}\left(i\leq k\right)}{1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k}\Theta^{(m,j)}_{i}}$, $\nabla_{\Theta^{(m,j)}_{i}} f_2=\frac{1}{\ln2}\frac{\Xi^{(m,j)}_{k}\mathbb{I}\left(i< k\right)}{1+\Xi^{(m,j)}_{k}\sum_{i=1}^{k-1}\Theta^{(m,j)}_{i}}$, and $\nabla_{\Theta^{(m,j)}_{i}} f_3=\frac{1}{\ln2}\frac{\mathbb{I}\left( i\neq k \right)}{\kappa_{m,k,j} + \sum_{i\neq k}\Theta^{(m,j)}_{i}}$. Therefore, the gradient of $f(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m})$ with respect to $\Theta^{(m,j)}_{i}$, $i\in \{1,\ldots,K_m\}$, is given by~\eqref{eq:clo_explicit_dif}, shown at the top of the next page.
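The closed-form piecewise gradient can be checked against central finite differences. The sketch below instantiates $f$ for one fixed user index $k$ with arbitrary placeholder values of $\Xi^{(m,j)}_{k}$ and $\kappa_{m,k,j}$ (0-based indices; the constant term $-\log_2(\kappa_{m,k,j}+P_m)$ is dropped, as it does not affect the gradient):

```python
import math

def f(theta, xi, kappa, k):
    # f1 - f2 + f3 from the decomposition above (constant term omitted)
    s_le = sum(theta[: k + 1])       # sum over i <= k of Theta_i
    s_lt = sum(theta[:k])            # sum over i < k  of Theta_i
    s_ne = sum(theta) - theta[k]     # sum over i != k of Theta_i
    return (math.log2(1 + xi * s_le) - math.log2(1 + xi * s_lt)
            + math.log2(kappa + s_ne))

def grad_f(theta, xi, kappa, k):
    # Piecewise closed-form gradient (cases i < k, i = k, i > k)
    s_le = sum(theta[: k + 1])
    s_lt = sum(theta[:k])
    s_ne = sum(theta) - theta[k]
    ln2 = math.log(2)
    out = []
    for i in range(len(theta)):
        if i < k:
            out.append((xi / (1 + xi * s_le) - xi / (1 + xi * s_lt)
                        + 1 / (kappa + s_ne)) / ln2)
        elif i == k:
            out.append(xi / (1 + xi * s_le) / ln2)
        else:
            out.append(1 / (kappa + s_ne) / ln2)
    return out

# Central finite differences at an arbitrary interior point
theta, xi, kappa, k, h = [0.5, 0.8, 0.6, 0.4], 0.7, 0.3, 2, 1e-6
fd = []
for i in range(len(theta)):
    tp = theta[:]; tp[i] += h
    tm = theta[:]; tm[i] -= h
    fd.append((f(tp, xi, kappa, k) - f(tm, xi, kappa, k)) / (2 * h))
```

The analytic and numeric gradients agree to the finite-difference accuracy, confirming the three cases of the piecewise formula.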
\begin{figure*} \normalsize \begin{equation} \label{eq:clo_explicit_dif} \begin{split} \nabla_{\Theta^{(m,j)}_{i}} f=\left\{\begin{array}{ll} \frac{1}{\ln2}\left( \frac{\Xi^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k}\Theta^{(m,j)}_{i}}- \frac{\Xi^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k-1}\Theta^{(m,j)}_{i}}+ \frac{1}{\kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}} \right), \!& \mathrm{if}~ i<k, \\ \frac{1}{\ln2}\frac{\Xi^{(m,j)}_{k}}{1+\Xi^{(m,j)}_{k}\sum\limits_{i=1}^{k}\Theta^{(m,j)}_{i}}, \!&\mathrm{if}~i= k, \\ \frac{1}{\ln2}\frac{1}{\kappa_{m,k,j} + \sum\limits_{i\neq k}\Theta^{(m,j)}_{i}}, \!&\mathrm{if}~i> k \end{array}\right. \end{split} \end{equation} \hrulefill \end{figure*} \section{Proof of Proposition~\ref{lem:Proj_gra_theta_k} }\label{proof:lemma_proj_gra_theta} The Lagrangian function of~\eqref{opt:proj_D2} is given by \begin{align}\label{eq:lagar_dualFunc} \mathcal{L}\left(\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m},\zeta\right)=& \sum_{k=1}^{K_m}\left\|\Theta^{(m,j)}_{k}-{\Theta^{(m,j)}_{k}}^{\diamond}\left(l+\frac{1}{2}\right)\right\|^2\nonumber \\ &+\zeta \left(\sum_{k=1}^{K_m}\Theta^{(m,j)}_{k}-{P_m}\right), \end{align} where $\zeta$ is the dual variable corresponding to the constraint in~\eqref{D2: obj_theta}. Based on the feasible set $\mathcal{P}_{\mathcal{D}_2}$, $\Theta^{(m,j)}_{k}$ can be either 0 or positive. If $\Theta^{(m,j)}_{k}>0$, the optimal solution must satisfy the following KKT conditions: ${\Theta^{(m,j)}_{k}}^{\dagger}-{\Theta^{(m,j)}_{k}}^{\diamond}(l+1/2)+\zeta^{\dagger}/2=0$ and $\sum_{k=1}^{K_m}{\Theta^{(m,j)}_{k}}^{\dagger}-{P_m}=0$. Therefore, ${\Theta^{(m,j)}_{k}}^{\dagger}$ is derived as ${\Theta^{(m,j)}_{k}}^{\dagger}={\Theta^{(m,j)}_{k}}^{\diamond}(l+1/2)-\zeta^{\dagger}/2$, where the optimal $\zeta^{\dagger}$ is given by \begin{equation} \zeta^{\dagger}=\frac{2}{K_m}\left(\sum_{k=1}^{K_m}{\Theta^{(m,j)}_{k}}^{\diamond}\left(l+\frac{1}{2}\right)-{P_m}\right).
\end{equation} Together with the case of $\Theta^{(m,j)}_{k}=0$, the optimal solution to~\eqref{opt:proj_D2} is shown in~\eqref{eq:pro_gra_optTheta}. \section{Proof of Theorem~\ref{tem:conver_AM} }\label{proof_tem_convergence} Define $\mathbf{Z}=(\mathbf{\Xi},\boldsymbol{\Theta})\in \mathcal{Z}$ as the composite vector with $\mathbf{\Xi}:=\{\Xi^{(m,j)}_{k}\}_{k=1}^{K_m}$ and $\boldsymbol{\Theta}:=\{\Theta^{(m,j)}_{k}\}_{k=1}^{K_m}$, where $\mathcal{Z}$ is the feasible set of $\mathcal{P}2^{[m,j]}$. Furthermore, the $l^{th}$ AM iterate is denoted by $\mathbf{Z}^{(l)}=(\mathbf{\Xi}^{(l)},\boldsymbol{\Theta}^{(l)})$, the intermediate iterate is denoted by $\mathbf{Z}^{(l+\frac{1}{2})}=(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l)})$, and the objective function of $\mathcal{P}2^{[m,j]}$ is denoted by $\Upsilon(\mathbf{Z})$. First, we show that the sequence of solutions has a limit point $\mathbf{Z}^*$. \begin{lemma}\label{lem:limit_point_property} The sequence of solutions $\{\mathbf{Z}^{(l)}\}_{l\in \mathbb{N}}$ generated by the AM iteration is bounded and must have a limit point $\mathbf{Z}^*$. \end{lemma} \begin{proof} We first prove that $\Upsilon(\mathbf{Z})$ is monotonically increasing as the iteration number increases. For the $l^{th}$ iteration, when $\mathbf{\Xi}^{(l)}$ is updated with Algorithm~\ref{alg:CDM_iterative_RI}, the obtained point $\mathbf{\Xi}^{(l+1)}$ is a locally optimal point, and it must be a saddle-point for $\mathcal{P}2^{[m,j]}$. Together with the property of saddle-points~\cite{J_Liu09Saddle}, we have the following inequality \begin{equation}\label{ineq:AM_conv_B2} \Upsilon(\mathbf{Z}^{(l+\frac{1}{2})})\geq \Upsilon(\mathbf{Z}^{(l)}). \end{equation} On the other hand, with Algorithm~\ref{alg:dual_update_theta}, the obtained $\boldsymbol{\Theta}^{(l+1)}$ is a stationary point.
Furthermore, since the feasible set of $\mathcal{P}2^{[m,j]}$ is a Cartesian product of convex sets, the optimization over $\boldsymbol{\Theta}$ is independent of $\mathbf{\Xi}$. Hence, we have the following inequality \begin{equation}\label{ineq:Converge_AM_condi} \Upsilon(\mathbf{Z}^{(l+1)})\geq \Upsilon(\mathbf{Z}^{(l+\frac{1}{2})}). \end{equation} Combining~\eqref{ineq:AM_conv_B2} and~\eqref{ineq:Converge_AM_condi}, we conclude that \begin{equation}\label{eq:mon_incre_pro1} \Upsilon(\mathbf{Z}^{(l+1)}) \geq \Upsilon(\mathbf{Z}^{(l)}) \geq \cdots\geq \Upsilon(\mathbf{Z}^{(0)}), \forall l\in\{1,2\ldots\}, \end{equation} where $\Upsilon(\mathbf{Z}^{(0)})$ is any finite initial value of the objective function. Then we prove the boundedness of the sequence of solutions $\{\mathbf{Z}^{(l)}\}_{l\in \mathbb{N}}$ generated by the AM iteration. It is observed that $\mathbf{\Xi}$ and $\boldsymbol{\Theta}$ are located in separable closed sets based on~\eqref{ineq:cons_xi_orthant} and~\eqref{eq:simplex_theta}, respectively. Hence, the sequence $\{\mathbf{Z}^{(l)}\}_{l\in \mathbb{N}}$ is bounded. Together with the monotonicity property in~\eqref{eq:mon_incre_pro1}, $\{\mathbf{Z}^{(l)}\}_{l\in \mathbb{N}}$ must have a limit point $\mathbf{Z}^*$ by the Bolzano--Weierstrass theorem~\cite{B_BartleRobert11}. \end{proof} To further investigate the property of the limit point $\mathbf{Z}^*$, we first recall the notion of the gradient mapping. The gradient mapping with respect to $\mathbf{Z}\in \mathcal{Z}$ for any $L>0$ is defined as~\cite{B_Nesterov14} \begin{equation}\label{eq:gra_map_def} G_L(\mathbf{Z})=L\left(\mathbf{Z}-\mathrm{prox}_L^{\Upsilon}\left(\mathbf{Z}-\frac{1}{L}\nabla \Upsilon(\mathbf{Z})\right)\right), \end{equation} where $\mathrm{prox}_L^{\Upsilon}(\mathbf{Z}):=\mathop{\arg\min}\limits_{\mathbf{U}\in\mathcal{Z}} \left\{\Upsilon(\mathbf{U}) +\frac{L}{2}\|\mathbf{U}-\mathbf{Z}\|^2\right\}$ is the proximal mapping associated with $\Upsilon$~\cite{J_Beck15AM}.
From~\eqref{eq:gra_map_def}, the corresponding partial gradient mappings on $\mathbf{\Xi}$ with constant $L_1<\infty$ and on $\boldsymbol{\Theta}$ with constant $L_2<\infty$ are respectively given by \begin{equation} G^{\mathbf{\Xi}}_{L_1}(\mathbf{Z})=L_1\left(\mathbf{\Xi}-\mathrm{prox}_{L_1}^{\Upsilon}\left(\mathbf{\Xi}-\frac{1}{L_1}\nabla_{\mathbf{\Xi}} \Upsilon(\mathbf{Z})\right)\right), \end{equation} \begin{equation} G^{\boldsymbol{\Theta}}_{L_2}(\mathbf{Z})=L_2\left(\boldsymbol{\Theta}-\mathrm{prox}_{L_2}^{\Upsilon}\left(\boldsymbol{\Theta}-\frac{1}{L_2}\nabla_{\boldsymbol{\Theta}} \Upsilon(\mathbf{Z})\right)\right). \end{equation} Then, based on the partial gradient mappings, we have the following sufficient-increase property. \begin{lemma}\label{lem:suf_inc_prop} When $\mathbf{Z}$ is updated by the AM iteration, the following inequalities hold for $l\in\{1,2,\ldots\}$ \begin{equation}\label{ineq:inter_increase_proper} \Upsilon(\mathbf{Z}^{(l+\frac{1}{2})})-\Upsilon(\mathbf{Z}^{(l)})\geq \frac{1}{2L_1}\| G^{\mathbf{\Xi}}_{L_1}(\mathbf{Z}^{(l)}) \|^2, \end{equation} \begin{equation}\label{ineq:increase_property_theta} \Upsilon(\mathbf{Z}^{(l+1)})-\Upsilon(\mathbf{Z}^{(l+\frac{1}{2})})\geq \frac{1}{2L_2}\| G^{\boldsymbol{\Theta}}_{L_2}(\mathbf{Z}^{(l+\frac{1}{2})})\|^2. \end{equation} \end{lemma} \begin{proof} Since the converged $\mathbf{\Xi}^{(l+1)}$ for the $l^{th}$ iteration is a locally optimal point of $\mathcal{Q}1$ according to Theorem~\ref{lem:opt_xi_alg1}, $(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l)})\in \mathrm{prox}_L^{\Upsilon}(\mathbf{Z}^{(l)})$. Furthermore, the partial gradient of $\Upsilon(\mathbf{Z})$ is Lipschitz continuous with respect to $\mathbf{\Xi}$ for any $\boldsymbol{\Theta}$ satisfying constraint~\eqref{eq:simplex_theta}.
Applying the result in~\cite[Lemma 2]{J_Bolte14ConverAnalysis}, we have $\Upsilon(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l)})-\Upsilon(\mathbf{\Xi}^{(l)},\boldsymbol{\Theta}^{(l)})\geq \frac{1}{2L_1}\| G^{\mathbf{\Xi}}_{L_1}(\mathbf{Z}^{(l)}) \|^2$, which is equivalent to~\eqref{ineq:inter_increase_proper}. On the other hand, since the converged $\boldsymbol{\Theta}^{(l+1)}$ is a stationary point of $\mathcal{D}2$, we have $(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l+1)})\in \mathrm{prox}_L^{\Upsilon}(\mathbf{Z}^{(l+\frac{1}{2})})$. Similarly, we have $\Upsilon(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l+1)})-\Upsilon(\mathbf{\Xi}^{(l+1)},\boldsymbol{\Theta}^{(l)})\geq \frac{1}{2L_2}\| G^{\boldsymbol{\Theta}}_{L_2}(\mathbf{Z}^{(l+\frac{1}{2})}) \|^2$, which is equivalent to~\eqref{ineq:increase_property_theta}. \end{proof} Finally, we prove that the limit point $\mathbf{Z}^*$ is a stationary point of $\mathcal{P}2^{[m,j]}$ based on Lemma~\ref{lem:limit_point_property} and Lemma~\ref{lem:suf_inc_prop}. From Lemma~\ref{lem:limit_point_property}, there exists a subsequence of $\{\mathbf{Z}^{(l)}\}_{l\in \mathbb{N}}$ that converges to a limit point $\mathbf{Z}^*$, and $\{\Upsilon(\mathbf{Z}^{(l)})\}_{l\in \mathbb{N}}$ is a nondecreasing upper-bounded sequence. Hence, $\{\Upsilon(\mathbf{Z}^{(l)})\}_{l\in \mathbb{N}}$ must converge to some finite value and $\Upsilon(\mathbf{Z}^{(l+1)})-\Upsilon(\mathbf{Z}^{(l)})\rightarrow 0$ as $l\rightarrow \infty$. Together with Lemma~\ref{lem:suf_inc_prop}, we conclude that $G^{\mathbf{\Xi}}_{L_1}(\mathbf{Z}^{(l)})\rightarrow 0$ and $G^{\boldsymbol{\Theta}}_{L_2}(\mathbf{Z}^{(l+\frac{1}{2})})\rightarrow 0$ as $l\rightarrow \infty$, which implies that $G^{\mathbf{\Xi}}_{L_1}(\mathbf{Z}^*)= \mathbf{0}$ and $G^{\boldsymbol{\Theta}}_{L_2}(\mathbf{Z}^*)= \mathbf{0}$ due to the continuity of the partial gradient mappings $G^{\mathbf{\Xi}}_{L_1}$ and $G^{\boldsymbol{\Theta}}_{L_2}$.
Therefore, the limit point $\mathbf{Z}^*$ is a stationary point~\cite{J_Bolte14ConverAnalysis}. \bibliographystyle{IEEEtran}
\section{Introduction} During the past decade, the dissipative optomechanical coupling introduced into optomechanics by Elste, Girvin, and Clerk \cite{Elste2009} has attracted appreciable attention of theorists~\cite{huang2017,Weiss2013,Weiss2013a,Kilda2016, vyatchanin2016,nazmiev2019,Vostrosablin2014,Tarabrin2013,Xuereb2011,Tagantsev2018,tagantsev2019,mehmood2019,Khalili2016,huang2018,huang2018quad,huang2019gen,mehmood2018} and experimentalists~\cite{Li2009,Sawadsky2015,tsvirkun2015,Wu2014,meyer2016,zhang_2014}. For such a coupling, in contrast to the dispersive one, the mechanical oscillator modulates the decay rate of the cavity rather than its resonance frequency. The dissipative coupling has brought some new physics into optomechanics. For example, once this coupling is involved, the theory predicts the generation of a stable optical-spring effect that is not feedback-assisted~\cite{nazmiev2019}, a virtually full squeezing of the optical noise in a system exhibiting no optomechanical instability~\cite{tagantsev2019}, and non-feedback-assisted cooling of a mechanical oscillator under resonant excitation~\cite{Tarabrin2013}. The latter has also been documented experimentally~\cite{Sawadsky2015}. Among the predictions for dissipative-coupling-based systems, the most promising is that of very efficient laser cooling~\cite{Elste2009,Weiss2013a}. It is a phenomenon of the weak-coupling regime~\cite{marquardt2007}, where the light-pressure-induced contribution to the mechanical damping $\gamma_{\mathrm{opt}}$ is much smaller than the cavity decay rate $\gamma$. In this regime, for appreciable cooling, the phonon number can be viewed as originating from two contributions: one is due to the quantum noise in the bandwidth of the oscillator and the other is due to that in the bandwidth of the optical cavity. The former scales as $1/\gamma_{\mathrm{opt}}$ and usually dominates the cooling, while the latter, scaling as $1/\gamma$, can typically be neglected.
In a system where both dispersive and dissipative couplings are active and under a proper detuning, due to interference effects the first contribution "accidentally" vanishes~\cite{Elste2009,Weiss2013a}. As a result, the second "small" term dominates, leading to a record-low cooling limit, as was theoretically demonstrated by Weiss and Nunnenkamp \cite{Weiss2013a}. However, once the system is not ideal, e.g., because of the presence of some internal cavity loss, this limit will be pushed up~\cite{Elste2009,Weiss2013a}. The same holds for the inaccuracy of the optimized detuning $\Delta$. Keeping in mind the situation where the otherwise leading term "accidentally" vanishes, one expects these nonideality effects to be anomalously strong. We mean that, at $\gamma_{\mathrm{int}}/\gamma\ll 1$ and/or $\delta\Delta/\Delta\ll 1$ (here $\delta\Delta$ denotes the deviation of $\Delta$ from its optimal value and $\gamma_{\mathrm{int}}$ is the internal decay rate of the cavity), the idealized cooling limit may be substantially affected. Along the same lines, one may be concerned about the impact of the inaccuracy of the single-mode Langevin equation used for the calculations~\cite{Elste2009,Weiss2013a}. The point is that, in terms of more precise calculations, the contribution in question may remain nonzero at any settings. There also exists an additional limitation on the applicability of the results by Weiss and Nunnenkamp \cite{Weiss2013a}: when these are applied, one should check that (i) the system is in the weak-coupling regime and (ii) the cold friction does not make the mechanical oscillator overdamped. From the above, it becomes clear that the experimental implementation of the promising result by Weiss and Nunnenkamp \cite{Weiss2013a}, not to speak of practical technical issues, may be more demanding than just the fulfillment of the optimized settings found in Refs.~\citenum{Elste2009,Weiss2013a}.
This justifies the need to specify the range of applicability of this result and to formulate additional conditions for its practical implementation. This task is the main subject of the present paper, which is organized as follows. In Sec.~\ref{WN}, the result by Weiss and Nunnenkamp is reproduced, presented in a simple form, and an explicit criterion for its applicability is given. In Sec.~\ref{IL}, the impact of the internal cavity loss is evaluated. Section~\ref{Optimal} is devoted to the impact of the inaccuracy of the optimal settings. In Sec.~\ref{Beyond}, effects beyond the single-mode Langevin-equation accuracy are addressed. Section~\ref{comparison} discusses the dissipative-coupling-assisted protocol versus the dispersive-coupling-assisted ones. Section~\ref{Conclusions} gives a brief resume of the paper. \section{The result by Weiss and Nunnenkamp and criterion for its applicability} \label{WN} A one-sided optomechanical cavity enabled with the dispersive and dissipative optomechanical couplings is considered, the coupling constants being denoted as $g_\omega$ and $g_\gamma$, respectively. The system is pumped with strong monochromatic light (frequency $\omega_L$, photon-flux-normalized complex amplitude $A_0$). The fluctuations of the cavity field are described with the photon ladder Bose operator $\mathbf{a}$, while the fluctuations of the mechanical variable are described with the phonon ladder Bose operator $\mathbf{b}$.
These operators satisfy the following equations~\cite{Elste2009} \begin{equation} \label{alin} \frac{\partial \textbf{a}}{\partial t}+\{\gamma/2-i\Delta\}\textbf{a} =\sqrt{\gamma}\textbf{A}_{\textrm{in}}+\left[ig_\omega a_0+g_\gamma (a_0-A_0/\sqrt{\gamma})\right](\textbf{b}^\dag+\textbf{b}), \qquad a_0 = \sqrt{\gamma}A_0/(\gamma/2-i\Delta), \end{equation} \begin{equation} \label{blin} \frac{\partial \textbf{b}}{\partial t}+\left(\frac{\gamma_{\textrm{m}}}{2}+i\omega_{\textrm{m}}\right)\textbf{b} =\sqrt{\gamma_{\textrm{m}}}\textbf{b}_{\textrm{in}}+i\frac{x_{\textrm{zpf}}}{\hbar}\textbf{F}, \qquad x_{\textrm{zpf}}=\sqrt{\frac{\hbar}{2m\omega_{\textrm{m}}}}, \end{equation} where $\Delta=\omega_L -\omega_c$ is the detuning and the operator of the backaction force has the following form \begin{equation} \label{F} \frac{x_{\textrm{zpf}}}{\hbar}\textbf{F}= g_\omega a_0^*\mathbf{a}+i\frac{g_\gamma}{\sqrt{\gamma}} \left(a_0^*\textbf{A}_{\textrm{in}}-A_0^*\textbf{a}\right) +\textrm{H.c.}, \end{equation} where $\hbar$ is the reduced Planck constant, $\omega_{c}$ and $\gamma$ are the resonance frequency and the decay rate of the cavity, while $m$, $\omega_{\textrm{m}}$, and $\gamma_m$ are the effective mass, resonance frequency, and decay rate of the mechanical oscillator, respectively. Here $\textrm{H.c.}$ stands for the Hermitian conjugate.
Operator $\textbf{A}_{\textrm{in}}$ describes the vacuum noise: \begin{equation} \label{Aa} \begin{array}{cc} [\textbf{A}_{\textrm{in}}(t),\textbf{A}_{\textrm{in}}^\dag(t')]=\delta(t-t'), \qquad [\textbf{A}_{\textrm{in}}(t),\textbf{A}_{\textrm{in}}(t')]=0, \\ <\textbf{A}_{\textrm{in}}(t)\textbf{A}_{\textrm{in}}(t')> =<\textbf{A}_{\textrm{in}}^\dag(t)\textbf{A}_{\textrm{in}}(t')>=0, \\ \end{array} \end{equation} while $\textbf{b}_{\textrm{in}}$ describes the mechanical thermal noise ($n_{\mathrm{th}}$ stands for the number of thermally excited phonons) \begin{equation} \label{bin} \begin{array}{cc} [\textbf{b}_{\textrm{in}}(t),\textbf{b}_{\textrm{in}}^\dag(t')]=\delta(t-t'), \qquad [\textbf{b}_{\textrm{in}}(t),\textbf{b}_{\textrm{in}}(t')]=0, \\ <\textbf{b}_{\textrm{in}}(t)\textbf{b}_{\textrm{in}}(t')> =0, \qquad <\textbf{b}_{\textrm{in}}^\dag(t)\textbf{b}_{\textrm{in}}(t')>=n_{\mathrm{th}}\delta(t-t'), \\ \end{array} \end{equation} with $<...>$ and $[...,...]$ denoting the ensemble averaging and the commutator, respectively. The goal is to find the phonon occupation number. This is a linear problem, which, in the Fourier domain, can be solved exactly~\cite{Weiss2013,Weiss2013a}. However, according to Ref. \citenum{Weiss2013a}, an approximate solution that retains fair accuracy provides informative analytical results. The approximate procedure is as follows. In the Fourier domain, (\ref{alin}) can be solved with respect to $\mathbf{a}$. Inserting $\mathbf{a}$ into (\ref{blin}), its $\mathbf{b}$-dependent part leads to a renormalization of the mechanical susceptibility, which can be written as follows \begin{equation} \label{sus} \chi(\omega) = \frac{1}{\Gamma_M(\omega)/2-i[\omega-\Omega_M(\omega)]}. \end{equation} The other part yields the stochastic backaction force, $\mathbf{F}_{\mathrm{sb}}(t)$.
If we neglect the frequency-dependent renormalization of $\gamma$ and $\Delta$ due to the optomechanical coupling, the spectral power density of $\mathbf{F}_{\mathrm{sb}}(t)$, which is defined as \begin{equation} \label{SF} S_{FF}(\omega) = \int dt e^{i\omega t}<\mathbf{F}(t)\mathbf{F}(0)>, \end{equation} reads~\cite{Elste2009} \begin{equation} \label{SF1} S_{FF}(\omega) = \frac{|a_0|^2 g_\gamma^2}{\gamma(x_{\textrm{zpf}}/\hbar)^2} \frac{(\omega+\omega_h)^2}{(\gamma/2)^2+(\omega+\Delta)^2}, \end{equation} where \begin{equation} \label{omh} \omega_h\equiv 2\Delta +\gamma g_\omega/g_\gamma. \end{equation} The mechanical spectrum, which is defined as \begin{equation} \label{Sb} S_{bb}(\omega) = \int dt e^{i\omega t}<\mathbf{b}^\dag(t)\mathbf{b}(0)>, \end{equation} can be expressed in terms of $S_{FF}(\omega)$ and $\chi(\omega)$ as follows~\cite{Weiss2013a} \begin{equation} \label{Sb1} S_{bb}(\omega) = |\chi(-\omega)|^2[\gamma_mn_{\mathrm{th}}+(x_{\textrm{zpf}}/\hbar)^2 S_{FF}(\omega) ]. \end{equation} The relation \begin{equation} \label{n} n = <\mathbf{b}^\dag \mathbf{b}>=\int S_{bb}(\omega)d\omega/2\pi \end{equation} can be used to find the number of phonons in the system, which is denoted as $n$. Using explicit expressions for $\Gamma_M(\omega)$ and $\Omega_M(\omega)$ as well as Eqs.~(\ref{sus}), (\ref{Sb1}), (\ref{SF1}) and (\ref{n}), one can numerically evaluate the cooling of the mechanical oscillator. Commonly, to advance analytically, in the expression for $\chi(\omega)$, one replaces~\cite{footnote12} $\Omega_M(\omega)$ with $\omega_M$, which satisfies the equation $\Omega_M(\omega)=\omega$, while $\Gamma_M(\omega)$ is replaced with $\gamma_M=\Gamma_M(\omega_M)$.
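The key interference property of Eq.~(\ref{SF1}) can be verified numerically: at the detuning satisfying $2\Delta=\omega_M-\gamma g_\omega/g_\gamma$, one has $\omega_h=\omega_M$, so the backaction-force spectrum vanishes exactly at $\omega=-\omega_M$ while remaining finite at $\omega=+\omega_M$, and the quantum-noise difference $S_{FF}(\omega_M)-S_{FF}(-\omega_M)$ is positive. The parameter values below are arbitrary placeholders, and the constant prefactor of $S_{FF}$ is set to one:

```python
# Hypothetical parameters (arbitrary units); the prefactor of S_FF is dropped
gamma, omega_m = 5.0, 1.0
g_ratio = 0.4                                # assumed value of g_omega/g_gamma
Delta = (omega_m - gamma * g_ratio) / 2.0    # Fano-condition detuning
omega_h = 2 * Delta + gamma * g_ratio        # Eq. (omh); equals omega_m here

def S_FF(w):
    # Shape of Eq. (SF1), with the constant prefactor set to 1
    return (w + omega_h) ** 2 / ((gamma / 2) ** 2 + (w + Delta) ** 2)

# Antisymmetric part of the quantum noise, proportional to gamma_opt
gamma_opt_shape = S_FF(omega_m) - S_FF(-omega_m)
```

With these settings `S_FF(-omega_m)` is exactly zero while `S_FF(omega_m)` is finite, illustrating the cancellation that later makes the "small" cavity-bandwidth terms dominate the cooling limit.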
Within these approximations~\cite{Weiss2013a} \begin{equation} \label{n1} n = \frac{\gamma_m}{\gamma_M}n_{\mathrm{th}}+ \frac{|a_0|^2 g_\gamma^2\gamma^{-1}}{(\gamma+\gamma_M)^2/4+(\omega_M-\Delta)^2} \left[\frac{(\omega_h -\omega_M)^2}{\gamma_M}+ \frac{(\omega_h-\Delta)^2}{\gamma}+\frac{\gamma+\gamma_M}{4}\right]. \end{equation} The value of $\gamma_M$ calculated this way can also be obtained using the following result of the quantum-noise approach for the light-pressure-induced mechanical decay rate~\cite{marquardt2007} \begin{equation} \label{GM} \gamma_{\mathrm{opt}}\equiv\gamma_M-\gamma_m = (x_{\textrm{zpf}}/\hbar)^2[S_{FF}(\omega_M)-S_{FF}(-\omega_M)]. \end{equation} The above approximate treatment is valid if the renormalized mechanical oscillator is weakly damped, i.e. \begin{equation} \label{Mdamp} \gamma_M\ll \omega_M, \end{equation} while the optomechanical system is in the weak-coupling regime~\cite{marquardt2007} where \begin{equation} \label{iequ} \gamma_{\mathrm{opt}}\ll\gamma, \end{equation} which also practically implies \begin{equation} \label{iequ1} \gamma_M\ll\gamma. \end{equation} Obviously, the neglect of the renormalization of $\gamma$ and $\Delta$, crucial for the calculations, is justified only in the weak-coupling regime. Thus, Eqs.~(\ref{iequ}) and (\ref{Mdamp}) constitute the validity criterion for the whole theory. Equation (\ref{n1}) can be rationalized: the first term in the brackets is the contribution of the quantum noise in the bandwidth of the mechanical oscillator, whereas the second and third are conditioned by the noise in the bandwidth of the optical cavity. In the weak-coupling regime addressed, the first contribution is expected to be dominant unless some special cancellations take place. In the case of the purely dispersive coupling, i.e., at $g_\gamma\rightarrow0$ and $g_\omega\neq0$, in Eqs.(\ref{n1}), indeed only the first term in the brackets is to be kept.
This leads to a well-known result for the phonon occupation number, which, for the optimal detuning $\Delta=-\omega_M$, reads \begin{equation} \label{n2} n = \frac{n_{\mathrm{th}}+ n_{\mathrm{disp}}V} {1+V}, \qquad V\equiv\frac{|a_0|^2 g_\omega^2}{(\gamma/2)^2+4\omega_M^2}\frac{16\omega_M^2}{\gamma\gamma_m}, \end{equation} where \begin{equation} \label{ndisp} n_{\mathrm{disp}} = \frac{\gamma^2}{16\omega_M^2} \end{equation} is the minimal phonon occupation that can be reached for the dispersive-coupling-assisted sideband cooling~\cite{marquardt2007,wilson2007} under red sideband excitation. If both optomechanical couplings are active, there appears the possibility of breaking through the minimal phonon occupation number. Specifically, at $\omega_h =\omega_M $, i.e. at \begin{equation} \label{cond} 2\Delta= \omega_M-\gamma g_\omega/g_\gamma, \end{equation} the contribution of the quantum noise in the bandwidth of the mechanical oscillator vanishes due to the Fano effect~\cite{Elste2009}. As a result, the minimal phonon number is controlled by the "small" second and third terms in the brackets in Eq.~(\ref{n1}). For such a detuning, one finds~\cite{Weiss2013a} \begin{equation} \label{n3} n = \frac{\gamma_m}{\gamma_M}n_{\mathrm{th}}+ U, \end{equation} where \begin{equation} \label{Uoo} U\equiv |a_0|^2 \frac{g_\gamma^2}{\gamma^2} \end{equation} is proportional to the laser power and \begin{equation} \label{GM3} \gamma_M=\gamma_m + U\gamma_mG, \qquad G=\frac{G_0}{1+(3\omega_M/\gamma- g_\omega/g_\gamma)^2}\qquad G_0=\frac{16\omega_M^2}{\gamma \gamma_m}. \end{equation} Equation (\ref{n3}) can also be rewritten as follows \begin{equation} \label{n33} n = \frac{n_{\mathrm{th}}}{1+GU}+U.
\end{equation} Minimization of (\ref{n33}) with respect to the intensity of the pumping light yields the following minimal phonon number \begin{equation} \label{n4} n_{\mathrm{diss}} = n_{\mathrm{th}} \left(\frac{2}{\sqrt{Gn_{\mathrm{th}}}}-\frac{1}{Gn_{\mathrm{th}}}\right), \end{equation} which is reached at \begin{equation} \label{U} U =U_0\equiv \frac{\sqrt{n_{\mathrm{th}} G}-1}{G}. \end{equation} Next, since we are interested in the situation where $n_{\mathrm{diss}} \ll n_{\mathrm{th}}$, Eqs.(\ref{n4}) and (\ref{U}) can be rewritten as follows \begin{equation} \label{n5} n_{\mathrm{diss}} = 2 \sqrt{\frac{n_{\mathrm{th}}}{G}} \end{equation} and \begin{equation} \label{U1} U_0 = \frac{n_{\mathrm{diss}}}{2}. \end{equation} Further optimization is possible by manipulating the ratio of the optomechanical coupling constants~\cite{Weiss2013a}; specifically, by setting \begin{equation} \label{cond1} \gamma g_\omega/g_\gamma=3\omega_M, \end{equation} we maximize $G$ up to $G_0$. Note that (\ref{cond1}) also implies \begin{equation} \label{cond2} \Delta=-\omega_M. \end{equation} This brings us to the following minimal phonon number that can be reached in the presence of the dissipative and dispersive couplings \begin{equation} \label{ndiss} n_{\mathrm{diss}} =\frac{1}{2}\sqrt{\frac{n_{\mathrm{th}}}{Q} \frac{\gamma}{\omega_M}} \end{equation} where $Q =\omega_M/\gamma_m$ is the quality factor of the decoupled mechanical oscillator. Hereafter, referring to this result, we will use "dissipative-coupling-assisted limit" as shorthand. This cooling limit is reached at the following photon cavity occupation \begin{equation} \label{a0} |a_0|^2= \frac{n_{\mathrm{diss}}}{2}\left(\frac{\gamma}{g_{\gamma}}\right)^2.
\end{equation} One readily notices that in the bad-cavity limit, i.e., at $\gamma\gg\omega_M$, and if the system is dominated by the dissipative coupling, i.e., at $g_\omega/g_\gamma\ll 1$, $G$ is always close to $G_0$, such that (\ref{ndiss}) is valid without satisfying condition (\ref{cond1}), while the detuning is different from that given by Eq.(\ref{cond2}). One can readily find the range of applicability of the cooling limit given by Eq.(\ref{ndiss}). Combining (\ref{GM}), (\ref{GM3}), (\ref{ndisp}), and (\ref{U1}), one finds \begin{equation} \label{gamm/gamm} \frac{\gamma_{\mathrm{opt}}}{\gamma}= \frac{n_{\mathrm{diss}}}{2n_{\mathrm{disp}}}. \end{equation} Thus, for the validity of Eq.(\ref{ndiss}), condition (\ref{iequ}) requires that \begin{equation} \label{gamm/gamm1} n_{\mathrm{diss}} \ll 2n_{\mathrm{disp}} \end{equation} while condition (\ref{Mdamp}) yields \begin{equation} \label{gamm/gamm2} n_{\mathrm{diss}} \ll 2n_{\mathrm{disp}}\frac{\omega_M}{\gamma}. \end{equation} In other words, the validity of the cooling limit predicted in Ref.~\citenum{Weiss2013a} requires that this limit be appreciably deeper than the dispersive-coupling-assisted limit for the red sideband excitation (\ref{ndisp}). \section{Impact of the internal loss} \label{IL} The impact of the internal cavity loss on the Fano effect in question was discussed earlier~\cite{Elste2009,Weiss2013a}. Specifically, in Ref.~\cite{Weiss2013a}, it was pointed out that, depending on the ratio $\gamma_{\mathrm{int}}/\gamma$, the quantum noise interference becomes less perfect, and ultimately, if $\gamma_{\mathrm{int}}/\gamma\gg 1$, the force spectrum is a Lorentzian. However, as was stated in the Introduction, in view of the specifics of the system, one can expect a strong impact of the internal cavity loss on the cooling limit already at $\gamma_{\mathrm{int}}/\gamma\ll 1$. Let us show this.
The internal loss entails an additional contribution to the spectral power density of the backaction force, which can be approximated as follows~\cite{Weiss2013a} \begin{equation} \label{SFdop} S_{FF,\mathrm{int}}(\omega) = \frac{U\gamma_{\mathrm{int}}}{(x_{\textrm{zpf}}/\hbar)^2} \frac{(\gamma/2)^2+(\Delta +\gamma g_\omega/g_\gamma)^2}{(\gamma/2)^2+(\omega+\Delta)^2}. \end{equation} Strictly speaking, in this expression one should replace $\gamma$ with the total cavity decay rate $\gamma+\gamma_{\mathrm{int}}$. Since in what follows we are interested in the situation where $\gamma\gg\gamma_{\mathrm{int}}$, we will ignore this replacement. One readily checks that this contribution leads to the following generalization of (\ref{n33}): \begin{equation} \label{n30} n = \frac{n_{\mathrm{th}}+HU}{1+GU}+U \qquad H=\frac{\gamma_{\mathrm{int}}}{\gamma_m}\frac{(\gamma/2)^2+(\Delta +\gamma g_\omega/g_\gamma)^2}{(\gamma/2)^2+(\omega_M-\Delta)^2}. \end{equation} For the optimized regime given by Eqs.~(\ref{cond1}) and (\ref{cond2}), the contribution of the internal loss to the minimal phonon number via (\ref{n30}) reads \begin{equation} \label{n31} n_{\mathrm{int}} = \frac{H}{G_0}= \frac{\gamma_{\mathrm{int}}}{\gamma}n_{\mathrm{disp}}\beta, \qquad \beta =\frac{(\gamma/2)^2+16\omega_M^2}{(\gamma/2)^2+4\omega_M^2}. \end{equation} Next, the requirement $n_{\mathrm{int}}\ll n_{\mathrm{diss}}$ brings us to the conclusion that the impact of the internal loss can be neglected if \begin{equation} \label{crossover} \frac{\gamma_{\mathrm{int}}}{\gamma} \ll \frac{1}{\beta}\frac{n_{\mathrm{diss}}}{n_{\mathrm{disp}}}=n_{\mathrm{diss}}\frac{8}{\beta}\left(\frac{\omega_M}{\gamma}\right)^2. \end{equation} One readily checks that an identical estimate follows from the requirement \begin{equation} \label{doubling} HU_0\ll n_{\mathrm{th}}. 
\end{equation} Using (\ref{gamm/gamm}), Eq.~(\ref{crossover}) can also be rewritten as follows \begin{equation} \label{crossover1} \gamma_{\mathrm{int}} \ll \frac{2}{\beta}\gamma_{\mathrm{opt}}. \end{equation} This result implies that, roughly, to neglect the impact of the internal loss on cooling, the internal loss decay rate should be much smaller than the light-pressure-induced mechanical damping. Such a requirement is much more demanding than the condition $\gamma_{\mathrm{int}} \ll \gamma$ that one might naively expect. \section{Impact of inaccuracy of the optimal settings} \label{Optimal} The cooling limit given by Eq.(\ref{ndiss}) was obtained as a result of three conditions being satisfied: (i)~an optimal detuning [Eq.(\ref{cond})], (ii)~an optimal laser power [Eq.(\ref{U})], and (iii)~an optimal ratio of the coupling constants [Eq.(\ref{cond1})]. The impact of the inaccuracy of the optimal detuning can readily be evaluated by using Eq.~(\ref{n1}) to find that a small deviation of the detuning $\Delta$ from the optimal value of $(\omega_M-\gamma g_\omega/g_\gamma)/2$ by $\delta\Delta$ will lead to an additional number of phonons \begin{equation} \label{nAD} n_\Delta = \frac{U\gamma^2}{(\gamma/2)^2+(\omega_M-\Delta)^2} \frac{4\delta\Delta^2}{\gamma\gamma_M}, \end{equation} which, for the optimal settings (\ref{cond1}) and (\ref{cond2}), can be rewritten as follows \begin{equation} \label{nAD1} n_\Delta = \frac{\delta\Delta^2}{\Delta^2}\frac{(\gamma/2)^2}{(\gamma/2)^2+4\omega_M^2}. \end{equation} Next, the requirement $n_\Delta \ll n_{\mathrm{diss}}$ brings us to the conclusion that the impact of the inaccuracy of the detuning $\delta\Delta$ on the phonon number can be neglected if \begin{equation} \label{cond4} \frac{\delta\Delta}{\Delta}\ll\sqrt{ n_{\mathrm{diss}}\frac{(\gamma/2)^2+4\omega_M^2}{(\gamma/2)^2}}. 
\end{equation} Equation (\ref{n33}) readily implies that the impact of the inaccuracy of the optimal laser power on the cooling limit can be neglected if \begin{equation} \label{delaU} \frac{\delta U}{U_0}\ll 1 \end{equation} where $\delta U$ is the deviation of $U$ from its optimal value $U_0$. Equations (\ref{n5}) and (\ref{GM3}) enable evaluation of the increase of $n_{\mathrm{diss}}$ caused by a small violation of the condition $\gamma g_\omega/g_\gamma=3\omega_M$, which reads \begin{equation} \label{dn} n_g = \frac{n_{\mathrm{diss}}}{2} \left(\delta \frac{3 \omega_{\mathrm{M}}}{\gamma}\right)^2 \end{equation} where $\delta\equiv(\gamma g_\omega/g_\gamma-3\omega_M)/(3\omega_M)$, implying that the inaccuracy associated with this condition can be neglected if \begin{equation} \label{dn1} \delta\ll \frac{\sqrt{2}}{3} \frac{\gamma}{\omega_{\mathrm{M}}}. \end{equation} Conditions (\ref{cond4}), (\ref{delaU}), and (\ref{dn1}) suggest that, in the unresolved sideband regime, only the requirement on the detuning accuracy may be stringent in the case of very deep cooling (at $n_{\mathrm{diss}}\ll1$); i.e., the condition $\frac{\delta\Delta}{\Delta}\ll 1$ alone does not guarantee a negligible correction to the idealized cooling limit. As for the resolved sideband regime, the requirements on both the coupling-constant ratio and the detuning may be demanding. \section{Beyond the single-mode Langevin equation} \label{Beyond} The key element of the theory discussed is the Fano-effect-driven cancellation of the contribution to the phonon number from the quantum noise in the bandwidth of the mechanical oscillator. Such a cancellation is the result of the single-mode quantum Langevin-equation approximation. Evidently, one cannot exclude that, in more precise calculations, this contribution may remain non-zero at any settings. 
This issue can be elucidated for the case of the Michelson-Sagnac interferometer~\cite{Xuereb2011,Sawadsky2015}, which nowadays is a good candidate for an experimental implementation of the dissipative-coupling-assisted ground-state cooling. A virtually exact treatment of this system is available~\cite{Tarabrin2013} along the lines of the so-called ``input-output relations''~\cite{footnote5} approach~\cite{Buonanno2003,Danilishin2012,Khalili2016}, a method widely employed in the gravitational-wave community. The result obtained in Ref.~\citenum{Tarabrin2013} for the spectral power density of the stochastic backaction force in the signal-recycled Michelson-Sagnac interferometer can be rewritten in terms of a one-sided cavity controlled by the common action of the dissipative and dispersive coupling (see Appendix) to find \begin{equation} \label{NSF} S_{FF}(\omega) = \frac{|a_0|^2 g_\gamma^2}{\gamma(x_{\textrm{zpf}}/\hbar)^2} \frac{(\omega+\omega_h)^2+(\pi\omega_h\omega/\omega_{\mathrm{FSR}})^2}{(\gamma/2)^2+(\omega+\Delta)^2}, \end{equation} cf.~Eq.(\ref{SF1}), where $\omega_{\mathrm{FSR}}$ is the cavity free spectral range. With such a modification, the condition $\omega_h =\omega_M$ no longer leads to the cancellation in question. Thus, beyond the Langevin-equation approximation, by using (\ref{NSF}) at the optimized settings, we find the following additional contribution to the phonon number \begin{equation} \label{nL} n_\mathrm{L} =\left(\frac{3\pi}{2}\frac{\omega_M}{\omega_{\mathrm{FSR}}}\right)^2\frac{(\gamma/2)^2}{(\gamma/2)^2+4\omega_M^2}, \end{equation} implying that this contribution can be neglected if \begin{equation} \label{Lancond} \frac{\omega_M}{\omega_{\mathrm{FSR}}}\ll \frac{2}{3\pi}\sqrt{ n_{\mathrm{diss}}\frac{(\gamma/2)^2+4\omega_M^2}{(\gamma/2)^2}}. \end{equation} It is seen that this condition may be more stringent than the criterion of applicability of the single-mode Langevin equation, $\frac{\omega_M}{\omega_{\mathrm{FSR}}}\ll 1$. 
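To gauge when this multimode contribution matters, Eq.~(\ref{nL}) can be evaluated numerically. A minimal sketch, assuming illustrative values $\omega_M/\omega_{\mathrm{FSR}}=0.01$ and $\gamma=16\,\omega_M$ (not taken from a specific experiment):

```python
import math

def n_langevin(omegaM_over_FSR, gamma_over_omegaM):
    # Eq. (nL): residual phonon number beyond the single-mode
    # Langevin-equation approximation; frequencies in units of omega_M
    half_gamma_sq = (gamma_over_omegaM / 2.0) ** 2
    return ((1.5 * math.pi * omegaM_over_FSR) ** 2
            * half_gamma_sq / (half_gamma_sq + 4.0))

n_L = n_langevin(0.01, 16.0)
print(n_L)   # ~2e-3: already comparable to a deep cooling limit n_diss ~ 0.01
```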
The presence of $\omega_{\mathrm{FSR}}$ in Eq.~(\ref{nL}) suggests that this contribution may be attributed to the multimode nature of the interferometer. \section{Comparison with the dispersive-coupling-assisted protocols} \label{comparison} \subsection{Sideband cooling} An important result of Sec.\ref{IL} is that the theory by Weiss and Nunnenkamp \cite{Weiss2013a} predicts a cooling limit that is always lower than that for the dispersive coupling at the red sideband excitation. This is an exact analytical result, which is consistent with the results of numerical simulations from Ref.\citenum{Weiss2013a}. However, the application of this conclusion to a real situation should be made with reservations regarding the limitations of the applicability of this theory, which were presented above. Among these limitations, the most stringent is related to the internal cavity loss, which, even when relatively small, i.e. at $\gamma_{\mathrm{int}}\ll\gamma$, can essentially push up the cooling limit (\ref{ndiss}) to the value given by Eq.(\ref{n31}). At the same time, remarkably, in the regime dominated by the internal loss but at $\gamma_{\mathrm{int}}\ll\gamma$, the dissipative-coupling-assisted cooling still yields a minimum phonon number a factor of $\beta\gamma_{\mathrm{int}}/\gamma$, with $1<\beta<4$, smaller than the dispersive-coupling-assisted cooling limit. The cooling limit of a protocol is not its only merit. The in-cavity photon number needed to approach the limit also matters. To characterize the dispersive-coupling-assisted cooling, one can use the photon number needed to reach the phonon occupancy $2n_{\mathrm{disp}}$, i.e. twice the dispersive-coupling-assisted limit. Using (\ref{n2}), the photon number in question reads \begin{equation} \label{a01} |a_0|^2= \frac{n_{\mathrm{th}}}{Q}\frac{\omega_M}{\gamma}\frac{(\gamma/2)^2+4\omega_M^2}{\gamma^2}\left(\frac{\gamma}{g_{\omega}}\right)^2. 
\end{equation} Equation (\ref{a01}) is to be compared with Eq.(\ref{a0}), which gives the in-cavity photon number needed to reach the cooling limit (\ref{ndiss}). To have a reference point, we set $g_\omega \cong g_\gamma$. For such a setting, comparing (\ref{a01}) with (\ref{a0}) and (\ref{ndiss}), one may conclude that, for typical experimental parameters, Eq.(\ref{a0}) requires a much larger photon number. Thus, for the lower dissipative-coupling-assisted limit, the price of a higher in-cavity field has to be paid. This may question the advantage of the dissipative-coupling-assisted protocol. However, for a balanced judgment, one can compare (\ref{a01}) with the in-cavity photon number needed to reach the level of $2n_{\mathrm{disp}}$ phonons via the other protocol. Taking into account that $n_{\mathrm{disp}}$ must be much larger than $n_{\mathrm{diss}}$ and using (\ref{n33}), the aforementioned in-cavity photon number can be evaluated as follows \begin{equation} \label{a02} |a_0|^2\approx \frac{n_{\mathrm{th}}}{2Q}\frac{\omega_M}{\gamma}\left(\frac{\gamma}{g_{\gamma}}\right)^2. \end{equation} Comparing (\ref{a01}) with (\ref{a02}), one concludes that, in the sideband resolved regime where the dispersive-coupling-assisted protocol is commonly viewed as the ultimate tool, the other protocol may require a much smaller in-cavity photon number for the same cooling level. For $g_\omega \cong g_\gamma$, the gain is about $8(\omega_M/\gamma)^2$. Thus, in many aspects, the dissipative-coupling-assisted protocol looks advantageous for sideband cooling. \subsection{Feedback-assisted cooling} As is commonly recognized~\cite{Aspelmeyer2014,Elste2009,Weiss2013a}, the principal advantage of the dissipative-coupling-assisted protocol is the possibility of ground-state cooling in the unresolved sideband regime. Another cooling protocol that enables ground-state cooling in that regime is the feedback-assisted cooling via the common dispersive coupling. 
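Before turning to the feedback comparison, the gain quoted at the end of the previous subsection can be made explicit: dividing Eq.~(\ref{a01}) by Eq.~(\ref{a02}) at $g_\omega \cong g_\gamma$ gives $2[(\gamma/2)^2+4\omega_M^2]/\gamma^2 = 1/2 + 8(\omega_M/\gamma)^2$. A one-line numerical sketch of this ratio:

```python
def photon_ratio(omegaM_over_gamma):
    # Eq. (a01) divided by Eq. (a02) for g_omega = g_gamma:
    # 2[(gamma/2)^2 + 4 omega_M^2]/gamma^2 = 1/2 + 8 (omega_M/gamma)^2
    return 0.5 + 8.0 * omegaM_over_gamma ** 2

print(photon_ratio(10.0))   # resolved sideband, omega_M = 10*gamma: 800.5
print(photon_ratio(0.1))    # unresolved sideband: 0.58
```

In the resolved-sideband regime the ratio is large, i.e., the dissipative-coupling-assisted protocol needs far fewer in-cavity photons for the same occupancy.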
Let us compare these protocols. For the latter, using a well-known result~\cite{rossi2018}, ground-state cooling is possible with a phonon number that can be approximated as follows: \begin{equation} \label{fbn} n_{\mathrm{fb}} =n_{\mathrm{det}} + \frac{4}{\sqrt{\eta_{\mathrm{det}}}}n_{\mathrm{th}}n_{\mathrm{imp}}, \end{equation} where $n_{\mathrm{det}}=0.5(\sqrt{1/\eta_{\mathrm{det}}}-1)$ is the detector-controlled limit, \begin{equation} n_{\mathrm{imp}}= \frac{\gamma\gamma_m}{64|a_0|^2g_\omega^2} \end{equation} is the number of imperfection noise quanta, and $\eta_{\mathrm{det}}$ is the detector efficiency. Equation (\ref{fbn}) is to be compared with the result by Weiss and Nunnenkamp \cite{Weiss2013a} \begin{equation} \label{Mndiss} n_{\mathrm{diss}} =\frac{1}{2}\sqrt{\frac{n_{\mathrm{th}}}{Q} \frac{\gamma}{\omega_M}}. \end{equation} Upon comparing these two cooling protocols, one may notice that the $\sqrt{n_{\mathrm{th}}}$-versus-$n_{\mathrm{th}}$ difference between Eqs.~(\ref{Mndiss}) and (\ref{fbn}) makes the dissipative-coupling-assisted protocol more robust against a temperature increase. To illustrate the competitiveness of these protocols, we consider a situation where, in a real experimental setup exploiting the feedback protocol, instead of using the feedback loop one hypothetically satisfies the optimal conditions for the dissipative-coupling-assisted protocol. We take a recent experimental paper~\cite{rossi2018} reporting a record-deep feedback-assisted cooling, the experimental parameters of which read $$ n_{\mathrm{th}}\cong10^5\qquad Q=10^9\qquad \gamma/\omega_M=16\qquad \eta_{\mathrm{det}}=0.77. $$ This paper also documents the value of $n_{\mathrm{imp}}=5.8\cdot10^{-8}$, which is three orders of magnitude smaller than previously reported values. For the laser power used, the estimate (\ref{fbn}) was dominated by the detector-controlled limit, giving $n_{\mathrm{fb}} = 0.07$, while the minimal phonon number measured experimentally was about $0.3$. 
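These estimates are easy to reproduce. The following sketch evaluates $n_{\mathrm{det}}$, the second term of Eq.~(\ref{fbn}), and the limit of Eq.~(\ref{Mndiss}) for the parameters quoted above from Ref.~\citenum{rossi2018}:

```python
import math

# parameters quoted in the text from Ref. [rossi2018]
n_th, Q = 1.0e5, 1.0e9
gamma_over_omegaM = 16.0
eta_det = 0.77
n_imp = 5.8e-8

# feedback-assisted cooling, Eq. (fbn)
n_det = 0.5 * (math.sqrt(1.0 / eta_det) - 1.0)        # detector-controlled limit
n_fb_imp = 4.0 / math.sqrt(eta_det) * n_th * n_imp    # imperfection-noise term

# dissipative-coupling-assisted limit, Eq. (Mndiss)
n_diss = 0.5 * math.sqrt(n_th / Q * gamma_over_omegaM)

print(n_det, n_fb_imp, n_diss)
```

The detector-controlled term indeed dominates the feedback estimate, and the dissipative limit evaluates to $n_{\mathrm{diss}}=0.02$.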
At the same time, for the experimental parameters from this paper, the dissipative-coupling-assisted cooling protocol predicts $n_{\mathrm{diss}}=0.02$ as a cooling limit, which is lower than $n_{\mathrm{det}}=0.07$ and close to the value of the second term in (\ref{fbn}). Thus, the dissipative-coupling-assisted protocol looks competitive if the conditions for its implementation are met. One readily checks that the requirement of sufficiently low internal loss, Eq.(\ref{crossover1}), is the most demanding. For the above parameters, via (\ref{gamm/gamm}) and (\ref{ndisp}), it implies \begin{equation} \label{gamm0/gamm} \frac{\gamma_{\mathrm{int}}}{\gamma}\ll \frac{n_{\mathrm{diss}}}{2n_{\mathrm{disp}}}\approx 0.6\cdot10^{-3}. \end{equation} Clearly, this is a very demanding requirement, which probably makes it impossible to reach the cooling limit given by Eq.(\ref{ndiss}) for the system parameters from Ref.~\citenum{rossi2018}. If this requirement is not met, the cooling limit will be given by Eq.(\ref{n31}), such that ground-state cooling becomes problematic. In addition, one should realize that the implementation of the dissipative-coupling-assisted protocol may require an unrealistically high number of in-cavity photons. \section{Conclusions} \label{Conclusions} It was shown that the advanced dissipative-coupling-assisted cooling limit $n_{\mathrm{diss}}$, Eq.(\ref{ndiss}), derived in Ref.~\citenum{Weiss2013a} is valid if it is appreciably lower than the dispersive-coupling-assisted limit under the red sideband excitation, $n_{\mathrm{disp}}$, Eq.(\ref{ndisp}). Strictly, the range of applicability of this result is given by Eqs.(\ref{iequ}) and (\ref{iequ1}), which can also be rewritten as follows \begin{equation} \label{criterion} \frac{n_{\mathrm{th}}}{Q}\ll \frac{1}{16}\left(\frac{\gamma}{\omega_M}\right)^3 \qquad \mathrm{and}\qquad \frac{n_{\mathrm{th}}}{Q}\ll \frac{1}{16}\frac{\gamma}{\omega_M}. 
\end{equation} Otherwise, the light-pressure effect makes the mechanical oscillator overdamped, the weak-coupling regime no longer takes place, and the theory falls outside its range of applicability, so its results do not hold any more. As expected, the situation with the Fano-effect-driven cancellation of the otherwise leading contribution results in stringent requirements on the accuracy with which the conditions needed to reach the predicted idealized cooling limit must be satisfied. The internal cavity loss, ignored by the original theory, may affect the cooling limit already when the associated decay rate $\gamma_{\mathrm{int}}$ is much smaller than the external cavity decay rate $\gamma$: the internal cavity loss becomes relevant when $\gamma_{\mathrm{int}}$ is about the light-pressure-induced mechanical decay rate, which is much smaller than $\gamma$. Alternatively, the condition allowing one to neglect the internal loss can be written as follows \begin{equation} \label{gamm0/gamm1} \frac{\gamma_{\mathrm{int}}}{\gamma}\ll \frac{n_{\mathrm{diss}}}{2n_{\mathrm{disp}}}. \end{equation} A similar situation takes place with the accuracy of satisfying the optimized conditions for the detuning and the coupling-constant ratio. Such an inaccuracy may essentially affect the idealized cooling limit already in regimes where the relative inaccuracy of these parameters is small. It was also shown that the aforementioned Fano-effect-driven cancellation is lifted by more precise calculations. As a result, in reality, the idealized cooling limit may be substantially affected. An instructive conclusion of the paper states that, in the sideband resolved regime where the dispersive-coupling-assisted protocol is commonly viewed as the ultimate tool, the dissipative-coupling-assisted protocol may require a much smaller in-cavity photon number for the same cooling level. 
The material of the present paper clearly suggests that the dissipative-coupling-assisted cooling protocol is competitive once it is perfectly implemented, which, however, may be challenging. Here the stringent limitations on the realization of the idealized scenario, which were addressed in this paper, may be essential. \begin{acknowledgments} The author thanks M. Nagoga, G. Avakiants, and M. Olkhovich for reading the manuscript. \end{acknowledgments}
\section{INTRODUCTION} Quantum chromodynamics (QCD) is the fundamental theory describing the strong interaction, and its phase structure has been a subject of great interest in recent decades. The first-principle results from lattice QCD simulations~\cite{lattice1,lattice2} have indicated that, with increasing temperature $T$, the transition from ordinary nuclear matter to the chirally symmetric quark-gluon plasma (QGP) is a smooth crossover at small or zero chemical potential $\mu$. At large chemical potential, lattice QCD simulation, as a reliable tool for obtaining the chiral properties of QCD matter, confronts a great challenge due to the fermion sign problem~\cite{Splittorff:2007ck}, although different strategies (for reviews see, e.g., Refs.~\cite{Braun-Munzinger:2015hba,Fukushima:2013rx,Fukushima:2010bq}), such as Taylor series expansions~\cite{Allton:2005gk,Gavai:2003mf,Gavai:2008zr}, imaginary chemical potential, reweighting techniques~\cite{Fodor:2001pe,Fodor:2002km}, and the complex Langevin method~\cite{Aarts:2009uq,Klauder:1983sp}, have been developed to tackle this problem. In this context, some alternative theoretical tools, such as QCD low-energy effective models (e.g., the Nambu--Jona-Lasinio (NJL) model~\cite{Nambu:1961fr,Klevansky:1992qe,Hatsuda:1994pi}, the Polyakov-loop extended NJL (PNJL) model~\cite{Meisinger:1995ih,Roessner:2006xn,Ratti:2007jf}, the quark-meson model or linear sigma model~\cite{Schaefer:2006ds,Schaefer:2007pw,Schaefer:2004en,Lenaghan:2000ey,SchaffnerBielich:1999uj}, and the Polyakov quark-meson (PQM) model~\cite{Schaefer:2011ex,Gupta:2009fg,Schaefer:2009ui,Stiele:2016cfs}), the Dyson--Schwinger equation approach~\cite{Bashir:2012fs,Fischer:2018sdj}, and the functional renormalization group approach~\cite{Gies:2006wv,Pawlowski:2005xe,Schaefer:2006sr,Bagnuls:2000ae}, which are not restricted to small chemical potential, have been proposed to better explore the QCD phase structure at high chemical potential. 
The results from effective model calculations~\cite{Gupta:2011ez,Schaefer:2008hk} show that the chiral phase transition of the strongly interacting matter is a first-order transition at high density, and a second-order critical endpoint (CEP) can exist between the crossover line and the first-order phase transition line in the ($\mu$, $T$)-plane. Apart from the phase transition, other important information, such as the thermodynamic properties, the in-medium properties of mesons~\cite{Tawfik:2014gga,Schaefer:2008hk}, and the transport properties~\cite{Abhishek:2017pkp,Ghosh:2018xll,Singha:2017jmq} of the strongly interacting matter, has also been extensively studied in these QCD effective models. To take into account the intricacy of the realistic quark matter produced in relativistic heavy-ion collisions (HICs) at the RHIC and the LHC, different improved versions of the QCD effective models have been proposed by including the effects of the finite volume of the system~\cite{Saha:2017xjq,Bhattacharyya:2012rp,Zhang:2019gva,Magdy:2019frj,Ya-Peng:2018gkz,Deb:2020qmx,Abreu:2019czp,Shi:2018tsq,Tripolt:2013zfa,Braun:2011iz,Li:2017zny,Palhares:2009tf,Magdy:2015eda,Braun:2005fj,Braun:2010vd,Zhao:2019ruc}, the non-extensive effects in terms of long-distance correlations~\cite{Zhao:2020xob,Shen:2017etj}, the presence of magnetic fields~\cite{Fukushima:2010fe,Andersen:2014xxa,Ruggieri:2013cya,Mao:2016fha,Gatto:2010pt,Ghosh:2019lmx,Kashiwa:2011js,Andersen:2014oaa,Yu:2014xoa,Andersen:2013swa}, and the effects of electric fields~\cite{Tavares:2019mvq,Tavares:2018poq,Cao:2015dya,Ruggieri:2016lrn,Ruggieri:2016xww}, to better explore the chiral/confinement properties of the strongly interacting matter at finite temperature or quark chemical potential. 
Conventionally, in the literature, all these effective models or their improved versions are based on the idealized assumption that the constituents of quark matter are completely isotropic in momentum space in the absence of magnetic fields. However, because the geometry of the fireball created in HICs is asymmetric, the system evolves with different pressure gradients along different directions. As a result, the expansion and cooling rate along the beam direction (denoted as the longitudinal direction) is larger than that along the radial direction~\cite{Romatschke:2003ms}, and this momentum anisotropy can survive through all stages of the HICs; consequently, the parton-level momentum distribution functions may become anisotropic. Thus, it is essential to incorporate the momentum-space anisotropy induced by the rapid longitudinal expansion into the phenomenological investigation of different observables. Up to the present, extensive works have explored the effects of momentum anisotropy on the parton self-energy~\cite{Romatschke:2003ms,Schenke:2006fz,Kasmaei:2018yrr,Kasmaei:2016apv}, photon and dilepton production~\cite{Bhattacharya:2015ada,Kasmaei:2019ofu,Schenke:2006yp,Kasmaei:2018oag}, the dissociation of quarkonium~\cite{Jamal:2018mog,Burnier:2009yu,Thakur:2012eb}, the heavy-quark potential~\cite{Dumitru:2007hy,Nopoush:2017zbu}, various transport coefficients~\cite{Rath:2019vvi,Zhang:2020efz,Thakur:2017hfc,Srivastava:2015via}, and the jet quenching parameter~\cite{Giataganas:2012zy}, which are sensitive to the evolution of the QGP. The associated results have indicated that the momentum-space anisotropy has a significant effect on the observables of the QGP. However, to the best of our knowledge, so far there is no study of momentum anisotropy in the framework of effective QCD models and no research regarding the effect of momentum-space anisotropy on the chiral phase transition. 
Inspired by this fact, one major goal of the present work is to reveal how momentum anisotropy qualitatively affects the chiral phase structure as well as the transport properties of the strongly interacting matter. The present paper is a first attempt to study the effect of the momentum-space anisotropy induced by the rapid longitudinal expansion of the fireball created in HICs on the QCD chiral phase transition. We adopt the 2+1 flavor quark-meson model, which is successful in describing the mechanism of spontaneous chiral symmetry breaking, to approximate quark matter. The effect of momentum anisotropy enters the quark-meson model by substituting the isotropic (local equilibrium) distribution function in the total thermodynamical potential with the anisotropic one. This introduces one more degree of freedom, {\it viz}., the direction of anisotropy. The anisotropy parameter $\xi$, representing the degree of momentum anisotropy, or the tendency of the system to stay away from the isotropic state, is also introduced as an additional argument of the distribution function. Based on this momentum-anisotropy-dependent quark-meson model, we first explore how the momentum anisotropy affects the chiral phase diagram and the location of the CEP. Next, we investigate the thermodynamic properties and the thermal behavior of various scalar (pseudoscalar) meson masses at vanishing chemical potential in both isotropic and anisotropic quark matter. Finally, transport coefficients, such as the shear viscosity, electrical conductivity, and bulk viscosity, which are crucial for understanding the dynamical evolution of QCD matter, are also estimated in both isotropic and anisotropic quark matter. Note that we restrict ourselves here to an anisotropic system close to the isotropic local equilibrium state; consequently, the calculations of thermodynamic quantities, meson masses, and transport coefficients in the anisotropic system are methodologically similar to those in the isotropic system. 
In particular, in the small-$\xi$ limit, the anisotropic distribution can simply be expanded to linear order in $\xi$. Using this linear approximation of the anisotropic distribution, the mathematical expressions of the transport coefficients, which are obtained by solving the relativistic Boltzmann equation under the relaxation time approximation, can be explicitly separated into an equilibrium part and an anisotropic correction part~\cite{Rath:2019vvi,Zhang:2020efz,Thakur:2017hfc,Srivastava:2015via}. For $\xi\rightarrow 0$, the analytic expressions reduce to the standard expressions for the local equilibrium medium, as can be seen in Section~\ref{IV}. This paper is organized as follows. In Section~\ref{QM-Model}, we give a brief overview of the three-flavor quark-meson model. In Section~\ref{III}, the modification of the thermodynamical potential due to momentum-space anisotropy is presented. In Section~\ref{IV}, we discuss the chiral phase transition, thermodynamic properties, meson masses, and transport coefficients in both isotropic and anisotropic quark matter. In Section~\ref{summary}, we summarize the main results and give an outlook. \section{The quark-meson model}\label{QM-Model} The quark-meson model, as a successful QCD-like effective model, captures an important feature of QCD, namely, chiral symmetry breaking and its restoration at high temperature/density. The Lagrangian of the three-flavor quark-meson model presently used for our purpose is taken from Ref.~\cite{Lenaghan:2000ey}: \begin{eqnarray}\label{Lagrangian} \mathcal{L}_{\textrm{QM}} =\bar{\Psi}(i\gamma_{\mu}D^{\mu}-g\phi_5)\Psi+\mathcal{L}_{\mathrm{M}}, \end{eqnarray} where $\Psi=(u,d,s)^T$ stands for the quark field with three flavors ($N_{f}=3$) and three color degrees of freedom ($N_{c}=3$). 
The first term on the right-hand side of Eq.~(\ref{Lagrangian}) represents the interaction between the quark field and the scalar ($\sigma$) and pseudoscalar ($\pi$) fields, with a flavor-blind Yukawa coupling $g$ of the quarks to the mesons. The meson matrix is given as \begin{eqnarray} \phi_5=T_a(\sigma_{a}+i\gamma_5\pi_a), \end{eqnarray} where $T_{a}=\lambda_a/2$ with $a=0,\dots,8$ are the nine generators of the U(3) symmetry. $\lambda_{a}$ are the Gell-Mann matrices with $\lambda_{0}=\sqrt{\frac{2}{3}}\,\mathbf{1}$. $\sigma_{a}$ and $\pi_a$ denote the scalar meson nonet and the pseudoscalar meson nonet, respectively. The second term in Eq.~(\ref{Lagrangian}) is the purely mesonic contribution, $\mathcal{L}_{\mathrm{M}}$, which describes the chiral symmetry breaking pattern of the strong interaction. It is given by~\cite{Lenaghan:2000ey} \begin{eqnarray}\label{eq:meson} \mathcal{L}_{\mathrm{M}}&=&\mathrm{Tr}(\partial_\mu\phi^{\dagger}\partial^{\mu}\phi-m^2\phi^{\dagger}\phi)-\lambda_1[\mathrm{Tr}(\phi^{\dagger}\phi)]^2\nonumber\\ &&-\lambda_2\mathrm{Tr}(\phi^{\dagger}\phi)^2+c[\mathrm{Det}(\phi)+\mathrm{Det}(\phi^{\dagger})]\nonumber\\ &&+\mathrm{Tr}[H(\phi+\phi^{\dagger})], \end{eqnarray} with $\phi=T_{a}\phi_{a}=T_{a}(\sigma_{a}+i\pi_a)$ representing a complex ($3\times3$)-matrix. Explicit chiral symmetry breaking is implemented by the last term of Eq.~(\ref{eq:meson}), where $H=T_{a}h_{a}$ is a ($3\times3$)-matrix with nine external fields $h_{a}$. Explicit $U(1)_{A}$ symmetry breaking is implemented by the 't Hooft determinant term with the anomaly coefficient $c$. $m^2$ is the tree-level mass of the fields in the absence of symmetry breaking, and $\lambda_1$ and $\lambda_2$ are the two possible quartic coupling constants. 
\begin{center} \begin{table} \caption{The parameters used in our work, taken from Ref.~\cite{Schaefer:2008hk}.}\label{tb1} \begin{tabular}{cccccc} \hline\hline $m^2[\mathrm{MeV^2}]$&$h_x[\mathrm{MeV^3}]$&$h_y[\mathrm{MeV^3}]$&$\lambda_1$&$\lambda_2$&$c[\mathrm{MeV}]$\\ \hline $(342.252)^2$&$(120.73)^3$&$(336.41)^3$&1.4&46.68&4807.84\\ \hline\hline \end{tabular} \end{table} \end{center} Under the mean-field approximation~\cite{Schaefer:2008hk}, the total thermodynamic potential density of the quark-meson model at finite temperature $T$ and quark chemical potential $\mu_f$ is given by \begin{eqnarray}\label{eq:4} \Omega(T,\mu_f)= \Omega_{q\bar{q}}(T,\mu_f)+U(\sigma_{x},\sigma_{y}). \end{eqnarray} The first term $\Omega_{q\bar{q}}$ on the right-hand side of Eq.~(\ref{eq:4}) denotes the fermionic part of the thermodynamic potential~\cite{Schaefer:2008hk}: \begin{eqnarray}\label{potential} \Omega_{q\bar{q}}(T,\mu_f)&=&2N_{c}\sum_{f=u,d,s}^{}T\int_{}^{}\frac{{\rm d}^3\mathbf{p}}{(2\pi)^3}[\ln(1-f_{q,f}^0(T,\mu_{f},\mathbf{p}))\nonumber\\ &&+\ln(1-f_{\bar{q},f}^0(T,\mu_{f},\mathbf{p}))], \end{eqnarray} with the isotropic equilibrium distribution function of the quark (antiquark) for the $f$-th flavor \begin{equation} f^0_{q(\bar{q}),f}(T,\mu_{f},\mathbf{p})=\frac{1}{\exp[(E_{f}\mp\mu_{f})/T]+1}. \end{equation} Here, $E_{f}=\sqrt{p^{2}+m_f^{2}}$ is the single-particle energy with the flavor-dependent constituent quark mass $m_{f}$. The signs $\mp$ correspond to quarks and antiquarks, respectively. In the present work, a uniform quark chemical potential $\mu\equiv\mu_{u}\equiv\mu_{d}\equiv\mu_{s}$ is assumed. The breaking of $SU(2)$ isospin symmetry is not considered; consequently, the up and down quarks have approximately the same masses, i.e., $m_{u}\approx m_{d}$. In the quark-meson model, the constituent quark masses are given as \begin{equation} m_{l}=g\sigma_x/2\quad \mathrm{and}\quad m_{s}=g\sigma_y/\sqrt{2}, \end{equation} where $l$ denotes the light quarks ($l\equiv u,d$). 
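Since $\ln(1-f^0_{q(\bar{q}),f}) = -\ln(1+e^{-(E_f\mp\mu_f)/T})$, the momentum integral in Eq.~(\ref{potential}) is a standard fermionic free-energy integral that can be checked numerically. A minimal sketch for a single flavor (natural units of GeV; Simpson's rule; the temperature value is illustrative), benchmarked against the exact massless, $\mu=0$ result $\Omega_{q\bar{q}} = -7\pi^2T^4/60$ per flavor at $N_c=3$:

```python
import math

def omega_qq(T, mu, m, Nc=3):
    # fermionic part of Eq. (potential) for a single flavor, using
    # ln(1 - f^0) = -ln(1 + exp(-(E -+ mu)/T)); Simpson's rule over p
    N = 2000
    pmax = 40.0 * T + 10.0 * m     # integrand is negligible beyond this
    h = pmax / N
    total = 0.0
    for i in range(N + 1):
        p = i * h
        E = math.sqrt(p * p + m * m)
        integrand = p * p * (math.log(1.0 + math.exp(-(E - mu) / T))
                             + math.log(1.0 + math.exp(-(E + mu) / T)))
        w = 1.0 if i in (0, N) else (4.0 if i % 2 else 2.0)
        total += w * integrand
    # Omega = -2 Nc T/(2 pi^2) * Integral dp p^2 [...]
    return -2.0 * Nc * T * (total * h / 3.0) / (2.0 * math.pi ** 2)

# benchmark: massless, mu = 0 gives Omega = -7 pi^2 T^4 / 60 per flavor (Nc = 3)
T = 0.15   # GeV, illustrative
print(omega_qq(T, 0.0, 0.0), -7.0 * math.pi ** 2 * T ** 4 / 60.0)
```

For nonzero $m_f$ and $\mu$ the same routine applies, with the constituent masses $m_l=g\sigma_x/2$ and $m_s=g\sigma_y/\sqrt{2}$ inserted.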
$\sigma_x$ and $\sigma_y$ stand for the non-strange and strange chiral condensates, respectively. The Yukawa coupling $g$ is fixed to reproduce a light constituent quark mass of $m_l\approx300$~MeV. The second term, $U(\sigma_{x},\sigma_{y})$, {\it viz}., the purely mesonic potential, is given as~\cite{Schaefer:2004en,Lenaghan:2000ey,Schaefer:2011ex} \begin{eqnarray} U &=&-h_x \sigma_x-h_y \sigma_y+ \frac{m^2(\sigma^2_x+\sigma^2_y)}{2} -\frac{c\sigma^2_x \sigma_y }{2\sqrt{2}} \nonumber \\ &&+ \frac{\lambda_1 \sigma^2_x \sigma^2_y}{2} +\frac{(2 \lambda_1 +\lambda_2)\sigma^4_x}{8} + \frac{ (\lambda_1+\lambda_2)\sigma^4_y}{4}, \nonumber\\ \end{eqnarray} where the model parameters $m^2$, $h_x$, $h_y$, $\lambda_1$, $\lambda_2$, and $c$, as reported in Ref.~\cite{Schaefer:2008hk}, are shown in Table~\ref{tb1}. Finally, the behaviors of $\sigma_{x}$ and $\sigma_{y}$ as functions of temperature and quark chemical potential can be obtained by minimizing the total thermodynamic potential density, i.e., \begin{eqnarray}\label{eq:gap} \frac{\partial\Omega}{\partial \sigma_{x}}=\frac{\partial\Omega}{\partial \sigma_{y}}\bigg|_{\sigma_{x}=\bar{\sigma}_x,\sigma_{y}=\bar{\sigma}_y}=0, \end{eqnarray} with $\sigma_{x}=\bar{\sigma}_x, \sigma_{y}=\bar{\sigma}_y$ being the global minimum. \section{Thermodynamic potential with momentum anisotropy}\label{III} Due to the rapid longitudinal expansion of the partonic matter created in the HICs, an anisotropic deformation of the argument of the isotropic (equilibrium) parton distribution functions is generally used to simulate the momentum anisotropy of the QGP~\cite{Romatschke:2003ms,Schenke:2006fz,Kasmaei:2018yrr,Kasmaei:2016apv,Bhattacharya:2015ada,Kasmaei:2019ofu,Schenke:2006yp,Kasmaei:2018oag,Jamal:2018mog,Burnier:2009yu,Thakur:2012eb,Dumitru:2007hy,Nopoush:2017zbu,Rath:2019vvi,Zhang:2020efz,Thakur:2017hfc,Srivastava:2015via}. 
A special and widely used spheroidal momentum deformation introduced by Romatschke and Strickland~\cite{Romatschke:2003ms}, which is characterized by removing or adding particles along a single momentum anisotropy direction, is applied in this paper. Accordingly, the local distribution function of the $f$-th flavor quarks (antiquarks) in an anisotropic system can be obtained from the isotropic (local equilibrium) distribution function by rescaling one preferred direction in momentum space: \begin{eqnarray}\label{eq: fx} f_{aniso}^{0}(T,\mu_{f},\mathbf{p})=\frac{1}{e^{(\sqrt{\mathbf{p}^2+\xi(\mathbf{p}\cdot\mathbf{n})^2+m_f^2}\mp\mu_{f})/T}+1}. \end{eqnarray} Here, the anisotropy parameter $\xi$, which quantifies the degree of momentum-space anisotropy, can be defined as \begin{eqnarray} \xi=\frac{\left\langle\mathbf{p}_{T}^2\right\rangle}{2\langle p_{L}^2\rangle}-1, \end{eqnarray} where $p_{L}$ and $\mathbf{p}_{T}$ are the components of momentum parallel and perpendicular to the direction of anisotropy, $\mathbf{n}$, respectively. We parametrize $\mathbf{p}=(p\sin\theta\cos\phi,p\sin \theta\sin\phi,p\cos\theta)$, with the shorthand $|\mathbf{p}|\equiv p$, and $\mathbf{n}=(\sin\alpha,0,\cos\alpha)$, where $\alpha$ is the angle between $\mathbf{n}$ and the beam ($z$) axis. Accordingly, $(\mathbf{p}\cdot\mathbf{n})^2=p^2(\sin\theta\cos\phi\sin\alpha+\cos\theta\cos\alpha)^2\equiv p^2c(\theta,\phi,\alpha)$. Note that $\xi>0$ corresponds to a contraction of the particle distribution in the direction of anisotropy, whereas $-1<\xi<0$ stands for a stretching of the particle distribution in the direction of anisotropy. If the system is close to an ideal massless parton gas and $\xi$ is small, $\xi$ is also related to the ratio of shear viscosity to entropy density $\eta/s$ and the proper time $\tau$ of the medium. 
The relation for one-dimensional Bjorken expansion in the Navier-Stokes limit is given as~\cite{Asakawa:2006jn} \begin{eqnarray} \xi=\frac{10}{T\tau}\frac{\eta}{s}. \end{eqnarray} This implies that a non-vanishing shear viscosity combined with a finite momentum relaxation rate in an expanding system can also contribute to the momentum-space anisotropy. At the RHIC energy, with the critical temperature $T_{c}\approx160$~MeV, $\tau\approx6$~fm/c and $\eta/s=1/4\pi$, we can obtain $\xi\approx 0.3$. In this work, we assume the system has a small deviation from momentum-space isotropy; therefore the value of $\xi$ is small ($|\xi|\ll1$) and Eq.~(\ref{eq: fx}) can be expanded up to linear order in $\xi$, \begin{eqnarray}\label{eq:f_aniso} f_{aniso}^0(\mathbf{p})&\approx&f^0_{q,f}-\frac{\xi(\mathbf{p\cdot\mathbf{n}})^2}{2E_{f}T}e^{(E_{f}-\mu_{f})/T}f_{q,f}^{02}\nonumber\\ &=&f_{q,f}^0-\frac{\xi(\mathbf{p\cdot\mathbf{n}})^2}{2E_{f}T}f_{q,f}^{0}(1-f_{q,f}^0). \end{eqnarray} By replacing the isotropic distribution functions in Eq.~(\ref{potential}) with Eq.~(\ref{eq:f_aniso}), we finally obtain the $\xi$-dependent thermodynamic potential density of the fermionic part \begin{eqnarray}\label{potential-xi} \begin{aligned} &\Omega_{q\bar{q}}= 2N_{c}\sum_{f}^{}\int_{}^{}\frac{T{\rm d}^3\mathbf{p}}{(2\pi)^3}\nonumber\\ &\left\{\ln(1-f_{q,f}^0+\frac{\xi p^2c(\theta,\phi,\alpha)}{2E_{f}T}f_{q,f}^{0}(1-f_{q,f}^0))\right.\\ &\phantom{=\;\;}\left.+\ln(1-f_{\bar{q},f}^0+\frac{\xi p^2c(\theta,\phi,\alpha)}{2E_{f}T}f_{\bar{q},f}^{0}(1-f_{\bar{q},f}^0))\right\}. \end{aligned}\nonumber\\ \end{eqnarray} Similar to the studies regarding the finite-size effect~\cite{Saha:2017xjq} and the non-extensive effect~\cite{Zhao:2020xob}, we also treat the anisotropy parameter $\xi$ as a thermodynamic argument on the same footing as $T$ and $\mu$, and do not modify the usual quark-meson model parameters in the presence of momentum anisotropy. 
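The accuracy of the linear-order expansion in Eq.~(\ref{eq:f_aniso}) can be checked numerically against the exact Romatschke--Strickland form of Eq.~(\ref{eq: fx}). A small Python sketch, with illustrative values for $T$, $m_f$, and $\xi$ (our own choices, not fitted quantities) and $\mathbf{n}$ taken along the $z$-axis:

```python
import numpy as np

T, mu, m = 150.0, 0.0, 300.0   # MeV; illustrative values
xi = 0.3                        # small anisotropy parameter

def f_exact(p, ctheta):
    """Exact RS distribution with n along the z-axis (p.n = p*cos(theta))."""
    E_aniso = np.sqrt(p**2 + xi*(p*ctheta)**2 + m**2)
    return 1.0/(np.exp((E_aniso - mu)/T) + 1.0)

def f_linear(p, ctheta):
    """Expansion to linear order in xi."""
    E = np.sqrt(p**2 + m**2)
    f0 = 1.0/(np.exp((E - mu)/T) + 1.0)
    return f0 - xi*(p*ctheta)**2/(2*E*T)*f0*(1.0 - f0)

p = np.linspace(1.0, 2000.0, 200)
for ct in (0.0, 0.5, 1.0):
    err = np.max(np.abs(f_exact(p, ct) - f_linear(p, ct)))
    print(f"cos(theta) = {ct}: max |exact - linear| = {err:.2e}")
```

The residual is $\mathcal{O}(\xi^2)$ and stays at the per-mille level even for $\xi=0.3$, supporting the small-$\xi$ treatment used below.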
Replacing the fermionic thermodynamic potential in Eq.~(\ref{eq:gap}) with Eq.~(\ref{potential-xi}), we can finally obtain the $\xi$-dependent chiral condensates at finite temperature and quark chemical potential. \begin{center} \begin{table} \caption{The chiral critical temperatures of the non-strange condensate, $T^{\chi}_c$, and the strange condensate, $T^{\chi}_s$, at vanishing quark chemical potential for different anisotropy parameters.}\label{tb2} \begin{tabular}{ccccc} \hline\hline $ \xi $&$-0.4$&$0$&$0.2$&$0.4$\\ \hline $T^{\chi}_c(\mathrm{MeV})$&137&146&152&159\\ \hline $T^{\chi}_s(\mathrm{MeV})$&233&248&258&270\\ \hline\hline \end{tabular} \end{table} \end{center} \section{results and discussions}\label{IV} \subsection{phase transition and phase diagram} In the 2+1 flavor quark-meson model, the chiral condensates of both light and strange quarks can be regarded as the order parameters to analyze the nature of the chiral phase transition. The anisotropy parameters considered here are taken as $\xi=-0.4,~0,~0.2,~0.4$, although the value of $\xi$ in realistic HICs always remains positive in sign. In Fig.~\ref{Fig:Condensate}, the temperature $T$ dependences of the non-strange chiral condensate $\sigma_{x}$ and strange chiral condensate $\sigma_{y}$ for both isotropic and anisotropic quark matter at vanishing quark chemical potential are plotted. For $T=0$~MeV, $\sigma_{x}^0\approx92.4~$MeV and $\sigma_{y}^0\approx94.5~$MeV. As can be seen, $\sigma_{x}$ and $\sigma_{y}$ in both isotropic and anisotropic quark matter decrease continuously with increasing temperature. This means that at vanishing quark chemical potential, the restoration of chiral symmetry for (an-)isotropic quark matter is always a crossover phase transition. Moreover, the restoration of chiral symmetry in the strange sector is always slower than that in the non-strange sector. 
As $\xi$ increases, the values of $\sigma_{x}$ and $\sigma_{y}$ increase and their melting behaviors become smoother. This shows that an increase of the anisotropy parameter tends to delay the chiral symmetry restoration. \begin{figure} \includegraphics[width=0.47\textwidth]{sigma.pdf} \caption{\label{Fig:Condensate}The temperature dependences of the non-strange chiral condensate $\sigma_{x}$ (upper panel) and strange chiral condensate $\sigma_{y}$ (lower panel) at vanishing quark chemical potential for both isotropic ($\xi=0$, blue dashed lines) and anisotropic ($\xi=-0.4$, orange dotted-dashed lines; $\xi=0.2$, red solid lines; $\xi=0.4$, green wide dashed lines) quark matter in the quark-meson model. The values of $\sigma_{x}$ and $\sigma_{y}$ in the vacuum are approximately 92.4~MeV and 94.5~MeV, respectively.} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{Plot-Chi.pdf} \caption{\label{Fig:Chi}The temperature dependences of the susceptibilities in the non-strange sector $\chi_{l}$ (upper panel) and in the strange sector $\chi_{s}$ (lower panel) at $\mu=0$~MeV for both isotropic ($\xi=0$, blue dashed line) and anisotropic ($\xi=-0.4$, orange dotted-dashed line; $\xi=0.2$, red solid line; $\xi=0.4$, green wide dashed line) quark matter in the quark-meson model.} \end{figure} In order to obtain the chiral critical temperature, we introduce the susceptibilities of light quarks $\chi_{l}$ and strange quarks $\chi_{s}$, which are defined as \begin{eqnarray} \chi_{l}=-\frac{\partial\sigma_{x}}{\partial T}, \ \ \ \ \ \chi_{s}=-\frac{\partial\sigma_{y}}{\partial T}. \end{eqnarray} The thermal behaviors of both $\chi_{l}$ and $\chi_{s}$ are presented in Fig.~\ref{Fig:Chi}. We can see that $\chi_{l}$ and $\chi_{s}$ peak at particular temperatures. The peak position of $\chi_{l}$ determines the critical temperature $T^{\chi}_{c}$ for the chiral transition in the non-strange sector. 
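This peak-locating procedure can be illustrated with a minimal sketch: given a crossover-shaped condensate curve (here a hypothetical $\tanh$ profile, not the actual model solution), the susceptibility $\chi=-\partial\sigma/\partial T$ is evaluated by finite differences and its maximum identifies $T_c$:

```python
import numpy as np

# Hypothetical crossover curve sigma(T), for illustration only:
sigma0, Tc_true, width = 92.4, 146.0, 20.0   # MeV

T = np.linspace(50.0, 300.0, 2001)
sigma = 0.5*sigma0*(1.0 - np.tanh((T - Tc_true)/width))

# chi_l = -d sigma_x / dT, evaluated by central finite differences
chi = -np.gradient(sigma, T)

Tc_est = T[np.argmax(chi)]                   # peak of chi recovers T_c
print(f"estimated T_c = {Tc_est:.1f} MeV")
```

For the smooth $\tanh$ profile the peak sits exactly at the inflection point, so the estimate reproduces the input $T_c$ up to the grid spacing.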
In contrast to $\chi_{l}$, $\chi_{s}$ has two peaks in the entire temperature domain of interest. The temperature coordinate of the first peak of $\chi_{s}$ is almost the same as that of $\chi_{l}$, while the location of the second, broader peak of $\chi_{s}$ determines the critical temperature for the chiral transition in the strange sector, $T^{\chi}_{s}$. The chiral critical temperature $T_{c}^{\chi}$ at vanishing quark chemical potential is the starting point of the crossover line in the QCD chiral phase diagram. Furthermore, these chiral critical temperatures are sensitive to the variation of $\xi$. As $\xi$ increases, $T^{\chi}_{c,s}$ shift towards higher temperatures and the peak heights of ${\chi}_{l,s}$ decrease. The exact values of both $T^{\chi}_{c}$ and $T^{\chi}_{s}$ for different anisotropy parameters are listed in Table~\ref{tb2}. Compared to the case of $\xi=0$, the chiral critical temperatures $T^{\chi}_c$ and $T^{\chi}_s$ decrease by approximately $6\%$ for the case of $\xi=-0.4$. For the cases of $\xi=0.2$ and 0.4, both $T^{\chi}_c$ and $T^{\chi}_s$ increase by approximately $4\%$ and $9\%$, respectively. \begin{figure} \includegraphics[width=0.47\textwidth]{U.pdf} \caption{\label{Fig:condensate-u} The temperature dependence of the non-strange chiral condensate at $\mu=150~\mathrm{MeV}$ (upper panel), $\mu=200~\mathrm{MeV}$ (middle panel) and $\mu=250~\mathrm{MeV}$ (lower panel) in quark matter with different anisotropy parameters: $\xi=-0.4$ (orange dotted-dashed lines), 0.0 (blue dashed lines), 0.2 (red solid lines) and 0.4 (green wide dashed lines).} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{Plot-CEP.pdf} \caption{\label{Fig:CEP} Chiral phase diagram for different anisotropy parameters in the quark-meson model. The solid lines represent the first-order phase transition curves, the dashed lines denote the crossover transition curves, and the solid dots represent the positions of the CEP ($\mu_{CEP},~T_{CEP}$). 
} \end{figure} \begin{figure*} \includegraphics[width=7.in,height=2.in]{Them.pdf} \caption{\label{fig:thermodynamics} The temperature dependences of the scaled pressure $P/T^4$ (left panel), scaled entropy density $s/T^3$ (middle panel), and scaled energy density $\epsilon/T^4$ (right panel) for $\mu=0~\mathrm{MeV}$ in quark matter with different anisotropy parameters: $\xi=-0.4$ (orange dotted-dashed lines), 0.0 (blue dashed lines), 0.2 (red solid lines) and 0.4 (green wide dashed lines).} \end{figure*} Next, we extend our exploration to finite quark chemical potential to analyze the effect of momentum anisotropy on the structure of the QCD phase diagram. In Fig.~\ref{Fig:condensate-u}, the temperature dependence of the non-strange chiral condensate $\sigma_{x}$ for both isotropic and anisotropic quark matter at different quark chemical potentials (namely, $\mu=$150~MeV,~200~MeV,~250~MeV) is plotted. At $\mu=150~$MeV, the chiral symmetry restoration for different $\xi$ still takes place as a crossover phase transition. For $\mu=200~$MeV, the value of $\sigma_{x}$ in the anisotropic quark matter with $\xi=-0.4$ drops from 60~MeV to 23~MeV, and the associated susceptibility diverges at $T=90~$MeV, which signals the appearance of a first-order phase transition. For $\mu=250~$MeV, the discontinuity of $\sigma_{x}$ (i.e., the first-order phase transition) also occurs for $\xi=-0.4,~0$ and 0.2, whereas at $\xi=0.4$ the phase transition is still a smooth crossover. Thus, for the anisotropic matter with $\xi=0.4$, the first-order phase transition sets in at a higher quark chemical potential. Accordingly, the chiral phase diagram can be mapped out by locating $T^{\chi}_{c}$ over a wide range of quark chemical potential. The point at which the first-order phase transition line ends and changes into a crossover is the QCD critical endpoint (CEP), at which the phase transition is of second order. 
In Fig.~\ref{Fig:CEP}, the 2+1 flavor chiral phase diagram in the ($\mu$, $T$)-plane of the quark-meson model, including the effect of momentum-space anisotropy, is presented. Along the first-order phase transition line (crossover phase transition line), the chiral critical temperature rises from zero up to the CEP temperature (from $T_{CEP}$ up to $T_{c}^{\chi}(\mu=0)$), whereas the critical quark chemical potential decreases from $\mu_{c}(T=0)$ to $\mu_{CEP}$ (from $\mu_{CEP}$ to zero). We observe that the phase boundary in the ($\mu$, $T$)-plane of the quark-meson model phase diagram is shifted to higher values of $\mu$ and $T$ with increasing anisotropy parameter. We can also clearly see that the position of the CEP depends significantly on the variation of the momentum anisotropy parameter. As $\xi$ increases, the location of the CEP shifts to the higher-$\mu$ and smaller-$T$ domain, which is similar to the study of the non-extensive effect in the linear sigma model~\cite{Shen:2017etj}. A similar phenomenon is also observed in the literature analyzing finite-size effects on the chiral phase transition~\cite{Tripolt:2013zfa,Li:2017zny,Palhares:2009tf,Magdy:2015eda,Zhao:2019ruc}. In Ref.~\cite{Tripolt:2013zfa}, when the system size is reduced to 4 fm, the CEP in the quark-meson model vanishes and the whole chiral phase boundary becomes a crossover curve. Based on this result, we deduce that as $\xi$ increases further, the CEP may disappear. In this work, for $\xi=-0.4,~0,~0.2,~0.4$, the location of the CEP is at $(T_{CEP},~\mu_{CEP})=(100,~174)$~MeV, (91,~222)~MeV,~(84,~247)~MeV and (79,~270)~MeV, respectively. The value of $\mu_{CEP}$ from $\xi=-0.4$ to $\xi=0.4$ increases by about 50\%, whereas the value of $T_{CEP}$ decreases by about 20\%. This means that the influence of momentum-space anisotropy on the quark chemical potential coordinate of the CEP is more prominent than that on the temperature coordinate of the CEP. 
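The quoted relative shifts follow directly from the CEP coordinates listed above:

```python
# CEP coordinates (T_CEP, mu_CEP) in MeV quoted in the text for
# xi = -0.4, 0, 0.2, 0.4:
cep = {-0.4: (100, 174), 0.0: (91, 222), 0.2: (84, 247), 0.4: (79, 270)}

(T_lo, mu_lo) = cep[-0.4]
(T_hi, mu_hi) = cep[0.4]
dmu = (mu_hi - mu_lo)/mu_lo*100   # relative change of mu_CEP
dT = (T_hi - T_lo)/T_lo*100       # relative change of T_CEP
print(f"mu_CEP: {dmu:+.0f}%  T_CEP: {dT:+.0f}%")
```

This yields roughly $+55\%$ for $\mu_{CEP}$ and $-21\%$ for $T_{CEP}$, i.e., the quoted "about 50\%" growth and "about 20\%" drop.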
An opposite trend is found in the study of the finite volume effect~\cite{Tripolt:2013zfa}, where the temperature coordinate of the CEP in the quark-meson model appears to be affected more strongly by the finite volume than its quark chemical potential coordinate. \subsection{QCD thermodynamic quantities} Let us now study the influence of the anisotropy parameter $\xi$ on the thermodynamics at vanishing quark chemical potential. The $T$- and $\xi$-dependent pressure $P(T,\xi)$, which is derived from the thermodynamic potential, can be given as \begin{eqnarray} P(T,\xi)=-\Omega(T,\xi), \end{eqnarray} with the vacuum normalization $P(0,\xi)=0$. The entropy density $s$ and energy density $\epsilon$ are defined as \begin{eqnarray}\label{sq} s(T,\xi)=-\frac{\partial\Omega(T,\xi)}{\partial T} \end{eqnarray} and \begin{eqnarray} \epsilon(T,\xi)=-P(T,\xi)+Ts(T,\xi), \end{eqnarray} respectively. In Fig.~\ref{fig:thermodynamics}, the variations of the scaled pressure $P/T^{4}$, scaled entropy density $s/T^{3}$, and scaled energy density $\epsilon/T^{4}$ with respect to temperature in the quark-meson model for both isotropic and anisotropic quark matter are presented. As can be seen, the thermal behaviors of $P/T^{4}$, $s/T^{3}$, and $\epsilon/T^{4}$ for anisotropic quark matter are qualitatively similar to those for the isotropic system. To be specific, with increasing temperature, $P/T^{4}$, $s/T^{3}$, and $\epsilon/T^{4}$ first rise rapidly and then tend towards a saturation value. At high enough temperature, the limit values of $P/T^4$, $s/T^3$, and $\epsilon/T^{4}$ in the case of $\xi=-0.4$ stabilize approximately at 4.0, 16.5, and 12.5, respectively, although all these values are lower than their respective QCD Stefan-Boltzmann (SB) limit values: $\frac{P_{SB}}{T^4}=(N_{c}^2-1)\frac{\pi^2}{45}+N_{c}N_{f}\frac{7\pi^2}{180}\simeq5.2$, $\frac{s_{SB}}{T^3}=\frac{4P_{SB}}{T^4}\simeq20.8,~\frac{\epsilon_{SB}}{T^4}=\frac{3P_{SB}}{T^4}\simeq15.6$. 
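The quoted Stefan--Boltzmann values can be reproduced directly for $N_c=3$ colors and $N_f=3$ massless flavors:

```python
import numpy as np

Nc, Nf = 3, 3
# QCD Stefan-Boltzmann limits: gluon term + quark/antiquark term
P_SB = (Nc**2 - 1)*np.pi**2/45 + Nc*Nf*7*np.pi**2/180   # P/T^4
s_SB = 4*P_SB                                           # s/T^3   (s = dP/dT)
eps_SB = 3*P_SB                                         # eps/T^4 (eps = 3P)
print(f"P/T^4 = {P_SB:.1f}, s/T^3 = {s_SB:.1f}, eps/T^4 = {eps_SB:.1f}")
```

The output reproduces the $\simeq5.2$, $20.8$, and $15.6$ quoted in the text.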
From Fig.~\ref{fig:thermodynamics} we can also see that the limit values of these thermodynamic quantities at high enough temperature are still decreasing functions of $\xi$, which is opposite to their qualitative behavior with the non-extensive parameter $q$. In Ref.~\cite{Zhao:2020xob}, at high temperature, the limit values of these scaled thermodynamic quantities increase as $q$ increases. Moreover, their features with $\xi$ are significantly different from those with the finite volume effect. For example, Refs.~\cite{Magdy:2015eda,Bhattacharyya:2012rp} have indicated that $P/T^4$ at low temperature decreases with decreasing volume, while with increasing temperature it quickly saturates to the infinite-volume value; in other words, these thermodynamic quantities are insensitive to volume changes in the high temperature domain. The squared speed of sound $c_{s}^2$, an important quantity in the HICs, is also studied in the present work. It is defined by \begin{eqnarray} c_{s}^2(T,\xi)=\frac{\partial P}{\partial \epsilon}\bigg|_{V}=\frac{\partial P}{\partial T}\bigg|_{V}\bigg/\frac{\partial\epsilon}{\partial T}\bigg|_{V}=\frac{s}{C_{V}}, \end{eqnarray} with the specific heat at constant volume $V$ \begin{eqnarray} C_{V}(T,\xi)=\frac{\partial \epsilon}{\partial T}\bigg|_{V}=-T\frac{\partial^2\Omega}{\partial T^2}\bigg|_{V}. \end{eqnarray} As shown in the upper panel of Fig.~\ref{Fig:CV-Cs}, the scaled specific heat $C_{V}/T^{3}$ first rises rapidly with increasing temperature, reaches a maximum near the chiral critical temperature $T^{\chi}_c$, then decreases and eventually remains constant. Similar to $P/T^{4}$, $s/T^{3}$, and $\epsilon/T^{4}$, the limit value of $C_{V}/T^{3}$ at high temperature also decreases as $\xi$ increases. The peak of $C_{V}/T^{3}$ decreases as $\xi$ increases; in other words, as $\xi$ increases, the critical behavior of $C_{V}/T^{3}$ is smoothed out. 
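The recovery of $c_s^2$ towards $1/3$ can be contrasted with the massless ideal-gas limit, where $P\propto T^4$ forces $c_s^2=1/3$ identically. A minimal finite-difference sketch of the defining relation $c_s^2=(\partial P/\partial T)/(\partial\epsilon/\partial T)$:

```python
import numpy as np

# Massless ideal gas: P = a T^4; the overall constant a drops out of c_s^2.
T = np.linspace(100.0, 400.0, 3001)   # MeV
P = T**4
s = np.gradient(P, T)                 # s = dP/dT
eps = -P + T*s                        # eps = -P + T s  -> 3 T^4
cs2 = np.gradient(P, T)/np.gradient(eps, T)
print(f"c_s^2 = {cs2[1500]:.4f}")     # -> 1/3 in the grid interior
```

Any deviation of the model's $c_s^2$ from $1/3$ therefore traces the interaction and mass scales, not the numerical procedure.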
From the lower panel of Fig.~\ref{Fig:CV-Cs} we see that the thermal behavior of the squared speed of sound $c_{s}^{2}$ for $\xi=-0.4$ exhibits a sharp drop near the corresponding chiral critical temperature $T_{c}^{\chi}$, and then increases rapidly up to the ideal gas value of $1/3$. Moreover, as $\xi$ increases, the dip structure of $c_{s}^{2}$ is gradually weakened and the location of its minimum shifts to higher temperatures, which is qualitatively similar to $C_V/T^3$. At high temperature, we can see that $c_{s}^2$ is nearly unaffected by $\xi$, because the reduction in the entropy density and the increase in the inverse specific heat almost cancel each other out. The literature on the finite-size effect~\cite{Bhattacharyya:2012rp,Saha:2017xjq} and the non-extensive effect~\cite{Zhao:2020xob} in the PNJL model has also indicated that as the system size $L$ (non-extensive parameter $q$) decreases (increases), the critical behavior of $c_{s}^2$ is gradually diluted and even vanishes. Therefore, these thermodynamic results again emphasize that the increase of $\xi$ can hinder the restoration of chiral symmetry. \subsection{ Meson mass } In this part, we study the chiral structure of the scalar ($J^P=0^+$) and pseudoscalar ($J^P=0^-$) meson masses at vanishing quark chemical potential. A detailed procedure for calculating meson masses at finite temperature and quark chemical potential in the quark-meson model can be found in Ref.~\cite{Schaefer:2008hk}. Here, we just sketch the outline of the related computation. 
In quantum field theory, the scalar and pseudoscalar meson masses can generally be obtained from the second derivative of the temperature- and quark chemical potential-dependent thermodynamic potential density $\Omega(T,\mu_{f})$ with respect to the corresponding scalar fields $\alpha_{S,a}=\sigma_{a}$ and pseudoscalar fields $\alpha_{P,a}=\pi_{a}$ $(a=0,\ldots,8)$, which can be expressed as~\cite{Schaefer:2008hk} \begin{equation}\label{eq:m^2_{i,ab}} m_{i,ab}^2=\frac{\partial^2\Omega(T,\mu_{f})}{\partial\alpha_{i,a}\partial\alpha_{i,b}}\bigg|_{\mathrm{min}}=(m_{i,ab}^{\mathrm{M}})^2+(\delta m_{i,ab}^{\mathrm{T}})^2, \end{equation} where the subscript $i=S(P)$ denotes the scalar (pseudoscalar) mesons. The first term on the right-hand side of Eq.~(\ref{eq:m^2_{i,ab}}) is the vacuum mass-squared matrix calculated from the second derivative of the purely mesonic potential. The second term represents the modification of the mass-squared matrix due to the fermionic thermal correction at finite temperature and quark chemical potential, which in an anisotropic system can be written as \begin{widetext} \begin{equation}\label{eq:mass} \begin{aligned} (\delta m_{i,ab}^{T})^2&= \frac{\partial^2\Omega_{q\bar{q}}(T,\mu_{f},\xi)}{\partial\alpha_{i,a}\partial\alpha_{i,b}}\\ &=2N_{c}\sum_{f=l,s}\int\frac{dp}{4\pi^2}\frac{p^2}{E_{f}}\left\{\left[f_{q,f}^0\left(m^2_{f,ab}-\frac{m^2_{f,a}m_{f,b}^2}{2E_{f}^2}\right)-\frac{f_{q,f}^0(1-f_{q,f}^0)}{2E_{f}T}m_{f,a}^2m_{f,b}^2\right]\right.\\ &\phantom{=\;\;}\left.\times\left[1-\frac{\xi p^2}{6E_{f}T}(1-f_{q,f}^0+\frac{T}{E_{f}})\right]+\frac{\xi p^2f_{q,f}^0}{12E_{f}^2T^2}m_{f,a}^2m_{f,b}^2\left[\frac{2T^2}{E_{f}^2}+\frac{T}{E_{f}}-\frac{Tf^0_{q,f}}{E_{f}}-f_{q,f}^0(1-f_{q,f}^0)\right]+q\rightarrow\bar{q}\right\} . 
\end{aligned}\\ \end{equation} \end{widetext} The derivatives of the squared constituent quark masses with respect to the meson fields, $\partial m_{f}^2/\partial\alpha_{i,a}\equiv m_{f,a}^2$ and $\partial^2 m_{f}^2/(\partial\alpha_{i,a}\partial\alpha_{i,b})\equiv m_{f,ab}^2$, are listed for the different flavors in Table III of Ref.~\cite{Schaefer:2008hk}. When $\xi=0$, Eq.~(\ref{eq:mass}) reduces to the result for an isotropic system. The squared masses of the four scalar meson states are then given as~\cite{Tawfik:2014gga,Schaefer:2008hk,Lenaghan:2000ey} \begin{eqnarray} m_{a_{0}}^2&=&(m_{a_{0}}^\mathrm{M})^2+(\delta m_{S,11}^T)^2,\label{eq:mao} \\ m_{\kappa}^2&=&(m_{\kappa}^\mathrm{M})^2+(\delta m_{S,44}^T)^2, \\ m_{\sigma}^2&=&m_{S,00}^2\cos^2\theta_{S}+m_{S,88}^2\sin^2\theta_{S}\nonumber\\ &&+2m_{S,08}^2\sin\theta_{S}\cos\theta_{S},\\ m_{f_{0}}^2&=&m_{S,00}^2\sin^2\theta_{S}+m_{S,88}^2\cos^2\theta_{S}\nonumber\\ &&-2m_{S,08}^2\sin\theta_{S}\cos\theta_{S}. \end{eqnarray} And the four pseudoscalar meson masses are \begin{eqnarray} m_{\pi}^2 &=& (m_{\pi}^{\mathrm{M}})^2+(\delta m_{P,11}^T)^2,\\ m_{K}^2&=&(m_{K}^{\mathrm{M}})^2+(\delta m_{P,44}^T)^2,\\ m_{\eta^{'}}^2&=& m_{P,00}^2\cos^2\theta_{P}+m_{P,88}^2\sin^2\theta_{P}\nonumber\\&&+2 m_{P,08}^2\sin\theta_{P}\cos\theta_{P},\\ m_{\eta}^2&=& m_{P,00}^2\sin^2\theta_{P}+m_{P,88}^2\cos^2\theta_{P}\nonumber\\&&-2 m_{P,08}^2\sin\theta_{P}\cos\theta_{P},\label{eq:meta} \end{eqnarray} where the mixing angles $\theta_{S(P)}$ read \begin{eqnarray} \tan2\theta_{i}=\frac{2m_{i,08}^2}{m_{i,00}^2-m_{i,88}^2},\quad i=S,P, \end{eqnarray} and $m_{i,00/88/08}^2=(m_{i,00/88/08}^\mathrm{M})^2+(\delta m_{i,00/88/08}^{T})^2$. 
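The $\sigma$--$f_0$ (and $\eta$--$\eta'$) formulas above are simply the diagonalization of the symmetric $2\times2$ mass-squared matrix in the $00$--$88$ basis, as a short numerical check confirms (the matrix entries below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical 00/88 mass-squared matrix entries (MeV^2), illustration only
m00, m88, m08 = 0.60e6, 0.30e6, 0.12e6

theta = 0.5*np.arctan2(2*m08, m00 - m88)      # mixing angle
m_heavy = (m00*np.cos(theta)**2 + m88*np.sin(theta)**2
           + 2*m08*np.sin(theta)*np.cos(theta))
m_light = (m00*np.sin(theta)**2 + m88*np.cos(theta)**2
           - 2*m08*np.sin(theta)*np.cos(theta))

# The rotated masses are the eigenvalues of the symmetric 2x2 matrix
evals = np.linalg.eigvalsh(np.array([[m00, m08], [m08, m88]]))
print(np.allclose(sorted([m_light, m_heavy]), evals))
```

The check prints `True`: the two rotated combinations coincide with the eigenvalues, so the mixing-angle construction and a direct diagonalization are equivalent.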
The detailed descriptions of the vacuum contributions [$(m_{a_{0}}^\mathrm{M})^2$, $(m_{\kappa}^\mathrm{M})^2$, $(m_{\pi}^{\mathrm{M}})^2$, $(m_{K}^{\mathrm{M}})^2$ and $(m_{i,00/88/08}^\mathrm{M})^2$] from the purely mesonic potential in Eqs.~(\ref{eq:mao})-(\ref{eq:meta}) can be found in Refs.~\cite{Tawfik:2014gga,Schaefer:2008hk}. \begin{figure} \includegraphics[width=0.47\textwidth]{Plot-CV-CS.pdf} \caption{\label{Fig:CV-Cs} The temperature dependences of the scaled specific heat $C_{V}/T^3$ (upper panel) and the squared speed of sound $c_{s}^2$ (lower panel) at $\mu=0$~MeV for isotropic ($\xi=0$, blue dashed lines) and anisotropic ($\xi=-0.4$, orange dotted-dashed lines; $\xi=0.2$, red solid lines; $\xi=0.4$, green wide dashed lines) quark matter in the quark-meson model.} \end{figure} \begin{figure*} \includegraphics[width=6.5in,height=7.in]{Plot-mass.pdf} \caption{\label{fig:mass} The temperature dependences of the masses of the pseudoscalar mesons $\pi$,~$K$,~$\eta'$,~$\eta$ (left panels) and scalar mesons $f_{0}$,~$\sigma$,~$a_{0}$,~$\kappa$ (right panels) at $\mu=0~\mathrm{MeV}$ for both isotropic ($\xi=0$, blue dashed lines) and anisotropic ($\xi=-0.4$, orange dotted-dashed lines; $\xi=0.2$, red solid lines; $\xi=0.4$, green wide dashed lines) quark matter in the quark-meson model.} \end{figure*} The left and right panels of Fig.~\ref{fig:mass} display the $T$-dependent masses of the pseudoscalar ($\pi$,~$K$,~$\eta'$,~$\eta$) and scalar ($f_{0}$,~$\sigma$,~$a_{0}$,~$\kappa$) mesons for both isotropic and anisotropic quark matter in the quark-meson model, respectively. We can see that for a fixed anisotropy parameter, the masses of the pseudoscalar mesons $\pi$, $K$, and $\eta$ remain constant up to near the chiral critical temperature of the non-strange condensate $T_{c}^{\chi}$, whereas the masses of $\eta'$ and of the scalar mesons $\sigma$, $a_{0}$, $\kappa$ remain constant at low temperature and then decrease before reaching $T_{c}^{\chi}$. 
For the scalar meson $f_{0}$, its mass also remains constant at low temperature but decreases before reaching the chiral critical temperature of the strange condensate $T_{s}^{\chi}$. The masses of the pseudoscalar mesons $\pi$, $K$ and $\eta$ always decrease with increasing $\xi$ at $T>140$~MeV. However, for $\eta'$ and the scalar mesons $\sigma$,~$a_{0}$,~$\kappa$, the dependence of their masses on the anisotropy parameter $\xi$ is nonmonotonic in the entire temperature domain of interest. More precisely, with the increase of $\xi$, the masses of $\eta'$,~$\sigma$,~$a_{0}$ and $\kappa$ first increase in the low temperature domain ($100~\mathrm{MeV}<T<160$~MeV) and then decrease in the higher temperature domain ($T>160$~MeV). For $f_{0}$, its mass increases with increasing $\xi$ at $T<270$~MeV (viz., $T_{s}^{\chi}(\xi=0.4)$) and decreases afterward. As a whole, near and above $T_{c}^{\chi}$ or $T_{s}^{\chi}$, the chiral partner mesons become degenerate in mass, which signals the restoration of chiral symmetry. In Fig.~\ref{fig:mass} we can also see that with the increase of $\xi$, the temperature at which the meson masses begin to degenerate is shifted to higher values. This again shows that an increase of the momentum-space anisotropy parameter can hinder the restoration of chiral symmetry. The qualitative behaviors of these meson masses with $\xi$ differ from the results obtained for the finite-size dependence of meson masses within the PNJL model~\cite{Ya-Peng:2018gkz,Bhattacharyya:2012rp}, where $K$, $\eta$, and $\eta'$ have a significant volume dependence in the lower temperature domain ($T<100~$MeV). \subsection{Transport coefficient} Studying transport properties is essential for a deep understanding of the dynamical evolution of strongly interacting matter. 
In this part, we discuss the influence of momentum-space anisotropy on transport coefficients, such as the shear viscosity $\eta$, electrical conductivity $\sigma_{el}$, and bulk viscosity $\zeta$ of quark matter. Because the effect of momentum-space anisotropy is encoded in the parton distribution functions, the general expressions of these transport coefficients, which are obtained by solving the relativistic Boltzmann equation in the relaxation time approximation, need to be modified~\cite{Rath:2019vvi,Zhang:2020efz,Thakur:2017hfc,Srivastava:2015via}. Therefore, using the results in Refs.~\cite{Rath:2019vvi,Zhang:2020efz}, the formulas of the $\xi$-dependent transport coefficients at zero quark chemical potential are given as \begin{widetext} \begin{eqnarray}\label{eq:shear} \eta_{}=\sum_{f}\frac{d_f}{15T}\int \frac{dp}{\pi^2}\frac{p^6}{E_{f}^2}\left[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)\right]-\sum_{f}\frac{\xi d_{f}}{90T^2}\int \frac{dp}{\pi^2}\frac{p^8}{E_{f}^3}[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)(1-2f_{q,f}^0+\frac{T}{E_{f}})], \end{eqnarray} \begin{eqnarray}\label{eq:electrical} \sigma_{el}=\sum_{f}\frac{d_{f}q_{f}^2}{3T}\int \frac{dp}{\pi^2}\frac{p^4}{E_{f}^2}[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)](1+\frac{\xi}{3})-\sum_{f}\frac{q_{f}^2\xi d_f}{18T^2}\int\frac{dp}{\pi^2}\frac{p^6}{E_{f}^3}[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)(1-2f_{q,f}^0+\frac{T}{E_{f}})], \end{eqnarray} \begin{eqnarray}\label{eq:bulk} \zeta_{}&=&\sum_{f}\frac{d_{f}}{T}\int \frac{dp}{\pi^2}\frac{p^2}{E_{f}^2}\left[(\frac{1}{3}-c_{s}^2)p^2-c_{s}^2m_{f}^2+c_{s}^2m_{f}T\frac{dm_{f}}{dT}\right]^2[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)] \nonumber\\ &&-\sum_{f}\frac{\xi d_{f}}{6T^2}\int \frac{dp}{\pi^2} \frac{p^4}{E_{f}^3}\left[(\frac{1}{3}-c_{s}^2)p^2-c_{s}^2m_{f}^2+c_{s}^2m_{f}T\frac{dm_{f}}{dT}\right]^2[\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0)(1-2f_{q,f}^0)]\nonumber\\ &&-\sum_{f}\frac{\xi d_{f}}{6T}\int \frac{dp}{\pi^2} \frac{p^4}{E_{f}^4}\left[\frac{1}{9}p^4 
-\bigg(c_{s}^2(m_{f}^2+p^2)-c_{s}^2m_{f}T\frac{dm_{f}}{dT}\bigg)^2\right]\tau_{q,f}f_{q,f}^0(1-f_{q,f}^0). \end{eqnarray} \end{widetext} Here, $d_{f}$ is the degeneracy factor for the $f$-flavor quark. The quark electric charges are given explicitly by $q_u=-q_{\bar{u}}=2e/3$ and $q_{d,s}=-q_{\bar{d},\bar{s}}=-e/3$. The elementary charge reads $e=(4\pi\alpha)^{1/2}$ with the electromagnetic fine-structure constant $\alpha\simeq1/137$. In contrast to the formula for the bulk viscosity in Ref.~\cite{Rath:2019vvi}, we replace the original term $[(\frac{1}{3}-c_{s}^2)p^2]^2$ in the integrand with $\left[(\frac{1}{3}-c_{s}^2)p^2-c_{s}^2m_{f}^2+c_{s}^2m_{f}T\frac{dm_{f}}{dT}\right]^2$ to incorporate the in-medium effect. For the relaxation time $\tau_{q,f}$, we roughly take a constant value $\tau_{q,f}=1\ \mathrm{fm}$ in the computation. In a weakly anisotropic system, the former terms in Eqs.~(\ref{eq:shear})-(\ref{eq:bulk}) are significantly larger in magnitude than the latter terms, due to the difference in the momentum powers of the respective integrands. Therefore, the transport coefficients are still quantitatively dominated by the first terms of the corresponding expressions. \begin{figure} \includegraphics[width=0.5\textwidth]{Plot-shear.pdf} \caption{\label{Fig:shear}The temperature dependence of the shear viscosity $\eta$ at $\mu=0$~MeV in quark matter with $\xi=-0.4$ (orange dotted-dashed line), 0.0 (blue dashed line), 0.2 (red solid line) and 0.4 (green wide dashed line).} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{Plot-electrical.pdf} \caption{\label{Fig:electrical} The temperature dependence of the electrical conductivity $\sigma_{el}$ at $\mu=0$~MeV in quark matter with $\xi=-0.4$ (orange dotted-dashed line), 0.0 (blue dashed line), 0.2 (red solid line) and 0.4 (green wide dashed line). 
} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{Plot-bulk.pdf} \caption{\label{Fig:bulk} The temperature dependence of the bulk viscosity $\zeta$ at $\mu=0$~MeV in quark matter with $\xi=-0.4$ (orange dotted-dashed line), 0.0 (blue dashed line), 0.2 (red solid line) and 0.4 (green wide dashed line).} \end{figure} The variation of the shear viscosity $\eta$ with temperature at vanishing quark chemical potential for both isotropic and anisotropic quark matter is shown in Fig.~\ref{Fig:shear}. We see that $\eta$ in (an-)isotropic quark matter rises monotonically with increasing temperature, because the $T$ dependence of $\eta$ mainly comes from the quark distribution function $f_{q,f}^0$ in the associated integrand. The qualitative behavior of $\eta$ with $\xi$ can also be understood from the associated expression. In the vicinity of the chiral critical temperature $T_{c}^{\chi}$, $\eta$ slightly decreases as $\xi$ grows, due to the decreasing behavior of the Boltzmann factor $e^{-m_{f}(T)/T}$ with $\xi$. In the higher temperature domain ($T>160$~MeV), this decrease through the Boltzmann factor is negligible because the constituent quark masses are insensitive to $\xi$ there. However, the absolute value of the second term in Eq.~(\ref{eq:shear}) increases significantly with an increase of $\xi$. As a result, $\eta$ decreases as $\xi$ grows. This is similar to the result in Ref.~\cite{Rath:2019vvi}, where $\eta$ for the QGP is calculated in a quasiparticle model. For the electrical conductivity $\sigma_{el}$, its thermal behavior is similar to that of $\eta$; the quantitative difference between $\eta$ and $\sigma_{el}$ mainly comes from the different momentum powers of the respective integrands. Similar to the shear viscosity, the $\xi$ dependence of $\sigma_{el}$ is also determined by the second term in the associated expression. 
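The isotropic first term of Eq.~(\ref{eq:shear}) can be checked against a closed form: for a single massless flavor at $\mu=0$ with constant $\tau$, it reduces to $\eta=7\pi^2 d_f\tau T^4/450$ (our own reduction, using $f^0(1-f^0)=-T\,\partial f^0/\partial p$ and $\int_0^\infty p^3 f^0\,{\rm d}p=7\pi^4T^4/120$):

```python
import numpy as np
from scipy.integrate import quad

# Massless, mu = 0, xi = 0 limit of the first term of the shear viscosity,
# for one flavor with degeneracy d and constant relaxation time tau.
T, d, tau = 200.0, 6.0, 1.0      # MeV; tau enters only as a prefactor

def integrand(p):
    f = 1.0/(np.exp(p/T) + 1.0)  # E = p for m = 0
    return p**4*f*(1.0 - f)

num, _ = quad(integrand, 0.0, 60*T)
eta_num = d*tau/(15*T*np.pi**2)*num
eta_ana = 7*np.pi**2*d*tau*T**4/450   # closed form for this limit
print(eta_num/eta_ana)                 # -> 1.0
```

Matching this limit is a useful regression test before turning on the quark masses and the $\xi$-dependent correction term.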
In Fig.~\ref{Fig:electrical}, we observe that $\sigma_{el}$ decreases as $\xi$ increases, which is also qualitatively consistent with the results for $\sigma_{el}$ of the QGP in a quasiparticle model~\cite{Thakur:2017hfc,Srivastava:2015via}. The dependences of $\eta$ and $\sigma_{el}$ on the momentum-space anisotropy are different from those on the finite system size $L$ in the framework of the (P)NJL model. In Ref.~\cite{Saha:2017xjq}, both $\eta$ and $\sigma_{el}$ first increase as $L$ decreases in the low temperature domain, whereas the size effect nearly vanishes in the high temperature domain. Furthermore, the result in Ref.~\cite{Zhao:2020xob} has indicated that both $\eta$ and $\sigma_{el}$ in the PNJL model also increase markedly as the non-extensive parameter $q$ increases at $T>150$~MeV. Next, we discuss the temperature dependence of the bulk viscosity $\zeta$ at zero quark chemical potential for both isotropic and anisotropic quark matter. As shown in Fig.~\ref{Fig:bulk}, for a fixed anisotropy parameter, $\zeta$ peaks in the vicinities of both $T^{\chi}_{c}$ and $T^{\chi}_{s}$, which is significantly different from the thermal behaviors of $\eta$ and $\sigma_{el}$. We also note that the thermal profile of $\zeta$ is similar to that of $dm_{s}/dT$ or $\chi_{s}$, which may be attributed to the fact that the qualitative behavior of $\zeta$ is mainly governed by $dm_{s}/dT$ rather than by the quark distribution function in the associated integrand of Eq.~(\ref{eq:bulk}). Due to the decrease of the peak of $dm_{s}/dT$ with increasing $\xi$, the double-peak structure of $\zeta$ is weakened as $\xi$ grows and the positions of the peaks shift to higher temperatures, as shown in Fig.~\ref{Fig:bulk}. The diluting effect of $\xi$ on the critical behavior of $\zeta$ is similar to the findings regarding the finite volume effect and the non-extensive effect. In Ref.~\cite{Saha:2017xjq}, the double-peak structure of $\zeta$ even converts into a single broadened peak when the system size is reduced to $2\ \mathrm{fm}$. 
In Ref.~\cite{Zhao:2020xob}, as the non-extensive parameter $q$ increases to 1.1, the two peaks of $\zeta$ also begin to merge into a broad one. \section{Summary and Conclusion}\label{summary} In this work, an anisotropy parameter $\xi$, which reflects the degree of momentum-space anisotropy arising from the different expansion rates of the fireball generated in HICs along the longitudinal and radial directions, is introduced for the first time in the 2+1 flavor quark-meson model by replacing the isotropic distribution function in the thermodynamic potential with an anisotropic one. The effects of $\xi$ on the chiral properties, thermodynamics, meson masses, and transport properties of quark matter are investigated. We find that the chiral phase transition of quark matter with different anisotropy parameters is always a crossover at vanishing quark chemical potential. At finite quark chemical potential, the temperature of the CEP is affected more significantly by the anisotropy parameter than its quark chemical potential, which is opposite to the finding for the finite-volume effect. We also demonstrate that at high temperature, the limiting values of various scaled thermodynamic quantities ($P/T^4$,~$s/T^3$,~$\epsilon/T^4$,~$C_{V}/T^3$) are quite sensitive to $\xi$. As $\xi$ increases, their limiting values decrease, which differs from the finite-size effect but is rather similar to the non-extensive effect. Moreover, the critical behavior of $C_{V}/T^3$ and $c_{s}^2$ is smoothed out with increasing $\xi$. For scalar and pseudoscalar mesons, the temperature at which their masses begin to degenerate increases as $\xi$ rises, which implies that an increase of $\xi$ can hinder the restoration of chiral symmetry. Finally, the transport coefficients, such as the shear viscosity~$\eta$, electrical conductivity~$\sigma_{el}$, and bulk viscosity~$\zeta$, are also calculated for both isotropic and anisotropic quark matter.
Our results show that $\eta$ and $\sigma_{el}$ rise with increasing temperature, while the thermal behavior of $\zeta$ exhibits a noticeable double-peak structure. It is seen that $\eta$ and $\sigma_{el}$ decrease monotonically as $\xi$ increases, whereas the qualitative behavior of $\zeta$ with $\xi$ is similar to that of $\chi_{s}(\xi)$. With increasing $\xi$, the double-peak structure of $\zeta$ is weakened, and the positions of the peaks shift to higher temperatures. In the present work we focus only on the chiral aspect of the QCD phase diagram; the confinement phase transition in anisotropic quark matter can also be addressed by including the Polyakov-loop potential. In the Polyakov-loop improved quark-meson model, the chiral phase transition and the location of the CEP will be affected further. For the calculation of the transport coefficients in this study, the relaxation time of the quarks is assumed to be constant. However, in a realistic interaction scenario, the relaxation time may also vary with the momentum anisotropy. These issues are our future research directions. Moreover, note that a spheroidal momentum-space anisotropy specified by one anisotropy parameter along one preferred propagation direction is considered in this work; the introduction of additional anisotropy parameters would be necessary to provide a better characterization of the QGP properties. The chiral and confinement phase transitions in quark matter with an ellipsoidal momentum anisotropy~\cite{Kasmaei:2016apv,Kasmaei:2018yrr}, characterized by two independent anisotropy parameters, can also be studied using the PNJL or PQM model. Work in these directions is in progress, and we expect to report our results soon. \acknowledgments This work is supported by the National Natural Science Foundation of China under Grant No.~11935007.
\section{Introduction} Entanglement, as characterized by the von Neumann entropy, has recently emerged as an indispensable tool to distinguish phases and phase transitions in many-body quantum systems and reveals highly non-local information \cite{HorHorHorHor2009b,Laf2016,YanChaHamMuc2015}. Many-body localization, which emerges due to ergodicity breaking, is known to exhibit a logarithmic growth of the entanglement entropy~\cite{BarPolMoo2012,SerPapAba2013,VosAlt2013,ZniProPre2008,LukRisSchTaiKauChoKheLeoGre2019,LuiLafAle2016}, whereas thermalizing systems show a saturation of the entanglement growth \cite{SinBarPol2016}. A logarithmic slowdown is also observed in a system with long-range interactions following a quench \cite{SchLanRooDal2013,LerPap2020,LerMarGamSil2019}, and likewise in many-body systems with non-ergodic dynamics arising from glassy behavior \cite{BarPolMoo2012,vanLevGar2015}. On the other hand, systems with interactions, for which a finite speed of correlation spreading is generally observed~\cite{NahRuhVijHaa2017,SchLanRooDal2013,CalCar2005,LuiLafAle2016}, show a linear growth of the entanglement entropy before saturating. Linear growth is also observed in open quantum systems \cite{BiaHacYok2018,ZurPaz1994,MilSar1999}, for instance, an inverted harmonic oscillator weakly coupled to a thermal bath \cite{ZurPaz1994}. It is further conjectured in Ref.~\cite{ZurPaz1994} that the rate of linear growth equals the sum of the positive Lyapunov exponents of the system. A similar correspondence between entropy production and the Lyapunov exponent has been shown for a kicked rotor coupled to a thermal bath \cite{MilSar1999}. However, coupled kicked tops violate this conjecture: there the rate depends on the coupling strength rather than on the Lyapunov exponents \cite{FujTanMiy2003, FujMiyTan2003}.
In addition to many-body and open quantum systems, even isolated two-body systems are capable of showing non-trivial and often unexpected dynamics. For instance, coupled kicked rotors (CKR) have been found to display localization-delocalization behavior depending on the coupling potential. For example, the CKR studied in Ref.~\cite{DorFis1988} displays Anderson-type localization. A similar result is also seen in a CKR with a different coupling potential \cite{NotIemRosFazSilRus2018}. A CKR with a point interaction exhibits dynamical localization of the center-of-mass momentum, which is destroyed for the relative momentum \cite{QinAndParFla2017}. In contrast, there are systems displaying the destruction of localization: for example, a CKR with a certain coupling potential shows a diffusive growth of the width of the evolved state \cite{AdaTodIke1988}. Similarly, for a spatially confined pair of $\delta$-kicked rotors the center-of-mass motion displays destruction of localization \cite{ParKim2003}. Moreover, for CKR either localization or diffusion is reported, depending on the strength of the coupling \cite{TolBal2009:p}. An experimentally realized CKR shows a localization-delocalization transition~\cite{GadReeKriSch2013}. Thus, the dependence of the CKR dynamics on the coupling potential is not yet fully understood, and in particular the connection to entropy production has not been elucidated. In this paper we report on a surprising phenomenon in the entanglement production of a pair of coupled kicked rotors on a cylinder, which shows two distinct regimes of entanglement growth, i.e.\ linear and logarithmic, as time progresses. We show that this is tightly connected to a localization-delocalization cross-over of the time-evolved states with an intermediate dynamical localization. We also show that the logarithmic growth of the entanglement entropy commences once the system displays normal diffusion at large times, while before that a linear growth is found.
We further show that, however weak the coupling is, the rotors will eventually display normal diffusion at large times. Analytically, we calculate the growth of the linear entropy, which shows an initial linear behavior followed by a saturation. The rate of linear growth is shown to depend quadratically on the ratio of the coupling strength to the scaled Planck's constant rather than on the Lyapunov exponent. Furthermore, we provide an analytical expression for the time beyond which the logarithmic growth of the entanglement entropy starts. A pair of coupled kicked rotors is a prototypical system for studying the dynamics of and the entanglement between two particles. Its Hamiltonian is given by \begin{equation} \begin{aligned} H& = \frac{p_1^2}{2} + \frac{p_2^2}{2} +\left[K_1~\cos(x_1) + K_2~\cos(x_2) \right.\\ &~~~ \left. + ~\xi_{12}~\cos(x_1-x_2)\right] \sum_n \delta(t-n)\\ & = H_1 + H_2 + H_{12}, \label{eq2} \end{aligned} \end{equation} where $H_j =\frac{p_j^2}{2}+K_j~\cos(x_j) \sum_n \delta(t-n)$ represents the Hamiltonian of each kicked rotor and $H_{12} = \xi_{12}~\cos(x_1-x_2) \sum_n \delta(t-n)$ describes the coupling between the two rotors. Here $p_j$ is the momentum and $x_j$ the position of the $j$-th rotor. The strength of the kick received by the $j$-th rotor is $K_j$, and $\xi_{12}$ is the coupling strength. Considering the dynamics stroboscopically, i.e.\ at integer times, one obtains a four-dimensional symplectic map on a cylinder with periodic boundary conditions in the position coordinates. Note that assuming in addition periodic boundary conditions in the momentum coordinates one obtains the four-dimensional coupled standard map~\cite{Fro1971,Fro1972,RicLanBaeKet2014}. For $\xi_{12}=0$, the system represents two uncoupled kicked rotors. If the kicking strengths $K_j$ of the individual rotors are sufficiently large, their dynamics is chaotic with a Lyapunov exponent of approximately $\ln (K_j/2)$~\cite{Chi1979}.
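The stroboscopic map generated by this Hamiltonian can be iterated directly. As an illustration (not code from the paper), the following sketch evolves an ensemble of classical trajectories: the momenta are updated by the kick forces $-\partial V/\partial x_j$ with $V = K_1\cos x_1 + K_2\cos x_2 + \xi_{12}\cos(x_1-x_2)$, and the positions advance freely over one kicking period. The ensemble size and number of steps are illustrative choices.

```python
import numpy as np

K1, K2, xi = 9.0, 10.0, 0.05   # kicking and coupling strengths from the text
rng = np.random.default_rng(0)

# ensemble of initial conditions: random positions on the circle, zero momentum
M = 2000
x1, x2 = rng.uniform(0.0, 2*np.pi, (2, M))
p1 = np.zeros(M)
p2 = np.zeros(M)

T = 100
for _ in range(T):
    # kick: p_j -> p_j - dV/dx_j applied at integer times
    p1 = p1 + K1*np.sin(x1) + xi*np.sin(x1 - x2)
    p2 = p2 + K2*np.sin(x2) - xi*np.sin(x1 - x2)
    # free rotation over one kicking period; positions live on a circle,
    # momenta are unbounded (cylinder geometry)
    x1 = (x1 + p1) % (2*np.pi)
    x2 = (x2 + p2) % (2*np.pi)

E1 = 0.5*np.mean(p1**2)   # mean energy of the first rotor
```

In the chaotic regime this reproduces normal diffusion, $\langle E_1\rangle \approx D_{\text{cl}}\,t$ with $D_{\text{cl}}$ of order $K_1^2/4$ up to order-unity correlation corrections.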
In the following numerical investigations we use $K_1=9.0$ and $K_2=10.0$ so that the classical dynamics is chaotic. In this chaotic case, the single or uncoupled kicked rotors display normal diffusion, i.e.\ a linear growth of the mean energy for an ensemble of initial conditions, $\langle E \rangle = D_{\text{cl}}t$ \cite{CasChiIzrFor1979,Izr1990}, where $D_{\text{cl}}$ is the classical diffusion coefficient. \begin{figure}[!h] \includegraphics[width=0.47\textwidth]{Fig_1a.pdf}\vspace{0.3cm} \includegraphics[width=0.47\textwidth]{Fig_1b.pdf} \caption{ (a) Entanglement entropy $S_{\text{vN}}$ as a function of time. The blue vertical dashed line represents the crossover time $t^*$ at which the transition from linear to logarithmic growth occurs. The inset captures the linear regime. The orange dash-dotted lines are linear fits to $S_{\text{vN}}$. (b) Mean energy growth $\langle E_1 \rangle$ of the first kicked rotor as a function of time. The orange dash-dotted line indicates the break-time $t_{\text{b}}$. The inset shows $\langle E_1 \rangle$ for different coupling strengths, $\xi_{12}=0.0, 0.01, 0.03, 0.07, 0.1$ from bottom to top. (c) and (d) represent momentum and position distributions at two different times, $t=150$ (blue symbols) and $t=10000$ (orange symbols). The solid lines in (c) are Gaussian (green) and exponential (black) fits. All plots are for $K_1=9.0$, $K_2=10.0$, $\xi_{12}=0.05$ (for green solid line), and $\hbar_{\text{s}}=1.0$. } \label{fig1} \end{figure} For the quantum dynamics of the CKR, the time evolution is given by the unitary operator $U=(U_1\otimes U_2)U_{12}$, where $U_j = \text{e}^{-\text{i} \frac{p_j^2}{2 \hbar_{\text{s}}}} \text{e}^{-\text{i} \frac{K_j}{\hbar_{\text{s}}}\cos(x_j)}$ and $U_{12} = \text{e}^{-\text{i} \frac{\xi_{12}}{\hbar_{\text{s}}}\cos(x_1-x_2)}$, such that $|\Psi(t)\rangle = U^t |\Psi(0)\rangle$ is the time-evolved state at discrete time $t$ of the initial state $|\Psi(0)\rangle$.
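The action of $U$ can be sketched with a split-operator scheme: all kick factors (including the coupling) are diagonal in position, the free factor is diagonal in momentum, and an FFT switches between the two representations. The grid size, initial wave packet, and number of steps below are illustrative assumptions, much smaller than in the paper's computations.

```python
import numpy as np

hbar_s = 1.0
K1, K2, xi = 9.0, 10.0, 0.05
N = 64                                        # small illustrative grid

x = 2*np.pi*np.arange(N)/N                    # position grid on [0, 2*pi)
p = hbar_s*np.fft.fftfreq(N, d=1.0/N)         # momentum grid, p = hbar_s * n
X1, X2 = np.meshgrid(x, x, indexing="ij")
P1, P2 = np.meshgrid(p, p, indexing="ij")

# kick factors (diagonal in position) and free factor (diagonal in momentum)
U_kick = np.exp(-1j*(K1*np.cos(X1) + K2*np.cos(X2) + xi*np.cos(X1 - X2))/hbar_s)
U_free = np.exp(-1j*(P1**2 + P2**2)/(2*hbar_s))

def step(psi):
    """One application of U = (U1 x U2) U12 via the split-operator scheme."""
    psi = U_kick*psi              # apply all position-diagonal factors
    phi = np.fft.fft2(psi)        # switch to the momentum representation
    phi = U_free*phi              # free rotation between kicks
    return np.fft.ifft2(phi)      # back to the position representation

# product initial state: a Gaussian wave packet for each rotor
g = np.exp(-(x - np.pi)**2/(2*hbar_s))
g /= np.linalg.norm(g)
psi = np.outer(g, g).astype(complex)
for _ in range(50):
    psi = step(psi)
```

Since every factor has unit modulus, the norm of the state is preserved to machine precision, which is a convenient correctness check for the propagation.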
In the following, we consider as initial states product states of the form $|\Psi(0)\rangle = |\psi_1(0)\rangle \otimes |\psi_2(0)\rangle$, where $|\psi_j(0)\rangle$ is a coherent state of the $j$-th rotor. For the numerical calculations, the fast Fourier transform is employed, with the momentum and position values of each rotor evaluated on discrete grids of the same size $N=2^l$ with $l=11$. Note that, in contrast to the normal diffusion displayed by a classical single kicked rotor, its quantum dynamics shows a suppression of the diffusion. This phenomenon is called dynamical localization, a phase-coherent effect analogous to Anderson localization observed in disordered lattices \cite{GrePraFis1984}. Also note that, similar to the classical case, the pair of quantum kicked rotors can also be considered with periodic boundary conditions in momentum~\cite{ChaShi1986,Lak2001,RicLanBaeKet2014}; the spectral properties and entanglement generation in this case have been investigated in detail in Refs.~\cite{SriTomLakKetBae2016,LakSriKetBaeTom2016,TomLakSriBae2018, PulLakSriBaeTom2020}. The entanglement between the sub-systems given by the two kicked rotors can be characterized by the von Neumann entropy \begin{equation} \begin{split} S_{\text{vN}}(t)&=-\text{Tr}_1\left(\rho_1(t) \log \rho_1(t)\right), \end{split} \end{equation} where $\rho_1(t) = \text{Tr}_2 (\rho(t))$ is the reduced density matrix obtained by tracing out the second subsystem and $\rho(t) = |\Psi(t)\rangle \langle \Psi(t)|$ is the total density matrix of the time-evolved state $|\Psi(t)\rangle$. Figure \ref{fig1}(a) shows that the growth of $S_\text{vN}$ has two distinct regimes, linear and logarithmic. Initially, $S_\text{vN}$ grows linearly up to a cross-over time $t^*$, represented by a blue vertical dashed line in the inset of Fig.~\ref{fig1}(a).
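For a pure bipartite state stored as a coefficient matrix $\Psi_{mn}$, the eigenvalues of $\rho_1$ are the squared singular values of that matrix, so $S_{\text{vN}}$ can be computed via the Schmidt decomposition without forming $\rho_1$ explicitly. A minimal sketch (the two test states are illustrative, not from the paper):

```python
import numpy as np

def entanglement_entropy(psi):
    """S_vN of rotor 1 for a pure bipartite state given as the coefficient
    matrix psi[m, n] = <m, n|Psi>; the eigenvalues of the reduced density
    matrix rho_1 are the squared singular values of psi."""
    s = np.linalg.svd(psi, compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-15]                # discard numerical zeros
    return float(-np.sum(lam*np.log(lam)))

# sanity checks: a product state is unentangled, while a maximally
# entangled state of two N-level systems has S_vN = ln N
N = 8
a = np.ones(N)/np.sqrt(N)
product_state = np.outer(a, a)
max_entangled = np.eye(N)/np.sqrt(N)
```

The SVD route is numerically preferable to diagonalizing $\rho_1$, since it avoids squaring the small Schmidt coefficients before taking logarithms.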
We find that the rate of this linear growth depends on the coupling strength following the relation $S_\text{vN} \sim \xi_{12}^{\beta}$, with the exponent $\beta$ numerically found to be approximately $1.85$. In this regime of linear growth, the rate turns out to be independent of the kicking strengths when both rotors display classically chaotic dynamics. After the cross-over time $t^*$, the growth of $S_\text{vN}$ slows down and shows a logarithmic dependence on time, i.e.\ $S_\text{vN} = \tfrac{1}{2} \ln t + $ const. The value of $S_\text{vN}$ at the onset of logarithmic growth, i.e.\ $S_\text{vN}(t^*)$, depends on the quantum diffusion coefficient, which in turn depends on the kicking strength. Additionally, it is found that $S_\text{vN}(t^*)$ barely depends on the coupling strength (not shown). Thus, the key finding is that the production of entanglement between the rotors does not follow a single functional form and, remarkably, shows a different dependence on the system parameters, i.e.\ $\xi_{12}$ and $K_j$, in the two regimes. To obtain a qualitative understanding of the observed entanglement growth in terms of the underlying quantum dynamics, let us consider the behavior of the mean energy $\langle E_1\rangle = \langle \Psi(t)| \frac{p_1^2}{2} |\Psi(t)\rangle$ and the distributions $g(x_1) = |\langle \Psi|x_1\rangle|^2$ in position space and $f(p_1) = |\langle \Psi|p_1\rangle|^2$ in momentum space of the first rotor. Figure~\ref{fig1}(b) shows that there are three different regimes: initially, $\langle E_1\rangle$ grows linearly until the break time $t_{\text{b}}$, which is indicated by an orange vertical dash-dotted line in Fig.~\ref{fig1}(b). The break-time is the time until which the quantum energy of a single kicked rotor follows the classical energy growth \cite{Izr1990,MooRobBhaSunRai1995}.
Until this time the system builds up its quantum correlations, which after $t_{\text{b}}$ leads to the emergence of an intermediate dynamical localization (IDL) for which $\langle E_1\rangle$ is essentially constant. This IDL extends up to the cross-over time $t^*$, indicated by a blue vertical dashed line in Fig.~\ref{fig1}(b). Beyond $t^*$, the system displays normal diffusion, $\langle E_1\rangle \sim t$. Moreover, it can be seen from the inset of Fig.~\ref{fig1}(b) that the temporal extent of the IDL increases with decreasing coupling $\xi_{12}$. An important consequence of this observation is that the system will always show normal diffusion at large times for any non-vanishing coupling $\xi_{12}>0$. Another significant observation is that the normal diffusion seen in Fig.~\ref{fig1}(b) is similar to classical diffusion. This can be seen from Fig.~\ref{fig1}(c), which shows that the momentum distribution at time $t=10000$ is well described by a Gaussian. Also, the position distribution becomes very uniform, with only small quantum fluctuations, see Fig.~\ref{fig1}(d). In contrast, the momentum distribution is exponentially localized in the IDL, as illustrated at $t=150$ in Fig.~\ref{fig1}(c), and the corresponding position distribution shows much larger fluctuations, as seen in Fig.~\ref{fig1}(d). The Gaussian momentum and uniform position distributions are typical features of a corresponding classical diffusive regime \cite{Izr1990}. Thus, the appearance of normal diffusion suggests that the rotors provide noise to each other, destroying the quantum coherence that is the origin of dynamical localization for a single kicked rotor. As a result, classical-like behavior emerges, which in turn gives rise to a slow growth of $S_\text{vN}$. Thus, we can conclude that the linear regime of $S_\text{vN}$ appears while the system retains quantum correlations.
On the other hand, for $t>t^*$, where normal diffusion dominates, the complete loss of correlations gives rise to the logarithmic growth. We now provide a theoretical explanation of the emergence of the two regimes of $S_\text{vN}$ growth in the case of weak coupling, $\xi_{12}\ll 1$. For this we consider the linear entropy $S_{\text{lin}}(t) = 1 - \text{Tr}_1\left(\rho_1(t)^2\right)$, which is more easily tractable analytically than the von Neumann entropy $S_\text{vN}$ but shows the same characteristics. To treat the initial time-dependence, the key point is to consider one rotor as acting as an environment for the other, so that we can rewrite the Hamiltonian in Eq.~\eqref{eq2} as $H = H_{\text{S}} +H_{\text{E}} + V(t)$, where $H_{\text{S}}$ ($H_{\text{E}}$) represents the system (environment) Hamiltonian and $V(t)$ is the interaction. The evolution of the total density matrix $\rho(t)$ in the interaction picture is \begin{equation} \frac{\text{d}\rho(t)}{\text{d} t}=-\frac{\text{i}}{\hbar_{\text{s}}}\left[V(t),\rho(t)\right]. \end{equation} As the initial state is a product state, we have $\rho(0)=\rho_{\text{S}}(0)\otimes \rho_{\text{E}}(0)$. Performing a formal integration and iteration and considering $\xi_{12} \ll 1$, we arrive at \begin{equation} \begin{aligned} \rho(t) =& \rho(0) - \frac{\text{i}\xi_{12}}{\hbar_{\text{s}}}\sum_{r=1}^t \left[\mathcal{F}(r),\rho(0)\right]\\ &+\left(\frac{\text{i}\xi_{12}}{\hbar_{\text{s}}}\right)^2\sum_{r=1}^t \sum_{s=1}^{r-1} \left[\mathcal{F}(s),\left[\mathcal{F}(r),\rho(0)\right]\right], \end{aligned} \label{rhosum} \end{equation} where $\mathcal{F}(r)=\cos(x_1(r)-x_2(r))$. The summation instead of an integration in Eq.~\eqref{rhosum} is due to the fact that the coupling acts only at integer times, i.e.\ $V(t) = \xi_{12}~\cos(x_1-x_2) \sum_n \delta(t-n)$. The calculation of $\rho(t)$ is most conveniently done in the position basis, as the interaction is diagonal in position space.
With $\rho_{\text{S}}(t)=\text{Tr}_E\rho(t)$, the computation of $S_{\text{lin}}=1-\text{Tr}_{\text{S}}(\rho_{\text{S}}(t)^2)$ leads to \begin{equation} S_{\text{lin}}(t)=\left(\frac{\xi_{12}}{\hbar_{\text{s}}}\right)^2 C(t), \label{slin_qn} \end{equation} where $C(t)=\sum_{r,s=1}^t C(r,s)$ and $C(r,s)$ is the correlation function at two different time steps. If both rotors display classically chaotic dynamics, then $C(t)$ is independent of the system parameters. Furthermore, it is numerically found that for small coupling, $C(t)$ depends linearly on time. Thus Eq.~(\ref{slin_qn}) reveals that the rate $\Gamma=\frac{\text{d} S_{\text{lin}}}{\text{d} t}$ depends only on the ratio $\frac{\xi_{12}}{\hbar_{\text{s}}}$ rather than on the kicking strengths $K_j$. This implies that for weak coupling the initial temporal growth of $S_{\text{lin}}$ does not depend on the strength of chaos of the individual rotors. To determine the behavior of the linear entropy at large times, i.e.\ for $t> t^*$, we use the fact that the time-evolved initial state becomes on average Gaussian in momentum and uniform in position space, see Figs.~\ref{fig1}(c) and (d). Using the Husimi function $\mathcal{H}(x_1,p_1)$, one can express the linear entropy as \cite{NagLahGho2001} \begin{equation} S_{\text{lin}}=1-\int \mathcal{H}(x_1,p_1)^2 \frac{dp_1 dx_1}{2\pi \hbar_{\text{s}}}. \label{slin} \end{equation} We approximate the Husimi distribution of the time-evolved state by $\mathcal{H}(x_1,p_1)=\frac{\hbar_{\text{s}}}{\sqrt{2\pi D_{\text{q}} t}} \exp\left(-\frac{p_1^2}{2 D_{\text{q}} t}\right)$, where $D_{\text{q}}$ is the quantum diffusion coefficient. Inserting $\mathcal{H}(x_1,p_1)$ in Eq.~\eqref{slin} gives \begin{equation} S_{\text{lin}}=1-\frac{\hbar_{\text{s}}}{\sqrt{4\pi D_{\text{q}}}}t^{-1/2}.
\label{slin_cl} \end{equation} Equation~\eqref{slin_cl} reveals that $S_{\text{lin}}$ in the regime of logarithmic growth of $S_\text{vN}$ depends on $D_{\text{q}}$, and shows that $S_{\text{lin}}$ saturates at large times. Equating the two expressions for $S_{\text{lin}}$ obtained in Eq.~\eqref{slin_qn} and Eq.~\eqref{slin_cl} at the cross-over time $t=t^*$ and solving for $t^*$, we obtain \begin{equation} t^* = \frac{\hbar_{\text{s}}^2}{3 \xi_{12}^2} \left[2+\frac{1}{\mathcal{G}(\xi_{12},D_{\text{q}})} +\mathcal{G}(\xi_{12},D_{\text{q}})\right], \label{crosst} \end{equation} where \begin{equation*} \mathcal{G}(\xi_{12},D_{\text{q}}) = \left[-1+\frac{27}{8\pi}\frac{\xi_{12}^2}{D_{\text{q}}} +\frac{3^{\frac{3}{2}}\xi_{12}}{2}\sqrt{\frac{-1}{\pi D_{\text{q}}} +\frac{27}{16 \pi^2}\frac{\xi_{12}^2}{D_{\text{q}}^2}}\right]^{\frac{1}{3}}. \end{equation*} Hence, $t^*$ depends on the coupling strength $\xi_{12}$, the scaled Planck's constant $\hbar_{\text{s}}$, and the quantum diffusion coefficient $D_{\text{q}}$. Equation~\eqref{crosst} shows that the value of $t^*$ decreases with decreasing $\hbar_{\text{s}}$, because quantum correlations vanish in the semi-classical limit. Likewise, with increasing $\xi_{12}$ the cross-over time $t^*$ decreases, reflecting the fact that the coupling between the subsystems destroys the coherence in the system. Figure~\ref{fig3} compares the analytical result~\eqref{crosst} with the numerical result, and very good agreement is found. This shows that, however small the coupling is, the system will eventually show normal diffusion. It is also interesting to note that for a noisy kicked rotor with noise strength $\epsilon$, diffusion commences on a time-scale $\frac{\hbar_{\text{s}}^2}{\epsilon^2}$~\cite{OttAntHan1984}.
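The cross-over can also be located numerically by equating the early-time expression $(\xi_{12}/\hbar_{\text{s}})^2\,t$ (taking $C(t)\approx t$) with the late-time branch of Eq.~\eqref{slin_cl}. A stdlib-only sketch; the value $D_{\text{q}}=40$ used below is an illustrative assumption, not a number from the paper.

```python
import math

def t_star(xi, hbar_s, Dq, lo=1.0, hi=1.0e6, tol=1e-10):
    """Cross-over time where the early-time linear entropy
    (xi/hbar_s)**2 * t meets the late-time branch
    1 - hbar_s/sqrt(4*pi*Dq*t); found by bisection, assuming a
    single crossing inside [lo, hi]."""
    def f(t):
        return (xi/hbar_s)**2*t - 1.0 + hbar_s/math.sqrt(4.0*math.pi*Dq*t)
    assert f(lo) < 0.0 < f(hi), "bracket does not contain the crossing"
    while hi - lo > tol*lo:
        mid = 0.5*(lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For $\xi_{12}=0.05$, $\hbar_{\text{s}}=1$, and the assumed $D_{\text{q}}=40$ this gives $t^*$ of order $\hbar_{\text{s}}^2/\xi_{12}^2$, and $t^*$ decreases as the coupling grows, consistent with the trend of Eq.~\eqref{crosst}.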
\begin{figure}[!h] \includegraphics[width=0.47\textwidth]{Fig_2.pdf} \caption{Dependence of the cross-over time $t^*$ on the ratio of the coupling strength $\xi_{12}$ to the scaled Planck's constant $\hbar_{\text{s}}$. The analytical result (blue dashed line with crosses) is compared to the numerical results (orange circles). The inset sketches the procedure to calculate $t^*$ numerically. Parameters are $K_1=9.0$, $K_2=10.0$, and $\hbar_{\text{s}}=1.0$.} \label{fig3} \end{figure} \begin{figure}[!h] \includegraphics[width=0.47\textwidth]{Fig_3.pdf} \caption{Decay of coherence of the first kicked rotor for $\xi_{12}=0.05$. The blue vertical dashed line represents $t^*$. The red, yellow, and green curves in the inset show the decoherence for $\xi_{12}=0.0,~0.01,~0.05$, respectively. The orange vertical dash-dotted line indicates the break time $t_{\text{b}}$. Parameters are $K_1=9.0$, $K_2=10.0$, and $\hbar_{\text{s}}=1.0$.} \label{fig4} \end{figure} As discussed, coherence plays a central role in the emergence of the different regimes of the entanglement growth as characterized by $S_\text{vN}$ or $S_{\text{lin}}$. To investigate the decay of coherence and examine the nature of the noise provided by one rotor to the other, we study the decay of the off-diagonal elements of $\rho_1$. The off-diagonal elements represent the interference between the system and the environment, and their decay indicates the loss of coherence \cite{Zur2003}. We quantify this decoherence by calculating $\mathcal{D}(t)= \sum_{i\neq j} \rho_1^{ij}(t)$, which is shown in Fig.~\ref{fig4}. Initially $\mathcal{D}(t)$ is close to one and follows an exponential decay for $t<t^*$. An exponential decay of coherence is also observed in a kicked rotor system with random noise \cite{WhiRudHoo2014,PauSarVisManSanRap2019}. This suggests that one rotor provides random noise to the other and thus effectively acts as an environment.
Around $t=t^*$ in Fig.~\ref{fig4}, one observes an extended transition and finally a power-law decay, with an exponent numerically found to be approximately $0.5$. This slow decay of coherence implies that the quantum system enters the classical-like regime, as illustrated in Fig.~\ref{fig1}(b), and would require an arbitrarily large time to actually behave like a classical system. A closer look at the initial time-dependence of the decoherence, shown in the inset of Fig.~\ref{fig4}, reveals an initial production of coherence for uncoupled rotors (red curve) until the break-time $t_{\text{b}}$, followed by saturation to a constant value. This initial increase can also be observed in the weak-coupling situation (yellow curve), for which $t_{\text{b}}\ll t^*$; because of this, a signature of IDL is observed, as in Fig.~\ref{fig1}(b) for $t>t_{\text{b}}$. Even for the coupling $\xi_{12}=0.05$, an initial increase in coherence over a small time interval $t<t_{\text{b}}$ can be observed (green curve in the inset of Fig.~\ref{fig4}), which corresponds to the appearance of a short IDL in Fig.~\ref{fig1}(b). For strong coupling, however, the cross-over time $t^*$ becomes so small that an initial production of coherence is not possible, and the coherence decays from the very beginning. To summarize, for a pair of coupled kicked rotors we have demonstrated that the entanglement entropy shows two distinct regimes: an initially linear growth followed by a logarithmic increase. The logarithmic regime sets in when the time-evolved state shows a Gaussian profile in momentum space and is uniform on average in position space. This regime can be considered a kind of classical behavior, caused by one rotor acting as a noisy environment for the other. This leads to an exponential decoherence, which is clearly confirmed by the numerical results. The cross-over time $t^*$ between the linear and logarithmic behavior of the von Neumann entropy is computed using the linear entropy.
Explicit expressions for $S_{\text{lin}}$ in both regimes are obtained, and excellent agreement of the prediction for $t^*$ with the numerics is found. Thus, we show that the entanglement entropy allows one to distinguish two completely different dynamics, quantum and classical-like. It would be very interesting to investigate this CKR experimentally, for example using ultra-cold atoms. \acknowledgments We thank Roland Ketzmerick, Arul Lakshminarayan, and David Luitz for useful discussions.
\section{Introduction} \label{intro} Molecular line emission is one of our primary tools for characterizing the dense interstellar medium. Line observations are uniquely rich in that they carry information not just on the location of gas, but on its physical properties and kinematics. In particular, the velocity information provided by lines allows one to compute the mean velocity, the velocity dispersion, and a variety of higher-order statistics along each line of sight. The brightest molecular lines in the Milky Way and nearby galaxies are the first few rotational lines of CO, and it has long been known that the dispersion of the CO line is much larger than would be expected due to thermal broadening alone, indicating the presence of supersonic motions \citep[e.g.,][]{Liszt74a, Goldreich74a}. Subsequent exploration showed that the linewidth increases systematically with the size of the region probed \citep[e.g.,][]{Larson81a, Solomon87a, Goodman98b, Bolatto08a}, and that the difference in velocity (measured either as the difference in first velocity moments, or via the $L_2$ norm or a similar norm for the difference in the full spectra) between lines of sight increases systematically with the separation of the sightlines on the plane of the sky \citep[e.g.,][]{Issa90a, Ossenkopf02a, Burkhart09a}. Collectively these correlations are known as the linewidth-size relation. While the statistics of the CO line have been explored most extensively, similar large velocity spreads are also observed in many other molecular lines, including isotopologues of CO, and a variety of tracers that, for reasons of either chemistry or excitation, are more sensitive to gas denser than that traced by CO. Examples of the latter include the rotational lines of molecules such as HCN, CS, and N$_2$H$^+$, and inversion transitions of molecules such as NH$_3$. 
These molecules often show different linewidths, and different linewidth-size relations, from CO, even when both are observed along the same line of sight \citep[e.g.,][]{Onishi96a, Goodman98b, Walsh04a, Andre07a, Kirk07a, Muench07a, Rosolowsky08a}. There have been only limited theoretical attempts to understand the relationships between the kinematics revealed by different tracers. In some cases authors have modelled the kinematics of particular systems observed in multiple tracers \citep[e.g.,][]{Walker-Smith13a, Maureira17a}, but there have been few more general explorations. Consequently, it is not entirely certain what drives the differences between tracers. For example, \citet{Hacar16a} argue that CO linewidths are larger than those seen in rarer isotopologues because opacity broadening artificially inflates the linewidth, causing flows that are actually transonic to appear supersonic in the CO lines. However, earlier studies showed that opacity broadening of CO is not a major correction factor for measurements of the sonic Mach number from linewidths \citep{Correia14a}, but can be very important for measurements of the Mach number from the density spatial power spectrum \citep{Burkhart13b}. \citet{Offner08a} argue that density-dependent excitation effects explain the differences in kinematics measured with mostly optically thin tracers such as NH$_3$, N$_2$H$^+$, and C$^{18}$O. The problem is fundamentally difficult because the observed line emission is a complex product of many factors, including the underlying gas distribution and kinematics, subtle excitation and radiative transfer effects, and finite resolution, sensitivity, and beam-smearing from the telescopes. All of these effects are difficult to study because they are entangled. Our goal in this paper is to untangle the factors that drive differences in the kinematics as measured with a range of tracers. Our approach is to rely on simulations and simulated line emission.
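The kinematic statistics discussed above (mean velocity, velocity dispersion) are extracted from spectra as intensity-weighted velocity moments. A minimal sketch of the moment maps of a position-position-velocity cube (array shapes and the synthetic test line are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def moment_maps(cube, vchan):
    """Moment maps of a PPV cube with shape (nv, ny, nx).

    Returns integrated intensity (moment 0), intensity-weighted mean
    velocity (moment 1), and velocity dispersion (moment 2) per sightline.
    Assumes every sightline has non-zero total intensity."""
    v = vchan[:, None, None]
    m0 = cube.sum(axis=0)
    m1 = (cube*v).sum(axis=0)/m0
    m2 = np.sqrt((cube*(v - m1)**2).sum(axis=0)/m0)
    return m0, m1, m2

# synthetic single-sightline cube: a Gaussian line centred at 2 km/s
# with dispersion 1.5 km/s (illustrative numbers)
vchan = np.linspace(-10.0, 10.0, 201)
cube = np.exp(-(vchan - 2.0)**2/(2*1.5**2))[:, None, None]
m0, m1, m2 = moment_maps(cube, vchan)
```

Applied to a well-resolved Gaussian line, the first and second moments recover the input centroid and dispersion to within the channel width.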
The great advantage of using simulations is that we precisely know the true underlying kinematics, and we can conduct numerical experiments that would not be possible in reality, for example separating the effects of excitation and opacity by independently turning them on and off. To this end, in this paper we use a series of simulations of star formation in a self-gravitating, magnetised, turbulent medium to model line observations for five tracers: CO, C$^{18}$O, HCN, NH$_3$, and N$_2$H$^+$. We create synthetic position-position-velocity (PPV) cubes for each, and then analyse the statistical properties of the resulting data. We use these synthetic data to untangle what drives tracer-dependent kinematics. The structure of this paper is as follows. In \autoref{sec: methods} we describe the numerical simulations and methods we use. We present our results in \autoref{sec: results}, where we find that higher-density tracers trace smaller regions and lower linewidths due to the linewidth-size relation. We discuss general findings on which tracers perform best in \autoref{sec: disc}, and give our conclusions in \autoref{con}. \section{Methods} \label{sec: methods} We perform our analysis on a suite of Enzo simulations that we describe in \autoref{ssec:simulations} \citep{Collins12a}. These simulations are part of the Catalog for Astrophysical Turbulence Simulations (CATS) and are publicly available (Burkhart et al. 2020, in prep). In order to produce synthetic PPV cubes from these simulations, we generate a table of large velocity gradient (LVG) models with the code Derive the Energetics and Spectra of Optically Thick Interstellar Clouds (\textsc{despotic}; \citealt{Krumholz14b}). We describe our method for producing these tables, and for using them to generate PPV cubes, in \autoref{ssec:despotic}. We then describe how we model the effects of finite telescope resolution and signal to noise ratio on these PPV cubes in \autoref{ssec: model_telescope}.
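To fix ideas, a PPV cube can be assembled in the simplest, optically thin limit by depositing each cell's emissivity into the velocity channel matching its line-of-sight velocity. The sketch below is only an illustration of that binning step; the actual pipeline uses the \textsc{despotic} LVG tables and includes opacity and line broadening, both of which are omitted here.

```python
import numpy as np

def ppv_cube(emis, vlos, vedges):
    """Bin cell emissivities into velocity channels along the z axis.

    emis, vlos : arrays of shape (nz, ny, nx) giving per-cell emissivity
                 and line-of-sight velocity; vedges : channel edges.
    Optically thin limit, no thermal or sub-grid turbulent broadening."""
    nv = len(vedges) - 1
    cube = np.zeros((nv,) + emis.shape[1:])
    idx = np.digitize(vlos, vedges) - 1          # channel index of each cell
    ok = (idx >= 0) & (idx < nv)                 # drop out-of-range velocities
    K, J, I = np.indices(emis.shape)
    # unbuffered accumulation so repeated (channel, y, x) targets add up
    np.add.at(cube, (idx[ok], J[ok], I[ok]), emis[ok])
    return cube
```

By construction the total flux of all in-range cells is conserved, which makes a convenient sanity check on any binning of this kind.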
The source code and data used in this paper are available from \url{https://github.com/yyx319/Biases-in-measurements-of-cloud-kinematics} \subsection{Simulations} \label{ssec:simulations} We use a suite of three simulations of self-gravitating, isothermal, magnetised gas in a periodic domain performed with the adaptive mesh refinement (AMR) code \textsc{enzo} (see \citealt{Bryan14a} for a general description of the code, and \citealt{Collins12a} for a description of the MHD method). The initial conditions for all three were generated by a suite of unigrid simulations using the \textsc{ppml} code \citep{Ustyugov09} without self-gravity. These simulations are described in detail in \citet{Collins12a}. The initial conditions include a uniform density field and a magnetic field initialized along a preferred direction. The box is driven with a pure solenoidal pattern until a steady turbulent state is reached. At the end of the stirring phase, all three simulations have fully-developed turbulence with virial parameter \begin{equation} \alpha_{\rm vir} = \frac{5 v_{\rm rms}^2}{3 G \rho_0 L_0^2} = 1 \end{equation} and sonic Mach number $\mathcal{M}_s=v_{\rm rms}/c_s = 9$, where $\rho_0$ is the mean density in the simulation box, $L_0$ is the size of the box, $c_s$ is the isothermal sound speed, and $v_{\rm rms}$ is the root mean square velocity. The three simulations have plasma $\beta$ values $\beta_0 = 0.2$, 2.0, and 20.0, respectively. Once a statistical steady state is reached, gravity is turned on and the simulations are allowed to evolve with no further driving. We study snapshots from $t=0$ to $t=0.6t_{\rm ff}$ after gravity is turned on. During the self-gravitating phase, the root grid resolution is $512^3$, and we add on top of this four levels of refinement by a factor of two. The refinement condition is such that the local Jeans length $L_{\rm J} =\sqrt{c_s^2\pi/G\rho}$ is always resolved by at least 16 zones. This gives an effective linear resolution of 8192. 
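As an illustration, the virial parameter and sonic Mach number defined above can be evaluated directly from a snapshot's density and velocity fields. The following is a minimal NumPy sketch under our own conventions (a periodic cubic box, CGS units, mass-weighted statistics); the function name and array layout are ours, not part of \textsc{enzo} or the analysis pipeline.

```python
import numpy as np

G = 6.674e-8  # Newton's constant [CGS]

def box_statistics(rho, vx, vy, vz, L0, c_s):
    """Mass-weighted rms velocity, sonic Mach number, and virial parameter
    alpha_vir = 5 v_rms^2 / (3 G rho_0 L_0^2) for a periodic box."""
    rho0 = rho.mean()                     # mean density in the box
    w = rho / rho.sum()                   # mass weight of each cell
    # rms velocity about the mass-weighted mean motion
    vsq = sum((v - (w * v).sum())**2 for v in (vx, vy, vz))
    v_rms = np.sqrt((w * vsq).sum())
    mach = v_rms / c_s
    alpha_vir = 5 * v_rms**2 / (3 * G * rho0 * L0**2)
    return v_rms, mach, alpha_vir
```

For the fiducial scalings quoted below ($c_s = 0.2$ km s$^{-1}$, $v_{\rm rms} = 1.8$ km s$^{-1}$, $L_0 = 4.6$ pc), this returns $\mathcal{M}_s = 9$ by construction.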
Isothermal self-gravitating flows of the type used in our simulation suite can be re-scaled to vary the gas density, length, and other parameters (see \autoref{ssec:density_dependence} for further discussion), but in order to calculate the observable emission we need to choose a particular set of physical values of the various simulation parameters. We therefore adopt the following fiducial scalings, which are typical of observed molecular clouds in the Milky Way: \begin{eqnarray} t_{\rm ff}&=&1.1\;\mbox{Myr}\\ L_0&=&4.6\;\mbox{pc} \\ v_{\rm rms}&=&1.8\;\mbox{km s}^{-1}\\ M&=&5900 \;M_\odot\\ B_0&=&(13, 4.4, 1.3)\;\mu\mbox{G}. \end{eqnarray} These choices correspond to adopting $c_s = 0.2$ km s$^{-1}$ and a hydrogen number density $n_{\rm H} = 1000$ cm$^{-3}$. We return to the issue of scaling in \autoref{ssec:density_dependence}. \subsection{Line emission calculation} \label{ssec:despotic} We calculate the observable molecular line luminosity from the simulations using the code \textsc{despotic} \citep{Krumholz14b}. We perform these calculations for the following lines: HCN J $= 1\to 0$, CO J $= 1\to 0$ and J $= 4\to 3$, ${\rm C}^{18}{\rm O}$ J $= 1\to 0$ and J $= 4\to 3$, ${\rm N}_2{\rm H}^+$ J $= 1\to 0$ and NH$_3$ $(1,1)$, as they span a wide range of densities at which they are effectively excited. We are particularly interested in different lines and transitions of CO and its isotopologues, since these lines are bright and they are often used for wide-field mapping; we use the J $=4\to3$ line as an example that should be representative of transitions at intermediate J in general. We do not include $^{13}$CO as a separate case, because testing shows that the results for it are just intermediate between those for CO and C$^{18}$O. \textsc{Despotic} solves the equations of statistical equilibrium for the level populations of each species, including non-local thermodynamic equilibrium effects. 
It uses an escape probability formalism to treat optical depth effects. \textsc{Despotic} implements multiple choices for how to calculate the escape probability, and for this work we use the large velocity gradient (LVG) approximation \citep{Goldreich74a, de-Jong80a}. The details of the numerical method are provided in \citet{Krumholz14b}. We use collision rate and Einstein coefficients taken from the Leiden Atomic and Molecular Database \citep{Schoier05a} for all calculations. The underlying collision rate data for HCN are from \citet{Dumouchel10}, for CO and C$^{18}$O are from \citet{Yang10a}, for N$_2$H$^+$ are from \citet{Daniel05a} and for NH$_3$ are from \citet{Danby88a} and \citet{Maret09a}. Our procedure for modeling molecular line emission follows that of \citet{Onus18a}: we first set the abundances of all species per H nucleus. The values we adopt are $X_{\rm HCN}=1.0\times10^{-8}$, $X_{\rm CO}=1.0\times10^{-4}$, $X_{{\rm C}^{18}{\rm O}}=1.0\times10^{-7}$, $X_{{\rm N}_2{\rm H}^+}=1.0\times10^{-10}$, and $X_{{\rm pNH}_3}=1.0\times10^{-8}$ (where pNH$_3$ indicates para-NH$_3$, the isomer that produces the (1,1) inversion transition); these values are taken from \citet{Krumholz14b} and \citet{Offner08a}. Second, we assume a constant gas temperature $T=10$ K \citep{Onus18a}. Under these assumptions we use \textsc{despotic} to compute a table of the luminosity per H$_2$ molecule in each line as a function of density and velocity gradient (which determines the optical depth in the LVG approximation), with density running from $10^0$ to $10^{10}$ cm$^{-3}$ in 100 logarithmically-spaced steps and velocity gradient from $10^{-3}$ to $10^3$ km s$^{-1}$ pc$^{-1}$ in 75 logarithmically-spaced steps. For each cell in the simulation we take the line-of-sight (LOS) velocity gradient smoothed over 8 cells, and use that plus the density to determine the line luminosity in that cell by linearly interpolating in the table. 
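The table lookup described above can be sketched as follows. This is an illustrative stand-in only: the table here is an analytic placeholder rather than actual \textsc{despotic} output, the interpolation is done linearly in the logarithmic grid coordinates, and all function names are ours.

```python
import numpy as np

# Grids matching the text: 100 log-spaced densities, 75 log-spaced gradients.
log_n = np.linspace(0.0, 10.0, 100)      # log10 n [cm^-3]
log_dvdr = np.linspace(-3.0, 3.0, 75)    # log10 |dv/dr| [km/s/pc]

# Placeholder for the DESPOTIC-computed luminosity per H2 molecule; in
# practice each entry comes from a statistical-equilibrium solve.
table = log_n[:, None] + 0.1 * log_dvdr[None, :]

def interp2d_linear(xg, yg, z, x, y):
    """Bilinear interpolation of z (on rectilinear grid xg, yg) at (x, y)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * z[i, j] + tx * (1 - ty) * z[i + 1, j]
            + (1 - tx) * ty * z[i, j + 1] + tx * ty * z[i + 1, j + 1])

def cell_luminosity(n, dvdr):
    """Per-cell line luminosity, interpolated linearly in log space."""
    return interp2d_linear(log_n, log_dvdr, table,
                           np.log10(n), np.log10(dvdr))
```

Because the LVG optical depth depends on the velocity gradient, the second table axis plays the role described in the text; in the real pipeline the smoothed LOS gradient of each cell is fed in alongside its density.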
We then generate position-position-velocity (PPV) cubes for each line using the software package \textsc{yt} \citep{Turk11a}. Each PPV cube has a resolution of $256^2\times200$, with a velocity range from $-4$ km s$^{-1}$ to $4$ km s$^{-1}$. The corresponding resolution of a single PPV voxel is $\approx 0.02\, \mathrm{pc}\, \times\, 0.02 \,\mathrm{pc}\, \times \, 0.04 \, \mathrm{km}\,\mathrm{s}^{-1}$. We generate PPV cubes along each of the cardinal axes for each simulation at times $t/t_{\rm ff}=0$, $0.1$, $0.3$, and $0.6$. \begin{figure*} \centering \includegraphics[width=.8\linewidth]{figure/fig1.pdf} \caption{Velocity-integrated intensity maps for seven different line tracers as indicated in each panel for the $\beta=0.2$ run at $t=0.1 t_{\mathrm{ff}}$. We also show the true column density map in the top left panel. The line of sight is perpendicular to the direction of the mean magnetic field. The colour bars in each panel have a dynamic range of 100, and are all centred on the mean pixel value, enabling direct comparisons between the panels. } \label{fig:integrated_intensity_maps} \end{figure*} \subsection{Modelling real telescope observations} \label{ssec: model_telescope} Real observational surveys always have finite signal-to-noise ratio (SNR) and finite spatial and spectral resolution. In order to compare our synthetic observations to real observations on an equal basis, we must therefore model these effects. For this purpose, we select resolutions and sensitivities typical of Galactic surveys, since the small size of our simulated region ($4.6$ pc) makes comparison to extragalactic studies problematic. We consider SNRs of 5, 10 and 20, a beam size of 0.1 pc, and a velocity channel width of 0.08 km s$^{-1}$. This spatial and spectral resolution is comparable to that of wide-area surveys such as COMPLETE \citep{Ridge06a} or the Green Bank Ammonia Survey \citep{Friesen17a}. 
Our implementation of telescope effects is as follows: we first convolve the image in each PPV velocity channel with a Gaussian beam of size 0.1 pc to simulate the effect of beam-smearing. Second, we coarsen our original PPV cube to the target spatial and spectral resolution. Third, we add noise to the voxels in our PPV cube. The noise assigned to each voxel is drawn from a Gaussian distribution with a dispersion that is equal to the mean luminosity in the zero-velocity channel in the noise-free map, divided by the SNR. For the purpose of the analysis below, we mask all voxels in which the total signal, after noise is added, is below 3 times the noise level. Similarly, for velocity-integrated quantities, we mask pixels for which the intensity integrated along the line of sight is lower than $I_{\mathrm{noise}} = \sigma_{\mathrm{lum}}\Delta v \sqrt{n_{\mathrm{channel}}}$, where $\sigma_{\rm lum}$ is the noise level per channel, $\Delta v$ is the channel width, and $n_{\rm channel}$ is the number of channels in the image. 
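The three steps above (beam convolution, coarsening, noise plus masking) can be sketched in NumPy as follows. This is a simplified illustration under assumptions of our own: periodic boundaries (as in the simulations, so the beam convolution can be done by FFT), no spectral coarsening, and a fixed random seed; the function names are not from the actual pipeline.

```python
import numpy as np

def gaussian_smooth_channel(img, sigma_pix):
    """Periodic Gaussian beam smoothing of one 2D channel map via FFT."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    kernel_ft = np.exp(-2 * np.pi**2 * sigma_pix**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(img) * kernel_ft).real

def mock_observe(ppv, beam_sigma_pix, block, snr, seed=0):
    """Beam smearing, spatial coarsening, noise, and 3-sigma masking
    applied to a PPV cube with axes (velocity, y, x)."""
    smooth = np.stack([gaussian_smooth_channel(ch, beam_sigma_pix)
                       for ch in ppv])
    nv, ny, nx = smooth.shape
    # block-average down to the target spatial resolution
    coarse = smooth.reshape(nv, ny // block, block,
                            nx // block, block).mean(axis=(2, 4))
    # noise level: mean signal in the central channel divided by the SNR
    sigma_noise = coarse[nv // 2].mean() / snr
    rng = np.random.default_rng(seed)
    noisy = coarse + rng.normal(0.0, sigma_noise, coarse.shape)
    # mask voxels below 3x the noise level
    return np.where(noisy > 3 * sigma_noise, noisy, 0.0), sigma_noise
```

Real (non-periodic) maps would additionally require edge handling in the convolution, and the velocity axis would be rebinned to the target channel width.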
\begin{table*} \caption{Summary of results} \begin{center} \begin{tabular}{cc|c|c|cccccccc} \hline\hline \multicolumn{2}{c|}{Snapshot} & & & \multicolumn{8}{c}{Line} \\ $\beta$ & $t/t_{\rm ff}$ & Quantity & True & CO $1\to0$ & CO $1\to0$ thin & CO $4\to3$ & C$^{18}$O $1\to0$ & NH$_3$ & HCN & C$^{18}$O $4\to3$ & N$_2$H$^+$ \\ \hline \multirow{5}{*}{0.2} & \multirow{5}{*}{0.1} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 3.90 & 3.24 & & 3.49 & 3.76 & 3.75 & 3.96 & 4.10 & 4.21 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 1630 & & 491 & 1.53 & 7.91 & 35.7 & 0.527 & 0.105 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.59 & 0.71 & 0.58 & 0.64 & 0.61 & 0.59 & 0.52 & 0.47 & 0.45 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.58 & 0.73 & 0.57 & 0.64 & 0.60 & 0.58 & 0.52 & 0.47 & 0.45 \\ & & & 0.56 & 0.68 & 0.56 & 0.61 & 0.58 & 0.57 & 0.52 & 0.48 & 0.45 \\ \hline \multirow{5}{*}{0.2} & \multirow{5}{*}{0.3} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 4.39 & 3.25 & & 3.52 & 3.85 & 3.81 & 4.05 & 4.26 & 4.49 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 3500 & & 1060 & 3.27 & 16.7 & 64.8 & 1.16 & 0.155 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.57 & 0.69 & 0.57 & 0.63 & 0.60 & 0.58 & 0.51 & 0.45 & 0.41 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.59 & 0.75 & 0.59 & 0.65 & 0.62 & 0.60 & 0.54 & 0.48 & 0.45 \\ & & & 0.55 & 0.67 & 0.54 & 0.60 & 0.57 & 0.56 & 0.51 & 0.46 & 0.43 \\ \hline \multirow{5}{*}{0.2} & \multirow{5}{*}{0.6} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 5.42 & 3.27 & & 3.57 & 4.00 & 4.91 & 4.19 & 4.51 & 4.97 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 37500 & & 11300 & 34.8 & 176 & 598 & 12.6 & 1.05 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.54 & 0.69 & 0.54 & 0.62 & 0.58 & 0.57 & 0.49 & 0.44 & 0.40 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.61 & 0.77 & 0.60 & 0.68 & 0.65 & 0.63 & 0.56 & 0.50 & 0.45 \\ & & & 0.53 & 0.66 & 
0.53 & 0.60 & 0.57 & 0.56 & 0.51 & 0.45 & 0.41 \\ \hline \multirow{5}{*}{2} & \multirow{5}{*}{0.1} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 3.87 & 3.25 & & 3.48 & 3.73 & 3.73 & 3.93 & 4.07 & 4.18 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 1670 & & 504 & 1.57 & 8.14 & 37.4 & 0.538 & 0.114 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.57 & 0.65 & 0.57 & 0.62 & 0.59 & 0.58 & 0.53 & 0.49 & 0.47 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.51 & 0.57 & 0.51 & 0.55 & 0.53 & 0.52 & 0.49 & 0.45 & 0.43 \\ & & & 0.54 & 0.61 & 0.54 & 0.58 & 0.56 & 0.55 & 0.51 & 0.48 & 0.47 \\ \hline \multirow{5}{*}{2} & \multirow{5}{*}{0.3} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 4.16 & 3.28 & & 3.54 & 3.85 & 3.82 & 4.05 & 4.25 & 4.45 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 3700 & & 1120 & 3.46 & 17.6 & 68.7 & 1.23 & 0.166 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.58 & 0.66 & 0.57 & 0.63 & 0.60 & 0.59 & 0.53 & 0.48 & 0.45 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.51 & 0.58 & 0.51 & 0.56 & 0.53 & 0.53 & 0.49 & 0.45 & 0.42 \\ & & & 0.53 & 0.61 & 0.53 & 0.58 & 0.55 & 0.55 & 0.50 & 0.46 & 0.43 \\ \hline \multirow{5}{*}{2} & \multirow{5}{*}{0.6} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 5.73 & 3.34 & & 3.63 & 4.07 & 3.97 & 4.24 & 4.60 & 5.20 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 67400 & & 20400 & 62.6 & 316 & 1070 & 22.7 & 1.85 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.58 & 0.69 & 0.58 & 0.64 & 0.61 & 0.60 & 0.54 & 0.49 & 0.45 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.53 & 0.60 & 0.53 & 0.58 & 0.56 & 0.56 & 0.52 & 0.48 & 0.45 \\ & & & 0.54 & 0.64 & 0.54 & 0.62 & 0.58 & 0.58 & 0.52 & 0.47 & 0.43 \\ \hline \multirow{5}{*}{20} & \multirow{5}{*}{0.1} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 3.95 & 3.32 & & 3.54 & 3.79 & 3.79 & 4.01 & 4.16 & 4.29 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 1560 & & 473 & 1.47 & 7.65 & 35.2 & 
0.505 & 0.105 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.67 & 0.70 & 0.66 & 0.69 & 0.68 & 0.67 & 0.63 & 0.59 & 0.57 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.49 & 0.57 & 0.49 & 0.55 & 0.51 & 0.51 & 0.46 & 0.42 & 0.39 \\ & & & 0.57 & 0.65 & 0.56 & 0.62 & 0.59 & 0.58 & 0.53 & 0.48 & 0.45 \\ \hline \multirow{5}{*}{20} & \multirow{5}{*}{0.3} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 5.23 & 3.40 & & 3.65 & 4.01 & 3.86 & 4.22 & 4.47 & 4.90 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 5480 & & 1660 & 5.10 & 25.9 & 95.5 & 1.83 & 0.203 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.66 & 0.73 & 0.66 & 0.70 & 0.68 & 0.67 & 0.62 & 0.58 & 0.55 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.48 & 0.58 & 0.48 & 0.55 & 0.51 & 0.51 & 0.47 & 0.42 & 0.39 \\ & & & 0.58 & 0.67 & 0.58 & 0.65 & 0.61 & 0.60 & 0.55 & 0.50 & 0.46 \\ \hline \multirow{5}{*}{20} & \multirow{5}{*}{0.6} & $\log \langle n \rangle_{\rm L}$ [cm$^{-3}$] & 6.35 & 3.53 & & 4.85 & 4.41 & 4.25 & 4.53 & 5.00 & 5.73 \\ & & $\langle\tau\rangle_{\rm M}$ & -- & 29600 & & 8950 & 27.5 & 139 & 473 & 9.95 & 0.834 \\ & & $\langle\sigma_{\parallel}\rangle$ [km s$^{-1}$] & 0.65 & 0.75 & 0.65 & 0.68 & 0.67 & 0.64 & 0.58 & 0.56 & 0.54 \\ & & \multirow{2}{*}{$\langle \sigma_{\perp}\rangle$ [km s$^{-1}$]} & 0.56 & 0.68 & 0.56 & 0.64 & 0.60 & 0.60 & 0.55 & 0.49 & 0.45 \\ & & & 0.64 & 0.78 & 0.64 & 0.73 & 0.68 & 0.68 & 0.62 & 0.55 & 0.52 \\ \hline\hline \end{tabular} \end{center} \label{tab:results_summary} \end{table*} \section{Results} \label{sec: results} In this section we mainly focus on the snapshots of $\beta=0.2$ and $t=0.1t_{\mathrm{ff}}$ and $t=0.6t_{\mathrm{ff}}$, using projections in which the orientation is perpendicular to the magnetic field. 
We discuss the dependence of the results over the full parameter space in \autoref{appd:full_par}, where we show that our qualitative conclusions hold regardless of the snapshots we choose to analyse. For simplicity, we therefore focus on these two example cases in the main body of the paper. For the first part of this section we use our noise-free maps at the native resolution of the simulation; we defer discussion of the biases induced by noise and finite resolution to \autoref{ssec:telescope_effect}. \subsection{Qualitative Results} \label{ssec: qualitative results} We show an example true column density map and integrated intensity maps for our seven different tracers for the case $\beta=0.2$, $t=0.1t_{\mathrm{ff}}$ (i.e. just after gravity is turned on) in \autoref{fig:integrated_intensity_maps}. In order to facilitate comparisons between different tracers, the dynamic range is the same in every panel. We see that different tracers pick up different parts of the flow, as expected \citep[e.g.,][]{Burkhart13a}. Due to strong optical depth effects, CO shows a smaller dynamic range in column density than is actually present, and preferentially picks out lower density regions. Conversely, dense gas tracers such as HCN, C$^{18}$O J=4$\to$3, and N$_2$H$^+$ produce emission primarily from overdense regions, and show much larger deficits along low column density lines of sight than are actually present. C$^{18}$O J=1$\to0$ and NH$_3$ sit in between these extremes, reproducing the dynamic range found in the true column density map relatively well. \begin{figure} \centering \includegraphics[width=1\columnwidth]{figure/fig2.pdf} \caption{Luminosity-weighted second moment maps for CO J = $1\to0$ (top left), HCN (top right), and for their ratio (bottom right) for the same snapshot and projection as shown in \autoref{fig:integrated_intensity_maps}. The bottom left panel shows the true, mass-weighted second moment map. 
In all cases the second moments we plot are normalised to the gas sound speed $c_s = 0.2$ km s$^{-1}$. } \label{fig:second_moment} \end{figure} In order to analyze the complex statistical properties of the velocity structure, in each pixel we calculate the luminosity-weighted first moment \begin{equation} \bar{v} = \frac{\int L_v v \, dv}{L} \end{equation} and second moment \begin{equation} \sigma_v =\left[\frac{\int L_v (v-\bar{v})^2\, dv}{L}\right]^{1/2}, \end{equation} where $L_v$ is the specific luminosity at velocity $v$ and $L=\int L_v \, dv$ is the velocity-integrated luminosity. \autoref{fig:second_moment} shows example second moment maps for HCN and CO J=1$\to0$, as well as their ratio, for the same snapshot as shown in \autoref{fig:integrated_intensity_maps}. We summarise the second moments that we measure for each simulation snapshot and each orientation in \autoref{tab:results_summary}. In this table, we report the luminosity-weighted mean second moment for each snapshot $\int L \sigma_v \, dA / \int L \, dA$, where the integral is over all pixels in the PPV cube. For comparison, we also calculate the true mass-weighted velocity dispersion $\int \Sigma \sigma_v\, dA / \int \Sigma\, dA$, where $\Sigma$ is the surface density of a pixel and $\sigma_v$ is the mass-weighted mean velocity dispersion along that pixel. This gives the velocity dispersion without bias from the density-dependence of emission tracers or optical depth effects. For both the true and measured second moments, we distinguish between measurements in the direction parallel to the magnetic flux, which we denote $\langle \sigma_{\parallel}\rangle$, and measurements in the two directions perpendicular to the magnetic flux, which we denote $\langle \sigma_{\perp}\rangle$; since there are two cardinal directions perpendicular to the field, we list two values of $\langle \sigma_{\perp}\rangle$ in \autoref{tab:results_summary}. 
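The moment integrals above reduce to luminosity-weighted sums over velocity channels when applied to a discrete PPV cube. A minimal NumPy sketch (our own helper, assuming a cube with axes (velocity, y, x) and channel-centre velocities `v`):

```python
import numpy as np

def moment_maps(ppv, v):
    """First and second moment maps from a PPV cube, per
    vbar = int(L_v v dv) / L  and  sigma_v = sqrt(int(L_v (v-vbar)^2 dv) / L)."""
    dv = v[1] - v[0]
    L = ppv.sum(axis=0) * dv                              # integrated luminosity
    vbar = (ppv * v[:, None, None]).sum(axis=0) * dv / L  # first moment
    var = (ppv * (v[:, None, None] - vbar)**2).sum(axis=0) * dv / L
    return vbar, np.sqrt(var)
```

In the full analysis these per-pixel second moments are further averaged with luminosity weights over the map to give the $\langle\sigma\rangle$ values reported in the table.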
In both the example shown in \autoref{fig:second_moment}, and in the numerical values reported in \autoref{tab:results_summary}, we see that our simulated maps exhibit the general trend that motivates much of this study: some tracers such as CO J=1$\to$0 show large, highly-supersonic second moments, while others such as NH$_3$ or N$_2$H$^+$ show systematically smaller second moments, which approach transsonic values in some cases. Which is closest to the true, mass-weighted velocity dispersion varies depending on the observation direction and the snapshot. In the remainder of this section, we investigate the physical reasons for these trends. \subsection{Density effects} \label{ssec:Line Width Size Relation vs. Density} One obvious difference between molecular tracers is the densities of gas to which they are sensitive. We illustrate this in \autoref{fig:L-rho}, which shows the PDF of luminosity with respect to gas density for all the tracers in the same simulation as shown in \autoref{fig:integrated_intensity_maps}, at two different times, one early in the evolution ($t=0.1t_{\rm ff}$) and one after the collapse is well-advanced ($t=0.6t_{\rm ff}$). We can see that different tracers are sensitive to different ranges of density. Some, such as CO J=1$\to0$, yield a majority of their emission from gas that is less dense than the mass-weighted mean, while others, such as N$_2$H$^+$, are biased to gas that is denser than the mean; for this particular simulation, C$^{18}$O J=$1\to0$ and NH$_3$ appear to be reasonably good tracers of the true density structure, at least near the peak of the density PDF, though this is not true of all simulations at all times. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{figure/Density_Profile.pdf} \caption{ PDF of luminosity (coloured lines) and mass (black dashed line) as a function of log density for all the tracers for the same simulation as shown in \autoref{fig:integrated_intensity_maps} ($\beta=0.2$) at times $t=0.1t_{\rm ff}$ and $t=0.6t_{\rm ff}$, corresponding to states early and late in the star formation process.} \label{fig:L-rho} \end{figure} \begin{figure} \centering \includegraphics[width=1.\linewidth]{figure/fig4.pdf} \caption{Luminosity-weighted second moment vs.~luminosity-weighted mean density in the snapshots $\beta$=0.2, $t=0.1t_{\mathrm{ff}}$ and $t=0.6t_{\mathrm{ff}}$, projected along a direction perpendicular to the magnetic field; we show the data for other projections, times, and magnetic field strengths in Appendix A. Points are marker-coded by time and colour-coded by tracer. Open symbols, labelled ``true'' in the legend, show the true mass-weighted mean density and velocity dispersion for each snapshot.} \label{fig:2nd_m_vs_Lrho} \end{figure} We investigate whether differences in linewidth are caused by density-dependent emission by comparing the mean second moments with the luminosity-weighted mean density. We define the latter quantity as \begin{equation} \langle n \rangle_{\rm L} = \frac{\int n \mathcal{L}\, dV}{\int \mathcal{L} \, dV}, \label{eq:n_L} \end{equation} where $\mathcal{L}$ is the luminosity per unit volume (integrated over all velocities) for a particular line and LOS as a function of position, $n$ is the number density (measured in terms of H nuclei per unit volume), and the integral runs over the entire simulation domain. 
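On a uniform grid the integrals in the definition above become simple weighted sums over cells. The sketch below (our own helper, not pipeline code) also illustrates the bias mechanism discussed in the text: any weight that rises with density pulls the weighted mean above the mass-weighted value.

```python
import numpy as np

def weighted_mean_density(n, weight):
    """<n>_w = sum(n * w) / sum(w) over equal-volume cells.

    weight = line luminosity density  ->  <n>_L as defined in the text;
    weight = mass density itself      ->  the true mass-weighted mean."""
    return float((n * weight).sum() / weight.sum())
```

For example, with densities $n = (1, 2, 4)$, a mass weight gives a mean of 3, while a weight $\propto n^2$ (emission biased to dense gas) gives $73/21 \approx 3.48$.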
We show the relationship between $\langle \sigma_\perp \rangle$ and $\langle n\rangle_{\rm L}$ for the snapshots of $\beta$=0.2 and $t=0.1t_{\mathrm{ff}}$ and $t=0.6t_{\mathrm{ff}}$ in \autoref{fig:2nd_m_vs_Lrho}, and report values of $\langle n\rangle_{\rm L}$ averaged over the three cardinal axes for each snapshot in \autoref{tab:results_summary}. We also report values of the true mass-weighted mean density, which is simply given by \autoref{eq:n_L} with $\mathcal{L}$ set equal to the true density $\rho$. From the figure, we see that second moments are highly correlated with luminosity-weighted mean density. The velocity dispersion of the dense tracers can drop to transsonic values, despite the fact that the actual Mach number is 9, at least at early times. At later times the luminosity-weighted mean densities tend to increase, while the velocity dispersions remain roughly constant. This is a result of the decay of turbulence and the onset of collapse. However, even deep into the collapse, we see that velocity dispersion and luminosity-weighted mean density remain highly-correlated, and we therefore conclude that such correlations are a generic feature of turbulent flows, independent of whether they are self-gravitating or undergoing collapse. \begin{figure} \centering \includegraphics[width=\linewidth]{figure/fig3.pdf} \caption{1D auto-correlation function of the luminosity density for the same snapshot as shown in \autoref{fig:integrated_intensity_maps} for various tracers, as indicated in the legend.} \label{fig:ACF} \end{figure} \begin{figure} \centering \includegraphics[width=.8\linewidth]{figure/fig5.pdf} \caption{Luminosity-weighted second moment averaged over cardinal directions, $\sigma$, as a function of auto-correlation size of emitting regions $L_{\rm AC}$ for the snapshots $\beta$=0.2, $t=0.1t_{\mathrm{ff}}$ and $t=0.6t_{\mathrm{ff}}$. Tracers are colour-coded in the same manner as \autoref{fig:2nd_m_vs_Lrho}. 
The solid and dotted lines represent least-squares linear fits to the data points for the corresponding time, as indicated in the legend. Note that this plot does not include CO J=1$\to0$ and CO J=4$\to3$, because we are unable to define $L_{\rm AC}$ for them.} \label{fig:LS_relation} \end{figure} The results shown above strongly suggest that different lines trace different regions, and that this at least partly drives the differences in linewidth. Such behaviour is generically expected in turbulent flows, which have power spectra $P(k)\propto k^\alpha$ with $\alpha < 0$, indicating that power resides predominantly on large scales. We can verify directly that this effect is at work by characterising the sizes of the emitting regions captured by different tracers, and checking how well these predict the velocity dispersion measured in that tracer. In order to characterise the sizes of the emitting regions, we calculate the auto-correlation function (ACF) of the luminosity density $\mathcal{L}$ for each tracer, \begin{equation} A(\mathbf{x}) = \frac{\int \mathcal{L}(\mathbf{x}+\mathbf{x}')\mathcal{L}(\mathbf{x}') \, d^3x'}{\int \mathcal{L}(\mathbf{x}')\mathcal{L}(\mathbf{x}') \, d^3x'}, \end{equation} where $\mathbf{x}$ is known as the lag and the integral runs over the simulation volume. Note that we have not normalised the ACF by subtracting off the mean square of $\mathcal{L}$, because we are interested in the level of variation in the line compared to blank sky, not compared to the mean emission level of the cloud. Although our turbulence is not truly isotropic, due to the presence of a large-scale magnetic field, for convenience we will work with the angle-averaged 1D ACF, $A(x)$, which is simply the average of $A(\mathbf{x})$ over angle. In \autoref{fig:ACF} we show the 1D ACF for the same snapshot as shown in \autoref{fig:integrated_intensity_maps}. 
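For a periodic box, the unnormalised ACF defined above can be computed efficiently via the Wiener-Khinchin theorem, and the angle average by binning in lag magnitude. The sketch below is our own illustration (cubic grid assumed, `nbins` chosen no larger than about half the grid size so every lag shell is populated); the autocorrelation length follows the definition used in the text, $A(L_{\rm AC}) = 0.5$.

```python
import numpy as np

def acf_1d(lum, L_box, nbins):
    """Angle-averaged 1D autocorrelation of a periodic cubic field,
    normalised so that A(0) = 1 (no mean subtraction, as in the text)."""
    N = lum.shape[0]
    ac = np.fft.ifftn(np.abs(np.fft.fftn(lum))**2).real  # Wiener-Khinchin
    ac /= ac.flat[0]                                     # A(0) = 1
    # lag magnitude of every cell, accounting for periodic wrap-around
    lag = np.fft.fftfreq(N) * L_box
    x, y, z = np.meshgrid(lag, lag, lag, indexing='ij')
    r = np.sqrt(x**2 + y**2 + z**2)
    bins = np.linspace(0.0, L_box / 2, nbins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    ok = (idx >= 0) & (idx < nbins)
    sums = np.bincount(idx[ok], weights=ac.ravel()[ok], minlength=nbins)
    counts = np.bincount(idx[ok], minlength=nbins)
    prof = sums / np.maximum(counts, 1)                  # shell-averaged A(x)
    return 0.5 * (bins[1:] + bins[:-1]), prof

def l_ac(lags, prof):
    """Autocorrelation length: first lag at which A(x) falls below 0.5."""
    below = np.nonzero(prof < 0.5)[0]
    return lags[below[0]] if below.size else np.nan
```

For a compact Gaussian emitting region of width $\sigma$, the ACF is itself Gaussian with $A(x) = \exp(-x^2/4\sigma^2)$, so $L_{\rm AC} \approx 1.67\sigma$, which provides a simple sanity check.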
We see that the ACF is different for different tracers, with low-density, high-optical depth tracers like CO J=1$\to0$ showing a shallow ACF, and high-density, low-optical depth tracers like N$_2$H$^+$ showing a steep ACF. For the purposes of our analysis here, we will define the characteristic auto-correlation length scale $L_{\rm AC}$ for a given tracer as the lag for which $A(L_{\rm AC}) = 0.5$. Note that this leaves $L_{\rm AC}$ undefined for CO J=1$\to0$ and J=4$\to$3, since the ACF for them remains above 0.5 even for lags comparable to the size of the simulation box. We compare the measured linewidth in each tracer with the corresponding characteristic emitting size in \autoref{fig:LS_relation}. There is clearly a near-linear correlation between $\log L_{\rm AC}$ and $\log\sigma$, where $\sigma$ is the root mean square of the mean second moments measured along each of the three cardinal axes. We illustrate this by plotting simple linear least-squares fits to the data in \autoref{fig:LS_relation}; these fits describe the data quite well, particularly at $t=0.1t_{\rm ff}$. Clearly at least part of the variation in linewidth measured with different tracers is a result of differences in density sensitivity leading to tracers picking out regions of different size. This, combined with the linewidth-size relation of turbulence, in turn induces a difference in linewidth between the tracers. \subsection{Opacity effects} \label{ssec:Opacity_Broadening} \begin{figure} \centering \includegraphics[width=.8\linewidth]{figure/fig6.pdf} \caption{PDF of luminosity-weighted second moment over all pixels for the same snapshot as \autoref{fig:integrated_intensity_maps}, measured along the line of sight perpendicular to the mean magnetic field. In the top panel we show the distribution of second moments measured in each pixel using our simulated line emission, including optical depth effects in the LVG approximation. 
In the middle panel, we show the second moments measured from line emission where we have artificially set the optical depths of all lines in all cells to zero. The bottom panel shows the distribution of the ratios of second moments computed including and ignoring optical depth effects. } \label{fig:Opacity} \end{figure} We next explore the effects of opacity on the linewidths measured with optically thick tracers. As pointed out by \citet{Correia14a}, linewidths can be artificially enhanced by opacity broadening, whereby high optical depth suppresses emission in the line core more than in the line wings, making the line appear too broad. To begin exploring this effect, we use the cell-by-cell optical depths (which we compute using the LVG approximation) to calculate the mass-weighted mean optical depth $\langle\tau\rangle_{\rm M}$ for each of our simulation snapshots and LOS in each of our lines. We report these values averaged over the three cardinal axes in \autoref{tab:results_summary}. As expected, we find that CO J $=1\to 0$ and CO J $=4\to 3$ are generally very optically thick ($\langle\tau\rangle_{\rm M}\sim 1000-10000$), HCN is moderately optically thick ($\langle\tau\rangle_{\rm M}\sim 100-1000$), and all other lines are moderately or completely optically thin. To see how this affects the inferred velocity dispersion, in the top panel of \autoref{fig:Opacity} we show the distribution of second moments of our seven tracers measured in every pixel for the same sample snapshot as shown in \autoref{fig:integrated_intensity_maps}. For comparison, in the middle panel we show the same quantity, but calculated in a case where we artificially set the optical depth of all lines to zero (or equivalently, where we take the limit of $\nabla\cdot\mathbf{v}\to\infty$ in the LVG approximation). 
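The mechanism of opacity broadening can be seen in a one-zone toy model (ours, far simpler than the LVG calculation used in the paper): for a Gaussian opacity profile $\phi(v)$, the emergent profile $1 - e^{-\tau_0\phi(v)}$ saturates in the core but not in the wings, so its second moment grows with $\tau_0$.

```python
import numpy as np

def emergent_second_moment(tau0, sigma=1.0):
    """Second moment of the emergent profile 1 - exp(-tau0 * phi(v)) for a
    Gaussian opacity profile phi(v); a one-zone toy model of opacity
    broadening, not the full LVG radiative transfer calculation."""
    v = np.linspace(-10 * sigma, 10 * sigma, 4001)
    phi = np.exp(-v**2 / (2 * sigma**2))
    intens = 1.0 - np.exp(-tau0 * phi)   # saturated emergent profile
    vbar = (intens * v).sum() / intens.sum()
    return np.sqrt((intens * (v - vbar)**2).sum() / intens.sum())
```

In the optically thin limit the function recovers the intrinsic width $\sigma$, while at CO-like optical depths it returns a substantially larger value; the full calculation gives smaller broadening factors than this toy model because real sightlines superpose many velocity components.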
In the bottom panel of the figure, we show the distribution of ratios of the measured to optically thin second moments; that is, the bottom panel is the distribution of the ratios of observed second moments including opacity effects (as shown in the top panel) to second moments that would be observed without opacity effects (as shown in the middle panel). From \autoref{fig:Opacity}, we see that opacity broadening is moderately strong for CO J$=1\to 0$, J$=4\to 3$ and HCN, on average adding $\sim 30\%$ to the CO-inferred velocity dispersion and $\sim 15\%$ to the HCN-inferred one. The effect is weak for all other lines. \begin{figure} \centering \includegraphics[width=\linewidth]{figure/2nd_m-tau.pdf} \caption{Luminosity-weighted mean second moment versus mass-weighted mean opacity for $\beta=0.2$. Horizontal lines show the value of the true velocity dispersion at each time.} \label{fig:second_moment-opacity} \end{figure} We investigate the dependence of linewidth on opacity for the snapshots of $\beta=0.2$ and $t=0.1t_{\mathrm{ff}}$ and $t=0.6t_{\mathrm{ff}}$ in \autoref{fig:second_moment-opacity}. From the figure, we see that there is a weak correlation between linewidth and opacity, consistent with our earlier finding that opacity broadening is a $\sim 30\%$ effect for CO J $=1\to0$ \citep{Correia14a} and a $\sim 15\%$ effect for HCN. Interestingly, however, there is even a correlation between linewidth and opacity for mass-weighted mean opacities $\langle\tau\rangle_{\rm M} \lesssim 1$, where optical depth effects cannot possibly be important -- for example, \autoref{fig:Opacity} shows that optical depth effects are completely negligible for C$^{18}$O J $=1\to0$, J $=4\to3$ and N$_2$H$^+$, our three most optically thin tracers, but there is nonetheless a systematic trend that linewidths measured with C$^{18}$O are larger than those measured with N$_2$H$^+$. 
The reason is simple: optical depth is correlated with density sensitivity, which we have also seen affects measured linewidths. Thus even in cases where the optical depth itself has no effect, there can still be an apparent correlation between optical depth and linewidth simply because the density range to which a given molecule is sensitive affects the linewidth, and density and optical depth are correlated. The relationship is even more complex for tracers that are at least marginally optically thick, because the effective critical density for a given species depends on its optical depth -- the level populations will thermalise in an optically thick region at lower density than in an optically thin one. Thus high optical depth weights the emission to lower density regions both because it suppresses the escape of photons from higher density ones, and because it helps thermalise the population and thus increase the luminosity in lower density ones. In order to disentangle the various effects that optical depth has on line shape, we carry out the following experiment for CO. We first calculate the level population of CO using our normal escape probability treatment of optical depth effects, but we then calculate the resulting emission assuming the gas is optically thin. In this way we can separate out the effects of CO optical depth on the level population from its effects on the emergent light, i.e., the effects of opacity broadening. We calculate the velocity dispersion of the PPV cubes produced in this manner using the same procedure as in \autoref{ssec: qualitative results} and show the results in \autoref{tab:results_summary}. We see that the velocity dispersions computed for CO in this manner are generally very close to the values found for C$^{18}$O. 
This means that, at least for CO, the effect of opacity broadening is more important than the density sensitivity in setting the linewidth -- i.e., when we compute the density-dependence of emission including optical depth effects, but ignore the radiative transfer effects of optical depth, the linewidths we obtain are closer to the case where the optical depth is negligible for all purposes (as is the case for C$^{18}$O) than to the case where we include optical depth effects in both the level population and the radiative transfer calculation. Conversely, for HCN, which has a more moderate optical depth and a stronger dependence on density, opacity broadening is clearly less important than density effects: while \autoref{fig:Opacity} indicates that opacity broadening does increase its linewidth, examination of \autoref{tab:results_summary} shows that it nonetheless yields a linewidth that is systematically \textit{smaller} than the true one. Taken together, our experiments suggest that both density-dependent excitation and opacity broadening can have significant effects on inferred linewidths. For very optically thick species like CO $1 \to 0$, the opacity broadening effect is dominant. However, density-dependent excitation and the resulting variation in the characteristic sizes of emitting regions also produce a strong correlation between linewidth and the characteristic density of the emitting material. This primary correlation can also produce a spurious secondary correlation between optical depth and inferred linewidth even in species for which opacity broadening is completely negligible.
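The saturation effect described above can be illustrated with a toy model (our own sketch, not part of the paper's radiative transfer pipeline): for a Gaussian optical-depth profile with unit true dispersion, the emergent intensity $1-e^{-\tau(v)}$ flattens at line centre, inflating the measured second moment relative to the optically thin limit.

```python
import numpy as np

def second_moment(v, intensity):
    """Intensity-weighted velocity dispersion (second moment) of a spectrum."""
    vbar = np.sum(intensity * v) / np.sum(intensity)
    return np.sqrt(np.sum(intensity * (v - vbar) ** 2) / np.sum(intensity))

v = np.linspace(-10.0, 10.0, 4001)  # velocity axis in units of sigma_true
sigma_true = 1.0

def broadening_ratio(tau0):
    """Ratio of the opacity-broadened to optically thin second moment
    for a Gaussian optical-depth profile with peak optical depth tau0."""
    tau = tau0 * np.exp(-v**2 / (2.0 * sigma_true**2))
    emergent = 1.0 - np.exp(-tau)  # saturated emergent intensity
    thin = tau                     # optically thin limit, I proportional to tau
    return second_moment(v, emergent) / second_moment(v, thin)

print(broadening_ratio(0.01))  # close to 1: negligible broadening
print(broadening_ratio(10.0))  # well above 1: line-centre saturation broadens the line
```

For a large peak optical depth the flat-topped profile inflates the second moment by several tens of percent, while for $\tau_0 \ll 1$ the ratio stays near unity, mirroring the trend seen in the simulations.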
\subsection{Effects of finite resolution, sensitivity and beam-smearing} \label{ssec:telescope_effect} \begin{figure} \centering \includegraphics[width=1.1\columnwidth]{figure/noise_eff.pdf} \caption{Velocity-integrated intensity maps (left) and two typical spectra (right) for two tracers CO $1\to 0$ and HCN, for the same snapshot as \autoref{fig:integrated_intensity_maps}. In the left column, the top panel for each of the two tracers shows the results for the true PPV cube, while the bottom panel shows the results after beam convolution and with a finite SNR of 5. The red circles in the map denote the positions at which we extract the two example spectra shown in the right column. In this column, we show both the true and noise-added spectra; we also show the $1\sigma$ noise level and the mask we apply at $3\sigma$ as dotted and dashed lines, respectively.} \label{fig:img&spct_noise} \end{figure} Finally, we investigate bias due to noise and beam-smearing. In \autoref{fig:img&spct_noise}, we show some examples of velocity-integrated intensity maps and typical spectra before and after adding noise.\footnote{Careful readers may notice that the brightness temperature in the CO lines is somewhat larger than the gas kinetic temperature of 10 K. While this should not happen in reality, it can happen in our simulations due to the limitations of the LVG approximation for radiative transfer, which treats all absorption as local, and thus can miss absorption of background emission by foreground structures that are located some distance from the emitter, but happen to overlap in velocity. This issue only affects CO, since no other tracer is optically-thick enough for spatially-distant foreground absorption to be significant. 
Moreover, by varying our method for approximating the velocity gradient, we have verified that this issue has no significant impact on our results for CO kinematics; changing our method of estimating the velocity gradient such that the peak brightness temperature for CO changes by factors of $\sim 10$ produces $\lesssim 10\%$ changes in the inferred velocity dispersions.} In the left panel of \autoref{fig:img&spct_noise}, we see that the intensity maps are only minimally affected by noise and finite spatial resolution. However, in the right panel of \autoref{fig:img&spct_noise}, we see that for CO $1\to0$, the line wings are significantly hidden by the noise, which lowers the recovered linewidth, while for HCN this effect is much smaller. In order to illustrate the dependence of this narrowing effect on the noise level and choice of tracer, in \autoref{fig:teles_effect} we show the ratio of the luminosity-weighted mean velocity dispersion inferred from our cubes with finite resolution and sensitivity to the true velocity dispersion. We show this ratio as a function of the signal-to-noise ratio of the observations. For comparison, we also show results obtained from the idealized synthetic observation (infinite SNR and high resolution) as the dashed lines. We see that limited SNR can lower the inferred linewidth significantly, especially for SNR of 5 and for low density tracers. This is because we throw out the portion of the line wings contaminated by noise. In this sense, noise is the opposite of opacity effects -- the latter preferentially suppress the line centre, while the former suppresses the wings. At SNR $\sim$ 5, the linewidth we recover for CO $1 \to 0$ drops by $\sim$30\% compared to what we obtain in the infinite resolution limit, and even at SNR $\sim$ 20, it is still lowered by 5\%.
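The wing-clipping effect of the $3\sigma$ mask can be sketched with a noiseless toy calculation (our own illustration, not the paper's pipeline; real noise adds scatter on top of this systematic narrowing). A unit-dispersion Gaussian line with a given peak SNR is masked at $3\sigma_{\rm noise}$, and the second moment is recomputed from the surviving channels:

```python
import numpy as np

def second_moment(v, intensity):
    """Intensity-weighted velocity dispersion of a spectrum."""
    vbar = np.sum(intensity * v) / np.sum(intensity)
    return np.sqrt(np.sum(intensity * (v - vbar) ** 2) / np.sum(intensity))

v = np.linspace(-6.0, 6.0, 2401)
sigma_true = 1.0
profile = np.exp(-v**2 / (2.0 * sigma_true**2))  # unit-peak Gaussian line

def masked_ratio(snr, sigma_noise=1.0):
    """Recovered-to-true dispersion after discarding channels below 3 sigma_noise.

    The line peak is snr * sigma_noise, so higher SNR keeps more of the wings.
    """
    intensity = snr * sigma_noise * profile
    kept = np.where(intensity > 3.0 * sigma_noise, intensity, 0.0)
    return second_moment(v, kept) / sigma_true

print(masked_ratio(5))   # strong clipping of the faint wings
print(masked_ratio(20))  # milder clipping at high SNR
```

The recovered dispersion falls well below the true value at an SNR of 5 and approaches it as the SNR rises, qualitatively consistent with the deficits quoted above (the exact numbers depend on the line shape).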
For the highest density tracers such as N$_2$H$^+$, the bias induced by noise is smaller than for the low density tracers; for example, the N$_2$H$^+$ velocity dispersion we recover from the noisy cube is only 7\% smaller than for the true cube, even at an SNR of 5. Interestingly, at high SNR $\sim$ 20, the velocity dispersion inferred from the noisy cube can even slightly exceed the value recovered from the true cube, due to the effect of beam convolution. We have verified that this is the case by also constructing PPV cubes with beam smearing but no noise -- for such cubes, we find that the linewidths of the higher density tracers typically increase by a few percent, while those of the lower density tracers are largely unaffected. To summarize, it seems that the bias introduced by the telescope is set by a competition between beam and noise effects, and the bias induced by these two components is different for different tracers. Low density tracers are influenced significantly by noise and not affected much by beam-smearing, leading to lower measured velocity dispersions, whereas high density tracers are influenced less by noise and more by beam-smearing, so that the velocity dispersion we infer for them is increased. All of these effects of resolution and sensitivity sit on top of the radiative transfer and excitation effects we have explored in the previous sections. \begin{figure} \centering \includegraphics[width=\linewidth]{figure/noise.pdf} \caption{ Ratio of the luminosity-weighted mean to true velocity dispersion, using dispersions inferred from seven emission lines as indicated in the legend, calculated from PPV cubes with finite resolution and added noise, as described in \autoref{ssec: model_telescope}. We show the results as a function of the signal-to-noise ratio. All points shown are for the same snapshot and projection as shown in \autoref{fig:integrated_intensity_maps}.
For comparison, results obtained from the idealised synthetic observation (infinite SNR and resolution equal to the native resolution of the underlying simulations) are shown as dashed lines. } \label{fig:teles_effect} \end{figure} \section{Discussion} \label{sec: disc} Having analysed the mechanisms that give rise to various biases, we are now in a position to draw overall conclusions about the relative reliability of various tracers, and how this depends on cloud properties. Doing so is our primary focus in this section. \subsection{Which tracers reflect the true velocity dispersion?} \label{ssec: Which tracers reflect the true velocity dispersion?} We begin with the most basic question: which tracers most reliably match the true (i.e., mass-weighted) velocity dispersion, and what sorts of errors and biases do these and other tracers have? To answer this question, we plot the distribution of the ratio of the velocity dispersions measured from emission lines to the true ones for all the pixels of all snapshots in \autoref{fig:sigma_line_vs_sigma_true_all}. We start here with the case without beam smearing or noise, and note that this histogram includes all snapshots at all times, not just the cases on which we focused as examples in \autoref{sec: results}. From this figure it is clear that, overall, C$^{18}$O is generally most accurate, with NH$_3$ as a close second; both have typical errors below $\sim 10\%$, and little bias, i.e., the PDF is reasonably well-centred around $\sigma_{\rm line}/\sigma_{\rm true} = 1$. Interestingly, we see that CO $4\to3$ is also well-centred on the true value. However, its distribution is significantly broader, with errors of $\sim 20\%$.
This is not surprising, since we have seen that the good average performance of CO $4\to3$ is due to near-cancellation between density bias and opacity bias; the latter causes pixels with high column density to show inflated linewidths, while the former causes pixels with low column density to return linewidths that are artificially low. CO $1\to0$ is biased high by $\sim 20\%$, and has a tail extending to $>50\%$, while the denser tracers C$^{18}$O $4\to3$ and N$_2$H$^+$ are biased low by a similar amount, and have tails extending down to a factor of 2 error. \begin{figure} \centering \includegraphics[width=\linewidth]{figure/sigma_line_vs_sigma_true_all.pdf} \caption{ Distribution of the ratio of the velocity dispersion measured using various emission lines, $\sigma_{\rm line}$, to the true mass-weighted velocity dispersion, $\sigma_{\rm true}$, for all pixels of all snapshots, i.e., combining all magnetic field strengths, times, and orientations. Different colours show different emission lines, as indicated in the legend.} \label{fig:sigma_line_vs_sigma_true_all} \end{figure} We show in \aref{appd:full_par} that these general conclusions apply not just to the total distribution over all snapshots, but also to individual cases at different plasma $\beta$, orientation with respect to the magnetic field, and simulation time. It is at least somewhat surprising that which tracers are most accurate does not depend on these factors in light of \autoref{fig:L-rho}, which shows that which lines trace the \textit{mass} best does depend on evolutionary state -- at early times when the density distribution is close to lognormal, C$^{18}$O emission follows mass most closely, but at later times when the density distribution has developed a significant powerlaw tail, dense gas tracers such as N$_2$H$^+$ more accurately follow the tail of the density distribution.
The resolution to this apparent paradox can be found by noticing that, even at late times, C$^{18}$O remains the best tracer near the peak of the PDF. We have seen that density and velocity are anti-correlated, which is why dense gas tracers tend to be biased low in their estimates of the velocity dispersion. This effect helps protect the accuracy of moderate density tracers like C$^{18}$O and NH$_3$ at late times. Although there is substantial mass in the high-density powerlaw portion of the PDF, the bulk of the kinetic energy is still retained in the lower-density material for which C$^{18}$O and NH$_3$ remain accurate tracers. Thus the material that these tracers fail to capture makes relatively little contribution to the velocity dispersion, and failing to capture it introduces little error. Finally, let us consider how beam smearing and noise change our picture as outlined above. From the analysis in \autoref{ssec:telescope_effect}, we see that SNR values as low as 5 will lead to measurements of the velocity dispersion that are up to $\sim 30\%$ lower than would be recovered in the limit of infinite SNR. High density tracers are the least affected, and become nearly insensitive to SNR once the SNR exceeds $\sim 10$, while low and moderate density tracers often require SNR of about 20 to approach the infinite SNR limit. Such high SNRs are generally only practical to obtain for the rotational lines of CO. This presents a challenge to observational survey design, because it is precisely such lines that suffer the most from opacity bias, and thus tend to \textit{overestimate} the velocity dispersion when the SNR is high. Conversely, observations of tracers such as C$^{18}$O and NH$_3$ that are relatively immune to density and opacity bias may often require long integration times to reach acceptable SNR.
In practice these considerations may suggest the use of CO $4\to3$ as the best available compromise, as it is the only line that gives a relatively precise measurement of kinematics but is also bright enough to allow reasonable mapping speeds at high SNR. \subsection{Dependence on cloud density} \label{ssec:density_dependence} As discussed in \autoref{ssec:simulations}, in order to calculate observable line emission, we must choose a particular set of physical units for our simulation suite. It is therefore important to check to what extent our results are robust against this choice. In order to investigate this, we can rescale the simulations to arbitrary density and size scale. Since we are extracting an idealised sub-region of a molecular cloud, we are free to regard our simulation as representing a small, dense part of the cloud, or a larger, less dense part. Quantitatively, we rescale our density field by a factor $a$ compared to our fiducial choice, which means the average density becomes $n=1000a$ cm$^{-3}$. In the process, we have to fix the virial parameter, the Mach number, and the plasma beta, because these are all dimensionless quantities that affect the solutions to the equations of hydrodynamics. We also keep the sound speed the same, because that is set by the rate of cosmic ray heating, which is roughly constant in the Galaxy. In order to satisfy these constraints, we adopt the following scalings for our re-scaled simulation: \begin{eqnarray} t_{\rm ff}& = &1.1\cdot a^{-\frac{1}{2}}\;\mbox{Myr},\\ L_0& =& 4.6\cdot a^{-\frac{1}{2}}\;\mbox{pc},\\ v_{\rm rms}& = &1.8\;\mbox{km s}^{-1},\\ M &= &5900\cdot a^{-\frac{1}{2}} \;M_\odot,\\ B_0& =& (13, 4.4, 1.3)\cdot a^{\frac{1}{2}}\;\mu\mbox{G}. \end{eqnarray} With these choices, all dimensionless numbers describing the flow are left unchanged.
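These scalings can be restated as a small helper function (a sketch of the algebra above, not code used in the paper), together with a check that representative dimensionless combinations -- a crossing-time ratio and an Alfv\'en-speed proxy -- are indeed independent of $a$:

```python
import math

def rescaled_units(a):
    """Physical units for the simulation rescaled in mean density by a factor a."""
    return {
        "n": 1000.0 * a,                 # mean density, cm^-3
        "t_ff": 1.1 / math.sqrt(a),      # free-fall time, Myr
        "L_0": 4.6 / math.sqrt(a),       # box size, pc
        "v_rms": 1.8,                    # km/s, fixed (fixed Mach number)
        "M": 5900.0 / math.sqrt(a),      # mass, Msun
        "B_0": tuple(b * math.sqrt(a) for b in (13.0, 4.4, 1.3)),  # microGauss
    }

# Dimensionless combinations should not depend on a:
for a in (0.1, 1.0, 10.0):
    u = rescaled_units(a)
    crossing = u["t_ff"] * u["v_rms"] / u["L_0"]  # ~ t_ff / t_cross, fixed alpha_vir
    alfven = u["B_0"][0] / math.sqrt(u["n"])      # ~ Alfven speed, fixed plasma beta
    print(a, round(crossing, 4), round(alfven, 4))
```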
We consider $a=0.1$ and $a=10$ in addition to our standard case $a=1$, and generate PPV cubes and velocity dispersion measurements for all pixels in all snapshots following the same procedure described in \autoref{sec: methods} and \autoref{ssec: qualitative results}. In \autoref{fig:scaling} we show the luminosity-weighted mean velocity dispersion inferred from all our molecular species, averaged over all simulation snapshots and orientations and normalised to the true velocity dispersion, versus the density scaling factor. We see that our conclusion that C$^{18}$O $1 \to 0$ is generally best, with NH$_3$ close behind, holds over a wide range of density, but that the amount of bias in these two species and in other tracers is density-dependent. Lower-density clouds suffer less opacity broadening and worse density bias, which brings CO $1\to0$ closer to the true value and pushes dense gas tracers further from it. Denser clouds have the opposite trend, suffering more opacity bias and less density bias, so that nearly any dense gas tracer works equally well, but CO is quite bad, with $\sim 30\%$ errors. \begin{figure} \centering \includegraphics[width=\linewidth]{figure/ratio.pdf} \caption{Ratio of the luminosity-weighted mean velocity dispersion inferred from five emission lines as indicated in the legend, averaged over all simulation snapshots and orientations and normalised to the true velocity dispersion, as a function of the density scaling factor $a$.} \label{fig:scaling} \end{figure} \subsection{Limitations of periodic boxes} In addition to worrying whether our results depend on our choice of density scale, we can also worry that they depend on geometry. Our simulations are periodic boxes representing the central regions of molecular clouds, while real molecular clouds have dense material concentrated towards the center, surrounded by more diffuse molecular material toward the cloud's edge.
It is therefore important to consider the extent to which our use of the periodic box approximation might affect our conclusions. \citet{Kowal07} have studied this question by comparing uniform-density periodic boxes such as ours to simulations in which an overall density gradient is applied on top of the periodic box, creating an effective boundary to the cloud. They find that the boundary of the molecular cloud increases the proportion of low density gas due to the disturbance of the diffuse ambient medium. This has the effect of increasing the amount of emission per unit total cloud mass from low-density tracers such as CO J=$1\to 0$, but does not affect the high density part of the density PDF, and thus has a small effect on high density tracers, particularly C$^{18}$O J=$4 \to 3$, HCN and N$_2$H$^+$. Thus the only line for which our results are potentially affected by our use of a periodic box is CO J=$1\to 0$. Moreover, the direction of the bias from observation of any particular tracer depends on the extent to which that tracer departs from the ``true'' mass distribution. The presence of a cloud boundary will change the ``true'' mass-weighted density PDF and the corresponding luminosity-weighted PDF of the tracers, but the correlation between the tracers and underlying mass is the same. Thus the direction of bias for CO is likely to be the same even in the presence of a cloud boundary. We therefore conclude that the main likely effect of adding a boundary layer to our cloud would be to change the absolute amount, but not the direction, of the bias for CO J=$1\to 0$. Other results would change minimally. \section{Conclusions} \label{con} In this paper, we investigate the factors that drive differences in the linewidths of molecular clouds measured with various tracers. We carry out this investigation using a suite of self-gravitating MHD simulations of molecular clouds, covering a wide range of magnetic field strength and evolutionary state. 
For each of our sample simulations, we model the line emission using a large-velocity gradient approximation applied cell-by-cell to create synthetic PPV cubes that we use to investigate cloud kinematic structure in a variety of tracers. We specifically explore the effect of density-dependent emission and opacity broadening on observed linewidths, two mechanisms that have been discussed in the literature before, but never systematically investigated together. We also explore the effects of finite resolution and signal to noise ratio. The major findings of this paper are summarized below: \begin{enumerate} \item Molecular lines that are sensitive to denser gas tend to produce systematically lower estimates of the gas velocity dispersion. This is a direct consequence of the linewidth-size relation obeyed by turbulent molecular clouds: tracers that are excited primarily in high-density gas tend to produce most of their emission from compact regions that, as a result of the linewidth-size relation, have small velocity dispersions and thus underestimate the true velocity dispersions of large clouds. Low-density tracers, by contrast, sample larger regions and therefore return larger velocity dispersions that are closer to the true velocity dispersion. \item Opacity broadening also introduces a significant bias in the linewidths measured with optically thick tracers like CO $J=1\rightarrow 0$. The effect here tends to be opposite to the density bias: tracers that are easily excited in low-density gas, such as CO, tend to have high optical depths near line centre. This preferentially suppresses emission from the line centre, biasing inferred velocity dispersions too high. The relationship between optical depth and density-dependent excitation is complex, because high optical depth lowers the effective critical density, while sub-thermal excitation can, depending on the molecule and line, either increase or decrease the optical depth.
For CO, opacity broadening appears to be the more important effect, but which factors are dominant must be determined on a line-by-line basis. \item Bias induced by noise, finite spectral resolution and beam smearing from the telescope is mainly set by a competition between beam and noise effects. Noise introduces a bias whose effect is opposite that of opacity broadening, as it contaminates the line wings significantly, which artificially reduces the inferred linewidth; low density tracers are the most seriously affected. Beam-smearing, on the other hand, increases the linewidth slightly for high density tracers. At low SNR, the combined effects lower the linewidth of all tracers, while at high SNR, the linewidths of low density tracers are slightly reduced, and those of high density tracers are increased by a few percent due to beam-smearing. \item The competing biases of opacity broadening and density-dependent excitation lead to a ``sweet spot'' where, at fixed SNR, the overall bias is minimal, for three common tracers: the $J=4\rightarrow3$ transition of CO, the $J=1\rightarrow 0$ transition of C$^{18}$O and the $(1,1)$ inversion transition of NH$_3$. These lines generally produce the best estimates of true velocity dispersion for a typical molecular cloud, with errors below $\sim 10\%$ ($\sim 20\%$ for CO 4$\to$3). This statement is robust against variations of magnetic field strength, evolutionary state, and orientation relative to the direction of the overall magnetic field. By contrast, CO $J=1\rightarrow 0$ lines tend to produce velocity dispersions that are too large by $\approx 20\%$, while denser gas tracers such as HCN and N$_2$H$^+$ tend to underestimate the true velocity dispersion by similar amounts. 
However, these biases must be weighed against those produced by finite SNR, since the C$^{18}$O J=$1\to0$ and NH$_3(1,1)$ lines tend to be faint, and thus require longer integration times than some other lines to reach SNR values high enough that noise does not dominate the uncertainty. \item The level of bias in various tracers is sensitive to the mean density of the region being observed. Over a wide range of density, C$^{18}$O remains the best estimator of the true velocity dispersion, with NH$_3$ close behind, but the amount of bias in these two and in other tracers is density-dependent. In extreme cases, errors in the estimated velocity dispersion can be as large as 50\% high or low, depending on the cloud density and the choice of tracer. \end{enumerate} \section*{Acknowledgements} MRK acknowledges funding from the Australian Research Council through the Future Fellowship (FT180100375) and Discovery Projects (DP190101258) funding schemes. BB acknowledges support from the Simons Foundation Flatiron Institute and the Center for Computational Astrophysics (CCA). Simulations used for this work are part of the Catalog for Astrophysical Turbulence Simulations (CATS) project hosted by CCA at \url{www.mhdturbulence.com}. This work made use of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government, through grant jh2. \bibliographystyle{mnras}
\section*{Background} \label{sec:background} Machine Learning has revolutionized life science research, especially in Neuroimaging and Bioinformatics~\cite{gao2017,serra2018}, such as by modeling interactions between whole brain genomics/imaging~\cite{park2012,medland2014} and identifying Alzheimer's Disease (AD)-related proteins~\cite{zhao2019}. In particular, Deep Learning can achieve accurate computer-assisted diagnosis when large-scale annotated training samples are available. In Medical Imaging, unfortunately, preparing such massive annotated datasets is often unfeasible \cite{han2020AIAI,cheplygina2019}; to tackle this pervasive problem, researchers have proposed various data augmentation techniques, including Generative Adversarial Network (GAN)-based ones~\cite{goodfellow2014,FridAdar,Han2020WIRN,Han2,Han3,han2019CIKM}; alternatively, Rauschecker \textit{et al.} combined Convolutional Neural Networks (CNNs), feature engineering, and an expert-knowledge Bayesian network to derive brain Magnetic Resonance Imaging (MRI) differential diagnoses that approach neuroradiologists' accuracy for $19$ diseases. However, even exploiting these techniques, supervised learning still requires many images with pathological features, even for rare diseases, to make a reliable diagnosis; moreover, it can only detect specific, already-learned pathologies. In this regard, just as physicians notice previously unseen anomaly examples using prior information on healthy body structure, unsupervised anomaly detection methods leveraging only large-scale healthy images can discover and flag overlooked diseases in cases where supervised models fail to generalize.
Towards this, researchers reconstructed a single medical image \textit{via} GANs~\cite{Schlegl}, AutoEncoders (AEs)~\cite{Uzunova}, or combining them, since GANs can generate realistic images and AEs, especially Variational AEs (VAEs), can directly map data onto its latent representation~\cite{Chen}; then, unseen images were scored by comparing them with reconstructed ones to discriminate a pathological image distribution (i.e., outliers either in the learned feature space or from high reconstruction loss). However, those single-image reconstruction methods mainly target diseases that are easy to detect from a single image, even for non-expert human observers, such as glioblastoma on MR images~\cite{Chen} and lung cancer on Computed Tomography (CT) images~\cite{Uzunova}. Without considering continuity between multiple adjacent images, they cannot directly discriminate diseases composed of the accumulation of subtle anatomical anomalies, such as AD. Moreover, no study has shown so far how unsupervised anomaly detection is associated with either disease stages, various (i.e., more than $2$ types of) diseases, or multi-sequence MRI scans.
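As a minimal sketch of this reconstruction-based scoring (the arrays below are hypothetical stand-ins for real and reconstructed slices; the shapes and noise levels are our own choices, not values from any cited work):

```python
import numpy as np

def anomaly_score(real_slices, recon_slices):
    """Average per-scan L2 reconstruction error, used as an anomaly score.

    Scans resembling the healthy training distribution reconstruct well
    (low score); pathological scans yield large squared errors (high score).
    """
    diff = np.asarray(real_slices, dtype=np.float64) - np.asarray(recon_slices, dtype=np.float64)
    return float(np.mean(diff ** 2))

# Toy illustration: a faithful reconstruction scores lower than a poor one.
rng = np.random.default_rng(0)
scan = rng.normal(0.0, 1.0, (3, 64, 64))               # three adjacent "slices"
recon_good = scan + rng.normal(0.0, 0.05, scan.shape)  # faithful reconstruction
recon_poor = scan + rng.normal(0.0, 0.5, scan.shape)   # poor reconstruction
print(anomaly_score(scan, recon_good) < anomaly_score(scan, recon_poor))  # True
```

Thresholding such scores, or sweeping the threshold to trace out an ROC curve, turns the reconstruction error into a healthy/abnormal classifier.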
Therefore, this paper proposes unsupervised Medical Anomaly Detection GAN (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect various diseases at various stages on multi-sequence structural MRI (Fig.~\ref{fig1}): (\textit{Reconstruction}) Wasserstein loss with Gradient Penalty (WGAN-GP)~\cite{Gulrajani,han2018} + 100 $\ell _1$ loss---trained on $3$ healthy brain axial MRI slices to reconstruct the next $3$ ones---reconstructs unseen healthy/abnormal scans; the $\ell _1$ loss generalizes well only for unseen images with a similar distribution to the training images while the WGAN-GP loss captures recognizable structure; (\textit{Diagnosis}) Average $\ell _2$ loss per scan discriminates them, comparing the ground truth/reconstructed slices; the $\ell _2$ loss clearly discriminates the healthy/abnormal scans as squared error becomes huge for outliers. Using Receiver Operating Characteristics (ROCs) and their Area Under the Curves (AUCs), we evaluate the diagnosis performance of AD on T1-weighted (T1) MRI scans, and brain metastases/various diseases (e.g., small infarctions, aneurysms) on contrast-enhanced T1 (T1c) MRI scans. Using $1,133$ healthy T1 and $135$ healthy T1c scans for training, our Self-Attention (SA) MADGAN approach can detect AD at a very early stage, Mild Cognitive Impairment (MCI), with AUC $0.727$, and AD at a late stage with AUC $0.894$, while detecting brain metastases with AUC $0.921$. \paragraph{Contributions.} Our main contributions are as follows: \begin{itemize} \item \textbf{MRI Slice Reconstruction:} This first multiple MRI slice reconstruction approach can reliably predict the next $3$ slices from the previous $3$ ones only for unseen images similar to training data by combining SAGAN and $\ell _1$ loss. 
\item \textbf{Unsupervised Anomaly Detection:} This first unsupervised multi-stage anomaly detection reveals that, like physicians' way of performing a diagnosis, massive healthy data can aid early diagnosis, such as of MCI, while also detecting late-stage disease much more accurately by discriminating with $\ell _2$ loss. \item \textbf{Various Disease Diagnosis:} This first unsupervised various disease diagnosis can reliably detect the accumulation of subtle anatomical anomalies (e.g., AD), as well as hyper-intense enhancing lesions (e.g., brain metastases) on multi-sequence MRI scans. \end{itemize} \section*{Related work} \label{sec:RelatedWork} \subsection*{Alzheimer's disease diagnosis} \label{sec:ADdiagnosis} Even though the clinical, social, and economic impact of early AD diagnosis is of paramount importance \cite{arvanitakis2019diagnosis}---primarily associated with MCI detection \cite{moscoso2019prediction}---it generally relies on subjective assessment by physicians (e.g., neurologists, geriatricians, and psychiatrists). The diagnosis typically considers two characteristics: (\textit{i}) mesial temporal lobe atrophy (particularly hippocampus, entorhinal cortex, and perirhinal cortex) and (\textit{ii}) temporo-parietal cortical atrophy. Quantifying these structures is crucial for early AD diagnosis and its progression tracking \cite{desikan2009}. Moreover, morphometry-based markers, such as gray matter volume and cortical thickness, can play a key role in brain atrophy assessment \cite{ma2016}. Towards quantitative and reproducible approaches, many traditional supervised Machine Learning-based methods---which rely on handcrafted MRI-derived features---were proposed in the literature~\cite{salvatore2015,nanni2019}. In this context, diffusion-weighted MRI tractography enables reconstructing the brain's physical connections that can be subsequently investigated by complex network-based techniques.
Lella \textit{et al.} \cite{lella2020} employed the whole brain structural communicability as a graph-based metric to describe the AD-relevant brain connectivity disruption. This approach achieved performance comparable with classic Machine Learning models---namely, Support Vector Machines, Random Forests, and Artificial Neural Networks---in terms of classification and feature importance analysis. In the latest years, Deep Learning has achieved outstanding performance by exploiting multiple levels of abstraction and descriptive embeddings in a hierarchy of increasingly complex features \cite{lecun2015}: Liu \textit{et al.} devised a semi-supervised CNN to significantly reduce the need for labeled training data~\cite{liu2014early}; for clinical decision-making tasks, Suk \textit{et al.} integrated multiple sparse regression models (i.e., Deep Ensemble Sparse Regression Network)~\cite{suk2017}; Spasov \textit{et al.} proposed a parameter-efficient CNN with 3D separable convolutions, combining dual learning and a specific layer to predict the conversion from MCI to AD within $3$ years~\cite{spasov2019}; unlike CNN-based approaches, Parisot used a semi-supervised Graph Convolutional Network trained on a sub-set of labeled nodes with diagnostic outcomes to represent sparse clinical data~\cite{parisot2018}. However, to the best of our knowledge, no existing work has conducted fully unsupervised anomaly detection for AD diagnosis since capturing subtle anatomical differences between MCI and AD is challenging. \subsection*{Brain metastasis and various disease diagnosis} \label{sec:AbnormalityMRIdiagnosis} Along with neuro-degenerative diseases, MRI can also play a definite role in abnormality diagnosis.
Whereas advanced cancer screening, imaging, and therapeutics can improve oncological patients' survival and quality of life, brain metastases still remain major contributors to morbidity and mortality, especially for patients with lung cancer, breast cancer, or malignant melanoma~\cite{sacks2020}. To tackle this, previous computational methods have detected brain metastases in either a supervised~\cite{han2019CIKM,grovik2020} or a semi-automatic~\cite{rundo2018NC,rundo2018next} manner. Detecting other various diseases, such as cerebral aneurysms, hemorrhage, and infarctions, also remains challenging~\cite{miki2016,vert2017}. Therefore, as with brain metastases, researchers have mostly relied on supervised methods, especially CNN-based detection~\cite{zhou2019,conti2020,federau2020}. Recently, unsupervised anomaly segmentation methods have been applied to brain MRI datasets for detecting multiple sclerosis lesions~\cite{baur2018deep} and glioblastoma~\cite{zimmerer2019unsupervised}. However, it is difficult to directly compare our approach with such existing unsupervised anomaly detection methods on 3D medical images since we perform a whole-brain diagnosis (i.e., classification), instead of segmentation. \subsection*{Unsupervised medical anomaly detection} Unsupervised disease diagnosis is challenging because it requires estimating healthy anatomy's normative distributions only from healthy examples to detect outliers either in the learned feature space or from high reconstruction loss. The latest advances in Deep Learning, mostly GANs~\cite{goodfellow2014} and VAEs~\cite{kingma2013}, have allowed for the accurate estimation of the high-dimensional healthy distributions.
Except for discriminative-boundary-based approaches such as \cite{alaverdyan2020regularized}, almost all unsupervised medical anomaly detection studies have leveraged reconstruction: as pioneering research, Schlegl \textit{et al.} proposed AnoGAN to detect outliers in the learned feature space of a GAN~\cite{schlegl2017unsupervised}; the same authors then presented fast AnoGAN, which can efficiently map query images onto the latent space~\cite{Schlegl}; since reconstruction-based models often suffer from many false positives, Chen \textit{et al.} penalized large deviations between original/reconstructed images for glioma and stroke lesion detection on brain MRI~\cite{chen2020}. However, to the best of our knowledge, all previous studies are based on 2D/3D single image reconstruction, without considering the continuity between multiple adjacent slices. Moreover, no existing work has investigated how unsupervised anomaly detection is associated with disease stages, various (i.e., more than two types of) diseases, or multi-sequence MRI scans. \subsection*{Self-Attention GANs (SAGANs)} Zhang \textit{et al.} proposed SAGAN, which deploys an SA mechanism in the generator/discriminator of a GAN to learn global, long-range dependencies for diverse image generation~\cite{zhang2019self}; for further performance improvement, they suggested applying the SA modules to large feature maps. SAGANs have shown great promise in various tasks, such as human pose estimation~\cite{wang2019improving}, image colorization~\cite{sharma2019robust}, photo-realistic image de-quantization~\cite{zhang2020deep}, and large-scale image generation~\cite{brock2018large}. This SAGAN trend also applies to Medical Imaging to extract multi-level features for better super-resolution/denoising and lesion characterization: to mitigate the problem of thin slice thickness, Kudo \textit{et al}.
and Li \textit{et al.} applied SA modules to GANs on CT and MRI scans, respectively~\cite{kudo2019virtual,li2020super}; similarly, in \cite{lan2020sc}, the authors proposed to fuse plane SA modules and depth SA modules for low-dose 3D CT denoising; Lan \textit{et al.} synthesized multi-modal 3D brain images using an SA conditional GAN~\cite{lan2020sc}; Ali \textit{et al.} incorporated SA modules into the progressive growing of GANs to generate realistic and diverse skin lesion images for data augmentation~\cite{ali2019data}. However, to the best of our knowledge, no existing work has directly exploited the SAGAN for medical disease diagnosis. \section*{Materials and methods} \subsection*{Datasets} \label{sec:datasets} \subsubsection*{AD dataset: OASIS-3} \label{sec:OASIS} We use a longitudinal $3.0$T MRI dataset of $176 \times 240$/$176 \times 256$ T1 brain axial MRI slices containing both normal aging subjects and AD patients, extracted from the Open Access Series of Imaging Studies-3 (OASIS-3)~\cite{lamontagne2018oasis}. The $176 \times 240$ slices are zero-padded to reach $176 \times 256$ pixels. Relying on the Clinical Dementia Rating (CDR)~\cite{Morris}, a common clinical scale for the staging of dementia, the subjects comprise: \begin{itemize} \item Unchanged CDR $= 0$: Cognitively healthy population; \item CDR $= 0.5$: Very mild dementia\ ($\sim$ MCI); \item CDR $= 1$: Mild dementia; \item CDR $= 2$: Moderate dementia. \end{itemize} Since our dataset is longitudinal and the same subject's CDRs may vary (e.g., CDR $= 0$ to CDR $= 0.5$), we only use scans with unchanged CDR $= 0$ to ensure certainly healthy scans. As CDRs are not always assessed simultaneously with the MRI acquisition, we label each MRI scan with the CDR assessed at the closest date.
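For illustration only (not the authors' actual code), the two preprocessing choices above, zero-padding $176 \times 240$ slices to $176 \times 256$ pixels and labeling each scan with the temporally closest CDR, can be sketched as follows; the helper names and the centered padding are our assumptions:

```python
import numpy as np

def pad_slice(slice_2d, target_width=256):
    """Zero-pad a 176x240 slice to 176x256 pixels.
    Whether the padding is centered is our assumption."""
    h, w = slice_2d.shape
    pad = target_width - w
    return np.pad(slice_2d, ((0, 0), (pad // 2, pad - pad // 2)))

def closest_cdr(scan_date, cdr_assessments):
    """Label a scan with the CDR assessed at the closest date.
    `cdr_assessments` is a list of (date, cdr) pairs with ordinal dates."""
    return min(cdr_assessments, key=lambda dc: abs(dc[0] - scan_date))[1]
```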
We only select brain MRI slices including the hippocampus, amygdala, and ventricles among the whole $256$ axial slices per scan, to avoid over-fitting to AD-irrelevant information; the atrophy of the hippocampus, amygdala, and cerebral cortex, as well as enlarged ventricles, are strongly associated with AD, and thus they mainly affect the AD classification performance of Machine Learning~\cite{Ledig}. Moreover, we discard low-quality MRI slices. The remaining dataset is divided as follows: \begin{itemize} \item Training set: Unchanged CDR $= 0$ ($408$ subjects/$1,133$ scans/$57,834$ slices); \item Test set: Unchanged CDR $= 0$ ($168$ subjects/$473$ scans/$24,278$ slices),\\CDR $= 0.5$ ($152$ subjects/$253$ scans/$13,813$ slices),\\CDR $= 1$ ($90$ subjects/$135$ scans/$7,532$ slices),\\CDR $= 2$ ($6$ subjects/$10$ scans/$500$ slices). \end{itemize} The same subject's scans are included in the same dataset. The datasets are strongly biased towards healthy scans, as in clinical routine MRI inspection. During training for reconstruction, we only use the training set---structural MRI alone---containing healthy slices to conduct unsupervised learning. We do not use a validation set since our unsupervised diagnosis step is non-trainable. \subsubsection*{Brain metastasis and various disease dataset} \label{sec:dataset2} This paper also uses a non-longitudinal, heterogeneous $1.5$T/$3.0$T MRI dataset of $190 \times 224$/$216 \times 256$/$256 \times 256$/$460 \times 460$ T1c brain axial MRI slices. This dataset was collected by the authors at the National Center for Global Health and Medicine, Tokyo, Japan, and is currently not publicly available due to ethical restrictions. The dataset contains healthy subjects, brain metastasis patients \cite{rundo2018NC}, and patients with various diseases different from brain metastases. The slices are resized to $176 \times 256$ pixels.
The various diseases include, but are not limited to: \begin{itemize} \item Small infarctions; \item Aneurysms; \item Benign tumors; \item Hemorrhages; \item Cysts; \item White matter lesions; \item Post-operative inflammations. \end{itemize} As with the T1 slices, we only select T1c slices including the hippocampus, amygdala, and ventricles---a large portion of the various diseases also appears in the mid-brain. The remaining dataset is divided as follows: \begin{itemize} \item Training set: Normal ($135$ subjects/$135$ scans/$7,793$ slices); \item Test set: Normal ($58$ subjects/$58$ scans/$3,353$ slices),\\Brain Metastases ($79$ subjects/$79$ scans/$4,872$ slices),\\Various Diseases ($66$ subjects/$66$ scans/$4,195$ slices). \end{itemize} Since we cannot collect large-scale T1c scans from healthy subjects as in the OASIS-3 dataset, during training for reconstruction we use both the T1 and T1c training sets containing healthy slices simultaneously, for knowledge transfer. In clinical practice, T1c MRI is well-established for detecting various diseases, including brain metastases \cite{arvold2016updates}, thanks to its high contrast in the enhancing region---however, the contrast agent is not suitable for screening studies. Accordingly, such inter-sequence knowledge transfer is valuable in computer-assisted MRI diagnosis. During testing, we make an unsupervised diagnosis on T1 and T1c scans separately. \subsection*{MADGAN-based multiple adjacent brain MRI slice reconstruction} \label{sec:MRIrecon} To model the strong consistency in healthy brain anatomy (Fig.~\ref{fig1}), in each scan we reconstruct the next $3$ MRI slices from the previous $3$ ones using an image-to-image GAN (e.g., if a scan includes $40$ slices $s_i$ for $i=1,\dots,40$, we reconstruct all $35$ possible setups: $(s_i)_{i\in\{1,2,3\}} \mapsto (s_i)_{i\in\{4,5,6\}}$; $(s_i)_{i\in\{2,3,4\}} \mapsto (s_i)_{i\in\{5,6,7\}}$; \dots; $(s_i)_{i\in\{35,36,37\}} \mapsto (s_i)_{i\in\{38,39,40\}}$).
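The enumeration of input/target slice sets can be sketched with a hypothetical helper (not the authors' implementation); for a 40-slice scan it yields exactly the 35 setups listed above:

```python
def slice_reconstruction_pairs(n_slices, k=3):
    """All (input, target) index tuples for one scan: slices
    (i, ..., i+k-1) predict slices (i+k, ..., i+2k-1), 1-based."""
    return [(tuple(range(i, i + k)), tuple(range(i + k, i + 2 * k)))
            for i in range(1, n_slices - 2 * k + 2)]
```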
As Fig.~\ref{fig2} shows, our MADGAN uses a U-Net-like~\cite{Ronneberger,RundoUSEnet} generator with $4$ convolutional layers in the encoder and $4$ deconvolutional layers in the decoder, connected by skip connections, as well as a discriminator with $3$ decoders. We apply batch normalization to both the convolutions with Leaky Rectified Linear Units (ReLUs) and the deconvolutions with ReLUs. Between the designated convolutional/deconvolutional layers and batch normalization layers, we apply SA modules~\cite{zhang2019self} for effective knowledge transfer $via$ feature recalibration between T1 and T1c slices; as confirmed on four different image datasets~\cite{kimura2020adversarial}, introducing SA modules into GAN-based anomaly detection (i.e., attention-driven, long-range dependency modeling) can also mitigate the effect of noise by ignoring irrelevant disturbances and focusing on the salient body parts in the slice. We compare MADGAN models with different numbers of SA modules: (\textit{i}) no SA modules (i.e., MADGAN); (\textit{ii}) 3 (red-contoured) SA modules (i.e., 3-SA MADGAN); (\textit{iii}) 7 (red- and blue-contoured) SA modules (i.e., 7-SA MADGAN). To confirm how the reconstructed slices' realism and anatomical continuity affect medical anomaly detection, we also compare MADGAN models with different loss functions: (\textit{i}) WGAN-GP loss + 100 $\ell _1$ loss (i.e., MADGAN); (\textit{ii}) WGAN-GP loss (i.e., MADGAN w/o $\ell _1$ loss). The $\ell _1$ and $\ell _2$ losses between an input image $x$ and its reconstructed image $x'$ are defined as follows: \begin{align} \ell _1 &= \sum_{i=1}^{P} |x_i - x'_i|,\\ \ell _2 &= \sum_{i=1}^{P} (x_i - x'_i)^2, \end{align} where $P$ denotes the number of pixels. \paragraph*{Implementation details} Each MADGAN training lasts for $1.8 \times 10^{6}$ steps with a batch size of $16$ (our maximum available batch size). We use a learning rate of $2.0 \times 10^{-4}$ for the Adam optimizer~\cite{kingma2014}.
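For reference, the two losses defined above amount to sums of absolute and squared pixel differences over the $P$ pixels; a minimal NumPy sketch of ours, for illustration:

```python
import numpy as np

def l1_loss(x, x_rec):
    """l1: sum of absolute pixel differences over the P pixels."""
    return float(np.abs(x - x_rec).sum())

def l2_loss(x, x_rec):
    """l2: sum of squared pixel differences over the P pixels."""
    return float(((x - x_rec) ** 2).sum())
```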
As with RGB images, we concatenate $3$ adjacent grayscale slices into $3$ channels. During training, the generator uses two dropout~\cite{srivastava2014dropout} layers with a rate of $0.5$. We flip the discriminator’s real/synthetic labels once every three times for robustness. Using 4 NVIDIA Quadro GV100 graphics processing units, we implement the framework in TensorFlow 1.8. \subsection*{Unsupervised medical anomaly detection} \label{sec:UnsupADdiagnosis} During diagnosis, we classify unseen healthy and abnormal scans based on the average $\ell _2$ loss per scan. The average $\ell _2$ loss is calculated from all MADGAN-reconstructed $3$-slice sets of each scan containing $n$ slices: $(s_i)_{i\in\{4,5,6\}}$; $(s_i)_{i\in\{5,6,7\}}$; \dots; $(s_i)_{i\in\{n-2,n-1,n\}}$. We use the $\ell _2$ loss since squared error is sensitive to outliers and it significantly outperformed other losses (i.e., $\ell _1$ loss, Dice loss, Structural Similarity loss) in our preliminary paper~\cite{han2020CIBB}. To evaluate the unsupervised AD diagnosis performance on a T1 MRI test set, we show ROCs---along with the AUC values---between CDR $= 0$ \textit{vs} (\textit{i}) all the other CDRs; (\textit{ii}) CDR $= 0.5$; (\textit{iii}) CDR $= 1$; (\textit{iv}) CDR $= 2$. We also show the AUCs under different training steps (i.e., $150$k, $300$k, $600$k, $900$k, $1.8$M steps) and confirm the effect of how the average $\ell _2$ loss per scan is calculated (among whole slice sets or among the $10$ continuous slice sets exhibiting the highest loss); if the $10$ slice sets start from the $j$-th slice, we use: $(s_i)_{i\in\{j,j+1,j+2\}}$; $(s_i)_{i\in\{j+1,j+2,j+3\}}$; \dots; $(s_i)_{i\in\{j+9,j+10,j+11\}}$. Moreover, we visualize the pixelwise $\ell _2$ loss between real/reconstructed $3$-slice sets, along with the distributions of the average $\ell _2$ loss per scan for CDR $= 0/0.5/1/2$, to understand how disease stages affect discrimination.
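The two ways of aggregating the per-set $\ell_2$ losses into a single score per scan can be sketched as follows; the helpers are hypothetical, and the sliding-window variant assumes NumPy $\geq 1.20$:

```python
import numpy as np

def scan_score_all(set_losses):
    """Average l2 loss over all reconstructed 3-slice sets of a scan."""
    return float(np.mean(set_losses))

def scan_score_top_window(set_losses, width=10):
    """Average l2 loss over the `width` continuous slice sets
    exhibiting the highest loss (sliding window over consecutive sets)."""
    losses = np.asarray(set_losses, dtype=float)
    if losses.size <= width:
        return float(losses.mean())
    windows = np.lib.stride_tricks.sliding_window_view(losses, width)
    return float(windows.mean(axis=1).max())
```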
In exactly the same manner, we evaluate the diagnosis performance for brain metastases/various diseases on a T1c MRI test set, showing ROCs/AUCs between normal \textit{vs} (\textit{i}) brain metastases + various diseases; (\textit{ii}) brain metastases; (\textit{iii}) various diseases. \section*{Results} \label{sec:Results} \subsection*{Reconstructed brain MRI slices} \label{sec:MRIreconRes} Fig.~\ref{fig3} illustrates example real T1 MRI slices from a test set and their reconstruction by MADGAN and 7-SA MADGAN. Similarly, Figs.~\ref{fig4} and~\ref{fig5} show example real T1c MRI slices and their reconstructions. The pixelwise $\ell _2$ loss tends to increase (i.e., high intensity in the heatmap) around lesions because their image distribution differs from that of healthy samples. Figs.~\ref{fig6} and~\ref{fig7} indicate the distributions of the average $\ell _2$ loss per scan on T1 and T1c scans, respectively. Leveraging the $\ell _1$ loss's good realism at the cost of diversity (i.e., generalizing well only for unseen images with a distribution similar to the training images) and the WGAN-GP loss's ability to capture recognizable structure, MADGAN can successfully capture T1-specific appearance and anatomical changes from the previous $3$ slices. Meanwhile, 7-SA MADGAN tends to be less stable in keeping texture but more sensitive to abnormal anatomical changes thanks to the SA modules' anomaly-sensitive reconstruction $via$ attention-driven, long-range dependency modeling, resulting in a moderately higher average $\ell _2$ loss than MADGAN. Since the models are trained only on healthy slices, as visualized by a superimposed Jet colormap, the reconstruction of slices with higher CDRs tends to comparatively fail, especially around the hippocampus, amygdala, cerebral cortex, and ventricles, whose atrophy is insufficiently reproduced after reconstruction; this is plausible because physicians also perform the AD diagnosis based on prior knowledge of the normal atrophy of those body parts.
Apart from these, we do not find other significant reconstruction failures, considering that inter-subject/sequence variability also leads to considerable reconstruction failures. The T1c scans show a much lower average $\ell _2$ loss than the T1 scans due to their darker texture. Since most training images are T1 slices with a brighter texture than the T1c slices, the reconstruction quality clearly decreases on the T1c slices, occasionally exhibiting bright texture. Accordingly, reconstruction failure from anomaly contributes comparatively less to the average $\ell _2$ loss, especially when local small lesions, such as brain abscesses and enhanced lesions, appear---unlike global big lesions, such as multiple cerebral infarctions and blood component retention. However, the average $\ell _2$ loss remarkably increases on brain metastasis scans due to their hyper-intensity, especially for 7-SA MADGAN. \subsection*{Unsupervised anomaly detection results} \label{sec:ADdiagnosisRes} Figs.~\ref{fig8} and~\ref{fig9} show the AUCs of unsupervised anomaly detection on T1 and T1c scans under different training steps, respectively. The AUCs generally increase as training progresses, but more SA modules require more training steps until convergence due to their feature recalibration. Although most models show convergence after $900$k steps, MADGAN with abundant SA modules might perform even better, especially on the T1c scans with less training data than the T1 scans, if training were continued.
All the best results on specific tasks, except for CDR $= 0$ \textit{vs} CDR $= 0.5$, are from the SA models (e.g., 7-SA MADGAN w/o $\ell _1$ loss under 900k steps: AUC $0.783$ in CDR $= 0$ \textit{vs} CDR $= 0.5 + 1 + 2$, 3-SA MADGAN under 300k steps: AUC $0.966$ in normal \textit{vs} brain metastases, 3-SA MADGAN under 600k steps: AUC $0.638$ in normal \textit{vs} various diseases); thus, although the SA models perform unstably---being unsupervised, they do not know which task to optimize---we might use them similarly to supervised learning if we could obtain good parameters for a certain disease. Without the $\ell _1$ loss, the AUCs tend to decrease and show large fluctuations; 7-SA MADGAN w/o $\ell _1$ loss performs well on the T1 scans but poorly on the T1c scans due to this instability. Figs.~\ref{fig10} and~\ref{fig11} illustrate the ROC curves and their AUCs on T1 and T1c scans under $1.8$M training steps, respectively. Since brains with higher CDRs exhibit stronger anatomical atrophy relative to healthy brains, the AUCs against unchanged CDR $= 0$ remarkably increase as CDRs increase. MADGAN and 7-SA MADGAN both achieve good AUCs, especially for higher CDRs---MADGAN obtains AUC $0.750/0.707/0.829$ in CDR $= 0$ \textit{vs} CDR $= 0.5/1/2$, respectively; the discrimination between healthy subjects \textit{vs} MCI patients (i.e., CDR $= 0$ \textit{vs} CDR $= 0.5$) is extremely difficult even in a supervised manner~\cite{Ledig}. Whereas detecting various diseases is difficult in an unsupervised manner, 7-SA MADGAN outperforms MADGAN and achieves AUC $0.921$ in brain metastasis detection. As Tables~\ref{tab:CDR} and~\ref{tab:T1c} show, the effect of how the average $\ell _2$ loss is calculated per scan (among whole slice sets or among the continuous $10$ slice sets exhibiting the highest loss) is limited. Whereas no significant differences exist between them, the best performing approach on each dataset is always the one based on whole slice sets.
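Given the per-scan anomaly scores, the reported AUCs require no threshold; a minimal rank-based sketch of ours, equivalent to the Mann-Whitney U statistic (a library routine such as scikit-learn's `roc_auc_score` should give the same value):

```python
import numpy as np

def auc_from_scores(healthy_scores, abnormal_scores):
    """ROC AUC: the probability that a random abnormal scan scores
    higher than a random healthy scan, with ties counting 1/2."""
    h = np.asarray(healthy_scores, dtype=float)
    a = np.asarray(abnormal_scores, dtype=float)
    greater = (a[:, None] > h[None, :]).sum()
    ties = (a[:, None] == h[None, :]).sum()
    return float((greater + 0.5 * ties) / (a.size * h.size))
```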
\section*{Discussion and conclusions} \label{sec:Conclusion} Using massive healthy data, our MADGAN-based multiple MRI slice reconstruction can, for the first time, reliably discriminate AD patients from healthy subjects in an unsupervised manner; to detect the accumulation of subtle anatomical anomalies, our solution leverages a two-step approach: (\textit{Reconstruction}) the $\ell _1$ loss generalizes well only for unseen images with a distribution similar to the training images, while the WGAN-GP loss captures recognizable structure; (\textit{Diagnosis}) the $\ell _2$ loss clearly discriminates healthy/abnormal data since the squared error becomes huge for outliers. Using $1,133$ healthy T1 MRI scans for training, our approach can detect AD at a very early stage, MCI, with AUC $0.727$, while detecting late-stage AD with AUC $0.894$. Accordingly, this first unsupervised anomaly detection across different disease stages reveals that, like physicians' way of performing a diagnosis, large-scale healthy data can reliably aid early diagnosis, such as of MCI, while also detecting late-stage disease much more accurately. To confirm its ability to also detect various other diseases, even on scans of a different MRI sequence, we are the first to investigate how unsupervised medical anomaly detection is associated with various diseases and with multi-sequence MRI scans. Due to the different texture of T1/T1c slices, the reconstruction quality clearly decreases on the data-sparse T1c slices, and thus reconstruction failure from anomaly contributes comparatively less to the average $\ell _2$ loss. Nevertheless, we generally succeed in identifying which diseases are hard and which are easy to detect in an unsupervised manner: it is hard to detect local small lesions, such as brain abscesses and enhanced lesions; but it is easy to detect hyper-intense enhancing lesions, such as brain metastases (AUC $0.921$), especially for 7-SA MADGAN thanks to its feature recalibration.
Our visualization of the differences between real/reconstructed slices might play a key role in understanding and preventing various diseases, including rare diseases. Since we are the first to propose a two-step unsupervised anomaly detection approach based on multiple slice reconstruction, its limitations are two-fold: both the reconstruction and the diagnosis steps are not yet fully generalizable. As future work, we will investigate more suitable SA modules for the reconstruction model, such as the Dual Attention Network, which captures feature dependencies in both the spatial and channel dimensions~\cite{fu2019dual}; here, optimizing where to place how many SA modules is the most relevant aspect. We will validate combining new loss functions for both reconstruction and diagnosis, including sparsity regularization~\cite{zhou2020sparse}, structural similarity~\cite{haselmann2018anomaly}, and perceptual loss~\cite{tuluptceva2020anomaly}. Lastly, we plan to collect a larger number of healthy T1c scans to reliably detect and locate various diseases, including cancers and rare diseases. Integrating multi-modal imaging data, such as Positron Emission Tomography with specific radiotracers~\cite{rundoCMPB2017}, might further improve disease diagnosis~\cite{brier2016}, even when the analyzed modalities are not always available~\cite{li2014multimodal}. Moreover, to characterize the detected anomalies, we might extend this work to supervised learning with limited pathological data by discriminating normal/pathological image distributions during diagnosis, instead of calculating the average $\ell _2$ loss per scan.
\begin{backmatter} \section*{List of abbreviations used} Area Under the Curve: AUC, AutoEncoder: AE, Alzheimer's Disease: AD, Clinical Dementia Rating: CDR, Contrast-enhanced T1-weighted: T1c, Convolutional Neural Network: CNN, Computed Tomography: CT, Generative Adversarial Network: GAN, Magnetic Resonance Imaging: MRI, Medical Anomaly Detection Generative Adversarial Network: MADGAN, Mild Cognitive Impairment: MCI, Open Access Series of Imaging Studies-3: OASIS-3, Receiver Operating Characteristic: ROC, Rectified Linear Unit: ReLU, Self-Attention: SA, T1-weighted: T1, Variational AutoEncoder: VAE, Wasserstein loss with Gradient Penalty: WGAN-GP. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Authors' contributions} Conceived the idea: CH, LR, ZAM, KM. Designed the code: CH, LR, ZAM. Collected the T1c dataset: TN. Implemented the code: CH. Performed the experiments: CH. Analyzed the results: CH, LR. Wrote the manuscript: CH, LR. Critically read the manuscript and contributed to the discussion of the whole work: KM, TN, ZAM, YS, SK, ES, HN, SS. \section*{Acknowledgements} This research was partially supported both by AMED Grant Number JP18lk1010028 and The Mark Foundation for Cancer Research and Cancer Research UK Cambridge Centre [C9685/A25177]. Additional support has been provided by the National Institute of Health Research (NIHR) Cambridge Biomedical Research Centre. Zolt\'{a}n \'{A}d\'{a}m Milacski was supported by Grant Number VEKOP-2.2.1-16-2017-00006. The OASIS-3 dataset has Grant Numbers P50 AG05681, P01 AG03991, R01 AG021910, P50 MH071616, U24 RR021382, and R01 MH56584. \bibliographystyle{bmc-mathphys}
\section{Introduction} \label{intro} Ulcerative colitis and Crohn's disease represent the two main types of inflammatory bowel disease (IBD). Both are relapsing diseases and may present similar symptoms, including long-term inflammation in the digestive system; however, they are quite different: ulcerative colitis affects only the large intestine and the rectum, whereas Crohn's disease can affect the entire gastrointestinal tract from the mouth to the anus. Typical presentations of Crohn's disease include the discontinuous involvement of various portions of the gastrointestinal tract and the development of complications including strictures, abscesses, or fistulas that compromise deep layers of the tissue, while ulcerative colitis remains superficial but presents no healthy areas between inflamed spots. There is consensus now that IBD results from an unsuitable response of a deficient mucosal immune system to the indigenous flora and other luminal antigens, due to alterations of the epithelial barrier functions. We propose in this paper a simplified mathematical model aiming to recreate the immune response triggering inflammation. In the particular case of Crohn's disease, we seek to understand the patchy inflammatory patterns that differentiate patients suffering from this illness from those who have been diagnosed with ulcerative colitis. IBD can be seen as an example of the acute inflammatory response of body tissues caused by harmful stimuli, such as the presence of pathogenic germs or damaged cells. This protective response is also associated with the origin of other well-known diseases, such as rheumatoid arthritis, the inflammatory phase in diabetic wounds, or tissue inflammation, and has been extensively studied. Today it is still of central interest for researchers and, although several models have been proposed in order to understand the causes that lead to acute inflammation, the mathematical approach to this topic remains a recent field of research.
A very complete review on the subject is provided in \cite{Vodovotz_2006,Vodovotz_2004}. Among the mathematical works on inflammation we can refer to many models based on ordinary differential equations \cite{Day_2006,Dunster_2014,Herald_2010,Kumar_2004,Lauffenburger_1981,Mayer_1995,Reynolds_2006,Roy_2009,WENDELSDORF_2010}. Most of the authors take into account pro-inflammatory and anti-inflammatory mediators, but also pathogens and other more or less realistic physiological variables. Depending on the parameters and the initial data, these models manage to reproduce a variety of scenarios that can be observed experimentally and clinically: for example, the case in which the host can eliminate the infection, other situations in which the immune system cannot keep the disease under control, or cases where the existence of oscillatory solutions determines a chronic cycle of inflammation. Most of the conclusions in the referenced papers are the result of a stability analysis of the equilibrium states and of numerical analysis of the simulations by phase portrait methods. In addition, in \cite{Day_2006,Kumar_2004,Roy_2009,WENDELSDORF_2010} a sensitivity analysis of the variables with respect to the parameters of the models is performed in order to adjust the numerical results to experimental data and achieve greater biological fidelity of the model. Several authors have also considered spatial heterogeneity in order to model the inflammatory response; we can mention \cite{Khatib_2007,Khatib_2011,Ibragimov_2006} in the particular case of atherogenesis, \cite{Lauffenburger_1983,Penner_2012} in the tissue inflammation context, and \cite{Chalmers_2015,Sullivan_2006} for the acute inflammatory response. The main variables of the models introduced in the mentioned works vary according to the dynamics that the authors wish to describe; the density of phagocytic cells, pro-inflammatory cytokines, anti-inflammatory mediators, and bacteria are some standard quantities that are often taken into account.
As in the ordinary differential equations approach, the stability of the systems is systematically studied; in \cite{Chalmers_2015,Consul_2014,Khatib_2007,Lauffenburger_1981} a vast analysis of all possible scenarios is performed depending on the values of the model parameters, and the authors provide biological interpretations of such behavior as well as numerical simulations; furthermore, in \cite{Khatib_2011} the existence of travelling wave solutions is proved to be at the origin of a chronic inflammatory response. A different approach is presented in \cite{Penner_2012}: the model introduced in this paper aims to explain mathematically the patterns observed in the skin due to acute inflammation in the absence of specific pathogenic stimuli. By analyzing the stability of homogeneous and non-homogeneous states, sufficient conditions leading to the existence of such pattern solutions are obtained; several numerical examples are given as well. Similarly, in \cite{Lauffenburger_1983} the authors claim that the instability of the uniform steady distribution of phagocytic cells might trigger non-uniform cell density distributions, which is potentially dangerous since tissue damage may occur in regions of high cell concentration. In this sense some sufficient conditions are given in order to prevent the existence of such unstable states; these conditions primarily involve the phagocyte random motility coefficient and a chemotaxis coefficient included in the model. As suggested by \textit{in vitro} studies, phagocytic cells (big eaters) may move following a chemotactic impulse generated by the presence of pathogenic germs; for this reason most of the authors cited above include the effect of chemotaxis by means of the classical term first introduced by Patlak in 1953 and by Keller and Segel in 1970 \cite{KellerSegel,Patlak_1953}.
Nevertheless, there is no consensus on this assumption; as noted in \cite{Lauffenburger_1983}, \textit{in vivo} observations more often show that the phagocytes seem to move randomly within an infected lesion, as is the case in the models introduced in \cite{Consul_2014,Khatib_2007,Khatib_2011}. In the present paper, we propose a mechanism leading to patterns which does not rely on chemotactism. We think the inflammatory response could be modeled by an activator-inhibitor system. Such systems are known to produce the Turing mechanism, that is, periodic stationary solutions. This could possibly explain the patchy nature of Crohn's disease. \section{The model} We propose here a reaction-diffusion system modelling the dysfunctional immune response that triggers IBD. As mentioned in the introduction, this kind of system has attracted much interest as a prototype model for pattern formation; in this case we refer in particular to inflammatory patterns. Roughly speaking, the first line of defense of the mucosal immune system is the epithelial barrier, which is a polarized single layer covered by mucus in which commensal microbes are embedded. Lowered epithelial resistance and increased permeability of the inflamed and non-inflamed mucosa are systematically observed in patients with Crohn's disease and ulcerative colitis; hence the epithelial barrier gets leaky and luminal antigens gain access to the underlying mucosal tissue. In a healthy gut, the immune response by means of intestinal phagocytes eliminates the external agents, limiting the inflammatory response in the gut. Unfortunately, in a diseased state the well-controlled balance of the intestinal immune system is disturbed at all levels; this dysfunctional mechanism contributes to acute and chronic inflammatory processes.
Indeed, an excessive amount of immune cells migrating to the damaged zone can compromise the permeability of the epithelial barrier and thus might allow further infiltration of microbiota, which aggravates inflammation. This complex network triggers the initiation of an inflammatory cascade that causes ulcerative colitis and Crohn's disease, see Fig.~\ref{scheme}. \begin{figure} [p] \begin{center} \begin{tikzpicture}[scale=2.5] \draw [domain=-1.5:1.5,blue,samples=100,dashed] plot(\x,{1/sqrt(0.7*2*3.14)*exp(-(\x)^2/(2*0.7))}); \draw [domain=-1:1,red,samples=100] plot(\x,{1/sqrt(0.2*2*3.14)*exp(-(\x)^2/(2*0.2))}); \fill [green, opacity=1,pattern=crosshatch dots] (-2,-0.075) rectangle (-0.075,0); \fill [green, opacity=1,pattern=crosshatch dots] (0.1,0) rectangle (2,-0.075); \draw [->,red] (0.5,-0.5) to[bend left,thick] (0.05,0); \draw [->,red] (-0.5,-0.5) to[bend right,thick] (-0.02,0); \draw (0,-0.5) node[below,fill=white] {\tiny \color{red} bacteria}; \draw (0.6,1) node[above] {\tiny \color{blue} phagocytes}; \draw (2.05,-0.04) node[right] {\tiny epithelium}; \draw [->, blue] (0.6,1) to[thick] (0.35,0.7); \draw [->, blue] (-0.6,1) to[thick] (-0.35,0.7); \draw (1.6,-0.25) node[below] {Lumen}; \end{tikzpicture} \subcaption{Bacteria (red line) break through the epithelium (dotted zone); phagocytes (blue dashed line) are recruited in order to neutralize them} \begin{tikzpicture}[scale=2.5] \draw [domain=-2:2,red,samples=200] plot(\x,{1/(1.5*sqrt(0.02*2*3.14))*exp(-(\x)^2/(2*0.02))+1/(3*sqrt(0.07*2*3.14))*exp(-(\x-1.4)^2/(2*0.07))+1/(3*sqrt(0.07*2*3.14))*exp(-(\x+1.4)^2/(2*0.07))}); \draw [domain=-2:2,blue,samples=100,dashed] plot(\x,{1.3/sqrt(2*3.14)*exp(-(\x)^2/2)}); \fill [green, opacity=1,pattern=crosshatch dots] (-2,-0.075) rectangle (-0.075,0); \fill [green, opacity=1,pattern=crosshatch dots] (0.1,0) rectangle (2,-0.075); \draw [->,red] (0.5,-0.5) to[bend left,thick] (0.05,0); \draw [->,red] (-0.5,-0.5) to[bend right,thick] (-0.02,0); \draw (0,-0.5)
node[below,fill=white] {\tiny \color{red} bacteria}; \draw (2.05,-0.04) node[right] {\tiny epithelium}; \draw [->, blue] (0.6,0.4) to[thick] (0.6,0.15); \draw [->, blue] (-0.6,0.4) to[thick] (-0.6,0.15); \draw [->, red] (0,1.5) to[thick] (0,1.8); \draw (1.6,-0.25) node[below] {Lumen}; \end{tikzpicture} \subcaption{Phagocytes spread rapidly through blood vessels. A high spot of bacteria remains, with a lateral inhibition by phagocytes} \begin{tikzpicture}[scale=2.5] \draw [domain=-2:2,red,samples=200] plot(\x,{1/(1.5*sqrt(0.02*2*3.14))*exp(-(\x)^2/(2*0.02))+1/(2.5*sqrt(0.02*2*3.14))*exp(-(\x-1.4)^2/(2*0.02))+1/(2.5*sqrt(0.02*2*3.14))*exp(-(\x+1.4)^2/(2*0.02))}); \draw [domain=-2:2,blue,samples=100,dashed] plot(\x,{1/(2*sqrt(0.15*2*3.14))*exp(-(\x)^2/(2*0.15))+1/(3*sqrt(0.15*2*3.14))*exp(-(\x-1.4)^2/(2*0.15))+1/(3*sqrt(0.15*2*3.14))*exp(-(\x+1.4)^2/(2*0.15))}); \fill [green, opacity=1,pattern=crosshatch dots] (-2,-0.075) rectangle (-0.075,0); \fill [green, opacity=1,pattern=crosshatch dots] (0.1,0) rectangle (2,-0.075); \draw [->,red] (0.5,-0.5) to[bend left,thick] (0.05,0); \draw [->,red] (-0.5,-0.5) to[bend right,thick] (-0.02,0); \draw (0,-0.5) node[below,fill=white] {\tiny \color{red} bacteria}; \draw (2.05,-0.04) node[right] {\tiny epithelium}; \draw [->, red] (1.4,0.7) to[thick] (1.4,1); \draw [->, red] (-1.4,0.7) to[thick] (-1.4,1); \draw (1.6,-0.25) node[below] {Lumen}; \end{tikzpicture} \subcaption{Other spots appear} \end{center} \caption {Initiation of the inflammatory process} \label{scheme} \end{figure} For the sake of simplicity, in this model we will consider just two components varying in time and space: {\footnotesize 1.} the number of non-resident bacteria leaking into the intestinal tissue through the epithelial barrier, denoted by $\beta$ and also referred to as microbiota, pathogens or antigens, and {\footnotesize 2.} the immune cells $\gamma$, which we will often refer to as phagocytic cells.
Also, for simplicity, we model a portion of the digestive tube as a very large interval $\Omega\subset \mathbb{R}$ of the real axis. The model reads: \begin{equation}\label{Modelo}\left\{ \begin{array}{lcl} \partial_{t} \beta - d_{b}\Delta \beta &=& r_{b}\left(1-\frac{\beta}{b_i} \right)\beta -\frac{a \beta \gamma}{s_b+\beta} +f_e \left(1-\frac{\beta}{b_i}\right)\gamma,\\ \partial_{t} \gamma - d_{c}\Delta \gamma &=& f_{b} \beta -r_c \gamma.\\ \end{array}\right. \end{equation} We complete the system with Neumann boundary conditions and initial data $\beta(0,x)=\beta_0(x)$ and $\gamma(0,x)=\gamma_0(x)$ for all $x \in \Omega$. During the immune response there is a first stage, where the non-resident phagocytes migrate from the vasculature into the intestinal mucosa, and a second stage, where they move to the damaged zone and fight the bacteria. The first stage results from transport through the blood vessels and is almost instantaneous compared to the second one, so we omit it in this simplified model. Another main assumption is that immune cells and bacteria move randomly through the damaged tissue and the epithelial barrier. As mentioned in the introduction, it is generally accepted that diffusion provides an adequate description of molecular spreading but, in the case of phagocytic cells, chemotaxis is claimed to be crucial in establishing the direction of movement along the pathogen gradient. However, there are \textit{in vivo} experiments that corroborate our hypothesis \cite{Lauffenburger_1983} and several authors have made similar assumptions \cite{Consul_2014,Khatib_2007,Khatib_2011}. Nevertheless, by neglecting chemotaxis in our model we do not claim that it is an unimportant phenomenon; instead, this assumption must be seen as a simplification and an idealization of the physiological mechanism we seek to describe. The coefficients $d_b>0$ and $d_c>0$ are the diffusion rates of bacteria and phagocytes, respectively.
The parameter $r_b>0$ is associated with the reproduction rate of bacteria. In healthy conditions the number of bacteria within the lumen remains almost constant and they are not able to penetrate the epithelial barrier; we associate this quantity with the parameter $b_i>0$. We remark that the parameter $b_i$ is in some sense a carrying capacity; in fact, in the total absence of the epithelial barrier, the maximum amount of bacteria in the colon would not be greater than $\beta=b_i$, which is the reason why we add the logistic term $1-\frac{\beta}{b_i}$ in the first equation, \cite{Verhulst_1845,Perthame_2015}. The parameter $f_b>0$ is associated with the immune response rate of the organism, sending cells to fight bacteria in the damaged zones. In other words, as soon as the presence of pathogens is detected, phagocytes are recruited. The term $-\frac{a \beta \gamma}{s_b+\beta}$ with $a>0$ and $s_b>0$ corresponds to the effect of the immune system on the pathogenic agents. In particular, $\frac{a \beta }{s_b+\beta}$ is the phagocytosis rate or intake rate; it expresses that the attack rate of immune cells on bacteria varies with the density of pathogens. This functional response term takes into account the rate $p_{c}$ at which phagocytes encounter a bacterium per unit of bacterial density, which is $p_{c}:=\frac{a}{s_b}$, and the average time $\tau$ that it takes a phagocyte to neutralize a bacterium (or handling time), which can be computed as $\tau:=\frac 1{a}$. Experiments presented in \cite{Leijh_1980,Stossel_1973} reflect this dynamic. In the mathematical literature such a term is often referred to as a Holling Type II functional response, see \cite{holling_1965,Perthame_2015}. We consider $f_e>0$ as a measure of the negative effect of the phagocyte concentration on the epithelial resistance, which therefore has a positive impact on the bacteria density, i.e.
the larger the epithelial gap, the more bacteria there are, the more immune cells drift to the damaged zone, the more porous the epithelium becomes, and so on. Finally, a self-regulation function of anti-inflammatory cells limits their life-time, so immune cells have an intrinsic death rate, denoted in the model by $r_c>0$. \section{On Turing Patterns} Since one of our main interests in this paper is to explain the patchy inflammatory bowel patterns often observed in patients suffering from Crohn's disease, we seek to demonstrate that the model we propose may present Turing-type instabilities under certain conditions. This denomination is due to Alan Turing, who was the first to describe spatial patterns caused by the effects of diffusion in his article on morphogenesis theory published in 1952, \cite{Turing_1952}. Roughly speaking, a Turing system consists of an activator that must diffuse at a much slower rate than an inhibitor in order to produce a pattern. We remind the reader that diffusion causes areas of high concentration to spread out to areas of low concentration. In such systems the activator component must increase the production of itself while the inhibitor restrains the production of both. Turing's analysis shows that in certain regimes those systems are unstable to small perturbations, leading to the growth of large-scale patterns. In the model introduced above, bacteria are the activator and the immune cells the inhibitor: indeed, bacteria reproduce at a certain rate $r_b$, while immune cells neutralize bacteria by phagocytosis (Holling-type term) and self-regulate their own life-time through $r_c$. In practice, we look for steady state solutions of equation (\ref{Modelo}) which are linearly unstable, i.e. such that there are perturbations for which the linearized system has exponentially growing solutions in time.
To be sure that a Turing-type phenomenon is occurring it is important to exclude the cases where the corresponding growth modes are unbounded, that is, solutions with infinitely high frequencies, and also the cases in which solutions blow up or go to extinction \cite{Perthame_2015}. In Section \ref{Stability and Turing patterns} we study the conditions leading to the observation of Turing phenomena in our model. \section{Results} \subsection{Non-negativity property and boundedness} We begin by establishing some elementary properties of the model to guarantee that system (\ref{Modelo}) is meaningful as a population dynamics model. In other words, it is important that whenever the initial data have a reasonable biological meaning, the solution of the differential equation inherits that property. We start with a non-negativity property: \begin{prop}\label{positivite} Provided that the initial condition $(\beta_0(x),\gamma_0(x))$ is non-negative, the solutions of the system (\ref{Modelo}) remain non-negative for every $t>0$. \end{prop} Similarly, we establish a boundedness property associated with the carrying capacity of the population environment: \begin{prop}\label{bornes} If $\beta_0(x)<b_i$ then for all $t>0$ one has $\beta(t,x)<b_i$. Moreover, if $\gamma_0(x)$ is bounded, then $\gamma(t,\cdot)$ remains bounded in the $L^2$-norm in $\Omega$ for every $t>0$. \end{prop} \subsection{Stability analysis} \label{Stability and Turing patterns} Let us now study the steady states of the model and their stability properties. The system (\ref{Modelo}) has two non-negative homogeneous steady states. One of them is the trivial solution $(\beta,\gamma)=(0,0)$, associated with the absence of bacteria and immune cells.
The other one, that we denote $(\beta,\gamma)=(\overline{\beta},\overline{\gamma})$, satisfies: \begin{equation}\label{caracterizacion} 0=\left(r_{b}+f_e \kappa \right)\left(1-\frac{\overline{\beta}}{b_i}\right) -\frac{a \kappa \overline{\beta}}{s_b+\overline{\beta}}, \end{equation} where $\kappa:=\frac{f_b}{r_c}$ and $\overline{\gamma}=\kappa\overline{\beta}$. We remark that $(0,0)$ is unstable. Indeed, the linearized matrix around this steady state has negative determinant and thus an eigenvalue with positive real part. For the non-trivial equilibrium point $(\overline{\beta},\overline{\gamma})$ the stability analysis is less straightforward. The following proposition establishes the conditions leading to the stability of this steady state. \begin{prop}\label{ODE_linear_stability} Consider the O.D.E system associated with (\ref{Modelo}) with non-negative real parameters $a, r_b, r_c, f_b, f_e, b_i$ and $s_b$, \begin{equation}\label{Modelo_ODE}\left\{ \begin{array}{lcl} \displaystyle{\partial_{t} \beta} &=& \displaystyle{r_{b}\left(1-\frac{\beta}{b_i}\right)\beta -\frac{a \beta \gamma}{s_b+\beta} +f_e \left(1-\frac{\beta}{b_i}\right)\gamma}\\ \displaystyle{\partial_{t} \gamma } &=&\displaystyle{ f_{b} \beta -r_c \gamma}\\ \end{array}\right.. \end{equation} This system has a unique positive steady state solution $\big( \beta(t),\gamma(t)\big)=(\overline{\beta},\overline{\gamma})$ which is stable if and only if \begin{equation} \label{C2} \frac{a\kappa \overline{\beta}^2}{(s_b+\overline{\beta})^2} - r_b \frac{\overline{\beta}}{b_i} - f_e\kappa < r_c. \end{equation} \end{prop} We conjecture that the model might show some unexpected behavior around this steady state which could be at the origin of patchy inflammatory patterns. 
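Both the equilibrium and the stability threshold can be checked numerically. The following Python sketch is our own illustration (not the authors' code): it locates $\overline{\beta}$ by bisection on the characterization (\ref{caracterizacion}), using the parameter values collected in Table \ref{parameters}, and evaluates the quantity appearing on the left of (\ref{C2}).

```python
import math

# Parameter values from Table 1 of the paper (estimated later in the text)
r_b, r_c = 0.0347, 0.02
f_b, f_e = 0.002, 0.0856
a, s_b, b_i = 0.3129, 1e15, 1e17
kappa = f_b / r_c  # at equilibrium, gamma_bar = kappa * beta_bar

def F(beta):
    """Right-hand side of the equilibrium characterization for beta_bar."""
    return (r_b + f_e * kappa) * (1 - beta / b_i) - a * kappa * beta / (s_b + beta)

# F(0) > 0, F(b_i) < 0 and F is strictly decreasing, so bisection finds
# the unique positive steady state beta_bar in (0, b_i).
lo, hi = 0.0, b_i
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
beta_bar = 0.5 * (lo + hi)

theta = beta_bar / b_i
M11 = a * kappa * beta_bar**2 / (s_b + beta_bar)**2 - r_b * theta - f_e * kappa

print(theta)          # close to 0.3, matching the assumption beta_bar = 0.3*b_i
print(0 < M11 < r_c)  # stability threshold satisfied with room for instability
```

With these values $M_{11}\approx 0.010$, strictly between $0$ and $r_c=0.02$, which is exactly the regime exploited for Turing instabilities below.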
Hence, let us focus on the conditions leading to the formation of Turing patterns for the reaction-diffusion system (\ref{Modelo}), that is, perturbations around the steady state $(\overline{\beta},\overline{\gamma})$ such that the linearized system has exponential growth in time and for which the corresponding growth modes are bounded. The following proposition establishes the necessary conditions for the occurrence of such a phenomenon. \begin{prop}\label{Turing_patterns} Consider the system (\ref{Modelo}) and its unique positive homogeneous steady state solution $(\overline{\beta},\overline{\gamma})$, and write $\theta:=\frac{\overline{\beta}}{b_i}$; assume that there exist real non-negative values of the parameters $a, r_b, r_c, s_b, f_e, f_b, b_i$ such that the following condition holds: \begin{equation}\label{Conditions} 0 < \frac{a\kappa \overline{\beta}^2}{(s_b+\overline{\beta})^2} - r_b \theta - f_e\kappa < r_c. \end{equation} Then for $\frac{d_b}{d_c}$ small enough the reaction-diffusion system (\ref{Modelo}) shows Turing instabilities around this steady state. \end{prop} \section{Parameters of the model} In this section we estimate the values of the parameters of the model and prove the non-emptiness of the parameter set defined by (\ref{Conditions}). As far as possible we rely on values obtained from real observations or in vitro experiments. However, in some cases the exact values are unknown due to the difficulty of measuring them in vivo or even in vitro. Let us start with an estimation of the reproduction rate of the bacteria, represented in our model as $r_b$. A bacterium's generation time, which is the time it takes the population to double in size, may vary from 12 minutes to several hours depending on temperature, nutrients, culture medium, among other factors. For \textit{E. coli}, for instance, it is around 20 minutes in standard conditions, \cite{Korem_2015}.
We can then consider that the evolution of the bacteria population is given by $\partial_t b = r_b b$, so that $r_b = \frac{\ln(2)}{20}$ measured in bacteria per minute. That gives the approximate value $r_b = 3.47\times 10^{-2}$ $u/min$, which is in the estimated range of values given in \cite{Lauffenburger_1983} for this parameter. Similarly, it is known that in healthy conditions phagocytes have, on average, a half-life of two days \cite{Labro_2000}, and so from $\partial_t c = -r_c c$ we get $r_c = \frac{\ln(2)}{2880}$ cells per minute, which means that the death rate of phagocytes is ideally of the order of $10^{-4}$ $u/min$; this coincides with the value considered in \cite{Waugh2007} for immune cells in diabetic wounds and in \cite{Lauffenburger_1983} for bacterial infection causing tissue inflammation. However, there is no consensus: some authors assume this parameter to be of the order of $10^{-3}$ $u/min$ in the inflammatory response framework \cite{Chow_2005}, or even of the order of $10^{-6}$ $u/min$ in the case of early atherosclerosis \cite{Chalmers_2015}. For such parameters, corresponding to a healthy organism, we do not expect to observe Crohn's disease. Indeed, the mechanism we describe below occurs with $r_c=2\times 10^{-2}$ $u/min$ (see Table \ref{parameters}). For $r_c=10^{-3}$ $u/min$, the range of parameters for which a Turing pattern occurs is quite narrow, see Fig.~\ref{figura3}. The diffusion coefficient of immune cells might also vary according to the type of cell and the part of the body where they act. In the consulted literature the value of this parameter varies from $10^{-12}$ $m^2/min$ to $10^{-10}$ $m^2/min$ depending on the context \cite{Consul_2014,Khatib_2011,Lauffenburger_1983,Stickle1985}.
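The two elementary rate computations above (growth rate from a doubling time, death rate from a half-life) can be reproduced in two lines; this small Python check is only an arithmetic verification of the stated orders of magnitude.

```python
import math

# Doubling time of ~20 min for E. coli gives the bacterial growth rate:
r_b = math.log(2) / 20            # per minute

# A phagocyte half-life of two days (2880 min) gives the healthy death rate:
r_c_healthy = math.log(2) / 2880  # per minute

print(round(r_b, 4))   # about 0.0347, i.e. 3.47e-2 u/min
print(r_c_healthy)     # about 2.4e-4, of order 1e-4 u/min
```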
In the absence of experimental data providing more precise information about the order of this parameter in the particular case of bacterial infection in the intestinal tract, we consider this coefficient to remain within this range in damaged areas of the intestine. Although there is no precise information concerning the diffusion rate of bacteria through the epithelial barrier, it is known that in aqueous solutions like the lumen the diffusion rate may vary from $10^{-11} $ $m^2/min$ to $10^{-8} $ $m^2/min$ depending on the type of bacteria. However, in a non-liquid framework, which is the case of bacteria penetrating through the epithelial barrier, motility should be reduced. We will now roughly compute a value for the parameter $a$. We suppose that there is a significant density of bacteria at a certain position $x=x_0$, and we study the time evolution of the population at this point. If $\beta$ is large enough, the term $1-\frac{\beta}{b_i}$ is negligible; moreover the term $-\frac{a\beta\gamma}{s_b+\beta}$ approaches $-a \gamma$, so we can approximately write \begin{equation} \label{approximacion_de_alpha} \partial_{t}\beta(t,x_0)=-a \gamma(t,x_0). \end{equation} Let us now define $\tau$ as the average time it takes a phagocyte to neutralize a bacterium, which is around 3 minutes in the in vitro observations; this implies that \begin{equation} \beta(t+\tau,x_0)=\beta(t,x_0)-\gamma(t,x_0) \end{equation} and consequently $\partial_{t}\beta(t,x_0)\approx -\frac{\gamma(t,x_0)}{\tau}$. Substituting this into (\ref{approximacion_de_alpha}) we arrive at the conclusion that $a$ is of the order of $\frac{1}{\tau}$ units per minute. The density of bacteria in the lumen is approximately $b_i = 10^{17}$ $u/{m^3}$.
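The two limiting regimes of the Holling Type II intake term introduced earlier can also be checked numerically. The following Python sketch (our own illustration, with the values of $a$ and $s_b$ fixed in Table \ref{parameters}) verifies the linear regime of slope $p_c=a/s_b$ at low bacterial density and the saturation at $a=1/\tau$ at high density.

```python
def holling_type_ii(beta, a=0.3129, s_b=1e15):
    """Per-phagocyte intake rate a*beta/(s_b+beta) (Holling Type II)."""
    return a * beta / (s_b + beta)

a, s_b = 0.3129, 1e15
p_c = a / s_b   # encounter rate per unit of bacterial density
tau = 1.0 / a   # handling time, in minutes

# Low-density regime: the rate is approximately linear with slope p_c.
beta_small = 1e10
assert abs(holling_type_ii(beta_small) - p_c * beta_small) / (p_c * beta_small) < 1e-4

# High-density regime: the rate saturates at a = 1/tau.
beta_large = 1e20
assert abs(holling_type_ii(beta_large) - a) / a < 1e-4
```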
At the positive equilibrium stage $(\overline{\beta},\overline{\gamma})$, which is associated with an inflammatory phase, we suppose that around 30$\%$ of the total density of bacteria within the lumen might penetrate the epithelial barrier without going out of control. Therefore, we set $\overline{\beta}=0.3\times b_i $ units of bacteria. Even though we have no exact data concerning the density of immune cells in the damaged zone, the in vitro experiments suggest that during the inflammation stage it is around ten times smaller than the bacteria density; this is quite natural considering that the size of a phagocyte is much larger than the size of a bacterium. Hence, we make the hypothesis that $\kappa=\frac{1}{10}$, which means that at the equilibrium point it holds that $\overline{\beta} = 10\overline{\gamma}$ and consequently $\overline{\gamma}= 3\cdot 10^{-2} \times b_i$. Taking this into account, from the equilibrium condition we have that $f_b = 10^{-1} r_c$ measured in units per minute. The parameter $f_e$ is finally computed so that (\ref{caracterizacion}) holds at the equilibrium state. \section{Numerical simulations} We perform numerical simulations in MATLAB by means of a semi-implicit scheme to solve the system of equations (\ref{Modelo}); the results are shown in Fig. \ref{figura2}. We have considered the parameter values presented in Table \ref{parameters}, which were estimated in the previous section. For these values, the condition (\ref{Conditions}) associated with the occurrence of a Turing phenomenon, established in Proposition \ref{Turing_patterns}, is satisfied. However, there is a whole family of parameters satisfying (\ref{Conditions}), as shown in Fig. \ref{figura3}.
\begin{figure}[t] \includegraphics[width=0.95\textwidth]{Fig2.eps} \caption{Bacteria (red line) and phagocytes (blue dashed line) after a time-lapse of 2 weeks with an initial bacterial infection $\beta_0(x)=10^{15}\times\mathbb{I}_{\{1495 \leq x \leq 1505\}}$ (yellow line) and $\gamma_0(x)=0$}\label{figura2} \end{figure} \begin{table}[b] \caption{Assigned values for the parameters of the model (\ref{Modelo})} \label{parameters} \centering \begin{tabular}{llll} \hline\noalign{\smallskip} parameter & interpretation & value & units \\ \noalign{\smallskip}\hline\noalign{\smallskip} $r_b$ & Reproduction rate of bacteria & 0.0347 & \tiny{(u/min)}\\ $r_c$ & Intrinsic death rate of phagocytes & 0.02 & \tiny{(u/min)} \\ $d_b$ & Diffusion rate of bacteria & $10^{-13}$ & \tiny{($m^2$/min)}\\ $d_c$ & Diffusion rate of phagocytes & $10^{-10}$ & \tiny{($m^2$/min)}\\ $b_i$ & Density of bacteria in the lumen & $10^{17}$ & \tiny{(u/$m^3$)} \\ $f_b$ & Immune response rate & 0.002 & \tiny{(u/min)}\\ $a$ & Coefficient proportional to the rate of phagocytosis ($a=s_b p_{c}$) \ \ & 0.3129 & \tiny{(u/min)} \\ & it is also inversely proportional to the handling time ($a=\frac 1{\tau}$) & & \\ $s_b$ & Proportionality coefficient between $p_{c}$ and $a$ & $10^{15}$ & \tiny{(u/$m^3$)}\\ $f_e$ & Related to the porosity of the epithelium & 0.0856 & \tiny{(u/min)} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} For the simulations we have considered an initial datum with no phagocytes presence and a tiny spot of bacteria concentrated in the middle of the domain $\Omega$. This might be understood as a slight leak of bacteria from the lumen through the epithelium. The activator-inhibitor dynamics generated by the body's immune response to the presence of bacteria and the contrast in the propagation rates of the two actors of the system is the reason why the patterns emerge in Fig. \ref{figura2} after a certain time. This behavior is definitively associated with a Turing phenomenon. 
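The semi-implicit strategy described above is straightforward to reproduce. The following Python sketch is our own re-implementation (not the authors' MATLAB code): reaction terms are treated explicitly, diffusion implicitly through a tridiagonal Thomas solve with homogeneous Neumann boundary conditions. The grid spacing, time step and diffusion values in the short run at the end are illustrative rescalings, not the physical ones.

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system by the Thomas algorithm."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(beta, gamma, dt, dx, db, dc, p):
    """One semi-implicit step: reactions explicit, diffusion implicit."""
    r_b, r_c, f_b, f_e, a, s_b, b_i = p
    n = len(beta)

    def diffuse(u, d):
        lam = d * dt / dx ** 2
        sub, sup = [-lam] * n, [-lam] * n
        diag = [1 + 2 * lam] * n
        diag[0] = diag[-1] = 1 + lam  # homogeneous Neumann (zero-flux) ends
        return thomas(sub, diag, sup, u)

    beta_star = [b + dt * (r_b * (1 - b / b_i) * b - a * b * g / (s_b + b)
                           + f_e * (1 - b / b_i) * g)
                 for b, g in zip(beta, gamma)]
    gamma_star = [g + dt * (f_b * b - r_c * g) for b, g in zip(beta, gamma)]
    return diffuse(beta_star, db), diffuse(gamma_star, dc)

# Illustrative short run: a central spot of bacteria, no phagocytes initially.
params = (0.0347, 0.02, 0.002, 0.0856, 0.3129, 1e15, 1e17)
beta = [1e15 if 20 <= i <= 30 else 0.0 for i in range(50)]
gamma = [0.0] * 50
for _ in range(100):
    beta, gamma = step(beta, gamma, 0.1, 1.0, 1e-3, 1.0, params)
```

Consistently with Propositions \ref{positivite} and \ref{bornes}, the discretized solution stays non-negative and below $b_i$ on such a run; observing the patterns of Fig. \ref{figura2} requires the physical scaling and the longer (two-week) time horizon used in the paper.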
\begin{figure} \includegraphics[width=0.95\textwidth]{Fig3.eps} \caption{Region (blue) defined by the parameters $r_c$ and $a$ that verify condition (\ref{Conditions}) leading to the observation of Turing patterns} \label{figura3} \end{figure} We remark that the values we assign to the diffusion coefficients remain within the range estimated in the previous section. However, from the mathematical point of view, what really matters in order to ensure the conditions leading to the observation of Turing patterns is the smallness of the ratio $\delta = \frac{d_b}{d_c}$. Changing those values while preserving $\delta$ only represents a spatial rescaling that does not affect the pattern formation. \section{Proof of the results} \subsection*{Proof of the Proposition \ref{positivite}} \begin{proof} Consider $\overline{t}>0$ the first instant when either $\beta(t,x)$ or $\gamma(t,x)$ becomes non-positive; then for some $x^*\in \Omega$ one has $\beta(\overline{t},x^*) \gamma(\overline{t},x^*)=0$. If $\beta(\overline{t},x^*)=0$ and $\gamma(\overline{t},x^*)\geq 0$ then, since $\beta_0(x)>0$, there exists $\delta_t>0$ such that \begin{equation} \forall t\in [\overline{t}-\delta_t,\overline{t}] \ \hbox{one has} \ \partial_{t}\beta(t,x^*)<0. \end{equation} Nevertheless, from the first equation in (\ref{Modelo}) one has $\partial_{t}\beta(\overline{t},x^*)=f_e \gamma(\overline{t},x^*)\geq 0$, which contradicts the previous conclusion. Similarly, if $\beta(\overline{t},x^*)\geq 0$ and $\gamma(\overline{t},x^*)= 0$, from the positivity assumption on the initial data we can conclude the existence of $\delta_t>0$ such that \begin{equation} \forall t\in [\overline{t}-\delta_t,\overline{t}] \ \hbox{one has} \ \partial_{t}\gamma(t,x^*)<0. \end{equation} Again from the second equation in (\ref{Modelo}) one has $\partial_{t}\gamma(\overline{t},x^*)=f_b \beta(\overline{t},x^*)\geq 0$, which is a contradiction.
\end{proof} \subsection*{Proof of the Proposition \ref{bornes}} \begin{proof} The argument of this proof is similar to the one used to prove the non-negativity property. Indeed, consider $\overline{t}$ the first instant when $\beta$ reaches the value $b_i$; then there exists $x^*\in \Omega$ such that $\partial_t \beta(\overline{t},x^*)>0$. Nevertheless, from the equation associated with $\beta$ one concludes that $\partial_t \beta(\overline{t},x^*)=-\frac{a \beta(\overline{t},x^*)\gamma(\overline{t},x^*)}{s_b+\beta(\overline{t},x^*)}<0$ by the positivity property. So we get a contradiction, which implies that for all $t>0$ one necessarily has $\beta(t,x)<b_i$. The boundedness of $\gamma$ follows directly from the boundedness of $\beta$ and $\gamma_0$. In fact, multiplying the second equation of (\ref{Modelo}) by $\gamma$, integrating by parts and applying the H\"older inequality, one gets that \begin{equation} \frac{1}{2}\partial_{t} \|\gamma\|^2_{L^2(\Omega)}+\|\nabla\gamma\|^2_{L^2(\Omega)}\leq \left( f_b\|\beta\|^2_{L^2(\Omega)}-r_c\right)\|\gamma\|^2_{L^2(\Omega)} \end{equation} whence, after applying the Gronwall inequality, one concludes that there exists a positive constant $C=C(t)$ such that $\|\gamma(t)\|^2_{L^2(\Omega)}<C\|\gamma_0\|^2_{L^2(\Omega)}$. \end{proof} \subsection*{Proof of the Proposition \ref{ODE_linear_stability}} \begin{proof} The existence of such a positive steady state follows from the analysis of (\ref{caracterizacion}). Let us define $F(\beta)=\left(r_{b}+f_e \kappa \right)\left(1-\frac{\beta}{b_i}\right) -\frac{a \kappa \beta}{s_b+\beta}$. From the positivity of the parameters of the model we have that $F(0)>0$ and $F(b_i)<0$; this means that there is at least one positive value $\overline{\beta}\in(0,b_i)$ that satisfies $F(\overline{\beta})=0$, or equivalently (\ref{caracterizacion}). Moreover, since the derivative of $F$ is strictly negative, we deduce that it has at most one root, which leads to the uniqueness of $\overline{\beta}$.
Let us now study the conditions leading to the stability of this steady state. In order to simplify the notations we will define $\theta:=\frac{\overline{\beta}}{b_i}$. We also define $\mathbf{M}$ as the matrix of the linearized system around this positive steady state $(\overline{\beta},\overline{\gamma})$ \[ \mathbf{M}:= \begin{pmatrix} \displaystyle{r_b(1-2\theta)-\frac{a s_b \kappa \overline{\beta}}{(s_b+ \overline{\beta})^2}-f_e \kappa \theta} \ & \ \displaystyle{-\frac{a \overline{\beta}}{s_b + \overline{\beta}} + f_e(1-\theta)} \\ \displaystyle{f_b} & -r_c \end{pmatrix}. \] We compute the determinant and the trace of this matrix \begin{eqnarray*} tr(\mathbf{M}) &=& \frac{a\kappa \overline{\beta}^2}{(s_b+\overline{\beta})^2} - r_b \theta - f_e\kappa - r_c,\\ det(\mathbf{M})&=& r_cr_b \theta + f_bf_e\theta + \frac{af_bs_b \overline{\beta}}{(s_b + \overline{\beta})^2}.\\ \end{eqnarray*} From the positivity of the parameters of the model it is clear that the determinant of $\mathbf{M}$ is positive, therefore in order to have linear stability around $(\overline{\beta},\overline{\gamma})$ it is necessary and sufficient to impose the negativity of the trace of $\mathbf{M}$ which is equivalent to (\ref{C2}). \end{proof} \subsection*{Proof of the Proposition \ref{Turing_patterns}} \begin{proof} We linearize the system around $(\overline{\beta},\overline{\gamma})$. For the sake of simplicity we keep the notation $\beta(t,x),\gamma(t,x)$ for the linearized variables \begin{equation}\left\{ \begin{array}{lcl} \partial_t \beta - d_b \Delta \beta &=& \bigg(r_b(1-2\theta)-\frac{a\kappa s_b \overline{\beta}}{(s_b + \overline{\beta})^2}-f_e \kappa \theta \bigg) \beta + \bigg(-\frac{a \overline{\beta}}{s_b + \overline{\beta}} + f_e(1-\theta)\bigg)\gamma \\ \partial_t \gamma - d_c \Delta \gamma &=& f_b \beta -r_c \gamma \end{array}\right. 
\end{equation} We seek in particular solutions with exponential growth in time, so we consider \begin{equation}\label{descomposicion} \beta(t,x)=e^{\lambda t}B(x) \ ; \ \gamma(t,x)=e^{\lambda t}C(x) \end{equation} with $\lambda > 0$. This means that $B(x)$ and $C(x)$ should satisfy the following problem \begin{equation}\label{linear_system}\left\{ \begin{array}{lcl} - d_b \Delta B(x) &=& \bigg(r_b(1-2\theta)-\frac{a\kappa s_b \overline{\beta}}{(s_b + \overline{\beta})^2}-f_e \kappa \theta - \lambda \bigg) B(x) + \bigg(-\frac{a \overline{\beta}}{s_b + \overline{\beta}} + f_e(1-\theta)\bigg) C(x) \\ - d_c \Delta C(x) &=& f_b B(x) +(-r_c -\lambda) C(x) \end{array}\right. \end{equation} or equivalently that they are eigenfunctions associated with the positive eigenvalue $\lambda$. We consider in particular Fourier modes of the form \[B(x)=Be^{i\xi x} \quad ; \ C(x)=Ce^{i \xi x}, \] and we replace them in (\ref{linear_system}) to obtain the following homogeneous linear system of equations \[ \begin{pmatrix} 0 \\ 0 \end{pmatrix}= \begin{pmatrix} r_b(1-2\theta)-\frac{a\kappa s_b \overline{\beta}}{(s_b + \overline{\beta})^2}-f_e \kappa \theta - \lambda -d_b \xi^2 \ & \ -\frac{a \overline{\beta}}{s_b + \overline{\beta}} + f_e(1-\theta) \\ f_b & -r_c - \lambda - d_c \xi^2 \end{pmatrix}\begin{pmatrix} B \\ C \end{pmatrix} \] Let us call $\mathbf{M}_{\lambda,\xi}$ the matrix associated with the previous linear system. It can be written in terms of $\xi$, $\lambda$ and the matrix $\mathbf{M}$ introduced in the proof of Proposition \ref{ODE_linear_stability} \[ \mathbf{M}_{\lambda,\xi}= \begin{pmatrix} \displaystyle{\mathbf{M}_{(1,1)} - \lambda - d_b\xi^2} \ & \ \mathbf{M}_{(1,2)} \\ \mathbf{M}_{(2,1)} & \mathbf{M}_{(2,2)} - \lambda -d_c \xi^2 \end{pmatrix}. \] In other words, we look for a certain $\lambda$ with positive real part and $\xi^2$ for which $\det(\mathbf{M}_{\lambda,\xi})=0$.
The determinant of $\mathbf{M}_{\lambda,\xi}$ is a quadratic polynomial in $\lambda$ \begin{equation}\label{determinante} det(\mathbf{M}_{\lambda,\xi}) = \lambda^2 + a_1\lambda + a_2 \end{equation} with coefficients \begin{eqnarray*} a_1 &=& -tr(\mathbf{M}) +(d_b+d_c)\xi^2,\\ a_2 &=& \det(\mathbf{M}) - (\mathbf{M}_{(1,1)}d_c+\mathbf{M}_{(2,2)}d_b)\xi^2+d_bd_c\xi^4. \end{eqnarray*} Since the right-hand side inequality in (\ref{Conditions}) ensures that $tr(\mathbf{M})<0$, we conclude that $a_1>0$. Hence, the polynomial associated with $\det(\mathbf{M}_{\lambda,\xi})$ can have a positive root $\lambda$ if and only if $a_2<0$. The term $a_2$ is itself a quadratic polynomial in $\xi^2$ with positive leading coefficient. For the sake of simplicity we define $\delta:=\frac{d_b}{d_c}$ and study the sign of $\frac{a_2}{d_bd_c}$, whose roots are explicitly given by \begin{equation} \Lambda_{\pm}=\frac{\mathbf{M}_{(1,1)}+\delta \mathbf{M}_{(2,2)}}{2d_b}\bigg[1 \pm \sqrt{1-\frac{4\det(\mathbf{M}) \delta}{(\mathbf{M}_{(1,1)}+\delta \mathbf{M}_{(2,2)})^2}} \bigg]. \end{equation} In the regime where $\delta$ is small enough, a Taylor expansion gives the approximate values \begin{eqnarray} \Lambda_-&=&\frac{\det(\mathbf{M})}{d_c \mathbf{M}_{(1,1)}}\\ \Lambda_+&=&\frac{\mathbf{M}_{(1,1)}}{d_c \delta} \end{eqnarray} The left-hand side inequality in (\ref{Conditions}) guarantees that $\Lambda_+$ is positive and, since $\delta$ can be taken as small as desired, $\Lambda_+\gg 1$ and the interval $(\Lambda_-,\Lambda_+)$ where $a_2$ is negative is large enough. In other words, there exists a positive real $\lambda$ and Fourier modes for which $\det(\mathbf{M}_{\lambda, \xi})=0$ and consequently we can find solutions of the linearized system around the steady state $(\overline{\beta},\overline{\gamma})$ with exponential growth in time. However, the Fourier modes for which this condition holds are bounded.
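As a numerical sanity check on these sign conditions, the following Python sketch (our own, using the Table \ref{parameters} values, taking $\overline{\beta}=0.3\,b_i$ as in the parameter estimation, and the simplified expression of $\mathbf{M}_{(1,1)}$ obtained from the equilibrium relation) computes the band $(\Lambda_-,\Lambda_+)$ of squared wavenumbers where $a_2<0$.

```python
import math

# Table-1 parameter values; beta_bar = 0.3*b_i as estimated in the paper
r_b, r_c, f_b, f_e = 0.0347, 0.02, 0.002, 0.0856
a, s_b, b_i = 0.3129, 1e15, 1e17
d_b, d_c = 1e-13, 1e-10
kappa = f_b / r_c
beta_bar = 0.3 * b_i
theta = beta_bar / b_i

# Entries of the linearized matrix M at (beta_bar, gamma_bar);
# M11 uses the simplified form valid at the exact equilibrium.
M11 = a * kappa * beta_bar**2 / (s_b + beta_bar)**2 - r_b * theta - f_e * kappa
M12 = -a * beta_bar / (s_b + beta_bar) + f_e * (1 - theta)
M21, M22 = f_b, -r_c
tr, det = M11 + M22, M11 * M22 - M12 * M21

# a2(xi^2)/(d_b*d_c) = xi^4 - S*xi^2 + P, unstable band between its roots
S = M11 / d_b + M22 / d_c
P = det / (d_b * d_c)
disc = math.sqrt(S * S - 4 * P)
lam_minus, lam_plus = (S - disc) / 2, (S + disc) / 2

print(tr < 0 < det)              # ODE equilibrium is linearly stable
print(0 < lam_minus < lam_plus)  # non-empty band of Turing-unstable modes
```

Both printed checks hold for these values, so a finite band of unstable Fourier modes exists, as the proposition asserts.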
We have shown the existence of perturbations such that the linearized system has exponential growth in time. The frequency of the perturbations cannot be infinite and, by Propositions \ref{positivite} and \ref{bornes}, neither extinction nor blow-up is possible. Hence, we have finally proved the formation of Turing patterns. \end{proof} \section{Conclusions} This work remains a simplified approach to the question of modelling the inflammatory response in Crohn's disease. We have made several hypotheses with the aim of globally understanding the biological mechanism behind the abnormal body reaction leading to the disease, while staying relatively simple in terms of the number of variables and equations. Though we have tried to consider parameter values true to medical and biological observations, we emphasize the qualitative results over the quantitative ones. In this sense, obtaining a Turing mechanism through our model might explain the patchy inflammatory patterns observed in patients suffering from Crohn's disease, and must be interpreted as another step towards fully understanding this illness and its causes. A question remains concerning ulcerative colitis (RCH), since it shares several common factors with Crohn's disease but others set the two apart. It might be interesting to study the possibility of modelling RCH by means of the same system of equations in a different parameter regime, and eventually to find answers helping doctors with early diagnosis or treatments. \begin{acknowledgements} {This work would not have been possible without the support of the Inflamex Laboratory of Excellence and the Galilee PhD College, whom we sincerely thank. Also, a special thanks to Dr.
Xavier Treton for helping us understand inflammatory bowel diseases.} \end{acknowledgements} \paragraph{Declarations} \section*{Funding} {This work was supported by the Inflamex Laboratory of Excellence.} \section*{Conflict of interest} The authors declare that they have no conflict of interest. \bibliographystyle{apalike}
\section{Introduction} The {\em Riemann xi function}, $ \xi(s) := {\textstyle \frac 12} s(s-1)\pi^{-s/2} \G(s/2)\zeta(s), $ is entire of order $1$. It satisfies $\xi(1-s)=\xi(s)$ so that its development about the central point $1/2$ is \begin{equation} \label{xip} \xi(s) = \sum_{n=0}^\infty \frac{\xi^{(2n)}(1/2)}{(2n)!} (s-1/2)^{2n}. \end{equation} The function $\Xi(z)$ is defined as $\xi(1/2+i z)$. In our context it is useful to define another function $$\Theta(z):=\xi(1/2+\sqrt{z})$$ which is entire of order $1/2$. The Riemann hypothesis is equivalent to $\Xi(z)$ having only real zeros and to $\Theta(z)$ having only negative real zeros. The central point is now at $z=0$ and we write \begin{equation} \label{xi1} \Theta(z) = \sum_{m=0}^\infty \frac{\g(m)}{m!} z^{m}, \qquad \qquad \g(m) := \frac{m!}{(2m)!} \xi^{(2m)}(1/2). \end{equation} Note that $\xi^{(2m)}(1/2)$ is positive for all $m$ since, as seen in \e{mom2}, it may be expressed as the integral of a positive function. Hence any real zeros of $\Theta(z)$ are necessarily negative. We also see that $\Theta(z)$ is {\em real} which just means it maps $\R$ into $\R$. It is convenient to label a function as {\em hyperbolic} if all of its zeros are real. Jensen describes the results of his research into the zeros of functions in \cite{Je13}. For any real entire function $F(z)$ of genus at most $1$, he associated a family of polynomials, now called Jensen polynomials, and showed that they are all hyperbolic if and only if $F(z)$ is hyperbolic. He applied this idea to $\Xi(z)$ in \cite[p. 189]{Je13}, giving a criterion for the Riemann hypothesis, and developed further equivalent conditions for hyperbolicity in \cite{Je13} and unpublished work; see the discussion in \cite{Po27}. Following \cite{DL,GORZ} we define the {\em Jensen polynomial of degree $d$ and shift $n$} as \begin{equation} \label{jensen} J^{d,n}(X):=\sum_{j=0}^d \binom{d}{j}\g(n+j) X^j. 
\end{equation} This is associated to the $n$th derivative $\Theta^{(n)}(z)$, and as we review in Corollary \ref{cdo}, $\Theta^{(n)}(z)$ is hyperbolic if and only if $J^{d,n}(X)$ is hyperbolic for all $d \gqs 1$. Also, if $\Theta(z)$ is hyperbolic then all of its derivatives must be hyperbolic as well; see Corollary \ref{cdo2}. Hence we obtain the following extended criterion which has presumably been known since the time of Jensen. \begin{theorem} \label{rh} The Riemann hypothesis is true if and only if $J^{d,n}(X)$ is hyperbolic for all $d \gqs 1$, $n \gqs 0$. \end{theorem} Griffin, Ono, Rolen and Zagier revived interest in Theorem \ref{rh} when they showed in \cite{GORZ} that a great many of the Jensen polynomials $J^{d,n}(X)$ are hyperbolic. For every fixed $d$, \cite[Thm. 1]{GORZ} states that $J^{d,n}(X)$ is hyperbolic for all sufficiently large $n$. This is made more explicit in \cite{Ono} where their Theorem 1.1 shows that $J^{d,n}(X)$ is hyperbolic whenever $n \gg e^{8d/9}$. In these papers, the results are demonstrated by using precise asymptotics for $\g(m)$ to show that renormalized versions of $J^{d,n}(X)$ may be approximated by Hermite polynomials $H_d(X)$ as $n\to \infty$. We give a variant of Theorem \ref{rh} next by bringing in Hermite polynomials from the beginning. Set \begin{equation} \label{jen-herm} P^{d,n}(X):=\sum_{j=0}^d \binom{d}{j}\g(n+j) H_{d-j}(X). \end{equation} \begin{theorem} \label{hyp} The Riemann hypothesis is true if and only if $P^{d,n}(X)$ is hyperbolic for all $d \gqs 1$, $n \gqs 0$. \end{theorem} In fact, we see in Sect. \ref{jkl} that Theorem \ref{hyp} is a special case of a more general result where the Hermite polynomials in \e{jen-herm} may be replaced by any Jensen polynomial associated to an element of the Laguerre-P\'olya class. Checking that the zeros of $P^{d,n}(X)$ are real seems easier than for $J^{d,n}(X)$. 
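To make the comparison concrete, here is a small numerical sketch (ours, not part of the paper) of the case $d=2$: both hyperbolicity conditions reduce to the sign of a quadratic discriminant, and expanding $P^{2,n}$ in the monomial basis shows that its condition is strictly weaker. The values \texttt{g0, g1, g2} stand in for $\g(n)$, $\g(n+1)$, $\g(n+2)$.

```python
# Sketch (ours, not from the paper): the case d = 2.  Here g0, g1, g2 stand in
# for gamma(n), gamma(n+1), gamma(n+2); all are positive rationals.
from fractions import Fraction
from random import randint

# Hermite coefficient lists: H_0 = 1, H_1 = 2X, H_2 = 4X^2 - 2
H0 = [Fraction(1)]
H1 = [Fraction(0), Fraction(2)]
H2 = [Fraction(-2), Fraction(0), Fraction(4)]

for _ in range(500):
    g0, g1, g2 = (Fraction(randint(1, 10**6)) for _ in range(3))
    # P^{2,n}(X) = g0*H_2(X) + 2*g1*H_1(X) + g2*H_0(X), in the monomial basis:
    p = [g0 * H2[0] + g2 * H0[0], 2 * g1 * H1[1], g0 * H2[2]]
    disc_P = p[1] ** 2 - 4 * p[2] * p[0]
    # J^{2,n}(X) = g0 + 2*g1*X + g2*X^2
    disc_J = (2 * g1) ** 2 - 4 * g2 * g0
    # the two discriminant identities, checked exactly over Q:
    assert disc_P == 16 * (g1 ** 2 + 2 * g0 ** 2 - g0 * g2)
    assert disc_J == 4 * (g1 ** 2 - g0 * g2)
    if disc_J >= 0:        # J^{2,n} hyperbolic forces P^{2,n} hyperbolic
        assert disc_P >= 0
```

So for $d=2$ the polynomial $P^{2,n}$ is hyperbolic under a visibly weaker inequality than the one governing $J^{2,n}$.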
Combining the asymptotics of $\g(m)$ with a tailor-made theorem of Tur\'an (that was also employed in \cite{Ono}) gives the next result directly, showing that $P^{d,n}(X)$ has only real zeros for all but a relatively small number of shifts $n$. \begin{theorem} \label{chem} For all $d$ sufficiently large, $P^{d,n}(X)$ is hyperbolic whenever $n/\log^2 n \gqs d^{3/4}/2$. \end{theorem} Comparing $J^{d,n}(X)$ and $P^{d,n}(X)$ for specific $d$, $n$, we will see in Corollary \ref{poly-herm2} that $J^{d,n}(X)$ being hyperbolic implies that $P^{d,n}(X)$ is hyperbolic. Hence the results of \cite{GORZ,Ono} also apply to $P^{d,n}(X)$. Chasse, in \cite[Sect. 3]{Cha13}, proved that $J^{d,n}(X)$ is hyperbolic\footnote{This range was extended in \cite[Cor. 1.3]{Ono} and may be further extended to $d \lqs 9\times 10^{24}$ using \cite{PT}.} for all $n\gqs 0$ and $d \lqs 2\times 10^{17}$, and so the same is true for $P^{d,n}(X)$. As an example, it is easy to see that $J^{2,n}(X)$ is hyperbolic if and only if \begin{equation} \label{tu} \g(n+1)^2 \gqs \g(n)\g(n+2). \end{equation} This is the Tur\'an inequality, necessary for the Riemann hypothesis, and first proved for all $n \gqs 0$ in \cite{Cso86}. However, $P^{2,n}(X)$ is hyperbolic if and only if \begin{equation} \label{tu2} \g(n+1)^2 + 2\g(n)^2 \gqs \g(n)\g(n+2), \end{equation} and clearly \e{tu} implies \e{tu2} but not the other way around. The key ingredient in the proof of Theorem \ref{chem} is the asymptotic expansion of the coefficients $\g(n)$ in \e{xi1} for large $n$. This is shown in \cite[Thm. 9, Eq. (14)]{GORZ} with an application of Laplace's method. We give a more precise version of this result by including the usual error estimates and giving formulas for all the coefficients. We also confirm a suggestion of Romik in \cite[Sect. 6.1]{Rom} that the answer can be conveniently expressed in terms of $W(2n/\pi)$, with $W$ the Lambert function.
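Since $W$ appears in all of the asymptotic formulas below, we note that it is easy to compute without special-function libraries. A minimal sketch (ours, not part of the paper), assuming only the defining relation $W(x)e^{W(x)}=x$ and Newton's method:

```python
# Sketch (ours, not from the paper): the principal branch of the Lambert
# function W on [0, infinity) via Newton iteration on f(w) = w*e^w - x.
import math

def lambert_w(x, tol=1e-14):
    """Return w >= 0 with w * exp(w) = x, for x >= 0."""
    if x == 0.0:
        return 0.0
    w = math.log(x + 1.0)                # starting point above the root
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

# defining identities W(x e^x) = x and W(x) e^{W(x)} = x
for x in [0.5, 1.0, 3.0, 10.0]:
    assert abs(lambert_w(x * math.exp(x)) - x) < 1e-10
    w = lambert_w(x)
    assert abs(w * math.exp(w) - x) < 1e-10 * max(1.0, x)
# non-negative, increasing, and W(x) <= log(x) for x >= e
assert 0.0 <= lambert_w(2.0) < lambert_w(3.0)
assert lambert_w(100.0) <= math.log(100.0)
```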
Recall that this function is the inverse to $x \mapsto xe^x$ and so satisfies \begin{equation} \label{lamb} W(xe^x)=x, \qquad W(x) e^{W(x)}=x \end{equation} at least for $x\gqs 0$. It is non-negative and increasing for $x\gqs 0$ and $W(x)\lqs \log x$ holds for $x\gqs e$. \begin{theorem} \label{mainthm3} Set $w :=W(2n/\pi)$. Then as $n \to \infty$ we have \begin{equation*} \g(n) = 4 \pi^{2} e^{7w/4} \sqrt{\frac {w}{w+1}} \left(\frac{e w^2}{16 n e^{2/w}} \right)^n \left( 1+ \sum_{k=1}^{K-1}\frac{c_k(w)}{n^k}+ O\left( \frac{\log^K(n)}{n^K}\right) \right) \end{equation*} for an implied constant depending only on $K$. Each $c_k(w)$ is a rational function of $w$ with size $O(\log^k(n))$ and given explicitly in \e{stx2}. \end{theorem} The first coefficient is \begin{equation} \label{c1w} c_1(w) = -\frac{w^4+58 w^3+29 w^2-24 w-16}{192 (w+1)^3}. \end{equation} Table \ref{xeb} compares the approximations of Theorem \ref{mainthm3} with the actual value of $\g(n)$ for $n=1000$. All decimals are correct to the accuracy shown. See also Table 2 in \cite{GORZ} (their $\g(n)$ is $8$ times larger). \begin{table}[ht] \centering \begin{tabular}{ccc} \hline $K$ & Theorem \ref{mainthm3} & \\ \hline $1$ & $4.84\textcolor{gray}{60204243211378239} \times 10^{-2568}$ & \\ $3$ & $4.845042611\textcolor{gray}{1532216799} \times 10^{-2568}$ & \\ $5$ & $4.84504261127258\textcolor{gray}{84216} \times 10^{-2568}$ & \\ $7$ & $4.8450426112725879772 \times 10^{-2568}$ & \\ \hline \phantom{$\g(1000)$} & $4.8450426112725879772 \times 10^{-2568}$ & $\g(1000)$\\ \hline \end{tabular} \caption{The approximations of Theorem \ref{mainthm3} to $\g(1000)$.} \label{xeb} \end{table} Theorem \ref{mainthm3} follows from the next theorem, giving the asymptotics of $\xi^{(2n)}(1/2)$ from \e{xip}. \begin{theorem} \label{mainthm2} Set $w :=W(2n/\pi)$.
Then as $n\to \infty$ we have \begin{equation} \label{ytr} \xi^{(2n)}(1/2)= 4\pi^{2}e^{7w/4}\sqrt{\frac{2 w}{w+1}} \left( \frac{w}{2e^{1/w}}\right)^{2n} \left( 1+ \sum_{k=1}^{K-1}\frac{\mu_k(w)}{n^k}+ O\left( \frac{\log^{K}(n)}{n^K}\right) \right) \end{equation} for an implied constant depending only on $K$. Each $\mu_k(w)$ is a rational function of $w$ with size $O(\log^k(n))$ and given explicitly by \e{same2} and \e{pq3x}. \end{theorem} Theorem \ref{mainthm2} is essentially a reformulated version of \cite[Thm. 9]{GORZ} where they used the solution $L$ to the equation $n=L(\pi e^L+3/4)$ instead of $W(2n/\pi)$. The main term of the expansion \e{ytr} appears in Thm. 6.1 of \cite{Rom} and may also be compared with the weaker results of \cite{Pu} and \cite{Co}. The asymptotics of many related integrals are covered by our techniques in Sect. \ref{lap} and a general result is formulated in Theorem \ref{ian2}. This gives the complete asymptotic expansion as $n \to \infty$ of \begin{equation} \label{iafn} I_\alpha(f;n):=\int_1^\infty (\log t)^n e^{-\alpha t} f(t) \, dt \qquad \qquad (\alpha >0) \end{equation} for suitable functions $f$. For an application of this, also involving Hermite polynomials, recall Tur\'an's expansion \begin{equation} \label{turanbn} \Xi(z)=\sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(z) \end{equation} for $\Xi(z)= \xi(1/2+i z)$. This series converges locally uniformly in $\C$ according to \cite[Thm. 2.1]{Rom}. We may extend the asymptotics for $b_{2n}$ in \cite[Thm. 2.7]{Rom} with the next result. \begin{theorem} \label{turanb} Set $w :=W(2n/\pi)$. Then as $n \to \infty$ we have \begin{equation*} b_{2n}=4\pi^{2} \frac{ e^{7w/4-w^2/16}}{(2n)!}\sqrt{\frac{ 2w}{w+1}} \left( \frac{w}{4e^{1/w}}\right)^{2n} \left( 1+ \sum_{k=1}^{K-1}\frac{\tau_k(w)}{n^k}+ O\left( \frac{\log^{3K}(n)}{n^K}\right) \right) \end{equation*} for an implied constant depending only on $K$. 
Each $\tau_k(w)$ is a rational function of $w$ with size $O(\log^{3k}(n))$ and given by the formulas \e{same}, \e{pq2} and \e{pq3}. \end{theorem} We remark that Hermite polynomials have also appeared recently in connection with the asymptotics of $\zeta(s)$ in \cite{OSrs}, which gives a generalization of the Riemann-Siegel formula. Romik explores further orthogonal polynomial expansions of $\Xi(z)$ in \cite{Rom}. Wagner, in \cite{Wa}, extends the work in \cite{GORZ} to large classes of $L$-functions. The techniques in this paper should also be useful for these generalizations. \vskip 4mm {\bf Acknowledgements.} Thanks to Jacques G\'elinas, Dan Romik, Tim Trudgian and both referees for their helpful comments. \section{Preliminaries} The {\em Hermite polynomials} have generating function $\exp(2Xt-t^2)$ and the explicit expression \begin{equation*} H_d(X)= d! \sum_{r=0}^{\lfloor d/2\rfloor} \frac{ (-1)^r }{r! (d-2r)!} (2X)^{d-2r}. \end{equation*} These polynomials are a special case of the Laguerre polynomials appearing in \e{lague}. The {\em partial ordinary Bell polynomials} $\hat{\mathcal B}_{i,j}$ are useful devices to keep track of power series coefficients. With $j \in \Z_{\gqs 0}$, we have the generating function definition \begin{equation} \label{pobell2} \left( p_1 x +p_2 x^2+ p_3 x^3+ \cdots \right)^j = \sum_{i=j}^\infty \hat{\mathcal B}_{i,j}(p_1, p_2, p_3, \dots) x^i. \end{equation} Clearly $\hat{\mathcal B}_{i,0}(p_1, p_2, p_3, \dots) = \delta_{i,0}$. The formulas \begin{align} \label{pobell} \hat{\mathcal B}_{i,j}(p_1, p_2, p_3, \dots) & = \sum_{\substack{1\ell_1+2 \ell_2+ 3\ell_3+\dots = i \\ \ell_1+ \ell_2+ \ell_3+\dots = j}} \frac{j!}{\ell_1! \ell_2! \ell_3!
\cdots } p_1^{\ell_1} p_2^{\ell_2} p_3^{\ell_3} \cdots, \\ & = \sum_{n_1+n_2+\dots + n_j = i} p_{n_1}p_{n_2} \cdots p_{n_j} \qquad \qquad (j \gqs 1) \label{pobell3} \end{align} hold, where the sum in \e{pobell} is over all possible $\ell_1$, $\ell_2$, $\ell_3, \dots \in \Z_{\gqs 0}$ and the sum in \e{pobell3} is over all possible $n_1$, $n_2, \dots \in \Z_{\gqs 1}$. For instance, \begin{equation*} \hat{\mathcal B}_{9,6}(p_1, p_2, p_3, \dots) = 20 p_1^3 p_2^3+30 p_1^4 p_2 p_3 + 6 p_1^5 p_4. \end{equation*} For $j \gqs 1$ we see from \e{pobell3} that $\hat{\mathcal B}_{i,j}(p_1, p_2, p_3, \dots)$ is a polynomial in $p_1, p_2, \dots, p_{i-j+1}$ of homogeneous degree $j$ with positive integer coefficients. See the discussion and references in \cite[Sect. 7]{OSper}, for example, for more information. As an application we will need later, consider \begin{equation} \label{hot} h_u(x) := \log\left( 1+\frac{\log(x+1)}{u}\right) = \sum_{i=1}^\infty \ell_i(u) x^i \end{equation} which is a holomorphic function of $x\in \C$ for $|x|\lqs 1/2$ and $u\gqs 2$, say. To find the coefficients $\ell_i(u)$ write \begin{align} \log\left( 1+\frac{\log(x+1)}{u}\right) & = \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j \cdot u^j} \log^j(x+1) \notag\\ & = \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j \cdot u^j} \sum_{i=j}^\infty \hat{B}_{i,j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots) x^i\notag\\ & = \sum_{i=1}^\infty x^i \sum_{j=1}^i \hat{B}_{i,j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots)\frac{(-1)^{j+1}}{j \cdot u^j}. 
\label{wash} \end{align} Then \begin{equation} \ell_i(u) = \sum_{j=1}^i \hat{B}_{i,j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots) \frac{(-1)^{j+1}}{j \cdot u^j} \label{covi} \end{equation} with \begin{equation} \ell_1(u)=\frac 1u, \qquad \ell_2(u)=-\frac 1{2u}-\frac 1{2u^2}, \qquad \ell_3(u)=\frac 1{3u}+\frac 1{2u^2}+\frac 1{3u^3}, \qquad \text{etc}.\label{luex} \end{equation} We also record here a basic bound for the incomplete gamma function $\G(s,a):=\int_a^\infty e^{-x} x^{s-1}\, dx$. \begin{lemma} \label{inc-gam} For $a,r \gqs 0$ we have \begin{equation*} \G(r+1,a) \lqs 2^r(a^r + \G(r+1))e^{-a}. \end{equation*} \end{lemma} \begin{proof} The result follows from \begin{align*} \int_a^\infty e^{-x} x^{r}\, dx & = e^{-a}\left(\int_0^a e^{-x}(x+a)^r \, dx + \int_a^\infty e^{-x}(x+a)^r \, dx\right) \\ & \lqs e^{-a}\left(\int_0^\infty e^{-x}(2a)^r \, dx + \int_0^\infty e^{-x}(2x)^r \, dx\right). \qedhere \end{align*} \end{proof} Hence, for all $a,r \gqs 0$ and $c>0$ \begin{equation}\label{gmmb} \int_a^\infty e^{-c x} x^{r}\, dx \ll c^{-r-1} \left((ac)^r +1\right) e^{-ac} \end{equation} for an implied constant depending only on $r$. \section{Jensen polynomials and the Laguerre-P\'olya class} \label{jkl} \subsection{Background} In \cite{Po27} P\'olya defines a function to be of {\em genus $1^*$} (``erh\"{o}htem Genus $1$'') if it is of the form $e^{-\alpha z^2}f(z)$ for $\alpha \gqs 0$ and $f(z)$ an entire function of genus at most $1$. So this class includes some entire functions of order $2$ and, by Hadamard's theorem, all entire functions of order $<2$. It is easy to see that the product of two functions of genus $1^*$ is also of genus $1^*$. See for example \cite[Sect. 1]{kk} for a discussion of P\'olya's long study of genus $1^*$ functions. The {\em Laguerre-P\'olya class} consists of functions that are real, entire of genus $1^*$ and that satisfy the added condition of having only real zeros. 
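Stepping back for a moment to the expansion \e{hot}: the coefficients $\ell_1(u)$, $\ell_2(u)$, $\ell_3(u)$ listed in \e{luex} can be recomputed by brute force over $\Q$, composing truncated power series. The following sketch (ours, not part of the paper) uses the arbitrary rational test value $u = 7/2$:

```python
# Sketch (ours, not from the paper): recompute the coefficients ell_i(u) of
# h_u(x) = log(1 + log(1+x)/u) by composing truncated power series over Q,
# at the arbitrary rational test value u = 7/2.
from fractions import Fraction as F

N = 8  # truncation order

def mul(a, b):
    """Product of two truncated series (coefficient lists of length N)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def log1p_series(s):
    """log(1 + s(x)) for a series s with s(0) = 0, via sum (-1)^{j+1} s^j / j."""
    out, power = [F(0)] * N, [F(1)] + [F(0)] * (N - 1)
    for j in range(1, N):
        power = mul(power, s)
        sign = 1 if j % 2 == 1 else -1
        out = [o + sign * p / j for o, p in zip(out, power)]
    return out

u = F(7, 2)
x = [F(0), F(1)] + [F(0)] * (N - 2)          # the series "x" itself
h = log1p_series([c / u for c in log1p_series(x)])

# compare with the closed forms for ell_1, ell_2, ell_3
assert h[1] == 1 / u
assert h[2] == -F(1, 2) / u - F(1, 2) / u**2
assert h[3] == F(1, 3) / u + F(1, 2) / u**2 + F(1, 3) / u**3
```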
In particular, the functions $\Xi(z)$ and $\Theta(z)$ are both real entire functions of genus $1^*$ and they are in the Laguerre-P\'olya class if and only if the Riemann hypothesis is true. P\'olya and Schur in \cite{PS14} characterize the Laguerre-P\'olya class in various ways and we give their description in terms of Jensen polynomials next. For a formal power series $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$ define \begin{equation*} g_d(\Phi;x):= \sum_{j=0}^d \binom{d}{j} c_j x^j \end{equation*} to be the {\em Jensen polynomial of degree $d$ associated to $\Phi$}. Let \begin{equation*} g^*_d(\Phi;x):= x^d g_d(\Phi;1/x) = \sum_{j=0}^d \binom{d}{j} c_j x^{d-j} \end{equation*} be the reciprocal polynomial. We easily have \begin{equation} \label{appe} \frac d{dx} g_d(\Phi;x) = d \cdot g_{d-1}(\Phi';x), \qquad \frac d{dx} g^*_d(\Phi;x) = d \cdot g^*_{d-1}(\Phi;x) \end{equation} and the right identity in \e{appe} indicates that $g^*_d(\Phi;x)$ is an {\em Appell polynomial}. More properties of Jensen polynomials are given in \cite[Prop. 2.1]{Cso90}. \begin{theorem}[P\'olya-Schur \cite{PS14}] \label{psthm} Let $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$ be a formal power series with real coefficients. Then $\Phi(z)$ converges uniformly on compact sets in $\C$ to an entire function in the Laguerre-P\'olya class if and only if $g_d(\Phi;z)$ is hyperbolic for all $d\gqs 1$. \end{theorem} Laguerre had seen cases of this theorem and Jensen proved it in \cite[pp. 183--187]{Je13} with the assumption that $\Phi(z)$ is a real entire function of genus at most $1$. P\'olya showed in \cite[Sect. 8]{Po27} that Jensen's proof extends easily to $\Phi(z)$ being of genus $1^*$ and we describe this briefly next. \begin{proof}[Sketch of proof when $\Phi(z)$ is of genus $1^*$] In the easier direction, assume all $g_d(\Phi;z)$ are hyperbolic.
Then since $g_d(\Phi;z/d) \to \Phi(z)$ locally uniformly in $\C$ as $d\to \infty$, it follows from Hurwitz's theorem for example that $\Phi(z)$ must be hyperbolic. Let $D=\frac{d}{dz}$. Then it follows from Rolle's theorem that if $p(z)$ is a real hyperbolic polynomial then $(D-a)p(z)=p'(z)-a p(z)$ is also hyperbolic for all $a\in \R$. If $f(z)$ and $p(z)$ are real hyperbolic polynomials then applying this argument $\deg(f)$ times shows $f(D)p(z)$ is hyperbolic. Assume $\Phi(z)$ is hyperbolic. From its Weierstrass factorization it is possible to construct real hyperbolic polynomials $\Phi_n(z)$ that converge uniformly to $\Phi(z)$ as $n\to \infty$. Then each $\Phi_n(D) p(z)$ is hyperbolic and as $n\to \infty$ they converge to $\Phi(D)p(z)$. It follows that $\Phi(D)p(z)$ is hyperbolic whenever $p(z)$ is. The final step is to note that $\Phi(D)z^d = g^*_d(\Phi;z)$, making $g^*_d(\Phi;z)$ and hence $g_d(\Phi;z)$ hyperbolic. \end{proof} The simple idea behind our Theorem \ref{hyp} is to see what happens when $p(z)=z^d$, in the last sentence of the above proof, is replaced by $H_d(z)$. Also note that a similar limiting procedure allows the polynomial $p(z)$ to be replaced by more general functions; see Theorem \ref{prodj}. The full Theorem \ref{psthm}, requiring no conditions on $\Phi(z)$, is stated in \cite[p. 111]{PS14} and proved there in Sections 2--4. See for example \cite[Thm. 2.7]{Cso89} for further characterizations of the Laguerre-P\'olya class. In our previous notation, with \e{xi1} and \e{jensen}, \begin{equation} \label{jg} J^{d,n}(x)=g_d(\Theta^{(n)};x) \end{equation} since the Taylor coefficients of the $n$th derivative $\Theta^{(n)}(z)$ are shifted by $n$. \begin{cor} \label{cdo} The function $\Theta^{(n)}(z)$ is hyperbolic if and only if $J^{d,n}(z)$ is hyperbolic for all $d \gqs 1$. \end{cor} \begin{proof} We know that $\Theta(z)$ is real and entire of order $1/2$.
All of its derivatives must have the same properties and this implies that $\Theta^{(n)}(z)$ is real and entire of genus $1^*$ for all $n\gqs 0$. Therefore $\Theta^{(n)}(z)$ is hyperbolic if and only if it is in the Laguerre-P\'olya class. The corollary now follows from \e{jg} and Theorem \ref{psthm}. \end{proof} \begin{cor} \label{cdo2} If $\Theta^{(n)}(z)$ is hyperbolic then $\Theta^{(m)}(z)$ is also hyperbolic for all $m>n$. \end{cor} \begin{proof} By Corollary \ref{cdo}, if $\Theta^{(n)}(z)$ is hyperbolic then $J^{d,n}(z)$ is hyperbolic for all $d \gqs 1$. The identity on the left of \e{appe} implies that \begin{equation*} \frac{d}{dz} J^{d,n}(z) = d \cdot J^{d-1,n+1}(z). \end{equation*} As we saw in the proof of Theorem \ref{psthm}, the derivative of a real hyperbolic polynomial must be hyperbolic. Therefore $J^{d,n+1}(z)$ is hyperbolic for all $d \gqs 1$. Hence, by Corollary \ref{cdo} again, $\Theta^{(n+1)}(z)$ is hyperbolic. Induction completes the proof. \end{proof} \subsection{Jensen polynomials of products} \begin{lemma} \label{wim} Let $\Phi$ and $\Omega$ be two formal power series with $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$. Then for all $d\gqs 0$ \begin{equation} \label{iran} g_d^*(\Phi \cdot \Omega;x) = \sum_{j=0}^d \binom{d}{j} c_j \cdot g^*_{d-j}(\Omega;x). \end{equation} \end{lemma} \begin{proof} This is an easy exercise. \end{proof} Replacing $x$ by $1/x$ in \e{iran} also yields \begin{equation} \label{ave} g_d(\Phi \cdot \Omega;x) = \sum_{j=0}^d \binom{d}{j} c_j x^j \cdot g_{d-j}(\Omega;x). \end{equation} \begin{theorem} \label{prod} Let $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$ be a real entire function of genus $1^*$ and let $\Omega(z)$ be in the Laguerre-P\'olya class. Then $\Phi(z)$ is hyperbolic if and only if the polynomials \begin{equation} \label{ran} \sum_{j=0}^d \binom{d}{j} c_j \cdot g^*_{d-j}(\Omega;x) \end{equation} are hyperbolic for all $d\gqs 1$.
\end{theorem} \begin{proof} First note that $\Phi \cdot \Omega$ is of genus $1^*$ because each of $\Phi$ and $\Omega$ is. The zeros of $\Omega$ are real since it is in the Laguerre-P\'olya class. Therefore $\Phi \cdot \Omega$ is in the Laguerre-P\'olya class if and only if $\Phi(z)$ is hyperbolic. By Theorem \ref{psthm}, $\Phi \cdot \Omega$ is in the Laguerre-P\'olya class if and only if $g_d^*(\Phi \cdot \Omega;x)$ is hyperbolic for all $d\gqs 1$. Noting that $g_d^*(\Phi \cdot \Omega;x)$ equals \e{ran} by Lemma \ref{wim} finishes the proof. \end{proof} Clearly we could replace \e{ran} by \e{ave} in the statement of Theorem \ref{prod} and get the same result. The function $ e^{-z^2} = 1-\frac{2!}{1} \cdot \frac{z^2}{2!} + \frac{4!}{2!} \cdot \frac{z^4}{4!}- \dots $ is in the Laguerre-P\'olya class and \begin{align*} g_d^*\left( e^{-z^2};x\right) & = \sum_{j=0}^{\lfloor d/2 \rfloor} \binom{d}{2j} (-1)^j \frac{(2j)!}{j!} x^{d-2j} \\ & = \sum_{j=0}^{\lfloor d/2 \rfloor} \frac{ d!(-1)^j }{(d-2j)! j!} x^{d-2j} =H_d(x/2). \end{align*} By Lemma \ref{wim}, for any $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$, we then find \begin{equation*} g_d^*\left(\Phi \cdot e^{-z^2};x\right) = \sum_{j=0}^d \binom{d}{j} c_j \cdot H_{d-j}(x/2) \end{equation*} and in particular \begin{equation} \label{tus} g_d^*\left(\Theta^{(n)} \cdot e^{-z^2};x\right) = \sum_{j=0}^d \binom{d}{j} \g(j+n) \cdot H_{d-j}(x/2) = P^{d,n}(x/2) \end{equation} with our notation \e{jen-herm}. \begin{proof}[Proof of Theorem \ref{hyp}] As seen in the proof of Corollary \ref{cdo}, $\Theta^{(n)}(z)$ is real and entire of genus $1^*$ for all $n\gqs 0$. Theorem \ref{prod} with $\Phi(z) = \Theta^{(n)}(z)$ and $\Omega(z)=e^{-z^2}$ implies that $\Theta^{(n)}(z)$ is hyperbolic if and only if $P^{d,n}(x)$ as in \e{tus} is hyperbolic for all $d\gqs 1$. As in Corollary \ref{cdo2}, the Riemann hypothesis is equivalent to $\Theta^{(n)}(z)$ being hyperbolic for all $n\gqs 0$.
\end{proof} We noted in the introduction that $J^{d,n}(x)$ being hyperbolic implies that $P^{d,n}(x)$ is hyperbolic. This is shown next with the help of a general result of P\'olya from \cite[p. 242]{Po15}. \begin{theorem} \label{prodj} Let $\Phi(z)$ and $\Omega(z)$ be in the Laguerre-P\'olya class with $\Phi(z)=\sum_{j=0}^\infty c_j z^j/j!$. Suppose \begin{equation} \label{ranxy} \sum_{j=0}^\infty \frac{c_j}{j!} \Omega^{(j)}(z) \end{equation} converges when $|z|<\rho$ for some $\rho>0$. Then \e{ranxy} represents a function in the Laguerre-P\'olya class. \end{theorem} Following P\'olya, the example of $\Omega(z) = e^{-z^2}$ may be used in Theorem \ref{prodj} along with the formula \begin{equation*} \Omega^{(j)}(z) = (-1)^j e^{-z^2} H_j(z). \end{equation*} We obtain the following special case, which was also stated in Lemma II of \cite{Tu59}. \begin{cor} \label{poly-herm} Let $\sum_{j=0}^m a_j z^j $ be a hyperbolic polynomial with real coefficients. Then the polynomial $ \sum_{j=0}^m a_j H_j(z) $ is also hyperbolic. \end{cor} \begin{cor} \label{poly-herm2} If $J^{d,n}(x)$ is hyperbolic then $P^{d,n}(x)$ is hyperbolic. \end{cor} \begin{proof} If $J^{d,n}(x)$ is hyperbolic then so is the reciprocal polynomial $x^d J^{d,n}(1/x)$. An application of Corollary \ref{poly-herm} to this reciprocal now shows that $ P^{d,n}(x)$ is hyperbolic. \end{proof} Note that taking $\Phi(z)= J^{d,n}(z)$ and $\Omega(z) = H_d(z/2)$ in Theorem \ref{prodj} implies the similar polynomial \begin{equation*} \sum_{j=0}^d \binom{d}{j}\g(n+j) \frac{H_{d-j}(x)}{(d-j)!} \end{equation*} is also hyperbolic if $J^{d,n}(x)$ is. This uses the equality $\frac{d^j}{dx^j} H_d(x/2) = \frac{d!}{(d-j)!} H_{d-j}(x/2)$. \subsection{Further examples of Jensen polynomials} Theorem \ref{prod} with $\Phi(z)=\Theta^{(n)}(z)$ may be used to give different criteria for the Riemann hypothesis, introducing an interesting flexibility.
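Incidentally, the Hermite derivative identity just used is easy to confirm at the level of coefficient lists; a short sketch (ours, not part of the paper), building $H_d$ from the recurrence $H_{k+1}(x) = 2xH_k(x) - 2kH_{k-1}(x)$:

```python
# Sketch (ours, not from the paper): coefficient-level check of
# (d/dx)^j H_d(x/2) = d!/(d-j)! * H_{d-j}(x/2).
from fractions import Fraction
from math import factorial

def hermite(d):
    """Coefficient list of H_d via H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    h = [[Fraction(1)], [Fraction(0), Fraction(2)]]
    for k in range(1, d):
        nxt = [Fraction(0)] + [2 * c for c in h[k]]     # 2x * H_k
        for i, c in enumerate(h[k - 1]):                # minus 2k * H_{k-1}
            nxt[i] -= 2 * k * c
        h.append(nxt)
    return h[d]

def half_arg(coeffs):
    """Coefficients of p(x/2) given those of p(x)."""
    return [c / Fraction(2) ** k for k, c in enumerate(coeffs)]

def deriv(coeffs, j):
    """Coefficients of the j-th derivative."""
    for _ in range(j):
        coeffs = [k * c for k, c in enumerate(coeffs)][1:]
    return coeffs

for d in range(1, 8):
    for j in range(d + 1):
        lhs = deriv(half_arg(hermite(d)), j)
        scale = Fraction(factorial(d), factorial(d - j))
        assert lhs == [scale * c for c in half_arg(hermite(d - j))]
```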
For $\Omega(z)=1$ we obtain the polynomials $J^{d,n}(X)$ and for $\Omega(z)=e^{-z^2}$ we obtain $P^{d,n}(X)$. We could use $\cos(z)$ and $\sin(z)$ for $\Omega(z)$ for example, but we next focus on a family of Bessel functions. For $\alpha \in \R$ the Bessel function of the first kind has the series representation \begin{equation*} J_\alpha(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \G(m+\alpha+1)} \left(\frac z2 \right)^{2m+\alpha}. \end{equation*} Then $z^{-\alpha} J_\alpha(2z)$ is entire and even and we may put \begin{equation*} \mathcal J_\alpha(z) := z^{-\alpha/2} J_\alpha(2\sqrt{z}) = \sum_{m=0}^\infty \frac{(-1)^m z^m}{m! \, \G(m+\alpha+1)} . \end{equation*} A short calculation as in \cite[Sect. 3]{DY09} finds \begin{equation*} g_d(\mathcal J_\alpha;x) = \frac{d!}{\G(d+\alpha+1)} L^{(\alpha)}_d(x) \end{equation*} with the generalized Laguerre polynomials given by \begin{equation} L^{(\alpha)}_d(x) = \frac{\G(d+\alpha+1)}{d!} \sum_{k=0}^d \binom{d}{k}\frac{(-1)^k x^k}{\G(k+\alpha+1)} = \sum_{k=0}^d \binom{d+\alpha}{d-k}(-1)^k \frac{ x^k}{k!}. \label{lague} \end{equation} These polynomials are orthogonal for $\alpha>-1$ and therefore have only real roots when $d\gqs 1$ for $\alpha$ in this range. In fact the Laguerre polynomials are known to have real roots for $\alpha \gqs -2$. For $d\gqs 2$ and $\alpha<-2$, $L^{(\alpha)}_{d}(x)$ has non-real roots except possibly when $\alpha$ is an integer. See \cite[Eq. (5.2.1), Thm. 6.7.3]{Sz75} for these results. It follows from Theorem \ref{psthm} that $\mathcal J_\alpha$ is in the Laguerre-P\'olya class for all $\alpha \gqs -2$. 
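The two closed forms for $L^{(\alpha)}_d(x)$ in \e{lague}, and the stated relation with $g_d(\mathcal J_\alpha;x)$, can be spot-checked numerically. A sketch (ours, not part of the paper), using $\binom{d+\alpha}{d-k} = \G(d+\alpha+1)/\bigl(\G(d-k+1)\G(\alpha+k+1)\bigr)$ for real $\alpha$:

```python
# Sketch (ours, not from the paper): spot-check the two closed forms for the
# generalized Laguerre polynomial and the relation with g_d(J_alpha; x).
from math import gamma, comb, factorial, isclose

def laguerre_a(d, a, x):
    """First form: Gamma(d+a+1)/d! * sum_k C(d,k) (-1)^k x^k / Gamma(k+a+1)."""
    return gamma(d + a + 1) / factorial(d) * sum(
        comb(d, k) * (-1) ** k * x ** k / gamma(k + a + 1) for k in range(d + 1))

def laguerre_b(d, a, x):
    """Second form, via binom(d+a, d-k) = Gamma(d+a+1)/(Gamma(d-k+1)Gamma(a+k+1))."""
    return sum(gamma(d + a + 1) / (gamma(d - k + 1) * gamma(a + k + 1))
               * (-1) ** k * x ** k / factorial(k) for k in range(d + 1))

def jensen_bessel(d, a, x):
    """g_d(J_alpha; x): the Taylor coefficients of J_alpha give c_j = (-1)^j / Gamma(j+a+1)."""
    return sum(comb(d, j) * (-1) ** j / gamma(j + a + 1) * x ** j
               for j in range(d + 1))

for d in range(1, 6):
    for a in (-0.5, 0.25, 1.0, 2.5):
        for x in (-1.3, 0.4, 2.0):
            L = laguerre_a(d, a, x)
            assert isclose(L, laguerre_b(d, a, x), rel_tol=1e-9, abs_tol=1e-9)
            assert isclose(jensen_bessel(d, a, x),
                           factorial(d) / gamma(d + a + 1) * L,
                           rel_tol=1e-9, abs_tol=1e-9)
```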
Applying Theorem \ref{prod} with $\Phi = \Theta^{(n)}$ and $\Omega = \mathcal J_\alpha$, and using the alternate identity \e{ave}, proves an extended criterion for the Riemann hypothesis involving the Laguerre polynomials: \begin{theorem} \label{lag} The Riemann hypothesis is true if and only if the polynomials \begin{equation*} Q^{d,n,\alpha}(x):=\sum_{j=0}^d \binom{d+\alpha}{j} \g(n+j) x^j L^{(\alpha)}_{d-j}(x) \end{equation*} are hyperbolic for all $d \gqs 1$, all $n \gqs 0$ and all real $\alpha \gqs -2$. \end{theorem} For example, taking $d=2$, the discriminant of $Q^{2,n,\alpha}(x)$ is \begin{equation*} (\alpha+2) \g(n)^2 +(\alpha+1)^2(\alpha+2)^2\Bigl[\g(n+1)^2 -\g(n)\g(n+2) \Bigr], \end{equation*} and this is non-negative if $\alpha \gqs -2$ and the Tur\'an inequality \e{tu} holds. We note that Farmer, in the interesting preprint \cite{Far}, puts these types of Jensen polynomial approaches to the Riemann hypothesis into context in the recent literature and comments on their likelihood of success. \section{The asymptotics of $I_{\alpha}(n)$} \label{lap} Before treating \e{iafn} we first look at the simpler case \begin{equation} \label{simpl} I_\alpha(n):=\int_1^\infty (\log t)^n e^{-\alpha t} \, dt \end{equation} which contains all the main ideas. Recalling \e{covi} we define the rational functions \begin{gather} a_r(v) :=\sum_{j=0}^{2r} \frac{(2j+2r-1)!!}{j!} \left( \frac{v^2}{v+1}\right)^{j+r} \hat{B}_{2r,j}(\ell_3(v), \ell_4(v), \dots) \label{covi2}\\ \text{so that} \qquad a_0(v) = 1, \qquad a_1(v) =\frac{2 v^4+9 v^3+16 v^2+6 v+2}{24 (v+1)^3}, \qquad \text{etc}. \label{luex2} \end{gather} \begin{theorem} \label{ian} Suppose $\alpha>0$ and set $u :=W(n/\alpha)$. Then as $n \to \infty$ we have \begin{equation} \label{maini} I_{\alpha}(n) = \sqrt{2\pi} \frac{ u^{n+1}e^{u-n/u}}{\sqrt{(1+u)n}} \left( 1+ \sum_{r=1}^{R-1}\frac{a_r(u)}{n^r}+ O\left( \frac{u^R}{n^R}\right) \right) \end{equation} where the implied constant depends only on $R \gqs 1$ and $\alpha$. 
Also $a_r(u) \ll u^r \ll \log^r(n)$. \end{theorem} \begin{proof} We give the proof in the rest of this section, following Laplace's method as described in \cite[Sect. B6]{Fl09} for example. See also \cite[Thm. 9]{GORZ} and \cite[Sect. 2.4]{Rom} for similar arguments. Let $g(t)$ denote the integrand in \e{simpl}. Then \begin{equation*} g'(t) = \left( \frac n{t\log t} -\alpha \right) g(t) \end{equation*} and $g'(t_0)=0$ for the unique $t_0>1$ satisfying $t_0 \log t_0 = n/\alpha$. With \e{lamb} we have that \begin{equation} \label{boo} W(n/\alpha)=W(t_0\log t_0) = \log t_0 \qquad \text{implies} \qquad t_0=e^{W(n/\alpha)}= \frac{n/\alpha}{W(n/\alpha)}. \end{equation} The largest contribution to the integral $I_\alpha(n)$ will be near the maximum of the integrand at $t=t_0 \approx n/(\alpha \log (n/\alpha))$. We develop the integrand about this point: \begin{align} I_\alpha(n) & = g(t_0) \int_1^\infty \exp\left(\log\left( \frac{g(t)}{g(t_0)}\right)\right) \, dt \notag \\ & = t_0 g(t_0) \int_{1/t_0-1}^\infty \exp\left(\log\left( \frac{g((x+1)t_0)}{g(t_0)}\right)\right) \, dx \notag \\ & = t_0 g(t_0) \int_{1/t_0-1}^\infty \exp\left(n\log\left( 1+\frac{\log(x+1)}{\log t_0}\right)-\alpha t_0 x\right) \, dx. \label{fay} \end{align} For $u=W(n/\alpha)$ the integrand is \begin{equation} \label{expp} \exp\left(n\left[\log\left( 1+\frac{\log(x+1)}{u}\right)-\frac x u\right]\right) = \exp\left(n\left[ h_u(x)-\frac x u\right]\right) \end{equation} with $ h_u(x)$ from \e{hot} having a power series expansion for $|x|\lqs 1/2$ and $u\gqs 2$, say, with coefficients $\ell_i(u)$ given in \e{covi}. After this point, the equality between $u$ and $W(n/\alpha)$ may be ignored and we treat $u$ as a free parameter with $u \gqs 2$, or later $u \gqs 1$. \begin{lemma} \label{jow} Suppose $|z|\lqs 1/2$ for $z \in \C$ and assume $u\gqs 2$. 
Then for $k\gqs 0$ we have \begin{gather} h_u(z) = \sum_{i=1}^{k} \ell_i(u) z^i + R_k(u,z) \notag\\ \text{where} \qquad |\ell_i(u)| \lqs \frac{3}{u} \left( \frac 43 \right)^i, \qquad |R_k(u,z)| \lqs \frac{12}{u} \left( \frac 43 \right)^{k}|z|^{k+1}. \label{wher} \end{gather} \end{lemma} \begin{proof} From the simple inequality $ |\log(w+1)| \lqs 2|w|$ for $|w|\lqs 3/4$, $w\in \C$ we obtain \begin{equation*} |h_u(w)| \lqs 4|w|/u \quad \text{for} \quad |w|\lqs 3/4, u \gqs 2. \end{equation*} We may now bound $h_u^{(i)}(0)$ and the Taylor remainder using Cauchy's estimates in the usual way, with the bound $|h_u(w)| \lqs 3/u$ for $|w|=3/4$. \end{proof} \begin{lemma}[Neglecting the tails] \label{oop} Suppose $0<\delta\lqs 1/100$ and $u\gqs 2$. Then \begin{equation} \label{aco} \int_{1/t_0-1}^\infty \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx = \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx + O\left(u \exp\left(-\frac{\delta^2 n}{4u}\right)\right) \end{equation} where the implied constant is absolute. \end{lemma} \begin{proof} The elementary inequalities \begin{equation*} \log(1+x)\lqs x \quad (x\gqs 0), \qquad \log(1+y)<y/2 \quad (y\gqs 3) \end{equation*} imply that \begin{equation*} h_u(x) \lqs \log(1+x/u) <x/(2u) \quad \text{for} \quad x \gqs 3u. \end{equation*} Therefore \begin{equation} \label{out} \int_{3u}^\infty \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx \lqs \int_{3u}^\infty \exp\left(-\frac {n x}{2u}\right) \, dx = \frac{2u}n e^{-3n/2}. \end{equation} By design $h_u(x)-x/u$ is increasing for $x<0$ and decreasing for $x>0$. Then for $x\gqs \delta$ we have \begin{align*} h_u(x)-x/u & \lqs h_u(\delta)-\delta/u \\ & \lqs -\frac{u+1}{2u^2} \delta^2+|R_2(u,\delta)| < -\frac{\delta^2}{2u}\left( 1- 50 \delta \right) \lqs -\frac{\delta^2}{4u}. 
\end{align*} We obtain the same bound for $x\lqs -\delta$ and so \begin{equation*} \left( \int_{1/t_0-1}^{-\delta} + \int_{\delta}^{3u} \right) \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx \lqs (1+3u)\exp\left(-\frac{\delta^2 n}{4u}\right). \end{equation*} This is bigger than the bound in \e{out} and the lemma is proved. \end{proof} Put $C:= n (1+u)/u^2$ so that $n \cdot \ell_2(u) =-C/2$ by \e{luex}. It follows from Lemma \ref{jow} that \begin{equation} \label{twc} \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx = \exp\left(O(n\delta^{k+1}) \right) \int_{-\delta}^\delta e^{-C x^2/2}\exp\left( n \sum_{i=3}^{k} \ell_i(u) x^i \right) \, dx. \end{equation} Write the sum in \e{twc} as $y:=\sum_{i=3}^{k} \ell_i(u) x^i$. Then $n |y|\ll \delta^3 n/u$ by \e{wher}. We would like $n|y|$ to be small and so $\delta^3 n$ should tend to $0$ as $n \to \infty$. Also, with the error in \e{aco}, $\delta^2 n$ should tend to $\infty$. This means choosing $\delta$ between $n^{-1/2}$ and $n^{-1/3}$. We now fix \begin{equation*} \delta := n^{-2/5} \qquad \text{so that} \qquad \delta^2 n = n^{1/5}, \qquad \delta^3 n = n^{-1/5}. \end{equation*} With this choice of $\delta$ (and assuming $k\gqs 2$) we have $\exp\left(O(n\delta^{k+1}) \right) = 1+O(n\delta^{k+1})$. Also, with this $\delta$, the integrand on the right of \e{twc} is $\ll_k 1$ and therefore \begin{equation} \label{twc2} \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx = \int_{-\delta}^\delta e^{-C x^2/2}\exp\left( n \sum_{i=3}^{k} \ell_i(u) x^i \right) \, dx +O(n^{3/5-2k/5}). 
\end{equation} Without truncating the Taylor series \e{hot} of $h_u(x)$, we may write \begin{align} \exp\left(\sum_{i=3}^\infty n \cdot \ell_i(u) x^i \right) & = \sum_{i=0}^\infty x^i \sum_{j=0}^i \hat{B}_{i,j}(\ell_3(u), \ell_4(u), \dots) \frac{n^j x^{2j}}{j!}\notag \\ & = \sum_{r=0}^\infty e_r(n,u) x^r \label{suraj} \end{align} for \begin{equation} \label{esum} e_r(n,u) := \sum_{j=0}^{\lfloor r/3\rfloor} \hat{B}_{r-2j,j}(\ell_3(u), \ell_4(u), \dots) \frac{n^j}{j!}. \end{equation} It follows that $ e_r(n,u)$ is a polynomial in $n$ of degree at most $\lfloor r/3\rfloor$ with coefficients that are rational functions in $u$. Using \e{pobell3} and \e{wher} shows \begin{equation}\label{and} e_r(n,u) \ll_r n^{r/3}. \end{equation} The next lemma gives a truncated version of \e{suraj}. \begin{lemma} \label{expx} For all $x$ with $|x|\lqs \delta=n^{-2/5}$ and all $u\gqs 1$ we have \begin{equation} \label{pat} \exp\left(\sum_{i=3}^k n \cdot \ell_i(u) x^i \right) = \sum_{r=0}^{k-1} e_r(n,u) x^r + O\left( \frac{1}{ n^{k/15}}\right) \end{equation} for an implied constant depending only on $k$. \end{lemma} \begin{proof} For a number $c$ to be chosen later, the left side of \e{pat} is \begin{align*} \exp(ny) & = \sum_{j=0}^c (ny)^j/j! + O\left((n|y|)^{c+1} \right) \\ & = \sum_{j=0}^c \frac{n^j x^{2j}}{j!} \left( \sum_{i=1}^{k-2} \ell_{i+2}(u) x^i \right)^j + O\left((u \cdot n^{1/5})^{-c-1} \right) \\ & = \sum_{j=0}^c \frac{n^j x^{2j}}{j!} \sum_{i=j}^{(k-2)j} \hat{B}_{i,j}(\ell_3(u), \ell_4(u), \dots, \ell_k(u),0,0, \dots) x^i + O\left(\frac{1}{ n^{(c+1)/5}} \right). \end{align*} If we define $e_r(n,u)_k$ as in \e{esum}, but with $\ell_j(u)$ inside the Bell polynomial replaced by $0$ when $j\gqs k+1$, then we obtain \begin{equation*} \exp(ny) = \sum_{m=0}^{k-1} e_m(n,u) x^m + \sum_{m=k}^{k c} e_m(n,u)_k x^m + O\left(\frac{1}{ n^{(c+1)/5}} \right) \end{equation*} because $e_m(n,u)_k = e_m(n,u)$ for $m\lqs k-1$. 
As in \e{and} we have $e_m(n,u)_k \ll n^{m/3}$ and hence \begin{equation*} \sum_{m=k}^{k c} e_m(n,u)_k x^m \ll \sum_{m=k}^{k c} n^{m/3} x^m \ll \sum_{m=k}^{k c} (\delta^3 n)^{m/3} \ll \sum_{m=k}^{k c}n^{-m/15} \ll n^{-k/15}. \end{equation*} Choosing any $c\gqs k/3-1$ completes the proof. \end{proof} \begin{lemma}[Central approximation and completing the tails] We have \begin{equation} \label{twc3} \int_{-\delta}^\delta e^{-C x^2/2}\exp\left( n \sum_{i=3}^{k} \ell_i(u) x^i \right) \, dx =\frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \sum_{m=0}^{\lfloor(k-1)/2\rfloor} \frac{a^*_m}{n^m} + O\left( \frac{1}{ n^{k/15}}\right) \end{equation} when $\delta = n^{-2/5}$, $C= n (1+u)/u^2$, $u\gqs 1$ and \begin{equation*} a^*_m=a^*_m(n,u):= (2m-1)!! \left( \frac{u^2}{u+1}\right)^m \ e_{2m}(n,u). \end{equation*} \end{lemma} \begin{proof} Lemma \ref{expx} implies the left side of \e{twc3} equals \begin{equation} \label{wworld} \sum_{r=0}^{k-1} e_r(n,u) \int_{-\delta}^\delta e^{-C x^2/2} x^r \, dx + O\left( \frac{1}{ n^{k/15}}\right). \end{equation} If we extend the integral in \e{wworld} to all of $\R$ then the difference is twice \begin{equation} \label{wworld2} \int_{\delta}^\infty e^{-C x^2/2} x^r \, dx \lqs \int_{\delta}^\infty e^{-C \delta x/2} x^r \, dx \end{equation} With \e{gmmb} and the inequality $C> n/u$, \e{wworld2} is bounded by a constant depending on $r$ times \begin{equation*} \left(\frac {2 \delta^{r-1}}{C} + \left(\frac{2}{C\delta}\right)^{r+1} \right) e^{-C \delta^2/2} \ll e^{-n^{1/5}/(2u)}. \end{equation*} For $m\in \Z_{\gqs 0}$ we use \begin{equation*} \int_{-\infty}^\infty e^{-C x^2/2} x^{2m} \, dx = \frac{\G(m+1/2)}{(C/2)^{m+1/2}} = \sqrt{\frac{2\pi}{C}}\frac{(2m-1)!!}{C^m}. 
\end{equation*} Therefore \begin{multline*} \int_{-\delta}^\delta e^{-C x^2/2}\exp\left( n \sum_{i=3}^{k} \ell_i(u) x^i \right) \, dx \\ = \sqrt{\frac{2\pi}{C}} \sum_{m=0}^{\lfloor(k-1)/2\rfloor} e_{2m}(n,u)\frac{(2m-1)!!}{C^m} + O\left( \frac{1}{ n^{k/15}} + n^{k/3} e^{-n^{1/5}/(2u)}\right)\\ =\frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \sum_{m=0}^{\lfloor(k-1)/2\rfloor} \frac{a^*_m}{n^m} + O\left( \frac{1}{ n^{k/15}}\right) \end{multline*} as required. \end{proof} Each $a^*_m(n,u)$ is a polynomial in $n$ of degree at most $\lfloor 2m/3\rfloor$ with coefficients in $\Q(u)$. Explicitly: \begin{equation} \label{cumb} a^*_m = \sum_{j=0}^{\lfloor 2m/3\rfloor} c_{m,j} n^j \quad \text{for} \quad c_{m,j}:= \frac{(2m-1)!!}{j!} \left( \frac{u^2}{u+1}\right)^m \hat{B}_{2m-2j,j}(\ell_3(u), \ell_4(u), \dots). \end{equation} It is convenient to replace $k$ by $2Lk+1$ with an $L$ to be chosen later. Simplifying $\sum_{m=0}^{Lk} a^*_m/n^m$ into a polynomial in $1/n$ we find \begin{equation*} \sum_{m=0}^{Lk} \frac{a^*_m(n,u)}{n^m} = \sum_{r=0}^{Lk} \frac{\hat{a}_r(u)}{n^r} \quad \text{for} \quad \hat{a}_r(u) =\sum_{j=0}^{2r} c_{r+j,j} \end{equation*} provided $c_{m,j}$ is set to zero for $m>Lk$. In other words \begin{equation*} \hat{a}_r(u)=\sum_{j=0}^{\min\{2r,Lk-r\}} \frac{(2j+2r-1)!!}{j!} \left( \frac{u^2}{u+1}\right)^{j+r} \hat{B}_{2r,j}(\ell_3(u), \ell_4(u), \dots) \end{equation*} and equation \e{twc3} becomes \begin{equation} \label{twc4} \int_{-\delta}^\delta e^{-C x^2/2}\exp\left( n \sum_{i=3}^{2Lk+1} \ell_i(u) x^i \right) \, dx =\frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \sum_{r=0}^{Lk} \frac{\hat{a}_r(u)}{n^r} + O\left( \frac{1}{ n^{2Lk/15}}\right). 
\end{equation} Now \e{twc2} and \e{twc4} can be combined to produce \begin{multline} \label{twc5} \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx = \frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \\ \times \left[\sum_{r=0}^{k-1} \frac{\hat{a}_r(u)}{n^r} + \sum_{r=k}^{Lk} \frac{\hat{a}_r(u)}{n^r} +O\left( \frac{n^{1/2}}{ n^{(4Lk-1)/5}} + \frac{n^{1/2}}{ n^{2Lk/15}}\right) \right] . \end{multline} Comparing $ \hat{a}_r(u)$ with $a_r(u)$ defined in \e{covi2}, we see they are equal when $r<k$ and $L\gqs 3$. Also, note that $\ell_i(u) \ll 1/u$ by \e{wher} implies that $\hat{B}_{2r,j}(\ell_3(u), \ell_4(u), \dots) \ll 1/u^j$ by \e{pobell3}. Hence \begin{equation} \label{puin} a_r(u), \ \hat{a}_r(u) \ll_r u^r. \end{equation} Choosing $L=15$, for example, in \e{twc5} shows \begin{equation*} \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) \, dx = \frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \left[\sum_{r=0}^{k-1} \frac{a_r(u)}{n^r} +O\left( \frac{u^k}{ n^{k}}\right) \right] . \end{equation*} Inserting this into Lemma \ref{oop} and recalling \e{fay} gives \begin{equation*} I_\alpha(n) = t_0 g(t_0) \frac{\sqrt{2\pi} u}{\sqrt{(u+1)n}} \left[\sum_{r=0}^{k-1} \frac{a_r(u)}{n^r} +O\left( \frac{u^k}{ n^{k}}+ n^{1/2} u \exp\left(-\frac{ n^{1/5}}{4u}\right)\right) \right]. \end{equation*} Finally $t_0=e^u$ and $g(t_0)=u^n e^{-n/u}$ by \e{boo} gives \e{maini} and completes the proof of Theorem \ref{ian}. \end{proof} \section{Generalizing Theorem \ref{ian}} Define \begin{equation*} I_\alpha(f;n):=\int_1^\infty (\log t)^n e^{-\alpha t} f(t) \, dt \qquad \qquad (n, \alpha >0). \end{equation*} As long as $f(t)$ is reasonably well-behaved and can be developed in a power series about $t=t_0$, with coefficients that are relatively small, then the proof of Theorem \ref{ian} should go through. The examples we have in mind for our applications are $f(t)=t^\beta$ and $f(t)=e^{-(\log(t))^2/16}$. 
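The main term of Theorem \ref{ian} is easy to test numerically. The following Python sketch (not part of the proof; the Newton iteration for $W$, the cutoff $t\lqs 600$ and the step count are ad hoc choices) compares direct quadrature of $I_1(n)$ for $n=200$ against the leading term $\sqrt{2\pi}\, u^{n+1} e^{u-n/u}/\sqrt{(1+u)n}$ with $u=W(n)$; the ratio should be $1+O(1/n)$.

```python
import math

def lambert_w(x):
    """Principal branch of Lambert's W for x >= e, via Newton's method."""
    u = math.log(x)  # starting point above the root, so Newton descends monotonically
    for _ in range(60):
        eu = math.exp(u)
        u -= (u * eu - x) / ((u + 1.0) * eu)
    return u

def simpson(f, a, b, m):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

n, alpha = 200, 1.0

def integrand(t):
    """(log t)^n e^{-alpha t}, assembled in log space to avoid overflow."""
    lt = math.log(t)
    return 0.0 if lt <= 0.0 else math.exp(n * math.log(lt) - alpha * t)

u = lambert_w(n / alpha)
# The integrand is negligible beyond t = 600 for these parameters.
I_num = simpson(integrand, 1.0, 600.0, 200000)
I_main = math.sqrt(2 * math.pi) * u**(n + 1) * math.exp(u - n / u) / math.sqrt((1 + u) * n)
print(I_num / I_main)  # close to 1; the discrepancy is O(1/n)
```

The relative discrepancy observed is of the size predicted by the $a_1(u)/n$ correction term.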
For the latter, \begin{equation} \label{hew} \frac{f(t(x+1))}{f(t)} = \exp\left(-\frac v8 \log(x+1) -\frac 1{16}\log^2(x+1) \right) \end{equation} with $t=e^v$. Treating $x$ as complex, we see (as in the proof of Lemma \ref{jow}) that the right side of \e{hew} is bounded for $|x|\lqs 1/(2v)$ and $v \gqs 1$, say. Then Taylor's theorem and the usual estimates show \begin{equation*} \frac{f(t(x+1))}{f(t)} = \sum_{m=0}^{k-1} f_m(v) x^m +O( v^k |x|^k) \qquad \text{for} \qquad |x| \lqs 1/(2v), \ k \in \Z_{\gqs 0} \end{equation*} and that $f_m(v) \ll v^{m}$. The implied constants depend only on $k$ and $m$, respectively. The coefficients $f_m(v)$ can be computed explicitly by combining the series for $\log(x+1)$ and $e^x$ with the Bell polynomials. Similarly to \e{wash} we find \begin{align*} \exp\left(-\frac v8 \log(x+1) \right) & = \sum_{i=0}^\infty x^i \sum_{j=0}^i \frac 1{j!} \left( -\frac v8\right)^j \hat{B}_{i,j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots),\\ \exp\left( -\frac 1{16}\log^2(x+1) \right) & = \sum_{i=0}^\infty x^i \sum_{j=0}^{\lfloor i/2 \rfloor} \frac 1{j!} \left( -\frac 1{16}\right)^j \hat{B}_{i,2j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots). \end{align*} So with \begin{equation}\label{pq} p_i(v):=\sum_{j=0}^i \frac 1{j!} \left( -\frac v8\right)^j \hat{B}_{i,j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots), \qquad q_i := \sum_{j=0}^{\lfloor i/2 \rfloor} \frac 1{j!} \left( -\frac 1{16}\right)^j \hat{B}_{i,2j}( 1,-{\textstyle \frac 12},{\textstyle \frac 13},\dots), \end{equation} we find that $f_m(v) =\sum_{i=0}^m p_i(v) \cdot q_{m-i}$ is a polynomial in $v$ of degree $m$. We may isolate the properties of this example into a definition. \begin{adef} \label{suit} A continuous function $f:[1,\infty) \to \R$ is {\em suitable} if there exist real constants $b, \lambda \gqs 0$ and functions $f_m$ for $m \in \Z_{\gqs 0}$ so that the following conditions hold. \begin{enumerate} \item For $t \gqs 1$ we have $0< f(t) \ll t^b$. 
\item With $t=e^v$ and $v\gqs 1$ we have \begin{equation} \label{ffd} \frac{f(t(x+1))}{f(t)} = \sum_{m=0}^{k-1} f_m(v) x^m +O( v^{\lambda k} |x|^k) \qquad (|x| \lqs 1/(2v^{\lambda}), \ k \in \Z_{\gqs 0}). \end{equation} \item Lastly, $f_m(v) \ll v^{\lambda m}$ for all $m \in \Z_{\gqs 0}$. \end{enumerate} The implied constant in (ii) depends only on $k$ and $f$; the one in (iii) depends only on $m$ and $f$. \end{adef} As we have seen, $f(t)=e^{-(\log(t))^2/16}$ is suitable with $b=0$ and $\lambda=1$. The example $f(t)=t^\beta$, for any real $\beta$, is suitable with $b=\beta$, $f_m(v)=\binom{\beta}{m}$ and $\lambda =0$. It is also easy to show that products of suitable functions are suitable. An example of a function that is not suitable is $f(t)=e^{-t}$ since $f_1(v)=-e^v$ is too large. Define \begin{equation} \label{same} a_r(f;v) :=\sum_{j=0}^{2r} \frac{(2j+2r-1)!!}{j!} \left( \frac{v^2}{v+1}\right)^{j+r} \sum_{i=j}^{2r} \hat{B}_{i,j}(\ell_3(v), \ell_4(v), \dots) \cdot f_{2r-i}(v), \end{equation} with $\ell_m(v)$ given in \e{covi} as usual. Then the following theorem generalizes Theorem \ref{ian}, and reduces to it when $f(t)=1$. \begin{theorem} \label{ian2} Let $f$ be a suitable function associated with $b$, $\lambda$ and coefficients $f_m$. Suppose $\alpha>0$ and set $u :=W(n/\alpha)$. Then as $n \to \infty$ we have \begin{equation} \label{mainif} I_{\alpha}(f;n) = \sqrt{2\pi} \frac{ u^{n+1} f(e^u) e^{u-n/u}}{\sqrt{(1+u)n}} \left( 1+ \sum_{r=1}^{R-1}\frac{a_r(f;u)}{n^r}+ O\left( \frac{u^{R(1+2\lambda)}}{n^R}\right) \right) \end{equation} where the implied constant depends only on $R \gqs 1$, $\alpha$ and $b$. Also $a_r(f;u)$ is defined in \e{same} and satisfies $a_r(f;u) \ll u^{r(1+2\lambda)}$. \end{theorem} \begin{proof} We can reuse the proof of Theorem \ref{ian}. As in \e{fay}, \begin{equation*} I_{\alpha}(f;n) = t_0 g(t_0) \int_{1/t_0-1}^\infty \exp\left(n\left[h_u(x)-\frac x u\right]\right) f(t_0(x+1)) \, dx. 
\end{equation*} Using $f(t)\ll t^b$, $t_0=n/(\alpha u)$ and \e{gmmb} we see that \begin{multline} \label{acox} \int_{1/t_0-1}^\infty \exp\left(n\left[h_u(x)-\frac x u\right]\right) f(t_0(x+1))\, dx \\ = \int_{-\delta}^\delta \exp\left(n\left[h_u(x)-\frac x u\right]\right) f(t_0(x+1)) \, dx + O\left(u \cdot n^b \exp\left(-\frac{\delta^2 n}{4u}\right)\right) \end{multline} for $0<\delta\lqs 1/100$ and $u\gqs 2$, where the implied constant depends only on $\alpha$ and $b$. This is as in Lemma \ref{oop} with an extra factor $n^b$ in the error. For $e_r(n,u)$ defined in \e{esum}, set \begin{equation} \label{var} \varepsilon_r(n,u):= \sum_{j=0}^r e_j(n,u) f_{r-j}(u). \end{equation} \begin{lemma} For all $x$ with $|x|\lqs \delta=n^{-2/5}$ we have \begin{equation*} \exp\left(\sum_{i=3}^k n \cdot \ell_i(u) x^i \right) \frac{f(t_0(x+1))}{f(t_0)} = \sum_{r=0}^{k-1} \varepsilon_r(n,u) x^r + O\left( \frac{1}{ n^{k/15}}\right) \end{equation*} as $n\to \infty$ for an implied constant depending only on $k$ and $\lambda$. \end{lemma} \begin{proof} Recall from \e{boo} that $t_0=e^u$ and $u=W(n/\alpha) = \log n - \log(\alpha u)$. So $u\gqs 1/\alpha$ implies $u \lqs \log n$. Hence, for $n$ large enough, $n^{-2/5} \lqs 1/(2 u^\lambda)$ and we may use \e{ffd} with $t=t_0$ and $v=u$. Next note the bounds \begin{equation}\label{pinf} \exp\left(\sum_{i=3}^k n \cdot \ell_i(u) x^i \right) = O(1), \qquad \frac{f(t_0(x+1))}{f(t_0)} = O(1) \end{equation} for $|x|\lqs \delta=n^{-2/5}$, where the left bound in \e{pinf} is shown after \e{twc} and the right bound follows from \e{ffd}. 
Therefore, using Lemma \ref{expx} and \e{ffd}, \begin{multline*} \exp\left(\sum_{i=3}^k n \cdot \ell_i(u) x^i \right) \frac{f(t_0(x+1))}{f(t_0)} = \left( \sum_{r=0}^{k-1} e_r(n,u) x^r\right) \left(\sum_{m=0}^{k-1} f_m(u) x^m \right) + O\left( \frac{1}{ n^{k/15}}+ \frac{u^{\lambda k}}{ n^{2k/5}}\right)\\ = \sum_{r=0}^{k-1} \varepsilon_r(n,u) x^r + \sum_{r=k}^{2k-2} x^r \sum_{j=r-k+1}^{k-1} e_j(n,u) f_{r-j}(u) + O\left( \frac{1}{ n^{k/15}}\right). \end{multline*} Finally \begin{multline*} \sum_{r=k}^{2k-2} x^r \sum_{j=r-k+1}^{k-1} e_j(n,u) f_{r-j}(u) \ll \sum_{r=k}^{2k-2} \delta^r \sum_{j=r-k+1}^{k-1} n^{j/3} u^{\lambda(r-j)} \\ \ll \sum_{r=k}^{2k-2} \delta^r n^{(k-1)/3} u^{\lambda(k-1)} \ll \delta^k n^{(k-1)/3} u^{\lambda(k-1)} =n^{-k/15-1/3}u^{\lambda(k-1)} \ll n^{-k/15}, \end{multline*} where we used the bound \e{and} for $e_j(n,u)$. This completes the proof. \end{proof} It is clear from \e{and}, part (iii) of Definition \ref{suit} and \e{var} that \begin{equation*} \varepsilon_r(n,u) \ll_r n^{r/3}. \end{equation*} We may now continue the proof of Theorem \ref{ian} with $e_r(n,u)$ replaced by $\varepsilon_r(n,u)$. In \e{cumb}, $a^*_m$ is replaced by $ a^*_m(f) = \sum_{j=0}^{\lfloor 2m/3\rfloor} c_{m,j}(f) n^j$ for \begin{equation*} c_{m,j}(f):= \frac{(2m-1)!!}{j!} \left( \frac{u^2}{u+1}\right)^m \sum_{i=j}^{2m-2j}\hat{B}_{i,j}(\ell_3(u), \ell_4(u), \dots) f_{2m-2j-i}(u). \end{equation*} Then $a_r(u)$ becomes $a_r(f;u)=\sum_{j=0}^{2r} c_{r+j,j}(f)$ and so \begin{equation} \label{puin2} a_r(f;u) =\sum_{j=0}^{2r} \frac{(2j+2r-1)!!}{j!} \left( \frac{u^2}{u+1}\right)^{j+r} \sum_{i=j}^{2r} \hat{B}_{i,j}(\ell_3(u), \ell_4(u), \dots) f_{2r-i}(u) \end{equation} agreeing with \e{same}. The proof is finished as in Theorem \ref{ian} with the only difference that the bound \e{puin} is replaced by $a_r(f;u) \ll u^{r(1+2\lambda)}$ which follows from \e{puin2}. 
\end{proof} \section{Tur\'an's expansion of $\Xi(z)$} \label{tura} For $t>0$ and $x\in \R$ set \begin{equation} \label{mega} \theta(t) :=\sum_{k\in \Z} e^{-\pi k^2 t}, \qquad \omega(t):= \frac 12 \left(3t \theta'(t) + 2t^2 \theta''(t) \right), \qquad \Phi(x):= 2e^{x/2}\omega(e^{2x}). \end{equation} Recall Tur\'an's coefficients $b_{2n}$ from \e{turanbn}. As shown in \cite[Eq. (2.1), Thm. 2.1]{Rom}, they have the formulas \begin{align*} b_{2n} & =\frac 1{2^{2n-1}(2n)!}\int_0^\infty x^{2n} e^{-x^2/4} \Phi(x)\, dx\\ & =\frac 1{2^{4n-1}(2n)!}\int_1^\infty (\log t)^{2n} \omega(t) t^{-3/4} e^{-(\log t)^2/16}\, dt. \end{align*} \begin{proof}[Proof of Theorem \ref{turanb}] Write \begin{equation*} \omega(t)= \sum_{m=1}^\infty \left(2\pi^2 m^4 t^2-3\pi m^2 t \right) e^{-\pi m^2 t} \qquad (t > 0). \end{equation*} For $t\gqs 1$, $\omega(t)$ is dominated by its first two terms and it is easy to show that \begin{equation*} \omega(t) = \left(2\pi^2 t^2-3\pi t \right) e^{-\pi t} + O\left( t^2 e^{-4\pi t}\right) \qquad (t \gqs 1). \end{equation*} With \begin{equation*} T_{\alpha,\beta}(n):= \int_1^\infty (\log t)^n e^{-\alpha t} t^{\beta} e^{-(\log t)^2/16}\, dt, \end{equation*} we obtain \begin{equation} \label{v9} b_{2n}=\frac 1{2^{4n-1}(2n)!} \left(2\pi^2 T_{\pi,5/4}(2n) -3\pi T_{\pi,1/4}(2n) +O\left( T_{4\pi,5/4}(2n)\right)\right). \end{equation} Theorem \ref{ian2} may be used to find the asymptotics of $ T_{\alpha,\beta}(n)$ since, by the discussion around Definition \ref{suit}, $f(t) = t^{\beta} e^{-(\log t)^2/16}$ is suitable with $b=\beta$, $\lambda = 1$ and \begin{equation} \label{pq2} f_m(v) = \sum_{j_1+j_2+j_3 = m} \binom{\beta}{j_1} p_{j_2}(v) q_{j_3}, \end{equation} using the formulas \e{pq}. We write $a_r(f;v)_\beta$ for the corresponding coefficients \e{same}, adding a subscript to keep track of the parameter $\beta$. 
Then for $u :=W(n/\alpha)$ we have \begin{equation} \label{bor} T_{\alpha,\beta}(n) = \sqrt{2\pi} \frac{ u^{n+1} e^{u(\beta+1)-n/u-u^2/16}}{\sqrt{(1+u)n}} \left( 1+ \sum_{r=1}^{R-1}\frac{a_r(f;u)_\beta}{n^r}+ O\left( \frac{u^{3R}}{n^R}\right) \right) \end{equation} as $n \to \infty$. For $2\pi^2 T_{\pi,5/4}(2n) -3\pi T_{\pi,1/4}(2n)$, $u$ becomes $w:=W(2n/\pi)$ and in the main term of \e{bor} we have $e^{w(\beta+1)}$ for $\beta =5/4,1/4$. Using the identity $e^w=2n/(\pi w)$ gives \begin{equation} \label{dent} e^{w(1/4+1)} = e^{w(5/4+1)}\frac{\pi w}{2n} \end{equation} which lets us add the expressions. The result is \begin{multline} \label{bor2} 2\pi^2 T_{\pi,5/4}(2n) -3\pi T_{\pi,1/4}(2n) \\ =2 \pi^{5/2}\frac{ w^{2n+1}}{\sqrt{(w+1) n}} e^{9w/4-2n/w-w^2/16} \left( 1+ \sum_{r=1}^{R-1}\frac{\tau_r(w)}{n^r}+ O\left( \frac{w^{3R}}{n^R}\right) \right) \end{multline} where \begin{equation} \label{pq3} \tau_r(w) := 2^{-r-1}\left(2 a_r(f;w)_{5/4}-3w \cdot a_{r-1}(f;w)_{1/4}\right) \ll w^{3r}. \end{equation} Comparing \e{bor2} with the error \begin{equation*} T_{4\pi,5/4}(2n) \ll \frac{ \tilde{w}^{2n+1}}{\sqrt{(\tilde{w}+1) n}} e^{9\tilde{w}/4-2n/\tilde{w}-\tilde{w}^2/16} \qquad \text{for} \qquad \tilde{w}:=W(2n/(4\pi)), \end{equation*} in \e{v9}, we see the ratio is bounded by \begin{equation} \label{stot} \frac{\tilde{w}}{w} \sqrt{\frac{w+1}{\tilde{w}+1}} \exp\left(\frac{9(\tilde{w}-w)}{4}-\frac{\tilde{w}^2-w^2}{16}\right) \left( \frac{\tilde{w}}{w} e^{1/w-1/\tilde{w}}\right)^{2n}. \end{equation} The inequalities needed to estimate \e{stot} are developed next; they will also be used in Sect. \ref{hyp-jen-poly}. Beginning with $1+x \lqs e^x$ for $x\gqs 0$, we obtain \begin{equation}\label{ineqx} \log(1+x) \lqs x \qquad (x\gqs 0) \end{equation} and also $(1+x)^{1/x}\lqs e$ for $x>0$. Hence \begin{equation}\label{ineqy} \left( 1+\frac ab \right)^b \lqs e^a \qquad (a\gqs 0, \ b>0). 
\end{equation} \begin{lemma} \label{woo} For $x> 0$ and $c\gqs 1$ we have \begin{equation} \label{oreg} \left(1-\frac 1{W(x)}\right)\log c \lqs W(cx)-W(x)\lqs \log c. \end{equation} \end{lemma} \begin{proof} By taking logs, the identities $W(x) e^{W(x)}=x$ and $W(cx) e^{W(cx)}=c x$ imply \begin{align} \log W(x)+ W(x) & = \log x, \label{wbnd}\\ \log W(cx)+ W(cx) & = \log x + \log c. \notag \end{align} Subtracting the first equation from the second gives \begin{equation*} W(cx)-W(x) - \log c = \log \frac{W(x)}{W(c x)} \end{equation*} and the right side is $\lqs 0$ since the Lambert function is increasing. This proves the right inequality in \e{oreg}. Then \begin{equation*} \log \frac{W(c x)}{W(x)} \lqs \log \frac{W( x)+\log c}{W(x)} = \log\left( 1+ \frac{\log c}{W(x)}\right) \lqs \frac{\log c}{W(x)} \end{equation*} where we used \e{ineqx}. The left inequality in \e{oreg} is then a result of: \begin{equation*} - \frac{\log c}{W(x)} \lqs -\log \frac{W(c x)}{W(x)} = \log \frac{W(x)}{W(c x)} =W(cx)-W(x) - \log c. \qedhere \end{equation*} \end{proof} Further inequalities for the Lambert function are contained in \cite{Hoor}, for example. Also note that \e{wbnd} and $W(e)=1$ imply that $W(x)\lqs \log x$ for all $x\gqs e$. \begin{lemma} \label{www} For $w=W(2n/\pi)$, $\tilde{w}=W(2n/(4\pi))$ the expression \e{stot} is $ \ll e^{-2n/\log n}$ for large $n$. \end{lemma} \begin{proof} Lemma \ref{woo} implies that $(1-1/\tilde{w})\log 4 \lqs w-\tilde{w}\lqs \log 4$. If we write $\tilde{w} = w-\delta$ then, by choosing $n$ large enough, we can certainly ensure $1< \delta <2$. We have \begin{equation*} \frac{\tilde{w}}{w} \sqrt{\frac{w+1}{\tilde{w}+1}} \exp\left(\frac{9(\tilde{w}-w)}{4}-\frac{\tilde{w}^2-w^2}{16}\right) \ll \exp\left(-\frac{9\delta}{4}+\frac{\delta(2w-\delta)}{16}\right) \ll e^{\delta w/8}< e^{w/4}. \end{equation*} Also \begin{equation*} w=W(2n/\pi) \lqs \log(2n/\pi)<\log n, \end{equation*} so that $ e^{w/4}< n^{1/4}$. 
For the remaining factor in \e{stot} \begin{equation} \label{h7} \left( \frac{\tilde{w}}{w} e^{1/w-1/\tilde{w}}\right)^{2n} = \left( 1-\frac{\delta}{w}\right)^{w \cdot 2n/w} \exp\left( -\frac{2\delta n}{w \tilde{w}} \right). \end{equation} The bound \begin{equation*} \left( 1-\frac{\delta}{w}\right)^{w} \lqs e^{-\delta} \qquad (w>0, 0\lqs \delta/w <1) \end{equation*} follows by taking logs and using the inequality $\log(1-x)\lqs -x$ for $0\lqs x < 1$. Therefore \e{h7} is at most \begin{equation*} \exp\left(-\frac{2\delta n}{w} -\frac{2\delta n}{w^2} \right) < \exp\left(-\frac{2 n}{\log n} -\frac{2 n}{\log^2 n} \right). \end{equation*} Simplifying the final bound with $n^{1/4} e^{-2n/\log^2 n} \ll 1$ completes the proof of the lemma. \end{proof} We have shown that the error term in \e{v9} is exponentially small compared to the main term, and so it may be included inside the error in \e{bor2}. Simplifying with $e^{w/2} = \sqrt{2n/(\pi w)}$ gives the final result \begin{equation} \label{upo} b_{2n}=4\pi^{2}\frac{ e^{7w/4-w^2/16}}{(2n)!}\sqrt{\frac{ 2w}{w+1}} \left( \frac{w}{4e^{1/w}}\right)^{2n} \left( 1+ \sum_{k=1}^{K-1}\frac{\tau_k(w)}{n^k}+ O\left( \frac{w^{3K}}{n^K}\right) \right) \end{equation} and this completes the proof of Theorem \ref{turanb}. \end{proof} The main term of \e{upo} agrees with \cite[Thm. 2.7]{Rom}. We could also replace $1/(2n)!$ in \e{upo} with its asymptotic expansion \e{gmx2}. The numbers $\tau_r(w)$ are given explicitly by \e{pq3}, using \e{pq}, \e{pq2} to define its components $f_m(v)$ and then $a_r(f;v)_\beta$ with \e{same}. For example \begin{equation*} \tau_1(w) = -\frac{-3 w^6+78 w^5+217 w^4+468 w^3+284 w^2-32}{768 (w+1)^3}. \end{equation*} An example of the accuracy of Theorem \ref{turanb} for $2n=2000$ and different values of $K$ is displayed in Table \ref{jeb}. 
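The two integral representations of $b_{2n}$ quoted at the start of this section are related by the substitution $t=e^{2x}$, which gives an easy consistency check. The Python sketch below (illustrative only; the $\theta$-series defining $\omega$ is truncated at $m=8$, and the integration cutoffs and step counts are ad hoc) evaluates $b_2$ from both formulas.

```python
import math

def omega(t, M=8):
    """omega(t) = sum_{m>=1} (2 pi^2 m^4 t^2 - 3 pi m^2 t) e^{-pi m^2 t}, truncated at m = M."""
    return sum((2 * math.pi**2 * m**4 * t * t - 3 * math.pi * m**2 * t)
               * math.exp(-math.pi * m**2 * t) for m in range(1, M + 1))

def simpson(f, a, b, m):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

n = 1  # compute b_2 in both ways

# b_{2n} = 2^{1-2n}/(2n)! * int_0^inf x^{2n} e^{-x^2/4} Phi(x) dx,  Phi(x) = 2 e^{x/2} omega(e^{2x}).
bx = simpson(lambda x: x**(2 * n) * math.exp(-x * x / 4) * 2.0 * math.exp(x / 2)
             * omega(math.exp(2 * x)), 0.0, 3.0, 40000) / (2**(2 * n - 1) * math.factorial(2 * n))

# b_{2n} = 2^{1-4n}/(2n)! * int_1^inf (log t)^{2n} omega(t) t^{-3/4} e^{-(log t)^2/16} dt.
bt = simpson(lambda t: math.log(t)**(2 * n) * omega(t) * t**(-0.75)
             * math.exp(-math.log(t)**2 / 16), 1.0, 60.0, 60000) / (2**(4 * n - 1) * math.factorial(2 * n))

print(bx, bt)  # the two representations agree
```

Both integrands decay like $e^{-\pi t}$, so the truncated ranges lose nothing at working precision, and the two values coincide up to quadrature error.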
\begin{table}[ht] \centering \begin{tabular}{ccc} \hline $K$ & Theorem \ref{turanb} & \\ \hline $1$ & $2.37\textcolor{gray}{86738117568138992} \times 10^{-5738}$ & \\ $3$ & $2.373211179\textcolor{gray}{9604212549} \times 10^{-5738}$ & \\ $5$ & $2.373211179182932\textcolor{gray}{4664} \times 10^{-5738}$ & \\ $7$ & $2.3732111791829329059 \times 10^{-5738}$ & \\ \hline \phantom{$b_{2000}$} & $ 2.3732111791829329059 \times 10^{-5738}$ & $b_{2000}$\\ \hline \end{tabular} \caption{The approximations of Theorem \ref{turanb} to $b_{2000}$.} \label{jeb} \end{table} \section{The Riemann $\xi$ function} Recall the definitions of $\omega$ and $\Phi$ in \e{mega}. The identity \begin{equation} \label{mom2} \xi^{(2n)}(1/2) = 2^{1-2n} \int_1^\infty (\log t)^{2n} \omega(t) t^{-3/4} \, dt \end{equation} follows easily from Riemann's formulas of 1859. It also follows from the elegant expression \begin{equation} \label{mom} \xi^{(n)}(1/2) = \int_{-\infty}^\infty \Phi(y) \cdot y^{n} \, dy \qquad \qquad (n \in \Z_{\gqs 0}) \end{equation} by changing variables and recalling that $\Phi(-y) = \Phi(y)$; see \cite[Eqs. (1.9), (6.1)]{Rom}. The identity \e{mom} is implicit in Jensen's formula from \cite[p. 189]{Je13}: \begin{equation*} g_d^*(\Xi;x)= \sum_{j=0}^d \binom{d}{j} i^j \xi^{(j)}(1/2) \cdot x^{d-j} = \int_{-\infty}^\infty \Phi(y) \cdot (x+iy)^{d} \, dy. \end{equation*} \begin{proof}[Proof of Theorem \ref{mainthm2}] We may follow the proof of Theorem \ref{turanb} in Sect. \ref{tura}. Setting \begin{equation*} I_{\alpha,\beta}(n):= \int_1^\infty (\log t)^n e^{-\alpha t} t^{\beta} \, dt, \end{equation*} we obtain from \e{mom2}, as in \e{v9}, \begin{equation} \label{v9x} \xi^{(2n)}(1/2)= 2^{1-2n} \left(2\pi^2 I_{\pi,5/4}(2n) -3\pi I_{\pi,1/4}(2n) +O\left( I_{4\pi,5/4}(2n)\right)\right). \end{equation} We noted after Definition \ref{suit} that $f(t)=t^\beta$ is suitable with $b=\beta$, $f_m(v)=\binom{\beta}{m}$ and $\lambda =0$. 
From \e{same} we find \begin{equation} \label{same2} a_r(f;v)_\beta =\sum_{j=0}^{2r} \frac{(2j+2r-1)!!}{j!} \left( \frac{v^2}{v+1}\right)^{j+r} \sum_{i=j}^{2r} \hat{B}_{i,j}(\ell_3(v), \ell_4(v), \dots) \binom{\beta}{2r-i} \end{equation} and by Theorem \ref{ian2} \begin{equation} \label{borx} I_{\alpha,\beta}(n) = \sqrt{2\pi} \frac{ u^{n+1} e^{u(\beta+1)-n/u}}{\sqrt{(1+u)n}} \left( 1+ \sum_{r=1}^{R-1}\frac{a_r(f;u)_\beta}{n^r}+ O\left( \frac{u^{R}}{n^R}\right) \right) \end{equation} as $n \to \infty$ for $u :=W(n/\alpha)$. The asymptotics for the terms $ I_{\pi,5/4}(2n)$ and $I_{\pi,1/4}(2n)$ may be combined, as in \e{bor2} using the identity \e{dent}, to produce \begin{equation} \label{bor2x} 2\pi^2 I_{\pi,5/4}(2n) -3\pi I_{\pi,1/4}(2n) =2 \pi^{5/2}\frac{ w^{2n+1} e^{9w/4-2n/w}}{\sqrt{(w+1) n}} \left( 1+ \sum_{r=1}^{R-1}\frac{\mu_r(w)}{n^r}+ O\left( \frac{w^{R}}{n^R}\right) \right) \end{equation} where $w:=W(2n/\pi)$ and \begin{equation} \label{pq3x} \mu_r(w) := 2^{-r-1}\left(2 a_r(f;w)_{5/4}-3w \cdot a_{r-1}(f;w)_{1/4}\right) \ll w^{r}. \end{equation} Comparing \e{bor2x} with the error \begin{equation*} I_{4\pi,5/4}(2n) \ll \frac{ \tilde{w}^{2n+1}}{\sqrt{(\tilde{w}+1) n}} e^{9\tilde{w}/4-2n/\tilde{w}} \qquad \text{for} \qquad \tilde{w}:=W(2n/(4\pi)), \end{equation*} in \e{v9x}, we see the ratio is bounded by \begin{equation} \label{stotx} \frac{\tilde{w}}{w} \sqrt{\frac{w+1}{\tilde{w}+1}} \exp\left(\frac{9(\tilde{w}-w)}{4}\right) \left( \frac{\tilde{w}}{w} e^{1/w-1/\tilde{w}}\right)^{2n}. \end{equation} Then \e{stotx} is smaller than \e{stot} and so $ \ll e^{-2n/\log n}$ for large $n$ by Lemma \ref{www}. This exponentially small error may therefore be included in the error in \e{bor2x}. Rearranging slightly with $e^{w/2} = \sqrt{2n/(\pi w)}$ completes the proof of Theorem \ref{mainthm2}. \end{proof} The coefficient $ \mu_1(w)$ may be computed as \begin{equation*} \mu_1(w) = -\frac{w^4+66 w^3+53 w^2-8}{192 (w+1)^3}. 
\end{equation*} \begin{proof}[Proof of Theorem \ref{mainthm3}] Recall that \begin{equation} \label{gamma} \g(n):= \frac{n!}{(2n)!}\xi^{(2n)}(1/2). \end{equation} The asymptotic expansion of the gamma function goes back to Laplace, with \begin{equation} \label{gmx} \G(n+1)= \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left(1+\frac{\kappa_1}{n}+\frac{\kappa_2}{n^2}+ \cdots + \frac{\kappa_{k}}{n^{k}} +O\left(\frac{1}{n^{k+1}}\right)\right) \end{equation} often called Stirling's formula. There are many ways to express the coefficients $\kappa_m$ and a simple example is \begin{equation} \label{stx} \kappa_m = (2m-1)!! \sum_{j=0}^{2m} \binom{-m-1/2}{j} \hat{B}_{2m,j}\left(-\frac{2}{3},\frac{2}{4},-\frac{2}{5},\frac{2}{6}, \cdots \right) \end{equation} from \cite[Eq. (8.1)]{OSper}. The same coefficients appear in the expansion of the reciprocal of gamma but with alternating signs. One way to see this is to note that the well-known asymptotic series for $\log \G(n+1)$ involves an odd function; see also \cite[Sect. VIII.3]{Fl09}. Then we have \begin{equation} \label{gmx2} \frac 1{\G(n+1)}= \frac 1{\sqrt{2\pi n}}\left(\frac{e}{n}\right)^n \left(1-\frac{\kappa_1}{n}+\frac{\kappa_2}{n^2}- \cdots + (-1)^{k}\frac{\kappa_{k}}{n^{k}} +O\left(\frac{1}{n^{k+1}}\right)\right). \end{equation} Let \begin{equation} \label{stx2} \kappa^*_k := \sum_{j=0}^k (-2)^{-j} \kappa_j \cdot \kappa_{k-j}, \qquad c_k(w):= \sum_{j=0}^k \mu_j(w) \cdot \kappa^*_{k-j}. \end{equation} Formulas \e{gmx}, \e{gmx2} imply \begin{equation} \label{n2} \frac{n!}{(2n)!} =\frac 1{\sqrt{2}}\left(\frac{e}{4n}\right)^n \left(1+\frac{\kappa^*_1}{n}+\frac{\kappa^*_2}{n^2}+ \cdots + \frac{\kappa^*_{k}}{n^{k}} +O\left(\frac{1}{n^{k+1}}\right)\right). \end{equation} Using \e{n2} and Theorem \ref{mainthm2} in \e{gamma} then proves Theorem \ref{mainthm3}. \end{proof} The coefficients $c_k(w)$ are completely described by the formulas \e{covi}, \e{same2}, \e{pq3x}, \e{stx} and \e{stx2}. 
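As a small numerical check of \e{n2} (not part of the proof): with the classical value $\kappa_1=1/12$, the convolution \e{stx2} (taking $\kappa_0=1$) gives $\kappa^*_1=\kappa_1-\kappa_1/2=1/24$, and truncating \e{n2} after the $1/n$ term is already accurate to $O(1/n^2)$. The Python sketch below uses exact rational arithmetic for $n!/(2n)!$; the choice $n=100$ is illustrative.

```python
import math
from fractions import Fraction

# Classical Stirling coefficients: kappa_0 = 1, kappa_1 = 1/12.  The convolution
# formula for kappa*_k then gives kappa*_1 = kappa_1 + (-1/2) * kappa_1 = 1/24.
kappa_star_1 = 1.0 / 24.0

n = 100
exact = float(Fraction(math.factorial(n), math.factorial(2 * n)))  # n!/(2n)! as a float
approx = (1 / math.sqrt(2)) * (math.e / (4 * n))**n * (1 + kappa_star_1 / n)
print(exact / approx)  # 1 + O(1/n^2); relative error around 1e-7 at n = 100
```

The observed relative error matches the next coefficient $\kappa^*_2/n^2$ in size.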
The first was given in \e{c1w} and the next one is \begin{equation*} \textstyle c_2(w) = -\frac{1295 w^8+7804 w^7+21682 w^6+40124 w^5+29911 w^4+13712 w^3+2080 w^2-768 w-256}{73728 (w+1)^6}. \end{equation*} \section{Zeros of $P^{d,n}(X)$} \label{hyp-jen-poly} In this section we develop a condition for $P^{d,n}(X)$ to have only real roots. It is based on the next result, due to Tur\'an from \cite[Thm. III]{Tu59}. \begin{theorem} Let $G(z)=\sum_{j=0}^d c_j H_j(z)$ for real numbers $c_j$ and $H_j(z)$ the $j$th Hermite polynomial. Then the zeros of $G(z)$ are real and simple when \begin{equation} \label{toy} \sum_{j=0}^{d-2} 2^j j! c_j^2 < 2^d(d-1)! c_d^2. \end{equation} \end{theorem} \begin{proof}[Proof of Theorem \ref{chem}] The condition \e{toy} for $P^{d,n}(X)$ to be hyperbolic, with $c_j=\binom{d}{j} \g(n+d-j)$, is \begin{equation} \label{bolic} \sum_{j=2}^d \frac{d}{2^j j!} \binom{d}{j} \frac{\g(n+j)^2}{\g(n)^2} < 1. \end{equation} It follows from Theorem \ref{mainthm3} with $K=1$ and $w :=W(2n/\pi)$ that \begin{equation*} \g(n) = \Psi(n) \left( 1+ O\left( \frac{\log(n)}{n}\right) \right) \qquad \text{for} \qquad \Psi(n):= 4\pi^{2}e^{7w/4} \sqrt{\frac {w}{w+1}} \left(\frac{e w^2}{16 n e^{2/w}} \right)^n. \end{equation*} As a consequence, for any $\epsilon>0$ there exists an $N$ so that \begin{multline} \label{fut} \frac{\g(n+j)}{\g(n)} \lqs \frac{\Psi(n+j)}{\Psi(n)} (1+\epsilon) = \sqrt{ \frac{1+1/w}{1+1/w_j}} \left(\frac{ w_j^2}{w^2} \frac{ n}{n+j} \right)^n \\ \times \exp\left(\frac{7(w_j-w)}{4}+ \frac{2n(w_j-w)}{w w_j}\right) \left(\frac{e w_j^2}{16 (n+j) e^{2/w_j}} \right)^j (1+\epsilon) \end{multline} for all $n\gqs N$ and all $j\gqs 0$, where we set $w_j :=W(2(n+j)/\pi)$. \begin{lemma} \label{nba} Suppose $c\gqs 1$. For $n$ large enough and $0\lqs j \lqs n^c-n$, we have \begin{equation*} \frac{\g(n+j)}{\g(n)} < 1.01 \left(\frac{3 c^2\log^2(n)}{ 16 n} \right)^j. 
\end{equation*} \end{lemma} \begin{proof} By Lemma \ref{woo} we have $w_j-w\lqs \log(1+\lambda)$ for $\lambda:=j/n$. Examining the components of \e{fut} we first find, using \e{ineqx}, \e{ineqy}, \begin{equation*} \left(\frac{ w_j}{w} \right)^{2n} \lqs \left(\frac{ w +\log(1+\lambda)}{w} \right)^{2n} = \left(1+\frac{ \log(1+\lambda)}{w} \right)^{w \cdot 2n/w} \lqs e^{\log(1+\lambda) \cdot 2n/w} \lqs e^{2j/w}. \end{equation*} Next, \begin{align*} \exp\left(\frac{7(w_j-w)}{4}+ \frac{2n(w_j-w)}{w w_j}\right) & \lqs \exp\left(\frac{7\log(1+\lambda)}{4}+ \frac{2n \log(1+\lambda)}{w^2}\right) \\ & \lqs \exp\left(\frac{7j}{4n}+ \frac{2j}{w^2}\right). \end{align*} Bounding trivially gives \begin{equation*} \sqrt{ \frac{1+1/w}{1+1/w_j}} \left( \frac{ n}{n+j} \right)^n \lqs \sqrt{ 1+1/w}. \end{equation*} Assembling our bounds, and with $n$ large enough that $ (1+\epsilon) \sqrt{ 1+1/w} \lqs 1.01$, we have shown \begin{equation} \label{yoho} \frac{\g(n+j)}{\g(n)} \lqs 1.01 \left(\frac{ w_j^2}{16 (n+j)} \exp\left[\frac{7}{4n}+\frac 2w+ \frac{2}{w^2} - \frac{2}{w_j} +1\right] \right)^j. \end{equation} For $n$ sufficiently large the exponential term can be made close to $e<3$ and \e{yoho} is at most \begin{equation*} 1.01 \left(\frac{ 3 w_j^2}{16 (n+j)} \right)^j < 1.01 \left(\frac{ 3\log^2(2n^c/\pi)}{16 n} \right)^j. 
\qedhere \end{equation*} \end{proof} Applying Lemma \ref{nba} with $c=2$ (it will become apparent that any $c\gqs 4/3$ will do), the left side of \e{bolic} is \begin{align} \sum_{j=2}^d \frac{d}{2^j j!} \binom{d}{j} \frac{\g(n+j)^2}{\g(n)^2} & < \sum_{j=2}^d \frac{d}{2^j j!} \frac{d^j}{j!} \cdot 1.03 \left(\frac{ 9\log^4(n)}{ 16 n^2} \right)^j \notag\\ & = 1.03 \sum_{j=2}^d \frac{d}{ (j!)^2} \left(\frac{9 d\log^4(n)}{ 32 n^2} \right)^j.\label{bla} \end{align} If we choose $n$ large enough so that \begin{equation*} \left(\frac{9 d\log^4(n)}{ 32 n^2} \right)^2 \lqs \frac 3d \end{equation*} then \e{bla} is at most \begin{equation*} 1.03 \sum_{j=2}^d \frac{d}{ (j!)^2} \left(\frac{\sqrt{3}}{ \sqrt{d}} \right)^j < 1.03 \sum_{j=2}^\infty \frac{3^{j/2}}{ (j!)^2} <0.94<1. \end{equation*} This verifies condition \e{bolic} and Theorem \ref{chem} follows. \end{proof}
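The closing numerical estimate can be confirmed directly: summing $3^{j/2}/(j!)^2$ over $j\gqs 2$ gives approximately $0.911$, so $1.03$ times the sum is indeed below $0.94$. A quick Python check (the truncation at $j=40$ is far more than enough, since the terms decay superexponentially):

```python
import math

# Sum 3^{j/2}/(j!)^2 for j >= 2; the terms decay superexponentially.
s = sum(3**(j / 2) / math.factorial(j)**2 for j in range(2, 40))
print(s, 1.03 * s)  # s is about 0.911 and 1.03*s stays below 0.94
```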
\section{Introduction} Training neural networks with gradient descent is remarkably effective in a multitude of applications; see e.g. \cite{KrizhevskySutskeverHinton2012,HeZhangRenEtAl2016} for image classification, \cite{SilverHuangMaddisonEtAl2016} for reinforcement learning, \cite{SutskeverVinyalsLe2014,BahdanauChoBengio2015} for machine translation or \cite{GoodfellowBengioCourville2016} for a general overview. This is somewhat surprising because the objective function is generally non-convex and neural network training is $NP$-hard in the worst case \cite{BlumRivest1989}. The current literature contains a growing number of ideas and approaches to explain this phenomenon. Some experimental studies indicate that the loss function of common learning problems is more benign than one might assume at first sight \cite{GoodfellowVinyals2015,ZhangBengioHardtEtAl2017}. Other works \cite{ChoromanskaHenaffMathieuEtAl2015,LaurentBrecht2018,AroraCohenGolowichEtAl2019} provide a thorough understanding of simplified networks. For non-simplified general networks, rigorous convergence results can be obtained under the assumption of over-parametrization \cite{SoudryCarmon2016,SafranShamir2018,LiLiang2018,Allen-ZhuLiSong2019,DuLeeLiEtAl2019}. Despite this progress, many practical networks do not satisfy all necessary assumptions and a solid understanding of the training behaviour remains an open question. In this article, we address the problem from a different perspective. Instead of providing conditions on networks that guarantee convergence, we consider the question of whether we can modify a given network so that it trains well. This is loosely inspired by practical network training, where we do not consider one single fixed network either. More often, we experiment with a multitude of hyper-parameters and architectural elements such as drop-out, attention, skip connections, etc. until we obtain satisfactory results. 
The question is then not necessarily whether every network trains well, but rather whether we can find one that does. To address this question, we provide the following universality result: For a given \emph{primary network} $f_\theta$ with weights $\theta$ and learning task, we assume that there is a Turing machine $TM$ that can compute good network weights $\bar{\theta} = TM(x,y)$ given inputs $x_i$ and labels $y_i$, $i=1, \dots, n$. This could be gradient descent training, a global optimization method or any other algorithm specifically tailored to the network and problem at hand. The existence of such a Turing machine merely asserts that some good training algorithm exists. We then construct an extended network that contains the primary network as a sub-network with the following three properties: \begin{enumerate} \item Gradient descent training of the extended network adjusts the parameters $\theta$ of the primary network $f_\theta$ to the output $\bar{\theta} = TM(x,y)$ of the Turing machine. \item After gradient descent training, a forward execution of the extended network with input $x$ yields the output $f_{\bar{\theta}}(x)$, i.e. the output of the primary network $f_{\bar{\theta}}$ with weights $\bar{\theta} = TM(x,y)$ chosen by the Turing machine. \item The number of gradient descent steps matches the number of Turing machine steps. \end{enumerate} In summary, if there exists an algorithm that produces good weights from the learning data, these weights can be computed by gradient descent training on a properly extended network. The extended network is carefully handcrafted, which, of course, is not intended as a practical algorithm, but rather as a universality result to demonstrate the capabilities of gradient descent in combination with a properly chosen network. From a practical perspective, there are multiple algorithms that aim at automatically generating good neural networks for a specific problem. 
Although these methods often do not optimize for gradient descent convergence directly, the extended network can be seen as a somewhat idealized potential outcome of such methods. One example is meta-learning \cite{HospedalesAntoniouMicaelliEtAl2020}, in particular methods such as MAML \cite{FinnAbbeelLevine2017} that pre-select network weights so that they can be adapted to a specific problem by one step of gradient descent. Another example is neural architecture search \cite{RealMooreSelleEtAl2017,ZophLe2017,ZophVasudevanShlensEtAl2018,WuDaiZhangEtAl2019}, which seeks to automatically generate neural networks with state-of-the-art performance for a given learning task. In transfer learning \cite{DonahueJiaVinyalsEtAl2014,YosinskiCluneBengioEtAl2014}, neural networks contain weights that have been pre-trained on related problems. Another parallel can be drawn to biologically plausible learning methods \cite{LillicrapCowndenTweed2016,XiaoChenLiaoEtAl2019}. They often use dedicated ``feedback networks'' as a replacement for the back-propagation of gradients, see e.g. \cite{BellecScherrHajekEtAl2019}. Some examples include target propagation \cite{LeeZhangFischerEtAl2015} and synthetic gradients \cite{JaderbergCzarneckiOsinderoEtAl2017}. Although the extended networks of this article do use back-propagation, a ``switch'' at the end of the network prevents gradients from being back-propagated directly to the primary network $f_\theta$. Instead, they are propagated into a parallel network where they trace the Turing machine computation and ultimately influence the weights of $f_\theta$. Therefore, the network extension can also be understood as a ``feedback network''. ``Universality'' results for neural networks usually refer to their capacity to approximate arbitrary functions, with varying restrictions on network width, depth or activation functions. See \cite{Cybenko1989,HornikStinchcombeWhite1989,Barron1993} for some early results and e.g. 
\cite{Zhou2020} for convolutional networks, \cite{LuPuWangEtAl2017,HaninSellke2017} for deep networks and \cite{FinnLevine2017} for other learning regimes such as model-agnostic meta-learning. More quantitative results are also available, e.g. in \cite{DaubechiesDeVoreFoucartEtAl2019} and the references therein. In contrast to this body of work, the universality result in this article is concerned with a quite different question: We want to know if gradient descent training can always proceed to find desirable network weights, not if neural networks can approximate an arbitrary function. There are also numerous connections between neural networks and Turing machines. Similarly to universal function approximation, neural networks can simulate arbitrary Turing machines \cite{SiegelmannSontag1995}. Other results supplement neural networks with readable and writable memory \cite{GravesWayneDanihelka2014,CollierBeel2018}, trainable by gradient descent. These latter neural Turing machines aim at making neural networks even more powerful in practical applications. Although they are a natural choice for the extended network of this article, for simplicity we confine ourselves to feed-forward networks with sufficient width to hold the full tape during the computation of $TM(x,y)$ for all inputs $x$ and labels $y$ of fixed size. The paper is organized as follows: Section \ref{sec:main} contains the main result of the paper and Section \ref{sec:extended-network-overview} gives a brief overview of the construction of the extended network. In Section \ref{sec:TM-via-gradient-descent}, we construct two loss functions that allow us to trace steps of Turing machines by gradient descent. Finally, in Section \ref{sec:supervised-learning}, we prove the main result of the article. 
\section{The Main Result} \label{sec:main} Let us consider the following standard learning task: Given samples $x_i \in \mathbb{F}^M$, $i=1, \dots, n$ and corresponding labels $y_i \in \mathbb{F}^m$, we want to train the parameters $\theta$ of a \emph{primary network} $f_\theta: \mathbb{F}^M \to \mathbb{F}^m$ so that $f_\theta(x)$ predicts labels for any input $x \in \mathbb{F}^M$. In order to avoid problems with Turing computability, we use finite precision floating point numbers $\mathbb{F}$ instead of real numbers $\mathbb{R}$. The structure of the parametric function $f_\theta$ is not important for the following considerations, although we are mainly interested in neural networks with input $x \in \mathbb{F}^M$ and network weights $\theta \in \mathbb{F}^W$. For notational convenience, we combine all $x_i$ and $y_i$ into two matrices $x \in \mathbb{F}^{n \times M}$ and $y \in \mathbb{F}^{n \times m}$. We assume that there is a computable function $TM(x,y) \to \theta$ that for given inputs $x$ and labels $y$ of a learning task produces suitable weights $\theta$. For example, the function $TM$ could simply return the result of a standard gradient descent training, a global optimization method or any algorithm that is specialized for the problem at hand. However, we do not consider the meaning of ``suitable'' or the choice of the algorithm any further. Instead, we show that whatever algorithm is chosen, its resulting weights $\theta$ can be reproduced by gradient descent training of an extended network. 
This \emph{extended network} is a parametric function $F$ with the following properties: \begin{equation} \begin{aligned} & \text{$F: \mathbb{R}^M \times \mathcal{D} \to \mathbb{R}^m$ for some parameter domain $\mathcal{D}$} \\ & \text{$F$ is composed of: $f_\theta$, $ReLU$, square root and product non-linearities,}\\ & \quad\quad \text{stop gradient operations and quantization/de-quantization.} \end{aligned} \label{eq:intro:enlarged-network-properties} \end{equation} The stop gradient operations are used to provide a directionality for read/write operations of the Turing machine and are readily available in contemporary deep learning libraries. The quantization is never differentiated and is used to convert floating point inputs $\mathbb{F}$ to bit sequences for the Turing machine. Of course, on any computer any number is internally already in binary format, which can be passed directly to the Turing machine. Formally, one can also use a neural network to convert floating point numbers to a bit sequence and vice versa, as discussed in Appendix \ref{sec:quantization}. The weights $(s, \thz) \in \mathcal{D}$ of the extended network are constrained to the Cartesian product $\mathcal{D} = S \times \mathbb{R}^{4\tau + n \times m}$ of a simplex $S$ and a vector space. The dimensions and contents will be determined later; at this point, we merely need the simplex structure to define an appropriate gradient descent method. To this end, we use the least squares loss \begin{equation} \ell(s, \thz) = \frac{1}{2} \|F(x,(s, \thz)) - y\|^2 \label{eq:lsq-loss} \end{equation} on the original dataset $(x,y)$. 
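The float-to-bit conversion mentioned above can be pictured with a short sketch. The following Python snippet is illustrative only: the paper's quantization in Appendix \ref{sec:quantization} is built from network operations, and the function names here are hypothetical. It maps a 32-bit float to a sequence over the tape alphabet $\{-1,1\}$ and back.

```python
import struct

def float_to_bits(x: float) -> list:
    """Encode a 32-bit float as a tape-friendly sequence over {-1, +1}."""
    (as_int,) = struct.unpack(">I", struct.pack(">f", x))
    return [1 if (as_int >> k) & 1 else -1 for k in reversed(range(32))]

def bits_to_float(bits) -> float:
    """Inverse map: recover the float from its {-1, +1} bit sequence."""
    as_int = 0
    for b in bits:
        as_int = (as_int << 1) | (1 if b == 1 else 0)
    (x,) = struct.unpack(">f", struct.pack(">I", as_int))
    return x
```

Since both maps are exact bit-level inverses, the round trip loses no information, which is all the construction requires of the quantization/de-quantization pair.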
Since the weight $s$ is constrained to a simplex, we use the conditional gradient, or Frank-Wolfe, algorithm \begin{equation} \begin{aligned} s_{k+1} & = s_k + [\argmin_{\sigma \in S} \partial_{\sigma - s_k} \ell(s_k, \thz_k) - s_k] \\ \thz_{k+1} & = \thz_k - \alpha \odot \nabla_t \ell(s_k, \thz_k)\\ \end{aligned} \label{eq:intro:gd-tm-supervised-learning} \end{equation} with some fixed learning rate $\alpha \in \mathbb{R}^{|\mathcal{D}|}$ and component-wise multiplication $\odot$. In addition to the extended network, the construction of this article provides an explicit stopping criterion \begin{equation} \ell(s_k, \thz_k) \le B_{stop} \label{eq:intro:gd-tm-supervised-stopping} \end{equation} for some $B_{stop}$ larger than the final training error $\|f_{TM(x,y)}(x) - y\| \le B_{stop}$. The following proposition is the main result of this article: We train the extended network with the given loss, gradient descent method, learning rate and stopping criterion. Afterwards, computing a forward pass of the extended network yields the same result as if we ran the primary network $f_\theta$ with weights $\theta = TM(x,y)$ chosen by the Turing machine. \begin{proposition} \label{prop:learn-tm} For any number of samples $n$ and finite precision floating point numbers $\mathbb{F}$, let $TM: \mathbb{F}^{n \times M} \times \mathbb{F}^{n\times m} \to \mathbb{F}^p$ be a computable function given by a Turing machine that halts for every input, using less than $\tau = \tau(n)$ tape symbols at any step of the computation. Assume that the labels satisfy \begin{equation} \|y\|_2^2 \ge \epsilon \label{eq:labels-size} \end{equation} for some $\epsilon > 0$. 
Then, there is an extended network $F$ that satisfies all properties in \eqref{eq:intro:enlarged-network-properties}, with initial values $(s_0, \thz_0)$, fixed learning rate $\alpha$ and stopping threshold $B_{stop}$, depending only on $TM$, such that after gradient descent training \eqref{eq:intro:gd-tm-supervised-learning} with stopping criterion \eqref{eq:intro:gd-tm-supervised-stopping} we have \[ F(x,(s_K, \thz_K)) = f_{TM(x,y)}(x) \] for the final parameter $(s_K, \thz_K) \in \mathcal{D}$ of the gradient descent method. The gradient descent stopping criterion is met in $k_t+1$ steps, where $k_t$ is the number of steps the Turing machine uses to compute $TM(x,y)$. \end{proposition} An overview of the construction is provided in Section \ref{sec:extended-network-overview}. The proof of the proposition, as well as of the following two corollaries, is deferred to Sections \ref{sec:TM-via-gradient-descent} and \ref{sec:supervised-learning}. Bounds on the size of the extended network depend on its construction. First, if we insist that the gradient descent loss is non-increasing, we obtain the following. \begin{corollary} \label{cor:learn-tm-tape-internal} Assume the Turing machine $TM$ has two tape symbols, $d \ge 2$ tapes and $|Q|$ states in its finite control. Then, the extended network in Proposition \ref{prop:learn-tm} can be chosen with non-increasing gradient descent loss $\ell(s_k, \thz_k)$ and less than $12$ extra layers of width smaller than $|Q| 2^{d\tau} \tau^d + \mathcal{O}(nm)$ in addition to $f_\theta$. \end{corollary} This result is included in the paper because it contains the basic ideas to trace a Turing machine by gradient descent without adding too many technicalities for read/write operations on the tapes. However, the result itself is questionable because the extra layers are large enough to encode all possible states of the Turing machine for the given input sizes. 
This includes states with all possible inputs and outputs written on the tapes, and therefore one could just as well store all possible input/output relations (these are finite because we work with floating point numbers $\mathbb{F}$), which is obviously infeasible. The following corollary provides a similar result with a much smaller extended network, which scales linearly in the required number of tape symbols $\tau$ and therefore has a size comparable to the Turing machine itself. Unlike in the last corollary, the loss function is no longer monotonically decreasing along the gradient descent steps, and the construction does not work with a line search. \begin{corollary} \label{cor:learn-tm-tape-external} Assume the Turing machine $TM$ has two tape symbols, $d \ge 2$ tapes and $|Q|$ states in its finite control. If the gradient descent loss $\ell(s_k, \thz_k)$ is allowed to be non-monotonic, the size of the extended network $F$ in Proposition \ref{prop:learn-tm} can be constrained to less than $12$ extra layers of width smaller than $2^d |Q| + 2 d \tau + \mathcal{O}(nm)$, in addition to $f_\theta$. \end{corollary} This construction contains two network weights $T$ and $H$ as components of $\thz$ for the Turing machine's tape and head position. In principle, we can allow these to be of infinite dimension, leading to infinite tape length $\tau = \infty$ as for regular Turing machines. \begin{remark} The extra width $\mathcal{O}(nm)$ is required to read the full set of labels into the Turing machine $TM$. Many practical algorithms work with batches of labels, which could in principle also be implemented with a similar extended network. In this case, we would only need an extra width of $\mathcal{O}( [\text{batch size}] \cdot m)$. \end{remark} \section{Extended Network: Overview} \label{sec:extended-network-overview} This section provides a brief overview of the construction of the extended network. 
A detailed construction and the proof of the main results is given in Sections \ref{sec:TM-via-gradient-descent} and \ref{sec:supervised-learning} below. The construction proceeds in two steps: First, we construct a loss function $\ell_{TM}$ that allows us to trace the states of a generic Turing machine by gradient descent steps. Unlike typical loss functions, it does not compare predictions to labeled training data. Instead, it is built from standard neural network components and can therefore, in the second step, be used as a sub-network together with the primary network and some ``switches'' to build the extended network. The construction of $\ell_{TM}$ is given in Section \ref{sec:TM-via-gradient-descent}. To summarize, we associate each state of the Turing machine with a vertex of a simplex. Depending on the construction in Corollaries \ref{cor:learn-tm-tape-internal} or \ref{cor:learn-tm-tape-external}, the state includes the tape, or only the finite control and the tape symbols at the head position with secondary tape and head variables. The loss function maps the simplex to the real numbers and is chosen so that gradient descent steps with unit learning rate remain on vertices if started at an initial vertex. Therefore, the gradient descent steps can be associated with states of the Turing machine, which we referred to above as ``tracing'' the Turing machine. In order to define $\ell_{TM}$, we subdivide the simplex into corner simplices, one for each vertex, and the remaining interior part. On each corner we can freely assign a piecewise linear loss function and then extend it continuously to the full simplex. Since the gradient descent steps remain on the vertices, this allows us to control the gradients and carve ``barriers'' and ``slides'' into $\ell_{TM}$ that guide the gradient descent updates along the states of the Turing machine execution. 
If the states do not contain the full tape, some extra least squares terms are added for reading and writing to a secondary tape variable. The extended network is composed of the primary network $f_\theta$ and the Turing machine network $\ell_{TM}$ as shown in Figure \ref{fig:nn-extended}. For the time being, we only give a rudimentary overview and defer a more detailed description to Section \ref{sec:extended-network-construction} below. \begin{figure} \begin{center} \begin{tikzpicture}[ in_out/.style={rectangle,draw,rounded corners=3pt,minimum width=0.5cm,minimum height=0.5cm,fill=red!10}, layer/.style={rectangle,draw,rounded corners=3pt,minimum width=0.7cm,minimum height=0.5cm,fill=blue!10}, arrow/.style={thick,->,shorten >= 4pt,shorten <= 4pt}, dotarrow/.style={thick,dotted,->,shorten >= 4pt,shorten <= 4pt}, ] \node[layer] (x) at (0*2,4*1.5) {$x$}; \node[layer,fill=red!10] (z) at (3*2,4*1.5) {$z$}; \node[layer] (nnWeights) at (1*2,3*1.5) {$\theta$}; \node[layer,fill=red!10] (s) at (2*2,3*1.5) {$s,T,H$}; \node[layer] (nn) at (0*2,2*1.5) {$f_\theta$}; \node[layer] (tm) at (2*2,2*1.5) {$f_{TM}$}; \node[layer] (switchNetwork) at (1*2,1*1.5) {$s_{net}$}; \node[layer] (switchInit) at (3*2,1*1.5) {$s_{init}$}; \node[layer] (out) at (3*2,0*1.5) {$out$}; \draw[arrow] (x) -- (nn); \draw[arrow] (x) -- (s); \draw[dotarrow] (z) -- (s); \draw[arrow] (z) -- (tm); \draw[arrow] (z) -- (switchInit); \draw[arrow] (s) -- (tm); \draw[arrow] (s) -- (nnWeights); \draw[arrow] (nnWeights) -- (nn); \draw[arrow] (nn) -- (switchNetwork); \draw[arrow] (tm) -- (switchNetwork); \draw[arrow] (switchNetwork) -- (switchInit); \draw[arrow] (switchInit) -- (out); \end{tikzpicture} \end{center} \caption{Extended neural network for gradient descent training} \label{fig:nn-extended} \end{figure} The bottom two layers $s_{net}$ and $s_{init}$ contain two switches, which select the branches of the network that are passed to the output. 
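To illustrate the role of the switches, here is a deliberately simplified Python sketch. It is not the paper's construction, which realizes the switches from the non-linearities and stop-gradient operations listed in \eqref{eq:intro:enlarged-network-properties}; it only shows the selection behaviour, with hypothetical branch values.

```python
def convex_switch(s, on_branch, off_branch):
    """Blend two branches: s = 1 passes on_branch, s = 0 passes off_branch.
    In the extended network, s is a trainable weight that the gradient steps
    move between 0 and 1, and a stop-gradient keeps the de-selected branch
    out of back-propagation (not modelled in this plain-Python sketch)."""
    return s * on_branch + (1.0 - s) * off_branch

# While the Turing machine is being traced, the switch routes the TM branch
# through; the hypothetical branch outputs 5.0 and 2.0 stand in for f_TM
# and f_theta, respectively.
during_tracing = convex_switch(1.0, 5.0, 2.0)   # f_TM value passes
after_halting = convex_switch(0.0, 5.0, 2.0)    # f_theta value passes
```

The point of the stop-gradient is directionality: gradients flow into the tracing branch during training, while the primary network only receives its weights, never gradients.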
The first step of gradient descent training writes a copy of the labels $y$ into the weights/node $z$. This copy is used as input for the Turing machine $TM$ because the actual labels $y$ are only implicitly available through the loss function. The first training step also flips the switches so that only the $f_{TM}$ branch is passed to the output, and the remaining gradient descent steps trace the Turing machine as described above. $f_{TM}$ is a wrapper around $\ell_{TM}$ that matches the output dimensions from the scalar loss to the dimension of the labels $y$. Finally, once the Turing machine reaches a halting state, the last gradient descent step flips the switches again so that only the primary network is passed to the output. The parameters $\theta$ are then no longer free network weights but are read from the weights $s,T,H$ representing the Turing machine's state and tape, which contain $TM(x,y)$ after halting. Therefore, any further forward pass through the network computes the result $f_{TM(x,y)}$ of the primary network with weights computed by the Turing machine. Note that during the entire training all gradients are passed only through the subnetwork $f_{TM}$, but never through the primary network $f_\theta$. This is reminiscent of approaches in biologically plausible learning, which use separate ``feedback networks'' for the adjustment of network weights. Unlike other approaches in the literature, this effect is solely generated by the network architecture, not by a modification of the back-propagation algorithm. \section{Simulating Turing Machines via Gradient Descent} \label{sec:TM-via-gradient-descent} \subsection{Turing Machines} \label{sec:turing-machines} Let us first fix the Turing machine $TM$. Without loss of generality, we assume that it has $d$ tapes and only two tape symbols $\{-1,1\}$, including the blank. \begin{enumerate} \item $Q$: finite and non-empty set of states. \item $q_0 \in Q$: initial state. \item $F \subset Q$: accepting states. 
\item Tape symbols: $\Gamma = \{-1,1\}$. \item $\delta: Q \times \Gamma^d \to Q \times \Gamma^d \times \{-1,1\}^d$: transition function, with $-1$ and $1$ denoting left and right shifts of the head. \end{enumerate} We denote the component functions of $\delta$ by $\delta_i(q,t)$, so that $\delta(q,t) = [\delta_1(q,t), \delta_2(q,t), \delta_3(q,t)]$. We make two more assumptions on the Turing machine. First, a subset $RO \subset \{1, \dots, d\}$ of the tapes is read-only. This will usually be the tape holding the inputs, whereas the outputs are placed on the remaining tapes. Second, for a class $\mathcal{T}_0$ of well-formed inputs of interest, we assume that the Turing machine halts in finite time and uses at most $\tau$ consecutive tape entries on each tape at any time during the computation. As for any real computer, we then assume that the tapes have finite length $\tau$. Typically, the class $\mathcal{T}_0$ consists of encodings of the training data $x$ and $y$, and the tape size $\tau$ depends on the data size. For the construction in Corollary \ref{cor:learn-tm-tape-external}, it is allowed to be infinite. \subsection{Technical preliminaries} The loss function $\ell_{TM}$ to trace the Turing machine $TM$ with gradient descent is constructed with the help of piecewise linear functions on simplices. This section contains some explicit formulas for their evaluation, derivatives and optimal descent directions. To this end, let $S$ be a simplex with vertices $e_v$, $v \in V$ for some index set $V$ and assume that $\ell: S \to \mathbb{R}$ is affine. The vertices $e_v$ will be standard basis vectors below, but this is not necessary for this section. Then, for $x \in S$ with barycentric coordinates \[ \begin{aligned} x & = \sum_{v \in V} x_v e_v, & \sum_{v \in V} x_v & = 1, & x_v & \ge 0 \end{aligned} \] we can evaluate $\ell$ by \[ \ell(x) = \sum_{v \in V} x_v \ell(e_v). 
\] Moreover, affinity implies that for any convex combination we have \begin{equation*} \ell \big((1-\lambda)x + \lambda y \big) = (1-\lambda) \ell(x) + \lambda \ell(y). \end{equation*} Therefore, for any two points $x,y$ in the simplex $S$, the one-sided directional derivative in the direction $y-x$ is given by \begin{equation} \begin{aligned} \partial_{y-x} \ell(x) & = \lim_{\substack{h \to 0 \\ h \ge 0}} \frac{1}{h} \left[ \ell(x + h(y-x)) - \ell(x) \right] = \lim_{\substack{h \to 0 \\ h \ge 0}} \frac{1}{h} \left[ \ell( (1-h)x + hy) - \ell(x) \right] \\ & = \lim_{\substack{h \to 0 \\ h \ge 0}} \frac{1}{h} \left[ (1-h)\ell(x) + h\ell(y) - \ell(x) \right] \\ & = \ell(y) - \ell(x). \end{aligned} \label{eq:directional-derivative} \end{equation} For gradient descent on simplices, we optimize over directions that point into the simplex. However, the loss function will not be affine on the entire simplex, but only in its corners. To make this precise, for $0 < \mu \le 1$, the \emph{corner} at $v$ is a sub-simplex defined by \begin{align*} S_v^\mu & := \operatorname{conv} \Big( \{e_v\} \cup \{e_{vw}^\mu | \, v \ne w \in V \} \Big), & e_{vw}^\mu := (1-\mu) e_v + \mu e_w, \end{align*} with ``$\operatorname{conv}$'' denoting the convex hull. In the following, we denote the vertices of a simplex $S$ by $V(S)$. If $\mu=1/2$, we also use the abbreviations \begin{align*} S_v & := S_v^{1/2}, & e_{vw} & := e_{vw}^{1/2}. \end{align*} The next lemma provides explicit formulas for optimal descent directions with possible additional quadratic terms. \begin{lemma} \label{lemma:simplex-derivative} Let $e_v$ be a vertex of $S \subset \mathbb{R}^N$ and $\ell$ be a loss function that is affine on the corner $S_v^\mu$ for some $0 < \mu \le 1$. 
Then for $A \in \mathbb{R}^{M \times N}$ and $y \in \mathbb{R}^M$, we have \begin{equation*} \argmin_{z \in S} \left\{ \partial_{z-e_v} \left[ \ell(x) + \frac{1}{2} \|Ax-y\|^2 \right]_{x = e_v} \right\} - e_v = \frac{1}{\mu} (e_w^\mu-e_v) = e_w - e_v, \end{equation*} with \begin{align} e_w^\mu & = \argmin_{b_u \in V(S_v^\mu)} \left[ \ell(b_u) + (Ae_v-y)^T A b_u \right], \label{eq:lemma:simplex-derivative-1} \\ e_w & = \argmin_{e_u \in V(S)} \left[ \ell( [1-\mu] e_v + \mu e_u) + \mu (Ae_v-y)^T A e_u \right]. \label{eq:lemma:simplex-derivative-2} \end{align} \end{lemma} \begin{proof} Let us abbreviate $f(x) := \ell(x) + \frac{1}{2} \|Ax - y\|^2$. We first rescale the directional derivatives to the corner $S_v^\mu$. To this end, note that \[ \partial_{z - e_v} f(e_v) = \frac{1}{\mu} \partial_{\mu(z - e_v)} f(e_v) = \frac{1}{\mu} \partial_{[(1-\mu)e_v + \mu z] - e_v} f(e_v), \] where the point $a := (1-\mu) e_v + \mu z$ is in the corner $S_v^\mu$ if and only if $z$ is in $S$. Hence, the last equation and the identity $\frac{1}{\mu} (a - e_v) = z - e_v$ imply that \begin{equation} \argmin_{z \in S} \left\{ \partial_{z - e_v} f(e_v) \right\} - e_v = \frac{1}{\mu} \left( \argmin_{a \in S_v^\mu} \left\{ \partial_{a - e_v} f(e_v) \right\} - e_v \right). \label{eq:lemma:proof:simplex-derivative-1} \end{equation} Since $\ell$ is affine in the corner $S_v^\mu$, we use \eqref{eq:directional-derivative} to simplify the directional derivative to \[ \partial_{a - e_v} \ell(e_v) = \ell(a) - \ell(e_v) = \sum_{u \in V} a_u \ell(b_u) - \ell(e_v) \] for any $a \in S_v^\mu$ with barycentric coordinates $a_u$ with respect to the vertices $\{b_u | \, u \in V\} = V(S_v^\mu)$ of $S_v^\mu$. Likewise, we have \begin{multline*} \partial_{a - e_v} \left[ \frac{1}{2} \|Ax - y\|^2 \right]_{x = e_v} = (Ae_v - y)^T A (a-e_v) \\ = \left( \sum_{u \in V} a_u (Ae_v - y)^T A b_u \right) - (Ae_v - y)^T Ae_v. 
\end{multline*} In the last two equations the respective last terms $\dots - \ell(e_v)$ and $\dots - (Ae_v-y)^TAe_v$ do not depend on $a$, and therefore the minimizer is given by \[ \argmin_{a \in S_v^\mu} \partial_{a-e_v} f(e_v) = \argmin_{a \in S_v^\mu} \sum_{u \in V} a_u \left[ \ell(b_u) + (Ae_v-y)^T A b_u \right]. \] The right hand side is minimal if $a$ puts all its weight on the smallest summand, so that \begin{align*} \argmin_{a \in S_v^\mu} \partial_{a-e_v} f(e_v) & = b_u, & b_u & = \argmin_{b \in V(S_v^\mu)} \left[ \ell(b) + (Ae_v-y)^T A b \right]. \end{align*} Since the $b_u$ are the vertices of $S_v^\mu$, together with \eqref{eq:lemma:proof:simplex-derivative-1} this directly shows \eqref{eq:lemma:simplex-derivative-1}. In order to show \eqref{eq:lemma:simplex-derivative-2}, note that every vertex $b_u \in V(S_v^\mu)$ is a convex combination $(1-\mu) e_v + \mu e_u$ of vertices of $S$. Therefore, we have \[ (Ae_v - y)^T A b_u = (1-\mu)(Ae_v - y)^T A e_v + \mu (Ae_v - y)^T A e_u. \] Using that the first summand of the right hand side is independent of $u$ shows \eqref{eq:lemma:simplex-derivative-2}. \end{proof} \subsection{Tracing Turing Machines: No Extra Tape Variable} \label{sec:tm-descent-no-tape-variables} In this section we construct a loss function that is used in the construction of Corollary \ref{cor:learn-tm-tape-internal} and allows a gradient descent method to trace the steps of the Turing machine $TM$. The method is simple but also very costly; therefore, we consider a related but more efficient approach in Section \ref{sec:tm-descent-with-tape-variables} below. \paragraph{The Loss Function} By the assumption in Section \ref{sec:turing-machines}, for all relevant inputs in $\mathcal{T}_0$ the Turing machine halts in finite time using at most $\tau$ tape entries at any time during the computation. Hence, we can represent every relevant computation by a finite directed graph. 
Its vertices $V$ are all states of the Turing machine with the given size constraint, and the edges $E$ encode the computational steps, i.e. there is an edge from vertex $v$ to vertex $w$ if and only if the state $w$ follows $v$ in the Turing machine execution. In order to construct the corresponding loss function, we first assign weights $\omega_v$ to vertices and $\omega_{vw}$ to pairs of vertices, including non-edges, so that their minimization mimics the Turing machine execution. To this end, let \begin{equation} \begin{aligned} \omega_v & > \omega_{vw}, & & \text{for }(v,w) \in E \\ \omega_{uv} & > \omega_{vw}, & & \text{for } (u,v), (v,w) \in E \\ \omega_{vw} & = B, & & \text{for }(v,w) \not\in E\text{ and }(w,v) \not\in E \\ \omega_{vw} & = \omega_{wv}, & & v,\, w \in V, \end{aligned} \label{eq:graph-edge-weights} \end{equation} where $B$ is an upper bound \[ B > \max \left\{\max_{v \in V} \omega_v, \max_{(v,w) \in E} \omega_{vw} \right\} \] on all vertex and computationally legitimate edge weights. Although for the time being the problem is still discrete, we consider moving between vertices along the pairs with minimal weights $\omega_{vw}$, or staying in place if the current vertex weight $\omega_v$ is smaller than the weights of the outgoing connections. The first inequality of \eqref{eq:graph-edge-weights} ensures that the correct computational step is preferred over staying in place, the second inequality ensures that we do not follow the computation backwards, and the third equality ensures that we do not follow computationally non-valid steps. The last identity is used to ease the transition to a continuous optimization problem later. It is not necessarily required that $\omega_w < \omega_v$ whenever $w$ succeeds $v$ in the computation, although for some constructions below we enforce this property to ensure that the gradient descent method has strictly decreasing loss. 
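As a quick sanity check, the inequalities \eqref{eq:graph-edge-weights} can be verified mechanically on a toy computation path. The following Python sketch uses illustrative weight values, not values prescribed by the paper.

```python
# Toy computation path v0 -> v1 -> v2 (halting state) with weights chosen
# to satisfy the inequalities of the edge-weight construction: the correct
# step is preferred over staying (omega_v > omega_vw for edges), backward
# steps are penalized (omega_uv > omega_vw for consecutive edges), and all
# non-edges get the upper bound B.  Values are illustrative only.
B = 10.0
omega_v = {"v0": 6.0, "v1": 4.0, "v2": 2.0}          # vertex weights
omega_e = {("v0", "v1"): 5.0, ("v1", "v2"): 3.0}     # edge weights
edges = set(omega_e)

def weight(v, w):
    """Symmetric pair weight; non-edges are assigned the bound B."""
    return omega_e.get((v, w), omega_e.get((w, v), B))

# omega_v > omega_vw for every edge (v, w): stepping beats staying.
assert all(omega_v[v] > weight(v, w) for (v, w) in edges)
# omega_uv > omega_vw for consecutive edges: no walking backwards.
assert weight("v0", "v1") > weight("v1", "v2")
# B bounds all vertex and legitimate edge weights.
assert B > max(max(omega_v.values()), max(omega_e.values()))
```

Note that the symmetry $\omega_{vw} = \omega_{wv}$ is built into the lookup, matching the last identity of \eqref{eq:graph-edge-weights}.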
Next, we extend these weights to a continuous loss function $\ell: S \subset \mathbb{R}^{|V|} \to \mathbb{R}$ that can be optimized with gradient descent. To this end, we associate each vertex $v \in V$ with the standard unit basis vector $e_v \in \mathbb{R}^V$ and define the domain $S$ as the simplex \[ S := \operatorname{conv}\{e_v | \, v \in V\} \] spanned by the basis vectors. The values at the vertices and at the barycenters $e_{vw}$ of each pair of vertices $(v,w)$ correspond to the graph weights: \begin{equation} \begin{aligned} \ell(e_v) & = \omega_v, & v & \in V \\ \ell( e_{vw}) & = \omega_{vw} , & v \ne w; \, v, w & \in V. \end{aligned} \label{eq:loss-vertices} \end{equation} The latter condition requires the symmetry of the edge weights in \eqref{eq:graph-edge-weights}. Each iterate $x_k \in S$ of the gradient descent method will be a vertex. In order to control the gradient direction, we need to fill in values for $\ell$ in their neighborhood. To this end, we assume that \begin{align} & \ell\text{ is affine on the corners }S_v, & v & \in V(S). \label{eq:loss-affine} \end{align} A loss function with all properties in \eqref{eq:graph-edge-weights}, \eqref{eq:loss-vertices} and \eqref{eq:loss-affine} can be easily constructed by linear finite elements on a subdivision of the simplex $S$. Nonetheless, in this article, we use the related explicit functions from Appendix \ref{appendix:lagrange-basis}. These have two advantages: First, we do not need all finite element basis functions and therefore can avoid the construction of subdivisions for high dimensional simplices. Second, the explicit formulas demonstrate that they can be implemented by single layer $ReLU$ neural networks. 
Specifically, from Appendix \ref{appendix:lagrange-basis}, we have the functions $\ell_v$ and $\ell_{vw}$, which are linear in all corners $S_v^\mu$ and have the interpolation properties \begin{align*} \ell_v(e_v) & = 1, & \ell_v(e_w) & = 0, & \ell_v(e_{uw}) & = 0, & \\ \ell_{vw}(e_r) & = 0, & \ell_{vw}(e_{vw}) & = 1, & \ell_{vw}(e_{st}) & = 0, & \end{align*} for all $v \ne w$, $u \ne \{v,w\}$, $r \in V$ and $(s,t) \not\in \{(v,w), (w,v)\}$. A loss function satisfying \eqref{eq:loss-vertices} and \eqref{eq:loss-affine} is then given by \begin{equation} \ell(x) = \sum_{v \in V(S)} \omega_v \ell_v(x) + \sum_{v \ne w \in V} \omega_{vw}\ell_{vw}(x). \label{eq:tm-loss-tape-internal} \end{equation} Let us next assign some numeric values to the weights $\omega_v$ and $\omega_{vw}$ satisfying the requirements \eqref{eq:graph-edge-weights}. By assumption, we know that for all relevant inputs in $\mathcal{T}_0$, the Turing machine halts in finite time, say with no more than $K-1$ steps. For each state $v \in V$ we run the Turing machine for at most $K$ steps and record the number of steps $k(v)$ it takes to reach a halting state, or set $k(v) = K$ if it does not reach a halting state. For arbitrary numbers $W_0 < W_1 < \cdots < W_K = B$ we then assign \begin{align*} \omega_{v} & = W_{k(v)} \\ \omega_{vw} & = \left\{ \begin{array}{ll} \frac{1}{2} [W_{k(v)} + W_{k(w)}] & (v,w) \in E\text{ or }(w,v) \in E \\ B & \text{else} \end{array}\right. . \end{align*} \paragraph{Gradient Descent} Since the loss function is only defined on the simplex $S$, we use the conditional gradient method (Frank-Wolfe algorithm) for optimization. This is a modification of the gradient descent method which only allows update directions that are compatible with the convex constraint $x \in S$. It is defined by \begin{equation} \begin{aligned} d_k & = \argmin_{y \in S} \left\{ \partial_{y - x_k} \ell(x_k) \right\} - x_k\\ x_{k+1} & = x_k + \alpha_k d_k. 
\end{aligned} \label{eq:cond-gd} \end{equation} More commonly, the update direction is defined by $d_k = y_k-x_k$ with $y_k = \argmin_{y \in S} \dualp{\nabla \ell(x_k), y}$. This is equivalent to the one given above because $\partial_{y - x_k} \ell(x_k) = \dualp{\nabla \ell(x_k), y} - \dualp{\nabla \ell(x_k), x_k}$ and the latter term is independent of $y$. In our case $\ell$ is piecewise linear and may have kinks. Nonetheless, one-sided directional derivatives are well-defined and sufficient for the method above. The learning rate is either constant $\alpha_k = 1$, or determined by a line search. For the latter, we need the extra condition that \begin{equation} \begin{aligned} \omega_v & > \omega_{vw} > \omega_w, & \ell &\phantom{=} \text{is linear on $\operatorname{conv}\{e_v, e_{vw}\}$ and $\operatorname{conv}\{e_{vw}, e_w\}$}, & (v,w) & \in E \end{aligned} \label{eq:graph-loss-extra} \end{equation} to ensure that $e_w$ is indeed the minimizer along the gradient direction given by the edge from $e_v$ to $e_w$. For a fixed learning rate this is not required, though then the loss is not necessarily strictly decreasing (at least for exact computation, which we consider in this paper). \paragraph{Convergence} The loss function is defined on the simplex $S \subset \mathbb{R}^V$ of a $|V|$ dimensional vector space. Since $|V|$ is the number of all possible states of the Turing machine with tape length bounded by $\tau$, the dimension is very large. It must be at least $|Q| 2^{d\tau} \tau^d$ with $|Q|$ states of the finite control, $2^{d \tau}$ possible tape contents and $\tau^d$ head positions. Practically, this is of course unrealistic and in Section \ref{sec:tm-descent-with-tape-variables} we consider a modified construction that scales linearly in the tape length. We now show that gradient descent applied to our loss function traces the steps of the Turing machine $TM$.
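As a toy numerical sketch (graph, weights and names are illustrative, not from the paper), the following simulates the conditional gradient iteration \eqref{eq:cond-gd} with step size $\alpha_k = 1$. Since $\ell$ is affine on each corner $S_v$, the one-sided directional derivative at $e_v$ toward $e_w$ is proportional to $\omega_{vw} - \omega_v$, so each step reduces to an argmin over the corner values:

```python
# Toy state graph (illustrative, not from the paper): execution path
# 0 -> 1 -> 2 -> 3, with halting state 3 and an unreachable state 4.
edges = {(0, 1), (1, 2), (2, 3)}
V = range(5)
K, B = 4, 10.0
W = [2.0, 4.0, 6.0, 8.0, B]                  # W_0 < W_1 < ... < W_K = B

def steps_to_halt(v):
    """k(v): number of steps until the halting state 3, or K if unreachable."""
    for k in range(K):
        if v == 3:
            return k
        succ = [w for (u, w) in edges if u == v]
        if not succ:
            return K
        v = succ[0]
    return K

omega_v = {v: W[steps_to_halt(v)] for v in V}
omega_vw = {(v, w): (0.5 * (omega_v[v] + omega_v[w])
                     if (v, w) in edges or (w, v) in edges else B)
            for v in V for w in V if v != w}

def frank_wolfe_step(v):
    """One conditional-gradient step with alpha_k = 1 starting at the vertex e_v.
    As ell is affine on the corner S_v, the one-sided directional derivative
    toward e_w is proportional to omega_vw - omega_v, so the argmin over the
    simplex reduces to an argmin over corner values."""
    return min(V, key=lambda w: omega_vw[(v, w)] - omega_v[v] if w != v else 0.0)

x, path = 0, [0]
for _ in range(K):
    x = frank_wolfe_step(x)
    path.append(x)
assert path == [0, 1, 2, 3, 3]   # iterates trace the run and stay at the halting state
```

The last assertion illustrates the content of the proposition below: the iterates follow the execution path and remain at the halting state once it is reached.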
\begin{proposition} \label{prop:tm-descent-no-tape-variable} Let $x_k$ be defined by the gradient descent method \eqref{eq:cond-gd} with initial value $x_0 = e_{v_0}$, $v_0 \in \mathcal{T}_0 \subset V(S)$ contained in the vertices of $S$. Assume that the loss function $\ell$ satisfies the conditions \eqref{eq:graph-edge-weights}, \eqref{eq:loss-vertices} and \eqref{eq:loss-affine}. Let the learning rate be constant $\alpha_k =1$ or defined by line search if \eqref{eq:graph-loss-extra} holds in addition. For consecutive states $v_0, \dots, v_K$ of the Turing machine execution with initial state $v_0$ and halting state $v_K$, we have \begin{align*} x_k & = e_{v_k}, & \text{for }k \le K. \\ \intertext{If \eqref{eq:graph-loss-extra} holds, we have} x_k & = e_{v_K}, & \text{for }k > K \end{align*} in addition and the loss is strictly decreasing, i.e. $\ell(x_k) = \omega_{v_k}$ with $\omega_{v_0} > \cdots > \omega_{v_K}$ for $k = 0, \dots, K$. \end{proposition} \begin{proof} By induction, assume that $x_k = e_v$ is a non-halting state. Since the loss $\ell$ is linear on the corner $S_v$, Lemma \ref{lemma:simplex-derivative} implies that the update direction of the gradient method is given by \begin{equation} d_k = \argmin_{y \in S} \left\{ \partial_{y - x_k} \ell(x_k) \right\} - x_k= \frac{1}{\mu} \left( \argmin_{y \in V(S_v)} \ell(y) - x_k \right). \label{ep:prop:tm-descent-no-tape-variable-1} \end{equation} Since $v$ is a non-halting state, the interpolation property \eqref{eq:loss-vertices} and the weight properties \eqref{eq:graph-edge-weights} of the loss function yield $d_k = \frac{1}{\mu} (e_{vw} - e_v) = e_w - e_v$, where $w \in V(S)$ is the successor state of $v$ in the Turing machine execution, i.e. $(v,w) \in E$. Therefore, the next step of the gradient method is given by \[ x_{k+1} = e_v + \alpha_k (e_w - e_v). \] If the learning rate is $\alpha_k = 1$, this directly yields $x_{k+1} = e_{v_{k+1}}$.
In case the learning rate $\alpha_k$ is given by a line search, the extra condition \eqref{eq:graph-loss-extra} ensures that the loss function is piecewise linear along the line segments from $e_v$ to $e_{vw}$ and $e_{vw}$ to $e_w$ with values $\ell(e_v) > \ell(e_{vw}) > \ell(e_w)$. Thus, the line search follows this direction as far as possible without leaving the simplex $S$, resulting in the final point $x_{k+1} = e_w$. If $x_k = e_{v}$ is a halting state, by the extra condition \eqref{eq:graph-loss-extra}, the minimizer of \eqref{ep:prop:tm-descent-no-tape-variable-1} is $e_v$ itself. Therefore, we have $x_{k+1} = x_k$, irrespective of the choice of $\alpha_k$. \end{proof} \subsection{Tracing Turing Machines: With Extra Tape Variable} \label{sec:tm-descent-with-tape-variables} The construction in Section \ref{sec:tm-descent-no-tape-variables} has the disadvantage that all possible relevant computations of the Turing machine must be known beforehand and hard-coded into the loss function. This quickly leads to an excessively high dimensional domain of the loss function. In this section, we consider an alternative construction, where the states of the simplex $S$ correspond to the finite control of the Turing machine only and the tape is stored in extra variables that scale linearly with the tape length. The drawback of this method is that we have to use a fixed step size and that the loss is no longer strictly decreasing along the gradient descent iterates. \paragraph{The Loss Function} The loss function is similar to the construction in Section \ref{sec:tm-descent-no-tape-variables} with the tape content split off into new variables. We define a computational graph with vertices $V = Q \times \Gamma^d$ composed of the state of the finite control and copies of the tapes at the head positions. The directed edges $E$ correspond to legitimate computational steps, i.e.
$(v,w) \in E$ if $v = (q,t)$ and $w = (q', t')$, with successor state $q' = \delta_1(q,t)$ of the finite control and arbitrary tape symbol $t'$ read from the tape. As before, we associate each vertex $v = (q,t) \in V$ with a standard unit basis vector $e_v = e_{q,t}$ in the $|Q|2^d$-dimensional vector space $\mathbb{R}^V = \mathbb{R}^{Q \times \Gamma^d}$ and define the simplex $S$ as the convex hull of these basis vectors. To account for the $d$ tapes of length $\tau$, we introduce two extra variables: $T \in \mathbb{R}^{\tau \times d}$ with one component for each tape entry and $H \in \{0,1\}^{\tau \times d}$ with exactly one non-zero entry per tape indicating the head positions. Note that the total number of vertices $2^d |Q|$ plus tape and head dimensions $2 \tau d$ scale linearly in the number of control states and the tape size. This is significantly less than the construction in Section \ref{sec:tm-descent-no-tape-variables} with at least an exponential number $|Q| 2^{d\tau} \tau^d$ of vertices. As in Section \ref{sec:tm-descent-no-tape-variables}, for vertices $v=(q,t)$ and halting states $F$, we assign weights \[ \omega_v = \left\{ \begin{array}{rl} - b^3 \gamma & q \in F \\ 0 & \text{else}, \end{array}\right. \] with a global scaling factor $\gamma$ and some constant $b>0$ to be chosen later. The weights distinguish halting from non-halting states, but are no longer decreasing while following a computational path. This cannot be avoided because the Turing machine may pass a vertex $v$ several times during its execution, with different tape content and head position. Since the tape is no longer included in $S$, we cannot distinguish these states with different weights. For two vertices $v=(q,t)$ and $w=(q',t')$, the edge weights are given by \begin{equation} \omega_{vw} = \left\{ \begin{array}{rl} - \left(b^3 + \sum_{\substack{i=1\\t'_i = t_i}}^d b \right) \gamma & q' = \delta_1(q,t) \\ 0 & \text{else}
\end{array}\right. \label{eq:loss-weight-TM-tape-external} \end{equation} The first choice is negative and therefore favors correct computational steps. Its structure is chosen to balance some quadratic terms that we add to the loss function later, for read and write operations to the tape variables $T$ and $H$. These quadratic terms may have zero gradients, in which case the extra sum in the definition of the edge weights $\omega_{vw}$ ensures correct successor states. The powers of $b$ are used to balance several contributions to the gradient. For later reference, we choose $b$ sufficiently large so that \begin{align} b^3 & \ge \frac{1}{2} (b^3 + db), & \frac{1}{2} b^3 - db & \ge 2d b^2, & b^2 & > b. \label{eq:loss-weight-constants-TM-tape-external} \end{align} We extend these weights to a continuous loss function $\ell_S$ on $S$ with the properties \begin{equation} \begin{aligned} \ell_S (e_v) & = \omega_v, & v & \in V \\ \ell_S \left( e_{vw}^{1/4} \right) & = \omega_{vw} , & (v,w) & \in E \\ \ell_S \left( e_{vw}^{3/4} \right) & = \frac{1}{2} \omega_{vw} , & (v,w) & \in E \\ \ell_S \left( e_{vw}^{1/4} \right) & = 0 , & (v,w) & \not\in E\text{ and }(w,v) \not\in E \\ \ell_S \left( e_{vw}^{3/4} \right) & = 0 , & (v,w) & \not\in E\text{ and }(w,v) \not\in E \\ \ell_S\text{ is affine on the corners }&S_v^{1/4}, & v & \in V \end{aligned} \label{eq:loss-vertices-no-tape} \end{equation} Since $e_{vw}^{1/4} = e_{wv}^{3/4}$, the loss $\ell_S$ is not well defined if both $(v,w)$ and $(w,v)$ are legitimate computations in $E$, i.e. the Turing machine goes back and forth between two states in $S$. Without loss of generality, we assume that this cannot happen, i.e. that \begin{equation} \text{For all vertices $v,w \in V$, not both $(v,w)$ and $(w,v)$ are contained in $E$}.
\label{eq:no-back-step} \end{equation} This can easily be achieved by the following modification of the Turing machine $TM$: We triple the states to $\bar{Q} := \{[q,r] | \, q \in Q, \, r \in \{0,1,2\}\}$ and extend the transition function to \[ \bar{\delta}([q,r],t) = ([\delta_1(q,t), r+1\mod 3], \delta_2(q,t), \delta_3(q,t)). \] Thus, the extended Turing machine performs the exact same computations as the original one, with the exception that in each step the new index $r$ of state $[q,r]$ cycles through the numbers $0,1,2$. Therefore, if the state $v=([q,r], t)$ is followed by $w=([q',r+1 \mod 3], t')$ the latter is followed by some $[q'',r+2 \mod 3] \ne [q, r]$ because $r + 2 \mod 3 \ne r$. Therefore, we never have the sequence of states $v \to w \to v$, which implies \eqref{eq:no-back-step}. In order to extend the loss function to the full simplex, we use the two functions $\ell_v^{1/4}$ and $\bar{\ell}_{vw}$ defined in Lemma \ref{lemma:lagrange-corner} and \eqref{eq:lagrange-interior-unsymmetric} in Appendix \ref{appendix:lagrange-basis}, which can be easily constructed with single layer $ReLU$ units. They have the properties \begin{equation*} \begin{aligned} \ell_v^{1/4}(e_v) & = 1, \\ \ell_v^{1/4}(e_u) & = 0, & & & v \ne & u \in V \\ \ell_v^{1/4}\left( e_{tu}^{1/4} \right) & = 0, & \ell_v^{1/4}\left( e_{tu}^{3/4} \right) & = 0 , & t,u & \in V \\ \ell_v^{1/4} \text{ is linear on the corners }& S_u^{1/4}, & & & u & \in V. \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} \bar{\ell}_{vw}(e_u) & = 0, & & & u & \in V \\ \bar{\ell}_{vw}\left( e_{vw}^{1/4} \right) & = 1, & \bar{\ell}_{vw}\left( e_{vw}^{3/4} \right) & = \frac{1}{2} \\ \bar{\ell}_{vw}\left( e_{tu}^{1/4} \right) & = 0, & \bar{\ell}_{vw}\left( e_{tu}^{3/4} \right) & = 0, & \{t,u\} & \ne \{v,w\} \\ \bar{\ell}_{vw} \text{ is linear on the corners }& S_u^{1/4}, & & & u & \in V.
\end{aligned} \end{equation*} Along the edge $(v,w)$, the function $\bar{\ell}_{vw}$ is piecewise linear as shown in Figure \ref{fig:basis-profile}. Therefore, a loss function satisfying all properties in \eqref{eq:loss-vertices-no-tape} is given by \begin{align*} \ell_S(x) = \sum_{v \in V} \omega_v \ell_v^{1/4}(x) + \sum_{v \ne w \in V} \omega_{vw} \bar{\ell}_{vw}(x). \end{align*} \begin{figure} \begin{center} \begin{tikzpicture} \draw[->,thick] (-0.1,0) -- (8.4,0); \draw[->,thick] (0,-0.1) -- (0,1.4) node[left] {$\bar{\ell}_{vw}(\cdot)$}; \draw[blue, thick] (0,0) -- (2,1) -- (4,1) -- (8,0); \draw[thick] (2,0.1) -- (2,-0.1); \draw[thick] (4,0.1) -- (4,-0.1); \draw[thick] (8,0.1) -- (8,-0.1); \draw[thick] (-0.1,1) -- (0.1,1); \node[below] at (0,0) {\tiny $\phantom{\frac{1}{1}}e_v$}; \node[below] at (2,0) {\tiny $\frac{3}{4} e_v + \frac{1}{4} e_w$}; \node[below] at (4,0) {\tiny $\frac{2}{4} e_v + \frac{2}{4} e_w$}; \node[below] at (8,0) {\tiny $\phantom{\frac{1}{1}}e_w$}; \node[left] at (0,0) {\tiny $0$}; \node[left] at (0,1) {\tiny $1$}; \end{tikzpicture} \end{center} \caption{Function $\bar{\ell}_{vw}$ restricted to the line segment from $e_v$ to $e_w$.} \label{fig:basis-profile} \end{figure} The loss $\ell_S$ is used to trace the steps of the finite control via gradient descent, but in addition we need some extra terms for reading and writing to the tape variable $T$ and moving the head positions $H$. To this end, we first define some matrices $\mathcal{T}_\downarrow$, $\mathcal{T}_\curvearrowright$ to read out tape symbols from vertices $v = (q,t)$ and a bilinear map $\mathcal{S}$ to shift head positions and then add corresponding least squares terms to the loss function.
The matrices $\mathcal{T}_\downarrow, \mathcal{T}_\curvearrowright \in \mathbb{R}^{d \times |V|}$ yield the tape symbols of each vertex at the current and next state of the Turing machine, defined by \begin{align*} \mathcal{T}_\downarrow e_{q,t} & = t \in \Gamma^d, & \mathcal{T}_\curvearrowright e_{q,t} & = \delta_{2}(q,t) \in \Gamma^d. \end{align*} The shift of the head positions $\mathcal{S}$ is defined by \[ [\mathcal{S}(e_{q,t})H]_{i,j} = H_{i+\delta_3(q,t)_j, j}, \] for basis vectors and then expanded to be bi-linear in both of its variables. It shifts all tape head indicators by the amount $\delta_3(q,t)$ determined by the current step of the Turing machine. Note that we always assume that the tapes are sufficiently large so that the head positions never reach their boundary; if necessary, we may pad the tapes with blank symbols. We can now match the tape symbols $t$ in a vertex $x = e_v = e_{q,t}$ with tape symbols in the tape $T$. To this end, let us denote by subscripts $i$ columns of tape matrices, so that e.g. $T_i \in \mathbb{R}^\tau$ is the content of the $i$-th tape of the Turing machine. Then, adding a term $|T^T_i \cdot [\mathcal{S}(x)H]_i - (\mathcal{T}_\downarrow x)_i|^2$ to the loss function matches the tape symbol of the $i$-th tape at the next head position with the tape symbol in the vertex $v$. We can do the same for all tapes by adding a term $\|\diag(T^T \mathcal{S}(x)H) - \mathcal{T}_\downarrow x\|^2$, where ``$\diag$'' is the vector of diagonal entries of the $d \times d$ matrix $T^T \mathcal{S}(x)H$. However, with this expression we have no control over whether we read or write a symbol from the tape to the vertex. Therefore, we include extra ``stop gradient'' operations denoted by $\stopgrad{\cdot}$, which are readily available in current neural network libraries. Terms inside $\stopgrad{\cdot}$ are considered constant when differentiated, so that e.g. $\frac{d}{dx} f(x) \stopgrad{g(x)} = f'(x)g(x)$ as opposed to the correct $f'(x) g(x) + f(x) g'(x)$.
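As a minimal numerical sketch of this convention (plain Python, independent of any particular library), the placement of $\stopgrad{\cdot}$ in a squared difference decides which argument receives a gradient, i.e. whether the term acts as a read or as a write:

```python
def sq_loss_grads(a, b):
    """Gradients of 0.5*(sg(a) - b)**2 and 0.5*(a - sg(b))**2 w.r.t. (a, b)."""
    # 0.5*(sg(a) - b)^2 : a is treated as constant, only b receives a gradient
    g_write_b = (0.0, -(a - b))       # (d/da, d/db) -> b is pulled toward a
    # 0.5*(a - sg(b))^2 : b is treated as constant, only a receives a gradient
    g_write_a = ((a - b), 0.0)        # a is pulled toward b
    return g_write_b, g_write_a

a, b, lr = 1.0, 0.0, 1.0
(_, db), _ = sq_loss_grads(a, b)
b -= lr * db                          # "write into b": b <- a
_, (da, _) = sq_loss_grads(a, b)
a -= lr * da                          # a and b already agree, so a is unchanged
assert (a, b) == (1.0, 1.0)
```

This is exactly the mechanism used below to distinguish writing to the tape $T$ from reading a symbol into the state $x$.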
With this extra operation, we can extend our above example to $\|\stopgrad{\diag(T^T \mathcal{S}(x)H)} - \mathcal{T}_\downarrow x\|^2$. Now the first term is considered constant so that minimizing it changes $x$ so that $\mathcal{T}_\downarrow x$ matches $\diag(T^T \mathcal{S}(x)H)$, i.e. we write the tape symbols at the head positions $\mathcal{S}(x)H$ into the component $t = \mathcal{T}_\downarrow x$ of $x$. We add a global additive constant $c$ and several read/write operations to the loss function: \begin{multline} \ell(x,T,H) = c + \ell_S(x) + \frac{1}{2} \gamma \|\diag(T^T \stopgrad{H}) - \stopgrad{\mathcal{T}_\curvearrowright x}\|^2 \\ + \frac{1}{2} (4 b^2 \gamma) \|\stopgrad{\diag(T^T \mathcal{S}(x) H)} - \mathcal{T}_\downarrow x\|^2 + \frac{1}{2} \gamma \|\stopgrad{\mathcal{S}(x) H} - H\|^2. \label{eq:tm-loss-tape-external} \end{multline} The first least squares term writes $t' = \delta_2(q,t)$ to the current head position, the second term reads the content of the next head position into the next state $(q',t')$ and the last term moves the current head position to the next head position. \paragraph{Gradient Descent} The loss function is minimized by the conditional gradient method \begin{equation} \begin{aligned} d_k & = \argmin_{y \in S} \left\{ \partial_{y - x_k} \ell(x_k, T_k, H_k) \right\} - x_k\\ x_{k+1} & = x_k + d_k \\ T_{k+1} & = T_k - \frac{1}{\gamma} \nabla_T \ell(x_k,T_k, H_k) \\ H_{k+1} & = H_k - \frac{1}{\gamma} \nabla_H \ell(x_k,T_k, H_k). \end{aligned} \label{eq:gradient-descent-tape-external} \end{equation} We slightly abuse notation and denote by $\nabla_T$ the gradient with respect to the writable tapes of $T$ only, excluding the read-only tapes. As in Section \ref{sec:tm-descent-no-tape-variables}, the loss function has kinks so that the gradient with respect to $x$ is not defined everywhere. Nonetheless, the one-sided directional derivatives used in the method are well defined at all points encountered during training.
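For a single tape with illustrative values (symbols and shift chosen for this example only), the least squares structure of \eqref{eq:tm-loss-tape-external} together with the learning rate $1/\gamma$ performs an exact write at the head position, and the head update is a shift of the one-hot indicator; a minimal numpy sketch:

```python
import numpy as np

tau = 6
T = np.array([1., -1., 1., 1., -1., 1.])   # tape symbols in {-1, 1}
H = np.zeros(tau); H[2] = 1.0              # one-hot head position indicator
t_new = -1.0                               # symbol delta_2(q, t) to be written

# gradient of 0.5*(T.H - t_new)^2 w.r.t. T (H held constant by stop-gradient);
# the factor gamma in the loss cancels against the 1/gamma learning rate
grad_T = H * (T @ H - t_new)
T = T - grad_T
assert T @ H == t_new                      # symbol written at the head position
assert T[1] == -1.0 and T[3] == 1.0        # rest of the tape untouched

# head shift by +1: H_{k+1} = S(x) H, here simply a one-position roll
H_next = np.roll(H, 1)
assert H_next[3] == 1.0
```

Because the head indicator is one-hot, the gradient step touches exactly one tape cell, mirroring the update $(T_{k+1})_i = (T_k)_i - (H_k)_i [(T_k)_i^T \cdot (H_k)_i - \delta_2(q,t)_i]$ derived in the proof below.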
\paragraph{Convergence} The following proposition shows that minimizing the loss function traces the steps of the Turing machine. \begin{proposition} \label{prop:tm-descent-tape-excluded} For $k=0, \dots, K$, let $v_k = (q_k, t_k)$ with tape variables $T_k$, $H_k$ be consecutive states of the Turing machine $TM$ with $t_k = \diag(T_k^T H_k)$, where $q_0, \dots, q_{K-1} \not\in F$ are non-halting states and $q_K \in F$ is a halting state. Assume that the loss function $\ell$ is defined by \eqref{eq:tm-loss-tape-external} and satisfies \eqref{eq:loss-weight-constants-TM-tape-external} and \eqref{eq:no-back-step}. Then $x_k := e_{v_k} \in \mathbb{R}^V$ and $T_k$, $H_k$ satisfy the gradient descent updates \eqref{eq:gradient-descent-tape-external} and in addition, we have \begin{equation} \begin{aligned} c \le \ell(x_k, T_k, H_k) & \le c + 8 b^2 d \gamma + 3d\gamma, & k & < K \\ \ell(x_k, T_k, H_k) & \le c + 8 b^2 d \gamma + 3d\gamma - b^3\gamma, & k & = K. \end{aligned} \label{eq:tm-descent-tape-excluded-stopping} \end{equation} \end{proposition} Note that unlike Proposition \ref{prop:tm-descent-no-tape-variable}, we cannot allow a line search for the gradient descent method any longer. Indeed, non-halting states $x_k$ are vertices of the simplex $S$ and therefore the first component $\ell_S(x_k) = 0$ of the loss function is independent of $k$ and the remaining components are not necessarily decreasing. The loss bounds in \eqref{eq:tm-descent-tape-excluded-stopping} are second order in $b$ for $k<K$ and third order for $k=K$. Therefore, for $b$ sufficiently large, the loss in a halting state is strictly smaller than the loss in any non-halting state, which can be used as a stopping criterion. \begin{proof} Without loss of generality, we assume that the additive constant $c$ in the loss function is zero.
By induction, assume that $x_k = e_v = e_{q,t}$, $T_k \in \Gamma^{\tau \times d} = \{-1,1\}^{\tau \times d}$, and $H_k \in \{0,1\}^{\tau \times d}$ are iterates of the gradient descent method and that $q \not\in F$ is not a halting state. The variable $x$ is updated by $x_{k+1} = x_k + d_k$ with direction \[ d_k + x_k = \argmin_{y \in S} \partial_{y-x_k} \left( \ell_S(x_k) + \frac{1}{2} (4 b^2 \gamma)\|\mathcal{T}_\downarrow x_k - \stopgrad{\diag(T_k^T \mathcal{S}(x_k)H_k)}\|^2 \right) , \] where we have disregarded two least square terms in the loss function \eqref{eq:tm-loss-tape-external} because their $x$-gradient is zero by the stop gradient operation. Since all $\bar{\ell}_{vw}$ are linear in all corners $S_u^{1/4}$, $u \in V$, and we can disregard gradients of $\stopgrad{\diag(T_k^T \mathcal{S}(x_k)H_k)}$, by Lemma \ref{lemma:simplex-derivative} with $\mu=1/4$, we have \begin{equation} x_{k+1} = e_v + d_k = \argmin_{e_w \in V(S)} \ell_S\left( e_{vw}^{1/4} \right) + b^2 \gamma \left( \mathcal{T}_\downarrow x_k - \diag(T_k^T \mathcal{S}(x_k)H_k) \right)^T \mathcal{T}_\downarrow e_w. \label{eq:proof:1:prop:tm-descent-tape-excluded} \end{equation} We have to show that the minimizer $e_w = e_{q',t'}$ is the next state of the Turing machine, i.e. $q' = \delta_1(q,t)$ and $t'$ contains a copy of the tape at the current head location shifted by $\delta_3(q,t)$. Let us first prove that $e_w$ has the correct state $q'$ by showing that the first term in \eqref{eq:proof:1:prop:tm-descent-tape-excluded} dominates the second. To this end, note that the numbers $(\mathcal{T}_\downarrow x_k)_i$, $(T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$ and $(\mathcal{T}_\downarrow e_w)_i$, for all tapes $i=1, \dots, d$ are tape symbols contained in $\{-1,1\}$ so that $b^2 \gamma \left( (\mathcal{T}_\downarrow x_k)_i - (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i \right) (\mathcal{T}_\downarrow e_w)_i \in \{-2 b^2 \gamma,0,2 b^2 \gamma\}$.
It follows that the second summand $b^2 \gamma \left( \mathcal{T}_\downarrow x_k - \diag(T_k^T \mathcal{S}(x_k)H_k) \right)^T \mathcal{T}_\downarrow e_w$ in \eqref{eq:proof:1:prop:tm-descent-tape-excluded} is contained in $\{2j b^2 \gamma | \, j \in \{-d, \dots, d\} \}$. Since $q$ is a non-halting state, the vertex weight $\omega_v$ is zero and therefore for $\ell_S \left( e_{vw}^{1/4} \right)$ we have three possibilities: It is in the interval $\left[- (b^3 + db) \gamma, - b^3 \gamma \right]$ if $q',t'$ is a successor state of $q,t$ in the Turing machine execution, in $\left[ -\frac{1}{2} (b^3 + db) \gamma, - \frac{1}{2} b^3 \gamma \right]$ if $q',t'$ is a predecessor state of $q,t$ with the extra factor $1/2$ coming from the profile in Figure \ref{fig:basis-profile} and $0$ else, irrespective of the tape symbols $t$ and $t'$. By assumption \eqref{eq:loss-weight-constants-TM-tape-external} we have $b^3 \ge \frac{1}{2} (b^3 + db)$ so that successor states are preferred over predecessor states. By the same assumption \eqref{eq:loss-weight-constants-TM-tape-external}, we also have $\left( \frac{1}{2} b^3 - db \right) \gamma \ge 2d b^2 \gamma$, i.e. the smallest possible gap between predecessor and successor states in the first gradient component $\ell_S(\cdot)$ is bigger than the maximal contribution $2db^2\gamma$ of the second term in \eqref{eq:proof:1:prop:tm-descent-tape-excluded}. Hence, the minimizer $e_w$ of the directional derivative \eqref{eq:proof:1:prop:tm-descent-tape-excluded} with vertex $w = (q',t')$ is a successor state so that $q'=\delta_1(q,t)$. Next, we show that $t'$ contains the correct tape symbol.
Since we already know that $q'$ is a successor state, we use the definition \eqref{eq:loss-weight-TM-tape-external} of the weights to simplify the gradient descent update \eqref{eq:proof:1:prop:tm-descent-tape-excluded} to $x_{k+1} = e_{q', t'}$ with \begin{equation*} t' = \argmin_{t' \in \Gamma^d} -\left(b^3 + \sum_{\substack{i=1\\t'_i = t_i}}^d b \right) \gamma + b^2 \gamma \left( \mathcal{T}_\downarrow x_k - \diag(T_k^T \mathcal{S}(x_k)H_k) \right)^T \mathcal{T}_\downarrow e_{q',t'}. \end{equation*} Eliminating constant summands and sorting the remaining terms with respect to components $i \in \{1, \dots, d\}$, we obtain \begin{equation} t'_i = \argmin_{t'_i \in \Gamma} - b \gamma \delta_{t_i, t'_i} + b^2 \gamma \left( (\mathcal{T}_\downarrow x_k)_i - (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i \right) (\mathcal{T}_\downarrow e_{q',t'})_i, \label{eq:proof:2:prop:tm-descent-tape-excluded} \end{equation} where $\delta_{t_i,t'_i}$ is one if $t_i = t'_i$ and zero else. Since we want to read the tape symbol of tape $i$ at the next head position into the component $t'_i$ of the successor state $e_w$, we have to show that the minimizer of the last equation satisfies $t'_i = (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$. We distinguish two cases: \begin{enumerate} \item \emph{Case 1:} Let us assume that $t_i = (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$, i.e. the tape symbol of the current vertex $x_k$ already matches the one of the successor state of the TM. In this case, the second term in \eqref{eq:proof:2:prop:tm-descent-tape-excluded} is zero and the term $\delta_{t_i,t'_i}$ in \eqref{eq:proof:2:prop:tm-descent-tape-excluded} ensures that $t_i'=t_i$, as required. \item \emph{Case 2:} Let us now assume that $t_i \ne (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$.
One easily verifies that the second term of \eqref{eq:proof:2:prop:tm-descent-tape-excluded} is $-2 b^2 \gamma$ for $t'_i = (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$ and $2 b^2 \gamma$ for $t_i' \ne (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$. The first term of \eqref{eq:proof:2:prop:tm-descent-tape-excluded} is either $-b \gamma$ or zero, depending on the choice of $t'_i$. Since $b^2 \gamma > b\gamma$ by assumption \eqref{eq:loss-weight-constants-TM-tape-external}, the second term dominates the choice and we obtain $t_i' = (T_k)_i^T \cdot (\mathcal{S}(x_k)H_k)_i$ as required. \end{enumerate} It remains to consider the gradient descent updates $T_{k+1}$ and $H_{k+1}$. By the stop gradient operations, we have \begin{align*} \nabla_T \ell(x_k, T_k, H_k) & = \nabla_T \frac{\gamma}{2} \|\diag(T_k^T \stopgrad{H_k}) - \stopgrad{\mathcal{T}_\curvearrowright x_k}\|^2\\ & = \nabla_T \frac{\gamma}{2} \sum_{i \not\in RO} |(T_k)_i^T \cdot \stopgrad{(H_k)_i} - \stopgrad{(\mathcal{T}_\curvearrowright x_k)_i}|^2\\ & = \gamma \left\{(H_k)_i [(T_k)_i^T \cdot (H_k)_i - (\mathcal{T}_\curvearrowright x_k)_i] \right\}_{i \not\in RO} \end{align*} Recall our convention that the gradient $\nabla_T$ excludes the read-only tapes $RO$. Therefore, we confine the sums and components above to the writable tapes not in $RO$. The remaining read-only tapes are unchanged by both the Turing machine execution and the gradient descent updates. Using that $(H_k)_i \in \{0,1\}^\tau$ is the indicator vector for the head location and $(\mathcal{T}_\curvearrowright x_k)_i = \delta_2(q,t)_i$, the last equation implies that \[ (T_{k+1})_i = (T_k)_i - (H_k)_i [ (T_k)_i^T \cdot (H_k)_i - \delta_2(q,t)_i], \] so that $T_{k+1}$ is indeed the content of the tape in the next step of the Turing machine. The update of the head location $H_k$ works analogously.
We have \begin{align*} \nabla_H \ell(x_k, T_k, H_k) & = \nabla_H \frac{\gamma}{2} \|\stopgrad{\mathcal{S}(x_k) H_k} - H_k\|^2 \\ & = - \gamma [\mathcal{S}(x_k) H_k - H_k] \end{align*} and therefore \begin{equation*} H_{k+1} = H_k - \frac{1}{\gamma} \nabla_H \ell(x_k, T_k, H_k) = H_k + [ \mathcal{S}(x_k) H_k - H_k ] = \mathcal{S}(x_k) H_k, \end{equation*} i.e. the new head position is shifted by $\mathcal{S}(x_k)$ as required. In order to show the bounds \eqref{eq:tm-descent-tape-excluded-stopping}, note that all tape symbols are contained in $\{-1,1\}$, and the head locations are indicator vectors with entries in $\{0,1\}$. Hence, the last three summands in the loss \eqref{eq:tm-loss-tape-external} are in the intervals $[0, 2 d \gamma]$, $[0,8 b^2 d \gamma]$ and $[0, d\gamma]$, respectively. The bound \eqref{eq:tm-descent-tape-excluded-stopping} then follows from the observation that on vertices $\ell_S$ is $-b^3 \gamma$ in a halting state and zero else. \end{proof} \section{Application to Supervised Learning} \label{sec:supervised-learning} In this section, we prove the main result of the paper as stated in Section \ref{sec:main}, i.e. we construct an extended network that adjusts its weights according to the computation of a Turing machine during supervised learning. This is achieved by connecting the output of the Turing machine tracing from the last section with the weights of the primary network and by ensuring that proper gradients are passed back into the tracing component by the optimization of a standard least squares loss. \subsection{Construction of the Extended Network} \label{sec:extended-network-construction} The construction of the extended network $F(x, (s,t))$ is shown and briefly explained in Figure \ref{fig:nn-extended} in Section \ref{sec:extended-network-overview}. In this section, we fill in some details that we skipped over in its initial description. The extended network mostly consists of two main branches connected by some extra ``switches''.
The first branch contains the primary network $f_\theta(x)$ depending on the input $x$ and its parameters $\theta$, which are no longer trainable but outputs of some hidden layers. The second branch consists of a network that emulates the Turing machine $TM$ and is constructed similarly to the Turing machine tracing in Propositions \ref{prop:tm-descent-no-tape-variable} and \ref{prop:tm-descent-tape-excluded}. The switches then pass one of these branches to the network output. The components of the extended network in Figure \ref{fig:nn-extended} are defined as follows: \begin{itemize} \item \emph{$x$}: Neural network input. \item \emph{$z \in \mathbb{R}^{n \times m}$} holds a copy of the labels $y$ after the first training step. Note that we cannot pass $y$ into the Turing machine directly because it is only implicitly accessible through the loss function. \item \emph{$s,T,H$}: Trainable parameters, representing the Turing machine's state as in Sections \ref{sec:tm-descent-no-tape-variables}, \ref{sec:tm-descent-with-tape-variables}. They contain $s \in S$ for the state of the Turing machine and, depending on the construction, $T,H \in \mathbb{R}^{d\tau}$ for the tape symbols and head positions. Although $s,T,H$ are mostly trainable weights, the inputs $x$ and $z$ are written on the read-only tapes of the Turing machine. All edges leading into and out of this node use quantization and de-quantization, except for the one to $f_{TM}$. No derivatives of the (de)quantization are required during training. \item \emph{$\theta$}: Weights of the primary network, read from the output tape of the Turing machine and not trainable. \item \emph{$f_\theta$}: The primary neural network, depending on input $x$ and weights $\theta$. \item \emph{$f_{TM}$}: Component of the network associated with the Turing machine $TM$.
With initial $x$ and $y$ on its read-only tape, it halts with weights $\theta$ on the output tape and returns a vector $u$ that is fed into the downstream node $s_{net}$ as described below. \item \emph{$s_{net}$}: Switch between the Turing machine output $f_{TM}$ and the output of the primary network $f_\theta$. \item \emph{$s_{init}$}: Switch for reading the labels $y$ into $z$ and the remaining training. \item \emph{$out$}: Output of the extended network. \item Red boxes: Trainable weights. \item \emph{Dotted Arcs} denote ``stop-gradient'' operations, i.e. for a dotted arc from node $n_1$ to node $n_2$, we artificially set the gradient $\frac{\partial n_2}{\partial n_1} := 0$, even if that does not correspond to the actual gradient. This means that we consider $n_1$ non-trainable or constant whenever it passes through $n_2$. This is a common feature implemented in most current deep learning libraries. \item Function arguments are ordered left to right with respect to incoming arcs in Figure \ref{fig:nn-extended} so that e.g. $s_{init}$ takes arguments in the order $s_{init}(s_{net}, z)$. \end{itemize} The Turing Machine $TM$ contains one read-only tape for the inputs $x$ and $y$. Unlike the other tapes, the read-only tape is not a trainable network weight but a hidden layer of width $\tau$ containing a quantization of the input $x$ and the weight $z$ (which will hold a copy of the labels $y$). Likewise, the weights $\theta$ of the primary network become a hidden layer containing a de-quantization of the output tape of the Turing machine. On a practical computer all variables are already stored in quantized format and can directly be written to and from the tapes. Formally, (de)quantizations can also be computed by a neural network from floating point numbers, see Appendix \ref{sec:quantization} for more details.
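The appendix construction is not reproduced here; as a purely illustrative sketch, a fixed-point quantization maps a float to tape symbols in $\Gamma = \{-1,1\}$ and back (the bit width and the range $[0,1)$ are assumptions of this example, not the paper's scheme):

```python
def quantize(x, n_bits=8):
    """Map a float in [0, 1) to n_bits tape symbols in {-1, +1}
    (illustrative fixed-point scheme)."""
    k = int(x * 2 ** n_bits)
    bits = [(k >> i) & 1 for i in reversed(range(n_bits))]
    return [2 * b - 1 for b in bits]      # bit 0/1 -> symbol -1/+1

def dequantize(symbols):
    """Inverse map from tape symbols back to a float."""
    k = 0
    for s in symbols:
        k = 2 * k + (s + 1) // 2
    return k / 2 ** len(symbols)

x = 0.625                                 # 0.101 in binary, exactly representable
assert dequantize(quantize(x)) == x
```

For values that are not exactly representable at the chosen bit width, the round trip is only accurate up to the quantization step $2^{-n_{bits}}$.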
In the description of the main result, in particular for the loss \eqref{eq:lsq-loss}, we have split up the trainable weights as $(s,t)$ into variables $s$ that are restricted to a simplex and variables $t$ that are unrestricted. Now, with the full description of the extended network, we have $s = s$ and $t = (T,H,z)$. Likewise, the vector valued learning rate $\alpha$ in \eqref{eq:intro:gd-tm-supervised-learning} is split into three components $\alpha_T$, $\alpha_H$ and $\alpha_z$ for the three components $T,H,z$ of the trainable weights, respectively. After training, both switches $s_{init}$ and $s_{net}$ are ``open'' and the network computes $f_\theta(x)$ with $\theta$ read from the Turing machine's tape. In the following sections, we describe the training process bottom-up in the network, starting from the initial state. \subsection{Reading the Labels} In the first gradient descent step, we read the labels into the trainable variable $z$ and then shut off this reading process for the rest of the training. This is done with the switch $s_{init}(u,z)$, which, if fully turned on or off, lets either the first or second input pass through unchanged. The switch is triggered by the size of its second input variable $z$. Specifically, for two matrices $u,z \in \mathbb{R}^{n \times m}$ with the shape of the data $y$ and $\epsilon$ given by \eqref{eq:labels-size}, the switch has the following properties: \begin{align*} s_{init}(u,0) & = 0, & \nabla_u s_{init}(u,0) & = 0, & \nabla_z s_{init}(u,0) & = I \\ s_{init}(u,z) & = u, & \nabla_u s_{init}(u,z) & = I, & \nabla_z s_{init}(u,z) & = 0, & \text{for all }\|z\|_2^2 & \ge \epsilon, \end{align*} where $I \in \mathbb{R}^{(n \times m) \times (n \times m)}$ is the identity matrix.
This can be easily realized by a cutoff function $\psi: [0, \infty) \to [0,1]$, with $\psi(t) = 1$ for $0 \le t < \frac{1}{3} \epsilon$ and $\psi(t) = 0$ for $\frac{2}{3} \epsilon < t$ and \[ s_{init}(u,z) := [1-\psi(\|z\|_2^2)] u + \psi(\|z\|_2^2) z. \] The cutoff function does not need to be differentiable in the interval $[\epsilon/3, 2\epsilon/3]$ and can be constructed e.g. from two $ReLU$ units. Let us now consider the first gradient descent step. With initial value $z_0 = 0$, the chain rule implies \begin{align*} \nabla_z out & = \nabla_{s_{net}} s_{init}(s_{net},0) \nabla_z s_{net} + \nabla_z s_{init}(s_{net},0) I = I \\ \nabla_\Box out & = \nabla_{s_{net}} s_{init}(s_{net},0) \nabla_\Box s_{net} + \nabla_z s_{init}(s_{net},0) 0 = 0 \end{align*} for $\Box \in \{s,T,H\}$. Together with $out = 0$ for input $z_0 = 0$, it follows that \begin{align*} \nabla_\Box \frac{1}{2} \|out-y\|_2^2 & = 0, & \nabla_z \frac{1}{2} \|out-y\|_2^2 & = out-y = -y & \end{align*} and therefore, after one gradient descent step with learning rate $\alpha_z := 1$ for $z$, we have \begin{align*} s_1 & = s_0, & T_1 & = T_0, & H_1 & = H_0, & z_1 & = z_0 - 1 (-y) = y. \end{align*} Since by assumption $\|y\|_2^2 > \epsilon$, for the next gradient descent steps we have \begin{align*} \frac{1}{2} \nabla_z\|out - y\|_2^2 & = (out-y)^T [\nabla_{s_{net}} s_{init}(s_{net},y) \nabla_z s_{net} + \nabla_z s_{init}(s_{net},y) I] \\ & = (out-y)^T \nabla_z s_{net} = 0, \end{align*} where we have used that $s_{net}$ only depends on $z$ via a stop-gradient operation so that the latter gradient is zero. This directly implies that \begin{equation*} \begin{aligned} z_0 & = 0, & z_k & = y, & k & \ge 1.
\end{aligned} \end{equation*} Likewise, we have \begin{equation} \begin{aligned} \frac{1}{2} \nabla_\Box \|out-y\|^2 & = (out-y)^T [\underbrace{\nabla_{s_{net}} s_{init}(s_{net},y)}_{=I} \nabla_\Box s_{net} + \nabla_\Box s_{init}(s_{net},y) \underbrace{\nabla_\Box z}_{=0}] \\ & = (out-y)^T \nabla_\Box s_{net}, \end{aligned} \label{eq:grad-s-init} \end{equation} for $\Box \in \{s,T,H\}$ so that the parameters \begin{equation*} \begin{aligned} s_{k+1} & = s_k - \left[\argmin_{\sigma \in S} \partial_{\sigma - s_k} \frac{1}{2} \|out - y\|^2 - s_k \right] \\ & = s_k - [\argmin_{\sigma \in S} (out-y)^T \partial_{\sigma - s_k} s_{net} - s_k] \\ T_{k+1} & = T_k - \alpha_T \frac{1}{2} \nabla_T \|out-y\|^2 = T_k - \alpha_T (out-y)^T \nabla_T s_{net} \\ H_{k+1} & = H_k - \alpha_H \frac{1}{2} \nabla_H \|out-y\|^2 = H_k - \alpha_H (out-y)^T \nabla_H s_{net} \end{aligned} \end{equation*} are updated as if we trained the partial network with output $s_{net}$ only. Technically, as in \eqref{eq:directional-derivative} the node $f_{TM}$ inside $s_{net}$ only has one-sided directional derivatives, but one easily verifies that this does not alter the chain rule. \subsection{Switching between Neural Network and Turing Machine} The second switch $s_{net}$ selects between the primary network $f_\theta(x)$ and the Turing machine $f_{TM}(s, t)$. Just like the switch $s_{init}$, it has two inputs $s_{net}(f, u)$, with $f = f_\theta$ and $u = f_{TM}$ to shorten the notation below, and the properties \begin{align} s_{net}(f,u) & = f, & \nabla_f s_{net}(f,u) & = I, & \nabla_u s_{net}(f,u) & = 0 \label{eq:def-s-net-1} \intertext{if $\|u\|_2^2 < 2 \underline{B}$ and} s_{net}(f,u) & = u, & \nabla_f s_{net}(f,u) & = 0, & \nabla_u s_{net}(f,u) & = I \label{eq:def-s-net-2} \end{align} if $\|u\|_2^2 > 2 \overline{B}$ for some $\underline{B} < \overline{B}$ to be specified later.
Similar to $s_{init}$, this can be realized by a smooth cutoff function $\phi: [0, \infty) \to [0,1]$ with $\phi(t) = 0$ for $t \le 2 \underline{B} + \delta$ and $\phi(t) = 1$ for $t \ge 2 \overline{B} - \delta$ for some $\delta > 0$ with $2 \underline{B} + \delta < 2 \overline{B} - \delta$ and \[ s_{net}(f,u) = [1-\phi(\|u\|_2^2)] f + \phi(\|u\|_2^2) u. \] Let $k \in \{1, \dots, K\}$ be a gradient descent step after the initialization in the first step and before the iterate $K$ when training is terminated by the stopping criterion \eqref{eq:intro:gd-tm-supervised-stopping}, and let $f_{TM}^k$ denote the output of the network node $f_{TM}$ in the corresponding step. Starting from its initial state $s_0 = s_1$ until the $(K-1)$st step, one step before the halting state, we will ensure below that \begin{equation} \begin{aligned} \|f_{TM}^k\|_2^2 & > 2 \overline{B}, & k & = 1, \dots, K-1 \\ \|f_{TM}^k\|_2^2 & < 2 \underline{B}, & k & = K. \end{aligned} \label{eq:tm-loss-bounds} \end{equation} Thus before the Turing machine has halted, we have \[ out_k = f_{TM}^k \] and \begin{equation} \nabla_\Box out \stackrel{\eqref{eq:grad-s-init}}{=} \nabla_\Box s_{net} = \nabla_{f} s_{net} \nabla_\Box f_\theta + \nabla_{f_{TM}} s_{net} \nabla_\Box f_{TM} = \nabla_\Box f_{TM} \label{eq:grad-tm} \end{equation} for $\Box \in \{s,T,H\}$, whereas after halting, we have \[ out = f_\theta. \] Again, we have used that the chain rule for the one-sided directional derivatives in $f_{TM}$ works as usual.
Thus, we obtain the updates \begin{equation} \begin{aligned} s_{k+1} & = s_k - \left[\argmin_{\sigma \in S} \partial_{\sigma - s_k} \frac{1}{2} \|out - y\|^2 - s_k \right] \\ & = s_k - [\argmin_{\sigma \in S} (out-y)^T \partial_{\sigma - s_k} f_{TM} - s_k] \\ T_{k+1} & = T_k - \alpha_T \frac{1}{2} \nabla_T \|out-y\|^2 = T_k - \alpha_T (out-y)^T \nabla_T f_{TM} \\ H_{k+1} & = H_k - \alpha_H \frac{1}{2} \nabla_H \|out-y\|^2 = H_k - \alpha_H (out-y)^T \nabla_H f_{TM} \end{aligned} \label{eq:gradient-descent-updates-2} \end{equation} for $k = 1, \dots, K-1$, so that while the Turing machine has not halted, the network output corresponds to the output of $f_{TM}$ and all weight updates are equivalent to training $f_{TM}$ alone, independent of the primary network $f_\theta$. After the Turing machine is finished, the switch $s_{net}$ deactivates the Turing machine branch $f_{TM}$ and outputs the primary network $f_\theta(x)$ with the values of $\theta$ that the Turing machine has written on its output tape. \subsection{Training the Turing Machine} \label{sec:train-TM} Finally, we have to construct the node $f_{TM}$ such that the bounds \eqref{eq:tm-loss-bounds} on its output hold and gradient descent training traces the Turing machine calculations so that the final state of the output tape contains the weights $\theta = TM(x,y)$. To this end, we apply either Proposition \ref{prop:tm-descent-no-tape-variable} or Proposition \ref{prop:tm-descent-tape-excluded}, both of which rely on a carefully crafted loss $\ell_{TM}$ associated with the Turing machine $TM$, which is different from the least squares loss in our application. We reconcile these two losses as follows: after reading the labels $y$ into $z_k = y$, we can calculate a vector $f_\perp(z)$ that is orthogonal to $z$ and has unit length; see Appendix \ref{appendix:orthogonal}. We then define the output of the node $f_{TM}$ by \[ f_{TM} = \sqrt{2 \ell_{TM}} f_\perp(z_k).
\] By choosing appropriate constants, we ensure below that $\ell_{TM}$ is always non-negative. This directly yields \begin{align*} (out - y)^T \nabla_\Box f_{TM} & = \left(f_\perp(z) \sqrt{2 \ell_{TM}} - y \right)^T f_\perp(z) \nabla_\Box \sqrt{2 \ell_{TM}} = 2 \sqrt{\ell_{TM}} \nabla_\Box \sqrt{\ell_{TM}} \\ & = \nabla_\Box \left( \sqrt{\ell_{TM}} \right)^2 = \nabla_\Box \ell_{TM} \end{align*} for $\Box \in \{s,T,H\}$ so that with \eqref{eq:gradient-descent-updates-2} we obtain the gradient descent updates \begin{equation} \begin{aligned} s_{k+1} & = s_k - [\argmin_{\sigma \in S} \partial_{\sigma - s_k} \ell_{TM} - s_k] \\ T_{k+1} & = T_k - \alpha_T \nabla_T \ell_{TM} \\ H_{k+1} & = H_k - \alpha_H \nabla_H \ell_{TM} \end{aligned} \label{eq:trace-TM} \end{equation} for the gradient descent steps $1, \dots, K-1$. These updates are identical to the gradient descent methods in Proposition \ref{prop:tm-descent-no-tape-variable} and Proposition \ref{prop:tm-descent-tape-excluded} (with appropriate choices of the learning rates $\alpha_T$ and $\alpha_H$), so that for $k=1, \dots, K-1$ the variables $s_k$, $T_k$ and $H_k$ have the same values whether we train the loss $\frac{1}{2} \|out - y\|^2$ or the loss $\ell_{TM}$ directly. Note that we also have \begin{equation} \|f_{TM}\|^2 = 2 \ell_{TM} \label{eq:f_TM-size} \end{equation} which we will need later for triggering the switch $s_{net}$. We choose the stopping bound $B_{stop}$ in \eqref{eq:intro:gd-tm-supervised-stopping} such that $\|f_{TM(x,y)}(x) - y\| \le B_{stop}$ and for the loss $\ell_{TM}$, we use one of the following two options: \begin{enumerate} \item With the loss function of Proposition \ref{prop:tm-descent-no-tape-variable} and the extra condition \eqref{eq:graph-loss-extra}, we include the full state of the Turing machine in the simplex $S$, with $|Q|$ possible states of the finite control, $2^\tau$ possible tape contents and $\tau$ possible head positions.
Since the tape is already included in the simplex $S$, we do not need the extra variables $T,H$ and set $\tau = 0$ so that $T,H \in \mathbb{R}^0$ are trivial. The states $s_k \in S$ during training will be vertices of $S$ only, so that we can easily read the weights $\theta$ of the primary network $f_\theta$ from $S$ directly. In summary, we have \begin{align*} S & \subset \mathbb{R}^{|Q| 2^{d\tau} \tau^d}, & T,H & \in \mathbb{R}^0. \end{align*} In order to choose the constants $\underline{B}$, $\overline{B}$ from the definition \eqref{eq:def-s-net-1}, \eqref{eq:def-s-net-2} of $s_{net}$ and $\omega_v$ and $B$ from the loss $\ell_{TM}$ in \eqref{eq:graph-edge-weights}, let $\omega_0 = \min_{v \in V(S)} \omega_v$ and $\omega_1 = \min \{\omega_v | \, v \in V(S), \, \omega_v \ne \omega_0\}$ be the smallest and second-smallest weights. Then, we choose all weights $\omega_v$ and $\underline{B}$, $\overline{B}$ so that \begin{equation*} \omega_0 < B_{stop} = \underline{B} < \overline{B} < \omega_1. \end{equation*} With this choice, by Proposition \ref{prop:tm-descent-no-tape-variable} and \eqref{eq:trace-TM}, the minimization of $\frac{1}{2}\|out - y\|^2$ traces the steps of the Turing machine $TM$ until we reach a halting state. Then by \eqref{eq:f_TM-size}, we have $\|f_{TM}\|^2 = 2 \ell_{TM} < 2 \underline{B}$ and by definition, the switch $s_{net}$ is flipped so that it lets the primary network $f_\theta$ pass to the output with weights $\theta = TM(x,y)$ read from the final tape of $f_{TM}$.
\item With the loss function $\ell_{TM}$ of Proposition \ref{prop:tm-descent-tape-excluded}, the finite control and the tape symbols at the head positions are contained in the states of the simplex $S$, and the full tape and head positions in the variables $T$, $H$, so that \begin{align*} S & \subset \mathbb{R}^{2^d |Q|}, & T,H & \in \mathbb{R}^{\tau \times d}, \end{align*} where without loss of generality we assume that the Turing machine has two tapes: one for input and one for output. In order to select the constants $\underline{B}$ and $\overline{B}$ from the definition \eqref{eq:def-s-net-1}, \eqref{eq:def-s-net-2} and $b$, $c$ and $\gamma$ of the loss function \eqref{eq:loss-weight-TM-tape-external} and \eqref{eq:tm-loss-tape-external}, we first choose $b$ sufficiently large so that the condition \eqref{eq:loss-weight-constants-TM-tape-external} holds and $8 b^2 d \gamma + 3d\gamma - b^3\gamma < 0$, which by \eqref{eq:tm-descent-tape-excluded-stopping} ensures that the loss for the halting states is strictly smaller than the loss for the non-halting states. Next, we select the positive global scaling factor $\gamma$, the global additive constant $c$ and the bounds $\underline{B}$, $\overline{B}$ from \eqref{eq:tm-loss-bounds} such that the loss $\ell_{TM}(x,T,H)$ is positive and \[ c + 8 b^2 d \gamma + 3d\gamma - b^3\gamma < B_{stop} = \underline{B} < \overline{B} \le c. \] By \eqref{eq:tm-descent-tape-excluded-stopping}, the left-hand side is an upper bound for $\ell_{TM}(x,T,H)$ at halting states and the right-hand side a lower bound for non-halting states. Note that the left-hand side is smaller than the right-hand side by our choice of $b$. As for the alternative construction, this ensures that once a halting state is reached, the switch $s_{net}$ passes the primary network $f_\theta$ to the output.
\end{enumerate} Once the switch $s_{net}$ is flipped, the primary network $f_\theta$ is passed to the output with weights $\theta = TM(x,y)$ read from the final state of $f_{TM}$'s tape. Since we have chosen $B_{stop}$ such that $\|f_{TM(x,y)}(x) - y\| \le B_{stop}$, the stopping criterion of the gradient descent method applies and no further training steps are executed. \subsection{Number of Layers} Finally, let us count the number of layers in the extended network. The switches $s_{init}$ and $s_{net}$ can be implemented with $3$ layers each: e.g. for \[ s_{init}(u,z) := [1-\psi(\|z\|_2^2)] u + \psi(\|z\|_2^2) z \] we have one layer for the norm $\|z\|_2^2$, one for the cutoff function $\psi$ and one for the outer products of the scalar weights with the vectors $u$ and $z$. Next, $f_{TM}$ is defined by \[ f_{TM} = \sqrt{2 \ell_{TM}} f_\perp(z_k). \] We need one layer for the product of the square root with $f_\perp$ and can implement the latter two terms on parallel layers. The function $f_\perp$ can be implemented with $5$ layers as in Appendix \ref{appendix:orthogonal}. The square root is one layer, and $\ell_{TM}$ can be implemented in one layer for the outer sum in \eqref{eq:tm-loss-tape-internal} or \eqref{eq:tm-loss-tape-external} and two layers for all involved functions from Appendix \ref{appendix:lagrange-basis}. The least squares terms in \eqref{eq:tm-loss-tape-external} can be computed in one extra layer that is parallel to the others. In summary, the extended network requires at most 12 layers plus the layers of the primary network $f$ and possibly some layers for quantization and de-quantization of the Turing machine tape variables $T$.
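To make the switching mechanism concrete, the following sketch (our own numerical toy, not part of the formal construction) implements the cutoff $\psi$ from two ReLU units and the switch $s_{init}$ exactly as defined in the text; the value of $\epsilon$ and the test vectors are arbitrary choices for the example:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def psi(t, eps):
    """Cutoff: 1 on [0, eps/3], 0 on [2*eps/3, inf), linear in between,
    built from two ReLU units as described in the text."""
    return (relu(2 * eps / 3 - t) - relu(eps / 3 - t)) / (eps / 3)

def s_init(u, z, eps):
    """Switch: outputs z while ||z||^2 is small, u once ||z||^2 >= eps."""
    p = psi(np.sum(z ** 2), eps)
    return (1 - p) * u + p * z

eps = 0.01
u = np.array([1.0, 2.0])
# before the labels are read (z = 0), the switch outputs z = 0
assert np.allclose(s_init(u, np.zeros(2), eps), 0.0)
# after z holds the labels (||y||^2 > eps), the switch passes u through
y = np.array([3.0, 4.0])
assert np.allclose(s_init(u, y, eps), u)
```

The switch $s_{net}$ works analogously with the cutoff $\phi$ and the thresholds $2\underline{B}$, $2\overline{B}$ in place of $\epsilon/3$ and $2\epsilon/3$.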
\section{Introduction} Lightning discharges and thunderclouds have been known as electrical phenomena in the atmosphere since the discovery by Benjamin Franklin in 1752. Thanks to recent observational and theoretical studies, they have also been found to be closely associated with high-energy phenomena comprising high-energy photons, electrons, neutrons, etc., and a new academic field, ``high-energy atmospheric physics'', has been established. In 1925, Wilson \cite{Wilson_1925} first proposed the idea that strong electric fields in thunderclouds can accelerate $\beta$-particles or electrons of cosmic-ray origin to MeV energies, even in the dense atmosphere. Electrons accelerated in electric fields emit bremsstrahlung photons by colliding with atmospheric nuclei. Wilson's runaway electron scheme was extended with multiplication processes into relativistic runaway electron avalanches (RREA) by Gurevich et al. \cite{Gurevich_1992}: secondary electrons produced by accelerated electrons via ionization loss processes also become seed electrons and are accelerated in the electric fields. The first reports of high-energy atmospheric phenomena were made by Parks et al. \cite{Parks_1981} and McCarthy and Parks \cite{McCarthy_1985}. They utilized an X-ray counter onboard an F-106 aircraft and detected enhancements of count rates lasting for tens of seconds while flying in thunderclouds. This phenomenon is now called a ``gamma-ray glow''. Gamma-ray glows originate from electron acceleration and multiplication in thunderclouds. Their duration ranges from seconds to tens of minutes; their life cycle is thought to be connected to the stability of electric fields inside thunderclouds. So far, gamma-ray glows have been observed by aircraft \cite{Kelley_2015,Kochkin_2017,Ostgaard_2019}, balloons \cite{Eack_1996}, and mountain-top experiments \cite{Chubenko_2000,Alexeenko_2002,Tsuchiya_2009,Tsuchiya_2012,Torii_2009,Bowers_2019}.
When they are detected by ground-based facilities, they are also referred to as thunderstorm ground enhancements (TGEs) \cite{Chilingarian_2010}. In particular, the observatory at Mount Aragats in Armenia has observed the largest number of TGEs with cosmic-ray monitors \cite{Chilingarian_2010,Chilingarian_2012,Chilingarian_2019b}. Gamma-ray glows are sometimes quenched by lightning discharges \cite{McCarthy_1985,Kelley_2015,Kochkin_2017,Alexeenko_2002,Chilingarian_2017,Chilingarian_2019}. This is evidence that electric fields responsible for gamma-ray glows can be destroyed by lightning currents. Besides airborne and mountain-top observations of gamma-ray glows, experiments during winter thunderstorms in Japan are of great importance. In coastal areas facing the Sea of Japan, seasonal winds from the north bring heavy snow accompanied by lightning discharges. These winter thunderstorms in Japan are distinctive compared to typical thunderstorms, in particular in their cloud bases. While typical summer thunderclouds develop at an altitude of 3~km or higher, winter thunderclouds in Japan have a cloud base lower than 1~km \cite{Kitagawa_1994}. Gamma-ray photons are absorbed in the atmosphere typically within 1~km. Therefore, to observe gamma rays from summer thunderstorms, we need in-situ measurements with airborne detectors or must get closer to thunderclouds by placing detectors on mountain tops. On the other hand, winter thunderstorms allow us to observe high-energy atmospheric phenomena at sea level. Torii et al. \cite{Torii_2002} reported for the first time gamma-ray glows lasting for $\sim$1~minute during winter thunderstorms, recorded by dosimeters installed at a nuclear power facility in a coastal area of the Sea of Japan. Another measurement with multiple dosimeters succeeded in tracking a gamma-ray glow moving with a thundercloud and the ambient wind flow \cite{Torii_2011}. Another important class of high-energy atmospheric phenomena is the ``terrestrial gamma-ray flash'' (TGF).
TGFs are transient emissions coinciding with lightning discharges. Their energy spectrum extends up to $>20$~MeV \cite{Smith_2005,Tavani_2011}, and their duration is typically several hundred microseconds \cite{Foley_2014}. Since their discovery by the Compton Gamma-Ray Observatory \cite{Fishman_1994}, they have been routinely detected by in-orbit experiments such as RHESSI \cite{Smith_2005}, AGILE \cite{Tavani_2011,Marisaldi_2010}, Fermi \cite{Briggs_2010,Mailyan_2016}, and ASIM \cite{Neubert_2019b,Ostgaard_2019b}. TGFs are thought to be produced by $10^{16}$--$10^{19}$ energetic electrons above 1~MeV \cite{Dwyer_2005c,Mailyan_2016}. While several models have been proposed \cite{Celestin_2011,Celestin_2015,Moss_2006,Dwyer_2007,Dwyer_2012a}, the mechanism producing such an enormous number of energetic electrons is still under debate. TGFs detected from space are upward-going, namely emitted from thunderclouds into space. More recently, downward-going ones called ``downward TGFs'' have been detected by ground-based experiments \cite{Dwyer_2004,Hare_2016,Tran_2015,Abbasi_2018,Ringuette_2013}. Motivated by the initial findings in the 1990s and early 2000s, we launched the Gamma-Ray Observation of Winter Thunderclouds (GROWTH) experiment in 2006. The GROWTH experiment is a ground-based measurement of gamma rays and high-energy particles aiming at detecting and exploring high-energy atmospheric phenomena during winter thunderstorms in Japan. The experiment started with a suite of gamma-ray and electron detectors installed at the Kashiwazaki-Kariwa Nuclear Power Station of Tokyo Electric Power Holdings in Niigata Prefecture, Japan. The power station faces the Sea of Japan and frequently encounters lightning discharges in winter seasons. Tsuchiya et al. \cite{Tsuchiya_2007} reported the first detection of a gamma-ray glow lasting for $\sim$40~sec at the Kashiwazaki-Kariwa site.
Its energy spectrum, a continuum extending up to 10~MeV and originating from bremsstrahlung of electrons, suggested that a thundercloud continuously accelerated electrons to 10~MeV or higher energies. Combined with Monte-Carlo simulations, glows at the site were found to originate at an altitude of $<$1~km \cite{Tsuchiya_2011}. Tsuchiya et al. \cite{Tsuchiya_2013} reported a glow abruptly terminated by a lightning discharge. The energy spectrum of the glow gradually became harder, i.e., the ratio of $>$10~MeV to 3--10~MeV photons increased as the lightning discharge drew near. Umemoto et al. \cite{Umemoto_2016} reported an enigmatic enhancement of electron-positron annihilation gamma rays after a lightning discharge. During the first decade of the GROWTH experiment in 2006--2015, one or two observation points were maintained at the power station. However, such a sparse distribution of detectors is not sufficient to delve deeper into the nature of gamma-ray glows, such as the on-ground distribution of particles and the life cycle of the acceleration site, i.e. how particle acceleration is initiated, develops, and comes to an end. Therefore, we launched a new campaign of the GROWTH experiment with multiple gamma-ray detectors and observation sites, called the ``Thundercloud Project'', in 2015. The initial scientific results of the campaign have already been reported as journal publications \cite{Enoto_2017,Wada_2018,Wada_2019_commphys,Wada_2019_prl,Wada_2020}. In this paper, we describe the design and the performance of our gamma-ray detector system, followed by details of the completed observation campaigns. Highlights of the scientific achievements obtained from these observation campaigns are also summarized.
Throughout the paper, gamma-ray glows (gamma-ray emission from a thundercloud, typically lasting for a few minutes) and short-duration gamma-ray bursts caused by downward TGFs (lasting for a fraction of a second) are collectively called thundercloud radiation bursts (TRBs). When we stress the time scale of gamma-ray burst events from the observational point of view, we also refer to a minute-lasting gamma-ray glow as a ``long-duration gamma-ray burst''. \section{Experiment setup: gamma-ray detector system} At a high level, our detector system is a conventional photon-counting gamma-ray spectrometer based on a scintillation crystal and a photo-multiplier tube (PMT). For realizing a distributed observation network for TRBs, we set miniaturization of the entire system as a primary design goal to allow easy handling and deployment in rooftop/outdoor environments. Keeping the cost of the system as low as possible is also essential, because otherwise the tight research budget would limit the number of detector systems manufactured and hence the scale of the observation network. In addition, since we deploy the detector system to multiple locations (with distances between detectors varying from a few to hundreds of km), frequent on-site maintenance is not an option, and remote-monitoring and remote-control capabilities are indispensable. Figure~\ref{fig:block_diagram} shows a high-level block diagram of the detector system, which consists of 1) a scintillation crystal viewed with photo-multiplier tubes (hereafter sensor assembly), 2) a detector-control and data-acquisition electronics subsystem (hereafter DAQ subsystem), 3) a telecommunication subsystem, and 4) a mechanical support structure and waterproof enclosure. The detector is supplied with AC 100~V from the commercial power line, and a switching regulator generates the DC voltages (12~V and 5~V) required by the electronics subsystem and the telecommunication subsystem.
The telecommunication subsystem provides internet connectivity via a cellular network, and is used for telemetry transmission from the detector system and for remote login to a computer in the DAQ subsystem via secure shell (ssh). The typical power consumption of the entire system is about 7~W. In the following subsections, detailed specifications of the individual subsystems are described. \begin{figure}[htb] \begin{center} \includegraphics[width=1\hsize]{fig1.pdf} \end{center} \caption{Exploded view of the CAD drawing (left) and system block diagram (right) of the gamma-ray detector system. The size of the detector is 35~cm (depth)$\times$45~cm (width)$\times$20~cm (height).} \label{fig:block_diagram} \end{figure} \subsection{Sensor assembly} For detailed temporal and spectral analyses, it is critically important to detect gamma rays from thunderclouds and lightning with as high photon counts as possible. The only way to achieve this is to enlarge the effective area of a sensor and to select sensor materials with high stopping power against gamma rays with energies of MeV to a few tens of MeV. Bismuth germanate (Bi$_{4}$Ge$_{3}$O$_{12}$; hereafter BGO) scintillation crystal is one of the optimal crystals for thundercloud gamma-ray observations due to its high stopping power and environmental durability (no deliquescence). In the pilot observation campaign in 2015, we employed cylindrical BGO crystals, each with a diameter of 7.62~cm and a height of 7.62~cm. The standard BGO crystals that we have used in the regular observation campaigns since 2016 have dimensions of 25~cm~$\times$~8~cm~$\times$~2.5~cm. One crystal is viewed with two HAMAMATSU R1924A PMTs; the outputs from the two PMTs are combined in the analog stage, and then amplified and digitized as a single signal. Each set of a crystal and two PMTs is enclosed in a 2-mm-thick aluminum case. We have used 15 of these BGO-based sensor assemblies since 2016.
During our detector development, low-cost Thallium-doped Cesium Iodide crystals, or CsI(Tl) for short, that were extracted from a terminated accelerator experiment became available, and we purchased a dozen 30~cm~$\times$~5~cm~$\times$~5~cm crystals. The effective area of the CsI-based sensor assembly is slightly smaller than that of the BGO-based ones, but it helped expand our observation network at a moderate increment in manufacturing cost. Figure~\ref{fig:effective_area} illustrates the effective area of each scintillation crystal over gamma-ray energies of 0.2--20~MeV, which is the typical energy range our detectors observe. Table \ref{tab:energy_scale} summarizes the energy resolution of the BGO and CsI crystals, measured via laboratory calibration (0.662~MeV from a $^{137}$Cs isotope) and using environmental background signals (1.46~MeV and 2.61~MeV from $^{40}$K and $^{208}$Tl, respectively). \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{fig2.pdf} \end{center} \caption{Effective area of each scintillation crystal calculated by a Monte-Carlo simulation. Uniformly-distributed gamma rays arriving from the direction of the normal of the detection surface (25~cm~$\times$~8~cm face of BGO and 30~cm~$\times$~5~cm face of CsI) are assumed in the simulation.
Photo absorption, Compton scattering, and electron-positron pair creation are the physical processes involved in the simulation, and interactions that deposited energies larger than 40~keV in the crystal were considered detectable.} \label{fig:effective_area} \end{figure} \begin{table}[!h] \caption{Typical energy resolution of the detector.} \label{tab:energy_scale} \centering \begin{tabular}{cccc} \hline Crystal & \multicolumn{3}{c}{Resolution} \\ & 0.662~MeV & 1.46~MeV & 2.61~MeV \\ \hline BGO & 19\% & 12\% & 9\% \\ CsI(Tl) & 12\% & 9\% & 7\% \\ \hline \end{tabular} \end{table} \subsection{Detector-control and data acquisition (DAQ) subsystem} We developed a data-acquisition and detector-control system based on 1) an analog front-end board, 2) a digital signal-processing (DSP) board, and 3) the commercial-off-the-shelf single-board computer Raspberry Pi. The analog front-end board is a custom board designed by our group specifically for the present experiment. The DSP board is a general-purpose Field Programmable Gate Array (FPGA) board with 4-ch waveform-sampling Analog-to-Digital Converters (ADCs). We developed the FPGA/ADC board in collaboration with Shimafuji Electric, primarily for our experiment but also aiming for broader applications in other projects. As shown in Fig.~\ref{fig:daq}, these boards are vertically stacked using 2.54-mm-pitch board-to-board connectors, forming a standalone data acquisition system within a cube of 10$\times$10$\times$10~cm$^3$, excluding protruding high-voltage power supply connectors. This design was chosen to save the footprint of the system and to reduce the cabling required during fabrication and integration at each observation site. Though the entire DAQ system is compact, it fully implements the analog and digital signal processing required to function as a gamma-ray spectrometer and to autonomously collect data for several months.
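As an example of how an energy scale can be established from the calibration lines listed above, the sketch below fits a linear channel-to-energy relation from the two environmental background lines; the peak positions in ADC channels are assumed values for illustration, since the real centroids depend on PMT gain and amplifier settings:

```python
import numpy as np

# Background lines used for in-situ calibration: 40K (1.46 MeV), 208Tl (2.61 MeV).
lines_mev = np.array([1.46, 2.61])
# Hypothetical fitted peak centroids in ADC channels (illustrative, not measured).
peaks_ch = np.array([520.0, 930.0])

# Linear energy scale E = gain * channel + offset via least squares.
gain, offset = np.polyfit(peaks_ch, lines_mev, 1)

def channel_to_mev(ch):
    """Convert a maximum pulse height (ADC channel) to energy deposit in MeV."""
    return gain * ch + offset
```

In practice such a fit would use peak centroids from Gaussian fits to the background spectrum of each individual sensor assembly, and would be repeated periodically to track gain drifts.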
Since we consider this miniaturized DAQ system one of the key enablers of our multi-point observation campaign, the design of the system is detailed in the following paragraphs. The high-level technical specification of the system is also summarized in Table \ref{tab:daq_spec}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\hsize]{fig3.jpg} \end{center} \caption{External view of the manufactured DAQ subsystem. From top to bottom, the front-end analog signal processing board, the FPGA/ADC board, and the Raspberry Pi computer can be seen. The size of the stacked subsystem is about 10~cm (depth)$\times$10~cm (width)$\times$10~cm (height).} \label{fig:daq} \end{figure} \begin{table}[!h] \caption{Specification of the DAQ system.} \label{tab:daq_spec} \centering \begin{tabular}{cl} \hline \multicolumn{2}{c}{Analog Front-end Board}\\ \hline Function & Specification \\ \hline Amplifier & Custom design. 4~channels. Pass band $\sim$2~$\mu$s. \\ HVPS & OPTON-1.5PA (Matsusada)$\times$2. Up to 1.5~kV.\\ GPS & FGPMMOPA6H.\\ & On-chip patch antenna or external antenna via SMA connector.\\ OLED display & 128$\times$64 pixels. 0.9~inch. I2C.\\ Env. Sensor & Temperature, humidity, pressure with BME280 (Bosch Sensortec). I2C.\\ Stack connector & 20$\times$2 pin header.\\ \hline & \\ \hline \multicolumn{2}{c}{Digital Signal Processing Board}\\ \hline Function & Specification\\ \hline FPGA & Artix-7 XC7A35T-1FTG256C (Xilinx). System clock 50~MHz.\\ GPIO & HVPS output enable, GPS 1PPS and NMEA data reception.\\ & 6 GPIO pins to Raspberry Pi.\\ ADC & AD9231BCPZ-65 (Analog Devices)$\times$2.\\ & 4-ch, 12-bit, 50-Msps sampling. Input range $\pm$5~V.\\ Slow ADC & MCP3208-BI/SL (Microchip). 4-ch, 12-bit sampling. SPI.\\ & 4-ch left for user.\\ Slow DAC & MCP4822-E/MS (Microchip). 2-ch, 12-bit sampling. SPI.\\ & Used for HVPS reference voltage.\\ USB interface & FT2232HL (FTDI Chip). USB Micro-B connector.\\ Temp.
sensor & LM60BIM3 (Texas Instruments).\\ & Provides FPGA and DC/DC converter temperatures.\\ Current sensor & LT6106HS5 (Linear Technology). I2C.\\ & Provides 12~V, 5~V, 3.3~V current consumption.\\ Stack connector & 20$\times$2 pin socket for Analog Front-end Board.\\ & 20$\times$2 pin header for Raspberry Pi.\\ Power & 12~V via 2.1~mm jack.\\ & $\sim7$~W power consumption in the nominal observation mode.\\ Dimension & 9.5$\times$9.5$\times$2.9~cm$^3$.\\ \hline & \\ \hline \multicolumn{2}{c}{Raspberry Pi 3}\\ \hline Function & Specification\\ CPU & Quad-core 1.2-GHz ARM Cortex-A53.\\ RAM & 1~Gigabyte.\\ Storage & 32~Gigabytes, Class 10 SD card.\\ USB & 4~ports.\\ Ethernet & 100~Base Ethernet.\\ \hline \end{tabular} \end{table} \subsubsection{Analog front-end board} The analog front-end board carries high-voltage power supply (HVPS) modules, amplifier chains, a Global Positioning System (GPS) receiver, and an organic light-emitting-diode (OLED) display. The board also implements a combined temperature, pressure, and humidity sensor (BME280) for providing housekeeping information. We selected the OPTON-1.5PA HVPS module from Matsusada Precision for our system because of its small footprint and volume (44$\times$30$\times$16~mm$^3$). The board can carry up to two HVPS modules, and the high-voltage outputs from the modules are routed to two Safe High Voltage (SHV) connectors. The reference voltage signals of the HVPS modules are connected to a 2-ch 12-bit digital-to-analog converter MCP4822-E/MS on the digital signal processing board, so that the output voltages can be flexibly controlled from software on Raspberry Pi via the Serial Peripheral Interface (SPI). The amplifier chain consists of a simple charge-integration amplifier that converts a charge output of the PMTs to a voltage signal, followed by a differentiator-integrator band-pass filter and a linear amplifier. Figure~\ref{fig:amp} shows a circuit diagram of the chain.
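The impulse response of such a shaping chain can be approximated by a simple two-exponential model; the sketch below is our own illustration, with time constants chosen to reproduce the $\sim$1~$\mu$s rise and $\sim$4~$\mu$s fall quoted for the SPICE simulation, not the actual R and C values of the circuit:

```python
import numpy as np

# Illustrative shaping-time constants in microseconds (assumed, not circuit values).
tau_rise, tau_fall = 1.0, 4.0

def shaped_pulse(t):
    """Uni-polar CR-RC-like response to an impulse of charge at t = 0,
    normalized to unit peak height."""
    t = np.asarray(t, dtype=float)
    h = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)
    h[t < 0] = 0.0
    return h / h.max()

t = np.linspace(0.0, 10.0, 501)   # a 10 us window; 50 Msps would give 20 ns steps
pulse = shaped_pulse(t)
t_peak = t[np.argmax(pulse)]      # the peak carries the energy information
```

With these constants the peak occurs roughly 2~$\mu$s after the charge pulse, slow enough that the 50-Msps ADC samples the maximum accurately, which is the design point stated in the text.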
Four copies of the same amplifier chain are implemented. When a pulse of charge with a decay time of $\sim$300~ns is fed from the sensor assembly (BGO and PMT) to the first-stage charge-integration amplifier, an output pulse from the band-pass-filter amplifier should look like a uni-polar pulse with $\sim$1~$\mu$s rise and $\sim$4~$\mu$s fall timescales, as shown in the right panel of the figure. These timescales are sufficiently slow compared with the sampling frequency of the waveform-sampling ADC on the digital signal processing board (see the next section), and therefore the peak pulse height, which is proportional to the energy deposit in the scintillation crystal, can be accurately measured. The OLED display is connected to the Inter-integrated Circuit (I2C) bus of Raspberry Pi via board-to-board connectors, and is controlled by a simple Python program running on Raspberry Pi that displays the status and parameters of the system, such as the observation mode, high-voltage output values, an Internet Protocol (IP) address, and so on. Although the size of the display is small ($\sim$1~inch diagonal) and the resolution is very limited (128$\times$64 pixels), the display turned out to be very helpful in understanding the state of the DAQ system, particularly during outdoor deployment work, thanks to its high visibility. \begin{figure}[htb] \begin{center} \begin{minipage}{0.45\hsize} \includegraphics[width=\hsize]{fig4a.pdf} \end{minipage} \begin{minipage}{0.45\hsize} \includegraphics[width=\hsize]{fig4b.pdf} \end{minipage} \end{center} \caption{Left: A circuit diagram of the analog amplifier chain. Bypass capacitors used for operational amplifiers are not shown in this diagram for simplicity. Right: SPICE-simulated pulse shapes.
Blue and pink voltage waveforms are measured at Test Points 0 and 1 of the circuit diagram, respectively, against a typical charge supplied by a BGO+PMT assembly.} \label{fig:amp} \end{figure} \subsubsection{Digital Signal Processing (DSP) board} The DSP board is a custom-made digitizer consisting of a Xilinx Artix-7 FPGA (XC7A35T-1FTG256C) and two dual-channel 12-bit ADCs (Analog Devices AD9231BCPZ-65) that operate at 50~million samples/s (Msps), temperature and current sensors, a USB interface (FTDI Chip FT2232H), slow ADC/DAC, and DC/DC converters. A custom hardware logic that collects the timing and pulse height of gamma-ray signals in a self-trigger mode was developed in Hardware Description Language (HDL) and programmed to the FPGA. Figure~\ref{fig:fpga_logic} illustrates the high-level block diagram of the FPGA logic. Once the input voltage exceeds the trigger threshold, a predefined number of ADC values are recorded as a ``waveform'' in the Waveform Buffer (typically covering 10~$\mu$s since the trigger), and various properties of the waveform are then computed (the maximum pulse height as well as supplementary data such as ADC values of the first/last/minimum pulse heights, the sample index of the maximum pulse height, and the maximum derivative of waveform values). The maximum pulse height is converted to energy deposit in the crystal in the post processing, and the supplementary data can be used to verify the normal operation of the electronics (PMT, amplifier, ADC) when necessary (see e.g. \cite{Enoto_2017} for use of the supplementary data in addition to the pulse height data). The derived properties are then packed into a data packet structure, stored in the Event Packet Buffer, and read by the data acquisition program running on the single-board computer via USB. The source code is publicly available on our project's online repository\footnote{https://github.com/growth-team/}.
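The self-trigger behaviour of this logic can be modelled in software. The Python sketch below is a simplified model, not the actual HDL: it scans a sample stream, captures a fixed-length waveform when the threshold is crossed, and derives the same kind of per-event properties (maximum pulse height, its sample index, first/last/minimum samples, maximum derivative).

```python
import numpy as np

def self_trigger(samples, threshold, window=500):
    """Simplified software model of the FPGA self-trigger: when a sample
    exceeds `threshold`, capture `window` samples and derive the waveform
    properties. Retrigger holdoff beyond the window is not modelled."""
    events = []
    i, n = 0, len(samples)
    while i + window <= n:
        if samples[i] >= threshold:
            wf = samples[i:i + window]
            events.append({
                "trigger_index": i,
                "max_height": int(wf.max()),        # -> energy deposit
                "max_index": int(wf.argmax()),
                "first": int(wf[0]),
                "last": int(wf[-1]),
                "min_height": int(wf.min()),
                "max_derivative": int(np.diff(wf).max()),
            })
            i += window          # dead until the capture window ends
        else:
            i += 1
    return events
```

In the real logic the maximum pulse height is later converted to energy deposit offline, while the supplementary fields serve as sanity checks on the PMT/amplifier/ADC chain.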
The analog front-end board and Raspberry Pi are connected to the DSP board via two 20$\times$2-pin 2.54-mm pitch connectors placed near two edges of the board. The PMT output signal amplified by the analog front-end board and the 1~PPS/NMEA output from the GPS module are routed to the ADC and the FPGA, respectively, via the connector. The I2C signal, the HVPS reference voltage from the slow DAC, and the output enable signals are also passed through the connector from the DSP board to the analog front-end board. The I2C signals from the analog front-end board, the SPI communication signals of the slow ADC/DAC, and the HVPS output status control signals are connected to Raspberry Pi via the other 20$\times$2 pin header connector. The DSP board was manufactured by Shimafuji Electric, and is available for customers as a general-purpose waveform-sampling ADC/FPGA board. \begin{figure}[htb] \begin{center} \includegraphics[width=0.97\hsize]{fig5.pdf} \end{center} \caption{High-level block diagram of the FPGA logic.} \label{fig:fpga_logic} \end{figure} \subsubsection{Single-board computer} We selected the Raspberry Pi single-board computer as our platform to run software programs that control the HVPS output mode and output voltages, collect gamma-ray event data from the DSP board, read housekeeping data from the house-keeping sensors, and transmit the house-keeping data and status information to the internet. Primary reasons for this selection include its small size (85$\times$56$\times$17~mm$^3$), low price ($<$US\$100 including a power adapter and an SD card), and sufficiently high performance with a quad-core ARM Cortex-A53 processor running at 1.4~GHz and 1~GB of main memory. The data collection program is the only performance-critical program, as its processing speed limits the number of gamma-ray events that the entire detector can record, and is written in C++.
When the detector system is powered on, the program configures the FPGA logic on the DSP board; for example, it sets the enabled ADC channels, trigger threshold values, and the number of waveform samples to be recorded per trigger. When a data collection is started, the program continuously reads the data stored on the Event Packet Buffer of the DSP board, and saves the data to a file as an event list; supported output file formats are CERN/ROOT \cite{BrunRademakers_1997} and FITS \cite{Wells_1981}. Ruby and Python are used for other non-performance-critical programs to expedite the development by leveraging existing software libraries provided in the ecosystems of these scripting languages, such as an OLED display controller library, a digital general-purpose input/output library, and so on. The process monitoring framework God\footnote{http://godrb.com/} was used to run these programs as resident processes (so-called daemons) after power on. Configurations of the programs, such as HVPS output voltages, trigger threshold, enabled ADC channels, and data collection mode (i.e. whether to start data collection and HVPS output automatically after boot), are stored in a file on the non-volatile memory (micro SD card) along with the programs and the Linux operating system. Data collected by the programs, for example gamma-ray event-list data and house-keeping data, are also stored on the micro SD card. An external flash memory disk connected via Universal Serial Bus (USB) was also used as backup storage, and the data are regularly copied from the micro SD card to the external disk. \subsection{Telecommunication subsystem} Raspberry Pi in the DAQ subsystem was connected via Ethernet to a mobile WiFi router (Aterm MR04LN from NEC) that was connected to the internet over a cellular network.
Due to the stringent monthly data limitation (1~GB per month) of the cellular plan that was allowed by the research grant expenditure regulation, it was infeasible to transfer all the gamma-ray event-list data, which amount to $\sim$5--10~GB per month, to a remote data-storage server. Therefore, the connectivity was primarily used to transmit the low-data-rate telemetry sent every 300~s and a digest report of gamma-ray data, such as binned count-rate histories and time-integrated energy spectra. The telemetry data were sent to a cloud-based database, and this allowed centralized monitoring of the status of the distributed detector systems using a web-browser-based data visualization tool. Figure~\ref{fig:telemetry_viewer} shows an example screenshot of the temperature telemetry. During observation campaigns, occasional stoppages of Raspberry Pi, which are thought to arise from instantaneous AC power failures, were noticed as an absence of telemetry data, enabling prompt actions such as power cycling (reboot) by local support personnel. The digest reports of gamma-ray data were also useful in rapidly identifying gamma-ray enhancement events originating from thundercloud and/or lightning; when an enhancement event candidate was noticed, we logged in to Raspberry Pi remotely via ssh and manually transferred a limited number of data files for in-depth analyses. Having ``bi-directional'' connectivity to individual detector systems thus helped day-to-day operation during observation campaigns, and also contributed to reducing the latency between observation and data analysis and to expediting publication of the data. If we were unable to retrieve data remotely, the time/human-resource/financial costs of frequent data retrieval, for example once per month, would have been impractically expensive.
Therefore, we consider that the costs of installing a mobile WiFi router ($\sim$US\$100) and purchasing a cellular data plan for each detector system ($\sim$US\$10 per month) have been well paid off. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{fig6.png} \end{center} \caption{Web-browser-based telemetry visualization tool. This example screenshot shows the temperature of the DC/DC module on the FPGA/ADC board over a 4-day period in February 2018. The top and the bottom panels are for detector systems deployed in Niigata Prefecture and Ishikawa Prefecture, respectively. } \label{fig:telemetry_viewer} \end{figure} \subsection{Mechanical structure} For operating detectors in outdoor environments where snow and sea wind are the norm, we selected the Takachi Electronics Enclosure's water-proof and dust-tight plastic enclosure family BCAR as containers of our detector systems. The dimensions of a standard enclosure we used are 35$\times$45$\times$20~cm$^3$. A water-proof power connector was attached to one side of an enclosure to pass through AC 100~V power. When integrating an entire detector system, a sensor assembly (or multiple of them, depending on configuration), a DAQ subsystem, and a telecommunication subsystem were screw-mounted on an aluminum base plate for ruggedization, and then the base plate was screw-mounted to the base of a water-proof enclosure, as shown in Fig.~\ref{fig:enclosure}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\hsize]{fig7.jpg} \end{center} \caption{Interior of the water-proof enclosure. The BGO-based sensor assembly wrapped in pink bubble wrap and the DAQ subsystem are located in the center and in the left bottom corner, respectively. A small black module adjacent to the sensor assembly is a mobile WiFi router.
A power strip and AC adapters for the DAQ subsystem and the mobile WiFi router are placed in the top right corner.} \label{fig:enclosure} \end{figure} \section{Calibration and offline data analysis} After each observation campaign in winter, the data stored on each detector system are retrieved, and the energy and timing calibrations are applied as detailed in \S\ref{sec:energy_cal} and \S\ref{sec:timing_cal}. Based on the energy- and time-calibrated data, gamma-ray enhancement events, both long- and short-duration ones, are searched for using a count-history-based algorithm that is described in \S\ref{sec:search_algorithm}. \subsection{Energy scale calibration}\label{sec:energy_cal} During outdoor observations, the energy scale changes over time as the ambient temperature and the temperature of the scintillation crystal vary. Instead of actively compensating for this change by, for example, dynamically adjusting the PMT or the analog amplifier gain, we let the detector operate at a pre-determined fixed gain, and corrected the energy scale in the offline analysis. The correction was made by fitting the prominent gamma-ray lines seen in the environmental background radiation spectrum, such as the lines at 1.46~MeV ($^{40}$K) and 2.61~MeV ($^{208}$Tl), in the ADC-channel space, and by constructing the best-fit linear function which returns energy in MeV for a given ADC channel. During the outdoor observation campaign, the temperature of the detector system (measured on the DSP board) varied between 25--60~$^{\circ}$C (the high temperatures occurred under clear skies, due to heating by direct sunlight). Even with this temperature variation, the energy scale did not change significantly; typical shifts of the 1.46~MeV ($^{40}$K) and 2.61~MeV ($^{208}$Tl) peak centers in the ADC channel (i.e. the raw voltage value before energy-scale correction) were less than 3\% and 4\%, respectively.
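A minimal sketch of this two-line linear calibration is given below; the peak-center channel values are made-up illustrations standing in for the fitted $^{40}$K and $^{208}$Tl peak positions in a real ADC-channel spectrum.

```python
import numpy as np

# Known line energies (MeV) of the environmental background lines.
line_energies = np.array([1.46, 2.61])      # 40K, 208Tl

# Peak centers in ADC channels, as obtained from fitting the two lines
# in the ADC-channel spectrum (values below are illustrative only).
peak_channels = np.array([598.0, 1069.0])

# Best-fit linear function: energy [MeV] = gain * channel + offset.
gain, offset = np.polyfit(peak_channels, line_energies, 1)

def channel_to_mev(ch):
    """Return energy in MeV for a given ADC channel."""
    return gain * ch + offset
```

In practice the resulting scale is then validated against an independent line, such as the 0.609~MeV $^{214}$Bi line visible during precipitation.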
The 0.609~MeV line from $^{214}$Bi, which is clearly visible when there is precipitation, was used to validate the derived energy scale, and it was confirmed that the accuracy of the linear function is better than 2\% at 0.609~MeV for the BGO scintillation crystals. An example count history is shown in Fig.~\ref{fig:background_timehistory}, and spectra of the environmental background radiation during fair weather and precipitation are plotted in Fig.~\ref{fig:background_spectra}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\hsize]{fig8.pdf} \end{center} \caption{Example count history of the environmental gamma-ray background recorded by one of the BGO-based detectors in Ishikawa in February 2019 in the $> 400$~keV (top panel) and the $> 3$~MeV (bottom panel) energy bands. Three days' worth of data are shown. The count rate of the lower energy band (top panel) varies significantly after time $10^5$~s due to gamma rays from radioisotope washout by precipitation, while that of the $> 3$~MeV band stays almost constant. Blue and red rectangles indicate time periods of fair weather and intermittent rain, of which energy spectra are shown in Fig.~\ref{fig:background_spectra}. } \label{fig:background_timehistory} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\hsize]{fig9.pdf} \end{center} \caption{Energy spectra of the environmental background extracted from the time periods without precipitation (blue) and with precipitation (red), as indicated in Fig.~\ref{fig:background_timehistory}. Statistical errors are plotted in the figure, but are hardly visible due to the high counting statistics.} \label{fig:background_spectra} \end{figure} \subsection{Time assignment}\label{sec:timing_cal} The analog front-end board carries a single-frequency GPS receiver with an external patch antenna.
The navigation message output and the 1~pulse-per-second (PPS) signal of the module are routed to the FPGA on the DSP board via the stacking connector. On the rising edge of the 1~PPS signal, the FPGA logic registers the absolute time information in the navigation message along with a value of the free-running 48-bit time counter, which is incremented at 100~MHz. The registered information is read by the DAQ software every 30~s and stored in an output data file. The information is then used by the offline data processing pipeline to assign absolute time to each gamma-ray pulse, which is recorded with a (free-running) time-counter value at trigger (i.e. when its pulse height exceeded the threshold value). Figure~\ref{fig:clock_fluctuation} shows an example of minute-scale variation of the local clock reconstructed based on the recorded GPS-based absolute time and the free-running time counter. When the receiver tracks a sufficient number of GPS satellites, the accuracy of the 1~PPS signal generated by the module is reported to be $\sim$10~ns based on its data sheet. The ADC sampling rate (50~MHz) limits the time resolution of the trigger time of each gamma-ray pulse signal to 20~ns. The time scale of scintillation photon emission (de-excitation) in the scintillation crystals ($\sim$a few hundred ns to 1~$\mu$s depending on the crystal) and that of the band-pass-filtered pulse ($\sim$2~$\mu$s) are longer than the 1~PPS timing accuracy and the ADC sampling interval, and jitter of these components could potentially worsen the overall time accuracy. However, a time-correlation study between our gamma-ray measurements and radio-frequency observations (for example, \cite{Wada_2019_commphys}) confirmed that an absolute time accuracy better than 1~$\mu$s is achieved in this GPS-supported time assignment mode. Occasionally, the GPS receiver did not generate a navigation solution (thus no time information) due to a low number of satellites in the field of view.
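When 1~PPS registrations are available, the offline counter-to-absolute-time mapping can be sketched as follows. The counter and time values below are made-up illustrations, and 48-bit counter wrap-around is not handled in this sketch.

```python
import numpy as np

def build_time_model(counter_at_pps, gps_seconds):
    """Fit counter value versus GPS time with a linear model; the slope
    absorbs the drift of the free-running 100-MHz local oscillator.
    Counter wrap-around (48-bit) is ignored in this sketch."""
    slope, intercept = np.polyfit(counter_at_pps, gps_seconds, 1)
    return lambda counter: slope * counter + intercept

# Illustrative registrations: a slightly drifting oscillator
# (50 extra counts per nominal 1e8 counts each second).
pps_counters = np.array([0.0, 100_000_050.0, 200_000_100.0, 300_000_150.0])
pps_times = np.array([0.0, 1.0, 2.0, 3.0])   # GPS seconds since an epoch

to_gps = build_time_model(pps_counters, pps_times)
trigger_time = to_gps(150_000_075.0)         # counter latched at trigger
```

This mapping requires valid navigation solutions; when none is available it cannot be built and a fallback time source is needed.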
In such a case, the pulse trigger time was converted to absolute time based on the system time of Raspberry Pi, which was synchronized to a public NTP server via the cellular network. The absolute accuracy of time assignment in this mode is thought to be on the order of 10--100~ms, depending mostly on the round-trip time of the cellular network. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\hsize]{fig10.pdf} \end{center} \caption{Top: Typical relation of a counter incremented by the free-running 100-MHz clock fed to FPGA versus GPS time over 30~min (one observation interval). Middle: The same counter value, but with the best-fit linear model being subtracted to visualize the time-variable drift of the local oscillator. A counter value of 1000 corresponds to 10~$\mu$s. Bottom: Temperature measured on the FPGA board.} \label{fig:clock_fluctuation} \end{figure} \subsection{Search of TRB events}\label{sec:search_algorithm} Gamma-ray enhancement events are usually found as an excess above the environmental background gamma-ray radiation, while the background itself is also variable. As shown in Figures \ref{fig:background_timehistory} and \ref{fig:background_spectra}, the background gamma-ray count rate ($\sim$6.5~counts~s$^{-1}$) varies significantly below 3~MeV depending on the presence of precipitation, and this variation can lower the sensitivity of the search. In contrast, the $>3$~MeV energy range is dominated by cosmic-ray-induced signals, whose count rate is almost stable and lower than that of the $<3$~MeV range.
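The resulting excess search on the stable $>3$~MeV band can be sketched in Python as follows; this is a simplified sketch of the procedure, using the nominal 10-s bin width and 5-$\sigma$ threshold of our batch analyses.

```python
import numpy as np

def search_trb(event_times_s, bin_width=10.0, threshold_sigma=5.0):
    """Flag a 30-min data chunk whose maximum >3 MeV bin count exceeds
    the chunk mean by more than threshold_sigma standard deviations.
    `event_times_s` are event times (s) within the 30-min chunk."""
    edges = np.arange(0.0, 1800.0 + bin_width, bin_width)
    counts, _ = np.histogram(event_times_s, bins=edges)
    mean, std = counts.mean(), counts.std()
    significance = (counts.max() - mean) / std
    flagged = bool(significance > threshold_sigma)
    return significance, flagged, counts

# Illustration: uniform background plus an injected glow-like excess.
rng = np.random.default_rng(0)
bg = rng.uniform(0.0, 1800.0, size=1800)       # ~1 count/s background
burst = rng.uniform(900.0, 930.0, size=300)    # 30-s excess
sig, flagged, _ = search_trb(np.concatenate([bg, burst]))
```

Flagged chunks are then passed to human examination, as in the nominal batch analyses.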
Therefore, to lower the contamination from the low-energy ($<3$~MeV) time-variable background signal, and to increase the signal-to-noise ratio, we implemented the following processes in the search algorithm: 1) a count-rate history of photons with energies above 3~MeV is generated for each 30-min data chunk; 2) the 30-min count-rate history is further binned into a histogram, and a standard deviation is computed; 3) after the mean count rate is subtracted, the maximum count rate in the 30-min data chunk is divided by the standard deviation to derive a ``significance'' value; and 4) a potential TRB event is reported when the ``significance'' exceeds a threshold value. In the nominal batch analyses, we used a time bin width of 10~s and a significance threshold of 5~standard deviations; i.e. when a count-history bin contains gamma-ray counts more than 5~$\sigma$ above the mean of the histogram, the bin is flagged for further examination by humans. To illustrate this event search process, Fig.~\ref{fig:trb_search} presents two example 10-second-binned count histories, one with no significant count increase, and the other with a gamma-ray glow detected. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\hsize]{fig11a.pdf} \includegraphics[width=0.9\hsize]{fig11b.pdf} \end{center} \caption{ Example 24-hour count histories of photons with energies above 3~MeV, without (top panels) and with (bottom panels) count rates exceeding the event detection threshold. The histograms in the right panels show the distribution of the count rate, with the event-detection threshold shown as red dashed lines. Top and bottom rows show data of December 1st and December 6th, 2016, respectively, obtained by a detector deployed in Komatsu City, Ishikawa Prefecture.
} \label{fig:trb_search} \end{figure} \section{Results} \subsection{Observation campaign} In 2015, we developed 4 prototype detectors and started a multi-point observation campaign in Kanazawa City, Ishikawa Prefecture, with 3 detectors deployed in the city. The detectors were installed on the rooftops of buildings at the observation sites, as exemplified in Fig.~\ref{fig:deployed_detector}. In later years, we increased the number of detectors and deployed them at more observation sites in Ishikawa and Niigata Prefectures. Figure~\ref{fig:install_map} presents the locations of the observation sites. Table \ref{tab:deployment_history} and Fig.~\ref{fig:deployment_hist} summarize the number of detectors that were deployed during the annual observation campaigns since 2015. An annual observation campaign typically extends over 5 to 6 months from October or November to March of the following year; for example, the 2016 observation campaign started in November 2016 and ended in March 2017. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{fig12.jpg} \end{center} \caption{Photograph of a detector system deployed on the roof of one of the observation sites.} \label{fig:deployed_detector} \end{figure} \begin{table}[!h] \caption{The number of detectors deployed in each observation campaign and each observation area, and the duration of each observation campaign in days.} \label{tab:deployment_history} \centering \begin{tabular}{cccccc} \hline Prefecture & Area & \multicolumn{4}{c}{Year}\\ & & 2015 & 2016 & 2017 & 2018\\ \hline Ishikawa & Kanazawa & 3 & 3 & 6 & 9\\ & Komatsu & 0 & 2 & 2 & 2\\ & Suzu & 0 & 1 & 0 & 0\\ Niigata & Kashiwazaki & 0 & 4 & 4 & 4\\ \hline \multicolumn{2}{c}{Duration (days)} & 94 & 189 & 127 & 141\\ \multicolumn{2}{c}{Total (days)} & \multicolumn{4}{c}{551}\\ \hline \end{tabular}\\ \end{table} \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{fig13.pdf} \end{center} \caption{ Locations of the observation sites of the experiment.
} \label{fig:install_map} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\hsize]{fig14.pdf} \end{center} \caption{Number of detectors deployed in each observation area.} \label{fig:deployment_hist} \end{figure} \subsection{Number of detected TRB events} We have applied the event search algorithm described in Section \ref{sec:search_algorithm} to the data collected through the observation campaigns in the past 4 winter seasons (late 2015 to early 2019), and detected 46 long-duration bursts and 5 short-duration bursts. The two short-duration bursts detected in the 2017 campaign occurred during simultaneously observed long-duration bursts. Figure~\ref{fig:trb_hist} presents a yearly histogram of the detected events. \begin{figure}[htb] \begin{center} \includegraphics[width=0.5\hsize]{fig15a.pdf} \includegraphics[width=1\hsize]{fig15b.pdf} \end{center} \caption{ Top panel: Number of TRB events detected in each observation campaign. Blue and red rectangles correspond to long- and short-duration TRBs, respectively. Bottom panels: The same as the top panel, but events detected in each observation area are shown separately.} \label{fig:trb_hist} \end{figure} \subsection{Multi-point detection of TRB} The primary objective of the present experiment is to measure TRB events (both short- and long-duration gamma-ray bursts) with multiple detectors located at different sites, to study in detail the physical extent of the gamma-ray emitting region in the cloud and its potential temporal/spatial variability, as well as the movement of the cloud. In fact, 14 of the detected TRB events were simultaneously observed by multiple detectors. For example, Fig.~\ref{fig:komatsu_count_history} shows a gamma-ray glow event detected by two detectors in Komatsu City, Ishikawa Prefecture at $\sim$17:54:00$-$18:00:00 of December 7th, 2016 (UTC).
In this event, the detector at Komatsu High School first detected enhanced gamma-ray counts starting at $\sim$17:54:00 and ending at $\sim$17:58:00. A minute later, at around 17:55:00, a similar count-rate increase was recorded by the detector at Science Hills Komatsu (the art science museum), and lasted until 18:00:00. These 3--15~MeV count-rate time profiles are well described by a gaussian function plus a constant (corresponding to the background signal), \begin{equation} f(t) = a\times\exp\left(-\frac{(t-b)^2}{c^2}\right) + d~[\mathrm{counts~s}^{-1}]\label{eq:count_history} \end{equation} where $t$ is time, and $a$, $b$, $c$, and $d$ are a normalization factor, a peak-center time, a width of the gaussian, and the environmental background count rate, respectively. Table \ref{tab:komatsu_count_history} lists the best-fit parameters. From the normalization factors and the widths, the total counts of gamma rays from the gamma-ray glow in the 3--15~MeV band are estimated to be $\sim755\pm36$~counts and $\sim3310\pm58$~counts at Komatsu High School and Science Hills Komatsu, respectively. The separation of the two gaussian centers is $114\pm3$~s. Errors are 1 standard deviation. \begin{figure}[htb] \begin{center} \includegraphics[width=\hsize]{fig16.pdf} \end{center} \caption{ Gamma-ray count-rate time histories recorded by our detectors at Komatsu High School (black filled circles) and Science Hills Komatsu (red filled circles) in the 3--15~MeV energy band, with a bin size of 10~s. Error bars show statistical errors. Solid black and red curves are the best-fit ``gaussian + constant'' model functions (Eq.~\ref{eq:count_history}). } \label{fig:komatsu_count_history} \end{figure} \begin{table}[!h] \caption{Best-fit parameters of the count-rate time-profile model (Eq.~\ref{eq:count_history}).
Errors are 1 standard deviation, and count rates are in the 3--15~MeV energy band.} \label{tab:komatsu_count_history} \centering \begin{tabular}{cccccc} \hline Location & $a$ & $b$ & $c$ & $d$ & $\chi^2$~(n.d.o.f.)$^{1}$\\ & counts~s$^{-1}$ & UTC & s & counts~s$^{-1}$ \\ \hline Komatsu High School & $6.7\pm0.4$ & 17:56:26$\pm4$ & $66\pm4$ & $2.0\pm0.1$ & 143.2 (133)\\ Science Hills Komatsu & $35.6\pm0.8$ & 17:58:19$\pm1$ & $52\pm1$ & $2.7\pm0.1$ & 148.8 (133)\\ \hline \end{tabular}\\ $^{1}$ Chi-square value of the fit and the number of degrees of freedom in parentheses. \end{table} The locations of the two detectors deployed at Komatsu High School and Science Hills Komatsu are plotted in Fig.~\ref{fig:komatsu_radar}, with radar echo images taken during this time period overlaid. The straight-line distance between the two sites is 1.36~km. By tracking the movement of the precipitation feature in the radar image, we estimated a wind speed of $10.9\pm1.2$~m~s$^{-1}$ and the wind direction shown in Fig.~\ref{fig:komatsu_map}. The wind direction is consistent with a hypothesis that a gamma-ray emitting region in the thundercloud was moving from west northwest to east southeast, first traveling over Komatsu High School and arriving at Science Hills Komatsu afterwards. Based on the estimated wind speed ($10.9\pm1.2$~m~s$^{-1}$) and the distance measured along the wind (1.20~km), a hypothetical travel time of the gamma-ray emission region is estimated to be $110\pm12$~s. This value is consistent within errors with the peak-time difference based on the gaussian fitting ($114\pm3$~s), and therefore we consider that the wind speed and direction estimated from the radar images are sufficiently accurate to be used in interpreting the temporal and geometrical aspects of this particular gamma-ray glow event. As mentioned above, the total gamma-ray count at Science Hills Komatsu is larger than that at Komatsu High School by a factor of 4.4.
Based on this, combined with the wind direction, we infer that Science Hills Komatsu was (laterally) closer to the electron acceleration region in the thundercloud and observed less-attenuated gamma rays than the other site. The high counting statistics of the Science Hills Komatsu data allowed us to extract an energy spectrum of the gamma-ray glow event, as shown in Fig.~\ref{fig:komatsu_spectrum}. The spectrum of the glow event was extracted from the time range 17:56:30--18:00:00 (UTC). The environmental background signals were extracted using two 60-s chunks of data before and after the glow event, and were subtracted from the spectrum of the glow event. Based on previous spectral studies \cite{Tsuchiya_2011}, we characterized the spectral shape by fitting it with a power law with an exponential cutoff: \begin{equation} f(E) = N \times E^{-\Gamma} \exp(-E/E_\mathrm{c})\label{eq:cutoffpl}~\mathrm{photons}~\mathrm{cm}^{-2}~\mathrm{s}^{-1}~\mathrm{MeV}^{-1} \end{equation} where $E$ is the gamma-ray energy in MeV, and $N$, $\Gamma$, and $E_\mathrm{c}$ are a normalization factor in photons~cm$^{-2}$~s$^{-1}$~MeV$^{-1}$, a power-law photon index, and the e-folding (cutoff) energy, respectively. An energy response function of the detector was generated based on a Monte-Carlo simulation using the particle transport framework \texttt{Geant4} \cite{Agostinelli_2003,Allison_2006,Allison_2016}, and was convolved with the model function during the fitting, which was performed in the detector count-rate space. The $\chi^2$ value, computed as the sum of the squared differences between the model and the data divided by the statistical errors, was minimized using the Levenberg-Marquardt algorithm in the SciPy software package.
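The fitting step of Eq.~\ref{eq:cutoffpl} can be sketched with SciPy as follows. The detector response convolution is omitted and the spectrum below is synthetic, so this illustrates only the model and the Levenberg-Marquardt minimization, not the actual analysis chain.

```python
import numpy as np
from scipy.optimize import curve_fit  # Levenberg-Marquardt for unbounded fits

def cutoff_pl(E, N, gamma, Ec):
    """Power law with an exponential cutoff (cf. Eq. for f(E));
    E in MeV, N in photons cm^-2 s^-1 MeV^-1."""
    return N * E**(-gamma) * np.exp(-E / Ec)

# Synthetic spectrum generated around plausible parameter values
# (NOT the measured data; for illustration of the fit only).
E = np.linspace(3.0, 15.0, 50)
truth = cutoff_pl(E, 0.16, 0.26, 4.1)
rng = np.random.default_rng(1)
err = 0.05 * truth                      # assumed 5% statistical errors
data = truth + rng.normal(0.0, err)

popt, pcov = curve_fit(cutoff_pl, E, data, p0=[0.1, 0.5, 5.0],
                       sigma=err, absolute_sigma=True)
chi2 = np.sum(((data - cutoff_pl(E, *popt)) / err) ** 2)  # n.d.o.f. = 50 - 3
```

In the real analysis the model is first folded through the \texttt{Geant4}-derived response so that the $\chi^2$ is evaluated in the detector count-rate space.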
With the best-fit model parameters listed in Table \ref{tab:komatsu_spectrum}, the model reproduced the data reasonably well with no particular structure in the fit residual (middle panel of Fig.~\ref{fig:komatsu_spectrum}), with a null hypothesis probability of 7.5\%. When the same spectrum was fitted with a simple power law, a significant ``convex''-shaped systematic residual was seen, with a large (unacceptable) $\chi^2$ value of 450 for 47~degrees of freedom, supporting the presence of a spectral cutoff feature. Because electron bremsstrahlung is thought to be the primary emission process in the gamma-ray glow, the cutoff energy should be closely related to the maximum energy of the accelerated electrons (see e.g. \cite{Dwyer_2012a} for a detailed Monte-Carlo simulation study of the acceleration and the emission processes). In addition, we anticipate that statistical analyses of the spectral shapes and their temporal evolution based on multi-point observation data will allow us to better constrain the properties of the electron acceleration (electric-field strength, lateral extent of the acceleration region), and we plan to publish a consolidated result elsewhere. \begin{figure}[htb] \begin{center} \includegraphics[width=\hsize]{fig17.pdf} \end{center} \caption{ The locations of the two observation sites in Komatsu, Ishikawa Prefecture (filled circles). Precipitation intensity maps obtained by XRAIN are shown for four five-minute intervals of December 7th, 2016 (UTC). } \label{fig:komatsu_radar} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\hsize]{fig18.pdf} \end{center} \caption{ Close-up view of the aerial photograph of Komatsu. Filled circles indicate the observation sites. The magenta arrow indicates the wind direction estimated from the radar image analysis.
The white double-headed arrow indicates the hypothesized shortest distance traveled by the brightest part of the gamma-ray emission region, which yielded the highest peaks in the two count-rate histories. } \label{fig:komatsu_map} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\hsize]{fig19.pdf} \end{center} \caption{ Top panel: Gamma-ray energy spectrum of the gamma-ray glow event recorded at Science Hills Komatsu (red filled circles). The solid red curve is the best-fit power law with an exponential cutoff (Eq.~\ref{eq:cutoffpl}). The model is convolved with the detector's energy response function so that the fitting is performed in the detector count-rate dimension. Middle panel: Fit residuals computed as (data $-$ model)/error. Bottom panel: The same best-fit model function as that of the top panel, but without being convolved with the detector energy response function. Note that the ordinate is in units of photons~cm$^{-2}$~s$^{-1}$~MeV$^{-1}$, which represents the photon flux arriving at the detector. } \label{fig:komatsu_spectrum} \end{figure} \begin{table}[!h] \caption{Result of the energy-spectral model fitting with a power law with an exponential cutoff (Eq.~\ref{eq:cutoffpl}) to the Science Hills Komatsu data.
Errors are at the 90\% confidence level.} \label{tab:komatsu_spectrum} \centering \begin{tabular}{ccccc} \hline N & $\Gamma$ & $E_\mathrm{c}$ & Energy flux$^{1}$ & $\chi^2$~(n.d.o.f.)$^{2}$ \\ ph~cm$^{-2}$~s$^{-1}$~MeV$^{-1}$ & & MeV & MeV~cm$^{-2}$~s$^{-1}$ & \\ \hline $0.158^{+0.015}_{-0.016}$ & $0.26^{+0.14}_{-0.15}$ & $4.10^{+0.51}_{-0.33}$ & 1.18 & 60.5 (46)\\ \hline \end{tabular}\\ $^{1}$ Energy flux in the 3--15~MeV energy band.\\ $^{2}$ Chi-square value of the fit and the number of degrees of freedom in parentheses.\\ \end{table} \clearpage \section{Science highlights} In this section, we review new findings and advancements in our understanding of high-energy radiation from lightning and thunderclouds, based on our publications utilizing data collected with our detector system. \subsection{Photonuclear reaction triggered by a downward TGF} Enoto et al. \cite{Enoto_2017} reported a sub-millisecond intense gamma-ray flash (downward TGF) and a subsequent short-duration gamma-ray burst lasting for $\sim200$~ms, recorded on February 6th, 2017, at our observation site in the Kashiwazaki-Kariwa nuclear power station in Niigata Prefecture. As shown in Fig. \ref{fig:enotoeal_spectra}, the energy spectrum of the short-duration burst consisted of an extremely ``flat'' or ``hard'' continuum (a photon index $\Gamma\sim0.5$ when fitted with a power-law function $N \times (E/1~\mathrm{MeV})^{-\Gamma}$, where $N$ and $E$ are the normalization factor and the gamma-ray energy, respectively), associated with an abrupt cutoff at $\sim$10~MeV. These features made the spectrum look very different from typical energy spectra of bremsstrahlung emission seen e.g. in typical gamma-ray glows (e.g. Fig.~\ref{fig:komatsu_spectrum}). The short-duration burst was followed by a minute-lasting gamma-ray burst. The energy spectrum of this distinctive emission, in turn, consisted predominantly of the electron-positron annihilation gamma-ray line at $511$~keV and its Compton-scattered continuum.
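As a quick numerical cross-check of Table~\ref{tab:komatsu_spectrum}, the best-fit glow model can be integrated over the 3--15~MeV band. The sketch below assumes the standard cutoff power-law form $N(E/1~\mathrm{MeV})^{-\Gamma}\exp(-E/E_\mathrm{c})$ for Eq.~\ref{eq:cutoffpl} (the equation itself lies outside this excerpt) and omits the detector response convolution:

```python
import math

# Best-fit parameters from Table (tab:komatsu_spectrum).  The functional form
# N*(E/1 MeV)^(-Gamma)*exp(-E/Ec) is the standard cutoff power law and is
# assumed here; the detector response convolution used in the fit is omitted.
N, Gamma, Ec = 0.158, 0.26, 4.10   # ph cm^-2 s^-1 MeV^-1, dimensionless, MeV

def photon_flux(E_MeV):
    """Photon flux density at the detector, ph cm^-2 s^-1 MeV^-1."""
    return N * E_MeV**(-Gamma) * math.exp(-E_MeV / Ec)

def energy_flux(E_lo=3.0, E_hi=15.0, n=2000):
    """Energy flux in MeV cm^-2 s^-1, trapezoidal integration of E*f(E)."""
    h = (E_hi - E_lo) / n
    xs = [E_lo + i * h for i in range(n + 1)]
    ys = [E * photon_flux(E) for E in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

print(energy_flux())   # should land close to the tabulated 1.18 MeV cm^-2 s^-1
```

The integral lands close to the tabulated 3--15~MeV energy flux; small differences are expected from parameter rounding and the neglected response convolution.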
After extensive spectral, temporal, and simulation studies, we showed unequivocally that a lightning discharge emitted a huge amount of energetic ($>10$~MeV) gamma rays, and that neutrons were produced via atmospheric photonuclear reactions (such as $\gamma + ^{14}$N $\to$ $^{13}$N$+$n). The short-duration burst was well interpreted as a superposition of nuclear gamma-ray lines emitted from nuclei that underwent neutron capture, and the peculiar minute-lasting annihilation gamma-ray line emission was explained as a result of the $\beta^{+}$ decay of unstable nuclei (again, produced via the photonuclear reaction). Production of neutrons via the photonuclear reaction had been suggested by observational results \cite{Gurevich_2012,Chilingarian_2012b,Bowers_2017} and theoretical studies \cite{Babich_2006,Babich_2007,Carlson_2010}, and there have been multiple reports on potential detection of neutron signals from thundercloud- and lightning-related high-energy radiation (for a complete reference list, see \cite{Enoto_2017}). Our observation provided multi-point time-resolved data that confirm a) an intense gamma-ray flash that produced neutrons via photonuclear reactions, b) the presence of unbound neutrons (via gamma-ray lines from neutron capture), and c) 511~keV annihilation lines from $\beta^{+}$-decay radioisotopes generated by the photonuclear reaction. These formed the first comprehensive observational evidence of such an exotic photonuclear reaction happening in the Earth's dense atmosphere. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\hsize]{fig20.pdf} \end{center} \caption{Observed gamma-ray spectra of the short-duration gamma-ray burst (black filled circles) and the model spectrum constructed from the Monte-Carlo simulation of neutron-induced nuclear gamma rays (green dash-dotted, blue dashed, and purple dotted curves).
The red solid curve shows the sum of the individual model components.} \label{fig:enotoeal_spectra} \end{figure} \subsection{Physical properties of downward TGF} On November 24th, 2017, three of our detectors deployed in Niigata, Japan, detected four bunches of intense short-duration ($\ll1$~ms) gamma-ray flashes (TGFs), followed by exponentially-decaying $\sim200$-ms signals, which are, again, considered to be a result of photonuclear reactions (gamma-ray signals from de-excitation of isotopes generated via neutron capture). We analysed time-resolved gamma-ray signals from our detectors, integrated radiation doses measured by argon ionization chambers, and low-frequency (LF) radio observations. Our scintillation-crystal based detectors were heavily saturated by the intense gamma-ray signals from the four pulses of the downward TGFs, and therefore could provide neither the total number of gamma rays that entered the detectors nor spectral information on the TGFs. Nevertheless, we were able to derive the arrival times of the four TGF events with an accuracy of $\sim200$~$\mu$s. Comparison of these TGF event times against LF time-series data showed a clear correlation between the TGFs and positive unipolar pulses (first and second gamma-ray flashes) or bipolar pulses (third and fourth ones). Compared to a scintillation-crystal based photon counter, an ionization chamber is much more tolerant to high-flux radiation for approximately the same effective area, because it measures the amount of integrated ionization at the cost of fine time and energy resolution. In the TGF event in question, the ionization chambers successfully provided accurate dose information at 5 locations (400--1900~m horizontally from the estimated location of the TGF), as anticipated. These dose data, combined with a Monte-Carlo simulation of gamma-ray emission and propagation in the atmosphere, were used to estimate the altitude of the electron acceleration to be 2.5$\pm$0.5~km above sea level.
Based on the altitude and the measured radiation dose, the total number of avalanche electrons ($>1$~MeV) was computed to be $8^{+8}_{-4}\times10^{18}$, which is approximately in the same range as the numbers of accelerated electrons estimated from space-based observations of upward TGFs ($4\times10^{16}$--$3\times10^{19}$ by \cite{Mailyan_2016}), while many of the TGFs observed from space are thought to originate at altitudes higher than 8~km \cite{Cummer_2015,Mailyan_2016}. \subsection{The end of gamma-ray glow from thundercloud} One of the key questions the GROWTH project is set to answer is how a stable electron-accelerating region forms, evolves over time, and disappears in a thundercloud, in other words, the life cycle of the source of a gamma-ray glow. Concerning the closing phase of the life cycle, multiple previous measurements reported abrupt terminations of gamma-ray glows that coincided with lightning discharges (\cite{Tsuchiya_2013,Chilingarian_2017,Chilingarian_2020} and references therein). To reveal the precise relationship between lightning discharges and the cessation of gamma-ray glows, Wada et al. \cite{Wada_2019_prl} analysed an abrupt-termination event observed in Ishikawa Prefecture on February 11th, 2017, by combining gamma-ray data collected by our detector and by one from the GODOT project \cite{Bowers_2017}, as well as LF data collected by multiple receivers located $\sim50$~km from the gamma-ray observation site. Although there have been previous reports of abruptly-terminated gamma-ray glows coinciding with radio frequency observations of the lightning discharges that triggered the termination \cite{Chilingarian_2017}, the single-site nature of the radio measurements did not allow a detailed position and time correlation study between gamma-ray glows and lightning discharges.
However, in our study \cite{Wada_2018}, a multi-site LF observation provided, for the first time, a fine time- and position-resolved structure of an intercloud/intracloud lightning discharge that coincided with the gamma-ray glow termination and extended over a $\sim60$~km lateral area with a 300~ms duration. Temporal association of the LF data with the gamma-ray data revealed that the termination happened when a stepped leader of the lightning discharge passed over the gamma-ray observation site at a horizontal distance of $0.7$~km. Since the discharge started, prior to the abrupt termination of the gamma rays, about 15~km away from the gamma-ray observation site, the causality in this event is obvious: the lightning discharge affected the electric field structure and effectively disabled the acceleration. Still, due to the long distance between the event and the LF observation sites ($\sim50$~km), we were unable to resolve the vertical structure of the discharge. Continued simultaneous observations in gamma rays and radio frequencies are anticipated to shed light on the charge structure in the cloud in such events in the future. \section{Conclusion} \begin{itemize} \item Aiming at multi-point observation of particle acceleration and high-energy gamma-ray emission of thunderclouds and lightning, we launched a new experiment campaign called ``Thundercloud Project'' in 2015, and developed a new, compact gamma-ray detector system (35$\times$45$\times$20~cm$^3$ in size), each unit carrying a BGO or CsI scintillation crystal. \item We have deployed 15 detectors to four cities in Ishikawa Prefecture and Niigata Prefecture in Japan over four winter seasons in 2015--2019, and accumulated 46 long-duration and 5 short-duration gamma-ray burst events. \item Some of these events, for example the short-duration burst on February 6th, 2017 in Niigata, allowed us to record the whole process of a downward TGF followed by photonuclear reactions and a traveling positron-emitting isotope cloud.
\item Regarding long-duration bursts, we have revealed that a long-duration gamma-ray burst can be abruptly terminated by the passage of a developing lightning leader (separated by 700~m horizontally), based on the February 11th, 2017 data collected in Ishikawa Prefecture \cite{Wada_2018}. This is another stepping stone for understanding the life cycle of the particle acceleration region in a thundercloud. \item With accurate GPS timing information, we have been able to correlate our gamma-ray data with radio frequency observations, enabling multi-messenger studies of the high-energy activities of thunderclouds and lightning. \item We will continue observation campaigns in the coming winter seasons. \end{itemize} \section*{Acknowledgment} We deeply thank D.~Yonetoku and T.~Sawano (Kanazawa University), K.~Watarai (Kanazawa University High School), K.~Yoneguchi and Kanazawa Izumigaoka High School, K.~Kimura (Komatsu High School), K.~Kitano (Science Hills Komatsu), K.~Kono (Ishikawa Plating Industry Co., Ltd), Kanazawa Nishi High School, Industrial Research Institute of Ishikawa, Sakaida Fruits, Kanazawa Institute of Technology, Television Kanazawa Corporation, Ishikawa Prefectural University, Ishikawa Prefectural Institute of Public Health and Environmental Science, Kanazawa University Noto School, Norikura Observatory of Institute of Cosmic-Ray Research, The University of Tokyo, Non-Profit Organization Mount Fuji Research Station, Niigata Prefectural Radiation Monitoring Center, and the Radiation Safety Group of the Kashiwazaki-Kariwa nuclear power station, Tokyo Electric Power Company Holdings, Incorporated for the support of detector deployment. We are grateful to H.~Sakurai, M.~Niikura and the Sakurai group members at Graduate School of Science, The University of Tokyo for providing the BGO scintillation crystals.
T.~Nakano, T.~Tamagawa (RIKEN), A.~Bamba, H.~Odaka (The University of Tokyo), D.~Umemoto (Kobe University), M.~Sato and Y.~Sato (Hokkaido University), H.~Nanto, G.~Okada (Kanazawa Institute of Technology), M.~Kamogawa (University of Shizuoka), G.~S.~Bowers (Los Alamos National Laboratory), D.~M.~Smith~(University of California Santa Cruz), T.~Morimoto (Kindai University), Y.~Nakamura (Kobe City College of Technology), A.~Matsuki and M.~Kubo (Kanazawa University), T.~Ushio (Osaka University), and H.~Sakai (University of Toyama) contributed to the project through detector development, data provisioning, and discussions on results. S.~Otsuka (The University of Tokyo) and H.~Kato (RIKEN) also supported detector development. The Monte-Carlo simulations were performed on the HOKUSAI GreatWave and BigWaterfall supercomputing systems operated by the RIKEN Advanced Center for Computing and Communication. The maps and aerial photos used in the figures are taken from the Geospatial Information Authority of Japan. Data of XRAIN are provided by the Ministry of Land, Infrastructure, Transport and Tourism via the Data Integration and Analysis System (DIAS), The University of Tokyo. This project is supported by the RIKEN Special Postdoctoral Researchers Program, JSPS/MEXT KAKENHI grants (15K05115, 15H03653, 16H04055, 16H06006, 16K05555, 17K12966, 18J13355, 19H00683), the Hakubi project and SPIRITS 2017 of Kyoto University, and the joint research program of the Institute for Cosmic Ray Research (ICRR), The University of Tokyo. The bootstrapping phase of this project was supported by a crowdfunding campaign named Thundercloud Project on the academic-crowdfunding platform ``academist''.
We thank Y.~Shikano, Y.~Araki, M.~T.~Hayashi, N.~Matsumoto, T.~Enoto, K.~Hayashi, S.~Koga, T.~Hamaji, Y.~Torisawa, S.~Sawamura, J.~Purser, S.~Suehiro, S.~Nakane, M.~Konishi, H.~Takami, T.~Sawara, and all other supporters of the crowdfunding campaign, and the Adachi design laboratory for supporting the creation of digital content and visuals. \bibliographystyle{ptephy}
\section{Introduction} In general relativity the Doppler effect can be seen as being produced by the expansion of the universe or by relative motion as in special relativity. The cosmological effect complies with the general redshift law \cite{SW,H0,H} while the kinetic one is treated separately by using the methods of special relativity \cite{LL}. Thus possible interferences of these two effects cannot be pointed out without resorting to a more general theory of relativistic effects in the presence of gravity, such as the de Sitter relativity we proposed recently \cite{CdSR1,CdSR2}. This gives us the opportunity of analysing the Doppler effect globally, considering simultaneously the cosmological and kinetic contributions in the de Sitter expanding universe. In de Sitter relativity the role of inertial frames is played by any set of local charts related through isometries. In what follows we focus mainly on the comoving charts with conformal coordinates, where the Maxwell equations have solutions similar to those in Minkowski spacetime, allowing us to define correctly the photon energy and momentum \cite{Max}. For studying the relative motion of these frames we exploit the Lorentzian isometries defined recently \cite{CdSR1} and the corresponding transformation rules of the conserved quantities. Thus we may deduce the observed energies in different frames, deriving the Doppler effect in de Sitter relativity. Note that in the de Sitter manifold the energy and momentum transform under isometries as the components of a five-dimensional skew-symmetric tensor, in association with the angular momentum and a new specific conserved vector we called the adjoint momentum \cite{CGRG,CdSR1}. For this reason the formalism is different from the usual one of special relativity, but the philosophy of the relative motion remains the same.
Since here we make the first step toward this approach, we restrict ourselves to the longitudinal Doppler effect, in which the source carried by the mobile frame is translated by the distance $d$ from its origin along the direction of the relative velocity. Our goal is to analyse how a photon emitted at the initial time by this source is observed by a fixed observer, knowing that at that time the origin of the mobile frame is passing through the origin of the fixed one with the relative velocity ${\bf V}$. Thus we formulate a problem of relative motion in the presence of the de Sitter expansion able to reveal the interference between the cosmological and kinetic contributions to the Doppler effect. Applying this method we obtain a new formula for this effect, such that in the particular case of ${\bf V}=0$ we recover the cosmological effect given by Lema\^ itre's law \cite{L1,L2}, while for $d=0$ we find just the well-known formula of the Doppler effect in special relativity. Moreover, we derive the related significant quantities, namely the dispersion relation, the propagation time of the photon, and the real distance between source and observer at the moment of observation. The paper is organized as follows. In the second section we briefly present the de Sitter relativity in conformal and de Sitter-Painlev\' e local charts, respectively, showing how the coordinates and conserved quantities transform under isometries. The next section is devoted to the longitudinal Doppler effect, for which we give the concrete form of the Lorentzian isometries, deriving the corresponding transformation rules of the conserved quantities leading to the final formula relating the emitted and observed photon frequencies. Moreover, we analyse the mentioned related quantities, pointing out their specific features resulting from the de Sitter relativity \cite{CdSR1}, which uses Lorentzian isometries instead of the usual boosts of special relativity. Finally we present a few concluding remarks.
\section{de Sitter relativity} Let us start with the de Sitter spacetime $(M,g)$ defined as the hyperboloid of radius $1/\omega$ in the five-dimensional flat spacetime $(M^5,\eta^5)$ of coordinates $z^A$ (labeled by the indices $A,\,B,...= 0,1,2,3,4$) having the metric $\eta^5={\rm diag}(1,-1,-1,-1,-1)$. The local charts $\{x\}$ of coordinates $x^{\mu}$ ($\alpha,\mu,\nu,...=0,1,2,3$) can be introduced on $(M,g)$ giving the set of functions $z^A(x)$ which solve the hyperboloid equation, \begin{equation}\label{hip} \eta^5_{AB}z^A(x) z^B(x)=-\frac{1}{\omega^2}\,, \end{equation} where $\omega$ denotes the Hubble de Sitter constant in our notation \cite{CGRG}. The de Sitter isometry group is just the gauge group $SO(1,4)$ of the embedding manifold $(M^5,\eta^5)$ that leaves invariant its metric and, implicitly, Eq. (\ref{hip}). Therefore, given a system of coordinates defined by the functions $z=z(x)$, each transformation ${\frak g}\in SO(1,4)$ defines the isometry $x\to x'=\phi_{\frak g}(x)$ derived from the system of equations \begin{equation}\label{zz} z[\phi_{\frak g}(x)]={\frak g}z(x) \end{equation} that holds for any type of coordinates, which means that these isometries are defined globally. The sets of local charts related through these isometries play the role of the systems of inertial frames of special relativity. In what follows we consider the comoving charts with two sets of local coordinates, the {\em conformal} pseudo-Euclidean ones, $\{t_c,{\bf x}_c\}$, and the 'physical' de Sitter-Painlev\' e coordinates, $\{t,{\bf x}\}$.
The conformal time $t_c$ and Cartesian space coordinates $x_c^i$ ($i,j,k,...=1,2,3$) are defined by the functions \begin{eqnarray} z^0(x_c)&=&-\frac{1}{2\omega^2 t_c}\left[1-\omega^2({t_c}^2 - {\bf x}_c^2)\right]\,, \nonumber\\ z^i(x_c)&=&-\frac{1}{\omega t_c}x_c^i \,, \label{Zx}\\ z^4(x_c)&=&-\frac{1}{2\omega^2 t_c}\left[1+\omega^2({t_c}^2 - {\bf x}_c^2)\right]\,, \nonumber \end{eqnarray} written with the vector notation, ${\bf x}=(x^1,x^2,x^3)\in {\Bbb R}^3\subset M^5$. These charts cover the expanding part of $M$ for $t_c \in (-\infty,0)$ and ${\bf x}_c\in {\Bbb R}^3$, while the collapsing part is covered by similar charts with $t_c >0$. In both cases we have the same conformal flat line element, \begin{equation}\label{mconf} ds^{2}=\eta^5_{AB}dz^A(x_c)dz^B(x_c)=\frac{1}{\omega^2 {t_c}^2}\left({dt_c}^{2}-d{\bf x}_c\cdot d{\bf x}_c\right)\,. \end{equation} In what follows we restrict ourselves to the expanding portion, which is a plausible model of our expanding universe. The de Sitter-Painlev\' e coordinates $\{t, {\bf x}\}$ on the expanding portion can be introduced directly by substituting \begin{equation}\label{EdS} t_c=-\frac{1}{\omega}e^{-\omega t}\,, \quad {\bf x}_c={\bf x}e^{-\omega t}\,, \end{equation} where $t\in(-\infty, \infty)$ is the {\em proper} or cosmic time while $x^i$ are the 'physical' Cartesian space coordinates. Then the line element reads \begin{equation} ds^2=(1-\omega^2 {{\bf x}}^2)dt^2+2\omega {\bf x}\cdot d{\bf x}\,dt -d{\bf x}\cdot d{\bf x}\,. \end{equation} Notice that this chart is useful in applications since in the flat limit (when $\omega \to 0$) its coordinates become just the Cartesian ones of the Minkowski spacetime.
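The embedding functions (\ref{Zx}) can be checked numerically against the hyperboloid equation (\ref{hip}); a minimal sketch with an arbitrary value of $\omega$ and arbitrary test points:

```python
import math

# Numerical check that the conformal-chart embedding z^A(t_c, x_c) of Eq. (Zx)
# satisfies eta5_AB z^A z^B = -1/omega^2.  omega and the points are arbitrary
# illustrative choices (t_c < 0 on the expanding portion).
omega = 0.3

def embed(tc, x):
    """Five-dimensional embedding z^A of the conformal chart point (t_c, x_c)."""
    r2 = sum(xi * xi for xi in x)
    z0 = -(1.0 - omega**2 * (tc**2 - r2)) / (2.0 * omega**2 * tc)
    zi = [-xi / (omega * tc) for xi in x]
    z4 = -(1.0 + omega**2 * (tc**2 - r2)) / (2.0 * omega**2 * tc)
    return [z0] + zi + [z4]

def minkowski5(z, w):
    """Inner product with eta5 = diag(1,-1,-1,-1,-1)."""
    return z[0] * w[0] - sum(a * b for a, b in zip(z[1:], w[1:]))

z = embed(-1.7, [0.4, -0.2, 0.9])
print(minkowski5(z, z), -1.0 / omega**2)   # the two values agree
```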
In the charts with combined coordinates $\{t,{\bf x}_c\}$ the metric takes the Friedman-Lema\^ itre-Robertson-Walker (FLRW) form \begin{equation} ds^2=dt^2-a(t)^2\,d{\bf x}_c\cdot d{\bf x}_c\,, \quad a(t)=e^{\omega t}\,, \end{equation} where $a(t)$ is the scale factor of the expanding portion, which can be rewritten in the conformal chart, \begin{equation}\label{scale} a(t_c)\equiv a[t(t_c)]=-\frac {1}{\omega t_c}\,, \end{equation} as a function defined for $t_c<0$. The classical conserved quantities under de Sitter isometries can be calculated with the help of the Killing vectors $k_{(AB)}$ of the de Sitter manifold $(M,g)$ \cite{CGRG}. According to the general definition of the Killing vectors in the pseudo-Euclidean spacetime $(M^5,\eta^5)$, we may consider the following identity \begin{equation} K^{(AB)}_Cdz^C=z^Adz^B-z^Bdz^A=k^{(AB)}_{\mu}dx^{\mu}\,, \end{equation} giving the covariant components of the Killing vectors in an arbitrary chart $\{x\}$ of $(M,g)$ as \begin{equation}\label{KIL} k_{(AB)\,\mu}=\eta^5_{AC}\eta^5_{BD}k^{(CD)}_{\mu}= z_A\partial_{\mu}z_B-z_B\partial_{\mu}z_A\,, \end{equation} where $z_A=\eta^5_{AB}z^B$. The principal conserved quantities along the timelike geodesics have the general form ${\cal K}_{(AB)}(x,{\bf P})=\omega k_{(AB)\,\mu}m u^{\mu}$, where $u^{\mu}=\frac{dx^{\mu}(s)}{ds}$ are the components of the four-velocity satisfying $u^2=g_{\mu\nu}u^{\mu}u^{\nu}=1$. We have shown that in a conformal chart $\{t_c,{\bf x}_c\}$ the geodesic equation of a particle of mass $m$ passing through the space point ${\bf x}_{c0}$ at time ${t_{c0}}$ is completely determined by the initial condition ${\bf x}_c({t_{c0}}) ={\bf x}_{c0}$ and the conserved momentum ${\bf P}$ as \cite{CGRG,CdSG}, \begin{equation} {x_c}^i(t_c)={x_c}_0^i+\frac{P^i}{\omega {P}^ 2} \left(\sqrt{m^2+{P}^{2}\omega^2 {t_{c0}}^2}-\sqrt{ m^2+{P}^2 \omega^2 t_c^2}\, \right)\,,\label{geodE} \end{equation} where we denote $P=|{\bf P}|$.
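The geodesic (\ref{geodE}) can be explored numerically; the sketch below (with illustrative parameter values) verifies the built-in initial condition and that in the massless limit the conformal coordinate speed is 1, as expected for light in the conformally flat chart (\ref{mconf}):

```python
import math

# Sketch of the conformal-chart geodesic of Eq. (geodE): a particle passing
# through x_c0 at conformal time t_c0 < 0 with conserved momentum P.
# All parameter values below are illustrative.
omega = 0.2
tc0, x0 = -5.0, [0.0, 0.0, 0.0]
P = [0.8, 0.0, 0.0]
P2 = sum(p * p for p in P)

def xc(tc, m):
    """Position x_c(t_c) of a particle of mass m along the geodesic."""
    root0 = math.sqrt(m**2 + P2 * omega**2 * tc0**2)
    root = math.sqrt(m**2 + P2 * omega**2 * tc**2)
    return [xi + pi / (omega * P2) * (root0 - root) for xi, pi in zip(x0, P)]

# The initial condition x_c(t_c0) = x_c0 is built into the formula.
print(xc(tc0, m=1.0))

# Massless limit: light moves on straight lines of unit coordinate slope
# in the conformally flat chart, so |dx_c/dt_c| -> 1.
h = 1e-6
speed = abs(xc(-2.0 + h, m=0.0)[0] - xc(-2.0, m=0.0)[0]) / h
print(speed)
```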
Moreover, the other conserved quantities at an arbitrary point $(t_c,{\bf x}_c(t_c))$ of the geodesic depend only on this point and the momentum ${\bf P}$. These are the energy $E$, the angular momentum ${\bf L}$ and a specific vector ${\bf Q}$ we called the adjoint momentum. In the chart $\{t_c,{\bf x}_c\}$ these quantities have the form \cite{CdSR1,CdSG} \begin{eqnarray} E&=&\omega\, {\bf x}_c(t_c)\cdot {\bf P}+\sqrt{ m^2+{P}^{2}\omega^2 t_c^2}\,,\label{Ene}\\ L_i&=&\varepsilon_{ijk} x^j_c(t_c) P^k\,,\label{L}\\ Q^i&=&2\omega x_c^i(t_c)E+\omega^2P^i[t_c^2-{\bf x}_c(t_c)^2]\,,\label{Q} \end{eqnarray} satisfying the obvious identity \begin{equation}\label{disp} E^2-\omega^2 {{\bf L}}^2-{\bf P}\cdot {\bf Q}=m^2 \end{equation} corresponding to the first Casimir invariant of the $so(1,4)$ algebra \cite{CGRG}. In the flat limit, $\omega\to 0$ and $-\omega t_c\to 1$, we have ${\bf Q} \to {\bf P}$ such that this identity becomes just the usual mass-shell condition $E^2-{\bf P}^2=m^2$ of special relativity. The conserved quantities $E$, ${\bf P}$ and the new ones, \begin{equation}\label{KR} {\bf K}=-\frac{1}{2\omega}\left({\bf P}-{\bf Q}\right)\,, \quad {\bf R}=-\frac{1}{2\omega}\left({\bf P}+{\bf Q}\right)\,, \end{equation} form a skew-symmetric tensor on $M^5$, \begin{equation} {\cal K}(x,{\bf P})= \left( \begin{array}{ccccc} 0&\omega K_1&\omega K_2&\omega K_3&E\\ -\omega K_1&0&\omega L_3&-\omega L_2&\omega R_1\\ -\omega K_2&-\omega L_3&0&\omega L_1&\omega R_2\\ -\omega K_3&\omega L_2&-\omega L_1&0&\omega R_3\\ -E&-\omega R_1&-\omega R_2&-\omega R_3&0 \end{array}\right)\,,\label{KK} \end{equation} whose elements transform under an isometry $x_c\to x_c'=\phi_{\frak g}(x_c)$ defined by Eq. (\ref{zz}) as \begin{equation}\label{KAB} {\cal K}_{(AB)}(x_c',{{\bf P}}')={\frak g}_{A\,\cdot}^{\cdot\,C}\,{\frak g}_{B\,\cdot}^{\cdot\,D}\,{\cal K}_{(CD)}(x_c,{\bf P})\,, \end{equation} for all ${\frak g}\in SO(1,4)$.
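A direct numerical check of Eqs. (\ref{Ene})--(\ref{Q}): along the geodesic (\ref{geodE}) the energy is time independent and the identity (\ref{disp}) holds at every point. All parameter values are illustrative:

```python
import math

# Conserved quantities E, L, Q of Eqs. (Ene), (L), (Q) evaluated along the
# conformal-chart geodesic (geodE); E should be constant and the Casimir
# identity E^2 - omega^2 L^2 - P.Q = m^2 should hold at every t_c.
omega, m = 0.25, 1.3
tc0 = -4.0
x0 = [0.5, -0.3, 0.2]
P = [0.7, 0.4, -0.1]
P2 = sum(p * p for p in P)

def xc(tc):                         # geodesic of Eq. (geodE)
    r0 = math.sqrt(m**2 + P2 * omega**2 * tc0**2)
    r = math.sqrt(m**2 + P2 * omega**2 * tc**2)
    return [xi + pi / (omega * P2) * (r0 - r) for xi, pi in zip(x0, P)]

def conserved(tc):
    x = xc(tc)
    E = omega * sum(a * b for a, b in zip(x, P)) + math.sqrt(m**2 + P2 * omega**2 * tc**2)
    L = [x[1] * P[2] - x[2] * P[1],           # L = x_c x P (cross product)
         x[2] * P[0] - x[0] * P[2],
         x[0] * P[1] - x[1] * P[0]]
    r2 = sum(a * a for a in x)
    Q = [2 * omega * xi * E + omega**2 * pi * (tc**2 - r2) for xi, pi in zip(x, P)]
    return E, L, Q

for tc in (-4.0, -2.0, -0.5):
    E, L, Q = conserved(tc)
    L2 = sum(a * a for a in L)
    PQ = sum(a * b for a, b in zip(P, Q))
    print(E, E**2 - omega**2 * L2 - PQ)   # E constant; second value equals m^2
```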
Here ${\frak g}_{A\,\cdot}^{\cdot\,B}=\eta^5_{AC}\,{\frak g}^{C\,\cdot}_{\cdot \,D}\, \eta^{5\,BD}$ are the matrix elements of the adjoint matrix $\overline{\frak g}=\eta^5\,{\frak g}\,\eta^5$. Thus, Eq. (\ref{KAB}) can be written as ${\cal K}(x',{{\bf P}}')=\overline{\frak g}\,{\cal K}(x,{\bf P})\,\overline{\frak g}^T$ or, simpler, ${\cal K}'=\overline{\frak g}\,{\cal K}\,\overline{\frak g}^T$. Concluding, we can say that the de Sitter isometries are generated globally by the $SO(1,4)$ transformations, which determine simultaneously the transformations of the coordinates and of the conserved quantities. We have thus a specific relativity on the de Sitter spacetime allowing us to study different relativistic effects in the presence of the de Sitter gravity. \section{Longitudinal Doppler effect} Let us consider two observers, the first one staying at rest in the origin $O$ of the fixed frame $\{t_c,{\bf x}_c\}$ and the second one staying in the origin $O'$ of a mobile frame $\{t'_c,{\bf x}'_c\}$ moving along the $x^1_c$ axis. We adopt the synchronization condition assuming that $O'$ passes through $O$ with the velocity ${\bf V}=(V,0,0)$ at the initial moment \begin{equation}\label{ini} t_{c0}=t_{c0}'=-\frac{1}{\omega} ~~~\to~~~ t_0=t_0'=0\,. \end{equation} Then, assuming that the observer $O$ measures the parameters $({t_{c}},{\bf x}_{c},{\bf P})$ while $O'$ observes other parameters, $(t'_{c},{\bf x}'_{c},{\bf P}')$, of the same particle, we may apply the general results of our de Sitter relativity. \subsection{Lorentzian isometry} In what follows we study the relative motion by using the {\em Lorentzian isometries} defined in Ref. \cite{CdSR1} instead of the usual Lorentz transformations of special relativity.
In our particular case these de Sitter isometries are generated by the particular Lorentz transformation along the $z^1$ axis (which is parallel with the $x_c^1$ axis) of the form \begin{equation} {\frak g}({\bf V})=\left( \begin{array}{ccccc} \frac{1}{\sqrt{1-V^2}}&\frac{V}{\sqrt{1-V^2}}&0&0&0\\ \frac{V}{\sqrt{1-V^2}}&\frac{1}{\sqrt{1-V^2}}&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1 \end{array}\right) \,. \end{equation} Note that this is a transformation of the $SO(1,4)$ group acting in $M^5$ that must not be confused with a usual Lorentz boost of special relativity. With its help we can derive the isometry $x_c=\phi_{{\frak g}({\bf V})}(x_c')$ by solving the system (\ref{zz}) for ${\frak g}={\frak g}({\bf V})$ and the functions (\ref{Zx}). We obtain \cite{CdSR1}, \begin{eqnarray} t_c(t_c',{\bf x}'_c)&=&\frac{t'_c}{\Delta_c'}\,,\label{Eq1}\\ {{x}^1_c}(t_c',{\bf x}^{\prime}_c)&=&\frac{1}{\Delta_c'}\left\{\gamma {x}^{\prime\, 1}_c+\frac{\gamma V}{2\omega}\left[1-\omega^2\left((t_c')^2-{{\bf x}'_c}\cdot{{\bf x}'_c}\right)\right]\right\}\,,\label{Eq2}\\ {{x}^2_c}(t_c',{\bf x}^{\prime}_c)&=&\frac{{x}^{\prime\,2}_c}{\Delta_c'}\,,\\ {{x}^3_c}(t_c',{\bf x}^{\prime}_c)&=&\frac{{x}^{\prime\,3}_c}{\Delta_c'} \,,\label{Eq4} \end{eqnarray} where $\gamma=(1-V^2)^{-\frac{1}{2}}$ and \begin{equation} \Delta_c'=1+\omega\gamma\, {\bf x}'_c\cdot{\bf V}+\frac{\gamma-1}{2}\left[1-\omega^2\left((t_c')^2-{{\bf x}'_c}\cdot{{\bf x}'_c}\right)\right]\,. \end{equation} The conserved quantities put in the form (\ref{KK}) with the help of Eqs. (\ref{KR}) transform under these isometries as \cite{CdSR1} \begin{equation}\label{Kg} {\cal K}(t_c,{\bf x}_c,{\bf P})=\overline{\frak g}({\bf V})\,\,{\cal K}(t_c',{\bf x}'_c,{\bf P}')\overline{\frak g}({\bf V})^T\,, \end{equation} as it results from Eq. (\ref{KAB}).
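The action of ${\frak g}({\bf V})$ can be verified numerically through the embedding: boosting $z^A(x'_c)$ leaves the hyperboloid invariant and reproduces the conformal time of Eq. (\ref{Eq1}) and the coordinate $x^1_c$ of Eq. (\ref{Eq2}). A minimal sketch with illustrative values of $\omega$, $V$ and the primed point:

```python
import math

# The SO(1,4) matrix g(V) mixes only (z0, z1); applying it to the embedding
# of a primed (mobile-frame) point and reading back conformal coordinates
# must reproduce t_c = t'_c / Delta'_c.  Values are illustrative.
omega, V = 0.2, 0.6
gamma = 1.0 / math.sqrt(1.0 - V**2)

def embed(tc, x):
    r2 = sum(a * a for a in x)
    z0 = -(1 - omega**2 * (tc**2 - r2)) / (2 * omega**2 * tc)
    z4 = -(1 + omega**2 * (tc**2 - r2)) / (2 * omega**2 * tc)
    return [z0] + [-a / (omega * tc) for a in x] + [z4]

def boost(z):                      # g(V): Lorentz transformation in the (z0, z1) plane
    return [gamma * (z[0] + V * z[1]), gamma * (V * z[0] + z[1]), z[2], z[3], z[4]]

tcp, xp = -3.0, [0.4, 0.1, -0.2]   # primed (mobile-frame) conformal coordinates
z = boost(embed(tcp, xp))

# Recover unprimed coordinates from z, using z0 + z4 = -1/(omega^2 t_c)
# and z^i = -x^i_c/(omega t_c).
tc = -1.0 / (omega**2 * (z[0] + z[4]))
x1 = -omega * tc * z[1]

Delta = 1 + omega * gamma * xp[0] * V + 0.5 * (gamma - 1) * (
    1 - omega**2 * (tcp**2 - sum(a * a for a in xp)))
print(tc, tcp / Delta)                            # matches Eq. (Eq1)
print(z[0]**2 - sum(a * a for a in z[1:]))        # stays equal to -1/omega^2
```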
Thus we have all the transformation rules relating the coordinates and the conserved quantities observed by $O'$ and $O$, whose frames are related through a Lorentzian isometry of parameter ${\bf V}$. Note that in de Sitter relativity this parameter represents the relative velocity only at the initial time (\ref{ini}), since in this geometry the relative velocities of the geodesic motions depend on time. In special relativity an electromagnetic source carried by a mobile frame produces the same Doppler effect regardless of its fixed position with respect to this frame, since in the Minkowski spacetime the energy is independent of translations. In contrast, in de Sitter relativity the energy depends on position as in Eq. (\ref{Ene}), which means that the Doppler effect will depend on the position of the source with respect to the mobile frame. In order to avoid complicated calculations, here we restrict ourselves to the particular case of the longitudinal Doppler effect, when the source is translated only along the direction of the velocity of the mobile frame $\{t'_c,{\bf x}_c'\}$, staying at rest in the space point $(d,0,0)$ of this frame. The observer $O'$ and implicitly $O$ will receive signals from this source only if it remains inside the null cone at $t_{c0}=t'_{c0}=-\omega^{-1}$, such that the condition $\omega d<1$ is mandatory \cite{CdSG}. We must specify that in the associated frame with de Sitter-Painlev\' e coordinates $\{t',{\bf x}'\}$ defined by Eq. (\ref{EdS}) the source is moving because of the manifold expansion, having the coordinates $(t',d(t'),0,0)$ where $d(t')=d e^{\omega t'}$. Bearing in mind that we fixed the initial moment when $O=O'$ as in Eq. (\ref{ini}), we understand that $d$ is just the physical distance of the source moving with respect to the mobile frame with the velocity \begin{equation} v=\left.\frac{d d(t')}{dt'}\right|_{t'=0}=\omega d\,, \end{equation} at the initial time $t_0=t_0'=0$.
Thus we recover the well-known velocity-distance law (in the mobile frame) that occurs naturally in the de Sitter expanding universe as in any other FLRW spacetime \cite{H}. \subsection{Emitted photon} In the conformal charts the electromagnetic potential has the same form as in the Minkowski spacetime, since the Maxwell equations are invariant under conformal transformations \cite{Max}. This means that a regressive plane electromagnetic wave, which is to be observed successively by $O'$ and $O$, may have the momentum ${\bf k}=(-k,0,0)$ and energy $k$, its potential being proportional to \begin{equation} {A}_i\propto \varepsilon_i e^{-i k x^1_c-ik t_c} \end{equation} where $\varepsilon_i$ are the components of an arbitrary polarisation vector. Assuming that a photon of this type is emitted at the initial time (\ref{ini}), we find the form of its geodesic equation in the frame $O'$ \cite{CdSG} \begin{equation} x_{c\,ph}^{\prime\,1}(t_c')=d-\frac{1+\omega t_c'}{\omega}\,,\quad x_{c\,ph}^{\prime\,2}(t_c')=x_{c\,ph}^{\prime\,3}(t_c')=0\,. \end{equation} Then by using the transformation equations (\ref{Eq1}) - (\ref{Eq4}) we can derive the parametric equations of the photon geodesic observed by $O$, \begin{eqnarray} t_{ph}(t_c')&=&t\left[t_c',{\bf x}'_{c\,ph}(t'_c)\right]\,,\label{G1}\\ {\bf x}_{ph}(t_c')&=&\bf{x}\left[t_c',{\bf x}'_{c\,ph}(t'_c)\right]\,,\label{G2} \end{eqnarray} depending on the parameter $t'_c$. The conserved quantities related to the photon trajectory are observed differently by the mobile and fixed observers. According to Eq. (\ref{Ene}), the observer $O'$ measures the photon momentum ${\bf P}'={\bf k}$ and energy \begin{equation} E'=k(1-\omega d)\,, \end{equation} a null angular momentum, ${\bf L}'=0$, and ${\bf Q}^{\prime}={\bf k}(1-\omega d)^2$, resulting from Eqs. (\ref{L}) and (\ref{Q}). Then the condition (\ref{disp}) is fulfilled since in this case $m=0$.
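The photon's conserved quantities quoted above follow directly from Eqs. (\ref{Ene}) and (\ref{Q}) evaluated at the emission point; a short numerical confirmation with illustrative values of $\omega$, $d$ and $k$ (respecting $\omega d<1$):

```python
import math

# Mobile-frame conserved quantities of the emitted photon: source at (d,0,0)
# at t'_c0 = -1/omega, momentum P' = (-k,0,0), mass m = 0.  Expect
# E' = k(1 - omega d), Q'^1 = -k(1 - omega d)^2, and identity (disp) with L' = 0.
omega, d, k = 0.1, 2.0, 5.0        # illustrative; omega*d = 0.2 < 1
tc0 = -1.0 / omega
x = [d, 0.0, 0.0]
P = [-k, 0.0, 0.0]
P2 = k * k

E = omega * x[0] * P[0] + math.sqrt(P2 * omega**2 * tc0**2)        # Eq. (Ene), m = 0
Q1 = 2 * omega * x[0] * E + omega**2 * P[0] * (tc0**2 - d**2)      # Eq. (Q)

print(E, k * (1 - omega * d))          # E' = k(1 - omega d)
print(Q1, -k * (1 - omega * d)**2)     # Q'^1 = -k(1 - omega d)^2
print(E**2 - P[0] * Q1)                # identity (disp): equals m^2 = 0
```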
Furthermore, we derive the conserved quantities measured by the fixed observer $O$, which can be deduced from Eq. (\ref{Kg}) after bringing these quantities into the form (\ref{KK}). After a short calculation we obtain \begin{eqnarray} E~&=&k\left[ \sqrt{\frac{1-V}{1+V}}(1-\omega d)-\frac{\omega^2 d^2}{2}\,\frac{V}{\sqrt{1-V^2}}\right]\,,\label{EE}\\ P^1&=&-k\left[ \omega d +\sqrt{\frac{1-V}{1+V}}(1-\omega d)+\frac{\omega^2 d^2}{2}\left(\frac{1}{\sqrt{1-V^2}}-1 \right)\right]\,,\label{PP}\\ Q^1&=&-k\left[-\omega d +\sqrt{\frac{1-V}{1+V}}(1-\omega d)+\frac{\omega^2 d^2}{2}\left(\frac{1}{\sqrt{1-V^2}}+1 \right)\right]\,,~~~~\label{QQ} \end{eqnarray} while $P^2=P^3=Q^2=Q^3=0$ and ${\bf L}=0$. These components satisfy the condition $(\ref{Kg})$ for $m=0$. The parameters involved in these equations satisfy the natural conditions $\omega d<1$ and $V<1$, which are however not sufficient to ensure that the energies observed by $O$ are positive. Therefore, we must impose the supplemental condition $E>0$, which restricts the relative velocity as \begin{equation}\label{VV} V<V_{lim}(d)=\frac{2(1-\omega d)}{1+(1-\omega d)^2}\,. \end{equation} This is in fact the mandatory condition for observing the photon at $O$ in finite time. When the relative velocity $V$ exceeds this limit, the photon cannot reach $O$ in finite time because of the background expansion. Thus $V_{lim}(d)$ defines a new velocity horizon restricting the velocities, such that for very distant sources, with $\omega d\sim 1$, this limit vanishes. \subsection{Redshift and related quantities} Eq. (\ref{EE}) allows us to derive the formula of the longitudinal Doppler effect in de Sitter relativity. Denoting by $\nu_0$ the frequency of the emitted photon and by $\nu$ that measured by the observer $O$, we can rewrite Eq. 
(\ref{EE}) as \begin{equation}\label{nu} \frac{1}{1+z}=\frac{\nu}{\nu_0}=\frac{E}{k}= \sqrt{\frac{1-V}{1+V}}\left(1-\omega d -\frac{\omega^2 d^2}{2}\,\frac{V}{1-V}\right)\,, \end{equation} pointing out the redshift $z$. What is new here is the last term, proportional to $\omega^2 d^2$, which is generated by the fact that we used the Lorentzian isometries of our de Sitter relativity instead of the usual Lorentz boosts of special relativity. This term is important since it cannot be treated as a mere small correction: for relative velocities approaching $V_{lim}(d)$ the redshift can increase significantly. Obviously, dropping this new term we recover the well-known factorization obtained when special relativity is used for calculating the Doppler effect. Note that this supplemental term vanishes in both of the particular cases $V=0$ and $d=0$. Therefore, for $V=0$ we obtain the Lema\^ itre redshift \cite{L1,L2} \begin{equation} 1+z=\frac{1}{1-\omega d}=\frac{a(t_{c\,obs})}{a(t_{c0})}=\frac{t_{c0}}{t_{c\, obs}}\,, \end{equation} since the photon is emitted at $t_{c0}=t_{c0}'=-\omega^{-1}$ and observed at $t_{c\, obs}=-{\omega}^{-1}+d$. For $d=0$ we obtain the familiar formula of the Doppler effect in special relativity, since Eq. (\ref{nu}) depends only on the product $\omega d$, so that the limit $d\to 0$ is equivalent to the flat limit $\omega\to 0$. In de Sitter relativity the conserved quantities satisfy the condition (\ref{disp}), which can be interpreted as a Lorentz violation produced by the de Sitter gravity. Under such circumstances we expect to find new dispersion relations depending on the parameters $d$ and $V$. Indeed, from Eqs. (\ref{Eq1}) and (\ref{Eq2}) we deduce the linear dispersion relation $E=|{\bf P}| f(d,V)$, where \begin{equation} f(d,V)=\frac{1}{V}\frac{(V\omega d-V+1)^2+V^2-1}{(1-\sqrt{1-V^2})\left[1-(\omega d-1)^2\right]-2(V\omega d -V+1)}\,. 
\end{equation} This function is positive, $0<f(d,V)\le 1$, for $0\leq \omega d<1$ and $0\leq V<V_{lim}(d)$, vanishing when $V \to V_{lim}(d)$. Moreover, the expansion \begin{equation} f(d,V)=1+\omega d \sqrt{\frac{1+V}{1-V}}+{\cal O}(\omega^2 d^2)\,, \end{equation} shows that the deviation of this function from unity is produced exclusively by the de Sitter gravity, where the energy depends on the space position. For this reason, for $d\to 0$ we recover the result of special relativity, $f(0,V)=\lim_{\omega\to 0}f(d,V)=1$. Apart from the study of the conserved quantities, it is interesting to find the propagation time of the photon as well as the actual distance between $O$ and $S$ at the time when the redshift is observed by $O$. It is convenient to start this investigation in the charts with conformal coordinates, calculating the time $t_{c\,f}'$ when the photon is measured by $O$. Obviously, this is the solution of the equation ${x}^1_{ph}(t_c')=0$, which can be calculated according to Eqs. (\ref{G2}) and (\ref{Eq2}) as \begin{equation}\label{tf} t_{c\,f}'=\frac{(V \omega d-V+1)^2+V^2-1}{2\omega V (V\omega d-V+1)}\,. \end{equation} This is a conformal time which must remain negative for all values of the parameters. It is not difficult to show that $t_{c\,f}'<0$ only if $V$ satisfies the condition (\ref{VV}), which guarantees that the photon arrives at $O$. Eq. (\ref{tf}) allows us to find the propagation time in the frame with de Sitter-Painlev\' e coordinates by using the function (\ref{Eq1}) as \begin{equation} t_f=-\frac{1}{\omega}\ln\left\{-\omega\, t_c[t'_{c\,f},{\bf x}'_{c\,ph}(t'_{c\,f})]\right\} \end{equation} since in this frame the photon was emitted at $t_0=0$. This time can be expanded as \begin{equation} t_f=\sqrt{\frac{1+V}{1-V}}\left(d+\frac{\omega d^2}{2} +\frac{\omega^2 d^3}{6}\frac{2-V}{1-V}+{\cal O}(\omega^3)\right)\,, \end{equation} pointing out that in the flat limit, when $\omega \to 0$, we obtain just the propagation time predicted by special relativity. 
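As a quick numerical sanity check, the relations above can be verified in a few lines of Python; the following minimal sketch (with `wd` standing for the dimensionless product $\omega d$) tests the dispersion relation $E=|{\bf P}|f(d,V)$, the consistency of Eq. (\ref{nu}) with Eq. (\ref{EE}), the two limiting cases, and the velocity horizon (\ref{VV}):

```python
import math

def energy(k, wd, V):
    # Eq. (EE): photon energy measured by the fixed observer O
    return k * (math.sqrt((1 - V) / (1 + V)) * (1 - wd)
                - 0.5 * wd ** 2 * V / math.sqrt(1 - V ** 2))

def momentum(k, wd, V):
    # Eq. (PP): the component P^1 measured by O
    return -k * (wd + math.sqrt((1 - V) / (1 + V)) * (1 - wd)
                 + 0.5 * wd ** 2 * (1 / math.sqrt(1 - V ** 2) - 1))

def v_lim(wd):
    # Eq. (VV): the velocity horizon
    return 2 * (1 - wd) / (1 + (1 - wd) ** 2)

def f(wd, V):
    # the dispersion function f(d, V), written in terms of wd = omega * d
    A = V * wd - V + 1
    num = A ** 2 + V ** 2 - 1
    den = (1 - math.sqrt(1 - V ** 2)) * (1 - (wd - 1) ** 2) - 2 * A
    return num / (V * den)

def redshift_factor(wd, V):
    # Eq. (nu): 1 / (1 + z)
    return math.sqrt((1 - V) / (1 + V)) * (1 - wd - 0.5 * wd ** 2 * V / (1 - V))

k, wd, V = 1.0, 0.1, 0.2
E, P1 = energy(k, wd, V), momentum(k, wd, V)

# linear dispersion relation E = |P| f(d, V)
assert math.isclose(E, abs(P1) * f(wd, V), rel_tol=1e-9)
# Eq. (nu) agrees with E/k computed from Eq. (EE)
assert math.isclose(E / k, redshift_factor(wd, V), rel_tol=1e-9)
# V = 0 gives the Lemaitre redshift 1 + z = 1 / (1 - wd)
assert math.isclose(1 / redshift_factor(wd, 0.0), 1 / (1 - wd), rel_tol=1e-9)
# d = 0 gives the special-relativistic Doppler factor
assert math.isclose(redshift_factor(0.0, V), math.sqrt((1 - V) / (1 + V)), rel_tol=1e-9)
# the energy observed by O is positive precisely below the velocity horizon
assert energy(k, wd, 0.999 * v_lim(wd)) > 0 > energy(k, wd, 1.001 * v_lim(wd))
```

All function names here are illustrative; the check only exercises the closed-form expressions quoted above.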
Now we can derive the distance between $O$ and $S$ at the moment when the photon is observed at $O$. This can be calculated either in the conformal chart $\{t_c,{\bf x}_c\}$ ($D_c$) or in the de Sitter-Painlev\' e one ($D$) as \begin{equation} D_c=x^1_c(t'_{c\,f},d,0,0)~\to~ D=-\frac{D_c}{\omega t'_{c\,f}}\,, \end{equation} where the function $x^1_c(t'_{c\,f},d,0,0)$ is given by Eq. (\ref{Eq2}), in which we substitute $t'_c=t'_{c\,f}$ and ${\bf x}'_c=(d,0,0)$. The resulting physical distance $D$ is a complicated function of $(d, V, \omega)$ which is too long to be written here. Nevertheless, by using a computer algebra system we find the expansion \begin{eqnarray} D&=&\frac{d}{(1-V)\sqrt{1-V^2}}\nonumber\\ &+&\omega d^2\, \frac{2V^2-2V-2+\sqrt{1-V^2}(4-3V^2)}{2(1-V^2)(1-V)^2} +{\cal O}(\omega^2)\,, \end{eqnarray} where the first term is just the distance calculated in special relativity. Another useful expansion can be obtained for small relative velocities as \begin{eqnarray} D&=&\frac{d}{1-\omega d}+\frac{d\,V}{2}\left[1+\frac{1}{(1-\omega d)^2}\right]\nonumber\\ &+&\frac{d\,V^2}{4} \left[3(1-\omega d)+\frac{2}{1-\omega d}+\frac{1}{(1-\omega d)^3}\right] +{\cal O}(V^3)\,. \end{eqnarray} Here the first term is given by the de Sitter gravity in the absence of relative motion. These approximations provide simpler analytical formulas in some domains of the variables, but for results of higher accuracy numerical calculation cannot be avoided. \section{Concluding remarks} We derived the formula of the longitudinal Doppler effect in the de Sitter expanding universe. The redshift formula obtained with the help of our new Lorentzian isometries \cite{CdSR1} combines the cosmological and kinematic contributions in a nontrivial manner through a new term proportional to $\omega d V$. 
Consequently, when we eliminate one of these contributions, taking either $V=0$ or $d=0$, this term vanishes, allowing us to recover the particular cases of Lema\^ itre's law or the Doppler effect of special relativity. In addition, we have seen that the related quantities, namely the dispersion relation, the propagation time and the final distance to the source, have specific features determined by the de Sitter gravity, but reduce in the flat limit to those of special relativity. The principal results reported here, the redshift formula and the dispersion relation, depend only on the conserved quantities, which are independent of the local chart we choose. The other related quantities are obtained in conformal coordinates, since these are suitable for studying the effects of the space translations, such as that between $O'$ and $S$, which depend on the parameter $d$. Furthermore, for obtaining formulas that may be helpful in interpreting astronomical measurements, we have rewritten these results in de Sitter-Painlev\' e coordinates, substituting the conformal coordinates according to Eqs. (\ref{EdS}). This method seems to be flexible and appropriate for studying the effects of the relative motion in de Sitter relativity. The next challenge is to consider the general case when the source has an arbitrary position in the mobile frame, generating transverse effects.
\section{Introduction} In the last decades, the theory of optimal transport has made impressive inroads into other disciplines of mathematics, probably most notably with the Lott--Sturm--Villani theory \cite{LV09,Stu06a,Stu06b} of synthetic Ricci curvature bounds for metric measure spaces. More recently, optimal transport techniques have also been used to extend this theory to cover discrete \cite{CHLZ12,EM12,Maa11,Mie11} and noncommutative geometries \cite{CM14,CM17a,MM17} as well. The starting point of our investigation is given by the results from \cite{CM14,CM17a} and their partial generalizations to the infinite-dimensional case in \cite{Hor18,Wir18}. For a symmetric quantum Markov semigroup $(P_t)$ the authors construct a noncommutative version of the $2$-Wasserstein metric, which allows one to obtain a quantum analog of the characterization \cite{JKO98,Ott01} of the heat flow as $2$-Wasserstein gradient flow of the entropy. On this basis, the geodesic semi-convexity of the entropy in noncommutative $2$-Wasserstein space can be understood as a lower Ricci curvature bound for the quantum Markov semigroup, and it can be used to obtain a series of prominent functional inequalities such as a Talagrand inequality, a modified logarithmic Sobolev inequality and the Poincaré inequality \cite{BGJ20,CM17a,JLL20,RD19}. One of the major challenges in the development of this program so far has been to verify semi-convexity in concrete examples, and only a few noncommutative examples have been known to date, even fewer infinite-dimensional ones. To prove geodesic semi-convexity, the gradient estimate \begin{equation}\label{eq:GE} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2,\tag{GE} \end{equation} or, equivalently, its integrated form, has proven central. It can be seen as a noncommutative analog of the Bakry--Émery gradient estimate and the $\Gamma_2$ criterion. 
Indeed, if the underlying quantum Markov semigroup is the heat semigroup on a complete Riemannian manifold, (\ref{eq:GE}) reduces to the classical Bakry--Émery estimate \begin{equation*} \Gamma(P_t f)\leq e^{-2Kt}P_t \Gamma(f). \end{equation*} As often in noncommutative geometry, tensorization of inequalities is more difficult than in the commutative case. It is not known whether the gradient estimate in the form (\ref{eq:GE}) has good tensorization properties. For this reason we introduce $(\mathrm{CGE})$, a complete version of (\ref{eq:GE}), and establish some of its stability properties. Using these in combination with a variant of the intertwining technique from \cite{CM17a} and a fine analysis of specific generators of Lindblad type, we are able to establish this tensor stable gradient estimate $(\mathrm{CGE})$ for a number of examples for which geodesic convexity was not known before. Let us briefly outline the content of the individual sections of this article. In Section \ref{sec:basics} we recall some basics of quantum Markov semigroups, the construction of the noncommutative transport distance $\mathcal{W}$ and the connection between the gradient estimate (\ref{eq:GE}) and the geodesic semi-convexity of the entropy. In Section \ref{sec:intertwining} we extend the intertwining technique from \cite{CM17a,CM20} to the infinite-dimensional setting. Working with arbitrary operator means, our result does not only cover the gradient estimate implying semi-convexity of the entropy in noncommutative $2$-Wasserstein space, but also the noncommutative Bakry--Émery estimate studied in \cite{JZ15a}. 
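To see the classical Bakry--Émery estimate in the simplest possible setting, consider the two-point space with uniform trace and generator $(\mathscr{L}f)(x)=f(x)-f(y)$; a direct computation shows that the estimate holds with equality for $K=2$. The following minimal Python sketch (a toy illustration under these assumptions, not an example from the article) checks this numerically:

```python
import math

# Two-point state space {0, 1}; generator (positive sign convention):
# (Lf)(x) = f(x) - f(y), so P_t = exp(-tL) preserves the mean and
# damps deviations from the mean by e^{-2t}.
def gen(f):
    return (f[0] - f[1], f[1] - f[0])

def semigroup(t, f):
    m, d = (f[0] + f[1]) / 2, (f[0] - f[1]) / 2
    return (m + math.exp(-2 * t) * d, m - math.exp(-2 * t) * d)

def gamma(f):
    # commutative carre du champ: Gamma(f) = (2 f Lf - L(f^2)) / 2
    lf = gen(f)
    lf2 = gen((f[0] ** 2, f[1] ** 2))
    return tuple((2 * f[i] * lf[i] - lf2[i]) / 2 for i in range(2))

f, t, K = (3.0, -1.0), 0.7, 2.0
lhs = gamma(semigroup(t, f))               # Gamma(P_t f)
rhs = [math.exp(-2 * K * t) * g for g in semigroup(t, gamma(f))]
for i in range(2):
    assert lhs[i] <= rhs[i] + 1e-12        # Bakry-Emery estimate with K = 2
    assert math.isclose(lhs[i], rhs[i])    # saturated for this generator
```

Here $\Gamma(f)=\tfrac12(f(0)-f(1))^2$ is constant, so both sides scale exactly as $e^{-4t}$, showing that $K=2$ is optimal for this toy generator.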
As examples we show that the Ornstein--Uhlenbeck semigroup on the mixed $q$-Gaussian algebras satisfies ($\mathrm{CGE}$) with constant $K=1$, the heat semigroup on quantum tori satisfies ($\mathrm{CGE}$) with constant $K=0$, and that a class of quantum Markov semigroups on discrete group von Neumann algebras and quantum groups $O_N^+,S_N^+$ satisfy ($\mathrm{CGE}$) with constant $K=0$. Moreover, this intertwining result is also central for the stability properties studied in the next section. In Section \ref{sec:stability} we show that the complete gradient estimate is stable under tensor products and free products of quantum Markov semigroups. Besides the applications investigated later in the article, these results also open the door for applications of group transference to get complete gradient estimates for Lindblad generators on matrix algebras. In Section \ref{sec:com_proj} we prove the complete gradient estimate ($\mathrm{CGE}$) with constant $K=1$ for quantum Markov semigroups whose generators are of the form \begin{equation*} \mathscr{L} x=\sum_j p_j x+xp_j -2p_j x p_j, \end{equation*} where the operators $p_j$ are commuting projections. In a number of cases, this result is better than the ones we could obtain by intertwining and yields the optimal constant in the gradient estimate. As examples we show that this result applies to the quantum Markov semigroups associated with the word length function on finite cyclic groups and the non-normalized Hamming length function on symmetric groups. Using ultraproducts and the stability under free products, we finally extend this result to Poisson-type quantum Markov semigroups on group von Neumann algebras of groups $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ with $k,l\geq 0$. In particular, this implies the complete modified logarithmic Sobolev inequality with optimal constant for these groups. 
\subsection*{Note added.} When preparing this preprint for submission, we were made aware that several of the examples have been obtained independently by Brannan, Gao and Junge (see \cite{BGJ20,BGJ20b}). \subsection*{Acknowledgments} H.Z. is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement No. 754411. M.W. was supported by the Austrian Science Fund (FWF) through grant number F65. Both authors would like to thank Jan Maas for fruitful discussions and helpful comments. \section{\texorpdfstring{The noncommutative transport metric $\mathcal{W}$ and geodesic convexity of the entropy}{The noncommutative transport metric W and geodesic convexity of the entropy}}\label{sec:basics} In this section we briefly recall the definition and basic properties of the noncommutative transport distance $\mathcal{W}$ associated with a tracially symmetric quantum Markov semigroup. For a more detailed description we refer readers to \cite{Wir18}. Let $\mathcal{M}$ be a separable von Neumann algebra equipped with a normal faithful tracial state $\tau\colon \mathcal{M}\to \mathbb{C}$. Denote by $\mathcal{M}_+$ the positive cone of $\mathcal{M}$. Given $1\le p<\infty$, we define $$\Vert x\Vert_p=[\tau(|x|^p)]^{\frac{1}{p}},~~x\in\mathcal{M},$$ where $|x|=(x^*x)^{\frac{1}{2}}$ is the modulus of $x$. One can show that $\|\cdot \|_p$ is a norm on $\mathcal{M}$. The completion of $( \mathcal{M}, \|\cdot \|_p )$ is denoted by $L^p (\mathcal{M}, \tau)$, or simply $L^p (\mathcal{M})$. As usual, we put $L^\infty(\mathcal{M})=\mathcal{M}$ with the operator norm. In this article, we are only interested in $p=1$ and $p=2$. 
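In the commutative toy case $\mathcal{M}=\ell^\infty(\{1,\dots,n\})$ with $\tau$ the uniform average, the tracial $L^p$ norms above reduce to $p$-means, and since $\tau$ is a state the map $p\mapsto\norm{x}_p$ is non-decreasing. A minimal Python sketch of these elementary properties (an illustration only, not part of the text):

```python
import math

# Commutative toy model: M = functions on n points,
# tau(x) = average of the values (a normal faithful tracial state).
def tau(x):
    return sum(x) / len(x)

def norm_p(x, p):
    # ||x||_p = tau(|x|^p)^(1/p)
    return tau([abs(v) ** p for v in x]) ** (1 / p)

x = [3.0, -1.0, 0.5, 2.0]
y = [1.0, 4.0, -2.0, 0.0]

# since tau(1) = 1, the p-norms are non-decreasing in p, with sup norm as limit
assert norm_p(x, 1) <= norm_p(x, 2) <= max(abs(v) for v in x)
# triangle inequality for p = 1 and p = 2
for p in (1, 2):
    z = [a + b for a, b in zip(x, y)]
    assert norm_p(z, p) <= norm_p(x, p) + norm_p(y, p) + 1e-12
# ||x||_2 is induced by the inner product <x, y>_2 = tau(x* y)
assert math.isclose(norm_p(x, 2), math.sqrt(tau([a * a for a in x])))
```

For real-valued `x` the involution is trivial, so `tau([a * a for a in x])` is exactly $\tau(x^*x)$.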
In particular, $L^2(\mathcal{M})$ is a Hilbert space with the inner product $$\langle x,y\rangle_2=\tau(x^*y).$$ A family $(P_t)_{t\geq 0}$ of bounded linear operators on $\mathcal{M}$ is called a \emph{quantum Markov semigroup (QMS)} if \begin{itemize} \item $P_t$ is a normal unital completely positive map for every $t\geq 0$, \item $P_s P_t=P_{s+t}$ for all $s,t\geq 0$, \item $P_t x\to x$ in the weak$^\ast$ topology as $t\searrow 0$ for every $x\in \mathcal{M}$. \end{itemize} A QMS $(P_t)$ is called \emph{$\tau$-symmetric} if \begin{equation*} \tau((P_t x)y)=\tau(x P_t y) \end{equation*} for all $x,y\in \mathcal{M}$ and $t\geq 0$. The generator of $(P_t)$ is the operator $\mathscr{L}$ given by \begin{align*} D(\mathscr{L})&=\left\{x\in \mathcal{M}\mid \lim_{t\searrow 0}\frac1 t (x-P_t x)\text{ exists in the weak$^\ast$ topology}\right\},\\ \mathscr{L}(x)&=\lim_{t\to 0}\frac1 t (x-P_t x),~~x\in D(\mathscr{L}). \end{align*} Here and in what follows, $D(T)$ always means the domain of $T$. For all $p\in [1,\infty]$, the $\tau$-symmetric QMS $(P_t)$ extends to a strongly continuous contraction semigroup $(P_t^{(p)})$ on $L^p(\mathcal{M},\tau)$ with generator $\mathscr{L}_p$. Let $\cC=D(\mathscr{L}_2^{1/2})\cap \mathcal{M}$, which is a $\sigma$-weakly dense $\ast$-subalgebra of $\mathcal{M}$ \cite[Proposition 3.4]{DL92}. 
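The axioms of a $\tau$-symmetric QMS and its generator can be made concrete on the two-point space, where $P_t=e^{-t\mathscr{L}}$ is available in closed form. The sketch below (a hedged commutative illustration, not an example from the text) checks the semigroup law, unitality, $\tau$-symmetry and the difference-quotient definition of $\mathscr{L}$:

```python
import math

# Two-point toy QMS: M = C^2, tau(x) = (x[0] + x[1]) / 2,
# generator (Lx)(i) = x(i) - x(j), so P_t = exp(-tL) in closed form.
def P(t, x):
    m, d = (x[0] + x[1]) / 2, (x[0] - x[1]) / 2
    return (m + math.exp(-2 * t) * d, m - math.exp(-2 * t) * d)

def tau(x):
    return (x[0] + x[1]) / 2

x, y, s, t = (2.0, -1.0), (0.5, 3.0), 0.3, 1.1

# semigroup law P_s P_t = P_{s+t}
assert all(math.isclose(a, b) for a, b in zip(P(s, P(t, x)), P(s + t, x)))
# unitality: P_t 1 = 1
assert P(t, (1.0, 1.0)) == (1.0, 1.0)
# tau-symmetry: tau((P_t x) y) = tau(x (P_t y))
px, py = P(t, x), P(t, y)
assert math.isclose(tau((px[0] * y[0], px[1] * y[1])),
                    tau((x[0] * py[0], x[1] * py[1])))
# generator via the difference quotient (x - P_h x) / h -> Lx as h -> 0
h = 1e-6
lx0 = (x[0] - P(h, x)[0]) / h
assert math.isclose(lx0, x[0] - x[1], rel_tol=1e-4)
```

The last assertion recovers $(\mathscr{L}x)(0)=x(0)-x(1)$, matching the sign convention $\mathscr{L}(x)=\lim_{t\searrow 0}\frac1t(x-P_tx)$ used above.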
According to \cite[Section 8]{CS03}, there exists an (essentially unique) quintuple $(\mathcal{H}, L, R, \mathcal{J}, \partial)$ consisting of a Hilbert space $\mathcal{H}$, commuting non-degenerate $^\ast$-homomorphisms $L\colon \cC\to B(\mathcal{H})$, $R\colon \cC^\circ\to B(\mathcal{H})$, an anti-unitary involution $\mathcal{J}\colon \mathcal{H}\to \mathcal{H}$ and a closed operator $\partial\colon D(\mathscr{L}_2^{1/2})\to \mathcal{H}$ such that \begin{itemize} \item $\{R(x)\partial y\mid x,y\in \cC\}$ is dense in $\mathcal{H}$, \item $\partial(xy)=L(x)\partial y+R(y)\partial x$ for $x,y\in \cC$, \item $\mathcal{J}(L(x)R(y)\partial(z))=L(y^\ast)R(x^\ast)\partial(z^\ast)$ for $x,y,z\in \cC$, \item $\mathscr{L}_2=\partial^\dagger \partial$, \end{itemize} where $\cC^\circ$ is the opposite algebra of $\cC$. We will write $a\xi$ and $\xi b$ for $L(a)\xi$ and $R(b)\xi$, respectively. For $x,y\in D(\mathscr{L}_2^{1/2})$, the \emph{carré du champ} is defined as \begin{equation*} \Gamma(x,y)\colon \cC\to\mathbb{C},\,\Gamma(x,y)(z)=\langle \partial x,(\partial y)z\rangle_\mathcal{H}. \end{equation*} We write $\Gamma(x)$ to denote $\Gamma(x,x)$. A $\tau$-symmetric QMS is called \emph{$\Gamma$-regular} (see \cite{JLL20}) if the representations $L$ and $R$ are normal. Under this assumption, $\mathcal{H}$ is a \emph{correspondence} from $\mathcal{M}$ to itself in the sense of Connes \cite[Appendix B of Chapter 5]{Con94} (sometimes also called \emph{$\mathcal{M}$-bimodule} or \emph{Hilbert bimodule}). By \cite[Theorem 2.4]{Wir18}, $(P_t)$ is a $\Gamma$-regular semigroup if and only if $\Gamma(x,y)$ extends to a normal linear functional on $\mathcal{M}$ for all $x,y\in D(\mathscr{L}^{1/2}_2)$. By a slight abuse of notation, we write $\Gamma(x,y)$ for the unique element $h\in L^1(\mathcal{M},\tau)$ such that \begin{align*} \tau(z h)=\Gamma(x,y)(z) \end{align*} for all $z\in \cC$. 
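On the two-point space the first order calculus is one-dimensional: taking $\mathcal{H}=\mathbb{C}$ and $\partial f=(f(0)-f(1))/\sqrt2$ realizes $\mathscr{L}_2=\partial^\dagger\partial$, and the carré du champ is $\Gamma(f)=\tfrac12(f(0)-f(1))^2$. A minimal numerical check of these identities (our toy model, with the commutative formula $\Gamma(f)=\tfrac12(2f\,\mathscr{L}f-\mathscr{L}(f^2))$):

```python
import math

# Two-point toy model: tau(x) = (x[0] + x[1]) / 2, (Lx)(i) = x(i) - x(j),
# derivation d(f) = (f[0] - f[1]) / sqrt(2) with values in H = C.
def gen(f):
    return (f[0] - f[1], f[1] - f[0])

def tau(x):
    return (x[0] + x[1]) / 2

def d(f):
    return (f[0] - f[1]) / math.sqrt(2)

f, g = (2.0, -1.0), (0.5, 3.0)

# <df, dg>_H = <f, Lg>_2 = tau(f * Lg), i.e. L = d^dagger d
lg = gen(g)
assert math.isclose(d(f) * d(g), tau((f[0] * lg[0], f[1] * lg[1])))

# carre du champ via Gamma(f) = (2 f Lf - L(f^2)) / 2
lf, lf2 = gen(f), gen((f[0] ** 2, f[1] ** 2))
gamma = tuple((2 * f[i] * lf[i] - lf2[i]) / 2 for i in range(2))
for gi in gamma:
    assert gi >= 0                      # Gamma(f) is nonnegative
    assert math.isclose(gi, d(f) ** 2)  # equals |df|^2 at both points
```

Since everything is real and commutative here, the bimodule structure is invisible; the point is only to anchor the abstract quintuple $(\mathcal{H},L,R,\mathcal{J},\partial)$ in a computable case.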
If $(P_t)$ is $\Gamma$-regular, then we can extend $L$ to a map on the operators affiliated with $\mathcal{M}$ by defining \begin{align*} L(x)=u\int_{[0,\infty)}\lambda\,d(L\circ e)(\lambda), \end{align*} for any operator $x$ affiliated with $\mathcal{M}$, where $u$ is the partial isometry in the polar decomposition of $x$ and $e$ is the spectral measure of $\abs{x}$. The same goes for $R$. Let $\Lambda$ be an \emph{operator mean} in the sense of Kubo and Ando \cite{KA80}, that is, $\Lambda$ is a map from $B(\mathcal{H})_+\times B(\mathcal{H})_+$ to $B(\mathcal{H})_+$ satisfying \begin{enumerate}[(a)] \item if $A\leq C$ and $B\leq D$, then $\Lambda(A,B)\leq\Lambda(C,D)$, \item the \emph{transformer inequality}: $C \Lambda(A,B)C\leq \Lambda(C A C,C B C)$ for all $A,B,C\in B(\mathcal{H})_+$, \item if $A_n\searrow A$, $B_n\searrow B$, then $\Lambda(A_n,B_n)\searrow \Lambda(A,B)$, \item $\Lambda(\mathrm{id}_{\mathcal{H}},\mathrm{id}_{\mathcal{H}})=\mathrm{id}_{\mathcal{H}}$. \end{enumerate} Here and in what follows, by $A_n\searrow A$ we mean $A_1\ge A_2\ge \cdots$ and $A_n$ converges strongly to $A$. From (b), any operator mean $\Lambda$ is \emph{positively homogeneous}: $$\Lambda(\lambda A,\lambda B)=\lambda\Lambda (A,B),~~\lambda >0,A,B\in B(\mathcal{H})_+.$$ An operator mean $\Lambda$ is \emph{symmetric} if $\Lambda(A,B)=\Lambda(B,A)$ for all $A,B\in B(\mathcal{H})_+$. For a positive self-adjoint operator $\rho$ affiliated with $\mathcal{M}$, we define \begin{equation*} \hat \rho=\Lambda(L(\rho), R(\rho)). \end{equation*} Of particular interest for us are the cases when $\Lambda$ is the \emph{logarithmic mean} \begin{equation*} \Lambda_{\text{log}}(L(\rho), R(\rho))=\int_{0}^{1}L(\rho)^s R(\rho)^{1-s}\,ds, \end{equation*} or the \emph{arithmetic mean} \begin{equation*} \Lambda_{\text{ari}}(L(\rho), R(\rho))=\frac{L(\rho)+R(\rho)}{2}. 
\end{equation*} We write $\norm{\cdot}_\rho^2$ for the quadratic form associated with $\hat \rho$, that is, \begin{align*} \norm{\xi}_\rho^2=\begin{cases}\norm{\hat \rho^{1/2}\xi}_\mathcal{H}^2&\text{if }\xi\in D(\hat \rho^{1/2}),\\ \infty&\text{otherwise}.\end{cases} \end{align*} Given an operator mean $\Lambda$, consider the set \begin{align*} \mathcal{A}_\Lambda=\{a\in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}\mid \exists C>0\,\forall \rho\in L^1_+(\mathcal{M},\tau)\colon \norm{\partial a}_\rho^2\leq C\norm{\rho}_1\}, \end{align*} equipped with the seminorm \begin{align*} \norm{a}_\Lambda^2=\sup_{0\neq \rho\in L^1_+(\mathcal{M},\tau)}\frac{\norm{\partial a}_\rho^2}{\norm{\rho}_1}. \end{align*} If $\Lambda$ is the arithmetic mean $\Lambda_{\text{ari}}$, then this set coincides with \begin{align*} \mathcal{A}_\Gamma=\{x\in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}\mid \Gamma(x),\Gamma(x^\ast)\in \mathcal{M}\}. \end{align*} In fact, when $\Lambda=\Lambda_{\text{ari}}$, one has $\norm{\partial a}_{\rho}^2=\frac{1}{2}\tau\left((\Gamma(a)+\Gamma(a^*))\rho\right)$. If the operator mean $\Lambda$ is symmetric, then it is dominated by the arithmetic mean and therefore $\mathcal{A}_\Gamma\subset \mathcal{A}_\Lambda $ \cite[Theorem 4.5]{KA80},\cite[Lemma 3.24]{Wir18}. The following definition states that this inclusion is dense in a suitable sense. \begin{definition}\label{def:regular_mean} The operator mean $\Lambda$ is a \emph{regular mean} for $(P_t)$ if for every $x\in \mathcal{A}_\Lambda$ there exists a sequence $(x_n)$ in $\mathcal{A}_\Gamma$ that is bounded in $\mathcal{A}_\Lambda$ and converges to $x$ $\sigma$-weakly. \end{definition} Of course the arithmetic mean is always regular. In general it seems not easy to check this definition directly, but we will discuss a sufficient condition below. 
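For commuting (scalar) arguments the operator means reduce to scalar means, and the logarithmic mean $\int_0^1 a^s b^{1-s}\,ds$ has the closed form $(a-b)/(\log a-\log b)$, sandwiched between the geometric and arithmetic means, consistent with the domination by $\Lambda_{\text{ari}}$ used above. A short numerical illustration (assumptions: scalars only, midpoint-rule quadrature):

```python
import math

def log_mean_integral(a, b, n=10000):
    # midpoint-rule approximation of int_0^1 a^s b^(1-s) ds
    h = 1.0 / n
    return sum(a ** ((i + 0.5) * h) * b ** (1 - (i + 0.5) * h)
               for i in range(n)) * h

def log_mean(a, b):
    # closed form of the scalar logarithmic mean
    return (a - b) / (math.log(a) - math.log(b)) if a != b else a

a, b = 5.0, 2.0
# the integral representation matches the closed form
assert math.isclose(log_mean_integral(a, b), log_mean(a, b), rel_tol=1e-6)
# geometric <= logarithmic <= arithmetic
assert math.sqrt(a * b) <= log_mean(a, b) <= (a + b) / 2
# scalar versions of the operator-mean axioms: normalization and monotonicity
assert log_mean(1.0, 1.0) == 1.0
assert log_mean(a, b) <= log_mean(a + 1, b + 1)
```

In the noncommutative setting $L(\rho)$ and $R(\rho)$ need not commute, which is exactly where the operator-mean formalism of Kubo and Ando becomes essential.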
Given an operator mean $\Lambda$, let $\mathcal{H}_\rho$ be the Hilbert space obtained from $\partial(\mathcal{A}_\Lambda)$ after separation and completion with respect to $\langle\cdot,\cdot\rangle_\rho$ defined by \begin{equation*} \langle\xi,\eta\rangle_\rho=\langle\hat\rho^{1/2}\xi,\hat{\rho}^{1/2}\eta\rangle_{\mathcal{H}}. \end{equation*} If $\Lambda$ is regular, then $\partial(\mathcal{A}_\Gamma)$ is dense in $\mathcal{H}_\rho$. Let $\cD(\mathcal{M},\tau)$ be the set of all \emph{density operators}, that is, \begin{equation*} \cD(\mathcal{M},\tau)=\{\rho\in L^1_+(\mathcal{M},\tau)\mid \tau(\rho)=1\}. \end{equation*} \begin{definition}\label{def:admissible_curve} Fix an operator mean $\Lambda$. A curve $(\rho_t)_{t\in [0,1]}\subset \cD(\mathcal{M},\tau)$ is \emph{admissible} if \begin{itemize} \item the map $t\mapsto \tau(a\rho_t)$ is measurable for all $a\in \mathcal{A}_\Gamma$, \item there exists a curve $(\xi_t)_{t\in [0,1]}$ such that $\xi_t\in \mathcal{H}_{\rho_t}$ for all $t\in [0,1]$, the map $t\mapsto \langle \partial a,\xi_t\rangle_{\rho_t}$ is measurable for all $a\in \mathcal{A}_\Gamma$ and for every $a\in \mathcal{A}_\Gamma$ one has \begin{equation}\label{eq:CE} \frac{d}{dt}\tau(a\rho_t)=\langle \xi_t,\partial a\rangle_{\rho_t} \end{equation} for a.e. $t\in [0,1]$. \end{itemize} \end{definition} For an admissible curve $(\rho_t)$, the vector field $(\xi_t)$ is uniquely determined up to equality a.e. and will be denoted by $(D\rho_t)$. If $\Lambda$ is a regular mean, the set $\mathcal{A}_\Gamma$ can be replaced by $\mathcal{A}_\Lambda$ everywhere in Definition \ref{def:admissible_curve}. \begin{remark} The equation (\ref{eq:CE}) is a weak formulation of \begin{equation*} \dot\rho_t=\partial^\dagger (\hat\rho_t \xi_t), \end{equation*} which can be understood as a noncommutative version of the continuity equation. 
Indeed, if $(P_t)$ is the heat semigroup on a compact Riemannian manifold, it reduces to the classical continuity equation $\dot\rho_t+\operatorname{div}(\rho_t \xi_t)=0$. \end{remark} \begin{definition} The noncommutative transport distance $\mathcal{W}$ on $\cD(\mathcal{M},\tau)$ is defined as \begin{equation*} \mathcal{W}(\bar\rho_0,\bar\rho_1)=\inf_{(\rho_t)}\int_0^1 \norm{D\rho_t}_{\rho_t}\,dt, \end{equation*} where the infimum is taken over all admissible curves $(\rho_t)$ connecting $\bar \rho_0$ and $\bar \rho_1$. \end{definition} \begin{definition}\label{defn:CGE} Let $K\in \mathbb{R}$. A $\Gamma$-regular QMS $(P_t)$ is said to satisfy the gradient estimate $\mathrm{GE}(K,\infty)$ if \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} for $t\geq 0$, $a\in D(\mathscr{L}_2^{1/2})$ and $\rho\in \cD(\mathcal{M},\tau)$. It satisfies $\mathrm{CGE}(K,\infty)$ if $(P_t\otimes \mathrm{id}_{\mathcal{N}})$ satisfies $\mathrm{GE}(K,\infty)$ for any finite von Neumann algebra $\mathcal{N}$. \end{definition} Note that the gradient estimate $\mathrm{GE}(K,\infty)$ depends implicitly on the chosen operator mean $\Lambda$. As observed in \cite[Proposition 6.12]{Wir18}, if $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$ for the arithmetic mean $\Lambda_{\text{ari}}$ and for $\Lambda$, then $\Lambda$ is regular for $(P_t)$. If $\Lambda$ is the right trivial mean, i.e., $\Lambda(L(\rho),R(\rho))=R(\rho)$, then $\mathrm{GE}(K,\infty)$ reduces to the Bakry--Émery criterion \begin{equation*} \Gamma(P_t a)\leq e^{-2Kt}P_t\Gamma(a), \end{equation*} which was considered in \cite{JZ15a}. \begin{remark} Recently, Li, Junge and LaRacuente \cite{JLL20} introduced a closely related notion of lower Ricci curvature bound for quantum Markov semigroups, the \emph{geometric Ricci curvature condition} (see also \cite[Definition 3.22]{BGJ20}). 
Like $\mathrm{CGE}$, this condition is tensor stable, and it implies $\mathrm{CGE}$ for arbitrary operator means \cite[Theorem 3.6]{JLL20} (the result is only formulated for the logarithmic mean, but the proof only uses the transformer inequality for operator means). In the opposite direction, the picture is less clear. For $\mathrm{GE}$, a direct computation on the two-point graph shows that the optimal constant depends on the mean in general. It seems reasonable to expect the same behavior for $\mathrm{CGE}$, which would imply that the optimal constant in $\mathrm{CGE}$ for a specific mean is in general bigger than the optimal constant in the geometric Ricci curvature condition. \end{remark} This gradient estimate is closely related to convexity properties of the logarithmic entropy \begin{equation*} \mathrm{Ent}\colon \cD(\mathcal{M},\tau)\to [0,\infty],\,\mathrm{Ent}(\rho)=\tau(\rho\log \rho). \end{equation*} As usual let $D(\mathrm{Ent})=\{\rho\in \cD(\mathcal{M},\tau)\mid \mathrm{Ent}(\rho)<\infty\}$. \begin{theorem}[{\cite[Theorem 7.12]{Wir18}}]\label{thm:geodesic_convex} Assume that $(P_t)$ is a $\Gamma$-regular QMS. Suppose that $\Lambda=\Lambda_{\log}$ is the logarithmic mean and is regular for $(P_t)$. If $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$, then \begin{enumerate}[(a)] \item for every $\rho\in D(\mathrm{Ent})$ the curve $(P_t \rho)$ satisfies the \emph{evolution variational inequality} $(\mathrm{EVI}_K)$ \begin{equation*} \frac{d}{dt}\frac 1 2\mathcal{W}^2(P_t \rho,\sigma)+\frac K 2 \mathcal{W}^2(P_t \rho,\sigma)+\mathrm{Ent}(\rho)\leq \mathrm{Ent}(\sigma) \end{equation*} for a.e. 
$t\geq 0$ and $\sigma \in \cD(\mathcal{M},\tau)$ with $\mathcal{W}(\rho,\sigma)<\infty$, \item any $\rho_0,\rho_1\in D(\mathrm{Ent})$ with $\mathcal{W}(\rho_0,\rho_1)<\infty$ are connected by a $\mathcal{W}$-geodesic and $\mathrm{Ent}$ is $K$-convex along any constant speed $\mathcal{W}$-geodesic $(\rho_t)$, that is, $\frac{d^2}{dt^2}\mathrm{Ent}(\rho_t)\geq K$ in the sense of distributions. \end{enumerate} \end{theorem} This gradient flow characterization implies a number of functional inequalities for the QMS, see e.g. \cite[Section 8]{CM17a}, \cite[Section 7]{Wir18}, \cite[Section 11]{CM20}. Here we will focus on the modified logarithmic Sobolev inequality and its complete version (see \cite[Definition 2.8]{GJL18}, \cite[Definition 2.12]{JLL20} for the latter). For $\rho,\sigma\in \cD(\mathcal{M},\tau)$ the \emph{relative entropy} of $\rho$ with respect to $\sigma$ is defined as \begin{equation*} \mathrm{Ent}(\rho\Vert \sigma)=\begin{cases}\tau(\rho\log \rho)-\tau(\rho\log\sigma)&\text{if }\operatorname{supp} \rho\subset \operatorname{supp} \sigma,\\ \infty&\text{otherwise}.\end{cases} \end{equation*} If $\mathcal{N}\subset \mathcal{M}$ is a von Neumann subalgebra with $E\colon \mathcal{M}\to \mathcal{N}$ being the conditional expectation, then we define \begin{align*} \mathrm{Ent}_\mathcal{N}(\rho)=\mathrm{Ent}(\rho\lVert E(\rho)). \end{align*} Recall that a \emph{conditional expectation} $E\colon\mathcal{M}\to\mathcal{N}$ is a normal contractive positive projection from $\mathcal{M}$ onto $\mathcal{N}$ which preserves the trace and satisfies \begin{equation*} E(axb)=aE(x)b,~~a,b\in\mathcal{N}, x\in\mathcal{M}. \end{equation*} For $x\in D(\mathscr{L}_2^{1/2})\cap \mathcal{M}_+$ the \emph{Fisher information} is defined as \begin{equation*} \mathcal{I}(x)=\lim_{\epsilon\searrow 0}\langle \mathscr{L}_2^{1/2} x,\mathscr{L}_2^{1/2}\log(x+\epsilon)\rangle_2\in [0,\infty]. 
\end{equation*} This definition can be extended to $x\in L^1_+(\mathcal{M},\tau)$ by setting \begin{equation*} \mathcal{I}(x)=\begin{cases}\lim_{n\to\infty}\mathcal{I}(x\wedge n)&\text{if }x\wedge n\in D(\mathscr{L}_2^{1/2})\cap \mathcal{M}\text{ for all }n\in\mathbb{N},\\ \infty&\text{otherwise}.\end{cases} \end{equation*} Recall that the fixed-point algebra of $(P_t)$ is \begin{equation*} \mathcal{M}^{\mathrm{fix}}=\{x\in\mathcal{M}:P_t(x)=x\text{ for all }t\geq 0\}. \end{equation*} It is a von Neumann subalgebra of $\mathcal{M}$ \cite[Proposition 3.5]{DL92}. \begin{definition} Let $(P_t)$ be a $\Gamma$-regular QMS with the fixed-point subalgebra $\mathcal{M}^{\mathrm{fix}}$. For $\lambda>0$, we say that $(P_t)$ satisfies the modified logarithmic Sobolev inequality with constant $\lambda$ $(\mathrm{MLSI}(\lambda))$, if \begin{equation*} \lambda \mathrm{Ent}_{\mathcal{M}^{\mathrm{fix}}}(\rho)\leq \mathcal{I}(\rho) \end{equation*} for $\rho\in\cD(\mathcal{M},\tau)\cap D(\mathscr{L}_2^{1/2})\cap \mathcal{M}$. We say that $(P_t)$ satisfies the complete modified logarithmic Sobolev inequality with constant $\lambda$ $(\mathrm{CLSI}(\lambda))$ if $(P_t\otimes \mathrm{id}_\mathcal{N})$ satisfies the modified logarithmic Sobolev inequality with constant $\lambda$ for any finite von Neumann algebra $\mathcal{N}$. \end{definition} For ergodic QMS satisfying $\mathrm{GE}(K,\infty)$, the inequality $\mathrm{MLSI}(2K)$ is essentially contained in the proof of \cite[Proposition 7.9]{Wir18}. Since $(P_t\otimes \mathrm{id}_\mathcal{N})$ is not ergodic (unless $\mathcal{N}=\mathbb{C}$), this result cannot imply the complete modified logarithmic Sobolev inequality. However, the modified logarithmic Sobolev inequality for non-ergodic QMS can also still be derived from the gradient flow characterization, as we will see next. \begin{corollary}\label{cor:MLSI} Assume that $(P_t)$ is a $\Gamma$-regular QMS. 
Suppose that $\Lambda=\Lambda_{\log}$ is the logarithmic mean and is regular for $(P_t)$. If $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$, then it satisfies \begin{equation*} \mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho) \end{equation*} for $\rho \in D(\mathscr{L}_2^{1/2})\cap\mathcal{M}_+$ and $t\geq 0$. Moreover, if $K>0$, then $(P_t)$ satisfies $\mathrm{MLSI}(2K)$. The same is true for the complete gradient estimate and the complete modified logarithmic Sobolev inequality. \end{corollary} \begin{proof} Let $\rho\in\cD(\mathcal{M},\tau)\cap D(\mathscr{L}_2^{1/2})\cap\mathcal{M}$ and $\rho_t=P_t\rho$. Since $(\rho_t)$ is an $\mathrm{EVI}_K$ gradient flow curve of $\mathrm{Ent}$ by Theorem \ref{thm:geodesic_convex} and $\frac{d}{dt}\mathrm{Ent}(\rho_t)=-\mathcal{I}(\rho_t)$, it follows from \cite[Theorem 3.5]{MS20} that \begin{equation*} \mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho) \end{equation*} for $t\geq 0$ (using the continuity of both sides in $t$). If $K>0$, then $\mathrm{MLSI}(2K)$ follows from a standard argument; see for example \cite[Lemma 2.15]{JLL20}. The implication for the complete versions is clear. \end{proof} \begin{remark} The inequality $\mathcal{I}(P_t \rho)\leq e^{-2Kt}\mathcal{I}(\rho)$ is called $K$-Fisher monotonicity in \cite{BGJ20} and plays a central role there in obtaining complete logarithmic Sobolev inequalities. \end{remark} \section{Gradient estimates through intertwining}\label{sec:intertwining} Following the ideas from \cite{CM17a,CM20}, we will show in this section how one can obtain gradient estimates for quantum Markov semigroups through intertwining. As examples we discuss the Ornstein--Uhlenbeck semigroup on the mixed $q$-Gaussian algebras, the heat semigroup on quantum tori, and a family of quantum Markov semigroups on discrete group von Neumann algebras and the quantum groups $O_N^+$ and $S_N^+$. 
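Before turning to the noncommutative setting, it may help to keep in mind the classical model case (a standard fact, not taken from the references above): for the heat semigroup $(P_t)$ on $\mathbb{R}^n$ one has the exact intertwining $\nabla P_t=P_t\nabla$ acting componentwise, and Jensen's inequality applied componentwise gives
\begin{equation*}
\abs{\nabla P_t f}^2=\abs{P_t\nabla f}^2\leq P_t\abs{\nabla f}^2,
\end{equation*}
which is precisely the Bakry--Émery gradient estimate with constant $K=0$. The intertwining criterion below replaces $\nabla$ by the derivation $\partial$ and the semigroup on the right-hand side by a suitable family $(\vec P_t)$ acting on the bimodule $\mathcal{H}$.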
Throughout this section we assume that $\mathcal{M}$ is a separable von Neumann algebra with a normal faithful tracial state $\tau$ and $(P_t)$ is a $\Gamma$-regular QMS. We fix the corresponding first-order differential calculus $(\mathcal{H}, L, R, \mathcal{J}, \partial)$. We do not make any assumptions on $\Lambda$ beyond being an operator mean. In particular, all results from this section apply to the logarithmic mean -- thus yielding geodesic convexity by Theorem \ref{thm:geodesic_convex} -- as well as the right-trivial mean -- thus giving Bakry--Émery estimates. \begin{theorem}\label{thm:intertwining} Let $K\in\mathbb{R}$. If there exists a family $(\vec P_t)$ of bounded linear operators on $\mathcal{H}$ such that \begin{enumerate}[(i)] \item $\partial P_t =\vec P_t \partial$ for $t\geq 0$, \item $\vec P_t^\dagger L(\rho) \vec P_t\leq e^{-2Kt}L(P_t \rho)$ for $\rho\in\mathcal{M}_+$, $t\geq 0$, \item $\vec P_t^\dagger R(\rho) \vec P_t\leq e^{-2Kt}R(P_t \rho)$ for $\rho\in\mathcal{M}_+$, $t\geq 0$, \end{enumerate} then $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$. \end{theorem} \begin{proof} Let $\rho\in \mathcal{M}_+$ and $a\in D(\partial)$. Since $\Lambda$ is an operator mean, we have \cite[Theorem 3.5]{KA80} \begin{equation*} \vec P_t^\dagger \Lambda(L(\rho),R(\rho))\vec P_t\leq \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho) \vec P_t). \end{equation*} Thus \begin{equation*} \langle \hat \rho \partial P_t a,\partial P_t a\rangle_\mathcal{H}=\langle \vec P_t^\dagger \hat \rho \vec P_t \partial a,\partial a\rangle_\mathcal{H}\leq \langle \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho)\vec P_t)\partial a,\partial a\rangle_\mathcal{H}.
\end{equation*} As $\Lambda$ is monotone in both arguments and positively homogeneous, conditions (ii) and (iii) imply \begin{equation*} \langle \Lambda(\vec P_t^\dagger L(\rho)\vec P_t,\vec P_t^\dagger R(\rho)\vec P_t)\partial a,\partial a\rangle_\mathcal{H}\leq e^{-2Kt}\langle \Lambda(L(P_t \rho),R(P_t \rho))\partial a,\partial a\rangle_\mathcal{H}. \end{equation*} Combining these estimates yields \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2.\qedhere \end{equation*} \end{proof} \begin{remark}\label{rmk:diff_calc_ind} The proof shows that assumptions (i)--(iii) still imply \begin{equation*} \norm{\partial P_t a}_{\rho}^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} if the differential calculus is not the one associated with $(P_t)$. We will use this observation in the proofs of Theorem \ref{thm:tensor_product} and Theorem \ref{thm:com_proj}. \end{remark} \begin{remark} A similar technique to obtain geodesic convexity of the entropy has been employed in \cite{CM17a,CM20}. Our proof using the transformer inequality for operator means is in some sense dual to the monotonicity argument used there (see \cite{Pet96}). Apart from working in the infinite-dimensional setting, let us point out two main differences from the results of these two articles: In contrast to \cite{CM17a}, we do not assume that $\vec P_t$ is a direct sum of copies of $P_t$ (in fact, we do not even assume that $\mathcal{H}$ is a direct sum of copies of the trivial bimodule). This enhanced flexibility can lead to better bounds even for finite-dimensional examples (see Example \ref{ex:cond_exp}). In contrast to \cite{CM20}, our conditions (ii) and (iii) are more restrictive, but they are also linear in $\rho$, which makes them potentially more feasible to check in concrete examples.
\end{remark} \begin{remark} We do not assume that the operators $\vec P_t$ form a semigroup or that they are completely positive (if $\mathcal{H}$ is realized as a subspace of $L^2(\mathcal{N})$ for some von Neumann algebra $\mathcal{N}$). However, this is the case for most of the concrete examples where we can prove (i)--(iii). \end{remark} \begin{remark} In particular, the conclusion of the previous theorem holds for all symmetric operator means, and in view of the discussions after Definition \ref{defn:CGE}, it implies that any symmetric operator mean is regular for $(P_t)$. \end{remark} Under a slightly stronger assumption, conditions (ii) and (iii) can be rewritten in a way that resembles the classical Bakry--Émery criterion. For that purpose define \begin{equation*} \vec \Gamma\left(\sum_{k=1}^n (\partial x_k)y_k\right)=\sum_{k,l=1}^n y_k^\ast \Gamma(x_k,x_l)y_l. \end{equation*} In particular, $\vec \Gamma(\partial x)=\Gamma(x)$. Since $(P_t)$ is $\Gamma$-regular, $\vec \Gamma$ extends to a continuous quadratic map from $\mathcal{H}$ to $L^1(\mathcal{M},\tau)$ that is uniquely determined by the property $\tau(x\vec \Gamma(\xi))=\langle\xi,\xi x\rangle_\mathcal{H}$ for all $x\in \mathcal{M}$ and $\xi\in \mathcal{H}$ (see \cite[Section 2]{Wir18}). \begin{lemma} If $(\vec P_t)$ is a family of bounded linear operators on $\mathcal{H}$ that commute with $\mathcal{J}$, then conditions (ii) and (iii) from Theorem \ref{thm:intertwining} are equivalent. Moreover, they are equivalent to \begin{equation}\label{eq:vec_Bakry_Emery} \vec \Gamma(\vec P_t \xi)\leq e^{-2Kt}P_t \vec\Gamma(\xi) \end{equation} for $\xi\in \mathcal{H}$, $t\geq 0$. \end{lemma} \begin{proof} To see the equivalence of (ii) and (iii), it suffices to notice that $\mathcal{J}$ is a bijection and $\mathcal{J} L(\rho)\mathcal{J}=R(\rho)$ for $\rho\in \mathcal{M}_+$. 
The equivalence of (iii) and (\ref{eq:vec_Bakry_Emery}) follows from the following identities, valid for all $\rho\in \mathcal{M}_+$: \begin{align*} \langle \vec P_t \xi,R(\rho)\vec P_t \xi\rangle_\mathcal{H}&=\tau(\rho \vec \Gamma(\vec P_t \xi)),\\ \langle \xi,R(P_t \rho)\xi\rangle_\mathcal{H}&=\tau(P_t \rho\vec \Gamma(\xi))=\tau(\rho P_t \vec \Gamma(\xi)).\qedhere \end{align*} \end{proof} As indicated before, our theorem recovers the intertwining result in \cite{CM17a} (in the tracially symmetric case): \begin{corollary}\label{cor:intertwining_dir_sum} Assume that $\mathcal{H}\cong\bigoplus_j L^2(\mathcal{M},\tau)$, $L$ and $R$ act componentwise as left and right multiplication and $\mathcal{J}$ acts componentwise as the usual involution. If $\partial_j P_t=e^{-Kt}P_t \partial_j$, then $(P_t)$ satisfies $\mathrm{CGE}(K,\infty)$. \end{corollary} \begin{proof} Let $\vec P_t=e^{-Kt}\bigoplus_j P_t$. Condition (i) from Theorem \ref{thm:intertwining} is satisfied by assumption. Since $\vec P_t$ commutes with $\mathcal{J}$, conditions (ii) and (iii) are equivalent. Condition (iii) follows directly from the Kadison--Schwarz inequality: \begin{align*} \langle \vec P_t\xi,R(\rho)\vec P_t\xi\rangle_\mathcal{H}&=\sum_{j}e^{-2Kt}\tau((P_t \xi_j)^\ast (P_t \xi_j)\rho)\\ &\leq e^{-2Kt}\sum_j \tau(\xi_j^\ast \xi_j P_t\rho)\\ &=e^{-2Kt}\langle \xi,R(P_t \rho)\xi\rangle_\mathcal{H}. \end{align*} This settles $\mathrm{GE}(K,\infty)$. Applying the same argument to $(P_t\otimes \mathrm{id}_\mathcal{N})$ then yields the complete gradient estimate. \end{proof} \begin{example}[Conditional expectations]\label{ex:cond_exp} Let $E\colon \mathcal{M}\to \mathcal{N}$ be the conditional expectation onto a von Neumann subalgebra $\mathcal{N}$ and let $(P_t)$ be the QMS with generator $\mathscr{L}=I-E$, where $I=\mathrm{id}_{\mathcal{M}}$ is the identity operator on $\mathcal{M}$. Then $(P_t)$ satisfies $\mathrm{CGE}(1/2,\infty)$: a direct computation shows that $P_t =e^{-t}I+(1-e^{-t})E$.
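For completeness, here is the elementary semigroup computation behind this formula (the auxiliary notation $Q_t$ is only used for this verification): since $E$ is a projection, $E^2=E$, so the family $Q_t:=e^{-t}I+(1-e^{-t})E$ satisfies $Q_0=I$ and
\begin{equation*}
\frac{d}{dt}Q_t=-e^{-t}(I-E)=-(I-E)Q_t=-\mathscr{L}Q_t,
\end{equation*}
whence $Q_t=e^{-t\mathscr{L}}=P_t$. In particular, $P_t\rho-e^{-t}\rho=(1-e^{-t})E(\rho)\geq 0$ for $\rho\in\mathcal{M}_+$.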
Let $\vec P_t=e^{-t}\mathrm{id}_{\mathcal{H}}$. Since $\mathscr{L} E=0$, we have $\partial E=0$ and therefore $\partial P_t =e^{-t}\partial=\vec{P}_t\partial$, which settles condition (i) from Theorem \ref{thm:intertwining}. Conditions (ii) and (iii) with $K=1/2$ follow immediately from $P_t \rho\geq e^{-t}\rho$ for $\rho\in \mathcal{M}_+$. So $(P_t)$ satisfies $\mathrm{CGE}(1/2,\infty)$. This result has been independently obtained in \cite[Theorem 4.16]{BGJ20}. In contrast, if for example $p$ is a projection and $E(x)=pxp+(1-p)x(1-p)$, then $\mathscr{L}$ has the Lindblad form $\mathscr{L} x=[p,[p,x]]$. Clearly, $[p,\cdot]$ commutes with $\mathscr{L}$, so that the intertwining criterion from \cite{CM17a} only implies $\mathrm{CGE}(0,\infty)$. In fact, in this case we may obtain a better result; see Theorem \ref{thm:com_proj}. \end{example} \begin{example}[Mixed $q$-Gaussian algebras] Let us recall the mixed $q$-Gaussian algebras. Our references are \cite{BS91,BS94,BKS97,LP99}. Let $H$ be a real Hilbert space with orthonormal basis $(e_j)_{j\in J}$. For $k\ge 1$, denote by $S_k$ the set of permutations of $\{1,2,\dots,k\}$. For $k\ge 2$ and $1\le j\le k-1$, denote by $\sigma_{j}$ the adjacent transposition between $j$ and $j+1$. For any $\sigma\in S_k$, $I(\sigma)$ is the number of inversions of the permutation $\sigma$: $$I(\sigma)=\sharp\{(i,j):1\le i<j\le k,~\sigma(i)>\sigma(j)\}.$$ For $k\ge 1$, a \emph{$k$-atom} on $H$ is an element of the form $f_1\otimes \cdots \otimes f_k$ with each $f_j\in H$. A \emph{$k$-basis atom} is an element of the form $e_{j_1}\otimes \cdots\otimes e_{j_k}$. Clearly all the $k$-basis atoms form a basis of $H^{\otimes k}$. For any $k$-basis atom $u=e_{j_1}\otimes \cdots\otimes e_{j_k}$, we write $\sigma(u)=e_{j_{\sigma(1)}}\otimes \cdots\otimes e_{j_{\sigma(k)}}$. Let $Q=(q_{ij})_{i,j\in J}\in\mathbb{R}^{J\times J}$ be such that $q_{ij}=q_{ji}$ for all $i,j\in J$ and $\sup_{i,j\in J}|q_{ij}|\le1$.
For convenience, in the following we actually assume that $\sup_{i,j\in J}|q_{ij}|<1$. This is to simplify the definition of the Fock space; our main results still apply to the general $\sup_{i,j\in J}|q_{ij}|\le1$ case. Put $P^{(0)}=\mathrm{id}_{H^{\otimes 0}}$. For any $k\ge 1$, denote by $P^{(k)}$ the linear operator on $H^{\otimes k}$ such that \begin{equation*} P^{(k)}(u)=\sum_{\sigma\in S_k}a(Q,\sigma,u)\sigma^{-1}(u), \end{equation*} where $u=e_{j_1}\otimes \cdots \otimes e_{j_k}$ is any $k$-basis atom and \begin{equation*} a(Q,\sigma,u) =\begin{cases} 1&\text{if }\sigma=\mathrm{id},\\ q_{j_{m_l}j_{m_l+1}}\prod_{i=1}^{l-1}q_{j_{\varphi_{i}(m_{l-i})}j_{\varphi_{i}(m_{l-i}+1)}}&\text{if }\sigma=\sigma_{m_1}\cdots\sigma_{m_l}, \end{cases} \end{equation*} with $\varphi_i=\sigma_{m_{l-i+1}}\cdots\sigma_{m_l}$. Notice that if $\sigma=\sigma_{m_1}\cdots\sigma_{m_l}$, the coefficient $a(Q,\sigma,u)$ is well-defined, though such a representation of $\sigma$ is not unique. When all the entries of $Q$ are the same, that is, $q_{ij} \equiv q$, the operator $P^{(k)}$ reduces to \begin{equation*} P^{(k)}(u)=\sum_{\sigma\in S_k} q^{I(\sigma)}\sigma(u). \end{equation*} Under the condition that $\sup_{i,j\in J}|q_{ij}|<1$, the operator $P^{(k)}$ is strictly positive \cite[Theorem 2.3]{BS94}. Let $\mathcal{F}_{Q}^{\text{finite}}$ be the subspace of finite sums of the spaces $H^{\otimes k},k\ge 0$, where $H^{\otimes 0}=\mathbb{R}\Omega$ and $\Omega$ is the vacuum vector. Then $\mathcal{F}_{Q}^{\text{finite}}$ is a dense subset of $\oplus_{k\ge 0}H^{\otimes k}$, and we define an inner product $\langle\cdot,\cdot \rangle_{Q}$ on $\mathcal{F}_{Q}^{\text{finite}}$ by \begin{equation*} \langle \xi,\eta\rangle_{Q}=\delta_{kl}\langle \xi,P^{(l)}\eta\rangle_0,\text{ for }\xi\in H^{\otimes k},\eta\in H^{\otimes l},\text{ and }k,l\ge0, \end{equation*} where $\langle \cdot,\cdot\rangle_0$ is the usual inner product.
The Fock space $\mathcal{F}_{Q}(H)$ is the completion of $\mathcal{F}_{Q}^{\text{finite}}$ with respect to the inner product $\langle\cdot,\cdot \rangle_{Q}$. When $q_{ij}\equiv q$, the Fock space $\mathcal{F}_{Q}(H)$ is also denoted by $\mathcal{F}_{q}(H)$ for short. Notice that if we only have $\sup_{i,j\in J}|q_{ij}|\le 1$, then each $P^{(k)}$ is only positive. One then has to quotient $\mathcal{F}_{Q}^{\text{finite}}$ by the kernel of $\langle\cdot,\cdot \rangle_{Q}$ before taking the completion. The definition of Fock space here is actually the same as the one in \cite{BS94} associated to the Yang--Baxter operator $$T:H\otimes H\to H\otimes H,~~e_i\otimes e_j\mapsto q_{ji}e_j\otimes e_{i}.$$ See \cite[Part I]{LP99} for a detailed proof of this when $\dim H<\infty$. Now we recall the mixed $q$-Gaussian algebra $\Gamma_{Q}(H)$. For any $i\in J$, the \emph{left creation operator} $l_i$ is defined by \begin{equation*} l_i(\xi)=e_i\otimes \xi,~~\xi\in\mathcal{F}_{Q}(H). \end{equation*} Its adjoint with respect to $\langle \cdot,\cdot\rangle_{Q}$, the \emph{left annihilation operator} $l_i^*$, is given by \begin{equation*} l_i^*(\Omega)=0, \end{equation*} \begin{align*} l^*_i(e_{j_1}\otimes \cdots \otimes e_{j_k})=\sum_{m=1}^{k}&\big(\delta_{i j_m}q_{j_{m}j_{m-1}}q_{j_{m}j_{m-2}}\cdots q_{j_{m}j_{1}}\\ &\quad e_{j_1}\otimes \cdots \otimes e_{j_{m-1}}\otimes e_{j_{m+1}}\otimes \cdots \otimes e_{j_k}\big). \end{align*} The left annihilation operators and left creation operators satisfy the deformed commutation relations on $\mathcal{F}_{Q}(H)$: \begin{equation*} l_i^* l_j-q_{ij}l_j l_i^*=\delta_{ij}\mathrm{id},~~i,j\in J. \end{equation*} The mixed $q$-Gaussian algebra $\Gamma_{Q}(H)$ is defined as the von Neumann subalgebra of $B(\mathcal{F}_{Q}(H))$ generated by the self-adjoint operators $s_i=l_i+l_i^*,i\in J$. It is equipped with a normal faithful tracial state $\tau_Q$ given by \begin{equation*} \tau_Q(x)=\langle x \Omega,\Omega\rangle_Q.
\end{equation*} The map $\phi_{H}\colon\Gamma_{Q}(H)\to\mathcal{F}_{Q}(H),x\mapsto x(\Omega)$, extends to a unitary, which we still denote by $\phi_H$, from $L^2(\Gamma_{Q}(H),\tau_Q)$ to $\mathcal{F}_{Q}(H)$. Note that $\phi_H(s_i)=e_i$. Let $T\colon H\to H$ be a contraction. Then it induces a contraction $\mathcal{F}_Q(T)$ on $\mathcal{F}_Q(H)$ such that \cite[Lemma 1.1]{LP99} $$\mathcal{F}_Q(T)\Omega=\Omega,$$ $$\mathcal{F}_Q(T)(f_1\otimes \cdots \otimes f_k)=T(f_1)\otimes \cdots \otimes T(f_k),$$ for any $k$-atom $f_1\otimes \cdots \otimes f_k$ and any $k\ge 1$. Moreover, there exists a unique unital and completely positive map $\Gamma_Q(T)$ on $\Gamma_Q(H)$ such that \cite[Lemma 3.1]{LP99} $$\Gamma_Q(T)=\phi_{H}^{-1}\mathcal{F}_{Q}(T)\phi_{H}.$$ Remark that $\Gamma_Q$ is a functor, that is, $\Gamma_Q(ST)=\Gamma_Q(S)\Gamma_Q(T)$ for two contractions $S,T$ on $H$. If $q_{ij}\equiv q\in[-1,1]$, then we write the functor $\Gamma_Q$ as $\Gamma_q$ for short. It interpolates between the bosonic and the fermionic functors by taking $q=+1$ and $q=-1$, respectively. When $q=0$, it becomes the free functor of Voiculescu \cite{Voi85}. For more examples, see \cite[Introduction]{LP99}. In particular, $T_t=T_t^Q=\mathcal{F}_Q(e^{-t}\mathrm{id}_{H})$ is a semigroup of contractions on $\mathcal{F}_Q(H)$. The mixed $q$-Ornstein--Uhlenbeck semigroup is defined as $P_t=P_t^Q=\Gamma_Q(e^{-t}\mathrm{id}_H), t\ge 0$. It extends to a semigroup of contractions on $L^2(\Gamma_Q(H),\tau_Q)$ and is $\tau_Q$-symmetric. Note that the generator of $P_t$ is $L=\phi^{-1}_{H}N\phi_{H}$, where $N\colon\mathcal{F}_{Q}^{\text{finite}}(H)\to \mathcal{F}_{Q}^{\text{finite}}(H)$ is the number operator, defined as $k\mathrm{id}$ on its eigenspace $H^{\otimes k},k\ge 0$. Put \begin{equation*} Q'=Q\otimes \begin{pmatrix} 1&1\\ 1&1 \end{pmatrix}, \end{equation*} and $$e=\begin{pmatrix} 1\\ 0 \end{pmatrix},~~ f=\begin{pmatrix} 0\\ 1 \end{pmatrix}.
$$ Then $H':=H\oplus H$ can be identified with $H\otimes \mathbb{R}^2$, as the direct sum of $H\otimes \mathbb{R}e$ and $H\otimes \mathbb{R}f$. The number operator $N$ admits the following form \cite[Lemma 1.2]{LP99}: $N=\nabla^\dagger\nabla$, where $\nabla\colon\mathcal{F}_{Q}^{\text{finite}}(H)\to \mathcal{F}_{Q'}^{\text{finite}}(H')$ is the \emph{gradient operator} such that $\nabla(\Omega)=0$, and \begin{equation*} \nabla(u)=\sum_{i=1}^{k}u\otimes v_i, \end{equation*} for $k\ge 1$, $u$ being any $k$-atom on $H$ and $v_i=e\otimes \cdots\otimes f\otimes \cdots \otimes e\in(\mathbb{R}^2)^{\otimes k}$, $f$ occurring in the $i$-th factor. Remark that, similarly to the second quantization of a contraction $T\colon H\to H$, the natural embedding $\iota_H\colon H\to H',x\mapsto x\otimes e$ also induces a unique map $h_H\colon\Gamma_{Q}(H)\to \Gamma_{Q'}(H')$ such that \cite[Lemma 3.1]{LP99} \begin{equation}\label{eq:h_H} h_H=\Gamma_Q(\iota_H)=\phi_{H'}^{-1}\mathcal{F}_Q(\iota_H)\phi_H, \end{equation} where $\mathcal{F}_Q(\iota_H)$ is defined as $\iota_H\otimes \cdots \otimes \iota_H$ on $H^{\otimes k}$, $k\ge 0$. Set $\partial:=\phi_{H'}^{-1}\nabla \phi_{H}$. Then the generator $L$ of $P_t$ takes the form $L=\partial^\dagger\partial$ and $\partial$ is a derivation \cite[Proposition 3.2]{LP99}: \begin{equation*} \partial(xy)=\partial(x)h_{H}(y)+h_{H}(x)\partial(y), \end{equation*} for all $x,y\in \phi_H^{-1}(\mathcal{F}_{Q}^{\text{finite}}(H))$. \smallskip Now we prove that $P_t=e^{-tL}$ on $\Gamma_{Q}(H)$ satisfies $\mathrm{CGE}(1,\infty)$. For this let us first take a look at the semigroup $T_t=e^{-tN}$ on $\mathcal{F}_{Q}(H)$. By definition, it equals $e^{-kt}\mathrm{id}$ on its eigenspace $H^{\otimes k}$.
For each $t\ge 0$, consider the map $$\vec{T}_t=e^{-t}\mathcal{F}_{Q'}(S_t)\colon\mathcal{F}_{Q'}(H')\to \mathcal{F}_{Q'}(H'),$$ where $S_t$ is a contraction on $H'$ given by $$S_t(x\otimes e)=e^{-t}x\otimes e,~~S_t(x\otimes f)=x\otimes f,~~x\in H.$$ Then by definition, we have the intertwining condition \begin{equation}\label{eq:intertwining for Fock space} \nabla T_t=\vec{T}_t \nabla. \end{equation} In fact, it is obvious when acting on $\mathbb{R}\Omega$. If $u$ is a $k$-atom on $H$, $k\ge1$, then \begin{equation*} \nabla T_t(u)=e^{-kt}\nabla (u)=e^{-kt}\sum_{i=1}^{k}u\otimes v_i, \end{equation*} and \begin{equation*} \vec{T}_t \nabla (u) =\sum_{i=1}^{k}\vec{T}_t (u\otimes v_i) =e^{-t}\sum_{i=1}^{k}\mathcal{F}_{Q'}(S_t)(u\otimes v_i) =e^{-kt}\sum_{i=1}^{k}u\otimes v_i. \end{equation*} Remark that if one chooses $\vec{T}_t=\mathcal{F}_{Q'}(e^{-t}\mathrm{id}_{H'})$, then we can only obtain $\mathrm{CGE}(0,\infty)$. Put $\vec{P}_t=\phi_{H'}^{-1}\vec{T}_t\phi_{H'}$. Then $\vec{P}_t$ is $\tau_{Q'}$-symmetric. Note that $P_t=\phi_H^{-1}T_t\phi_H$, thus by \eqref{eq:intertwining for Fock space} we have the intertwining condition \begin{equation*} \partial P_t =\phi_{H'}^{-1}\nabla T_t\phi_{H} =\phi_{H'}^{-1}\vec{T}_t\nabla\phi_{H} =\vec{P}_t \partial,~t\ge 0. \end{equation*} Note that $S_t\circ \iota_H=e^{-t}\iota_H\circ\mathrm{id}_H,t\ge 0$. This, together with the definitions of $h_H$ \eqref{eq:h_H} and $\vec{P}_t$, yields \begin{align} \begin{split}\label{eq:intertwining h_H P_t} \vec{P}_t h_{H} &=e^{-t}\phi_{H'}^{-1}\mathcal{F}_{Q'}(S_t)\mathcal{F}_Q(\iota_H)\phi_H\\ &=e^{-t}\phi_{H'}^{-1}\mathcal{F}_Q(\iota_H)\mathcal{F}_Q(e^{-t}\mathrm{id}_H)\phi_H\\ &=e^{-t}h_{H} P_t. 
\end{split} \end{align} By Theorem \ref{thm:intertwining}, to show that $P_t$ satisfies $\mathrm{GE}(1,\infty)$, it remains to check (ii) and (iii) with $\vec{P}_t$ as above and the left and right action of $\Gamma_Q(H)$ on $\Gamma_{Q'}(H')$ being $$L(\rho)a= h_{H}(\rho)a,~~ R(\rho)a=a h_{H}(\rho).$$ To prove (ii) we need to show that for any $\rho\in \Gamma_Q(H)_+$ and $a\in\Gamma_{Q'}(H')$: \begin{equation*} \langle\vec{P}_t (a),L(\rho)\vec{P}_t (a)\rangle_2 \le e^{-2t} \langle a,L(P_t(\rho))(a)\rangle_2, \end{equation*} where the inner product is induced by $\tau_{Q'}$. To see this, note that $\vec{P}_t$ is completely positive and $\vec{P}_t(1)=e^{-t}1$ \cite[Lemma 3.1]{LP99}. By the Kadison--Schwarz inequality and \eqref{eq:intertwining h_H P_t}, we have \begin{align*} \langle\vec{P}_t (a),L(\rho)\vec{P}_t (a)\rangle_2 &=\tau_{Q'}\left(\vec{P}_t (a)\vec{P}_t (a)^*h_{H}(\rho)\right)\\ &\le e^{-t}\tau_{Q'}\left(\vec{P}_t (aa^*)h_{H}(\rho)\right)\\ &=e^{-t}\tau_{Q'}\left(aa^*\vec{P}_t h_{H}(\rho)\right)\\ &=e^{-2t}\tau_{Q'}\left(aa^* h_{H} P_t(\rho)\right)\\ &=e^{-2t} \langle a,L(P_t(\rho))(a)\rangle_2, \end{align*} which finishes the proof of (ii). The proof of (iii) is similar. So $P_t$ satisfies $\mathrm{GE}(1,\infty)$. Applying the same argument to $P_t\otimes \mathrm{id}_{\mathcal{N}}$, we obtain $\mathrm{CGE}(1,\infty)$. \end{example} \begin{remark} As mentioned in \cite[Section 4.4]{JLL20}, the previous example can also be deduced from the complete gradient estimate for the classical Ornstein--Uhlenbeck semigroup using the ultraproduct methods from \cite{JZ15b}. However, in contrast to this approach we do not need to use the Ricci curvature bound for the classical Ornstein--Uhlenbeck semigroup, but get it as a special case (with minor modifications accounting for $\abs{q}=1$ in this case). 
\end{remark} \begin{example}[Quantum Tori] For $\theta\in [0,1)$ let $A_\theta$ be the universal $C^\ast$-algebra generated by unitaries $u=u_{\theta},v=v_{\theta}$ subject to the relation $vu=e^{2\pi i\theta}uv$. Let $\tau=\tau_{\theta}$ be the unique faithful tracial state on $A_\theta$ given by $\tau(u^m v^n)=\delta_{m,0}\delta_{n,0}$. The semigroup $(P_t)=(P^\theta_t)$ given by $P_t(u^m v^n)=e^{-t(m^2+n^2)}u^m v^n$ extends to a $\tau$-symmetric QMS on $L^\infty(A_\theta,\tau)$, which satisfies $\mathrm{CGE}(0,\infty)$. Here $L^\infty(A_\theta,\tau)$ denotes the strong closure of $A_\theta$ in the GNS representation associated with $\tau$. In fact, according to \cite[Section 10.6]{CS03}, $\mathcal{H}=L^2(A_{\theta},\tau)\oplus L^2(A_{\theta},\tau)$ and $\partial(u^m v^n)=(\partial_1(u^m v^n),\partial_2(u^m v^n))=i(mu^m v^n,n u^m v^n)$. Clearly, $\partial_j$ commutes with $P_t$ for $j=1,2$, so that $\mathrm{CGE}(0,\infty)$ follows from Corollary \ref{cor:intertwining_dir_sum}. In the commutative case $\theta=0$, $A_{\theta}=C(\mathbb{T}^2)$ is the C*-algebra of all continuous functions on the flat $2$-torus $\mathbb{T}^2$ and the semigroup $(P_t)$ is the heat semigroup generated by the Laplace--Beltrami operator on the flat $2$-torus, which has vanishing Ricci curvature. Thus the constant $0$ in the gradient estimate is optimal. In fact, for any $\theta,\theta'\in [0,1)$, the semigroup $(P_t^\theta)$ on $L^\infty(A_{\theta},\tau_\theta)$ satisfies $\mathrm{CGE}(K,\infty)$ if and only if the semigroup $(P_t^{\theta'})$ on $L^\infty(A_{\theta'},\tau_{\theta'})$ satisfies $\mathrm{CGE}(K,\infty)$. Thus the gradient estimate $\mathrm{CGE}(0,\infty)$ is optimal for any $\theta\in[0,1)$. To see this, note first that by standard approximation arguments it suffices to show $\mathrm{GE}(K,\infty)$ for $\rho\in (A_{\theta})_+$ and $a\in D(\mathscr{L}_2^{1/2})\cap A_\theta$.
By the universal property of $A_{\theta+\theta'}$, there exists a $^\ast$-homomorphism $\pi\colon A_{\theta+\theta'}\to A_{\theta}\otimes A_{\theta'}$ such that \begin{equation*} \pi(u_{\theta+\theta'})=u_{\theta}\otimes u_{\theta'},~~\pi(v_{\theta+\theta'})=v_{\theta}\otimes v_{\theta'}. \end{equation*} Clearly $\pi$ is trace preserving and satisfies \begin{equation*} (P^{\theta}_t\otimes \mathrm{id}_{A_{\theta'}})\circ\pi=\pi\circ P_t^{\theta+\theta'}. \end{equation*} So if $P^{\theta}_t$ satisfies $\mathrm{CGE}(K,\infty)$, then so does $P^{\theta+\theta'}_t$. Since $\theta$ and $\theta'$ are arbitrary, this proves the assertion. This idea of transference was used in \cite{Ric16} to give a simple proof that the completely bounded Fourier multipliers on noncommutative $L_p$-spaces associated with quantum tori $A_{\theta}$ do not depend on the parameter $\theta$. The transference technique has been used in \cite{GJL18,JLL20} to study the complete logarithmic Sobolev inequality. The same conclusion holds for the $d$-dimensional quantum torus $A_{\theta}$ with $\theta$ being a $d$-by-$d$ real skew-symmetric matrix. \end{example} \begin{example}[Quantum groups]\label{ex:quantum groups} A \emph{compact quantum group} is a pair $\mathbb{G}=(A,\Delta)$ consisting of a unital C*-algebra $A$ and a unital $^\ast$-homomorphism $\Delta\colon A\to A\otimes A$ such that \begin{enumerate} \item $(\Delta\otimes\mathrm{id}_A)\Delta=(\mathrm{id}_A\otimes\Delta)\Delta$; \item $\{\Delta(a)(1\otimes b):a,b\in A\}$ and $\{\Delta(a)(b\otimes1):a,b\in A\}$ are linearly dense in $A\otimes A$. \end{enumerate} Here $A\otimes A$ is the minimal C*-algebra tensor product. The homomorphism $\Delta$ is called the \emph{comultiplication} on $A$. We denote $A=C(\mathbb{G})$. Any compact quantum group $\mathbb{G}=(A,\Delta)$ admits a unique \textit{Haar state}, i.e.\ a state $h$ on $A$ such that \begin{equation*} (h\otimes\mathrm{id}_A)\Delta(a)=h(a)1=(\mathrm{id}_A\otimes h)\Delta(a),~~a\in A.
\end{equation*} Consider an element $u\in A\otimes B(H)$ with $\dim H=n$. By identifying $A\otimes B(H)$ with $M_n(A)$ we can write $u=[u_{ij}]_{i,j=1}^{n}$, where $u_{ij}\in A$. The matrix $u$ is called an \textit{$n$-dimensional representation} of $\mathbb{G}$ if we have \[ \Delta(u_{ij})=\sum_{k=1}^{n}u_{ik}\otimes u_{kj},~~i,j=1,\dots,n. \] A representation $u$ is called \textit{unitary} if $u$ is unitary as an element in $M_n(A)$, and \textit{irreducible} if the only matrices $T\in M_n(\mathbb{C})$ such that $uT=Tu$ are multiples of the identity matrix. Two representations $u,v\in M_n(A)$ are said to be \textit{equivalent} if there exists an invertible matrix $T\in M_n(\mathbb{C})$ such that $Tu=vT$. Denote by $\mathrm{Irr}(\mathbb{G})$ the set of equivalence classes of irreducible unitary representations of $\mathbb{G}$. For each $\alpha\in\mathrm{Irr}(\mathbb{G})$, denote by $u^\alpha\in A\otimes B(H_\alpha)$ a representative of the class $\alpha$, where $H_\alpha$ is the finite-dimensional Hilbert space on which $u^\alpha$ acts. In the sequel we write $n_\alpha=\dim H_\alpha$. Denote $\mathrm{Pol}(\mathbb{G})=\text{span} \left\{u^\alpha_{ij}:1\leq i,j\leq n_\alpha,\alpha\in\mathrm{Irr}(\mathbb{G})\right\}$. This is a dense subalgebra of $A$. On $\mathrm{Pol}(\mathbb{G})$ the Haar state $h$ is faithful.
It is well known that $(\mathrm{Pol}(\mathbb{G}),\Delta)$ is equipped with a Hopf*-algebra structure, that is, there exist a linear antihomomorphism $S$ on $\mathrm{Pol}(\mathbb{G})$, called the \textit{antipode}, and a unital $^\ast$-homomorphism $\epsilon\colon\mathrm{Pol}(\mathbb{G})\to\mathbb{C}$, called the \textit{counit}, such that \begin{equation*} (\epsilon\otimes\mathrm{id}_{\mathrm{Pol}(\mathbb{G})})\Delta(a)=a=(\mathrm{id}_{\mathrm{Pol}(\mathbb{G})}\otimes\epsilon)\Delta(a),~~a\in\mathrm{Pol}(\mathbb{G}), \end{equation*} and \begin{equation*} m(S\otimes\mathrm{id}_{\mathrm{Pol}(\mathbb{G})})\Delta(a)=\epsilon(a)1=m(\mathrm{id}_{\mathrm{Pol}(\mathbb{G})}\otimes S)\Delta(a),~~a\in\mathrm{Pol}(\mathbb{G}). \end{equation*} Here $m$ denotes the multiplication map $m\colon\mathrm{Pol}(\mathbb{G})\otimes_{\text{alg}}\mathrm{Pol}(\mathbb{G})\to\mathrm{Pol}(\mathbb{G}),~~a\otimes b\mapsto ab$. Indeed, the antipode and the counit are uniquely determined by \begin{equation*} S(u^\alpha_{ij})=(u^{\alpha}_{ji})^*,~~1\leq i,j\leq n_\alpha,~~\alpha\in\mathrm{Irr}(\mathbb{G}), \end{equation*} \begin{equation*} \epsilon(u^\alpha_{ij})=\delta_{ij},~~1\leq i,j\leq n_\alpha,~~\alpha\in\mathrm{Irr}(\mathbb{G}). \end{equation*} \smallskip Since the Haar state $h$ is faithful on $\mathrm{Pol}(\mathbb{G})$, one may consider the corresponding GNS construction $(\pi_h,H_h,\xi_h)$ such that $h(x)=\langle \xi_h,\pi_h(x)\xi_h \rangle_{H_h}$ for all $x\in \mathrm{Pol}(\mathbb{G})$. The \emph{reduced $C^\ast$-algebra} $C_{r}(\mathbb{G})$ is the norm completion of $\pi_h(\mathrm{Pol}(\mathbb{G}))$ in $B(H_{h})$. Then the restriction of the comultiplication $\Delta$ to $\mathrm{Pol}(\mathbb{G})$ extends to a unital $^\ast$-homomorphism on $C_r(\mathbb{G})$, which we still denote by $\Delta$.
The pair $(C_r(\mathbb{G}),\Delta)$ forms a compact quantum group, and in the following we always consider this reduced version (instead of the \emph{universal} one, since the Haar state $h$ is always faithful on $C_{r}(\mathbb{G})$). Denote by $L^\infty(\mathbb{G})=C_r(\mathbb{G})''$ the von Neumann subalgebra of $B(H_h)$ generated by $C_r(\mathbb{G})$, and we can define the noncommutative $L^p$-spaces associated with $(L^\infty(\mathbb{G}),h)$. In particular, we identify $L^2(\mathbb{G})$ with $H_h$. We refer to \cite{MV98} and \cite{Wor98} for more details about compact quantum groups. \smallskip A compact quantum group $\mathbb{G}$ is of \emph{Kac type} if the Haar state is tracial. In the following $\mathbb{G}$ is always a compact quantum group of Kac type, which is the case for the later examples $O_N^+$ and $S_N^+$. Given a L\'evy process $(j_t)_{t\ge 0}$ \cite[Definition 2.4]{CFK14} on $\mathrm{Pol}(\mathbb{G})$, one can associate to it a semigroup $P_t=(\mathrm{id}\otimes \phi_t)\Delta$ on $C_r(\mathbb{G})$, where $\phi_t$ is the marginal distribution of $j_t$. This $(P_t)$ is a strongly continuous semigroup of unital completely positive maps on $C_r(\mathbb{G})$ that are symmetric with respect to the Haar state $h$ \cite[Theorem 3.2]{CFK14}. Then $(P_t)$ extends to an $h$-symmetric QMS on $L^\infty(\mathbb{G})$. The corresponding first-order differential calculus can be described in terms of a \emph{Schürmann triple} $((H,\pi),\eta,\varphi)$ \cite[Propositions 8.1, 8.2]{CFK14}. The tangent bimodule $\mathcal{H}$ is then a submodule of $L^2(\mathbb{G})\otimes H$ with the left and right action given by $L=(\lambda_L\otimes \pi)\Delta$ and $R=\lambda_R\otimes \mathrm{id}_H$, respectively.
Here $\lambda_L$ and $\lambda_R$ are the left and right action of $L^\infty(\mathbb{G})$ on $L^2(\mathbb{G})$: $$\lambda_L(a)(b\xi_h)=ab\xi_h,~~\lambda_R(a)(b\xi_h)=ba\xi_h.$$ The derivation \cite[Proposition 8.1]{CFK14} is given on $\mathrm{Pol}(\mathbb{G})$ by $\partial=(\iota_h\otimes\eta)\Delta$, where $\iota_h\colon L^\infty(\mathbb{G})\to L^2(\mathbb{G})$ is the natural embedding: $$\iota_h(a)=a\xi_h.$$ Note that the QMS $(P_t)$ is always \emph{right translation invariant}: $(\mathrm{id}\otimes P_t)\Delta=\Delta P_t$ for all $t\ge0$. In fact, any right translation invariant QMS must arise in this way \cite[Theorem 3.4]{CFK14}. Here we are interested in semigroups $(P_t)$ that are not only right translation invariant but also \emph{left translation invariant}, or \emph{translation bi-invariant}: for all $t\ge0$ \begin{equation}\label{eq:bi-invariance} (P_t\otimes \mathrm{id})\Delta=\Delta P_t=(\mathrm{id}\otimes P_t)\Delta. \end{equation} In this case, let $\vec P_t=P_t\otimes \mathrm{id}_H$, and we have \begin{equation*} \vec P_t \partial=(P_t\otimes \mathrm{id}_H)(\iota_h\otimes \eta)\Delta=(\iota_h\otimes \eta)(P_t\otimes \mathrm{id}_A) \Delta=(\iota_h\otimes\eta)\Delta P_t=\partial P_t. \end{equation*} It is not hard to check that $\vec P_t$ is $\mathcal{J}$-real. We will show that it also satisfies the condition (iii) from Theorem \ref{thm:intertwining} for $K=0$. For $\xi_1,\dots,\xi_n\in H$ and $x_1,\dots,x_n\in A$ we have \begin{align*} &\quad\left\langle (P_t\otimes \mathrm{id}_H)\sum_k x_k\otimes \xi_k,R(\rho)(P_t\otimes \mathrm{id})\sum_k x_k\otimes \xi_k\right\rangle\\ &=\sum_{k,l}\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast (P_t x_l)\rho), \end{align*} and \begin{equation*} \left\langle \sum_k x_k\otimes \xi_k,R(P_t \rho)\sum_k x_k\otimes \xi_k\right\rangle =\sum_{k,l}\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho). \end{equation*} Clearly, the matrix $[\langle \xi_k,\xi_l\rangle]_{k,l}$ is positive semi-definite. 
By the Kadison--Schwarz inequality, \begin{equation*} [(P_t x_k)^\ast (P_t x_l)]_{k,l}\leq [P_t (x_k^\ast x_l)]_{k,l}. \end{equation*} Thus also $[h((P_t x_k)^\ast (P_t x_l)\rho)]_{k,l}\leq [h(x_k^\ast x_l P_t \rho)]_{k,l}$. Since the Hadamard product of positive semi-definite matrices is positive semi-definite, it follows that \begin{equation*} [\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast(P_t x_l)\rho)]_{k,l}\leq [\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho)]_{k,l}. \end{equation*} Hence \begin{equation*} \sum_{k,l}\langle \xi_k,\xi_l\rangle h((P_t x_k)^\ast (P_t x_l)\rho) \le\sum_{k,l}\langle \xi_k,\xi_l\rangle h(x_k^\ast x_l P_t\rho), \end{equation*} which is the desired estimate. Thus $(P_t)$ satisfies $\mathrm{GE}(0,\infty)$. Applying the same argument to $(P_t\otimes \mathrm{id}_{\mathcal{N}})$, we get $\mathrm{CGE}(0,\infty)$. \smallskip If each $\phi_t$ is \emph{central}, that is, \begin{equation}\label{eq:central} (\phi_t\otimes \mathrm{id})\Delta=(\mathrm{id}\otimes \phi_t)\Delta, \end{equation} then the QMS $P_t=(\mathrm{id}\otimes \phi_t)\Delta$ is translation bi-invariant. Recall that the convolution of two functionals $\psi_1,\psi_2$ on $C(\mathbb{G})$ (or $C_r(\mathbb{G})$, $\mathrm{Pol}(\mathbb{G})$) is defined as $\psi_1\star \psi_2=(\psi_1\otimes \psi_2)\Delta$. A convolution semigroup of states is of the form $\phi_t=\epsilon+\sum_{n\ge 1}\frac{t^{n}}{n!}\psi^{\star n}$, where the \emph{generating functional} $\psi$ is hermitian, conditionally positive and vanishes on $1$ (see \cite[Section 2.5]{CFK14} for details). Hence, whenever the generating functional $\psi$ is central, the QMS $P_t=(\mathrm{id}\otimes \phi_t)\Delta=e^{tT_\psi}$, with $T_{\psi}=(\mathrm{id}\otimes \psi)\Delta$, is translation bi-invariant and thus satisfies $\mathrm{CGE}(0,\infty)$. For the geometric Ricci curvature condition this result was independently proven in \cite[Lemma 4.6]{BGJ20}.
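The Schur product step used above, namely that the Hadamard (entrywise) product of positive semi-definite matrices is again positive semi-definite, is easy to sanity-check numerically. The following NumPy sketch is purely illustrative and not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # B B^* is positive semi-definite for any complex matrix B.
    b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return b @ b.conj().T

n = 6
A = random_psd(n)
B = random_psd(n)

# Hadamard (entrywise) product of two PSD matrices is Hermitian ...
H = A * B

# ... and, by the Schur product theorem, PSD: all eigenvalues are >= 0
# up to floating-point rounding.
print(np.linalg.eigvalsh(H).min() >= -1e-10)
```

Running this for several random seeds always reports a nonnegative smallest eigenvalue, as the Schur product theorem predicts.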
\end{example} In the next few examples we collect some specific instances of translation bi-invariant QMS on quantum groups. Firstly we give some commutative examples. \begin{example}[Compact Lie groups] For any compact group $G$, $(C(G),\Delta)$ forms a compact quantum group, where $C(G)$ is the C*-algebra of all continuous functions on $G$ and the comultiplication $\Delta\colon C(G)\to C(G)\otimes C(G)\cong C(G\times G)$ is given by $\Delta f(s,t)=f(st)$. The Haar state $h$ is nothing but $\int \cdot\, d\mu$, with $\mu$ being the Haar (probability) measure. Consider a QMS $(P_t)$ on $C(G)$ of the form $P_t(f)(s)=\int_{G}f(r)K_t(r,s)d\mu(r)$. Then $(P_t)$ is translation bi-invariant if and only if the kernel $K_t$ is bi-invariant under $G$: $K_t(gr,gs)=K_t(r,s)=K_t(rg,sg)$ for all $g,r,s\in G$, or equivalently, $(P_t)$ is a convolution semigroup whose kernel $\tilde{K}_t(s)=K_t(e,s)$ is conjugation-invariant: $\tilde{K}_t(s)=\tilde{K}_t(gsg^{-1})$ for all $g,s\in G$. Let $G$ be a compact Lie group with a bi-invariant Riemannian metric $g$. If $(P_t)$ is the heat semigroup generated by the Laplace--Beltrami operator, then a direct computation shows that the bi-invariance of the metric implies the translation bi-invariance of $(P_t)$. Thus we recover the well-known fact from Riemannian geometry that the Ricci curvature of a compact Lie group with a bi-invariant metric is always nonnegative (see e.g. \cite[Section 7]{Mil76}). \end{example} Secondly, we give co-commutative examples. By saying co-commutative we mean $\Delta=\Pi\circ\Delta$, where $\Pi$ is the tensor flip, i.e., $\Pi(a\otimes b)=b\otimes a$. \begin{example}[Group von Neumann algebras]\label{ex:group_alg} Let $G$ be a countable discrete group with unit $e$, $C_r^\ast(G)$ the reduced $C^\ast$-algebra generated by the left regular representation $\lambda$ of $G$ on $\ell^2(G)$, and $L(G)=C_r^\ast(G)^{\prime\prime}\subset B(\ell^2(G))$ the group von Neumann algebra.
Then $\mathbb{G}=(C_r^\ast(G),\Delta)$ is a compact quantum group with comultiplication given by $\Delta(\lambda_g)=\lambda_g\otimes \lambda_g$. The Haar state on $\mathbb{G}$ is given by $\tau(x)=\langle x\delta_e,\delta_e\rangle$, which is tracial and faithful. Here and in what follows, $\delta_g$ always denotes the function on $G$ that takes value 1 at $g$ and vanishes elsewhere. A function $\psi\colon G\to [0,\infty)$ is a \emph{conditionally negative definite} (cnd) length function if $\psi(e)=0$, $\psi(g^{-1})=\psi(g)$ and \begin{equation*} \sum_{g,h\in G}\overline{f(g)}f(h)\psi(g^{-1}h)\leq 0 \end{equation*} for every finitely supported $f\colon G\to \mathbb{C}$ such that $\sum_{g\in G} f(g)=0$. By Schoenberg's Theorem (see for example \cite[Theorem D.11]{BO08}), to every cnd function one can associate a $\tau$-symmetric QMS on $L(G)$ given by \begin{equation*} P_t \lambda_g=e^{-t\psi(g)}\lambda_g. \end{equation*} It is easy to check that $(P_t)$ satisfies the translation bi-invariance condition (\ref{eq:bi-invariance}). Thus it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} Now we give some genuine quantum group examples. \begin{example}[Free orthogonal quantum group $O^+_N$ \cite{Wan95}]\label{ex:free orthogonal quantum group} Let $N\ge2$. The free orthogonal quantum group $O^+_N$ is the pair $(C_u(O^+_N),\Delta)$, where $C_u(O^+_N)$ is the universal C*-algebra generated by $N^2$ self-adjoint operators $u_{ij}$, $1\le i,j\le N$, such that $U=[u_{ij}]_{1\le i,j\le N}\in M_N(\mathbb{C})\otimes C_u(O^+_N)$ is unitary, that is, \begin{equation*} \sum_{k=1}^{N}u_{ik}u_{jk}=\delta_{ij}=\sum_{k=1}^{N}u_{ki}u_{kj},~~1\le i,j\le N, \end{equation*} and the comultiplication $\Delta$ is given by \begin{equation*} \Delta(u_{ij})=\sum_{k=1}^{N}u_{ik}\otimes u_{kj},~~1\le i,j\le N. \end{equation*} The equivalence classes of irreducible unitary representations of $O_N^+$ can be indexed by $\mathbb{N}$, with $u^{(0)}=1$ the trivial representation and $u^{(1)}=U$ the fundamental representation. By \cite[Corollary 10.3]{CFK14}, the central generating functionals $\psi$ on $\mathrm{Pol}(O^+_N)$ are given by \begin{equation*} \psi(u^{(s)}_{ij})=\frac{\delta_{ij}}{U_s(N)}\left(-bU_s'(N)+\int_{-N}^{N}\frac{U_s(x)-U_s(N)}{N-x}\nu(dx)\right), \end{equation*} for $s\in \mathrm{Irr}(O_N^+)=\mathbb{N}$, $1\le i,j\le n_s$, where $U_s$ denotes the $s$-th Chebyshev polynomial of the second kind, $b\ge 0$, and $\nu$ is a finite measure on $[-N,N]$ with $\nu(\{N\})=0$. Given such a pair $(b,\nu)$, the central functional $\psi$ defined as above induces a QMS $P_t^\psi=e^{t T_\psi}$ satisfying \eqref{eq:bi-invariance}, where $T_\psi=(\mathrm{id}\otimes \psi)\Delta$. Hence it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} \begin{example}[Free permutation quantum group $S^+_N$ \cite{Wan98}]\label{ex:free permutation quantum group} Let $N\ge2$. The free permutation quantum group $S^+_N$ is the pair $(C_u(S^+_N),\Delta)$, where $C_u(S^+_N)$ is the universal C*-algebra generated by $N^2$ self-adjoint operators $p_{ij}$, $1\le i,j\le N$, such that \begin{equation*} p_{ij}^2=p_{ij}=p_{ij}^*,~~\sum_{k=1}^{N}p_{ik}=1=\sum_{k=1}^{N}p_{kj},~~1\le i,j\le N, \end{equation*} and the comultiplication $\Delta$ is given by \begin{equation*} \Delta(p_{ij})=\sum_{k=1}^{N}p_{ik}\otimes p_{kj},~~1\le i,j\le N. \end{equation*} The equivalence classes of irreducible unitary representations of $S_N^+$ can be indexed by $\mathbb{N}$. By \cite[Theorem 10.10]{FKS16}, the central generating functionals $\psi$ on $\mathrm{Pol}(S^+_N)$ are given by \begin{equation*} \psi(u^{(s)}_{ij})=\frac{\delta_{ij}}{U_{2s}(\sqrt{N})}\left(-b\frac{U_{2s}'(\sqrt{N})}{2\sqrt{N}}+\int_{0}^{N}\frac{U_{2s}(\sqrt{x})-U_{2s}(\sqrt{N})}{N-x}\nu(dx)\right), \end{equation*} for $s\in \mathrm{Irr}(S_N^+)=\mathbb{N}$, $1\le i,j\le n_s$, where $U_s$ denotes the $s$-th Chebyshev polynomial of the second kind, $b> 0$, and $\nu$ is a finite measure on $[0,N]$. Similarly, given $(b,\nu)$, the central functional $\psi$ defined as above induces a QMS $P_t^\psi=e^{t T_\psi}$ satisfying \eqref{eq:bi-invariance}, where $T_\psi=(\mathrm{id}\otimes \psi)\Delta$. Hence it satisfies $\mathrm{CGE}(0,\infty)$. \end{example} \begin{remark} Although many interesting functional inequalities like the Poincaré inequality and the modified logarithmic Sobolev inequality only follow directly from $\mathrm{GE}(K,\infty)$ for $K>0$, the gradient estimate with constant $K\leq 0$ can still be helpful, in conjunction with additional assumptions, to prove such functional inequalities (see \cite{DR20,BGJ20}). \end{remark} \section{Stability under tensor products and free products}\label{sec:stability} In this section we prove that the complete gradient estimate $\mathrm{CGE}(K,\infty)$ is stable under taking tensor products and free products of quantum Markov semigroups. We refer to \cite{VDN92} and \cite{BD01} for more information on free products of von Neumann algebras and to \cite{Boc91} for free products of completely positive maps. \begin{theorem}\label{thm:tensor_product} Let $(\mathcal{M}_j,\tau_j)$, $j\in \{1,\dots,n\}$, be tracial von Neumann algebras and $(P_t^j)$ a $\tau_j$-symmetric $\Gamma$-regular QMS on $\mathcal{M}_j$. If for every $j\in\{1,\dots,n\}$ the QMS $(P_t^j)$ satisfies $\mathrm{CGE}(K,\infty)$, then $\bigotimes_j P_t^j$ satisfies $\mathrm{CGE}(K,\infty)$.
\end{theorem} \begin{proof} Let $\mathcal{H}_j$ and $\partial_j$ denote the tangent bimodule and derivation for $(P_t^j)$, and let \begin{align*} \bar \mathcal{H}_j&=\bigotimes_{k=1}^{j-1}L^2(\mathcal{M}_k,\tau_k)\otimes \mathcal{H}_j \otimes \bigotimes_{k=j+1}^n L^2(\mathcal{M}_k,\tau_k),\\ \bar\partial_j&=\bigotimes_{k=1}^{j-1}\mathrm{id}_{\mathcal{M}_k}\otimes \partial_j \otimes \bigotimes_{k=j+1}^n \mathrm{id}_{\mathcal{M}_k}. \end{align*} The tangent bimodule $\mathcal{H}$ for $P_t=\bigotimes_j P_t^j$ is a submodule of $\bigoplus_j \bar \mathcal{H}_j$ with the natural left and right action and the derivation $\partial=(\bar\partial_1,\dots,\bar\partial_n)$. For $j\in\{1,\dots,n\}$, put \begin{equation*} \tilde P_t^j=\bigotimes_{k=1}^{j-1}P_t^k\otimes \mathrm{id}_{\mathcal{M}_j}\otimes\bigotimes_{k=j+1}^n P_t^k \end{equation*} and \begin{equation*} \bar P_t^j=\bigotimes_{k=1}^{j-1}\mathrm{id}_{\mathcal{M}_k} \otimes P_t^j\otimes\bigotimes_{k=j+1}^n \mathrm{id}_{\mathcal{M}_k} \end{equation*} on $\bigotimes_k \mathcal{M}_k$, so that $P_t=\bar P_t^j \tilde P_t^j=\tilde P_t^j \bar P_t^j$. Then \begin{align*} \norm{\partial P_t a}_\rho^2&=\sum_{j=1}^n \norm{\bar \partial_j P_t a}_\rho^2\\ &=\sum_{j=1}^n \norm{\bar \partial_j \bar P_t^j \tilde P_t^j a}_\rho^2\\ &\leq \sum_{j=1}^n e^{-2K t}\norm{\bar \partial_j \tilde P_t^j a}_{\bar P_t^j\rho}^2 \end{align*} by $\mathrm{CGE}(K,\infty)$ for $(P_t^j)$. Let \begin{equation*} Q_t^j=\bigotimes_{k=1}^{j-1}P_t^k \otimes \mathrm{id}_{\mathcal{H}_j}\otimes \bigotimes_{k=j+1}^n P_t^k \end{equation*} on $\bar \mathcal{H}_j$. Then $\bar \partial_j \tilde P_t^j =Q_t^j \bar\partial_j$, and conditions (ii), (iii) in Theorem \ref{thm:intertwining} follow from the Kadison--Schwarz inequality (compare with Example \ref{ex:quantum groups}). Taking into account Remark \ref{rmk:diff_calc_ind}, we get \begin{align*} \norm{\bar\partial_j \tilde P_t^j a}_\rho^2\leq \norm{\bar\partial_j a}_{\tilde P_t^j \rho}^2.
\end{align*} Together with the previous estimate, we obtain \begin{equation*} \norm{\partial P_t a}_\rho^2 \leq \sum_{j=1}^n e^{-2K t}\norm{\bar \partial_j \tilde P_t^j a}_{\bar P_t^j\rho}^2 \leq\sum_{j=1}^n e^{-2Kt}\norm{\bar\partial_j a}_{P_t \rho}^2 = e^{-2Kt}\norm{\partial a}_{P_t \rho}^2. \end{equation*} So $(P_t)$ satisfies $\mathrm{GE}(K,\infty)$. The same argument can be applied to $(P_t\otimes \mathrm{id}_\mathcal{N})$, so that we obtain $\mathrm{CGE}(K,\infty)$. \end{proof} \begin{theorem}\label{thm:free_product} For $j\in\{1,\dots,n\}$ let $(\mathcal{M}_j,\tau_j)$ be a tracial von Neumann algebra and $(P_t^j)$ a tracially symmetric $\Gamma$-regular QMS on $\mathcal{M}_j$. If for every $j\in\{1,\dots,n\}$ the QMS $(P_t^j)$ satisfies $\mathrm{CGE}(K,\infty)$, then $\ast_j P_t^j$ satisfies $\mathrm{CGE}(K,\infty)$. \end{theorem} \begin{proof} Let $\mathcal{M}=\ast_j \mathcal{M}_j$, $\tau=\ast_j \tau_j$ and $P_t=\ast_j P_t^j$. Recall that $L^2(\mathcal{M},\tau)$ is canonically identified with \begin{align*} \ast_j L^2(\mathcal{M}_j,\tau_j)=\mathbb{C} 1\oplus \bigoplus_{n\geq 1}\bigoplus_{j_1\neq\dots\neq j_n}\bigotimes_{l=1}^n L^2_0(\mathcal{M}_{j_l},\tau_{j_l}), \end{align*} where $L^2_0$ denotes the orthogonal complement of $\mathbb{C} 1$ in $L^2$. Then $\mathcal{H}$ can be identified with a submodule of \begin{equation*} \bigoplus_{n\geq 1}\bigoplus_{j_1\neq\dots\neq j_n}\bigoplus_{k=1}^n\left(\bigotimes_{l=1}^{k-1} L^2(\mathcal{M}_{j_l},\tau_{j_l})\otimes \mathcal{H}_{j_k}\otimes \bigotimes_{l=k+1}^n L^2(\mathcal{M}_{j_l},\tau_{j_l})\right) \end{equation*} with the natural left and right action on each direct summand, where $\partial$ acts as $0$ on $\mathbb{C} 1$ and as \begin{align*} \partial(a_1\otimes\dots\otimes a_n)=(\partial_{j_1}(a_1)\otimes a_2\otimes\dots \otimes a_n,\dots,a_1\otimes a_2\otimes \dots\otimes \partial_{j_n}(a_n)) \end{align*} on the direct summand $\bigotimes_{l=1}^n L^2_0(\mathcal{M}_{j_l},\tau_{j_l})$.
Since $\partial$ and $(P_t)$ restrict nicely to the direct summands of $L^2(\mathcal{M},\tau)$, the rest of the proof is similar to that of Theorem \ref{thm:tensor_product}. \end{proof} \begin{remark} The same argument applies to free products with amalgamation if the common subalgebra over which one amalgamates is contained in the fixed-point algebra of $(P_t^j)$ for all $j\in\{1,\dots,n\}$ (compare with the results from \cite[Section 6.2]{JZ15a} for the $\Gamma_2$ condition). \end{remark} \section{Quantum Markov semigroups generated by commuting projections}\label{sec:com_proj} In this section we move beyond applications of the intertwining result Theorem \ref{thm:intertwining} and obtain complete gradient estimates for quantum Markov semigroups whose generators take special Lindblad forms. \begin{theorem}\label{thm:com_proj} Let $p_1,\dots,p_n\in \mathcal{M}$ be commuting projections. The QMS $(P_t)$ generated by \begin{align*} \mathscr{L}\colon \mathcal{M}\to\mathcal{M},\,\mathscr{L} x=\sum_{j=1}^n p_j x+x p_j-2p_j x p_j \end{align*} is $\Gamma$-regular and satisfies $\mathrm{CGE}(1,\infty)$. \end{theorem} \begin{proof} For $1\leq j\leq n$ consider the operator $\mathscr{L}_j\colon\mathcal{M}\to \mathcal{M}$ defined by \begin{equation*} \mathscr{L}_j x=p_j x+xp_j-2p_j xp_j=x-p_j xp_j-(1-p_j)x(1-p_j). \end{equation*} In particular, $\mathscr{L}_j$ is of the form $\mathscr{L}_j=I-\Phi_j$ with $I=\mathrm{id}_{\mathcal{M}}$ and the conditional expectation $\Phi_j(x)=p_j x p_j+(1-p_j)x(1-p_j)$. Thus the QMS $(P_t^j)$ generated by $\mathscr{L}_j$ is given by \begin{align*} P_t^j x=x+(e^{-t}-1)\mathscr{L}_j x=e^{-t}x+(1-e^{-t})\Phi_j(x).
\end{align*} A first-order differential calculus for $(P_t)$ is given by $\mathcal{H}=\bigoplus_{j=1}^n L^2(\mathcal{M},\tau)$ as bimodules, $L=(L_j)_j$, $R=(R_j)_j$ with $L_j$ and $R_j$ being the usual left and right multiplications of $\mathcal{M}$ on $L^2(\mathcal{M},\tau)$ respectively, and $\partial=(\partial_j)_j$, where $\partial_j x=[p_j,x]$. Thus $(P_t)$ is $\Gamma$-regular. Moreover, $\partial_j P_t^j x=e^{-t}\partial_j x$ and consequently \begin{equation}\label{eq:partial_P_t} \norm{\partial_j P_t^j x}_\rho^2=e^{-2t}\norm{\partial_j x}_\rho^2. \end{equation} On the other hand, by the concavity of operator means \cite[Theorem 3.5]{KA80} we have \begin{equation}\label{eq:conc_decom_QMS} \widehat{P_t^j \rho}\geq e^{-t}\hat \rho+(1-e^{-t})\widehat{ \Phi_j(\rho)}. \end{equation} Since \begin{align*} &\quad\;\mathscr{L}_j ((\partial_j x)^\ast (\partial_j x))\\ &=p_j x^\ast x p_j+p_j x^\ast p_j x-p_j x^\ast p_j x-p_j x^\ast p_j x p_j\\ &\quad+p_j x^\ast x p_j +x^\ast p_j x p_j-p_j x^\ast p_j x p_j-x^\ast p_j x p_j\\ &\quad -2 p_j x^\ast x p_j-2 p_j x^\ast p_j x p_j+2p_j x^\ast p_j x p_j+2 p_j x^\ast p_j x p_j\\ &=0, \end{align*} we have $$\Phi_j((\partial_j x)^\ast (\partial_j x)) =(I-\mathscr{L}_j)\left((\partial_j x)^\ast (\partial_j x)\right) =(\partial_j x)^\ast (\partial_j x).$$ Recall that $L_j$ and $R_j$ are respectively the usual left and right multiplications of $\mathcal{M}$ on $L^2(\mathcal{M},\tau)$, and denote by $E_j$ the projection onto $\overline{\operatorname{ran} \partial_j}$ in $L^2(\mathcal{M},\tau)$. It follows that \begin{align*} \langle R_j(\Phi_j(\rho))(\partial_j x),\partial_j x\rangle_2 &=\tau(\Phi_j(\rho)(\partial_j x)^\ast (\partial_j x))\\ &=\tau(\rho \Phi_j((\partial_j x)^\ast (\partial_j x)))\\ &=\tau(\rho (\partial_j x)^\ast (\partial_j x))\\ &=\langle R_j(\rho)\partial_j x,\partial_j x\rangle_2. \end{align*} Hence $E_j R_j(\Phi_j(\rho)) E_j=E_j R_j(\rho)E_j$. The analogous identity for the left multiplication follows similarly.
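The algebraic identity $\mathscr{L}_j((\partial_j x)^\ast(\partial_j x))=0$ established above can also be verified numerically for random matrices and a random orthogonal projection. The following NumPy sketch is illustrative only and not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random rank-2 orthogonal projection p = q q^T with orthonormal columns q.
q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
p = q @ q.T

x = rng.standard_normal((n, n))
d = p @ x - x @ p                  # the derivation: d = [p, x]
y = d.T @ d                        # (d_j x)^* (d_j x), real case: adjoint = transpose

# Lindblad generator L_j applied to y: p y + y p - 2 p y p.
L = p @ y + y @ p - 2 * p @ y @ p

# The identity from the proof: L_j((d_j x)^*(d_j x)) = 0.
print(np.allclose(L, 0))
```

Repeating the check for other seeds and projection ranks gives the same vanishing result, in agreement with the fixed-point property of $\Phi_j$.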
Note that both the left and right multiplication by $\Phi_j(x)=p_j x p_j+(1-p_j)x(1-p_j)$ leave $\overline{\operatorname{ran} \partial_j}$ invariant. In fact, for any $x,y\in \mathcal{M}$ one has \begin{align*} \Phi_j(x)\partial_j(y) &=p_j(p_jx p_j y)- (p_jx p_j y )p_j\\ &\quad\;+p_j((1-p_j)x(1-p_j)y)-((1-p_j)x(1-p_j)y)p_j\\ &=\partial_j(p_j x p_j y)+\partial_j ((1-p_j)x(1-p_j)y), \end{align*} and a similar equation holds for the right multiplication. Therefore we have \begin{align*} E_j L_j(\Phi_j(\rho))E_j&\le L_j(\Phi_j(\rho)),\\ E_j R_j(\Phi_j(\rho))E_j&\le R_j(\Phi_j(\rho)). \end{align*} This, together with the conditions (a) and (b) in the definition of operator means, implies \begin{align*} E_j\widehat{\Phi_j(\rho)} E_j &\ge E_j\Lambda(E_j L_j(\Phi_j(\rho))E_j,E_j R_j(\Phi_j(\rho)) E_j)E_j\\ &=E_j\Lambda(E_j L_j(\rho)E_j,E_j R_j(\rho)E_j)E_j\\ &\geq E_j \hat \rho E_j. \end{align*} In other words, \begin{equation*} \langle \widehat{\Phi_j(\rho)} \partial_j x,\partial_j x\rangle_2\geq \langle \hat \rho \partial_j x,\partial_j x\rangle_2. \end{equation*} Together with (\ref{eq:conc_decom_QMS}) we conclude \begin{equation*} \norm{\partial_j x}_{P_t^j \rho}^2\geq e^{-t}\norm{\partial_j x}_\rho^2+(1-e^{-t})\norm{\partial_j x}_\rho^2=\norm{\partial_j x}_\rho^2. \end{equation*} In view of \eqref{eq:partial_P_t}, we have proved \begin{equation}\label{eq:estimate_P_t^j} \norm{\partial_j P_t^j x}_\rho^2\le e^{-2t}\norm{\partial_j x}_{P_t^j \rho}^2. \end{equation} Now let us come back to our original semigroup $(P_t)$. Let \begin{equation*} Q_t^j=\prod_{k\neq j} P_t^k. \end{equation*} Since the $p_j$'s commute, so do the generators $\mathscr{L}_j$'s and the semigroups $P_t^j$'s. This means that the order in the definition of $Q_t^j$ does not matter and $P_t =P_t^j Q_t^j$ for all $j\in\{1,\dots,n\}$. From the intertwining technique and Remark \ref{rmk:diff_calc_ind} we deduce \begin{equation*} \norm{\partial_j Q_t^j x}_\rho^2\leq \norm{\partial_j x}_{Q_t^j \rho}^2. 
\end{equation*} Combined with the estimate \eqref{eq:estimate_P_t^j} for $(P_t^j)$, we obtain \begin{equation*} \norm{\partial P_t x}_\rho^2=\sum_{j=1}^n \norm{\partial_j P_t^j Q_t^j x}_\rho^2\leq e^{-2t}\sum_{j=1}^n \norm{\partial_j Q_t^j x}_{P_t ^j \rho}^2\leq e^{-2t} \norm{\partial x}_{P_t \rho}^2. \end{equation*} So $(P_t)$ satisfies $\mathrm{GE}(1,\infty)$. To prove $\mathrm{CGE}(1,\infty)$, it suffices to note that the generator of $(P_t\otimes \mathrm{id}_\mathcal{N})$ is given by \begin{equation*} (\mathscr{L}\otimes \mathrm{id}_\mathcal{N})x=\sum_{j=1}^n (p_j\otimes 1) x+x(p_j\otimes 1)-2(p_j\otimes 1) x(p_j\otimes 1) \end{equation*} and the elements $p_j\otimes 1$ are again commuting projections. \end{proof} \begin{remark} Since $\mathscr{L}_j^2=\mathscr{L}_j$, the spectrum of $\mathscr{L}_j$ is contained in $\{0,1\}$, with equality unless $\mathscr{L}_j=0$. Thus the gradient estimate for the individual semigroups $(P_t^j)$ is optimal (unless $\mathscr{L}_j=0$). It should also be noted that it is better than the gradient estimate one would get from Example \ref{ex:cond_exp}. \end{remark} \begin{remark} Inspection of the proof shows that the same result holds if the generator of $(P_t)$ is of the form $\mathscr{L} x=\frac 1 2\sum_{j=1}^n (x-u_j xu_j)$ with commuting self-adjoint unitaries $u_j$. \end{remark} \begin{example} Let $X=\{0,1\}^n$ and let $\epsilon_j\colon X\to X$ be the map that swaps the $j$-th coordinate and leaves the other coordinates fixed. Let $v_j=\sum_x \ket{\epsilon_j(x)}\bra{x}\in B(\ell^2(X))$. By the previous remark, the QMS on $B(\ell^2(X))$ with generator \begin{align*} \mathscr{L} \colon B(\ell^2(X))\to B(\ell^2(X)),\,\mathscr{L} A=\frac 1 2\sum_{j=1}^n (A-v_j A v_j) \end{align*} satisfies $\mathrm{CGE}(1,\infty)$.
The restriction of this semigroup to the diagonal algebra is (up to a rescaling of the time parameter, depending on the normalization) the Markov semigroup associated with the simple random walk on the discrete hypercube (see \cite[Example 5.7]{EM12}). \end{example} To apply the theorem above to group von Neumann algebras, we will use the following Lindblad form for QMS generated by cnd length functions. Recall that for a countable discrete group $G$, a $1$-cocycle is a triple $(H,\pi,b)$, where $H$ is a real Hilbert space, $\pi\colon G\to O(H)$ is an orthogonal representation, and $b\colon G\to H$ satisfies the cocycle law $b(gh)=b(g)+\pi(g)b(h)$ for $g,h\in G$. To any cnd function $\psi$ on a countable discrete group $G$ one can associate a $1$-cocycle $(H,\pi,b)$ such that $\psi(gh^{-1})=\|b(g)-b(h)\|^2$ for $g,h\in G$. See \cite[Appendix D]{BO08} for more information. \begin{lemma} Let $G$ be a countable discrete group and $\psi\colon G\to [0,\infty)$ a cnd length function. Then $\mathscr{L}\colon\lambda_g\mapsto \psi(g)\lambda_g$ generates a QMS on the group von Neumann algebra of $G$. Assume that the associated $1$-cocycle $b\colon G\to H$ takes values in a finite-dimensional real Hilbert space $H$ with an orthonormal basis $(e_1,\dots,e_n)$. Then the generator $\mathscr{L}$ is of the form \begin{align*} \mathscr{L} x=\sum_{j=1}^n v_j^2 x+x v_j^2 -2v_j x v_j, \end{align*} where $v_j$ is the linear operator on $\ell^2(G)$ given by $v_j \delta_g=\langle b(g),e_j\rangle \delta_g$.
\end{lemma} \begin{proof} By definition we have \begin{align*} v_j^2 \lambda_g(\delta_h)&=v_j^2 (\delta_{gh})=\langle b(gh),e_j\rangle v_j(\delta_{gh})=\langle b(gh),e_j\rangle^2\delta_{gh},\\ \lambda_g v_j^2(\delta_h)&=\langle b(h),e_j\rangle\lambda_g v_j(\delta_h)=\langle b(h),e_j\rangle^2\lambda_g (\delta_h)=\langle b(h),e_j\rangle^2\delta_{gh},\\ v_j\lambda_g v_j(\delta_h)&=\langle b(h),e_j\rangle v_j\lambda_g(\delta_h)=\langle b(h),e_j\rangle v_j(\delta_{gh})=\langle b(h),e_j\rangle \langle b(gh),e_j\rangle\delta_{gh}. \end{align*} Thus \begin{equation*} \begin{split} &\sum_{j}\left(v_j^2\lambda_g+\lambda_g v_j^2-2v_j\lambda_g v_j\right)(\delta_h)\\ =&\sum_{j}\left(\langle b(gh),e_j\rangle^2+\langle b(h),e_j\rangle^2-2\langle b(h),e_j\rangle \langle b(gh),e_j\rangle\right)\delta_{gh}\\ =&\sum_{j}\langle b(gh)-b(h),e_j\rangle^2 \delta_{gh}\\ =&\|b(gh)-b(h)\|^2\delta_{gh}. \end{split} \end{equation*} This is nothing but $\mathscr{L}(\lambda_g)(\delta_h)=\psi(g)\lambda_g(\delta_h)=\psi(g)\delta_{gh}$. \end{proof} \begin{remark}\label{rmk:extension_group_vna} The elements $v_j$ are not contained in the group von Neumann algebra $L(G)$, so Theorem \ref{thm:com_proj} is not directly applicable (even if the $v_j$ are projections). However, if $G$ is finite, then the operator \begin{equation*} \mathscr{L}\colon B(\ell^2(G))\to B(\ell^2(G)),\,\mathscr{L} x=\sum_{j=1}^n v_j^2 x+x v_j^2 -2v_j x v_j, \end{equation*} generates a tracially symmetric QMS on $B(\ell^2(G))$ and we can apply Theorem \ref{thm:com_proj} to that semigroup instead. It is an interesting open question how to treat infinite groups for which the generator has such a Lindblad form. \end{remark} \begin{example} The cyclic group $\mathbb{Z}_n=\{0,1,\dots,n-1\}$; see \cite[Example 5.9]{JZ15a}: Its group (von Neumann) algebra is spanned by $\lambda_k$, $0\le k\le n-1$. One can embed $\mathbb{Z}_n$ into $\mathbb{Z}_{2n}$, so let us assume that $n$ is even.
The word length of $k\in\mathbb{Z}_n$ is given by $\psi(k)=\min\{k,n-k\}$. Define $b\colon\mathbb{Z}_n\to \mathbb{R}^{\frac{n}{2}}$ via\begin{equation*} b(k)=\begin{cases} 0,&k=0,\\ \sum_{j=1}^{k}e_j,&1\le k\le \frac{n}{2},\\ \sum_{j=k-\frac{n}{2}+1}^{\frac{n}{2}}e_j,&\frac{n}{2}+1\le k\le n-1, \end{cases} \end{equation*} where $(e_j)_{1\le j\le \frac{n}{2}}$ is an orthonormal basis of $\mathbb{R}^{\frac{n}{2}}$. Then the linear operators $v_j\colon\ell^2(\mathbb{Z}_n)\to \ell^2(\mathbb{Z}_n)$ given by \begin{equation*} v_j(\delta_k)=\langle b(k),e_j\rangle \delta_k,~~1\le j\le \frac{n}{2} \end{equation*} are commuting projections. Thus the QMS associated with $\psi(g)=\norm{b(g)}^2$ satisfies $\mathrm{CGE}(1,\infty)$. \end{example} \begin{example}\label{ex:symmetric_group} The symmetric group $S_n$: Let $\psi$ be the length function induced by the (non-normalized) Hamming metric, that is, $\psi(\sigma)=\#\{j : \sigma(j)\neq j\}$. Let $A_\sigma\in M_n(\mathbb{R})$ be the permutation matrix associated with $\sigma$, i.e., $A_\sigma \delta_j =\delta_{\sigma(j)}$. Then the associated $1$-cocycle is given by $H=L^2(M_n(\mathbb{R}),\frac 1 2 \mathrm{tr})$, $b(\sigma)=A_\sigma-1$, $\pi(\sigma)=A_\sigma$. The matrices $E_{jk}=\sqrt{2}\ket{j}\bra{k}$ for $j\neq k$ and $E_{jj}=-\sqrt{2}\ket{j}\bra{j}$ form an orthonormal basis of $H$. Define $v_{jk}\in B(\ell^2(S_n))$ by $v_{jk}\delta_\sigma=\sqrt{2}\langle b(\sigma),E_{jk}\rangle \delta_{\sigma}$. Then $v_{jk}$ is a projection. Moreover, \begin{align*} \mathscr{L} x=\frac 1 2\sum_{j,k}v_{jk}^2x+x v_{jk}^2-2v_{jk}x v_{jk}. \end{align*} Thus the associated QMS satisfies $\mathrm{CGE}(1/2,\infty)$. \end{example} To extend the last example to the infinite symmetric group $S_\infty$, we need the following approximation result. \begin{lemma} Let $(\mathcal{M}_n)$ be an ascending sequence of von Neumann subalgebras such that $\bigcup_n \mathcal{M}_n$ is $\sigma$-weakly dense in $\mathcal{M}$. 
Further let $(P_t)$ be a $\Gamma$-regular QMS on $\mathcal{M}$ and assume that $P_t(\mathcal{M}_n)\subset \mathcal{M}_n$ for all $n$. Let $(P_t^n)$ denote the restriction of $(P_t)$ to $\mathcal{M}_n$. If $(P_t^n)$ satisfies $\mathrm{GE}(K,\infty)$ for all $n\in\mathbb{N}$, then $(P_t)$ also satisfies $\mathrm{GE}(K,\infty)$. The same is true for $\mathrm{CGE}$. \end{lemma} \begin{proof} It is not hard to see that $\bigcup_n \mathcal{M}_n$ is dense in $L^2(\mathcal{M},\tau)$. Since $P_t (\mathcal{M}_n)\subset \mathcal{M}_n$ and $P_t$ maps into the domain of its $L^2$ generator $\mathscr{L}_2$, the space $V=D(\mathscr{L}_2^{1/2})\cap \left(\bigcup_n \mathcal{M}_n\right)$ is also dense in $L^2(\mathcal{M},\tau)$ and invariant under $(P_t)$. By a standard result in semigroup theory, this implies that $V$ is a form core for $\mathscr{L}_2$. Thus it suffices to prove \begin{equation*} \norm{\partial P_t a}_\rho^2\leq e^{-2Kt}\norm{\partial a}_{P_t \rho}^2 \end{equation*} for $a\in V$ and $\rho\in \mathcal{M}_+$. Moreover, by Kaplansky's density theorem and the strong continuity of the functional calculus, it is enough to check this for $\rho \in (\bigcup_n \mathcal{M}_n)_+$. But for $a\in D(\mathscr{L}^{1/2}_2)\cap \mathcal{M}_n$ and $\rho\in(\mathcal{M}_n)_+$, this is simply the gradient estimate for $(P_t^n)$, which holds by assumption. The argument for $\mathrm{CGE}$ is similar. \end{proof} \begin{corollary} If $G$ is the ascending union of subgroups $G_n$ and $\psi$ is a cnd length function on $G$ such that for every $n$ the QMS associated with $\psi|_{G_n}$ satisfies $\mathrm{GE}(K,\infty)$, then the QMS associated with $\psi$ satisfies $\mathrm{GE}(K,\infty)$. The same is true for $\mathrm{CGE}$. \end{corollary} \begin{example}[Infinite symmetric group] Let $S_\infty$ be the group of permutations of $\mathbb N$ that keep all but finitely many elements fixed.
The QMS associated with the length function induced by the non-normalized Hamming metric on $S_\infty$ satisfies $\mathrm{CGE}(\frac 1 2,\infty)$. \end{example} Recall that for a countable discrete group $G$, a \emph{F\o lner sequence} is a sequence $\{F_n\}_{n\ge 1}$ of nonempty finite subsets of $G$ such that $$\lim_{n\to\infty}\frac{|gF_n\Delta F_n|}{|F_n|}=0$$ for every $g\in G$, where $gF=\{gh:h\in F\}$ and $A\Delta B=[A\setminus (A\cap B)]\cup [B\setminus (A\cap B)]$. The group $G$ is called \emph{amenable} if it admits a F\o lner sequence. We refer to \cite[Chapter 2.6]{BO08} for more equivalent definitions and basic properties of amenable groups. \begin{proposition} Let $G$ be an amenable group and $\psi\colon G\to[0,\infty)$ a cnd function with associated $1$-cocycle $(H,\pi,b)$. If there exists an orthonormal basis $(e_j)_{j\in J}$ of $H$ such that $\langle b(g),e_j\rangle\in \{0,1\}$ for all $g\in G$, $j\in J$, then the QMS $(P_t)$ associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$. \end{proposition} \begin{proof} To ease notation, we will only deal with $\mathrm{GE}(1,\infty)$; the proof of the complete gradient estimate is similar, embedding $L(G)\otimes \mathcal{N}$ into a suitable ultraproduct. Let $(F_n)$ be a F\o lner sequence for $G$ and $\omega\in \beta\mathbb{N}\setminus\mathbb{N}$. Endow $B(\ell^2(F_n))$ with the normalized trace $\tau_n$ and let $p_n$ denote the projection from $\ell^2(G)$ onto $\ell^2(F_n)$. Then we have a trace-preserving embedding \begin{equation*} L(G)\to \prod_\omega B(\ell^2(F_n)),\,x\mapsto (p_n x p_n)^\bullet. \end{equation*} For each $j$, let $v_j$ be the linear operator on $\ell^2(G)$ given by $v_j (\delta_g) =\langle b(g),e_j\rangle \delta_g$, and denote its restriction to $\ell^2(F_n)$ by the same symbol. Note that for every fixed $n\in\mathbb{N}$, there are only finitely many indices $j\in J$ such that $v_j$ is non-zero on $\ell^2(F_n)$.
Let \begin{equation*} \mathcal{H}_n=\bigoplus_{j\in J}L^2(B(\ell^2(F_n)),\tau_n) \end{equation*} and \begin{equation*} \partial_n\colon B(\ell^2(F_n))\to \mathcal{H}_n,\,a\mapsto ([v_j,a])_j. \end{equation*} For $x=\sum_g x_g\lambda_g$ with $\sum_g \psi(g)\abs{x_g}^2<\infty$, we have \begin{align*} v_j p_n x p_n (\delta_g) &=1_{F_n}(g)v_j p_n x (\delta_g)\\ &=\sum_{h\in G}1_{F_n}(g)x_h v_j p_n (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h v_j (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(hg),e_j\rangle \delta_{hg}, \end{align*} and \begin{align*} p_n x p_n v_j (\delta_g) &=\langle b(g),e_j\rangle p_n x p_n(\delta_g)\\ &= 1_{F_n}(g) \langle b(g),e_j\rangle p_n x(\delta_{g})\\ &=\sum_{h\in G}1_{F_n}(g)x_h \langle b(g),e_j\rangle p_n (\delta_{hg})\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(g),e_j\rangle \delta_{hg}. \end{align*} Hence \begin{align*} [v_j,p_n x p_n](\delta_g)&=(v_j p_n x p_n-p_n x p_n v_j) (\delta_g)\\ &=\sum_{h\in G}1_{F_n}(g)1_{F_n}(hg)x_h \langle b(hg)- b(g),e_j\rangle \delta_{hg}, \end{align*} and we get \begin{align*} &\quad\,\norm{\partial_n (p_n x p_n)}_{\mathcal{H}_n}^2\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\langle [v_j,p_n x p_n]\delta_g,[v_j,p_n x p_n]\delta_g\rangle\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\sum_{h,h'\in G}\bigg(\overline{x_h} x_{h'}\overline{\langle b(hg)-b(g),e_j\rangle} \langle b(h'g)-b(g),e_j\rangle\\ &\qquad\qquad1_{F_n}(hg)1_{F_n}(h'g)\langle \delta_{h g},\delta_{h'g}\rangle\bigg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{j\in J}\sum_{h\in G}|x_h|^2\abs{\langle b(hg)-b(g),e_j\rangle}^2 1_{F_n}(hg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{h\in G}|x_h|^2\norm{b(hg)-b(g)}^2 1_{F_n}(hg)\\ &=\frac 1{\abs{F_n}}\sum_{g\in F_n}\sum_{h\in G}\psi(h)|x_h|^2 1_{F_n}(hg)\\ &=\sum_{h\in G}\psi(h)\abs{x_h}^2 \frac {\abs{h^{-1}F_n\cap F_n}}{\abs{F_n}}, \end{align*} which converges to $$\sum_{h\in G}\psi(h)\abs{x_h}^2=\norm{\partial x}^2_{\mathcal{H}}$$ as $n\to\omega$ by the F\o
lner property of $(F_n)$, after an application of the dominated convergence theorem. Thus the tangent bimodule $\mathcal{H}$ for $(P_t)$ can be viewed as a submodule of $\prod_\omega \mathcal{H}_n$ with the obvious left and right action and $\partial=(\partial_n)^\bullet$. Let $(P_t^n)$ be the QMS on $B(\ell^2(F_n))$ generated by $\partial_n^\dagger\partial_n$. Since $\langle b(\cdot),e_j\rangle$ takes values in $\{0,1\}$, the operators $v_j$ are projections, and clearly they all commute. Hence by Theorem \ref{thm:com_proj} and Remark \ref{rmk:extension_group_vna}, $(P_t^n)$ satisfies $\mathrm{GE}(1,\infty)$. From the ultraproduct structure of $\mathcal{H}$ and $\partial$ we deduce \begin{align*} \norm{\partial P_t(x_n)^\bullet}_{(\rho_n)^\bullet}^2&=\lim_{n\to\omega}\norm{\partial_n P_t^n x_n}_{\rho_n}^2\\ &\leq \lim_{n\to\omega} e^{-2t}\norm{\partial_n x_n}_{P_t^n \rho_n}^2\\ &=e^{-2t}\norm{\partial (x_n)^\bullet}_{P_t (\rho_n)^\bullet}^2 \end{align*} for $(x_n)^\bullet\in L(G)$ and $(\rho_n)^\bullet\in L(G)_+$. In other words, $(P_t)$ satisfies $\mathrm{GE}(1,\infty)$. \end{proof} \begin{remark} The group von Neumann algebra embeds into an ultraproduct of matrix algebras if and only if the underlying group is hyperlinear, so it might be possible to extend the previous proposition beyond amenable groups. \end{remark} \begin{example}[Amenable groups acting on trees] Let $\mathcal{T}$ be a tree (viewed as an unoriented graph) and $G$ an amenable subgroup of $\mathrm{Aut}(\mathcal{T})$. For fixed $x_0\in \mathcal{T}$ define the length function $\psi$ on $G$ by $\psi(g)=d(x_0,gx_0)$, where $d$ is the combinatorial graph distance. As in the case of free groups, one sees that $\psi$ is conditionally negative definite and the associated $1$-cocycle can be described as follows (see \cite[Example C.2.2]{BHV08}): Let $E=\{(x,y)\mid x\sim y\}$ be the set of oriented edges of $\mathcal{T}$, and for $e=(x,y)\in E$ write $\bar e=(y,x)$.
Let $H=\{\xi\in \ell^2(E)\mid \xi(\bar e)=-\xi(e)\}$ with inner product
\begin{equation*} \langle \xi,\eta\rangle=\frac 1 2\sum_{e\in E}\xi(e)\eta(e). \end{equation*}
The action of $G$ on $H$ is given by $\pi(g)\xi(x,y)=\xi(gx,gy)$, and the $1$-cocycle $b$ is given by
\begin{equation*} b(g)(e)=\begin{cases}1&\text{if }e\text{ lies on }[x_0,gx_0],\\-1&\text{if }\bar e\text{ lies on }[x_0,gx_0],\\0&\text{otherwise},\end{cases} \end{equation*}
where $[x_0,gx_0]$ denotes the unique geodesic joining $x_0$ and $gx_0$. Put $F=\{(x,y)\in E\mid d(x_0,x)<d(x_0,y)\}$. Then $(1_e-1_{\bar e})_{e\in F}$ is an orthonormal basis of $H$ and $\langle b(g),1_e-1_{\bar e}\rangle\in \{0,1\}$ for all $g\in G$ and $e\in F$. Thus the QMS associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$. For example, this is the case for $G=\mathbb{Z}$ with $\psi(k)=\abs{k}$. Here the tree is the Cayley graph of $\mathbb{Z}$ and the action is given by the left-regular representation. This QMS on $L(\mathbb{Z})$ corresponds, under the Fourier transform, to the Poisson semigroup on $L^\infty(S^1)$. \end{example}
More generally, the Cayley graph of a group is a tree if and only if the group is of the form $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ for some $k,l\geq 0$. Such a group is not amenable unless $k+l\leq 1$ or $(k,l)=(0,2)$, but the free product structure allows us to obtain the same bound.
\begin{theorem} If $G$ is a group whose Cayley graph is a tree and the cnd function $\psi$ is given by $\psi(g)=d(g,e)$, where $d$ is the combinatorial metric on the Cayley graph, then the QMS associated with $\psi$ satisfies $\mathrm{CGE}(1,\infty)$ and $\mathrm{CLSI}(2)$, and the constants in both inequalities are optimal. \end{theorem}
\begin{proof} As previously mentioned, $G$ is of the form $\mathbb{Z}^{\ast k}\ast \mathbb{Z}_2^{\ast l}$ with $k,l\geq 0$. It is not hard to see that the QMS associated with $\psi$ decomposes as the free product of the QMSs associated with the word length functions on the free factors.
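To indicate the additivity behind this decomposition (a sketch; the notation $\psi_j$ and the reduced-word expression are introduced here for illustration and are not fixed above): every nontrivial $g\in G$ has a unique reduced expression $g=g_1\cdots g_m$ with $g_i$ a nontrivial element of the $j_i$-th free factor and $j_1\neq j_2\neq\dots\neq j_m$, and the word length on $G$ is additive over these blocks,
\begin{equation*}
\psi(g_1\cdots g_m)=\sum_{i=1}^m \psi_{j_i}(g_i),
\end{equation*}
where $\psi_j$ denotes the word length function on the $j$-th free factor.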
Thus it satisfies $\mathrm{CGE}(1,\infty)$ by Theorem \ref{thm:free_product} and $\mathrm{CLSI}(2)$ by Corollary \ref{cor:MLSI}. Since both the gradient estimate and the modified logarithmic Sobolev inequality imply that the generator has spectral gap $1$, the constants in both inequalities are optimal. \end{proof}
\begin{example} If $G$ is a free group and $\psi$ the word length function, then the associated QMS satisfies $\mathrm{CGE}(1,\infty)$ and $\mathrm{CLSI}(2)$. Note that the usual logarithmic Sobolev inequality, which is equivalent to optimal hypercontractivity, remains open in this setting; partial results have been obtained in \cite{JPPR15,RX16}. Our optimal modified LSI lends further support to the validity of the optimal LSI.
\end{example}
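For concreteness, the semigroup in the last example acts diagonally on group elements (a standard computation, not spelled out in the text above): since the generator satisfies $\Delta\lambda_g=\abs{g}\lambda_g$ for the word length $\abs{\cdot}$, one has
\begin{equation*}
P_t\Big(\sum_{g\in\mathbb{F}_k} x_g\lambda_g\Big)=\sum_{g\in\mathbb{F}_k} e^{-t\abs{g}}x_g\lambda_g,
\end{equation*}
the free analogue of the classical Poisson semigroup. Complete positivity of each $P_t$ follows from Haagerup's result that $g\mapsto e^{-t\abs{g}}$ is positive definite on $\mathbb{F}_k$.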